Every day, our experts identify and digitise evolving threats of hate and harm across local markets and languages. This data is captured and audited in a unique framework of attributes and risk signals.
We leverage machine learning and natural language processing to automatically identify, evaluate and score harmful content.
Our data and analysis support 28 languages and dialects, not just English.
All data goes through a rigorous classification, review and re-review process.
We unbundle complex harms into attributes that can be scaled by technology; a short sketch of this idea follows below.
Our data and analysis can be easily filtered and customised to suit your needs.
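To make the idea of unbundling concrete, here is a minimal, purely illustrative Python sketch of how a harm might be broken into weighted attributes and rolled up into a single risk score. The field names, attribute labels and scoring rule are hypothetical assumptions, not the actual framework described above.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: a complex harm "unbundled" into discrete,
# machine-scalable attributes. Field names, attribute labels and the
# scoring rule are illustrative assumptions, not an actual schema.
@dataclass
class RiskSignal:
    term: str        # phrase, hashtag or dog whistle as observed
    language: str    # one of the supported languages or dialects
    market: str      # local market where the signal was captured
    attributes: dict[str, float] = field(default_factory=dict)

def risk_score(signal: RiskSignal) -> float:
    """Collapse attribute weights into a single 0-1 risk score."""
    if not signal.attributes:
        return 0.0
    return min(1.0, sum(signal.attributes.values()) / len(signal.attributes))

# Example: a coded phrase flagged by an analyst in one market.
signal = RiskSignal(
    term="example coded phrase",
    language="pt-BR",
    market="Brazil",
    attributes={"incites_violence": 0.9, "targets_protected_group": 0.7},
)
print(round(risk_score(signal), 2))  # 0.8
```

Representing each harm as a bundle of weighted attributes, rather than a single label, is what allows the data to be filtered and recombined for different moderation needs.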
Our global network of experts analyses and encodes harmful language in 28 languages and markets.
Their topical, linguistic and cultural expertise is digitised into a risk knowledge base and machine learning models.
Our platform leverages curated data and cutting-edge machine learning to help experts identify harmful content at scale.
Real-time expert feedback continuously improves the precision of our platform.
There is no artificial intelligence that can magically solve the problem of harmful content. Automated content filters can’t solve highly complex moderation challenges.
However, human moderators just can’t cope with the volume of harmful content. Human expertise can’t scale without the help of machine learning.
Kinzen is working to solve this problem by using technology to scale the ability of human experts to prioritise the review of the most harmful online content. At its core, this 'human in the loop' approach relies on an ever-improving feedback loop between editors and algorithms.
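A minimal sketch of one turn of that feedback loop might look like the following, assuming a hypothetical classifier exposing a scikit-learn-style predict_proba interface. The function and parameter names are illustrative, not an actual pipeline.

```python
# Minimal, hypothetical sketch of one turn of a 'human in the loop' cycle.
# `model` is assumed to expose a predict_proba(item) -> float method; all
# names here are illustrative, not an actual production pipeline.

def review_cycle(model, unlabeled_items, editor_review, batch_size=50):
    """Rank content by predicted harm, route the riskiest items to a human
    editor, and return their verdicts as labels for the next retraining."""
    # Surface the items the model believes are most likely to be harmful.
    ranked = sorted(unlabeled_items,
                    key=lambda item: model.predict_proba(item),
                    reverse=True)
    labels = []
    for item in ranked[:batch_size]:
        verdict = editor_review(item)   # the human expert makes the call
        labels.append((item, verdict))  # feedback that retrains the model
    return labels
```

The point of the sketch is the loop itself: the editor's verdicts become training data, so the model's precision improves with every review pass.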
Kinzen was founded by Áine Kerr and Mark Little. Mark and Áine first worked together at the social news agency Storyful, which Mark founded in 2009. Storyful pioneered open-source intelligence and fact-checking techniques, and curated content for YouTube and Facebook. Both have worked directly for global technology platforms: Mark was VP of Media Partnerships at Twitter, and Áine was Director of Global Journalism Partnerships at Facebook.
Kinzen is based in Ireland, which has become a global hub for online trust and safety. But the company has built a global network of analysts and editors who have expertise in the countless linguistic, cultural and political nuances that define harmful content. The data collected by this network helps Kinzen’s engineering team build ML models which scale the detection of harmful content.
Kinzen works with experts who have a proven track record of excellence in investigating misinformation. They have published publicly on misinformation in their country or led major research reports on the subject. Many are award-winning journalists, published authors, public speakers and media commentators.
Kinzen is a commercial enterprise. We are supported by revenue from our partnerships with global content platforms and content moderation services. Our seed investment was provided by purpose-driven investors who support our mission, including the Danish media company FST, the Irish public investor Enterprise Ireland, and the Irish investment fund BVP.
Kinzen uses many techniques that have evolved from fact-checking, but we are not a fact-checking organisation. We help our partners preempt and prepare for emerging strains of harmful content, and be more agile in how they respond to rapidly evolving threats.
We define harmful content as any one of the following:
At Kinzen, we recognise there will never be a perfect content moderation policy. We have learned from experts in the field that the moderation of harmful content has often been counterproductive. We agree with those who argue that private companies cannot be the ultimate arbiters of 'lawful but awful' speech, and we actively support the development of transparency and accountability in the moderation of threats such as disinformation.
However, we believe there are categories of content so harmful they require an urgent and immediate response from any platform that hosts content, communities and conversation. Our goal is to help these platforms draw a first line of defence around content with the capacity to cause significant violence, abuse, fraud and damage to public health and the integrity of the democratic process. We believe the need to address these categories of harmful content is greatest in parts of the world that have so far been ignored by large technology companies.
In an age of online outrage, we need to defend free speech from those who are using it as a weapon against others. We need to ensure an open internet is not used to accelerate real-world harm or hate. But moderation must become a precision tool, not a blunt instrument. We need content moderators to make decisions that are proportionate, consistent and explainable. We need to ensure that efforts to prevent the spread of harmful content strengthen the defence of democratic values, including freedom of expression.
Our job is to understand the evolving nature of language. As movements constantly change, the hashtags, dog whistles and slogans change with them. The data we collect records these evolving quirks of language as they are used to evade moderation by platforms. We focus all our data capture and technological efforts on content, and content only.
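As an illustration of how those quirks of language might be captured, the following hypothetical Python sketch logs each evasive variant of a known term together with the date it was first observed. The structure and names are assumptions for illustration, not an actual data model.

```python
from datetime import date

# Hypothetical sketch of recording evolving quirks of language: each
# evasive variant of a known term is logged with the date it was first
# observed. The structure and names are illustrative assumptions.
lexicon: dict[str, list[tuple[str, date]]] = {}

def record_variant(canonical: str, variant: str, first_seen: date) -> None:
    """Attach a newly observed spelling, hashtag or slogan variant to the
    canonical term it is being used to evade moderation for."""
    lexicon.setdefault(canonical, []).append((variant, first_seen))

# Example: leetspeak and hashtag variants of a made-up flagged slogan.
record_variant("example slogan", "3xample sl0gan", date(2022, 3, 1))
record_variant("example slogan", "#ExampleSlogan", date(2022, 4, 12))
print(lexicon["example slogan"])
```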