solutions

Get ahead of harmful content, detect policy violations faster and respond with speed and precision.

Prepare. Understand the nuance of complex threats in local markets and languages.

Understand

Plan ahead with early warnings of evolving narratives and trends.

Analyse complex threats with additional nuance and local context.

Make informed decisions by accessing regional expertise.

Minimise risk

Proactively craft policies and processes with a greater understanding of emerging threats.

Deeply understand the complex policy areas of dangerous misinformation, violent content, hateful content, violent extremism and dangerous movements across multiple markets.

A graphic showing some of the narratives, areas of harm and languages that Kinzen focuses on

Identify. Detect and prioritise policy violations in audio, video and text content.

An illustration showing an example of risk analysis of an audio clip
A graphic showing two examples of potentially harmful misinformation

Actionable insights

Access an evolving knowledge base of 10k+ keywords, phrases and claims.

Identify risk signals such as harmful impacts, policy areas and toxicity.

Kickstart new markets with support for 28 languages and dialects.
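
To make those signals concrete, here is a minimal sketch of what one knowledge-base entry could look like, assuming a simple record of keyword, policy area, language and known variant spellings. The `RiskEntry` type and its field names are illustrative assumptions, not Kinzen's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """Illustrative knowledge-base record: one keyword, phrase or claim."""
    text: str                 # the keyword, phrase or claim itself
    policy_area: str          # e.g. "dangerous misinformation"
    language: str             # language or dialect tag, e.g. "pt-BR"
    variants: list[str] = field(default_factory=list)  # spellings seen evading filters
    harmful_impact: str = ""  # analyst note on the potential real-world harm

# A hypothetical entry for a health-misinformation claim:
entry = RiskEntry(
    text="miracle cure reverses vaccine damage",
    policy_area="dangerous misinformation",
    language="en",
    variants=["m1racle cure", "miraclecure"],
    harmful_impact="discourages seeking medical treatment",
)
```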

Advanced tooling

Join established fact-checkers and research organisations in using our analysis dashboard.

Save countless hours otherwise spent manually analysing audio and video content.

Analysis at scale

Automatically analyse audio, video and text for the presence of harmful content.

Detect policy violations earlier and identify emerging on-platform trends.

Customise and adapt results to fit your policies, workflows and tooling.

Improve review times and action rates with quality risk signals.
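
As a rough sketch of how such a pipeline might hang together, the toy Python below transcribes non-text content, scores it against a keyword lexicon and flags high-scoring items for human review. Every name here (`transcribe`, `score_risk`, `analyse`, the lexicon and the threshold) is a hypothetical stand-in, not an actual Kinzen interface.

```python
def transcribe(item: dict) -> str:
    """Stand-in for a speech-to-text step on audio or video content."""
    return item.get("transcript", "")

def score_risk(text: str, lexicon: dict[str, float]) -> float:
    """Toy risk score: sum of lexicon weights for terms found in the text."""
    return min(1.0, sum(w for term, w in lexicon.items() if term in text.lower()))

def analyse(item: dict, lexicon: dict[str, float], threshold: float = 0.8) -> float:
    """Transcribe if needed, score the text, and flag high-risk items for review."""
    text = item["text"] if item["kind"] == "text" else transcribe(item)
    score = score_risk(text, lexicon)
    if score >= threshold:  # threshold tuned to each platform's policies
        print(f"queue for review: {item['id']} (score={score:.2f})")
    return score

# Hypothetical usage: a text post scored against a tiny two-term lexicon.
lexicon = {"miracle cure": 0.6, "vaccine damage": 0.4}
analyse({"id": "post-1", "kind": "text",
         "text": "This miracle cure reverses vaccine damage"}, lexicon)
```

In practice the scoring step would be a trained classifier rather than a keyword sum; the lexicon here simply stands in for the risk signals described above.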

A graphic showing some of Kinzen's risk analysis signals
A graphic showing a Trust & Safety team working together

Respond. Manage critical events and unexpected crises with speed and precision.

Crises

Empower your teams with local language understanding and analysis during unforeseen events.

Get expert guidance and local context to inform urgent and critical content decisions.

Elections

Proactively prepare a response ahead of time with alerts on local narratives and potential threats to the democratic process.

Respond to emerging threats in real-time with custom-built solutions and expert consultation.

why kinzen

We’re scaling the human solution to the information crisis.

A global network of local expertise

Our global network of experts analyses and encodes harmful language in 28 languages and markets.

Their topical, linguistic and cultural expertise is digitised into a risk knowledge base and machine learning models.

A platform built for identifying harm

Our platform leverages curated data and cutting-edge machine learning to help experts identify harmful content at scale.

Real-time expert feedback continuously improves the precision of our platform.

Learn More

FAQs

What problem is Kinzen trying to solve?

There is no artificial intelligence that can magically solve the problem of harmful content. Automated content filters can’t solve highly complex moderation challenges.

However, human moderators just can’t cope with the volume of harmful content. Human expertise can’t scale without the help of machine learning.

Kinzen is working to solve this problem by using technology to scale the ability of human experts to prioritise the review of the most harmful online content. At its core, this ‘human-in-the-loop’ approach relies on an ever-improving feedback loop between editors and algorithms.
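
A minimal sketch of that feedback loop, under the assumption of a toy keyword-weight model: the machine flags candidate posts, an editor confirms or rejects each one, and the labels adjust the model before the next pass. The weights, threshold and update rule are all invented for illustration.

```python
# Toy human-in-the-loop cycle: the machine flags, an editor labels,
# and the confirmed labels feed back into the model for the next pass.
weights = {"hoax": 0.6, "cure": 0.4}  # hypothetical per-term risk weights

def flag(posts):
    """Machine step: surface posts whose terms carry enough combined weight."""
    return [p for p in posts if sum(weights.get(w, 0) for w in p.split()) >= 0.5]

def editor_confirms(post):
    """Human step: stands in for an expert's judgement on each flagged post."""
    return "hoax" in post

for post in flag(["miracle cure hoax", "cure for boredom"]):
    confirmed = editor_confirms(post)
    for term in post.split():
        if term in weights:  # feedback loop: nudge weights toward editor labels
            weights[term] += 0.05 if confirmed else -0.05
```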

How was Kinzen created?

Kinzen was founded by Áine Kerr and Mark Little. Mark and Áine first worked together at the social news agency Storyful, which Mark founded in 2009. Storyful pioneered open source intelligence and fact-checking techniques, and curated content for YouTube and Facebook. Mark and Áine have worked directly for global technology platforms. Mark was a VP of Media Partnerships at Twitter. Áine was director of Global Journalism Partnerships at Facebook.

Kinzen is based in Ireland, which has become a global hub for online trust and safety. But the company has built a global network of analysts and editors who have expertise in the countless linguistic, cultural and political nuances that define harmful content. The data collected by this network helps Kinzen’s engineering team build ML models that scale the detection of harmful content.

Who gets to be a member of Kinzen's ‘expert’ network?

Kinzen works with experts who have a proven track record of excellence in investigating misinformation. They have published publicly on the subject of misinformation in their country or led major research reports into it. They are often award-winning journalists, published authors, public speakers and media commentators.

How is Kinzen funded?

Kinzen is a commercial enterprise. We are supported by revenue from our partnerships with global content platforms and content moderation services. Our seed investment was provided by purpose-driven investors who support our mission, including the Danish media company FST, the Irish public investor Enterprise Ireland, and the Irish investment fund BVP.

Is Kinzen a fact-checking organisation?

Kinzen uses many techniques that have evolved from fact-checking, but we are not a fact-checking organisation. We help our partners preempt and prepare for emerging strains of harmful content, and be more agile in how they respond to rapidly evolving threats.

How does Kinzen define harmful content?

We define harmful content as any one of the following:

  • Dangerous misinformation: false, misleading or manipulated information with the potential to create real-world harm or interfere with elections or other civic processes, as well as coordinated disinformation campaigns designed to manipulate public conversations, undermine the democratic process, defraud citizens or threaten their health, security or environment.
  • Hateful content: promoting or inciting violence, hatred or discrimination against individuals and groups based on race, ethnicity, nationality or national origin, gender or sex, disability or serious disease, religious affiliation or sexual orientation. It includes the mocking and promotion of hate crimes.
  • Violent content: implicit or explicit statements inciting, admitting intent to commit, praising or glorifying violence against individuals or groups.
  • Violent extremism and dangerous movements: individuals or groups who justify the use of violence, advocate for others to use violence, or spread conspiracy theories and hateful ideologies in order to radically change the nature of government, religion or society.

Can there ever be a solution to harmful content?

At Kinzen, we recognise there will never be a perfect content moderation policy. We have learned from experts in the field of harmful content that moderation has often been counterproductive. We agree with those who argue that private companies cannot be the ultimate arbiters of ‘lawful but awful’ speech, and we actively support the development of transparency and accountability in the moderation of threats such as disinformation.

However, we believe there are categories of content so harmful they require an urgent and immediate response by any platform which hosts content, communities and conversation. Our goal is to help these platforms draw a first line of defence around content with the capacity to create significant violence, abuse, fraud and damage to public health and the integrity of the democratic process. We believe the need to address these categories of harmful content is greatest in parts of the world that have so far been ignored by large technology companies.

How is Kinzen's work consistent with freedom of speech?

In an age of online outrage, we need to defend free speech from those who are using it as a weapon against others. We need to ensure an open internet is not used to accelerate real-world harm or hate. But moderation must become a precision tool, not a blunt instrument. We need content moderators to make decisions that are proportionate, consistent and explainable. We need to ensure that efforts to prevent the spread of harmful content strengthen the defence of democratic values, including freedom of expression.

What data does Kinzen collect?

Our job is to understand the evolving nature of language. As movements constantly change, the hashtags, dog whistles and slogans change with them. The data we collect records those evolving quirks of language as they are used to evade moderation by platforms. We focus all our data capture and technological efforts on content, and content only.
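
One illustrative way such data can be put to work, assuming a simple normalise-and-match approach: store each tracked term alongside the obfuscated spellings analysts have observed, and normalise incoming text before matching. The substitution table and tracked terms below are invented examples, not Kinzen's data.

```python
# Illustrative matcher: normalise common character substitutions, then
# look for tracked terms and their obfuscated spellings.
LEET = str.maketrans("013457@", "oieasta")  # 0->o, 1->i, 3->e, 4->a, 5->s, 7->t, @->a

TRACKED = {"miracle cure", "fake ballots"}  # invented tracked terms

def matches(text: str) -> set[str]:
    """Normalise the text, then check for tracked terms with spaces ignored."""
    norm = text.lower().translate(LEET).replace(" ", "")
    return {term for term in TRACKED if term.replace(" ", "") in norm}

print(matches("they hid the m1racle cur3 from you"))  # -> {'miracle cure'}
```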