We are Kinzen.
Our mission is to protect the world’s public conversations from information risk.

We provide data and research to trust and safety professionals, content moderators and public policy makers, helping them get ahead, and stay ahead, of threats such as dangerous misinformation, hateful content, violent content, violent extremism and dangerous organisations.

We use a blend of human expertise and machine learning to provide early warning of the spread of harmful content in multiple languages. Our team has developed unique technology that helps editors review large volumes of content in multiple formats, including text, video, audio and images. We have developed particular expertise in the moderation of podcasts.

[Photo: Kinzen's founders, Mark Little and Áine Kerr.]

Our Work

We help our clients make more precise and consistent decisions about evolving online threats to real-world safety. We do this by focusing on the harmful content with the greatest capacity to create violence, abuse or civil unrest, and by performing the following tasks:

Prioritise

Prioritise countries and languages in which clients have blind spots and where cultural nuance is critical.

Decode

Decode the cultural and linguistic nuances which distinguish harmful content from place to place.

Prepare

Prepare for events during which dangerous misinformation could undermine electoral integrity, provoke violence or promote conflict.

Pre-empt

Pre-empt the spread of international misinformation narratives which threaten public health, such as anti-vaccine campaigns.

Analyse

Analyse the evolution of persistent campaigns of hateful speech, such as antisemitism.

Anticipate

Anticipate the emergence of campaigns of violent rhetoric based on identity.

Our Team

Built on quality data

Our team is a uniquely experienced group of engineers, scientists, designers and developers.

We also employ a network of experts who have deep knowledge and lived experience of cultural differences and nuances around the globe, yet apply universal principles when identifying harmful content. They are journalists, researchers, authors and experts in open source intelligence gathering, with the common goal of supporting consistent standards of moderation across multiple languages, cultures and events.

Our Principles

Our principles start with two foundations of journalism: fairness and impartiality. These concepts sit at the very core of our work, influencing every aspect of our culture, our recruitment and our conduct.

FAQs

What problem is Kinzen trying to solve?

There is no artificial intelligence that can magically solve the problem of harmful content. Automated content filters can’t solve highly complex moderation challenges.

However, human moderators just can’t cope with the volume of harmful content. Human expertise can’t scale without the help of machine learning.

Kinzen is working to solve this problem by using technology to scale the ability of human experts to prioritise the review of the most harmful online content. At its core, this 'human in the loop' approach relies on an ever-improving feedback loop between editors and algorithms.
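As an illustration only, a minimal sketch of such a feedback loop might look like the following. The names and the toy scoring logic here are assumptions for the sake of the example, not Kinzen's actual system: a model ranks incoming items by predicted harm, editors review the highest-ranked ones, and their decisions feed back into the model.

```python
from collections import defaultdict

class PriorityModel:
    """Toy scorer: sums learned weights of known risk terms."""

    def __init__(self):
        self.weights = defaultdict(float)

    def score(self, text):
        return sum(self.weights[token] for token in text.lower().split())

    def update(self, text, is_harmful):
        # An editor's decision nudges term weights up or down,
        # so future ranking reflects human judgement.
        delta = 0.1 if is_harmful else -0.1
        for token in set(text.lower().split()):
            self.weights[token] += delta

def review_cycle(model, items, editor, budget=2):
    """Rank items by predicted harm, send the top `budget` items to
    an editor, and fold the editor's decisions back into the model."""
    for text in sorted(items, key=model.score, reverse=True)[:budget]:
        model.update(text, is_harmful=editor(text))

# Usage: the `editor` callable stands in for human review.
model = PriorityModel()
model.weights.update({"miracle": 0.5, "cure": 0.5})
posts = ["miracle cure suppressed by doctors", "local bake sale on saturday"]
review_cycle(model, posts, editor=lambda text: "cure" in text)
```

The point of the sketch is the direction of flow: the machine decides what humans see first, and the humans decide what the machine learns next.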

How was Kinzen created?

Kinzen was founded by Áine Kerr and Mark Little, who first worked together at the social news agency Storyful, which Mark founded in 2009. Storyful pioneered open source intelligence and fact-checking techniques, and curated content for YouTube and Facebook. Both have also worked directly for global technology platforms: Mark as VP of Media Partnerships at Twitter, and Áine as Director of Global Journalism Partnerships at Facebook.

Kinzen is based in Ireland, which has become a global hub for online trust and safety. But the company has built a global network of analysts and editors with expertise in the countless linguistic, cultural and political nuances that define harmful content. The data collected by this network helps Kinzen's engineering team build ML models that scale the detection of harmful content.

Who gets to be a member of Kinzen's ‘expert’ network?

Kinzen works with experts who have a proven track record of excellence in investigating misinformation. They have published publicly on misinformation in their country or led major research reports on the subject. Many are award-winning journalists, published authors, public speakers and media commentators.

How is Kinzen funded?

Kinzen is a commercial enterprise. We are supported by revenue from our partnerships with global content platforms and content moderation services. Our seed investment was provided by purpose-driven investors who support our mission, including the Danish media company FST, the Irish public investor Enterprise Ireland, and the Irish investment fund BVP.

Is Kinzen a fact-checking organisation?

Kinzen uses many techniques that have evolved from fact-checking, but we are not a fact-checking organisation. We help our partners pre-empt and prepare for emerging strains of harmful content, and be more agile in how they respond to rapidly evolving threats.

How does Kinzen define harmful content?

We define harmful content as any one of the following:

  • Dangerous misinformation: false, misleading or manipulated information with the potential to create real-world harm or interfere with elections or other civic processes, as well as coordinated disinformation campaigns designed to manipulate public conversations, undermine the democratic process, defraud citizens or threaten their health, security or environment.
  • Hateful content: promoting or inciting violence, hatred or discrimination against individuals and groups based on race, ethnicity, nationality or national origin, gender or sex, disability or serious disease, religious affiliation or sexual orientation. It includes content that mocks or promotes hate crimes.
  • Violent content: implicit or explicit statements inciting, admitting intent to commit, praising or glorifying violence against individuals or groups.
  • Violent extremism and dangerous movements: individuals or groups that justify the use of violence, advocate for others to use violence, or spread conspiracy theories and hateful ideologies in order to radically change the nature of government, religion or society.

Can there ever be a solution to harmful content?

At Kinzen, we recognise there will never be a perfect content moderation policy. We have learned from experts in the field of harmful content that moderation has often been counterproductive. We agree with those who argue that private companies cannot be the ultimate arbiters of ‘lawful but awful’ speech, and we actively support the development of transparency and accountability in the moderation of threats such as disinformation.

However, we believe there are categories of content so harmful they require an urgent and immediate response from any platform that hosts content, communities and conversation. Our goal is to help these platforms draw a first line of defence around content with the capacity to create significant violence, abuse, fraud and damage to public health and the integrity of the democratic process. We believe the need to address these categories of harmful content is greatest in parts of the world that have so far been ignored by large technology companies.

How is Kinzen's work consistent with freedom of speech?

In an age of online outrage, we need to defend free speech from those who use it as a weapon against others. We need to ensure an open internet is not used to accelerate real-world harm or hate. But moderation must become a precision tool, not a blunt instrument. We need content moderators to make decisions that are proportionate, consistent and explainable. We need to ensure that efforts to prevent the spread of harmful content strengthen the defence of democratic values, including freedom of expression.

What data does Kinzen collect?

Our job is to understand the evolving nature of language. As movements constantly change, the hashtags, dog whistles and slogans they use change with them. The data we collect records those evolving quirks of language as they are used to evade moderation by platforms. We focus all our data capture and technological efforts on content, and content only.
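For illustration, a record of one such evolving quirk of language might be sketched as follows. The structure, field names and example data are assumptions made for this sketch, not Kinzen's actual data model:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TermRecord:
    """Hypothetical record of one tracked quirk of language."""
    term: str                                    # canonical slogan, hashtag or dog whistle
    languages: set = field(default_factory=set)  # languages it has appeared in
    variants: set = field(default_factory=set)   # spellings coined to evade filters
    first_seen: date = None
    last_seen: date = None

    def observe(self, variant, language, seen):
        """Record a sighting, tracking how the term drifts over time."""
        self.variants.add(variant)
        self.languages.add(language)
        self.first_seen = seen if self.first_seen is None else min(self.first_seen, seen)
        self.last_seen = seen if self.last_seen is None else max(self.last_seen, seen)

# Usage: new spellings of a known slogan are logged as they appear.
record = TermRecord(term="example slogan")
record.observe("examp1e sl0gan", "en", date(2023, 5, 1))
record.observe("3xample slogan", "de", date(2023, 6, 12))
```

Capturing variants and dates in this way is what lets analysts see not just that a term exists, but how and where it mutates to slip past filters.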