Our platform

Build custom solutions to protect your community from harmful content.

Risk Knowledge Base

Every day, our experts identify and digitise evolving threats of hate and harm across local markets and languages. This data is captured and audited in a unique framework of attributes and risk signals.

  • Identify harmful content aligned to your policies
  • Improve the efficiency of your existing tooling
  • Train machine learning models and classifiers

Risk Analysis Engine

Leverage machine learning and natural language processing to automatically identify, evaluate and score the presence of harmful content.

  • Pinpoint harmful content in audio, video and text
  • Prioritise harmful content for manual review
  • Audit large volumes of content for insights and trends

An illustration showing an example of risk analysis of an audio clip.


Our data and analysis support 28 languages and dialects — not just English.

Obsessed with accuracy

All data goes through a rigorous classification, review and re-review process.


We unbundle complex harms into discrete attributes that technology can act on at scale.


Our data and analysis can be easily filtered and customised to suit your needs.

Why Kinzen

We’re scaling the human solution to the information crisis.

A global network of local expertise

Our global network of experts analyses and encodes harmful language in 28 languages and markets.

Their topical, linguistic and cultural expertise is digitised into a risk knowledge base and machine learning models.

A platform built for identifying harm

Our platform leverages curated data and cutting-edge machine learning to help experts identify harmful content at scale.

Real-time expert feedback continuously improves the precision of our platform.