Audio moderation & verification

Detect dangerous misinformation and harmful content in audio.

Audio content is exploding in popularity worldwide...
...but moderators and fact checkers lack the time and resources to effectively keep audio spaces safe.
Protecting audio conversations at scale requires local language expertise, context and advanced technology.
That’s where we come in.

Audio transcription

Our in-house language models generate audio transcriptions optimised for detecting dangerous misinformation in multiple languages.
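
As a minimal sketch of this step, here is what transcription could look like using the open-source whisper package as a stand-in for Kinzen's proprietary models; "episode.mp3" is a hypothetical input file:

    # Transcription sketch: open-source Whisper stands in for Kinzen's
    # in-house models; "episode.mp3" is a hypothetical input file.
    import whisper

    model = whisper.load_model("small")       # multilingual checkpoint
    result = model.transcribe("episode.mp3")  # language is auto-detected
    print(result["language"])                 # detected language code
    print(result["text"])                     # full transcript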

Audio analysis

Advanced machine learning models detect and flag harmful language, claims, narratives and policy violations within audio.
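
As an illustration of how such flagging might look in code, here is a hedged sketch using the Hugging Face transformers pipeline; the model name is hypothetical, since Kinzen's classifiers are proprietary:

    # Transcript-segment harm detection with a (hypothetical) fine-tuned
    # text-classification checkpoint.
    from transformers import pipeline

    classifier = pipeline("text-classification",
                          model="your-org/harm-classifier")  # hypothetical name

    for segment in ["first transcript segment", "second transcript segment"]:
        verdict = classifier(segment)[0]   # {'label': ..., 'score': ...}
        if verdict["score"] > 0.9:         # example threshold
            print("flag:", verdict["label"], round(verdict["score"], 2))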

Audio context

Analysis is further enhanced by data points and crucial context from human experts, including hashtags, keywords, phrases, slogans, and slurs.
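
A toy sketch of this layering, matching an expert-curated watchlist against transcribed text; the watchlist terms below are invented examples, not real data points:

    # Match expert-curated hashtags, phrases and slogans against a
    # transcript; the watchlist entries are illustrative only.
    WATCHLIST = {"#fakecure", "miracle treatment", "they don't want you to know"}

    def flag_context(transcript: str) -> set[str]:
        text = transcript.lower()
        return {term for term in WATCHLIST if term in text}

    print(flag_context("This miracle treatment is what "
                       "they don't want you to know about."))
    # -> {'miracle treatment', "they don't want you to know"}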

Audio coverage

Our unique combination of local experts, data, and technology covers multiple languages, regions and areas of harm.


How we can help you

Get ahead of threats by detecting potentially harmful content in audio.

Detect and analyse text, audio and video content that has the potential to cause harm. License the data that powers our detection and risk analysis.

  • Data licensing
  • Bespoke in-house tooling
  • API integrations (see the sketch below)
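
A hedged sketch of such an integration; the endpoint, fields and auth scheme below are placeholders, not Kinzen's actual API:

    # Hypothetical risk-analysis API call -- the URL and payload shape
    # are illustrative assumptions only.
    import requests

    resp = requests.post(
        "https://api.example.com/v1/audio/analyse",         # placeholder URL
        headers={"Authorization": "Bearer <YOUR_API_KEY>"},
        json={"transcript": "full transcript text", "language": "en"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. risk labels and scores per segment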

Prepare for and respond to evolving narratives and developing crises.

With research analysts around the world, Kinzen gives you the clarity and confidence to act on evolving harmful narratives.

  • Reports that cover breadth and depth
  • Briefings for early warnings
  • 1:1 expert consultation

What makes us different

Expert Network

Every day, Kinzen experts track evolving threats of hate and harm across multiple platforms using various monitoring tools, including Kinzen’s proprietary dashboard. Their findings are added to a Database of Harms.

Database of Harms

Thousands of validated and searchable data points, including hashtags, keywords, phrases, slogans, slurs and claims, are matched across local markets and languages to highlight misinformation threats and hate speech. These help train machine learning models.
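
One possible shape for such a record and lookup is sketched below; the field names and entries are assumptions, not Kinzen's actual schema:

    # Illustrative "Database of Harms" record and a cross-market lookup.
    from dataclasses import dataclass

    @dataclass
    class HarmDataPoint:
        term: str        # hashtag, keyword, phrase, slogan or slur
        language: str    # e.g. "pt-BR"
        category: str    # e.g. "health misinformation"
        validated: bool  # reviewed by a local expert

    DB = [
        HarmDataPoint("#curamilagrosa", "pt-BR", "health misinformation", True),
        HarmDataPoint("miracle cure", "en", "health misinformation", True),
    ]

    def search(category: str, language: str) -> list[HarmDataPoint]:
        return [d for d in DB
                if d.validated and d.category == category and d.language == language]

    print(search("health misinformation", "en"))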

Machine Learning Models

Our technology sifts through large volumes of data, generating automatic classifications of harmful content and allowing our clients to prioritise and take action on the highest-risk material before it results in real-life harm.
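
A minimal sketch of that prioritisation step, assuming classifier outputs with confidence scores; the items below are invented:

    # Surface the highest-risk flagged items first for human review.
    flagged = [
        {"id": "clip-01", "label": "hate speech", "score": 0.97},
        {"id": "clip-02", "label": "health misinfo", "score": 0.81},
        {"id": "clip-03", "label": "health misinfo", "score": 0.99},
    ]

    for item in sorted(flagged, key=lambda x: x["score"], reverse=True):
        print(item["id"], item["label"], item["score"])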