
The Key to a Safer Internet is Local Nuance and Deep Language Understanding

The Internet facilitates global connectivity on an unprecedented scale. Billions of people connect around the clock to inspire and be inspired, and to create and find a sense of belonging. In theory, online spaces foster freedom of expression. In practice, they can amplify harm and negatively affect democracy, human rights and society at large. Designing a spectrum of proportionate responses to these harms, beyond binary content moderation, requires an in-depth understanding of language and an awareness of socio-cultural nuance.

The professionals who represent the first line of defence against all kinds of online harm work in challenging and ever-evolving environments. Platform rules set out what is and is not allowed, with the objective of creating an equal user experience across global communities. Unfortunately, speech can be ambiguous, and a binary approach can lead to inequitable treatment of users and creators, particularly in underserved regions. Accounting for local nuance is essential to understanding and appropriately addressing signals of imminent harm and information risk.

Context matters

Let’s consider some real-life examples. The swastika is a symbol largely associated with a hateful ideology, but for some people it carries an important or even religious meaning. In Canada, some communities protested against banning the symbol in law, fearing it would limit their freedom of expression and religious practice. In Poland, Independence Day was celebrated with a government-organised march during which racist and anti-immigration slogans were openly used. Photos from the event were posted to Internet services to document, not glorify, the open promotion of racism during an official event. In neither of these cases would blanket removal be seen as proportionate; rather, it would be seen as a measure limiting freedom of speech.

Speech targeting women is another good example of why language matters. The implication that a woman is unfaithful or cunning is an insult that, in countries like Turkey, Egypt and Brazil, may lead to offline harm against women and could even result in stoning to death. In the West, attacks against women may seem more disguised and less harmful. For example, indirect misogynistic memes targeting women’s personal appearance, weight or clothes are treated as innocent jokes, while they directly contribute to systemic biases against women in society.

Finally, coded language is often used to evade moderation. Originally popular in written form, algospeak has now gained ground in audio content. For example, in France, dictature piquouse (trans. jab dictatorship) may be used to refer to what the author believes is an unfair demand to be vaccinated. In Germany, schlumpfen (trans. smurfing) is used instead of impfen (trans. vaccinate) to steer clear of moderation flows. Understanding such local nuances can minimise online threats.
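
To make this concrete, here is a minimal sketch of how a per-market lexicon of coded terms might be used to surface such content for review. The two coded phrases come from the examples above; the data structure, function name and matching logic are illustrative assumptions, not a description of any production system.

```python
# Minimal sketch: a per-locale "algospeak" lexicon lookup.
# The coded terms come from the examples above; everything else
# (structure, matching logic) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class CodedTerm:
    phrase: str   # the coded expression as it appears in content
    locale: str   # language/market where the code is used
    topic: str    # underlying narrative the code refers to

ALGOSPEAK_LEXICON = [
    CodedTerm("dictature piquouse", "fr-FR", "anti-vaccination"),
    CodedTerm("schlumpfen", "de-DE", "anti-vaccination"),
]

def flag_coded_language(text: str, locale: str) -> list[CodedTerm]:
    """Return the coded terms from the given locale found in the text."""
    lowered = text.lower()
    return [t for t in ALGOSPEAK_LEXICON
            if t.locale == locale and t.phrase in lowered]

# A French transcript using the coded phrase is surfaced for review,
# where a literal keyword filter for "vaccin" would have missed it.
hits = flag_coded_language("On vit sous une dictature piquouse !", "fr-FR")
print([(h.phrase, h.topic) for h in hits])
```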

These examples paint a clear picture: failure to embrace context and local expertise impairs the ability to create a safe environment for any type of user. It can undermine the effectiveness of automated detection systems, slow a company’s response to evolving narratives, or even result in non-compliance with industry standards or local laws. At worst, neglecting local nuance can have a detrimental impact on democracy and society at large. But how can you effectively reconcile nuanced local guidelines with global platform rules?

Safety by design

Trust and Safety professionals know that you cannot protect online conversations by removing content alone. To offer users and creators a seamless experience on a platform, trust and safety teams must be guided by the principle of cultural competence and focus on designing a suite of systemic solutions instead of case-by-case interventions. The integrity of content moderation practices and recommender systems, along with understandable rules of engagement and enforcement, is essential to curbing the spread of harmful content. Most of all, we need structures that promote innovative interventions to reduce information risks as they evolve every single day.

Although the list is not exhaustive, there are a handful of important ingredients to creating context-centered moderation practices. These include:

  • Clear overarching rules - understandable rules that express the values and principles of a platform should serve as a general guideline for ‘dos’ and ‘don’ts’. They should not be static, but should evolve as needed based on local or regional insights and industry standards. Contextual evaluation based on human rights frameworks allows for proportionality of content remedies, which increases policies’ sensitivity to local contexts.
  • Local policies or enforcement guidelines - these should be consistent with the global rules, but fit for purpose to escalate content for assessment and action in specific, local circumstances. Policy enforcement and proper oversight to manage systemic risks require diversity of cultures and socio-political contexts. Different approaches, such as localised content moderation structures or collaboration with organisations offering relevant expertise (e.g. Trusted Flaggers, fact-checkers), can effectively support this.
  • Appropriate remedies - we need to move beyond the binary remove-or-leave-up approach to content governance. A wide range of remedies and interventions already exists, e.g. counterspeech, warning interstitials, user education. More robust remedial strategies will promote both safety and free expression on Internet services. They need to be designed in proportion to the harm the content can cause in various contexts (a minimal sketch of such a tiered approach follows this list).
  • Internal alignment - our research into Trust & Safety has shown that the organisational environment can be one of the major obstacles to developing a safety-oriented approach to content moderation. The structure and culture of the organisation should be aligned with a safety-centered approach that cannot be tasked to Trust & Safety alone. The nature of the problems we are dealing with requires an aligned, coordinated, organisation-wide approach, as well as appropriate investment.
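
As a concrete illustration of the ‘appropriate remedies’ point above, here is a minimal sketch of a remedy spectrum driven by proportionality. The remedy types echo the examples in the list; the severity score, thresholds and local escalation flag are illustrative assumptions, not a prescribed policy.

```python
# Minimal sketch: a remedy spectrum beyond binary remove/leave-up.
# Remedy types mirror the examples above; the thresholds and the
# local-policy override are illustrative assumptions only.
from enum import Enum

class Remedy(Enum):
    LEAVE_UP = "leave up"
    USER_EDUCATION = "user education"
    COUNTERSPEECH = "counterspeech"
    WARNING_INTERSTITIAL = "warning interstitial"
    REMOVE = "remove"

def choose_remedy(harm_severity: float, local_escalation: bool) -> Remedy:
    """Pick a remedy proportionate to the assessed harm in its local context."""
    if local_escalation:            # local guidelines flag imminent harm
        return Remedy.REMOVE
    if harm_severity >= 0.8:
        return Remedy.WARNING_INTERSTITIAL
    if harm_severity >= 0.5:
        return Remedy.COUNTERSPEECH
    if harm_severity >= 0.2:
        return Remedy.USER_EDUCATION
    return Remedy.LEAVE_UP

print(choose_remedy(0.6, local_escalation=False))  # Remedy.COUNTERSPEECH
```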

The Kinzen solution

Local expertise is at the heart of Kinzen’s operations. We have built a network of experts whose unique skills and knowledge are central to our product, data and policy development. They identify emerging trends and help organisations get ahead of emerging narratives, developing conflicts, and the real-world online and offline harms they can lead to. Context is key. We have built language expertise in multiple markets which, scaled by our technology, enables the detection and prioritisation of content that may cause a wide range of risks.

Scaling these signals to ensure a great on-platform experience for users and creators across the globe requires innovative solutions. At Kinzen, we believe in a ‘human-in-the-loop’ approach, combining human expertise with a range of automated techniques for analysing large volumes of data. This allows us to generate leads and highlight high-risk content for our researchers and clients. We have also developed a specialism in the moderation of audio content.
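    
To illustrate what a ‘human-in-the-loop’ flow can look like in practice, here is a minimal sketch in which an automated risk score routes only the highest-risk items into a queue for human experts. The scoring function, threshold and queue are illustrative assumptions, not a description of Kinzen’s actual pipeline.

```python
# Minimal sketch: human-in-the-loop triage. An automated risk score
# routes high-risk items to human review; the rest stay in automated
# monitoring. Threshold, scorer and queue are illustrative assumptions.
from collections import deque

REVIEW_THRESHOLD = 0.7                 # assumed cut-off for human review
human_review_queue: deque = deque()

def risk_score(item: dict) -> float:
    """Placeholder for an automated classifier over text or audio transcripts."""
    return item.get("model_score", 0.0)

def triage(item: dict) -> str:
    score = risk_score(item)
    if score >= REVIEW_THRESHOLD:
        human_review_queue.append(item)  # a human expert makes the final call
        return "queued for human review"
    return "monitored automatically"

print(triage({"id": "post-1", "model_score": 0.92}))  # queued for human review
print(triage({"id": "post-2", "model_score": 0.10}))  # monitored automatically
```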

Day in and day out, Kinzen supports the work of Trust & Safety professionals to find a balance between consistency in policy enforcement and thoughtful consideration of local context. We believe this approach is essential to maintaining safe and healthy conversations online. Decisions taken on content posted online can have profound consequences for individuals and a long-lasting impact on societies and democracy. Instead of playing catch-up with societal demands for contextual assessment, the industry needs to apply systemic thinking and, through safety by design, generate ongoing innovation. With new, thoughtful reforms, we can get ahead of the challenges of content governance and truly represent the nuance of local communities online.
