
Kinzen's Weekly Wrap - June 24, 2022

This week our team are in Oslo at the Global Fact 9 conference. We've been delighted to meet with fact checkers and others working on the same challenges as us, and excited to share our learnings on identifying misinformation in audio in one of the sessions. 

Ahead of that session we published a blog post looking at the challenge in detail. It reflects on conversations we’ve been having recently with PesaCheck, Maldita, and Verificat. Each provided fantastic insights into the various problems they have encountered with the audio format, issues which I’m sure many other researchers would recognise.

Kinzen has been working on technology to help with this, and recently announced a partnership with Verificat to focus on identifying climate change misinformation in Spanish podcasts. If you’re a fact checker or misinformation researcher who wants to learn more, hit reply. You can read the post in full here.

Editor’s Pick

In Culture Warlords: My Journey Into the Dark Web of White Supremacy, Talia Lavin writes about embedding with extremists in their online chat rooms, and the lessons she learned as a result.

It won’t surprise you that this is, at times, a difficult read. The strength of the book is in how Lavin reflects on the latest white supremacist trends while tying them back to the historical record, combining broad context with highly specific examples from her monitoring of these spaces.

When reflecting on why she spends so much time on this, Lavin writes, “The chat rooms would continue without my sock puppet or with it. But if I’m there, I can tell you about it. And if you learn about it, you can help me strip the shadows away, and disinfect these crusty dens of hate with a blast of much-needed sunlight.”

For Your Headphones This Weekend

An April episode of the Tech Against Terrorism podcast focused on the role of algorithms and automation in tackling terrorism online. Speakers include Adam Hadley, founder and executive director of Tech Against Terrorism; Dia Kayyali, director for advocacy at Mnemonic; and Chris Meserole, a fellow in Foreign Policy at the Brookings Institution. They examined the ethical implications of this work and how algorithms can be used to analyse terrorist behaviour. Listen here.

From the Kinzen Slack channels

Articles recommended by our uniquely experienced group of engineers, scientists, designers, developers and editorial experts

Balkan Insight. Online Hate Speech Remains Unmoderated in Balkans

Roberta Taveri and Pierre François Docquir outline their concerns ahead of the October elections in Bosnia and Herzegovina. Their argument: "Local civil society actors must be given the opportunity to contribute to content moderation". They explain why: “A particular word could lead to very different consequences, depending on who it is addressed to, and how, when and where it is expressed. The apparent and implicit meanings of a message – and its potential consequences – can only be assessed on the basis of a robust understanding of the specific linguistic, historical, cultural, societal and political context.”

Center for Democracy & Technology. Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis

This report is from May 2021, but it was shared again in our Slack channels this week. It highlights the advantages and disadvantages of artificial intelligence in understanding content, focusing on two key methods: matching models and predictive models. The report finds that, "State-of-the-art automated analysis tools that perform well in controlled settings struggle to analyze new, previously unseen types of multimedia." This is a problem for a challenge like misinformation, which is ever-changing. As the report states, "While there are many important and useful advances being made in the capabilities of machine learning techniques to analyze content, policymakers, technology companies, journalists, advocates, and other stakeholders need to understand the limitations of these tools. A failure to account for these limitations in the design and implementation of these techniques will lead to detrimental impacts on the rights of people affected by automated analysis and decision making."
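
To make that distinction concrete, here is a minimal sketch in Python (our own illustration, not drawn from the report; the fingerprint, phrases and scoring below are purely hypothetical). A matching model checks content against fingerprints of items already reviewed, while a predictive model scores content it has never seen before.

```python
import hashlib

# Matching model: fingerprint new content and compare it against fingerprints
# of already-reviewed items. Effective for exact re-uploads, but blind to
# anything it has not seen before.
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",  # hypothetical fingerprint
}

def matches_known_item(text: str) -> bool:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return digest in KNOWN_HASHES

# Predictive model: score previously unseen content. A real system would use a
# trained classifier; this keyword scorer is only a stand-in to show the shape
# of the approach, and like any predictive model it can misfire on novel claims.
SUSPICIOUS_PHRASES = ("miracle cure", "they don't want you to know")  # illustrative only

def predicted_risk_score(text: str) -> float:
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

if __name__ == "__main__":
    post = "A miracle cure they don't want you to know about"
    print(matches_known_item(post))    # False: no exact fingerprint on record
    print(predicted_risk_score(post))  # 1.0: high score, best routed to human review
```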

Equis Research. On Latinos, Misinformation and Uncertainty: New Polling Insights

Fascinating research into the spread of misinformation in Spanish in the United States.

The Guardian. Fifa to tackle online abuse aimed at players during Qatar World Cup

Fifa is launching a moderation service to try to curb the impact of online racist abuse directed at football players. We'll be watching to see how this works in practice later this year.
