Some news: Kinzen is a signatory to the Strengthened EU Code of Practice on Disinformation, which was launched on Thursday.
What does it all mean? Well, we have a new blog post all about that! Our Head of Policy, Karolina Pietkiewicz, outlines why Kinzen is different to the other organisations that have signed up to the Code, but also why we felt strongly that it was important to do so.
We are excited to play our part in examining the systems that aid the spread of misinformation; the Code emphasises areas like accountability and transparency in content moderation and recommendations, as well as the empowerment of users. Kinzen also has a unique role to play in supporting the research and fact-checking community with our technological tooling; we are already working with a Spanish agency as they fact-check climate change misinformation.
You can read more about it directly from Karolina here.
PS: We will be at the Global Fact 9 conference in Oslo next week talking about our work on identifying misinformation in audio. Hit reply if you're going to be there and you'd like to meet us!
Editor’s Pick
They haven’t gone away, you know. This week, when Justin Bieber revealed he had developed Ramsay Hunt syndrome, some within the anti-vaccine community seized the chance to spread their message, even though the cause of Bieber’s symptoms is not known and is unlikely to be related to any vaccine.
I was thinking about this as I was reading Anti-vaxxers: How to Challenge a Misinformed Movement by Jonathan M. Berman. It’s a fantastic account of how the anti-vaccine movement has developed since the 19th century, from the very invention of vaccines. And although the language of such communities is always evolving, it is striking how little many of their core narratives have changed: Berman illustrates how many of the conspiracy theories and misinformation surrounding vaccines were the same 100 years ago as they are today. The book is also a useful guide if you find yourself discussing the science behind vaccines with a loved one. For anyone trying to understand the nuances of the issue, it’s an excellent starting point.
For Your Headphones This Weekend
This week Claire Wardle of First Draft announced the nonprofit is closing but will become part of the newly launched Information Futures Lab at Brown’s School of Public Health. First Draft has pioneered research into misinformation, with thought leadership and helpful resources, so I’m very glad its mission continues at Brown.
Recently, Wardle spoke to the Brown School of Public Health podcast about her work. Listen here.
From the Kinzen Slack channels
Articles recommended by our uniquely experienced group of engineers, scientists, designers, developers and editorial experts
Tech Policy Press. Give Group Admins Tools to Fight Disinformation In Immigrant Diaspora WhatsApp Groups
As we consider ways to counter dis- and misinformation in closed messaging apps like WhatsApp, we face a conundrum: these are far more private spaces than Facebook and Twitter. Some fact-checkers have worked to establish “tip lines”, where messages can be forwarded to them so the claims being shared can be researched. But the approach promoted in this piece is also a critical part of the puzzle, and is in line with the ideas expressed by Daphne Keller around “middleware”. If we can empower curators and group admins with tools, resources and information, we might be able to make progress on this seemingly intractable problem.
Shorenstein Center. Ethical Scaling for Content Moderation: Extreme Speech and the (In)Significance of Artificial Intelligence
In a week when hyped-up claims about AI supposedly becoming sentient were going viral, this research paper was a breath of fresh air. Yes, AI has incredible potential and is already doing remarkable things. However, fully automating complex tasks like identifying misinformation, extreme speech and content moderation faces massive challenges. That’s why Kinzen takes a Human in the Loop approach, building a feedback loop between human expertise and technological scale. But I digress. This research paper proposes “ethical scaling”, that is, “a transparent, inclusive, reflexive and replicable process of iteration for content moderation that should evolve in conjunction with global parity in resource allocation for moderation and addressing structural issues of algorithmic amplification of divisive content.” Worth your time.
Nieman Lab. “Like a slow-motion coup”: Brazil is on the brink of a disinformation disaster
Julia Angwin interviews Patricia Campos Mello, a leading Brazilian journalist who is deeply concerned about the impact of disinformation on the upcoming election in Brazil. We share her concerns; see our recent blog post from Leticia Duarte on the key trends we are seeing ahead of October’s election.