
Algorithms for Editors: Scaling Human Solutions to Content Moderation

Part two of a two-part post about the challenge of content moderation for the world. Read part one here.


There are decades when nothing happens. There are weeks when decades happen. And so it seems with efforts to stem disinformation. 

Content platforms have done more during the last weeks of this US election cycle than in the decade since their vulnerabilities first became apparent.

During the Arab Spring, I led the team at Storyful as we verified and debunked videos for YouTube. Back then, 25 hours of videos were uploaded to the platform every minute. Today it is 500 hours. Content platforms are now in a contest for every passing moment. The need for speed is absolute. 

But we know that when technology companies move fast, they end up breaking things. 

When it comes to disinformation, it helps to slow things down. False information tends to lose its emotional charge when we are forced to think about it. Deliberative human judgment should also be the cornerstone of any effective content moderation solution. 

You can understand why technology companies have a bias toward engineering solutions to the challenge, such is the mind-bending volume of the content to be moderated.

But the complexity of multiple platforms, languages, cultures, and critical events demands MORE human judgment, not less. 

I’m steeped in old-world editorial values but have worked in the engineering culture of Silicon Valley. In my experience, it’s impossible to overstate the capacity for miscommunication between these two worlds.

Until very recently. 

The Opportunity

I work with a unicorn team of editors and engineers at Kinzen. In turn, we work with content platforms and publishers. And we’ve watched the contradictions between two tribes give way to a gradual but definite convergence.

Artificial intelligence has acted as a forcing function for both editors and technologists. Editors have realised its potential. Technologists have struggled with its limits. 

In this convergence between editors and engineers lies an opportunity to design moderation systems that exponentially scale the impact of human skill. 

We stress the word opportunity. At Kinzen, we’ve identified six significant challenges to scaling human solutions to content moderation:


  1. Make Humans Visible 

Some content platforms have outsourced the human function in moderation to a global army of low-paid and chronically under-valued contract staff. And in the process, they have downgraded that human function, as Casey Newton discovered in his exposé of work practices for moderators in the United States. 

As researchers like Sarah Roberts point out, in this model the ultimate value of humans is to train the datasets for machine learning tools that will eventually replace them. If algorithms are your solution to toxic content, then the human is reduced to virtual invisibility. 

Content moderation will increasingly require the skill of human editors. But there is no future in which these editorial experts are anything less than highly skilled professionals, on a par with the Trust and Safety teams they would collaborate with. 


  2. Algorithms for Editors

Fact-checking is a necessary pillar of content moderation, but without a technological leap forward, this vital skill can never scale to its full potential.

Core to our vision of content moderation is the design of algorithms which empower editors and fact-checkers, rather than the other way around.

Content moderation cannot scale without artificial intelligence. But it cannot be truly effective if the role of humans is to provide data that makes them obsolete. 

Editors need to be part of every stage of design and development of AI for content moderation. 


  3. Machines Need Better Data

At Kinzen, our experience tells us editors have a critical role in improving the accuracy and precision of NLP classification in multiple languages, aligning topics, categories and entities with known narratives, dog-whistles and actors. 

Through continuous monitoring, labelling and tagging, editors can help build knowledge graphs that reflect the evolution and migration of disinformation campaigns across the internet, and guide automated curation and moderation systems.
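To make that concrete, here is a minimal sketch in Python, using entirely hypothetical field names and example data rather than Kinzen’s actual schema, of how editor labels might link narratives, dog-whistle phrases and actors into a simple graph that automated curation systems could consult.

```python
# Hypothetical sketch: structuring editor labels so they can feed both an NLP
# classifier and a lightweight knowledge graph. Not a real schema.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class NarrativeLabel:
    """One editor-applied label on a piece of content (all fields illustrative)."""
    content_id: str
    language: str
    narrative: str                                          # known disinformation narrative
    dog_whistles: List[str] = field(default_factory=list)   # coded phrases spotted by the editor
    actors: List[str] = field(default_factory=list)         # accounts or outlets pushing it


def build_graph(labels: List[NarrativeLabel]) -> Dict[str, Dict[str, Set[str]]]:
    """Aggregate editor labels into narrative -> {phrases, actors} edges."""
    graph: Dict[str, Dict[str, Set[str]]] = {}
    for label in labels:
        node = graph.setdefault(label.narrative, {"phrases": set(), "actors": set()})
        node["phrases"].update(label.dog_whistles)
        node["actors"].update(label.actors)
    return graph


labels = [
    NarrativeLabel("post-001", "en", "vaccine-microchip", ["mark of the beast"], ["channel_a"]),
    NarrativeLabel("post-002", "pt", "vaccine-microchip", ["chip 5G"], ["channel_b"]),
]
print(build_graph(labels))
```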

The critical element of the ‘Algorithms for Editors’ model is that human moderators are not passively feeding data into a machine that will eventually replace them. 

They are working directly with ML engineers to solve the ‘Garbage In, Garbage Out’ problem, focusing on the quality of the data being fed into the machine, rather than the quantity. 
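As a toy illustration of that ‘quality over quantity’ idea (field names and data invented for this sketch, not any real pipeline), the snippet below drops machine-only labels and keeps only examples an editor has reviewed, using the editor’s corrected verdict as the training label.

```python
# Hypothetical 'Garbage In, Garbage Out' filter: only editor-reviewed examples,
# carrying the editor's corrected label, ever reach the training set.
raw_examples = [
    {"text": "example post 1", "auto_label": "harmful", "editor_label": "satire"},   # machine was wrong
    {"text": "example post 2", "auto_label": "harmful", "editor_label": "harmful"},  # machine confirmed
    {"text": "example post 3", "auto_label": "benign", "editor_label": None},        # never reviewed
]

training_set = [
    {"text": ex["text"], "label": ex["editor_label"]}
    for ex in raw_examples
    if ex["editor_label"] is not None  # discard unreviewed, machine-only labels
]

print(f"{len(training_set)} of {len(raw_examples)} examples kept for training")
```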

Tech-enabled editors better understand the blind spots and limits of data emerging from a monumental breaking news story like COVID. Tech-enabled editors will out-perform the machine in spotting the subtle evolution of human language - the changing pitch of the ‘dog-whistle’ - that defines disinformation networks and problematic narratives. 


  4. Editing the Algorithms

The ‘Algorithms for Editors’ approach uses human judgment to train the machine but also to judge its outputs: both its recall (how much of the harmful content it actually finds) and its precision (how much of what it flags is genuinely harmful). 

Artificial intelligence helps reduce a vast firehose of content to a manageable stream for human review, but it should never be the final arbiter of what is good or bad, no matter how advanced the modelling. 
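A hedged sketch of that division of labour, with an invented threshold and field names: the model only narrows the queue, editors make the final call, and their decisions are then used to measure the model’s recall and precision.

```python
# Illustrative triage loop: the model surfaces candidates, editors decide,
# and editor decisions become the ground truth for evaluating the model.
REVIEW_THRESHOLD = 0.7  # hypothetical score above which content goes to human review


def triage(items):
    """Route only high-scoring items to the human review queue."""
    return [item for item in items if item["model_score"] >= REVIEW_THRESHOLD]


def evaluate(items, editor_verdicts):
    """Recall: share of genuinely harmful items the model surfaced.
    Precision: share of surfaced items that editors confirmed as harmful."""
    flagged = {item["id"] for item in triage(items)}
    harmful = {cid for cid, verdict in editor_verdicts.items() if verdict == "harmful"}
    true_positives = len(flagged & harmful)
    recall = true_positives / len(harmful) if harmful else 0.0
    precision = true_positives / len(flagged) if flagged else 0.0
    return recall, precision


items = [
    {"id": "a", "model_score": 0.92},
    {"id": "b", "model_score": 0.40},
    {"id": "c", "model_score": 0.81},
]
print(evaluate(items, {"a": "harmful", "b": "harmful", "c": "benign"}))  # (0.5, 0.5)
```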

“If we could effectively automate content moderation,” says Tarleton Gillespie of Microsoft’s research division, “it is not clear that we should.”


  5. Editing the Editors

Human judgment carries its own biases. Even the critical thinker stumbles on contextual subtleties like satire, sarcasm and the ever-increasing complexity of online sub-cultures.

Just as news organisations have evolved the concept of the public editor or ombudsman, platforms need to enlist the help of civil society to audit the data being used to train the machine.


  6. Collaboration Across Platforms

In designing a content moderation system that can be applied consistently across the internet, collaboration is the essential ingredient. No content platform can protect its users without understanding what’s worked for other platforms. No technology company can develop consistent moderation policies without help from independent researchers and editorial experts. 

There are early signs of a more open approach to the technology of content moderation, with tech giants like Microsoft backing Project Origin and the Trusted News Initiative. Twitter has shown impressive transparency as it experiments with tech-enabled human curation and introduces much-needed friction to its platform. 

In a recent interview, Daniel Ek of Spotify used the word ‘algotorial’ to describe how machine and editor work together to scale serendipity on his platform. 

The ultimate goal, he said, was to always ensure that we are not only shaping culture, but also reflecting it.  

My hope is that the ‘move fast’ spirit of these last few months gives way to this more deliberative approach to the partnership of humans and algorithms. 



