Trust & Safety for a social internet: Why we invested in Checkstep

Over the past few years, life has become increasingly digital, with the news, social media and everything in between woven into every part of our day. In all of this, we know that the depths of the internet are a dark place, yet, by and large, we manage to avoid them. But have you ever thought about what it takes to make sure those depths don’t surface during your casual TikTok, Instagram, or news feed scroll?

To date, the frontline has been staffed by legions of human moderators trawling through ever-growing volumes of content. When footage of the tragic shooting in Buffalo aired live on Twitch, only 22 people saw it in the two minutes it took to take the video down. But in those two minutes, the video was shared to another streaming platform, where it was viewed 3 million times before it was removed. When it was shared from there to Facebook, it garnered 500 comments and 46,000 shares in the 10 hours it was live on the site.

Managing this tidal wave of content is expensive. Facebook and YouTube employ hundreds of thousands of moderators, costing them hundreds of millions a year. And it takes a mental toll too, with Facebook recently paying a £42m settlement to content moderators who developed post-traumatic stress disorder (PTSD) on the job. With Twitch alone broadcasting more than 2 million hours of video a day, it’s clear that relying solely on humans, equipped with limited moderation tools, to deal with accelerating volumes of user-generated content is unsustainable.

But automating content moderation is far from straightforward; it’s both an operational and a technical challenge. You don’t want to ban everything or let everything through — you need to determine what level of moderation is appropriate for the specific platform and what counts as harmful in each context. If you run an education platform, for instance, you might consider any nudity or violent content completely inappropriate. But how should a moderator of that platform treat nudity in art or historical photography? It can be hard for a human to judge, let alone an algorithm. And these challenges are compounded across different cultural contexts and languages — it’s easier for an AI to spot a naked body than a localised racial slur.

Little wonder, then, that companies have struggled to find a solution that really meets their needs. The variety of requirements across platforms forces each one to cobble together its own bespoke solution. There’s no way that Elon Musk’s Twitter is going to take the same approach to content moderation as Meta does — one-size-fits-all just doesn’t work. Larger companies have been able to develop some in-house tooling, while smaller companies have resorted to a patchwork of data vendors and workflow tools to try to create a solution that does the job. Until now, it hasn’t been a priority to spend business or developer time improving that basic functionality. But as the volume and variety of content grows, along with regulatory pressure and potential reputational cost, this lack of investment becomes a real vulnerability.

That’s where our latest investment, Checkstep, comes in. Checkstep’s AI-powered content moderation platform accelerates and automates the process of removing harmful content such as hate speech and harmful imagery from user-led communities and platforms — at scale.

The platform offers some of the broadest content-understanding capabilities on the market, spanning text, imagery, video, and live chat. It’s also highly customisable. Moderators can tailor Checkstep to their platform or online community’s needs, helping them manage content in a way that’s fair and appropriate within their specific context, whilst preserving free speech. So Twitter and Facebook could use the same software but have completely different settings for what (or who) they do and do not allow onto their platforms.
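
To make that idea concrete, here is a minimal, purely illustrative sketch in Python of what per-platform policy settings could look like. The class, category names and thresholds are all hypothetical assumptions for illustration, not Checkstep’s actual API, which this post doesn’t describe.

```python
from dataclasses import dataclass, field

# Hypothetical per-platform policy: the same software, different settings.
@dataclass
class ModerationPolicy:
    platform: str
    # Highest tolerated classifier score per content category (0.0 = never allowed).
    thresholds: dict[str, float] = field(default_factory=dict)

def violations(policy: ModerationPolicy, scores: dict[str, float]) -> list[str]:
    """Return the categories whose scores exceed this platform's limits."""
    return [cat for cat, score in scores.items()
            if score > policy.thresholds.get(cat, 1.0)]

# An education platform bans nudity outright; a social network tolerates more.
education = ModerationPolicy("edu-site", {"nudity": 0.0, "violence": 0.1, "hate_speech": 0.2})
social = ModerationPolicy("social-app", {"nudity": 0.6, "violence": 0.5, "hate_speech": 0.2})

scores = {"nudity": 0.4, "violence": 0.05}   # e.g. a classical painting
print(violations(education, scores))         # ['nudity'] -> blocked on the education platform
print(violations(social, scores))            # []         -> allowed on the social app
```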

Where possible, Checkstep automates decisioning so that ‘no-brainers’ can be dealt with immediately, with fully explainable AI allowing moderators to back up each decision with a reasoned explanation. Bringing everything into one interface also means that when, inevitably, human moderators need to step in to make a final decision, they have all the information they need at their fingertips to handle those cases effectively. This not only makes their jobs easier, but also lightens the mental load these roles entail — all while making the digital world a little safer and more inclusive.
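
Again purely as an illustration (the function names and thresholds below are assumptions, not Checkstep’s real interface), decisioning of this kind might look something like the following: clear-cut cases are handled automatically with a reason attached, and anything ambiguous is escalated to a human reviewer.

```python
from typing import NamedTuple

class Decision(NamedTuple):
    action: str   # "remove", "allow" or "escalate"
    reason: str   # human-readable explanation backing the decision

def decide(scores: dict[str, float],
           block_at: float = 0.95, allow_at: float = 0.05) -> Decision:
    """Auto-handle the 'no-brainers'; hand anything ambiguous to a human reviewer."""
    worst_cat, worst = max(scores.items(), key=lambda kv: kv[1])
    if worst >= block_at:
        return Decision("remove", f"{worst_cat} score {worst:.2f} >= auto-remove threshold {block_at}")
    if worst <= allow_at:
        return Decision("allow", f"all category scores <= auto-allow threshold {allow_at}")
    return Decision("escalate", f"{worst_cat} score {worst:.2f} is ambiguous; queued for human review")

print(decide({"hate_speech": 0.98, "nudity": 0.01}))  # removed automatically, with a reason
print(decide({"hate_speech": 0.40, "nudity": 0.02}))  # escalated to a human moderator
```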

And the Checkstep team is ideally suited to the mission, combining a deep understanding of customer platforms’ requirements with a strong background in research and product development, including several published scholars in fake news detection and AI ethics and safety. Co-founder and CEO Guillaume Bouchard sold his first company, Bloomsbury AI (which used natural language processing to combat misinformation on platforms), to Facebook in 2018, and then served as a research manager at Facebook and head of its AI Integrity team in London. Before Bloomsbury, Guillaume spent 12 years as a researcher at UCL and Xerox in the fields of predictive analytics, text understanding and distributed AI. Co-founder and CTO Jonathan Manfield was a machine learning engineer at Bloomsbury before spending four years in the tech units of JP Morgan and Morgan Stanley, developing systems for payments and risk.

So we’re delighted to be co-leading Checkstep’s $5m seed round, alongside Form Ventures and several angel investors. We believe Checkstep is well placed to be a leader in a $2bn market that is set to grow to $6bn by 2027. With regulatory pressure increasing and the mental strain on moderators growing, Checkstep couldn’t have arrived at a better time to help keep the darkest depths of the digital world where they belong.
