Tech
How to Protect Young People Online in the Time of Coronavirus
For the 1.5 billion children and young people in coronavirus quarantine, the digital sphere has become a vital way of staying in touch with friends, keeping up with lessons, and blowing off steam. Policymakers and tech companies have had to work hard to protect young people online during this period, since the surge in online activity gives online predators more opportunities to prey on vulnerable children. Reports to the cyber tipline run by the National Center for Missing and Exploited Children (NCMEC) have more than doubled since the lockdown began (though this was partly driven by high levels of concern over a few particular pieces of content), while child exploitation material has cropped up on mainstream platforms like the Zoom video-conferencing app.
Fortunately, there are a number of promising initiatives to tackle this distressing trend. NCMEC recently inked a partnership with the French social media platform Yubo, whose users are aged 13-25. As part of the collaboration, Yubo’s dedicated moderation team will be able to report potential child exploitation to NCMEC’s tipline in real time. The French startup will also be able to tap into NCMEC’s database of known files containing child sexual abuse content, which will make it easier for Yubo to ferret out suspicious profiles.
NCMEC will also advise Yubo on developing effective safety tools, though this is an area in which the social media platform already has a head start. The app has developed an algorithm designed to crack down on inappropriate behavior in its livestreams. If the system detects that a user is nude or in their underwear on a video chat, Yubo moderators step in, warning users that they have a minute to dress appropriately or the stream will be shut down.
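To make that escalation concrete, here is a minimal Python sketch of how such a livestream moderation flow might work. The `classifier`, `notify`, and `shut_down` hooks are hypothetical stand-ins used only for illustration; Yubo has not published its implementation.

```python
import time

WARNING_GRACE_SECONDS = 60  # the one-minute window described above

def moderate_stream(stream, classifier, notify, shut_down):
    """Escalation flow for a livestream flagged by an automated detector.

    `stream`, `classifier`, `notify`, and `shut_down` are hypothetical
    stand-ins for a platform's internal systems.
    """
    if not classifier(stream.latest_frame()):   # e.g. a nudity/underwear detector
        return "ok"

    # Warn the user and give them one minute to comply.
    notify(stream.user_id,
           "Please dress appropriately within one minute or the stream will end.")
    deadline = time.time() + WARNING_GRACE_SECONDS

    while time.time() < deadline:
        time.sleep(5)                            # re-check every few seconds
        if not classifier(stream.latest_frame()):
            return "resolved"

    shut_down(stream)
    return "terminated"
```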
The app, whose daily new users have tripled amid the coronavirus crisis, also uses an age verification system called Yoti to ensure that users outside its 13-25 age bracket aren’t able to create profiles. In parallel with the NCMEC partnership, it has rolled out two new safety tools: one prevents users from signing up without a profile picture, and another alerts users in real time when they are about to share personal information like their location or phone number.
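That second tool could plausibly be built on simple pattern matching. The Python sketch below shows the idea with an illustrative phone-number regex and a few location phrases; Yubo’s actual detection method has not been disclosed and is likely far more sophisticated.

```python
import re

# Illustrative patterns only; a production system would handle many more
# formats (international numbers, street addresses, social handles, etc.).
PHONE_RE = re.compile(r"\+?\d(?:[\s().-]?\d){6,13}")
LOCATION_HINTS = ("my address is", "i live at", "meet me at")

def personal_info_warning(message: str) -> str | None:
    """Return a warning if an outgoing message seems to contain personal info."""
    if PHONE_RE.search(message):
        return "It looks like you're about to share a phone number. Are you sure?"
    if any(hint in message.lower() for hint in LOCATION_HINTS):
        return "It looks like you're about to share your location. Are you sure?"
    return None

# personal_info_warning("call me on +44 7700 900123")
# -> "It looks like you're about to share a phone number. Are you sure?"
```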
Yubo isn’t the only firm using machine learning to keep young people safe online. The Internet Watch Foundation (IWF), a UK-based charity, has been building a database of phrases often used by pedophiles to camouflage their activities. The IWF shares its list of abusers’ slang with social media platforms and law enforcement officials, allowing them to uncover cases of child exploitation.
After working on the glossary for ten years, the IWF had managed to identify 450 problematic words—but that number has ballooned over the past couple of weeks. The charity deployed machine learning techniques and an intelligent crawler to comb the web for language used by online predators—a strategy which has seen more than 3,500 new entries added to the IWF’s list.
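In practice, a platform that receives the IWF’s list can screen user-generated text against it and route any matches to human moderators. The Python sketch below shows the idea using harmless placeholder terms, since the real list is shared only with vetted partners.

```python
# Placeholder terms only; the IWF's real keyword list is not public.
KNOWN_TERMS = {"example-term-1", "example-term-2"}

def flag_coded_language(text: str) -> set[str]:
    """Return any known coded terms found in a piece of user-generated text."""
    tokens = {token.strip(".,!?;:").lower() for token in text.split()}
    return KNOWN_TERMS & tokens

# Matches would be escalated to human moderators rather than acted on
# automatically, since coded language is easy to misread out of context.
```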
British startup SafeToNet, meanwhile, uses machine learning to spot problematic behaviors in children’s online activities. It operates on a three-strike system: it first warns young users and tries to steer them away from inappropriate behavior, and only intervenes more directly when needed to protect the child, for example by remotely turning off a device’s camera to stop a child from sending an inappropriate photo of themselves to a stranger.
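SafeToNet hasn’t published its code, but the escalation logic described above can be sketched as a simple per-child strike counter. The `warn` and `block_camera` hooks below are hypothetical placeholders for the app’s real interventions.

```python
from collections import defaultdict

strikes = defaultdict(int)   # number of flagged incidents per child

def handle_risky_activity(child_id: str, description: str, warn, block_camera) -> str:
    """Escalate the response each time risky behaviour is detected."""
    strikes[child_id] += 1
    count = strikes[child_id]

    if count == 1:
        warn(child_id, f"{description} can be risky. Here's how to stay safe...")
        return "warned"
    if count == 2:
        warn(child_id, "Second warning: this kind of activity can put you at risk.")
        return "warned_again"

    # Third strike: intervene directly, e.g. disable the camera so an
    # inappropriate photo can't be taken and sent to a stranger.
    block_camera(child_id)
    return "intervened"
```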
Tech’s biggest names are getting on the AI-assisted online safety bandwagon, too. Microsoft recently rolled out Artemis, an automated tool that can identify patterns in how online predators communicate with young people. Artemis was trained on Microsoft products including the gaming platform Xbox Live and the chat app Skype, and Microsoft is now licensing it to other companies for free.
As with any AI system, human moderation remains essential: if Artemis identifies a conversation as carrying a particularly high risk of child exploitation, human moderators at Microsoft take a second look. If they agree that the conversation is problematic, they tip off law enforcement bodies and NCMEC.
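Put together, that workflow looks roughly like the Python sketch below: an automated risk score decides what reaches the human review queue, and only a human decision triggers a report. The scoring function, threshold, and queue interface are hypothetical, not Microsoft’s actual Artemis internals.

```python
REVIEW_THRESHOLD = 0.8   # hypothetical cut-off for sending to human review

def triage_conversation(conversation, score_risk, review_queue, report) -> str:
    """Route a conversation based on its automated exploitation-risk score."""
    risk = score_risk(conversation)          # e.g. a model score between 0 and 1
    if risk < REVIEW_THRESHOLD:
        return "no_action"

    # High-risk conversations are never reported automatically:
    # a human moderator makes the final call.
    decision = review_queue.review(conversation, risk)   # simplified: blocks until reviewed
    if decision == "confirmed":
        report(conversation)                 # to law enforcement and NCMEC
        return "reported"
    return "dismissed"
```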
While these AI-based tools have revolutionized the fight against online child exploitation, one of the best ways to keep young people safe online is one of the most low-tech. Teaching children best practices about online safety can pay huge dividends. Google, for example, has developed a complete curriculum, called “Be Internet Awesome,” aimed at teaching school-age children how to protect themselves online.
Even before the current public health crisis, young people were spending an average of 40 hours a week online. With virtual playdates, online lessons, and long-distance friendships now a key part of keeping life as normal as possible in quarantine, these initiatives to help stamp out child exploitation are more vital than ever.