Following formal approval of the U.K.’s Online Safety Act, regulator Ofcom has published its first draft codes of practice on how the act’s provisions should be implemented.
The emphasis is on measures to tackle child sexual abuse and grooming, pro-suicide content, fraud and terrorist content.
“Regulation is here, and we’re wasting no time in setting out how we expect tech firms to protect people from illegal harm online, while upholding freedom of expression,” says Dame Melanie Dawes, Ofcom’s chief executive.
“Children have told us about the dangers they face, and we’re determined to create a safer life online for young people in particular.”
The codes include a series of measures to control friend requests, banning larger and higher-risk services such as the major social media platforms from providing children with lists of suggested friends; suggesting children as friends to other users or making them visible in other users’ connection lists; or making children’s connection lists visible to other users.
Users who don’t appear on a child’s connection list shouldn’t be able to send them direct messages, and children’s location information shouldn’t be visible to anyone else.
Meanwhile, these larger platforms should use hash matching to check for child sexual abuse material against a database of illegal images, and use automated tools to detect URLs that have been identified as hosting CSAM.
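In essence, hash matching works by comparing a digest of an uploaded file against a database of digests of known illegal images. The sketch below is a hypothetical illustration using an exact cryptographic hash (SHA-256); real deployments typically use perceptual hashes such as PhotoDNA, which also match visually similar images, and the database entries here are placeholders, not real data.

```python
import hashlib

# Digests of previously identified material, as might be supplied by a
# body such as the Internet Watch Foundation. The value below is simply
# the SHA-256 digest of the string "test", used as a stand-in.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest appears in the database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

Exact hashing like this only catches byte-identical copies, which is one reason production systems favor perceptual hashing, and a source of the false-positive debate discussed below.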
Platforms should also block accounts run by proscribed terrorist organizations, and take measures against fraud, such as using keyword detection to find and remove posts linked to the sale of stolen credentials such as credit card details.
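Keyword detection of the kind described can be as simple as scanning posts for terms associated with the sale of stolen card details. The terms and pattern below are illustrative assumptions for the sake of example, not drawn from Ofcom’s codes.

```python
import re

# Hypothetical list of terms linked to trading stolen credentials.
FRAUD_TERMS = [r"\bfullz\b", r"\bcvv\b", r"\bcarding\b", r"\bdumps\s+with\s+pin\b"]
FRAUD_PATTERN = re.compile("|".join(FRAUD_TERMS), re.IGNORECASE)

def flag_for_review(post: str) -> bool:
    """Return True if a post contains any flagged fraud-related term."""
    return bool(FRAUD_PATTERN.search(post))
```

In practice such matches would route a post to human moderators rather than trigger automatic removal, since keyword lists alone produce false positives.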
The codes also specify governance requirements, such as a named person responsible for compliance; well-resourced content and search moderation teams; performance targets and subsequent monitoring; and clear policies on how content is reviewed. Recommender algorithms should be put through safety tests to make sure they’re not spreading illegal content.
“It’s vital companies are proactive in assessing and understanding the potential risks on their platforms, and taking steps to make sure safety is designed in,” says Susie Hargreaves, chief executive of the Internet Watch Foundation, a charity that works against child sexual exploitation.
“Companies in scope of the regulations now have a huge opportunity to be part of a real step forward in terms of child safety.”
There are concerns over the Online Safety Act, most notably in terms of privacy. While the government has backed away from suggestions that companies should be forced to break encryption, many fear that hash matching brings problems of its own.
It’s prone to false positives, which, says Andy Yen, founder and CEO of encrypted email service Proton, “would put law-abiding users at risk and bog the system down, forcing companies or law enforcement to investigate perfectly innocent media, potentially diverting resources away from real cases of abuse.”
Two years ago, Apple abandoned plans to introduce a type of hash matching, citing the risk that it could lead to bulk surveillance; there are also concerns that it can be easily fooled.
Currently up for public consultation, the Ofcom codes are expected to come into force at the end of next year.