What you need to know about the UK’s Online Safety Bill

Three years and four prime ministers after the UK government first published its Online Harms white paper—the basis for the current Online Safety Bill—the Conservative Party’s ambitious attempt at internet regulation has found its way back to Parliament after multiple amendments.

If the bill becomes law, it will apply to any service or site that has users in the UK, or targets the UK as a market, even if it is not based in the country. Failure to comply with the proposed rules will place organizations at risk of fines of up to 10% of global annual turnover or £18 million (US$22 million), whichever is higher.
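
For illustration, the penalty ceiling scales with company size. The sketch below shows the “whichever is higher” calculation in Python; the turnover figure is purely hypothetical.

```python
# A minimal sketch of the bill's maximum-penalty formula as described above:
# the greater of 10% of global annual turnover or a flat £18 million.
def max_online_safety_fine(global_annual_turnover_gbp: float) -> float:
    return max(0.10 * global_annual_turnover_gbp, 18_000_000)

# A hypothetical firm with £500m global turnover faces a ceiling of £50m, not £18m.
print(max_online_safety_fine(500_000_000))  # 50000000.0
```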

A somewhat bloated and confused version of its former self, the bill, which was dropped from the legislative agenda when Boris Johnson was ousted in July, has now passed its final report stage. That means the House of Commons has one last chance to debate its contents and vote on whether to approve it.

However, the legislation then needs to make its way through the House of Lords unscathed before it can receive royal assent and become law. The final timetable for the bill has yet to be published, but under parliamentary rules, if it has not passed by April 2023 the legislation will be dropped entirely and the process will have to start all over again in a new session of Parliament.

The Online Safety Bill is proposed legislation that aims to keep websites and other internet-based services free of illegal and harmful material while defending freedom of expression. The bill is designed to keep internet users safe from fraudulent and other potentially harmful content, and to prevent children in particular from accessing damaging material. It does this by imposing requirements on how social media and other online platforms assess and remove illegal material and content they deem harmful. The government describes the legislation as its “commitment to make the UK the safest place in the world to be online.”

The bill applies to search engines; internet services that host user-generated content, such as social media platforms; online forums; some online games; and sites that publish or display pornographic content.

Parts of the legislation closely mimic rules set out in the EU’s recently approved Digital Services Act (DSA), which bans the practice of targeting users online based on their religion, gender, or sexual preferences, and requires large online platforms to disclose what steps they are taking to tackle misinformation or propaganda.

Ofcom, the UK communications regulator, will be appointed as the regulator for the Online Safety regime and will be given a range of powers to gather the information it needs to support its oversight and enforcement activity.

Currently, if a user posts illegal or harmful content online, the intermediary platform through which the content can be accessed typically has a liability shield: the platform doesn’t become liable until it’s made aware of the content, at which point it has to act to remove it. Under the bill, companies would need to actively look for illegal content and remove it as soon as it appears, rather than waiting for someone to report it before acting.

The Online Safety Bill imposes a regulatory framework on these intermediary platforms, requiring them to take responsibility for user-generated content and to take steps to ensure their systems and processes offer “adequate protection of citizens from harm presented by content.”

Though the bill does not define “adequate,” it does say that the regulated services should offer protection from harm “through the appropriate use by providers of such services of systems and processes designed to reduce the risk of such harm.”

In the original draft of the bill, the UK government required internet companies to monitor “legal but harmful” user content. However, after concerns were raised that the government would ultimately be responsible for defining what fell into that category, the provision was replaced with new rules requiring companies to be more transparent about their internal content-moderation policies, for example by explicitly saying why certain content must be removed. They must also offer a right of appeal when posts are deleted.

Additionally, companies will not be able to remove or restrict legal content, or suspend or ban a user, unless the circumstances for doing this are clearly set out in their terms.

If the legislation were to become law, social media firms would be legally required to remove illegal content, take down material that breaches their own terms of service, and provide adults with greater choice over the content they see and engage with, even if it’s legal. For example, pop-up screens may warn users that a site displays content it deems potentially harmful to some users.

Content that would fall under the scope of the legislation includes material that encourages self-harm or suicide, as well as non-consensual images such as so-called deepfake porn, where editing software is used to make and distribute fake sexualized images or videos of people without their permission.

Material involving self-harm is defined as “legal but harmful content” (as long as it does not actively encourage self-harm) and is rated as a “priority harm”—a topic that platforms would be required to have a policy on. If they fail to apply their stated policy to this type of content, they could be subject to fines from Ofcom.

In March 2022, the government also added a requirement for search engines and other platforms that host third-party, user-generated content to protect users from fraudulent paid-for advertisements and prevent fraudulent ads from appearing on their sites.

Technology firms would also be required to publish more information about the risks their platforms pose to children and show how they enforce user age limits to stop children from bypassing authentication methods. Furthermore, if Ofcom takes action against a service, details of that disciplinary measure must be published.

Since the bill was first proposed, people across the political spectrum have repeatedly argued that the legislation’s current provisions would erode the benefits of encryption in private communications, reduce internet safety for UK citizens and businesses, and compromise freedom of speech. That’s because, during the summer, the government added a new clause mandating that tech companies providing end-to-end encrypted messaging scan for child sexual abuse material (CSAM) so it can be reported to authorities. However, the only way to ensure a message doesn’t contain illegal material would be for companies to use client-side scanning and check the contents of messages before they are encrypted.
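
To make that objection concrete, the sketch below shows, in simplified Python, what client-side scanning implies: the plaintext of a message is inspected on the user’s device before end-to-end encryption is applied. The hash list and the encrypt and report callbacks are hypothetical placeholders, not any real provider’s API; the point critics make is that the inspection step sits outside the encryption’s protection.

```python
import hashlib

# Hypothetical blocklist of digests supplied by an authority. Real proposals use
# perceptual hashes of known CSAM; a plain SHA-256 digest is used here only to
# illustrate the order of operations.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def scan_then_encrypt(plaintext: bytes, encrypt, report):
    """Client-side scanning sketch: inspect the plaintext on the device
    *before* end-to-end encryption is applied."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        report(digest)            # flagged and reported before encryption ever happens
        return None
    return encrypt(plaintext)     # only unflagged messages reach the encryption step
```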

In an open letter signed by 70 organizations, cybersecurity experts, and elected officials after Prime Minister Rishi Sunak announced he was bringing the bill back to Parliament, signatories argued that “Encryption is critical to ensuring internet users are protected online, to building economic security through a pro-business UK economy that can weather the cost-of-living crisis, and to assuring national security.”

“UK businesses are set to have less protection for their data flows than their counterparts in the United States or European Union, leaving them more susceptible to cyber-attacks and intellectual property theft,” the letter noted.

Matthew Hodgson, co-founder of Element, a decentralized British messaging app, said that while it isn’t controversial to agree that platforms should have to provide tools to protect users from content of any kind—whether it’s abusive or just something they don’t want to see—what is controversial is the idea of effectively requiring backdoors into private content such as encrypted messaging, just in case it happens to be bad content.

“The second you put in any kind of backdoor, which can be used in order to break the encryption, it will be used by the bad guys,” he said. “And by opening it up as a means for corrupt actors or miscreants of any flavor to be able to undermine the encryption, you might as well not have the encryption in the first place and the whole thing comes tumbling down.”

Hodgson said there appears to be a misunderstanding among some people who, on one hand, have expressly said they don’t want to put backdoors into encrypted messages, but on the other hand claim tech companies need to have the ability to scan everybody’s private messages in case they contain illegal content.

“Those two statements are completely contradictory and unfortunately, the powers that be don’t always appreciate that contradiction,” he said, adding that the UK could end up in a situation like Australia’s, where the government passed legislation permitting enforcement agencies to require businesses to hand over user information and data even when it is protected by encryption.

Hodgson argues that rather than facilitating the introduction of privacy-eroding infrastructure, the UK government should prevent it from becoming a reality that more authoritarian regimes could then adopt, with the UK setting a moral example.

There’s also concern about how some of the provisions in the bill will be enforced. Francesca Reason, a solicitor in the regulatory and corporate defense team at legal firm Birketts LLP, said many tech companies are concerned about the more onerous requirements that might be placed on them.

Reason said there are also questions of practicality and empathy that will need to be navigated. For example, is the government going to prosecute a vulnerable teenager for posting an image of their own self-harm online?

In order to avoid what one Conservative member of Parliament described as “legislating against hurt feelings,” amendments made to the bill ahead of its return to Parliament place the focus of protection on children and vulnerable adults. The amended bill makes it illegal for children to see certain types of content—such as pornography—but not for adults, whereas in previous versions of the bill it would have been illegal for anyone to see that content. Now, adults just have to be provided with a warning about content that a service provider deems potentially objectionable or harmful in its content guidelines.

However, while privacy campaigners are concerned about the bill’s approach to encryption, some safety campaigners argue that the legislation now doesn’t do enough to protect the most vulnerable from online harms.

“There’s a faction that will feel that vulnerable adults now fall outside of that scope of protection,” Reason said, noting that someone’s appetite for harmful content doesn’t suddenly switch off the moment they turn 18.

“The other argument from a lot of people is that adults will still be able to post and view anything legal, even if it’s potentially harmful, so long as it doesn’t violate the platform’s Terms of Service,” she said.

In its current form, the bill is estimated to affect more than 25,000 tech companies, and while much of the focus has been on how so-called Big Tech companies will comply, smaller internet services that offer a space where users can share thoughts, or that are monetized by ads, will also fall within its scope.

Reason said that one way tech companies might choose to navigate the legislation is by either locking children out of their sites completely or sanitizing their platforms, by default, to a level appropriate for their youngest users.

Additionally, as a result of these new rules, a vast number of websites would require visitors to prove their identity to show they are old enough to access certain content. Online age verification is something the government has tried and failed to enact in the past. As a result, Matthew Peake, global director of public policy at identity verification (IDV) platform Onfido, warns that unless the government and Ofcom work with the tech industry and IDV providers to get a better understanding of what is actually possible, the bill will fall flat.

“[Onfido] has a very strong view that there is no need to have a trade-off between privacy and good IDV, you can verify someone’s identity in a very robust manner without eroding or jeopardizing their privacy,” he said. “We want that message to be understood by government and by privacy campaigners, because we all want to have a safe experience online. That’s the end goal.”

However, while many politicians have publicly declared that people should not be able to create anonymous accounts on social media platforms, Peake argues that anonymity is vital to allowing whistleblowers, victims of domestic violence and others with very legitimate reasons for keeping their identity obscured to safely access the internet.

Despite a 2022 poll by BCS, The Chartered Institute for IT, finding that just 14% of 1,300 IT professionals considered the bill “fit for purpose” and 46% believed it to be “not workable,” the expectation is that the legislation will be voted through, largely because the fundamental purpose of the bill—keeping children safe online—is a big political point scorer.
