Online Safety Act and Safeguarding: All You Need To Know


In recent years, children's safety online has been a primary concern for educators and parents alike, and it forms a crucial part of any safeguarding plan. The scope of that concern keeps expanding: we have previously shared how safeguarding protocols must adapt to the increasing adoption of advanced technologies that leverage AI.

At the same time, we are now dealing with novel issues that jeopardise the safety of anyone on the internet, including the sophisticated tools that create deepfakes (and the alarming pace at which they are developing), cyberflashing (which often goes unreported, even though 33% of women have experienced it) and the like.

The Online Safety Act seeks to address these and many other evolving online harms, but its driving focus is the safety of children online. Let's take a look.

What is the Online Safety Act?

Widely called a landmark piece of legislation, the Online Safety Act introduces additional security and safety mandates for tech companies to protect their users. It imposes a legal "duty of care" on these firms to protect users, especially children, from exposure to harmful content such as racist abuse, death threats, sexual exploitation and dodgy adverts, and from illegal content and activity such as fraud, as well as age-inappropriate material such as pornography.

Tech companies, including search engines, social media sites, forums, cloud storage services and video sites, will be required to:

  1. Monitor and moderate activity to remove illegal content
  2. Monitor and moderate activity to remove “harmful” content
  3. Implement age verification for sites likely to be accessed by children
  4. Give users controls to protect themselves against content they don't want to see

The premise of the entire bill is rooted in extending tangible safeguards into the virtual world. As Damian Collins, chair of the joint committee on the draft bill, put it: "What's illegal offline should be regulated online. For too long, big tech has gotten away with being the land of the lawless. A lack of regulation online has left too many people vulnerable to abuse, fraud, violence and in some cases even loss of life." Educators and parents can surely appreciate the intent of the bill.

Various provisions of the bill aim to protect children from exposure to illegal and harmful content, including:

  1. child sexual abuse
  2. controlling or coercive behaviour
  3. extreme sexual violence
  4. illegal immigration and people smuggling
  5. promoting or facilitating suicide
  6. promoting self-harm
  7. triggering eating disorders
  8. animal cruelty
  9. selling illegal drugs or weapons
  10. terrorism
  11. cyberflashing
  12. deepfakes
  13. downblousing, and the like

Criticism of the Online Safety Act

Any safeguarding plan would involve protecting children from such content, but what sets the Online Safety Act apart is that it shifts the onus of protection, the "duty of care", onto the tech firms. This opens up a huge discussion on how these firms are supposed to put it into action. For example, how can they verify a user's age in a watertight manner without jeopardising their sensitive information?

Most importantly, there is the question of defining what counts as "harmful" content. The judgment is left up to Ofcom, which can then levy heavy fines or even block access to non-compliant sites. This ambiguity and subjectivity around what is considered harmful or inappropriate leaves room for the deletion of incriminating evidence, the pushing of political narratives, and the limiting of free speech.

The bill has evolved into a guardian of children's online safety, but in doing so it has sparked much discussion about the best way to achieve that goal, and it has attracted criticism for its consequences for freedom of speech and cybersecurity. Child safety campaigners are aggrieved that the subjectivity in defining what counts as acceptable waters down the impact of the regulation, while digital rights advocacy groups warn that the added safeguards (like age verification for accessing sites) put sensitive information at risk.

The Electronic Frontier Foundation, for instance, points out: "People shouldn't be fined or thrown in jail because a government official finds their speech offensive…It would also significantly deviate from the new E.U. internet bill, the Digital Services Act, which avoids transforming social networks and other services into censorship tools." The Open Rights Group has expressed similar concerns.

Consequences for tech companies that fail to comply

The bill names the UK's media watchdog, the Office of Communications (Ofcom), as the regulator in charge of holding tech companies accountable, assessing the nature of content, and fining companies that fail to adhere to instructions for the removal of content. Criminal liability extends to senior management at such companies.

However, it is not possible to build a foolproof system that delivers the extent of content moderation the bill demands: algorithms are imperfect, human moderation is expensive, and real-time moderation at scale is not feasible. It remains to be seen how the bill's demands will be met once it is implemented, because the sophisticated automated moderation and age-verification tools it presumes do not exist at present.

The Way Forward for Safeguarding

The Online Safety Act does not change anything for educators or school authorities; it impacts tech companies at large, shifting accountability for online safety to them. But as we have seen, tech companies are not yet equipped with the tools and resources needed to deal with the deluge of data. As American cybersecurity researcher Marcus Ranum said, "You can't solve social problems with software."

This is a good reminder for educators and parents that nothing can substitute for human interaction. To truly protect children from harmful content, parents and educators need to communicate openly and preemptively with children about the safety measures they should follow while on the internet.

Schools remain at the forefront of ensuring children's safety, and the legal responsibilities of Designated Safeguarding Leads (DSLs) remain intact. We also anticipate that the latest Keeping Children Safe in Education (KCSiE) guidance will incorporate these developments. What this bill does is extend that responsibility to tech companies; how that gets implemented will determine how effective it is, and it will take time for the Act to come into force effectively.

In summary, the Online Safety Act requires tech companies to protect users, particularly children, from harmful online content. It introduces a "duty of care", compelling companies to monitor and remove illegal and harmful content. The bill signals a shift, holding tech companies accountable, yet its effectiveness hinges on practical implementation and the evolution of digital safeguards. While the bill places responsibility on tech firms, it also underscores the irreplaceable role of human interaction: educators and parents must engage proactively with children about online safety.

Kritika M Narula

Kritika is a research and media professional based in India.