The Online Safety Act, promoted as a safeguard for children, now conceals Gaza’s hardships, suppresses opposition, and spreads censorship globally.
Intended to protect children in the UK, the Online Safety Act instead keeps the public in the dark. Shortly after the law took effect in late July 2025, X (formerly Twitter) began obscuring videos of Israel’s actions in Gaza on UK feeds by placing content warnings and age restrictions. What was marketed as a protective measure has become one of Britain’s most powerful censorship mechanisms. This outcome is deliberate—a law that exploits child protection as a pretext to normalize censorship, online identity verification, and surveillance.
The origins of the UK’s online censorship debacle date back nearly ten years to MindGeek, now known as Aylo, the controversial company behind Pornhub. This tax-evading, exploitative porn conglomerate collaborated closely with the UK government to design AgeID, an age-verification system that would have effectively granted Aylo a near monopoly on legal adult content by forcing competitors to pay or close. Although public outrage ended AgeID in 2019, the concept persisted. The Digital Economy Act 2017 laid the foundation; the Online Safety Act 2023 made it enforceable law. Several EU countries, including France and Germany, are pursuing similar legislation under the guise of “protecting children.” This isn’t a conspiracy theory but a predictable outcome of corporate interests merging with government control, cloaked in child safety rhetoric.
Under the Online Safety Act, Ofcom gains sweeping authority to regulate much of the internet—social media, search engines, and adult platforms—with noncompliant services facing penalties of up to £18 million ($24 million) or 10 percent of global turnover, whichever is greater. Platforms deemed “Category 1” services face the strictest rules, such as enforced age checks, identity verification for contributors, and the removal of broadly defined “harmful” content. Wikipedia now faces exactly this threat. In August 2025, the High Court dismissed the Wikimedia Foundation’s legal challenge against this classification, allowing Ofcom to treat the site as high-risk. The foundation warned that compliance would compel censorship of crucial information and endanger volunteer contributors by forcing them to reveal their identities. If Wikipedia refuses to comply, the UK could theoretically block access entirely—an alarming example of how “child protection” morphs into information control. Ofcom has already launched inquiries into major porn and social networking sites for alleged rule violations. The chilling effect of this legislation is no longer theoretical; it is already being felt.
Age verification methods clash fundamentally with privacy and security principles; any system that requires identity checks deserves scrutiny. The July 25 breach of Tea, a women’s dating-safety app, which exposed thousands of photos and some 13,000 sensitive ID documents on 4chan, alongside a more recent Discord breach that revealed more than 70,000 government-issued IDs, starkly illustrates these risks.
When verification data ties real identities to online behavior, it becomes a prime target for hackers, blackmailers, and authoritarian states. Past incidents, such as the 2013 Brazzers leak involving nearly 800,000 accounts, and FBI reporting that pornography-linked extortion remains a major online crime, demonstrate this danger. Imagine such infrastructure extended beyond adult content to political discourse, journalism, and activism. Tools designed for “child safety” can facilitate severe manipulation and blackmail. A single breach could compromise journalists, whistleblowers, or officials. Given the global flow of data, there is no assurance that verification systems built under democratic control won’t fall into authoritarian hands. Infrastructure meant to increase digital “trust” ultimately undermines it.
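The data-minimization point is easiest to see in code. Below is a minimal sketch in Python, with every class and method name invented for illustration, contrasting a verifier that retains records linking identity to usage, the failure mode behind the Tea and Discord breaches, with one that checks a document, discards it, and returns only an unlinkable token:

```python
import secrets

# Failure mode: the service keeps a permanent record tying a real
# identity to every site where the age check was performed.
class RetainingVerifier:
    def __init__(self):
        self.records = []  # a breach of this list exposes every user

    def verify(self, name: str, id_scan: bytes, site: str) -> bool:
        self.records.append({"name": name, "id_scan": id_scan, "site": site})
        return True

# Data-minimizing alternative: inspect the document, then discard it
# and hand back a random, single-use token carrying no identity.
class MinimizingVerifier:
    def verify(self, name: str, id_scan: bytes) -> str:
        # (a real system would actually validate the document here)
        del name, id_scan             # nothing identifying is stored
        return secrets.token_hex(16)  # 32 hex chars, unlinkable "over-18" pass

retaining = RetainingVerifier()
retaining.verify("Alice", b"<passport scan>", "example-site.test")
print(len(retaining.records))  # 1 stored record an attacker could steal

token = MinimizingVerifier().verify("Alice", b"<passport scan>")
print(len(token))              # 32: a token with nothing to link back
```

In the first design, a single database breach reveals who used which service; in the second, there is nothing for an attacker to steal, which is why data-minimization requirements matter more than promises of secure storage.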
One of the most troubling aspects of this trend is how it shifts responsibility from parents to the state. Modern parental controls are sophisticated, allowing monitoring and restriction through devices and software. The government’s push for mandatory age verification is less about parental failure and more about exploiting some parents’ inaction to justify surveillance. Instead of prioritizing education and digital skills, authorities are expanding their influence to dictate what content everyone may access. The state should not assume the role of public guardian; yet under the Online Safety Act, every citizen becomes a suspect required to prove their innocence before speaking or browsing online. What is presented as “child protection” is in effect a system enforcing widespread compliance.
Britain’s flawed model is already influencing other nations. France and Germany are advancing legislation with similar age verification and online safety measures, while the EU’s blueprint links adult content and “high-risk” platforms to interoperable digital IDs. Although the EU claims these systems protect privacy, they mirror the UK’s identity-check framework disguised as child safeguarding. The pattern is consistent: laws begin targeting minor protection from pornography but soon extend to suppressing demonstrations and political speech. Today’s targets are Gaza footage and sexual material; tomorrow might include journalism and dissent. The UK is not an exception but a prototype for digital authoritarianism marketed as safety.
Proponents argue the choice is clear: implement universal age verification or risk abandoning children to online dangers. This framing is misleading. No technology can replace attentive parenting or comprehensive digital education. Determined youths will still find ways into adult content, often migrating to the internet’s darker corners. Meanwhile, these laws scarcely address the true menace: child sexual abuse imagery distributed on encrypted or hidden networks beyond regulation’s reach. In truth, only well-regulated sites comply, and these are the very platforms the government now risks undermining. Forcing young people toward VPNs and unregulated spaces potentially heightens their exposure to harm. The result is not safety but increased vulnerability.
Removing the façade of child protection reveals the Online Safety Act’s real purpose: constructing an apparatus for widespread content control and mass surveillance. Once established, these systems are easy to expand. History offers clear parallels: anti-terrorism laws have evolved into tools to suppress dissent; now, “child safety” serves as the cover for similar overreach. The EU already contemplates mandatory chat monitoring and weakened encryption, promising use only against offenders—until that promise inevitably lapses. Early effects in the UK—restricted Gaza videos, threatened Wikipedia access, banned protest footage—are not glitches but a preview of a digital regime rooted in control. What’s at stake goes beyond privacy; it threatens democracy itself, including the freedoms to speak, learn, and protest without mandatory verification.
Protecting children on the internet does not require creating a surveillance infrastructure. Instead, it calls for improved education, accountability, and support for parents, teachers, and platforms alike. Governments should invest in digital literacy initiatives, pursue genuine cases of online exploitation, and equip parents with better tools to manage access. Platforms must uphold transparent standards and responsible algorithm design, not be compelled to police adults. Where self-regulation fails, targeted oversight can be effective, but universal identity verification is not the answer.
The UK’s Online Safety Act and comparable laws worldwide compel us to decide what digital future we want. We can accept false security achieved through surveillance and control or demand solutions that safeguard children without undermining privacy, freedom, and democratic ideals. The UK’s early experience should serve as a cautionary tale, not a blueprint. Before this authoritarian trend becomes entrenched, lawmakers and citizens must recognize that when governments claim to protect children by censoring information, they are often protecting another interest altogether: their own power to dictate what we are allowed to see, say, and know.
Original article: www.aljazeera.com
