Laptop (Photo: Glenn Carstens-Peters on Unsplash)
The UK Parliament is finally set to debate the online safety bill, which aims to safeguard children from harmful content.

Internet safety advocates expect the UK Parliament to debate the online safety bill on Tuesday, Jan. 17, before it moves to the House of Lords.

After months of changes, here is a breakdown of the bill's current structure, as summarized by The Guardian.

Child Safety

The bill mandates that platforms such as Facebook and TikTok, along with search engines, safeguard children from harmful content and activity, including child sexual abuse material. Firms must also ensure that legal content, such as material promoting self-harm, is age-appropriate for minors.

Age assurance, the technical term for online age verification, will be complicated for the leading platforms, since they must also enforce minimum age limits, typically 13 years for social media sites. The platforms worry that strict checks, such as requiring ID or facial scans, would deter users and reduce advertising revenue.

Campaigners argue that digital companies currently fail to verify users' ages or protect teenagers from harmful content. Under the reform, firms must describe their age assurance methods in their terms of service.

In November 2022, tech companies were required to submit risk assessments of the dangers their sites pose to children. The legislation requires platforms to evaluate the risks their services pose to minors and explain in their terms of service how they will mitigate them; the Office of Communications (Ofcom) will review those terms.

Companies will also have to publish child safety enforcement notices from the regulator. Under the statute, Ofcom may fine firms up to £18 million ($22 million) or 10% of global turnover, or block sites in extreme cases.

The most significant change is an amendment, backed by Conservative rebels, that makes tech bosses criminally liable for serious breaches of children's online safety. The government agreed to criminally sanction executives who persistently breach their duty of care to children.

Related Story: Social Media Execs Who Violate Child Safety Laws Might Be Jailed in the UK

Legal but Harmful Material

The bill no longer requires major platforms such as Instagram and YouTube to protect adult users from "legal but harmful" content, such as certain forms of racist or sexist abuse. That clause had been the focus of criticism that the bill was a "censor's charter."

Instead, Ofcom will enforce platforms' terms of service: if a platform promises to ban legal but harmful content, such as material promoting eating disorders, it must do so or face fines.

Users will be able to appeal content removals and account bans. The government says this will prevent firms from unilaterally deleting content or banning users and will ensure due process.

Tech companies must also allow adults to hide legal but disturbing content from their feeds. The government will designate content as abusive or hateful based on race, ethnicity, religion, disability, sex, gender reassignment, or sexual orientation.

New Crimes

In England and Wales, the bill criminalizes social media posts that encourage self-harm.

Pornographic "deepfakes," or digitally altered images and videos made to resemble a person, will likewise be illegal. The bill also addresses the taking and sharing of "downblousing" images, or photos shot down a woman's top. Cyberflashing will be banned, too.

A proposed harmful communications offense was dropped. It would have targeted people who send messages or posts intended to cause serious distress.

The measure requires all platforms to protect adult users from illegal content, including child sexual abuse material, revenge pornography, threats to kill, weapon sales, and terrorist propaganda. Tech platforms must proactively block such content.

See Also: UK Government Comes Under Fire For Supposedly Failing to Safeguard Children from Online Harm

Trisha Andrada

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.