Tech platforms could be required to prevent illegal content from going viral and to limit people's ability to send virtual gifts to, or record, a child's livestream, under additional online safety measures proposed by Ofcom.
The UK regulator published a consultation on Monday seeking views on further protections to keep citizens, particularly children, safer online.
These could also include making some larger platforms assess whether they need to proactively detect terrorist material under further online safety measures.
Oliver Griffiths, online safety group director at Ofcom, said its proposed measures seek to build on existing UK online safety rules while keeping up with "constantly evolving" risks.
"We're holding platforms to account and launching swift enforcement action where we have concerns," he said.
"But technology and harms are constantly evolving, and we're always looking at how we can make life safer online."
The consultation highlights three main areas in which Ofcom thinks more could be done:
- stopping illegal content going viral
- tackling harms at source
- giving further protections to children
The BBC has approached TikTok, livestreaming platform Twitch and Meta – which owns Instagram, Facebook and Threads – for comment.
Ofcom's proposals target a number of issues – from intimate image abuse to the danger of people witnessing physical harm in livestreams – and vary in the type or size of platform they would apply to.
For example, proposals that providers have a mechanism to let users report a livestream if its content "depicts the risk of imminent physical harm" would apply to all user-to-user sites that allow a single user to livestream to many, where there may be a risk of showing illegal activity.
Meanwhile, potential requirements for platforms to use proactive technology to detect content deemed harmful to children would only apply to the largest tech firms, which present higher risks of relevant harms.
"Additional measures are always welcome but they will not address the systemic weaknesses in the Online Safety Act," said Ian Russell, chair of the Molly Rose Foundation – an organisation set up in memory of his 14-year-old daughter Molly Russell, who took her own life after viewing thousands of images promoting suicide and self-harm.
He added that Ofcom showed a "lack of ambition" in its approach to regulation.
"As long as the focus is on sticking plasters, not comprehensive solutions, regulation will fail to keep up with current levels of harm and major new suicide and self-harm threats," Mr Russell said.
"It's time for the prime minister to intervene and introduce a strengthened Online Safety Act that can tackle preventable harm head on by fully compelling companies to identify and fix all the risks posed by their platforms."
The consultation is open until 20 October 2025, and Ofcom hopes to receive feedback from service providers, civil society, law enforcement and members of the public.
It comes as tech platforms look to bring their services in line with the UK's sweeping online safety rules, which Ofcom has been tasked with enforcing.
Some have already taken steps to try to clamp down on features that experts have warned may expose children to grooming, such as livestreaming.
In 2022, TikTok raised its minimum age for going live on the platform from 16 to 18, shortly after a BBC investigation found hundreds of accounts going live from Syrian refugee camps with children begging for donations.
YouTube recently said it would increase its threshold for users to livestream to 16, from 22 July.