
Can Ofcom force algorithms to be ‘safe’?

It’s not just a question of content

Jamie Bartlett
May 10, 2024


The Online Safety Act is starting to clunk into gear. This week Ofcom announced proposed guidelines to make social media platforms safer for children. Alongside better age checks, it wants algorithms to be ‘tweaked’ – maybe even ‘tamed’ – so kids don’t get recommended dangerous content.

According to the guidelines there are two basic categories of harmful content.

There is the quite bad stuff, which Ofcom defines as “violent, hateful or abusive material, online bullying, and content promoting dangerous challenges”. Ofcom wants that ‘suppressed’ from children’s feeds: not deleted entirely, but not recommended or pushed at them.

Then there is the really bad stuff: self-harm and suicide material. Ofcom wants that removed entirely. (In January 2024, Meta introduced safeguards that aim to restrict recommendations of self-harm and suicide content on accounts set up by teenagers on Facebook and Instagram.)

I’m sure you already guessed that ‘quite bad stuff’ and ‘really bad stuff’ are my terms, not theirs – but you get the idea.

The companies will be hesitant and nervous, and will no doubt talk about the technical difficulties this entails. After all, no single person at any tech platform today knows why users are shown the content they are. It is believed there are at least 10,000 ‘signals’ that Facebook’s algorithm takes into account when deciding what you see on the newsfeed: how long you lingered on a post, what someone in your network liked, how often you comment, whether it includes a photo – and 9,996 other things.
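For a sense of what ‘tweaking’ might mean in practice, here is a deliberately toy sketch in Python. The signal names, weights and category labels are all made up (and the ‘quite bad’ / ‘really bad’ labels are mine, as above) – nothing here resembles Meta’s actual systems. It simply shows the idea: combine a handful of signals into a score, then down-rank one category of content and drop another for child accounts.

```python
from dataclasses import dataclass

# Hypothetical weights: real feed rankers reportedly use thousands of signals,
# here we pretend there are just three.
SIGNAL_WEIGHTS = {"dwell_time": 0.5, "friend_liked": 0.3, "has_photo": 0.2}

@dataclass
class Post:
    post_id: str
    signals: dict        # e.g. {"dwell_time": 0.8, "friend_liked": 1.0}
    harm_category: str   # "none", "quite_bad" or "really_bad"

def rank_feed(posts, viewer_is_child=False):
    """Score posts from weighted signals, applying child-safety rules."""
    ranked = []
    for post in posts:
        score = sum(SIGNAL_WEIGHTS.get(name, 0) * value
                    for name, value in post.signals.items())
        if viewer_is_child:
            if post.harm_category == "really_bad":
                continue            # removed from the feed entirely
            if post.harm_category == "quite_bad":
                score *= 0.1        # suppressed: heavily down-ranked
        ranked.append((score, post))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in ranked]
```

Even in this cartoon version you can see where the arguments will start: someone still has to decide which posts get the ‘quite bad’ label in the first place, and how hard the down-ranking should squash them.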

Working out how to down-rank certain pieces of content will no doubt be harder than it sounds, take longer than you think, and involve quite a lot of mistakes. (A one-in-a-million mistake happens several times a day at X, Instagram, or TikTok. And those mistakes will often get shared with journalists, who are quite happy to report on them.) If these changes are made, I am very confident there will be several newspaper reports about how these platforms are still not safe. Even if the new measures are actually working quite well.

According to Arturo Bejar, child-focused algorithmic tweaking is not impossible. Arturo used to be a director of safety at Facebook and then Instagram – before becoming a whistleblower, after concluding Meta doesn’t do enough to keep kids safe online. Meta can direct incredibly personalised adverts at users, Arturo told me a while back. So there is no reason they can’t design algorithms not to target certain types of content. Difficult – yes. But not impossible. It’s just a question of resources and will.


Arturo Bejar

As usual, the main problems in changing social media content might not be technical, but rather trickier questions about definition and measurement.
