How to Survive the Internet

Making the most of 'generative AI'

Scepticism and asking good questions

Jamie Bartlett
Mar 09, 2025
A quiet revolution is happening. Increasingly, and at rapid speed, we don’t talk to humans to solve our problems. We talk to machines – and in particular to so-called ‘large language models’ like ChatGPT or Claude or DeepSeek. In the coming few years, knowing how to talk to generative AI, and how to understand what it says back to you, will become an indispensable skill. Maybe the indispensable skill.

Weird, isn’t it? Not enough people realise how much of a revolution this is. In the end, it will probably change the way we talk to each other.

Let me offer some statistics, to show just how quickly things are changing.

Ninety-two per cent of students now use AI to assist their studies.

Two-thirds of us use it at work.

In 2023, 37 per cent of teenagers used AI – by 2024 it had increased to 77 per cent.

Nearly everyone is at it – and in vital aspects of national life. Civil servants are using it to prepare ministerial briefings, traders are devising billion-dollar investment strategies, university students are co-writing exam essays. Teenagers are using it to get free therapy.

I was at a party a few weeks back, and everyone was asked in advance to write a short poem. I was one of the few who hadn’t used ChatGPT. Several friends were surprised I’d bothered using my own brain for the task.

All aspects of life are slowly being influenced by the outputs of complex and opaque software that only a small number of people really understand, and an even smaller number control.

Is this not a little… worrying?

There are a lot of reasons to be nervous about this. You’ve likely heard most of them already, but I’ll summarise a handful that I think are most obvious.

The possible further concentration of wealth and power into the hands of a small number of companies that own and run the models. People like Sam Altman, Elon Musk, or perhaps the Chinese Communist Party.

The intentional or accidental reinforcement of various types of bias – the ‘junk data in, junk data out’ argument. Many algorithmic diagnostic tools are trained on predominantly male data – which can create models that are not attuned to women’s health needs, and therefore underperform. It’s very possible we’ll see the same thing with generative AI, producing less useful, less accurate information about women’s health.

The potential for new forms of industrial scale online crime (although I think that is currently slightly overstated) and new forms of highly compelling propaganda.

The risks of relying on outputs which have been ‘data poisoned’ (adversaries slipping dodgy data into training systems); or are ‘hallucinations’ (when LLMs create outputs that look and sound credible, but are factually incorrect).

As many have noted – including respected AI sceptic Gary Marcus – GPT-4.5 is underperforming. (Or at least the rate of improvement is slowing, which is unacceptable in our line-must-go-up venture-capital culture.)

And there are several potential problems that aren’t about performance or accuracy, but about ethics and privacy. There are concerns over the use of copyrighted data, especially in the ‘creative’ sector. And while I’m sure ‘AI therapy bots’ could help some people, it’s also likely that some vulnerable people could unknowingly hand over highly personal data; and even the best LLM is no substitute for talking to a real person. (There’s a good summary of the problems and opportunities of AI therapy bots here.)

As I’ve said before, these new generative AI tools are remarkable things. It wasn’t very long ago that the Turing Test was viewed as the ‘ultimate’ test of an AI’s ability. But they’ve become so hyped that a lot of users (and maybe investors too) forget they are still fairly experimental, and far from perfect.

However, none of these problems will stop our new-found addiction to them, because they are really, really useful. We collectively traded privacy for convenience when social media exploded in the 2010s, and we will do the same again. Having used these tools a fair amount over the past few weeks, I believe the stats from a handful of academic papers which show that generative AI, used well, can dramatically increase productivity – and even make staff happier and more engaged. A couple of days ago a friend told me that work pitches that used to take him five days now take less than one, thanks to ChatGPT. In the space of just a few weeks, this tool has become indispensable to him.

Much as I’d like to go back to 1997 (or even 2018, frankly) I can’t imagine a world where vast numbers of people don’t rely on OpenAI or Grok or Claude or DeepSeek or Copilot to understand the world and help them make decisions. That being the case, it’s important for everyone to at least grasp the basics of how to talk to these generative AI models, to get the most out of what they offer, and to know how to use them safely. It pains me to say it, but I’m afraid you will have to communicate with a blasted machine, so you might as well get good at it.

So here is a short and very rudimentary list of things I think you should know if you’re just starting out.

© 2025 Jamie Bartlett