Everyone is terrified about how AI will transform cyber-crime. The consensus is it will make hacking and social engineering harder to trace, and easier to commit. Things are changing fast, and we need a radical overhaul in how we think about the problem.
But there are plenty of ways AI can be used to keep us safer too, and it’s vital that we all (especially governments) think creatively about ways to deploy this new tool to our advantage. So here is one idea.
The way to beat cyber-crime is to reduce the profit margin. Cyber-crime is a business, and we need to make it far less profitable. You do this by increasing the operational costs and reducing the strike rate.
As you will know by now, we are all struggling to tell the difference between machine- and human-generated content. Thanks to ‘generative AI’, like ChatGPT, machines are increasingly able to fool us into thinking we’re chatting to a fellow Homo sapiens. (Isn’t it amazing how suddenly the famous ‘Turing Test’ was passed, and no-one noticed?)
And if you can’t tell the difference, then neither can a criminal.
That made me think of James Veitch’s brilliant TED Talk (65 million views and counting) from 2015. Veitch decided to reply to a gold-import scam email – and then kept the scammer chatting for weeks.
James even managed to rope ‘Solomon’ into a detailed, long-winded exchange about shipments, graphs, profits, code-words, hummus. They almost became friends. And all the while Solomon was talking to James Veitch, he wasn’t trying to persuade an elderly man into investing his pension pot.
As good as James Veitch is, a machine could now do that job, and simply chat away for hours upon hours, stringing these email scammers along, giving them hope, providing fake bank details, acting confused. Anything to keep them talking.
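To make the idea concrete, here is a minimal sketch of a time-wasting auto-responder. It uses a handful of canned stalling replies rather than a real generative model (a deployed system would plug in an LLM here), and all the names and replies are invented for illustration – the point is simply that every scam message gets an engaged-sounding, non-committal answer that invites another round.

```python
import random

# Canned stalling replies. A real system would generate these with an
# LLM; the principle is the same: always respond, never commit, and
# keep the scammer typing.
STALLING_REPLIES = [
    "Sorry for the slow reply, my internet has been down. What was the next step?",
    "I spoke to my bank but they need more details. Could you explain it again?",
    "I'm very interested, but I think I mistyped the account number. Can you resend it?",
    "My nephew says this sounds too good to be true. Can you reassure me?",
]

def bait_reply(scammer_message: str, rng: random.Random) -> str:
    """Return a time-wasting reply to a scammer's message.

    Echoes a fragment of their own message back at them (to sound
    engaged) and appends a stalling question that invites another
    round of correspondence.
    """
    fragment = scammer_message.strip().split(".")[0][:60]
    stall = rng.choice(STALLING_REPLIES)
    return f'You mentioned "{fragment}" – {stall}'

if __name__ == "__main__":
    rng = random.Random(42)
    print(bait_reply("Transfer the fee today. The gold shipment is waiting.", rng))
```

The deliberate design choice is that the bot never refuses and never ends the thread: each reply contains a small obstacle (a typo, a sceptical nephew, a confused bank) that costs the scammer another message to resolve.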
It needn’t be just emails and text messages either – although that’s a decent start. Why not phone calls too? AI-generated audio is getting extremely good at a terrifying rate. Take romance scams, which Action Fraud says have cost Brits over £300 million since 2019. These are often long-winded, sophisticated telephone and social media frauds, which require a lot of careful manipulation of the victim. Have them woo a machine instead. The same goes for HMRC and bank impersonation scams – we’re all getting hit with fake cold calls from people trying to fool us into handing over personal details.
I’m not sure of the precise logistical arrangements required to match scammers with generative AI. (Maybe someone smarter than me can get in touch?) But I can imagine anyone who receives a scam text / call / email / DM being able to pass the fraudster’s details to a specialised unit within Action Fraud or the City of London Police, which would then set the machines on them and watch.
Imagine it! One powerful machine talking and messaging with thousands of hopeful scammers. All wasting their time – and therefore not speaking to potential victims. And all pushing up the cost of doing business.
Yes, it sounds like a bonkers idea – I know. And I’m the one who made it up. But nearly everyone in the UK has come across an online scam, even if they managed to dodge it. And nationally something like 10 per cent of UK adults admit to falling for one. Billions have been lost. I think the psychological cost is even greater.
And yet very little is ever done to proactively stop this epidemic. We’ve become accustomed to living with the constant threat of online scams, and it doesn’t have to be that way. Maybe the AI-timewaster-bot isn’t the answer. But collectively we need to get far more inventive in how we deal with this problem. Because on current trends, it’s only going to get worse.