Tom Cheesewright, Applied Futurist

The future of trust

25 years ago this year, I left Lancaster University with my freshly minted degree in Mechatronic Engineering and went to work in…PR. It may seem like a bit of a career left turn, but it was a pure tech PR company. And I’d come to realise that I was better at talking about tech than I was at actually building it. Anyone who has seen my code will tell you this remains true.

At the PR agency though, I was far and away the most technical person in the building. So all the really techie jobs came my way. This included working with a company called Exodus, the world’s largest provider of web hosting at the time, and a man named Bill Hancock, the company’s chief security officer.

Listen and learn

Back then, cybersecurity wasn’t a subject that many people outside the industry were very familiar with. My own experience was limited to ruining my dad’s computer with a virus-ridden set of pirated games on floppy disk. So I learned a lot listening to Bill and his colleagues as they were interviewed by the media. I learned that two out of three cyber attacks were based on social engineering. Not some Hollywood-style, finger-flashing hacker cracking codes and finding back doors into systems; instead, most hacks were about tricking people into giving up access to the things the hacker wanted*.

Today you can find all sorts of estimates for the percentage of attacks that are based on social engineering, from that two thirds up to 98%. And you can see why this might be. 

Finding the holes in trillions of lines of code, or weaknesses in hardware design, might be an incredible challenge. But it pales in comparison with trying to teach 5bn internet users caution and scepticism.

Over time, you’d expect people to learn to identify scams more easily. But that’s a slow process, and scams evolve. And there are lots of opposing forces making the job of securing us against cyber attack all the harder.

Who do we trust?

One issue is the loss of trust. Telling people what not to trust only works if they first trust the person doing the telling. And as has been widely reported, we are going through a bit of a crisis of trust. Today, 66% of all UK adults don’t trust the news media, according to the ONS. And trust in national governments fell 20% between 2020 and 2023, according to Eurofound.

At the same time, we are losing faith in brands. Or at least traditional brands. A brand that had been around for a long time used to carry some weight; we might have felt some loyalty to it. There used to be people who would only ever drive a Ford or a Vauxhall. Today we’re much more fickle.

With an accelerated rate of change in products and services, we’re increasingly chasing the best service, not the most venerable brand. And that means some of the older brands get a little left behind. The most trusted banking brands in the UK today? Not Barclays and HSBC but Starling and Monzo - two banks that don’t exactly have a spotless record in the sort of behaviours that might engender trust (see here and here), but that have grown in popularity by providing a quality of service people appreciate.

So, if there isn’t a foundation of trust in any of the obvious places, who is going to tell us what is real and what is a scam? And will we listen?

The Broker Boom

The second factor amplifying the security risk is the boom in intermediaries, something I’ve written about before. Simply put, in our attempts to navigate an increasingly diverse and complex world of products, services and media, we rely on a growing number of middle men, women, and robots to help us find what we want and need. This is true across just about every category I’ve looked at. There are more estate agents and recruitment consultants, comparison sites and brokers, influencers and gurus.

Because we are struggling to navigate this bewildering array of options, I believe that in our desperation we tend to trust a little more easily than we otherwise might.

Directing trust

The challenge then, as I see it, is directing trust. How do we help people know where to place their trust, when they don’t trust the obvious candidates who might direct them? And how do we do this in a context where there is a vast and rapidly changing array of people offering them advice and connections to the ‘best’ services, products, and media?

I think there are two answers. 

The first, and most boring one, is that we learn. Not fast. But generation by generation we embed the knowledge and instincts that help us to identify threats. A sort of cyberfraud literacy that makes us less susceptible to them.

This is of course riddled with issues. Threats evolve. Learning cannot keep up. But in time I suspect we will evolve to deal with a higher percentage of them, growing a sixth sense for the digitally dishonest.

The second, and perhaps more immediate, option is that we turn to technology to help us, as it has already been helping us for decades: assistive technologies that use shared information to identify risks on our behalf.

Spam filters are an enormous and largely unsung success story. If you use a platform like Gmail now, it’s hard to remember just how bad the spam problem was 20-25 years ago. The same technologies are already being applied to SMS. But imagine if they could be extended to cover all of the many communications channels we use now.
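To make that idea concrete, here is a minimal sketch of the kind of statistical filtering that sits behind spam detection: pool the messages that many users have already flagged, learn which words tend to appear in scams, and score each new message against that shared knowledge. The training data, words and probabilities below are invented for illustration; real filters are far more sophisticated, but the principle of turning shared reports into protection for everyone is the same.

```python
# A minimal, illustrative Bayesian-style spam filter. All example messages
# and reports are made up; real systems use far richer features and data.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam) pairs pooled from many users' reports."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    """Naive Bayes-style score with Laplace smoothing over the pooled reports."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_ham = (counts[False][word] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_spam) - math.log(p_ham)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical pooled reports: the more users flag similar messages,
# the better the filter gets for everyone.
reports = [
    ("claim your prize now", True),
    ("urgent verify your account", True),
    ("meeting moved to friday", False),
    ("lunch next week?", False),
]
counts, totals = train(reports)
print(spam_probability("verify your prize account now", counts, totals))  # close to 1.0
```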

AI Copilot

In the next few years it seems almost certain that most of us will have a personal AI. A copilot for life that helps us to navigate, choose, and complete tasks in our home and working lives. This obviously presents a new risk from a cybersecurity perspective, since these systems will have near-complete access to our lives, right down to having permission to spend money (with limits). But they also present an opportunity to take the principles of spam filters and apply them across all of our interactions. Imagine a platform-agnostic filter that can look for patterns of attack across interactions in both digital and physical space. That can hide the obvious attacks from your attention, and flag those that look risky for you to examine.
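As a rough illustration of what such a filter might look like, here is a hedged sketch of a cross-channel triage step. The channel names, risk signals, scores and thresholds are all assumptions of mine, not a description of any real copilot product; the point is the pattern: pool shared reports, score each interaction, hide the obvious attacks, and surface the borderline ones for a human to examine.

```python
# An illustrative sketch of a platform-agnostic "trust filter". Every signal,
# number and threshold here is a made-up assumption, not a real product API.
from dataclasses import dataclass

@dataclass
class Interaction:
    channel: str   # e.g. "email", "sms", "voice"
    sender: str
    content: str

# Hypothetical shared signals, analogous to the pooled data behind spam filters.
KNOWN_SCAM_PHRASES = {"act now", "verify your account", "gift cards"}
REPORTED_SENDERS = {"+44 7000 000000"}

def risk_score(msg: Interaction) -> float:
    """Combine a few simple signals into a rough 0-1 risk score."""
    score = 0.0
    if msg.sender in REPORTED_SENDERS:
        score += 0.6
    score += 0.2 * sum(phrase in msg.content.lower() for phrase in KNOWN_SCAM_PHRASES)
    if msg.channel == "voice":   # assume spoofing is easier on some channels
        score += 0.1
    return min(score, 1.0)

def triage(msg: Interaction) -> str:
    """Hide the obvious attacks, flag the borderline, deliver the rest."""
    score = risk_score(msg)
    if score >= 0.8:
        return "hide"
    if score >= 0.4:
        return "flag for review"
    return "deliver"

print(triage(Interaction("sms", "+44 7000 000000", "Verify your account now")))  # "hide"
```

The interesting design question in a sketch like this is where the thresholds sit: too aggressive and the copilot hides legitimate messages; too lenient and the scams get through anyway.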

Maybe these AIs can even help to train you to tell false from true. Researchers are already testing whether we can be trained to improve our performance in distinguishing bots from real people in text and audio communications. The combination of AI copilot and human intuition could be very powerful - more so than either on its own.

Long term optimist

Whenever people ask me about the future in general, I always tell them I am a long-term optimist and a short-term pessimist. And this very much captures the way I feel about the future of trust. It feels to me like the current collapse in trust in institutions, combined with the boom in the number of intermediaries, creates the perfect conditions for an explosion in fraud and socially engineered cyber attacks. But our track record in addressing these threats over the long term is pretty good, as the example of spam filtering shows.

We’ll get there, but there might be a lot of pain along the way.


##

*Any reader who has been in the cybersecurity industry for a long time may know there are some suggestions that Bill (RIP) was doing some ‘social engineering’ of his own…