Hacking by criminals and spies has reached ‘unprecedented complexity’, Microsoft says

A surge in hacking attempts by criminals, fraudsters and spy agencies has reached a level of “unprecedented complexity” that only artificial intelligence will be able to combat, according to Microsoft.

“Last year we tracked 30 billion phishing emails,” says Vasu Jakkal, vice president of security at the US-based tech giant.

“There’s no way any human can keep up with the volume.”

In response, the company is launching 11 AI cybersecurity “agents” tasked with identifying and sifting through suspicious emails, blocking hacking attempts and gathering intelligence on where attacks may originate.

With around 70% of the world’s computers running Windows software and many businesses relying on its cloud computing infrastructure, Microsoft has long been the prime target for hackers.

Unlike an AI assistant that might answer a user’s query or book a hair appointment, an AI agent is a computer program that autonomously interacts with its environment to carry out tasks without direct input from a user.
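As a rough illustration of the difference, the Python sketch below shows an agent-style loop: it observes its environment (here, an inbox), decides, and acts, with no user prompt in between. Every name in it – the Inbox class, the looks_suspicious heuristic – is hypothetical, standing in for whatever a real product would use.

```python
# A minimal, hypothetical sketch of an autonomous agent loop, as distinct
# from a request/response assistant: it observes its environment, decides,
# and acts without waiting for a user. Not Microsoft's API.
SUSPICIOUS_PHRASES = {"urgent", "verify your account", "password reset"}

class Inbox:
    """Toy stand-in for a mail system the agent can observe and act on."""
    def __init__(self, emails):
        self.pending = list(emails)
        self.quarantined, self.clean = [], []

def looks_suspicious(subject: str) -> bool:
    """Crude heuristic standing in for a real detection model."""
    return any(phrase in subject.lower() for phrase in SUSPICIOUS_PHRASES)

def agent_step(inbox: Inbox) -> None:
    """One observe-decide-act cycle; a real agent would run continuously."""
    while inbox.pending:
        email = inbox.pending.pop()
        target = inbox.quarantined if looks_suspicious(email) else inbox.clean
        target.append(email)

inbox = Inbox(["URGENT: verify your account", "Tuesday meeting minutes"])
agent_step(inbox)
print(inbox.quarantined)  # ['URGENT: verify your account']
```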

In recent years, there has been a boom in dark web marketplaces offering ready-made malware programs for carrying out phishing attacks, alongside the potential for AI to write new malware code and automate attacks. Together, these have fuelled what Ms Jakkal describes as a “gig economy” for cybercriminals worth $9.2trn (£7.1trn).

She says Microsoft has seen a five-fold increase in the number of organised hacking groups – whether state-backed or criminal.

“We are facing unprecedented complexity when it comes to the threat landscape,” says Ms Jakkal.

The AI agents – some created by Microsoft, others by external partners – will be incorporated into Copilot, Microsoft’s portfolio of AI tools, and will primarily serve customers’ IT and cybersecurity teams rather than individual Windows users.

Because an AI can spot patterns in data and screen inboxes for dodgy-looking emails far faster than a human IT manager, specialist cybersecurity firms and now Microsoft have been launching “agentic” AI models to keep increasingly vulnerable users safe online.
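For a sense of what that pattern-spotting involves, here is a hedged sketch using the open-source scikit-learn library (not Microsoft’s own tooling): a model trained on labelled emails can score new messages far faster than any human reviewer. The four training emails are a toy stand-in, not a real corpus.

```python
# Illustrative only: a tiny phishing classifier built with scikit-learn,
# showing the kind of statistical pattern-spotting an AI can apply to an
# inbox at machine speed. Toy data, not a real training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Urgent: verify your account or it will be suspended",
    "Your invoice for last month's cloud usage is attached",
    "Password reset required - click this link immediately",
    "Minutes from Tuesday's project meeting",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(emails)   # turn text into word weights
model = LogisticRegression().fit(features, labels)

incoming = ["Immediate action: verify your account details"]
score = model.predict_proba(vectoriser.transform(incoming))[0][1]
print(f"Phishing probability: {score:.2f}")
```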

But others in the field are deeply concerned about unleashing autonomous AI agents across a user’s computer or network.

In an interview with Sky News last month, Meredith Whittaker, CEO of messaging app Signal, said: “Whether you call it an agent, whether you call it a bot, whether you call it something else, it can only know what’s in the data it has access to, which means there is a hunger for your private data and there’s a real temptation to do privacy invading forms of AI.”

Microsoft says its release of multiple cybersecurity agents ensures each AI has a tightly defined role, allowing it access only to data that’s relevant to its task.
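One way to picture that scoping – as a sketch, not Microsoft’s actual design – is an agent object built with an explicit allow-list of the data sources it may read, so a request outside its role simply fails.

```python
# Hypothetical sketch of per-agent data scoping: each agent is constructed
# with an explicit allow-list and cannot read anything outside it. All
# names here are illustrative, not Microsoft's implementation.
class ScopedAgent:
    def __init__(self, name: str, allowed_sources: set[str]):
        self.name = name
        self.allowed_sources = allowed_sources

    def read(self, source: str, data_store: dict) -> list:
        if source not in self.allowed_sources:
            raise PermissionError(
                f"{self.name} is not permitted to read '{source}'"
            )
        return data_store.get(source, [])

data_store = {"mail_logs": ["..."], "hr_records": ["..."]}

phishing_triage = ScopedAgent("phishing-triage", {"mail_logs"})
phishing_triage.read("mail_logs", data_store)     # allowed: within its role
# phishing_triage.read("hr_records", data_store)  # raises PermissionError
```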

It also applies what it calls a “zero trust framework” to its AI tools, which requires the company to constantly assess whether agents are playing by the rules they were programmed with.
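Purely as an illustration of the zero-trust idea, rather than Microsoft’s implementation, continuous assessment might look like this in code: every action an agent proposes is checked against its programmed rules before it runs, denied by default, and recorded either way.

```python
# Hypothetical sketch of zero-trust style checking: no agent action is
# assumed legitimate; each one is re-verified against policy and audited.
# Agent and action names are illustrative.
ALLOWED_ACTIONS = {
    "phishing-triage": {"quarantine_email", "flag_sender"},
}

def execute(agent_name: str, action: str, audit_log: list) -> bool:
    permitted = action in ALLOWED_ACTIONS.get(agent_name, set())
    audit_log.append((agent_name, action, permitted))  # always record
    if not permitted:
        return False   # deny by default: the zero-trust posture
    # ... perform the permitted action here ...
    return True

log: list = []
print(execute("phishing-triage", "quarantine_email", log))      # True
print(execute("phishing-triage", "delete_user_account", log))   # False
```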

A roll-out of new AI cybersecurity software by a company as dominant as Microsoft will be closely watched.

Last July, a tiny error in the code of a software update from cybersecurity firm CrowdStrike instantly crashed around 8.5 million computers worldwide running Microsoft Windows, leaving users unable to restart their machines.

The incident – described as the largest outage in the history of computing – affected airports, hospitals, rail networks and thousands of businesses including Sky News – some of which took days to recover.