Artificial intelligence has become a double-edged sword in cybersecurity. On one hand, defenders are harnessing AI to spot threats faster and automate responses; on the other, attackers use the same technology to craft more convincing scams and automate attacks. Law enforcement warns that AI “provides augmented and enhanced capabilities” to schemes criminals already use, increasing their speed, scale and automation. Even major security vendors caution that traditional tools struggle with these new AI-driven threats. In short, AI is raising the stakes worldwide – and organisations must keep up on both sides of the cyber battlefield.
How Attackers Leverage AI
Cybercriminals are already exploiting AI in numerous ways, letting machines do much of the work once done by human hackers or con artists. Key tactics include:
- Hyper-personalised phishing: Large language models (LLMs) like ChatGPT can rapidly generate hundreds of tailored emails or messages that blend context and urgency. One report found phishing attacks “skyrocketed by 4,151%” after the release of ChatGPT in 2022, as attackers cheaply churned out convincing lures. AI tools can scan social media and corporate sites to craft messages that seem written by a colleague or boss, making it much harder for individuals to spot a scam.
- Deepfake voice and video fraud: Generative AI can mimic real people’s voices or faces with startling accuracy. Attackers have begun using AI-powered voice-cloning and video-synthesis to impersonate executives or officials. For example, in one 2024 attack hackers cloned a CEO’s voice in a WhatsApp scam, calling employees and pretending to urgently need their logins. In October 2024 Wiz – a US cloud-security firm – admitted dozens of its staff got a “voice message from me” that was an AI deepfake of the CEO, attempting to steal credentials. Even Ferrari was targeted: in July 2024 a scammer posed as Ferrari’s CEO on WhatsApp and then called an executive using a perfect AI-generated voice and accent. The executive only averted disaster by asking a personal question (what book the real CEO had last recommended), which the AI scammer could not answer. And earlier in 2024 a Hong Kong subsidiary of engineering firm Arup lost about HKD 200 million (~$25m) when staff were tricked by an entirely deepfaked video conference with their CFO. These cases show how AI deepfakes can supercharge traditional “CEO fraud” schemes with frightening realism.
- Malware and exploit automation: AI is also being tested for writing code and searching for vulnerabilities. In 2023 cybercriminals even created malicious chatbot tools like “WormGPT” and “FraudGPT” – versions of generative AI trained or prompt-hacked to generate malware, phishing scripts, and fraud campaigns without ethical filters. Security researchers warn that criminals can “scale up” malware development with AI just as developers have, using the same libraries to speed coding. AI models can scan huge codebases or network logs for weaknesses, enabling more efficient vulnerability discovery and automated attack planning.
- Other smart tools: Beyond phishing and malware, attackers use AI to optimise many mundane tasks – from scraping data for credentials to generating fake reviews or social-media posts for scams. For example, simple AI enhancements like translating phishing emails into local languages have already opened new markets for criminals. Even if AI misuse is outlawed, underground tools and “jailbroken” models proliferate, so experts say technology alone won’t stop the abuse; people and policies are needed too.
Together, these AI-driven tactics mean attacks can be far more convincing, customised and numerous than before – effectively changing “the economics of fraud,” as industry experts note.
Recent AI-Driven Attack Case Studies
Several high-profile incidents have already demonstrated AI’s impact on cybercrime:
- LastPass (Apr 2024): Password manager LastPass revealed that its employees were targeted in a sophisticated voice-phishing attempt. The attackers used AI to synthesise CEO Karim Toubba’s voice in a series of WhatsApp calls, texts and voicemails, demanding credentials. Fortunately the recipient recognised the scam because it came through an odd channel (WhatsApp) and displayed urgent, suspicious language. LastPass reported the incident openly, warning that AI voice-clones are “already being used in executive impersonation fraud campaigns”.
- Wiz (Oct 2024): Cloud-security startup Wiz disclosed at TechCrunch Disrupt that hackers had cloned its CEO’s voice to trick employees. “Dozens of my employees got a voice message from me,” Wiz co-founder Assaf Rappaport said. “The attack tried to get their credentials.” The deepfake was apparently trained on a recording of the CEO speaking at a conference, but luckily staff noticed the tone was off and blocked the attempt. This case shows that even security-savvy teams can be targeted with AI deepfakes.
- Ferrari (Jul 2024): A Ferrari executive shared how an impostor reached out pretending to be Ferrari’s CEO Benedetto Vigna. After messaging on WhatsApp about a fake “big acquisition,” the scammer called the exec with an AI-generated voice that perfectly matched Vigna’s accent. Luckily, the executive got suspicious and asked an unexpected personal question (about a book recommendation). The AI scammer couldn’t answer and hung up, heading off what could have been a very costly fraud.
- Arup (Feb 2024): International engineering firm Arup quietly confirmed it was the victim behind a major Hong Kong fraud. Attackers sent an email from a fake CFO requesting a “confidential transaction”, then held a video call featuring deepfaked voices and images of the CFO and colleagues. The victim wired HK$200 million (~$25.6m) to the criminals before realising the meeting was fake. KPMG’s Matthew Miller notes such AI scams are “not new” but can now be massively scaled and funded once they succeed. Arup’s loss is a stark example of AI amplifying a classic CEO/CFO fraud.
These examples, spanning the US, UK, Hong Kong and Europe, illustrate how generative AI is being abused in the wild. They underscore FBI warnings that AI can magnify familiar attack strategies, making them faster, cheaper and more believable.
AI-Augmented Defence
Fortunately, defenders are also weaponising AI to outpace and outsmart attackers. By analysing vast amounts of data and learning normal patterns, AI systems can spot anomalies and shut down attacks faster than human teams alone. Key defence approaches include:
- Anomaly detection and threat hunting: Security platforms increasingly embed machine learning to monitor networks and endpoints. AI can learn the “normal” behaviour of users and devices, then flag deviations (such as unusual login patterns, network traffic spikes or odd process activity) that may indicate a breach. This helps catch stealthy attacks or insider threats. For example, Darktrace and other next-generation security tools use AI to detect sophisticated phishing emails or malware payloads that would evade traditional filters. A simplified code sketch of this approach appears just after this list.
- Automated response and triage: When an AI system spots a threat, it can trigger immediate countermeasures. Many security products now offer automated playbooks: for example, isolating an infected machine, quarantining malicious emails, or rolling back unauthorised changes without waiting for human approval. This “AI-driven incident response” cuts reaction time from hours to minutes or seconds, often stopping breaches from spreading. In 2024 the US Cybersecurity and Infrastructure Security Agency (CISA) even ran a pilot using AI to automatically detect software vulnerabilities in government systems, foreshadowing more AI-based defence tools. A simplified playbook sketch appears at the end of this section.
- Threat intelligence and analytics: AI sifts through threat feeds, logs and past incidents to identify trends that humans might miss. It can correlate data from different sources (e.g. suspicious IPs, malware hashes, user reports) to provide insights like “this campaign is targeting organisations in your sector” or predict likely next targets. This helps security teams prioritise defences. Major vendors and cloud providers (e.g. Cisco, Microsoft, Google Cloud) now use AI to enrich threat databases and even to simulate attacks (so defenders can test systems). Cisco, for instance, has launched an “AI Defense” platform to secure AI applications, noting that “existing security solutions are not equipped to handle” the novel threats arising with AI. In other words, defenders are racing to build smarter tools too.
- Security guidelines and frameworks: Beyond technology, international bodies are providing guidance to ensure AI is developed securely. In November 2023, the UK’s National Cyber Security Centre (NCSC) and U.S. CISA co‑published guidelines for secure AI system development, endorsed by agencies in 18 countries. These guidelines stress “secure by design” principles throughout the AI lifecycle. Lindy Cameron, CEO of the NCSC, explains that these joint guidelines mark “a significant step in shaping a truly global… understanding of the cyber risks and mitigation strategies around AI”. In 2025 the UK went further by publishing a voluntary “AI Cyber Security Code of Practice”, aiming to set baseline protections for AI systems. Likewise, in the U.S. CISA released an AI Roadmap and appointed a Chief AI Officer to embed AI into its defensive arsenal. These policy moves help steer both public and private sectors to harden AI projects against attack and misuse.
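To make the anomaly-detection idea from the first bullet above more concrete, the sketch below trains an IsolationForest on synthetic “normal” login telemetry and flags events that deviate from that baseline. It is a minimal illustration, not how Darktrace or any other product actually works; the features (login hour, data volume, failed attempts) and the contamination threshold are assumptions chosen purely for the example.

```python
# Toy anomaly-detection sketch: learn a baseline of "normal" login
# behaviour, then flag events that deviate from it. Features and
# thresholds are illustrative only, not any vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical telemetry, one row per login event:
# [hour_of_day, megabytes_transferred, failed_attempts_before_success]
baseline_logins = np.column_stack([
    rng.normal(10, 2, 5000),   # logins cluster around office hours
    rng.normal(50, 15, 5000),  # typical session volume in MB
    rng.poisson(0.2, 5000),    # the occasional mistyped password
])

# Train the detector on baseline behaviour only
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_logins)

# Score new events: a routine login and a 3 a.m. bulk transfer
new_events = np.array([
    [11.0, 45.0, 0.0],   # looks like business as usual
    [3.0, 900.0, 6.0],   # off-hours, huge transfer, repeated failures
])
scores = detector.decision_function(new_events)  # lower = more anomalous
labels = detector.predict(new_events)            # -1 = anomaly, 1 = normal

for event, score, label in zip(new_events, scores, labels):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: event={event.tolist()} score={score:.3f}")
```

Production systems apply the same pattern to far richer feature sets and retrain continuously as “normal” behaviour drifts; the value is the baseline-and-deviation workflow, not these particular features.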
In short, defenders are now mixing machine learning with traditional best practices (patching, network segmentation, multi-factor authentication, etc.) to stay ahead. The hope is that AI will turn the tables: every new trick attackers invent could be countered with an AI detector or automated shield. As one security expert put it, using AI defensively is vital because “the stakes have never been higher”: it is no longer enough to rely on static firewalls or signatures alone.
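As a companion to the automated-response and threat-intelligence bullets above, here is a minimal Python sketch of the decision flow such a playbook might follow: correlate an alert with a threat feed, then isolate the host and quarantine the offending email without waiting for an analyst. Every connector function and threshold here is a hypothetical stand-in; real deployments wire these steps into whatever EDR, mail-gateway and intelligence APIs the organisation already runs.

```python
# Sketch of an automated incident-response playbook: enrich an alert
# against threat intelligence, contain the host, quarantine the lure.
# All connectors below are hypothetical placeholders, not real APIs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    host: str
    source_ip: str
    email_id: Optional[str]
    severity: str  # "low" | "medium" | "high"

def lookup_ip_reputation(ip: str) -> int:
    """Placeholder: query a threat-intelligence feed, return a 0-100 risk score."""
    return 85 if ip.startswith("203.0.113.") else 10  # dummy logic for the sketch

def isolate_host(host: str) -> None:
    print(f"[EDR] network-isolating {host}")          # placeholder connector

def quarantine_email(email_id: str) -> None:
    print(f"[Mail] quarantining message {email_id}")  # placeholder connector

def notify_analysts(alert: Alert, risk: int) -> None:
    print(f"[SOC] ticket opened for {alert.host} (risk score {risk})")

def run_playbook(alert: Alert) -> None:
    """Enrich, decide and contain in seconds rather than waiting on a human."""
    risk = lookup_ip_reputation(alert.source_ip)      # correlate with threat intel
    if alert.severity == "high" or risk >= 80:
        isolate_host(alert.host)                      # contain the endpoint first
        if alert.email_id:
            quarantine_email(alert.email_id)          # pull the phishing lure
    notify_analysts(alert, risk)                      # humans still review the ticket

if __name__ == "__main__":
    run_playbook(Alert(host="FIN-LAPTOP-07", source_ip="203.0.113.42",
                       email_id="msg-1138", severity="medium"))
```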
Global Perspectives and Policy Responses
Governments and regulators around the world are taking note of this AI arms race in cyberspace:
- United Kingdom: The UK has sounded the alarm on AI in cyber threats. In 2025 the NCSC warned that AI will let attackers find and exploit vulnerabilities much faster, creating a “digital divide” between organisations that can defend against AI‑driven attacks and those that cannot. To counter this, the UK updated its cyber strategy and published guidance for AI security. In January 2025 it released a Code of Practice for the Cyber Security of AI – a first-of-its-kind voluntary framework to secure AI systems throughout their lifecycle. The UK also co‑leads international efforts: NCSC worked with 21 other agencies (including the US FBI, Singapore’s CSA, and Japan’s NISC) to craft global AI security guidelines.
- United States: U.S. agencies have been proactive too. CISA’s 2024 year-in-review highlights AI as a priority: it completed AI risk assessments for critical infrastructure and even ran the federal government’s first AI cybersecurity tabletop exercise in June 2024. CISA is developing an “AI Cybersecurity Playbook” with industry partners. Meanwhile, U.S. law enforcement (FBI) and intelligence services regularly warn businesses to guard against AI-powered social engineering and deepfakes. In 2023 and 2024 the White House issued executive orders on AI, and NIST issued an AI Risk Management Framework – all stressing safe development and transparency so that AI itself does not become a new vulnerability.
- European Union: The EU has moved on AI with regulatory force. In mid-2024 it finalised the AI Act, the first comprehensive law governing AI, classifying applications by risk. While the Act isn’t focused solely on cybersecurity, it does impose strict requirements on high-risk AI systems (including those used for biometric ID, critical infrastructure, or law enforcement). The EU also enacted its Cyber Resilience Act (Dec 2024), which strengthens security requirements for connected devices and software – a move that will indirectly affect AI-driven products. EU officials have also proposed watermarks for AI-generated media to counter deepfakes. Policymakers in Europe broadly view AI as a strategic priority: for instance, the UK and Germany have funded national AI strategies that include cyber aspects, and EU agencies urge member states to adopt AI-aware defences (such as updating phishing filters and anomaly detection with AI).
- Asia and Other Regions: Several Asian countries are ramping up their own responses. Singapore’s Cyber Security Agency (CSA) reported in 2024 that its latest threat surveys began to see “AI-powered malware and deepfake-enabled scams” testing defences. Singapore has published guidelines for securing AI projects, and its leaders warn that AI scams can trick companies out of large sums. Japan, seeing growing cyber threats, even amended its security law in 2025 to allow more active cyberdefence measures (though these don’t explicitly name AI). In India, the government is exploring AI for national cyber defence and has proposed policies to counter AI-driven social media disinformation and attacks on infrastructure. China has also been quietly tightening regulations around AI and cybersecurity (e.g. drafting rules on generative AI), reflecting its priority on controlling technology.
In all regions, a common theme emerges: AI is too important to ignore. Experts worldwide urge companies and governments to act now. As Singapore’s Cybersecurity Commissioner David Koh put it in 2024: “AI-powered deepfakes and scams [are] trick[ing] companies and individuals out of large sums of hard-earned monies. We have to redouble our efforts…to work towards a future where everyone can live and work online in a trusted, resilient…and vibrant cyberspace.” And the UK’s Lindy Cameron adds that the multilateral AI security guidelines are a step to ensure “security is not a postscript to [AI] development but a core requirement throughout”.
Conclusion
AI is fundamentally reshaping cybersecurity. The technology amplifies old threats – from phishing to malware – but also empowers new defences like never before. It’s now an arms race between attackers and defenders: each breakthrough on one side spurs innovation on the other. What is clear is that complacency is not an option. As experts warn, organisations that fail to embrace AI-driven defences – or regulate AI’s use – risk falling behind. By combining sophisticated AI tools with robust security practices and international cooperation, we can hope to tilt the advantage back toward defence. After all, in this AI-powered era, the smartest weapon may just be an AI-trained eye on every threat.
Sources
FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence — FBI
https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
Cisco Unveils AI Defense to Secure the AI Transformation of Enterprises
https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2025/m01/cisco-unveils-ai-defense-to-secure-the-ai-transformation-of-enterprises.html
Top 10 Biggest Cyber Attacks of 2024 & 25 Other Attacks to Know About!
https://www.cm-alliance.com/cybersecurity-blog/top-10-biggest-cyber-attacks-of-2024-25-other-attacks-to-know-about
LastPass: Hackers targeted employee in failed deepfake CEO call
https://www.bleepingcomputer.com/news/security/lastpass-hackers-targeted-employee-in-failed-deepfake-ceo-call/
Wiz CEO says company was targeted with deepfake attack that used his voice | TechCrunch
https://techcrunch.com/2024/10/28/wiz-ceo-says-company-was-targeted-with-deepfake-attack-that-used-his-voice/
Ferrari Thwarted an AI Deepfake Scammer Posing as Its CEO With an Age-Old Trick
https://www.thedrive.com/news/ferrari-thwarted-an-ai-deepfake-scammer-posing-as-its-ceo-with-an-age-old-trick
Scammers siphon $25M from engineering firm Arup via AI deepfake ‘CFO’ | CFO Dive
https://www.cfodive.com/news/scammers-siphon-25m-engineering-firm-arup-deepfake-cfo-ai/716501/
Back to the Hype: An Update on How Cybercriminals Are Using GenAI | Trend Micro (US)
https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/back-to-the-hype-an-update-on-how-cybercriminals-are-using-genai
2025 Cyber Threats: A Mid‑Year Review
https://www.darktrace.com/
CISA’s 2024 Year in Review document details cyber defense, infrastructure protection milestones – Industrial Cyber
https://industrialcyber.co/cisa/cisas-2024-year-in-review-document-details-cyber-defense-infrastructure-protection-milestones/
New AI Security Guidelines Published by NCSC, CISA & 21 International Agencies
https://www.techrepublic.com/article/new-ai-security-guidelines/
Code of Practice for the Cyber Security of AI – GOV.UK
https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai
Black Hat 2024 and the Rise of AI-Driven Cyber Defense | Blog | MixMode
https://www.mixmode.ai/blog/black-hat-2024-and-the-rise-of-ai-driven-cyber-defense
NCSC assesses impact of AI on cyber threat to UK until 2027 – Global Relay Intelligence & Practice
https://www.grip.globalrelay.com/impact-of-ai-on-cyber-threat-to-uk-from-now-to-2027-an-ncsc-assessment/
A Decade of Strengthening Singapore’s Cyber Defence Amid Escalating Threats | Cyber Security Agency of Singapore
https://www.csa.gov.sg/news-events/press-releases/a-decade-of-strengthening-singapore-s-cyber-defence-amid-escalating-threats/

