AI’s role in hacking

Smarter attacks, smarter defences

Cyber security is shifting fast due to artificial intelligence. AI has become both the lockpick and the locksmith. Security practitioners spot vulnerabilities faster than ever while attackers find new ways to build malicious and damaging tools.

Ethical hacking is still one of the most effective ways to stay ahead, but the rise of AI means we’re entering a new era of hacking where the rules of the game are changing.

Ethical hacking in the age of AI

Ethical hacking, authorised attempts to break into systems before cyber criminals do, remains a cornerstone of modern security. ‘White-hat’ or ethical hackers use everything from reconnaissance to vulnerability exploitation to uncover weak spots that could otherwise be exploited.

AI is giving these efforts a huge boost. Machine learning models can comb through massive codebases, logs, and network traffic far faster than humans, surfacing anomalies or likely exploits in seconds.

Use-case example: Google’s Project Naptime (2024) showed that AI agents could automatically analyse huge volumes of code and flag potential vulnerabilities. In practice, this means faster bug bounty triage, shorter patch cycles, and a much better chance of catching issues before they’re weaponised. [1]

When AI works against us

The flip side is that criminals are using the same capabilities to push their attacks further.

  • Polymorphic malware: WormGPT-style jailbreak kits are being sold on dark web forums that tap into mainstream AI APIs like Mixtral and Grok. These generate malware that mutates every time it runs, dodging traditional detection and making life much harder for incident responders.[2]
  • AI for exploit discovery: What defenders use for bug hunting can also be turned around. Attackers are experimenting with AI agents that scan open-source code, cloud services, or IoT firmware for exploitable flaws - essentially automating the work of a skilled researcher.[1]
  • Prompt injection and data poisoning: As companies roll out AI copilots and agents, these systems themselves have become targets. In 2025, Microsoft and NIST flagged real-world cases where malicious websites buried hidden prompts in code or metadata. When an AI agent ingested that content, it was tricked into leaking data or performing unauthorised actions.[3][4]
  • Ransomware with AI assistance: Security researchers are also tracking early ransomware prototypes built with help from AI. These can generate new encryption routines or confusing layers on the fly - shortening the development cycle for malware authors.[5]

The impact of AI hacking on companies and society

The dual-use nature of AI is forcing tough questions. If the same AI that helps patch a system can also write polymorphic malware, where does the line get drawn? Regulators and industry bodies are starting to weigh in:

  • NIST AI 100-2 (2025) lays out a taxonomy of adversarial AI threats and recommended mitigations.[4]
  • Microsoft (2025) has issued defensive patterns for handling prompt injection in enterprise AI.[3]
     Europol’s SOCTA 2025 highlights AI as a defining factor in how organised crime is evolving.[6]

The message is clear: AI isn’t a ‘future’ risk. It’s here now, on both sides of the fight. Here are some lessons from the field that showcase today’s relevance:

  • For defenders: AI-assisted vulnerability scanning can reduce remediation times by up to 40%- a huge advantage when every day counts.[1]
  • For attackers: Jailbroken LLMs packaged as WormGPT variants show how easy it has become to generate evasive malware at scale.[2]
  • For enterprises: Prompt injection incidents remind us that even ‘trusted’ AI deployments can introduce new attack surfaces if not secured from the start.[3]

The future of ethical hacking

To keep up with the emerging technologies in ethical and malicious hacking, organisations should consider the following elements:

  • Leverage autonomous ‘red teams’: Integrate AI-driven so-called red team tools that simulates attacks into your technology stack to continuously test your defences. Consider partnerships that offer managed services, or develop internal capabilities.
  • Combat adaptive malware: Be prepared for more sophisticated malware that evolves quickly. Invest in AI powered threat intelligence solutions to streamline your incident response.
  • Prioritise AI secure by design: When developing in implementing AI systems, apply secure by design practices, and conduct regular and thorough audits.

How BDO can help

Your organisation can’t afford to sit back. Staying ahead requires you to blend AI-driven tools with experienced human expertise.

BDO helps you to do exactly that through:

  • AI-augmented penetration testing,
  • Strategic guidance on safe AI adoption,
  • Red-team exercises that simulate AI-enabled threats.

By combining technical rigor with AI-enabled insights, we help organisations prepare for what’s already here - and what’s coming next.

Conclusion

AI isn’t inherently good or bad, it’s an amplifier. In cyber security, it can dramatically improve how we discover and fix vulnerabilities, but it also accelerates how attackers build and deploy exploits. 

The organisations that succeed will be those that adopt AI responsibly, harden their systems against new forms of attack, and continually adapt to an evolving threat landscape.

Questions? Contact our experts

Francis Oostvogels

Francis Oostvogels

Senior Manager
View bio

References

 [1]: Google Project Zero – Project Naptime: AI-assisted vulnerability triage (2024).

 [2]: Dark Web monitoring reports on WormGPT variants exploiting Mixtral/Grok APIs (2025).

 [3]: Microsoft Security Blog – Defending against indirect prompt injection in AI agents (2025).

 [4]: NIST AI 100-2 (2025) – Adversarial Machine Learning: Taxonomy and Mitigations.

 [5]: Security research reports on AI-assisted ransomware prototypes (2025).

 [6]: Europol SOCTA 2025 – Serious and Organised Crime Threat Assessment.