Ethical hacking in the age of AI
Ethical hacking, authorised attempts to break into systems before cyber criminals do, remains a cornerstone of modern security. These ‘white-hat’ hackers use techniques ranging from reconnaissance to vulnerability exploitation to uncover weak spots before attackers can abuse them.
AI is giving these efforts a huge boost. Machine learning models can comb through massive codebases, logs, and network traffic far faster than humans, surfacing anomalies or likely exploits in seconds.
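The anomaly-surfacing idea can be illustrated with a minimal sketch: a statistical detector that flags time windows whose event counts deviate sharply from the baseline. This is a deliberately simple stand-in for the machine-learning models described above; the data and threshold are hypothetical.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Return indices of per-minute event counts whose z-score
    exceeds the threshold (a hypothetical cut-off, tuned per dataset)."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical per-minute login-failure counts; minute 6 spikes sharply.
logins = [4, 5, 3, 6, 4, 5, 90, 4, 6, 5]
print(flag_anomalies(logins))  # → [6]
```

Production systems replace the z-score with learned models and richer features, but the principle is the same: establish a baseline, then surface what deviates from it faster than a human analyst could.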
Use-case example: Google’s Project Naptime (2024) showed that AI agents could automatically analyse huge volumes of code and flag potential vulnerabilities. In practice, this means faster bug bounty triage, shorter patch cycles, and a much better chance of catching issues before they’re weaponised. [1]
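As a toy illustration of automated vulnerability flagging, the sketch below scans C source for classically risky calls. Real systems such as Project Naptime use LLM-driven reasoning rather than pattern matching; the patterns and snippet here are hypothetical examples chosen for clarity.

```python
import re

# Hypothetical patterns for well-known risky C library calls.
RISKY_CALLS = {
    r"\bstrcpy\s*\(": "unbounded copy (consider strncpy/strlcpy)",
    r"\bgets\s*\(": "gets() is always unsafe (use fgets)",
    r"\bsprintf\s*\(": "unbounded format (consider snprintf)",
}

def flag_risky_lines(source):
    """Return (line_number, note) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, note in RISKY_CALLS.items():
            if re.search(pattern, line):
                findings.append((lineno, note))
    return findings

snippet = """char buf[16];
gets(buf);
strcpy(buf, user_input);"""
for lineno, note in flag_risky_lines(snippet):
    print(f"line {lineno}: {note}")
```

Even this naive scan shows why automation shortens triage: it covers an entire codebase in seconds and hands humans a prioritised list instead of a haystack.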
When AI works against us
The flip side is that criminals are using the same capabilities to push their attacks further.
The impact of AI hacking on companies and society
The dual-use nature of AI is forcing tough questions. If the same AI that helps patch a system can also write polymorphic malware, where does the line get drawn? Regulators and industry bodies are starting to weigh in:
The message is clear: AI isn’t a ‘future’ risk. It’s here now, on both sides of the fight. Here are some lessons from the field that show just how current the threat is:
The future of ethical hacking
To keep up with the emerging technologies in ethical and malicious hacking, organisations should consider the following elements:
How BDO can help
Your organisation can’t afford to sit back. Staying ahead requires you to blend AI-driven tools with experienced human expertise.
BDO helps you to do exactly that through:
By combining technical rigour with AI-enabled insights, we help organisations prepare for what’s already here, and for what’s coming next.
Conclusion
AI isn’t inherently good or bad; it’s an amplifier. In cyber security, it can dramatically improve how we discover and fix vulnerabilities, but it also accelerates how attackers build and deploy exploits.
The organisations that succeed will be those that adopt AI responsibly, harden their systems against new forms of attack, and continually adapt to an evolving threat landscape.
Questions? Contact our experts
References
[1]: Google Project Zero – Project Naptime: AI-assisted vulnerability triage (2024).
[2]: Dark Web monitoring reports on WormGPT variants exploiting Mixtral/Grok APIs (2025).
[3]: Microsoft Security Blog – Defending against indirect prompt injection in AI agents (2025).
[4]: NIST AI 100-2 (2025) – Adversarial Machine Learning: Taxonomy and Mitigations.
[5]: Security research reports on AI-assisted ransomware prototypes (2025).
[6]: Europol SOCTA 2025 – Serious and Organised Crime Threat Assessment.