AI-driven Deception

The next evolution of social engineering

As organisations adopt Artificial Intelligence (AI) to increase efficiency and create new ways of working, the technology also introduces a wide range of threats. Many of these are technological, but it is equally crucial to understand the behavioural threats that AI introduces.

Social engineering is a form of psychological manipulation used to make people perform tasks or divulge confidential information without realising it. When people think of cyber attacks, they often picture blue screens of death, ransom demands, and complex code. In practice, industry studies consistently find that the large majority of data breaches involve human error or manipulation rather than purely technical vulnerabilities. Humans are fallible, and with the evolution and low barrier to entry of generative AI, threat actors are exploiting this more than ever.

Traditional vs AI-enhanced social engineering

The traditional methods of social engineering we have all likely experienced over the years rely heavily on direct human interaction. Key elements include:

👤
Human interaction
Utilising phone calls (vishing) and emails (phishing) to impersonate trusted individuals.
🧠
Basic psychological tactics
Generating fear, uncertainty, and urgency to prompt immediate action from users.
📢
Limited targeting
Often using mass outreach or ‘spray-and-pray’ tactics, hoping a small percentage of victims will respond.

AI-enhanced social engineering tactics build on these elements and leverage advanced generative technologies to increase the plausibility and sophistication of attacks. Recent advancements include:   

📈
Data-driven targeting
AI algorithms can analyse vast amounts of information from publicly available sources such as social media to create detailed profiles of individuals, generate personalised narratives, and target high-profile individuals.
🤖
Automation and scalability
Automation methods in conjunction with generative AI tools allow threat actors to run heavily personalised social engineering campaigns at scale.
🔮
Advanced technological imitation
Deepfake technologies and AI-driven tools, such as realistic voice simulation (ElevenLabs) and video generation (Veo3), can closely mimic trusted individuals.

Advanced social engineering in practice – CFO impersonation 

In 2024, the Hong Kong office of a multinational company fell victim to an advanced social engineering attack in which a threat actor used deepfake technology to impersonate the Chief Financial Officer during a video call. By leveraging publicly available videos and images, the attacker convincingly mimicked the CFO’s face, voice, and expressions, leading an employee to authorise over 15 transfers. By combining urgency and fear with emerging technologies, the attackers defrauded the company of over 25 million US dollars.

Sadly, such attacks will only become more common and sophisticated with the rise of AI-generated video. In May 2025, Google DeepMind released Veo3, an AI video tool that significantly lowers the barriers to executing advanced social engineering schemes.

Preventative and detective actions – What can you do?

As this technology advances rapidly, it is critical that we understand how it operates and where its limitations lie. Because social engineering inherently targets human psychology, awareness has never been more important.

Preventative:

  1. Be mindful of what you post
     AI algorithms scrape public information to generate convincing and realistic counterfeits.
  2. Think before you click
     Check every URL carefully before clicking or downloading anything, regardless of the source; hover over links first to reveal their true destination.
  3. Verify the contact
     Never assume that an email or message from someone is legitimate, especially if it appears urgent and relates to sensitive information. Verify the identity of the individual through a trusted method, like a known phone number.
  4. Investigate current tooling
     Adopt preventative AI tooling, such as AI-enhanced email scanning and phishing simulation campaigns, to add technical layers of defence.
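To illustrate why hovering over a link matters, here is a minimal Python sketch (the domain names are made up for illustration, and the two-label check is deliberately naive) showing how the registrable part of a hostname can expose a lookalike link:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organisation trusts.
TRUSTED_DOMAINS = {"bdo.com"}

def flag_suspicious(url: str) -> bool:
    """Return True when the link's real hostname is not a trusted domain."""
    host = urlparse(url).hostname or ""
    # Reduce the hostname to its last two labels (naive registrable-domain check).
    registrable = ".".join(host.split(".")[-2:])
    return registrable not in TRUSTED_DOMAINS

# A genuine link resolves to the trusted domain.
print(flag_suspicious("https://bdo.com/invoices"))        # False
# A lookalike buries the familiar name inside an attacker-controlled domain.
print(flag_suspicious("https://bdo.com.login-check.io"))  # True
```

The second URL starts with the familiar "bdo.com", which is exactly what a quick glance catches; only the rightmost labels of the hostname determine who actually controls the site, which is why careful inspection (or tooling that performs it) beats a visual skim.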


Detective: 

  1. Most AI-generated videos are short
     Creating realistic videos with AI still takes a lot of computing power (and money), so scammers usually keep them under a minute. If the video feels overly polished but ends quickly, it might be synthetic.
  2. You won't see the same video from different angles
     Unlike real footage, AI-generated videos are typically made from a single viewpoint. If someone can’t show themselves from another angle, or if the scene seems to “reset” when the camera shifts, that’s a red flag.
  3. Watch for strange visual details
     AI often struggles with small things like hands, jewellery, blinking, or clothing textures. Look out for flickering earrings, oddly shaped fingers, unnatural lighting, or parts of the image that seem to subtly change from frame to frame.

Note: this advice is current as of October 2025. AI-generated video is advancing rapidly, so if you're reading this later, be sure to check for updated detection methods.

How BDO can help

  • Implementation support: Our expert team specialises in guiding organisations through the implementation of generative AI tooling, strategic development, and best practice guidance while maximising the benefits of AI technologies. 
  • Enhanced awareness and practical training: Stay updated on emerging AI social engineering techniques with BDO's tailored workshops and training. Our practical exercises and simulations deepen your understanding of how to spot social engineering attacks in this new era.
  • Phishing services: Protect your business from potential threats with our advanced phishing detection and prevention services, designed to safeguard your data and enhance your cyber defence.
Don't let the new age of social engineering catch you off guard. Partner with BDO to take a proactive, well-prepared approach to your cyber defence.
Contact us today to learn how we can support you in this critical journey.