
The Human Firewall: The real threat behind AI and social engineering

June 30, 2025
By Phil Lees

Reflections on Tom Johnson’s session at the Cisco Security Roadshow, Leeds 

We spend millions on firewalls, intrusion prevention, endpoint protection, XDR platforms, and zero-trust architectures, yet the most dangerous vulnerability in any cybersecurity strategy still isn’t code. It’s us.

At the recent Cisco Security Roadshow in Leeds, ethical hacker Tom Johnson provided an insight into the front lines of modern cyber warfare. His tales from the trenches were engaging and humorous, but deeply unsettling. The clearest message, and something of an eye-opener, was his focus on social engineering and its explosive convergence with AI and deepfake technology.

Before we highlight the session takeaways, it’s worth considering some of the real-world incidents already shaking organisations.

When AI turns malicious

Tom Johnson’s insights aren’t hypothetical: they reflect a very real and rapidly evolving threat landscape. Artificial intelligence is no longer confined to security tools or research labs; it’s now squarely in the hands of attackers. And they’re getting creative.

In early 2024, a multinational firm in Hong Kong was defrauded of $25.6 million after staff were tricked by a deepfake video call impersonating their CFO. They thought they were attending a real-time virtual meeting, but what they experienced was an AI-generated illusion, complete with a cloned voice and facial movements.

Meanwhile, researchers have shown that generative AI now outperforms even elite red teams at crafting persuasive phishing emails. In one study, click-through rates for AI-generated spear phishing were as high as 56%, compared to just 12% for standard phishing attempts.

This isn’t just about better grammar in scam emails. AI tools scrape LinkedIn profiles, harvest breached credentials from the dark web, mimic vocal tones, and generate entire chat conversations that appear eerily legitimate.

AI doesn’t just strike at the point of attack – it powers the entire kill chain. A recent Springer article reveals how artificial intelligence enhances every phase of a cyber attack, from initial reconnaissance to final exploitation. Attackers use AI to automate the harvesting of contextual data such as job titles, reporting lines, or recent company news, and then generate phishing messages or chatbot conversations that feel deeply personal and credible. These messages aren’t riddled with the usual red flags. They’re timely, relevant, and frighteningly well-informed, making them exponentially harder to detect and resist.

Trust is the exploit

Tom pointed out that the essence of social engineering is brutally simple: “manipulate the human.” Phishing, vishing, smishing, whaling, and spear phishing don’t exploit code; they exploit trust, urgency, and ignorance.

His live demos showed that even advanced technical controls can be undone by one bad decision. Whether he was impersonating a sewage worker to plant a rogue access point or crafting low-effort phishing emails that prey on distracted staff, the weak link was always the same: a human click, a rushed conversation, or an unchallenged assumption.

AI: social engineering on steroids

AI is not just helping defenders; it is democratising sophisticated attacks:

  • Generative AI now crafts hyper-personalised phishing emails, drawing from LinkedIn data, breach records, and company news.
  • Deepfakes have moved beyond Hollywood. As seen in Hong Kong, synthetic executives can now request, and receive, millions.
  • Voice clones, AI chatbots, and lifelike avatars allow attackers to bypass traditional identity verification with unsettling ease.

Countermeasures: technical and human

Cisco’s portfolio (Cisco XDR, Umbrella, Secure Endpoint, Secure Access) can automate detection, integrate intelligence, and shrink attack surfaces, yet even these robust defences collapse if one employee slips up.

Tom made it very clear that we are only as strong as the least security-aware person in our organisation.

Key safeguards in the age of AI

  • Mandatory multi-party authorisation for high-value transactions (see the sketch after this list)
  • Regular deepfake awareness sessions and simulated phishing drills, not just the basic cyber awareness training
  • Clear “pause and verify” procedures when something feels off
  • Cultural reinforcement that vigilance is celebrated, never criticised
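
To make the first safeguard concrete, here is a minimal sketch of what multi-party authorisation could look like in practice. This is a hypothetical illustration, not any vendor’s implementation: the PaymentRequest type, the high-value threshold, and the two-approver rule are all assumptions chosen for the example.

```python
# Minimal sketch of multi-party authorisation for high-value transfers.
# Hypothetical example: the types, thresholds, and names below are
# illustrative assumptions, not any specific product's API.

from dataclasses import dataclass, field

HIGH_VALUE_LIMIT = 10_000      # transfers above this need extra sign-off
REQUIRED_APPROVERS = 2         # distinct humans who must each approve

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # No one can approve their own request.
        if approver == self.requester:
            raise ValueError("Requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        if self.amount <= HIGH_VALUE_LIMIT:
            return True
        # High-value: require multiple distinct approvers beyond the requester.
        return len(self.approvals) >= REQUIRED_APPROVERS

# A deepfaked "CFO" on a video call can pressure one person,
# but cannot supply the second, independent approval.
request = PaymentRequest(requester="alice", amount=200_000)
request.approve("bob")
print(request.can_execute())   # False: still needs a second approver
request.approve("carol")
print(request.can_execute())   # True: two independent sign-offs
```

The design principle is simple: a deepfake only needs to fool one person, so no single individual, however senior they sound on a call, should be able to move large sums alone.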

The soft, squishy weak link

Quantum computing, AI arms races, and generative media will keep advancing, but one variable won’t change: we are human, emotional, distracted, and fallible. Attackers know it. They no longer need to brute-force passwords if they can convincingly sound like your boss, reference yesterday’s meeting, and request a £200k transfer.

Tom’s talk wasn’t doom and gloom; it was a call to rethink how we prepare. AI will continue to raise the stakes, but with the right combination of security technology, cultural awareness, and continuous education, organisations can keep pace.

Don’t just ask if your tech stack is ready. Ask if your people are.

If you’d like help reviewing your security posture, technical or human, get in touch with our team to start building a stronger, safer culture today.