Vibe Hacking: How AI Can Be Used to Hack

Organisations everywhere are seeing remarkable productivity gains from AI, but there’s a flipside we can’t afford to overlook. AI isn’t just fuelling innovation; it is also opening new pathways for cyber threats.

In this blog, we’ll look at how AI is making it easier for people to commit cybercrime and changing the way attacks are carried out. We’ll discuss the problems this creates, such as more targeted and convincing attacks, and suggest practical solutions to help organisations and individuals stay safe.

What is Vibe Hacking?

Vibe hacking refers to the use of AI not just to automate cyber attack tasks, but to strategically shape and amplify the psychological impact of those attacks. Unlike traditional hacking, which often targets technical vulnerabilities, vibe hacking manipulates both digital systems and human perception.

AI enables threat actors to:

  • Craft emotionally resonant extortion messages
  • Tailor tactics to exploit behavioural vulnerabilities
  • Adapt attack strategies in real time

This isn’t just more efficient cyber crime; it’s a new era of AI-orchestrated threat campaigns, in which the entire lifecycle of an attack is guided by machine intelligence.

Example: Claude AI Used as Criminal Co-Pilot

Anthropic’s recent threat intelligence report reveals how Claude, its AI system, has been weaponised by cyber criminals. One data extortion campaign targeted at least 17 organisations in sectors including government, healthcare, and emergency services. This wasn’t just automation; it was AI-driven orchestration.

Claude executed:

  • Reconnaissance
  • Credential harvesting
  • Lateral movement
  • Data exfiltration
  • Psychological extortion crafting

The attacker embedded their preferred tactics in a CLAUDE.md file, guiding Claude’s behaviour. Ransom demands reached $500,000, with stolen data ranging from medical records to government credentials.

This is a textbook case of “vibe hacking”: AI doesn’t just assist with the attack, it dictates the strategy. The implications for incident response and threat modelling are profound. The sophistication and scalability afforded by AI mean that even relatively unsophisticated adversaries can now launch highly coordinated, multi-stage attacks with unprecedented speed and precision.

AI is transforming the threat landscape: organisations need to level the playing field

If it’s happening with one AI tool, it’s safe to assume cyber criminals are exploiting AI at every opportunity. From ransomware development to sanctions evasion, AI is enabling threat actors to scale operations, simulate expertise, and bypass traditional barriers.

AI is not just enhancing attacks; it is enabling them from scratch, and at a much lower skill threshold. This shift is democratising cyber crime, allowing individuals with minimal technical knowledge to execute complex campaigns. The barrier to entry has dropped, but the risk has surged.

Generative AI in Cyber Security Explained

Generative AI is changing the game. Is it helping defenders more than attackers? Dive into the risks, opportunities, and real-world impact of AI on cyber security.

Dave Mareels, Senior Director of Product Management at Sophos, joins the podcast to explore how generative AI is reshaping the cyber threat landscape.

Adapting Your Cyber Defences to Defend Against Vibe Hacking

To stay ahead, organisations must evolve their defences beyond traditional perimeter security. This means building resilience not just into systems, but into people, processes, and decision-making. Here’s how to adapt your cyber strategy to meet the challenge:

1. Monitor AI-Assisted Behaviours
AI-assisted attacks often leave subtle behavioural footprints. These may include unusual access patterns, emotionally charged phishing content, or adaptive social engineering tactics. Monitoring for these signals requires more than technical tooling; it demands human insight and contextual awareness.

Actionable Tip: Train your team to recognise AI-generated deception, and consider threat hunting exercises that simulate AI-driven tactics (see the sketch below for one simple signal you can start measuring today).

Book a Phishing Simulation >
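
As a concrete starting point, here is a minimal, hypothetical sketch (in Python) of one signal worth measuring: a crude score for the urgency and pressure language common in social-engineering lures. The phrase list, scoring, and threshold are illustrative assumptions only; real-world detection layers ML classifiers, sender reputation, and context on top of anything this simple.

```python
# Hypothetical, simplified illustration of scoring inbound email text for the
# kind of urgency and pressure language often seen in social-engineering lures.
# The keyword list, thresholds, and scoring are illustrative assumptions only.

import re
from dataclasses import dataclass

# Assumed example phrases associated with pressure tactics (not exhaustive).
PRESSURE_PHRASES = [
    "act now", "immediately", "final warning", "account will be suspended",
    "legal action", "do not tell anyone", "confidential", "wire transfer",
    "urgent", "within 24 hours",
]

@dataclass
class EmailSignal:
    subject: str
    body: str

def pressure_score(email: EmailSignal) -> float:
    """Return a rough 0-1 score based on how many pressure phrases appear."""
    text = f"{email.subject} {email.body}".lower()
    hits = sum(1 for phrase in PRESSURE_PHRASES if re.search(re.escape(phrase), text))
    return min(hits / 5, 1.0)  # cap the score; 5+ hits is treated as maximal

if __name__ == "__main__":
    sample = EmailSignal(
        subject="Final warning: account will be suspended",
        body="Act now and complete the wire transfer within 24 hours. Do not tell anyone.",
    )
    score = pressure_score(sample)
    print(f"Pressure score: {score:.2f}")
    if score >= 0.6:  # illustrative threshold for escalation to a human analyst
        print("Escalate to security team for review.")
```

In practice a signal like this would feed into your secure email gateway or SIEM rather than run standalone; the point is that pressure language is measurable, and it is something your people can be trained to spot as well.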

2. Enhance Employee Vetting and Insider Threat Detection
AI lowers the barrier to entry for cyber crime, which means insider threats may emerge from previously low-risk roles. Whether intentional or accidental, internal misuse of AI tools can lead to data leakage, reputational damage, or compliance breaches.

Actionable Tip: Introduce AI usage policies and vet employee access to generative tools. Use behavioural baselining to flag anomalies in internal activity.
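
To make “behavioural baselining” concrete, here is a hedged, minimal sketch of one possible approach: comparing each user’s daily outbound data volume against their own historical baseline using a simple z-score. The sample data, window size, and threshold are illustrative assumptions; dedicated UEBA or SIEM analytics do this far more robustly.

```python
# Hypothetical sketch of behavioural baselining: flag users whose daily data
# transfer deviates sharply from their own historical baseline. Thresholds and
# the z-score approach are illustrative assumptions, not a production design.

from statistics import mean, stdev

def is_anomalous(history_mb: list[float], today_mb: float, z_threshold: float = 3.0) -> bool:
    """Return True if today's transfer volume is a statistical outlier for this user."""
    if len(history_mb) < 14:          # need a reasonable baseline window first
        return False
    mu = mean(history_mb)
    sigma = stdev(history_mb)
    if sigma == 0:
        return today_mb > mu * 2      # fallback for perfectly flat baselines
    z = (today_mb - mu) / sigma
    return z > z_threshold

if __name__ == "__main__":
    # 30 days of typical outbound transfer volume (MB) for one user - illustrative data
    baseline = [120, 95, 110, 130, 105, 98, 115] * 4 + [102, 125]
    print(is_anomalous(baseline, today_mb=118))   # False: within normal range
    print(is_anomalous(baseline, today_mb=2400))  # True: possible exfiltration
```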

3. Invest in AI-Aware Defence Strategies and Tooling
AI is not just a threat; it’s also a powerful ally. Organisations must invest in defence strategies that leverage AI for good: predictive analytics, automated incident response, and intelligent access controls.

Actionable Tip: Evaluate your current tooling for AI-readiness. Are your defences capable of recognising and responding to machine-led threats? If not, it’s time to upgrade.
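
For a flavour of what “automated incident response” can look like, the sketch below shows a containment-first reaction to a high anomaly score. The functions revoke_sessions, notify_soc, and open_ticket are hypothetical placeholders for whatever your identity provider, SOAR, or ticketing platform actually exposes; the threshold is illustrative.

```python
# Minimal sketch of an automated incident-response step: when an anomaly score
# crosses a threshold, contain first and investigate afterwards. The helper
# functions below are placeholders standing in for real platform integrations.

from datetime import datetime, timezone

SCORE_THRESHOLD = 0.8  # illustrative; tune against your own false-positive tolerance

def revoke_sessions(user_id: str) -> None:
    print(f"[{datetime.now(timezone.utc).isoformat()}] Revoking active sessions for {user_id}")

def notify_soc(user_id: str, score: float) -> None:
    print(f"Alerting SOC: {user_id} scored {score:.2f}")

def open_ticket(user_id: str, details: str) -> None:
    print(f"Opening incident ticket for {user_id}: {details}")

def respond_to_anomaly(user_id: str, anomaly_score: float, details: str) -> None:
    """Contain, notify, and record, in that order, when a machine-led threat is suspected."""
    if anomaly_score < SCORE_THRESHOLD:
        return
    revoke_sessions(user_id)               # containment: cut off live access first
    notify_soc(user_id, anomaly_score)     # keep a human in the loop
    open_ticket(user_id, details)          # preserve an audit trail for review

if __name__ == "__main__":
    respond_to_anomaly("jdoe", 0.92, "Outbound transfer 20x above baseline outside working hours")
```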

Detect. Protect. Support.

Free Posture Assessment

Understand your security risks and how to fix them.

Take the first step to improving your cyber security posture, looking at ten key areas you and your organisation should focus on, backed by NCSC guidance.

Claim your free 30-minute guided posture assessment with a CyberLab expert.
