AWS intruder pulled off AI-assisted cloud break-in in 8 mins

submitted by

www.theregister.com/2026/02/04/aws_cloud_breaki…

A digital intruder broke into an AWS cloud environment and in just under 10 minutes went from initial access to administrative privileges, thanks to an AI speed assist.

The Sysdig Threat Research Team said they observed the break-in on November 28, and noted it stood out not only for its speed, but also for the “multiple indicators” suggesting the criminals used large language models to automate most phases of the attack, from reconnaissance and privilege escalation to lateral movement, writing malicious code, and LLMjacking: using a compromised cloud account to access cloud-hosted LLMs.

“The threat actor achieved administrative privileges in under 10 minutes, compromised 19 distinct AWS principals, and abused both Bedrock models and GPU compute resources,” Sysdig’s threat research director Michael Clark and researcher Alessandro Brucato said in a blog post about the cloud intrusion. “The LLM-generated code with Serbian comments, hallucinated AWS account IDs, and non-existent GitHub repository references all point to AI-assisted offensive operations.”
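For anyone unfamiliar with the LLMjacking term: with a leaked access key that carries bedrock:InvokeModel, “abusing Bedrock models” is nothing more exotic than ordinary API calls billed to the victim account. A minimal sketch of what that looks like from the attacker’s side, assuming boto3; the access key, region, and model ID below are placeholders, not values from the incident:

```python
# LLMjacking, reduced to its essence: Bedrock calls made with someone else's
# leaked credentials, so the token usage lands on the victim's bill.
# All values here are illustrative placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLELEAKEDKEY",   # e.g. scraped from a public bucket
    aws_secret_access_key="EXAMPLESECRETKEY",
    region_name="us-east-1",
)
bedrock = session.client("bedrock-runtime")

# Any foundation model the victim account has access to will do.
resp = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```

Which is also why Bedrock invocations from a principal that has never touched the service before are worth alerting on.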

2 Comments

If your operation can be hacked by Clippy on Acid then it was not exactly Fort Knox to begin with.

Clippy on Acid

Bro. That would be fuckin awesome lmfao, and way too cool for an LLM lol

Comments from other communities

Given how much AWS is pushing vibe-coding, we’re now in face-eating-leopards territory.

Vibe hacking is the future. Really.

It’s going to be vibe hackers hacking vibe coded systems. Caaaaaant waaaaaiiiiit!

From the report that’s the source of this Register article (emphasis added):

The threat actor infiltrated the victim’s environments using valid test credentials stolen from public S3 buckets. These buckets contained Retrieval-Augmented Generation (RAG) data for AI models, and the compromised credentials belonged to an Identity and Access Management (IAM) user that had multiple read and write permissions on AWS Lambda and restricted permissions on AWS Bedrock. This user was likely intentionally created by the victim organization to automate Bedrock tasks with Lambda functions across the environment.

It is also important to note that the affected S3 buckets were named using common AI tool naming conventions, which the attackers actively searched for during reconnaissance.

https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes
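The foothold itself is the unglamorous part: credentials sitting in publicly readable buckets with guessable names. A minimal defensive sketch, assuming boto3 and a principal allowed to call s3:ListAllMyBuckets, s3:GetBucketPolicyStatus, and s3:GetPublicAccessBlock; the flagging logic and output are illustrative, not taken from the report:

```python
# Rough audit sketch: flag S3 buckets that have a public bucket policy or no
# Public Access Block, i.e. the kind of exposure the report describes.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    try:
        policy_public = s3.get_bucket_policy_status(Bucket=name)["PolicyStatus"]["IsPublic"]
    except ClientError:
        policy_public = False  # no bucket policy attached

    try:
        pab = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(pab.values())  # all four public-access settings enabled
    except ClientError:
        fully_blocked = False  # no Public Access Block configured on the bucket

    if policy_public or not fully_blocked:
        print(f"review {name}: public policy={policy_public}, access block={fully_blocked}")
```

Making the buckets private would have blocked this particular path; not writing credentials into RAG data buckets at all closes it properly.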

It’s absolutely one of the strongest applications of LLMs right now.

Very interested to see how things develop long-term though, since theoretically we should start seeing red-team tools that can find and close the holes an attacker would be hunting for. Granted, I think we’ll need at least another five years for true high-quality pentest agents, and offense will have the upper hand in the cat-and-mouse game until then.

This is just poor security. Not like in TV/movies, where an “AI” is shown “breaking layers of firewalls and encryption” or whatever 🤣

Somebody fucked up. Plain and simple.
