AI-Powered Intrusion: AWS Admin Access Breached in Under 10 Minutes

StratosAlly

In a striking demonstration of how artificial intelligence is compressing the cyberattack lifecycle, security researchers at Sysdig documented a real-world breach where an intruder achieved full administrative control of an AWS environment in just eight minutes.

Detailed in a February 2026 report, the incident highlights a growing “force multiplier” effect where AI tools collapse hours of manual reconnaissance and exploitation into a single automated sprint. The case also shows how the traditional buffer time between credential exposure and full compromise is rapidly disappearing.

The attack began on November 28, 2025, when valid AWS access keys were left exposed in a publicly accessible Amazon S3 bucket, a common but dangerous cloud misconfiguration. The bucket's naming followed predictable AI-development patterns, making it easy for automated scanners to discover. The credentials likely originated from development or AI pipeline infrastructure, reflecting rising supply-chain-style risks in model and data engineering environments.
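For defenders, the first lesson is simply knowing which buckets in an account are reachable from the internet. Below is a minimal audit sketch, assuming boto3 credentials with read-only S3 permissions; the checks are illustrative and not taken from Sysdig's report.

```python
# Minimal sketch: audit every bucket in an account for public exposure.
# Assumes credentials with s3:ListAllMyBuckets, s3:GetBucketPublicAccessBlock,
# and s3:GetBucketPolicyStatus. Bucket handling is illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError:
        # No public-access-block configuration at all is itself a red flag.
        fully_blocked = False
    try:
        is_public = s3.get_bucket_policy_status(Bucket=name)["PolicyStatus"]["IsPublic"]
    except ClientError:
        is_public = False  # no bucket policy attached
    if is_public or not fully_blocked:
        print(f"[!] {name}: publicly reachable or missing a public-access block")
```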

The compromised credentials belonged to an IAM user with limited permissions, mainly ReadOnlyAccess and specific AWS Lambda controls. However, within seconds, the attacker used AI-assisted scripts to enumerate the environment and identify a Lambda function named “EC2-init” running with an overly permissive execution role that had administrative privileges. This single weakness dramatically expanded the identity blast radius.
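The privilege chain at the heart of this escalation, a low-privilege user able to reach a Lambda function whose execution role is an administrator, is something defenders can inventory ahead of time. Here is a minimal sketch, assuming credentials with lambda:ListFunctions and iam:ListAttachedRolePolicies; it only flags the AWS-managed AdministratorAccess policy, so inline or customer-managed admin policies would need additional checks.

```python
# Minimal sketch: flag Lambda functions whose execution role carries
# AdministratorAccess, the kind of privilege-escalation path described above.
import boto3

lam = boto3.client("lambda")
iam = boto3.client("iam")

for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        role_name = fn["Role"].split("/")[-1]  # role name is the last ARN segment
        attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]
        for policy in attached:
            if policy["PolicyArn"].endswith("/AdministratorAccess"):
                print(f"[!] {fn['FunctionName']} runs with admin role {role_name}")
```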

Instead of manual trial-and-error exploitation, the attacker used an LLM to generate malicious Python code, updating the Lambda function repeatedly until it successfully created new admin access keys and returned them directly in the function's output, avoiding the need for any external command-and-control infrastructure.
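The same API calls that drove the escalation, repeated Lambda code updates followed by the creation of fresh access keys, also leave a clear trail in CloudTrail. A minimal hunting sketch, assuming CloudTrail is enabled in the region and the caller has cloudtrail:LookupEvents; the 24-hour window is an arbitrary illustration:

```python
# Minimal sketch: search CloudTrail for the two API calls that defined this
# attack chain: Lambda code updates followed by new access-key creation.
from datetime import datetime, timedelta, timezone
import boto3

ct = boto3.client("cloudtrail")
window_start = datetime.now(timezone.utc) - timedelta(hours=24)

for event_name in ("UpdateFunctionCode", "CreateAccessKey"):
    events = ct.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=window_start,
    )["Events"]
    for e in events:
        print(f"{e['EventTime']} {event_name} by {e.get('Username', 'unknown')}")
```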

After gaining admin access, the attacker moved across 19 AWS principals to establish persistence. The primary goal appeared to be “LLMjacking” — hijacking cloud AI compute resources. The attacker pivoted to Amazon Bedrock, confirmed logging was disabled, and invoked multiple foundation models. They also provisioned a p4d.24xlarge GPU instance costing about $32.77 per hour and deployed a public JupyterLab server for persistent access. At scale, such abuse can generate five- or six-figure cloud bills within days.
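Two quick checks follow directly from this stage of the attack: whether Bedrock model-invocation logging is actually on, and what a hijacked GPU instance costs per day. The sketch below assumes a boto3 bedrock client in an illustrative region and uses the hourly rate quoted above; the instance counts in the cost estimate are hypothetical.

```python
# Minimal sketch: check whether Bedrock model-invocation logging is enabled
# (the control the attacker verified was off before invoking models), and run
# the cost arithmetic for the GPU instance mentioned above.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")  # region is illustrative
config = bedrock.get_model_invocation_logging_configuration().get("loggingConfig")
if not config:
    print("[!] Bedrock model invocation logging is DISABLED in this region")

hourly_rate = 32.77         # p4d.24xlarge on-demand rate cited in the report
per_day = hourly_rate * 24  # roughly $786 per instance per day
print(f"One hijacked p4d.24xlarge: ~${per_day:,.2f}/day; "
      f"ten of them for a week: ~${per_day * 10 * 7:,.0f}")
```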

Researchers noted hallmarks of AI-driven automation, including the attack's raw velocity (roughly 480 seconds from initial access to administrative control), scripts with sophisticated error handling and Serbian-language comments, and automation attempts targeting non-existent AWS accounts, likely the product of LLM hallucinations.

The incident underscores the need for identity-first cloud security, including eliminating long-lived keys, restricting Lambda privilege chains, enabling AI service logging, and deploying real-time identity behavior monitoring. As attackers weaponize AI for speed and scale, defense must operate at machine speed as well.
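As a concrete starting point for the first of those recommendations, long-lived IAM access keys can be inventoried and aged out. A minimal sketch, assuming iam:ListUsers and iam:ListAccessKeys permissions; the 90-day rotation threshold is an assumption, not something from the report:

```python
# Minimal sketch: flag active IAM access keys older than a rotation threshold
# so they can be rotated or replaced with short-lived role credentials.
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
max_age_days = 90  # assumed rotation policy, tune to your own standard

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age > max_age_days:
                print(f"[!] {user['UserName']}: key {key['AccessKeyId']} is {age} days old")
```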

Caught feelings for cybersecurity? It's okay, it happens. Follow us on LinkedIn, YouTube, and Instagram to keep the spark alive.
