The Capital One Breach, Seven Years Later: The Blast Radius Problem That Won't Go Away

blast-radius · ssrf · iam · imds · breach-analysis

In July 2019, Capital One disclosed that an attacker had accessed personal information of 106 million customers and credit card applicants. Former AWS engineer Paige Thompson was later convicted of wire fraud and computer intrusion. The financial fallout reached roughly $270 million: an $80 million civil penalty from the OCC and a $190 million class-action settlement.

The breach became a landmark case in cloud security. AWS responded by shipping IMDSv2, a hardened version of the EC2 Instance Metadata Service that blocks the exact technique used in the attack. The industry declared the problem solved.

It wasn't. Seven years later, the same attack pattern keeps working because the blast radius problem that made Capital One catastrophic was never actually fixed.

The attack chain

The technical details of the Capital One breach are well-documented through court proceedings and subsequent analysis. The attack chain was:

Misconfigured WAF (ModSecurity on EC2)
  → Server-Side Request Forgery (SSRF)
    → http://169.254.169.254/latest/meta-data/iam/security-credentials/
      → IAM role temporary credentials
        → S3 bucket enumeration and download
          → 106 million customer records

The attacker exploited a misconfigured web application firewall to make server-side requests to the EC2 Instance Metadata Service (IMDS) at 169.254.169.254. This link-local endpoint, reachable by any process on the instance, returned temporary AWS credentials for the IAM role attached to the EC2 instance. Those credentials had access to S3 buckets containing customer data.

The SSRF was the door. The IAM role's blast radius, everything those credentials could reach, determined the damage. That role could enumerate and download from S3 buckets across the environment. No privilege escalation was needed. No additional vulnerabilities were required. The permissions attached to a single compute identity were sufficient to reach 106 million records.
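The payload of that metadata request is worth seeing. The field names below match the documented format of the IMDS security-credentials response; the values are invented for illustration:

```python
import json

# Invented example values; the field names match the documented IMDS
# security-credentials response format.
imds_response = json.dumps({
    "Code": "Success",
    "Type": "AWS-HMAC",
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "Token": "example-session-token",
    "Expiration": "2019-07-29T12:00:00Z",
})

creds = json.loads(imds_response)
# These three values are all an attacker needs to call the AWS API as
# the instance's role, e.g. to list and download S3 objects.
access_key = creds["AccessKeyId"]
secret = creds["SecretAccessKey"]
token = creds["Token"]
```

The response is a complete, ready-to-use credential set; no cracking or escalation step sits between the SSRF and the AWS API.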

AWS's response: IMDSv2

Four months after the breach disclosure, AWS announced IMDSv2, a session-oriented version of the Instance Metadata Service. IMDSv2 requires a two-step process: first, a PUT request with a TTL header obtains a session token; then every subsequent metadata request must include that token in a header.

This blocks SSRF-based credential theft because most SSRF vulnerabilities only allow GET requests and can't set custom headers. It was a well-designed mitigation that directly addressed the attack technique.
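The two-step flow can be sketched with nothing but the standard library. This is a hedged illustration, not AWS's client: the base URL is parameterized (on a real instance it would be http://169.254.169.254), while the header names are the documented IMDSv2 ones:

```python
import json
import urllib.request

# Documented IMDSv2 header names.
IMDS_TOKEN_TTL_HEADER = "X-aws-ec2-metadata-token-ttl-seconds"
IMDS_TOKEN_HEADER = "X-aws-ec2-metadata-token"

def get_imds_token(base: str, ttl: int = 21600) -> str:
    """Step 1: a PUT request with a TTL header returns a session token.
    A typical SSRF can issue neither the PUT nor the custom header."""
    req = urllib.request.Request(
        f"{base}/latest/api/token",
        method="PUT",
        headers={IMDS_TOKEN_TTL_HEADER: str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def get_role_credentials(base: str, token: str, role: str) -> dict:
    """Step 2: metadata requests must carry the token in a header."""
    req = urllib.request.Request(
        f"{base}/latest/meta-data/iam/security-credentials/{role}",
        headers={IMDS_TOKEN_HEADER: token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.loads(resp.read().decode())
```

Under IMDSv1, step 1 simply doesn't exist: any plain GET to the credentials path succeeds, which is exactly what an SSRF can produce.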

By mid-2024, AWS made IMDSv2-only the default for new instance types. The assumption was that the problem would gradually solve itself as organizations migrated to newer instances.

That assumption was wrong.

Half the surface is still exposed

Datadog's State of Cloud Security reports have tracked IMDSv2 enforcement every year since 2022. The adoption curve tells a clear story:

Year EC2 instances enforcing IMDSv2
2022 7%
2023 21%
2024 32%
2025 49%

Source: Datadog 2025 State of Cloud Security, data collected September 2025.

Seven years after Capital One, half of all EC2 instances still don't enforce IMDSv2. The improvement is real, from 7% to 49%, but the attack surface remains enormous.

The numbers get worse when you look at instance age. Only 14% of instances older than two years enforce IMDSv2. These are exactly the kind of long-running, load-bearing instances that tend to have broad IAM roles: the production servers, the data pipelines, the internal tools that nobody wants to touch.

Perhaps the most revealing statistic: 82% of instances had exclusively used IMDSv2 in the two weeks before measurement. They could enforce IMDSv2 right now with zero functional impact. They just haven't. The gap between "we could be safe" and "we are safe" is 33 percentage points of the entire EC2 fleet.

The technique has been industrialized

While organizations slowly adopt IMDSv2, attackers haven't been waiting. The Capital One technique, SSRF to IMDS to credential theft, has evolved from a one-off exploit into an automated, scaled operation.

F5 Labs: mass scanning campaign (March 2025)

In March 2025, F5 Labs documented a coordinated campaign targeting EC2-hosted websites with SSRF payloads designed to extract IMDS credentials. The campaign ran from March 13 to March 25, rotating six query parameter names (url, dest, file, redirect, target, uri) across thousands of targets.

This wasn't a researcher finding a single vulnerable WAF. It was automated infrastructure scanning the internet for any SSRF on any EC2 instance still running IMDSv1. The Capital One technique, industrialized.
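What such a scanner's payload generation amounts to can be sketched in a few lines; the target URL below is hypothetical, and the parameter names are the six F5 observed:

```python
# The six query-parameter names rotated in the campaign F5 Labs observed.
SSRF_PARAMS = ["url", "dest", "file", "redirect", "target", "uri"]
IMDS_CREDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def probe_urls(target: str) -> list[str]:
    """One probe per candidate parameter name against a single target.
    A real campaign repeats this across thousands of hosts."""
    return [f"{target}/?{param}={IMDS_CREDS}" for param in SSRF_PARAMS]
```

Any probe that comes back with a role name or credential JSON marks an IMDSv1 instance with a live SSRF, ready for the Capital One playbook.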

Grafana SSRF to AWS credentials (Fortinet, ongoing)

Fortinet observed a surge in AWS credential compromises traced to Grafana SSRF vulnerabilities. Approximately 14% of environments they monitor run Grafana versions within the affected range. A newer vulnerability (CVE-2025-4123), when chained with the Grafana Image Renderer plugin, escalates to a full-read SSRF, putting IMDS credentials in reach on any EC2 instance without IMDSv2.

Pandoc CVE-2025-51591 (September 2025, Wiz)

Wiz caught active exploitation of a Pandoc SSRF vulnerability. Attackers submitted HTML documents containing iframes pointed at 169.254.169.254, targeting IAM credential endpoints. The attacks failed where IMDSv2 was enforced but succeeded where it wasn't.

Chainlit CVE-2026-22219 (January 2026)

The most recent example: an SSRF vulnerability in the Chainlit AI framework allows fetching from the IMDS endpoint on EC2 instances. Security researchers demonstrated the full chain: SSRF to credential theft to lateral movement into S3, Secrets Manager, and databases. Disclosed November 2025, patched December 2025 in version 2.9.4.

The pattern

Every few months, a new SSRF vulnerability appears in a widely-deployed application: a monitoring tool, a document converter, an AI framework. Each one provides a path to 169.254.169.254. Each one turns an application-layer bug into AWS credential theft on any instance without IMDSv2. The entry points keep changing. The technique is the same one that worked against Capital One in 2019.

SSRF is just one entry point

Meanwhile, the credential theft problem extends well beyond SSRF. Datadog's 2025 report found that 59% of AWS IAM users have an access key older than one year: long-lived credentials that accumulate in CI/CD pipelines, config files, and developer machines.

In 2024, the ShinyHunters/Nemesis operation demonstrated what happens when credential theft scales: attackers scanned AWS IP ranges, exploited misconfigured web applications to harvest AWS credentials, and exfiltrated terabytes of data, including AWS keys, customer data, and source code, from thousands of organizations.

In January 2025, the Codefinger ransomware group showed what attackers do with stolen AWS credentials now: they used compromised keys to encrypt S3 buckets with SSE-C (server-side encryption with customer-provided keys), then demanded ransom for the decryption keys. AWS can't recover the data because it never stored the encryption key. Stolen credentials became an unrecoverable ransomware vector.

Every one of these incidents shares the same root cause as Capital One: the compromised credential had more access than it needed, and nobody measured the blast radius before an attacker did.

The real lesson wasn't about SSRF

The instinct after Capital One was to fix the entry point: patch the WAF, enforce IMDSv2, block SSRF. All necessary. None sufficient.

IMDSv2 blocks one path to credential theft. It does nothing for credentials leaked in Docker images, hardcoded in config files, harvested from CI/CD pipelines, or exposed through memory safety bugs. And it does nothing about the actual problem: what those credentials can do once stolen.

Consider two EC2 instances, both running IMDSv1, both hit by the same SSRF:

Instance A has an IAM role scoped to reading objects from a single S3 bucket:

{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::app-assets-prod/*"
}

Instance B has an IAM role with the permissions Capital One's did, broad S3 access with no resource constraints:

{
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": "*"
}

Both are vulnerable to SSRF. Both have their credentials stolen. Instance A loses one bucket of application assets. Instance B loses everything. Same vulnerability, same technique, vastly different blast radius.

The Capital One breach wasn't caused by SSRF. It was caused by an IAM role whose blast radius encompassed 106 million customer records. The SSRF was incidental. Any credential theft technique would have produced the same outcome.

What actually needs to change

Seven years of data tell a consistent story: the industry is slowly adopting the fix for one entry point (IMDSv2) while ignoring the underlying problem (blast radius). Meanwhile, new entry points appear faster than the old ones get closed.

Enforce IMDSv2. Obviously. If you're in the 51% that hasn't, and especially if you're in the 82% that could enforce it today with no functional impact, do it now. AWS's migration guide walks through the process.
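Enforcement is a one-line change per instance. A sketch using the AWS CLI (the instance ID is a placeholder); the same setting can also be applied fleet-wide through launch templates or organization policy:

```shell
# Require IMDSv2 session tokens on an existing instance.
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-endpoint enabled

# Verify: HttpTokens should now report "required".
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].MetadataOptions.HttpTokens'
```

For the instances that have exclusively used IMDSv2 for weeks, this change is invisible to workloads; it only slams the door on IMDSv1-style plain GETs.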

But don't stop there. IMDSv2 is one control against one technique. The blast radius of every compute identity, every EC2 instance profile, every ECS task role, every Lambda execution role, determines the damage from any credential compromise, regardless of how the credential was obtained.

Measure blast radius from every identity. Start from each compute role and trace what it can actually reach: which S3 buckets, which secrets, which databases, which other roles it can assume. The Capital One role could reach 106 million records. What can yours reach?

Scope IAM policies to specific resources. The difference between Resource: "*" and a specific ARN is the difference between a contained incident and a catastrophic breach. This is the single highest-leverage change most organizations can make.
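That difference is mechanically checkable. A minimal sketch of a policy linter (the function name is ours, not an AWS API) that flags the unscoped patterns above:

```python
def wildcard_findings(policy: dict) -> list[str]:
    """Flag Allow statements whose Action or Resource is an unbounded wildcard."""
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a policy may hold a single statement
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        for action in actions:
            # "s3:*" grants every S3 operation; bare "*" grants everything.
            if action == "*" or action.endswith(":*"):
                findings.append(f"statement {i}: wildcard action {action!r}")
        for resource in resources:
            if resource == "*":
                findings.append(f"statement {i}: unscoped resource '*'")
    return findings
```

Run against the two policies above, Instance A's statement yields no findings while Instance B's yields two; tools like IAM Access Analyzer do a far deeper version of this check, but even this crude pass separates contained roles from catastrophic ones.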

Treat secrets access as transitive. A role that can read Secrets Manager secrets effectively inherits the permissions of every credential stored in those secrets. As the LexisNexis breach showed, a single overpermissioned role with broad secrets access turns one vulnerability into full infrastructure compromise.

Rotate and eliminate long-lived credentials. With 59% of IAM users holding access keys older than a year, the blast radius problem extends far beyond EC2. Every long-lived credential is a skeleton key waiting to be found.

The seven-year test

Capital One was supposed to be the wake-up call. AWS shipped the technical fix within four months. The industry had a clear, specific mitigation: enforce IMDSv2.

Seven years later, the scorecard is:

  • IMDSv2 enforcement: 49% of instances. Up from 7%, but still leaving half the surface exposed.
  • SSRF attacks: up 452% from 2023 to 2024. Automated campaigns now scan the internet for any SSRF on any EC2 instance.
  • Credential theft: still the dominant cloud attack vector. New entry points appear in Grafana, Pandoc, Chainlit, and whatever ships next.
  • Blast radius: still unmeasured in most environments. Still the factor that determines whether a stolen credential is a minor incident or a catastrophic breach.

The Capital One breach didn't happen because of a novel attack technique. It happened because a single identity could reach too much. Every breach since then that follows the same pattern, SSRF or leaked credential to overpermissioned role to mass data access, is proving the same point.

The entry points will keep changing. The blast radius is the constant.


Ready to map your blast radius? Sign up for hackaws.cloud and find out what an attacker could reach from your AWS identities before they do.