AWS Finally Gave S3 Buckets Their Own Rooms

Tags: s3, iam, supply-chain, shadow-resources

In August 2024, Aqua Security's Nautilus research team presented findings at Black Hat USA that should have made every AWS customer uncomfortable. They had discovered that at least six AWS services automatically create S3 buckets behind the scenes when you first use them, using naming patterns that are entirely predictable from public information. An attacker who knows your AWS account ID can pre-register those bucket names before you do. And account IDs are trivial to obtain: they leak in IAM ARNs, error messages, public bucket policies, and can even be enumerated from any public S3 bucket.

When you eventually enable the service in a new region, AWS tries to create the bucket, finds it already exists with a policy that lets your account write to it, and silently uses the attacker's bucket instead. Your CloudFormation templates, Glue ETL scripts, SageMaker training data, and Athena query results flow into a bucket someone else controls.

Aqua called these shadow resources. The attack they demonstrated, Bucket Monopoly, could escalate to full account takeover through a single CloudFormation stack deployment.

In March 2026, AWS shipped the structural fix: account-regional namespaces for S3. This post examines how the attack worked, why a real fix took over a year and a half, and what you need to do about it now.

The root cause: S3's global flat namespace

Since its launch in 2006, S3 has enforced a simple rule: bucket names are globally unique across all AWS accounts and all regions. If anyone creates a bucket named aws-glue-assets-123456789012-eu-west-1, that name is unavailable to everyone else for as long as the bucket exists, regardless of who created it.

This design made sense when S3 was a simple object store. It became a liability when AWS services started automatically creating buckets with predictable names derived from the account ID and region.

The naming patterns are formulaic:

Service             Bucket naming pattern
------------------  ---------------------------------------------
CloudFormation      cf-templates-{hash}-{region}
Glue                aws-glue-assets-{account-id}-{region}
EMR                 aws-emr-studio-{account-id}-{region}
SageMaker           sagemaker-{region}-{account-id}
Athena              aws-athena-query-results-{account-id}-{region}
Elastic Beanstalk   elasticbeanstalk-{region}-{account-id}

Sources: Aqua Security, Spike Reply

If you know someone's account ID, you know the exact bucket name every one of these services will try to create in every region.
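To see how little an attacker needs, the table's patterns can be turned into a few lines of code. This is a hypothetical sketch (not from Aqua's tooling) that generates every predictable bucket name for a given account ID and region; only CloudFormation's hash requires anything beyond public information, so it's omitted here:

```python
# Patterns from the table above. Everything except the CloudFormation
# hash is derivable from two public values: account ID and region.
SERVICE_PATTERNS = {
    "glue": "aws-glue-assets-{account_id}-{region}",
    "emr": "aws-emr-studio-{account_id}-{region}",
    "sagemaker": "sagemaker-{region}-{account_id}",
    "athena": "aws-athena-query-results-{account_id}-{region}",
    "elasticbeanstalk": "elasticbeanstalk-{region}-{account_id}",
}

def predictable_buckets(account_id: str, region: str) -> dict[str, str]:
    """Return the bucket name each service would create in a region."""
    return {svc: pat.format(account_id=account_id, region=region)
            for svc, pat in SERVICE_PATTERNS.items()}

print(predictable_buckets("123456789012", "eu-west-1")["glue"])
# aws-glue-assets-123456789012-eu-west-1
```

Run this across every region a target hasn't enabled yet and you have the full squatting target list, which is exactly why the attack scaled so well.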

The attack chain: from squatted bucket to account takeover

The most severe attack path Aqua demonstrated targeted CloudFormation. Here's the chain:

Attacker learns target's AWS account ID
  → Predicts CloudFormation template bucket name for unused region
    → Pre-creates bucket with policy allowing target account to write
      → Target enables CloudFormation in that region
        → AWS uses attacker's bucket for template storage
          → Attacker replaces template with malicious version
            → Template creates rogue IAM admin user
              → Full account takeover

This isn't theoretical. Aqua demonstrated it end-to-end. The victim deploys a CloudFormation stack, believing they're using their own template. The template actually comes from the attacker's bucket. The attacker's version includes an additional IAM user with AdministratorAccess. The stack deploys successfully. The victim sees a working deployment. The attacker has persistent admin access.

The attack works because CloudFormation trusts the contents of the template bucket implicitly. There's no integrity verification, no signature check, no confirmation that the bucket belongs to the account deploying the stack.
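Concretely, the squatted bucket has to grant the victim's account write access so the service's setup path succeeds instead of erroring out. A minimal sketch of such a bucket policy (illustrative only; the exact policy from Aqua's research isn't reproduced here):

```python
import json

def squatter_policy(bucket: str, victim_account: str) -> str:
    """Sketch of a bucket policy a squatter might attach: the victim
    account can read and write objects, so the AWS service's setup
    succeeds, while the squatter, as bucket owner, keeps full control
    of everything that lands in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "LetVictimUseBucket",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{victim_account}:root"},
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    return json.dumps(policy, indent=2)

print(squatter_policy("cf-templates-abc123-eu-west-1", "123456789012"))
```

From the victim's side, nothing looks wrong: writes succeed, reads succeed, and no API call reports who actually owns the bucket unless you explicitly ask.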

Other service attack paths

CloudFormation account takeover was the headline, but every affected service had its own exploitation path:

AWS Glue: ETL job scripts are stored in the Glue assets bucket. An attacker who controls this bucket can modify job code. When the victim runs a Glue job, it executes the attacker's code with the job's IAM role permissions, which typically include access to data sources and destinations across the account.

SageMaker: Notebook code and training data pass through the SageMaker default bucket. An attacker can inject JavaScript into Jupyter notebooks (achieving XSS in the SageMaker console) or poison training datasets, corrupting ML models without leaving obvious traces.

Athena: Query results are written to the default results bucket. If an attacker controls that bucket, every query result, potentially containing sensitive data from any data source Athena can reach, flows to the attacker.

Elastic Beanstalk: Application source bundles are staged through the EB bucket. An attacker can perform a TOCTOU (time-of-check-to-time-of-use) attack, replacing the application bundle between upload and deployment.

In each case, the damage extends far beyond the squatted bucket itself. The bucket is just the entry point. The impact is determined by what the AWS service does with the contents of that bucket, and those services operate with broad permissions by design.

The CDK variant

Aqua found a separate but related vulnerability in the AWS Cloud Development Kit (CDK). When you bootstrap a CDK environment, it creates a staging bucket following the pattern:

cdk-hnb659fds-assets-{account-id}-{region}

The qualifier hnb659fds is the default, and most CDK users never change it. This makes the staging bucket name as predictable as the service-created buckets above. An attacker who pre-registers the CDK staging bucket can inject malicious CloudFormation templates into any CDK deployment in that region.

The CDK vulnerability shared the same root cause: globally unique bucket names plus predictable naming patterns equals a race condition that attackers win by default.
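The predictability is easy to demonstrate: with the default qualifier, the staging bucket name is a pure function of two public values. A sketch:

```python
DEFAULT_QUALIFIER = "hnb659fds"  # CDK's well-known default qualifier

def cdk_staging_bucket(account_id: str, region: str,
                       qualifier: str = DEFAULT_QUALIFIER) -> str:
    """Staging bucket name that `cdk bootstrap` creates for an
    environment. With the default qualifier, an attacker can compute
    this from the account ID and region alone."""
    return f"cdk-{qualifier}-assets-{account_id}-{region}"

print(cdk_staging_bucket("123456789012", "us-east-2"))
# cdk-hnb659fds-assets-123456789012-us-east-2
```

A custom qualifier changes only the middle segment, but that's enough to break predictability, which is why rotating it is on the remediation list below.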

Why this was hard to fix

AWS patched the individual service vulnerabilities between February and June 2024, adding checks so that services no longer blindly trust pre-existing buckets. Additional defense-in-depth protections were deployed by December 2024.

But these were patches on top of a fundamentally broken design. The global flat namespace meant that any new service, any new automation tool, any new default bucket name was a potential squatting target. The patches fixed known patterns. They couldn't prevent the pattern from recurring.

What was needed was a structural change to S3's naming model. That's what took until March 2026.

The fix: account-regional namespaces

On March 11, 2026, AWS announced account-regional namespaces for S3 general purpose buckets. This is the first fundamental change to S3's naming model in its 20-year history.

The concept is straightforward: buckets created in the account-regional namespace are scoped to your account and region. The same bucket name prefix can exist in different accounts without conflict. No one else can squat your bucket names because the namespace is yours.

Bucket names in the account-regional namespace carry a mandatory suffix, -{account-id}-{region}-an. You specify the full name, suffix included, when creating the bucket:

aws s3api create-bucket \
  --bucket my-data-123456789012-us-west-1-an \
  --bucket-namespace account-regional \
  --region us-west-1 \
  --create-bucket-configuration LocationConstraint=us-west-1

In CloudFormation, you can use BucketNamePrefix instead and let AWS append the suffix:

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketNamePrefix: my-data
      BucketNamespace: account-regional

The key difference from the old model: even if an attacker knows your account ID and region, they can't register a bucket in your account-regional namespace. The namespace is intrinsically tied to account ownership.

Enforcing it organizationally

The real power is in the new IAM condition key s3:x-amz-bucket-namespace. Organizations can deploy an SCP that denies bucket creation outside the account-regional namespace:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireAccountRegionalNamespace",
      "Effect": "Deny",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-bucket-namespace": "account-regional"
        }
      }
    }
  ]
}

This single policy eliminates the entire class of bucket squatting attacks for new buckets. No more predictable global names. No more race conditions. No more shadow resources.

The feature is available across 37 AWS regions at no additional cost. Existing buckets in the global namespace continue to work unchanged.

What this doesn't fix

Account-regional namespaces solve the new bucket problem. They don't retroactively fix:

Existing shadow resources. If a service already created a bucket in the global namespace, that bucket stays in the global namespace. You need to audit existing service-created buckets and verify ownership.

Custom automation using global names. If your CI/CD pipelines, Terraform modules, or deployment scripts create buckets in the global namespace, they're still squattable until you migrate them.

Service permissions. Even with correctly-owned buckets, the AWS services that use them still operate with broad permissions. A Glue job that can read from every data source in the account is overpermissioned regardless of who owns the assets bucket. The bucket squatting attack is gone, but the underlying permission model remains.

Third-party tools with predictable names. The shadow resources problem isn't unique to AWS services. Any tool or framework that creates S3 buckets with predictable names, and many do, is a potential squatting target until it adopts account-regional namespaces.

What to do now

1. Enforce account-regional namespaces via SCP. Deploy the SCP above across your organization. Every new bucket should be created in the account-regional namespace. This is the single highest-leverage action.

2. Audit existing service-created buckets. For each of the services listed above, verify that the default buckets in every region are owned by your account. Pay special attention to regions you've enabled but rarely use, since those are the ones most likely to have been squatted.
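One way to start that audit is to flag any bucket in your inventory whose name matches one of the predictable service-default patterns, then verify ownership of each flagged bucket (for example, with an S3 HeadBucket call passing ExpectedBucketOwner). A sketch, assuming you already have the list of bucket names:

```python
import re

# Regexes for the predictable default-bucket names from the table
# earlier in this post.
PREDICTABLE = [
    re.compile(r"^cf-templates-[0-9a-z]+-[a-z0-9-]+$"),
    re.compile(r"^aws-glue-assets-\d{12}-[a-z0-9-]+$"),
    re.compile(r"^aws-emr-studio-\d{12}-[a-z0-9-]+$"),
    re.compile(r"^sagemaker-[a-z0-9-]+-\d{12}$"),
    re.compile(r"^aws-athena-query-results-\d{12}-[a-z0-9-]+$"),
    re.compile(r"^elasticbeanstalk-[a-z0-9-]+-\d{12}$"),
]

def flag_service_buckets(bucket_names: list[str]) -> list[str]:
    """Return buckets matching a predictable service-default pattern.
    Each hit should then be ownership-verified, e.g. via HeadBucket
    with ExpectedBucketOwner set to your account ID."""
    return [b for b in bucket_names
            if any(p.match(b) for p in PREDICTABLE)]

names = ["my-app-logs", "aws-glue-assets-123456789012-eu-west-1",
         "sagemaker-us-east-1-123456789012"]
print(flag_service_buckets(names))
```

This only covers the patterns listed here; extend the regex list with any defaults your own tooling creates.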

3. Change your CDK bootstrap qualifier. If you're using the default hnb659fds qualifier, you're using a predictable bucket name. Generate a unique qualifier and re-bootstrap your environments.

4. Customize default bucket names. Where services allow it, override the default bucket name with something that isn't derived from your account ID. Not all services support this, but the ones that do should be configured.

5. Scope service roles. The IAM roles attached to CloudFormation, Glue, SageMaker, and other services determine the damage from any compromise, whether from bucket squatting or anything else. A Glue job that processes data from one database doesn't need access to every data source in the account.

6. Monitor for unexpected bucket usage. CloudTrail logs S3 API calls. Alert on CreateBucket events in the global namespace after you've enforced the SCP, and on PutObject or GetObject calls to buckets you don't recognize.
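As a first pass, a detector over CloudTrail records can flag any CreateBucket whose name lacks the account-regional -an suffix. This is a simplification (the suffix is a proxy for the namespace, not the namespace itself), but it catches anything slipping past the SCP. A sketch, using CloudTrail's field names for S3 management events:

```python
def is_global_namespace_create(event: dict) -> bool:
    """Flag CreateBucket CloudTrail events whose bucket name lacks the
    account-regional '-an' suffix, as a proxy for creation in the
    legacy global namespace. Sketch only."""
    if event.get("eventName") != "CreateBucket":
        return False
    params = event.get("requestParameters") or {}
    return not params.get("bucketName", "").endswith("-an")

evt = {"eventName": "CreateBucket",
       "requestParameters": {"bucketName": "legacy-global-bucket"}}
print(is_global_namespace_create(evt))  # True
```

Wire this into whatever consumes your CloudTrail stream (EventBridge rule, Lambda, SIEM query) and alert on hits.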

AWS is fixing this the right way

Credit where it's due: account-regional namespaces are the right fix, and it's a significant one.

AWS could have kept playing whack-a-mole, patching individual services as new squatting vectors surfaced. Instead, they changed the naming model itself. A 20-year-old design decision that underpins millions of applications isn't something you change lightly, and the fact that they shipped it as an opt-in feature with SCP enforcement means organizations can adopt it at their own pace without breaking existing infrastructure.

The vulnerability existed because of a design tradeoff that made sense in 2006. Global bucket names were simple, intuitive, and easy to reason about. They became a liability when AWS services started auto-creating buckets with names derived from account metadata, but that wasn't foreseeable two decades ago. What matters is that once the problem was clearly demonstrated by Aqua's research, AWS responded with both immediate service-level patches and a structural fix that eliminates the entire attack class going forward.

The broader pattern is worth watching, too. Predictable identifiers in shared namespaces are a recurring problem across cloud infrastructure: DNS names, container image tags, package registries, and AI model repositories all share the same fundamental dynamic. S3 account-regional namespaces set a precedent for how cloud providers can address this class of issue at the platform level rather than leaving it to individual customers to work around.

The fix is here. The SCP is a dozen lines of JSON. Deploy it.


Ready to understand your AWS attack surface? Sign up for hackaws.cloud and find out what an attacker could reach from your AWS identities - before they do.