Good news: unsecured S3 bucket discovery just got easier

Oh, that's not good news is it?

If you thought the business of discovering unsecured Amazon Web Services S3 buckets was for the pros, think again: like all things, the process can be automated, and the code to automate it posted to GitHub.

It's not a new discipline – a quick search of GitHub for S3 bucket enumeration turns up more than 1,000 results – but the latest projects add a new wrinkle: they use data made public in certificate transparency logs, instead of the word-lists common in previous approaches.

Over the weekend, the first such project to come to this author's attention landed: Bucket Stream, which scans certificate transparency logs with a simple bit of Python.
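The core trick is simple: every TLS certificate issued for a domain lands in a public certificate transparency log, and the hostnames on those certificates make good seeds for guessing bucket names. Here's a minimal sketch of that name-derivation step – the affix list and permutation rules below are illustrative, not Bucket Stream's actual logic:

```python
import re

# Illustrative affixes only; real tools ship longer lists.
SUFFIXES = ["", "-backup", "-assets", "-dev", "-staging"]

def candidate_buckets(hostname):
    """Turn a hostname seen in a CT log entry into plausible S3 bucket names."""
    hostname = hostname.lstrip("*.").lower()       # drop wildcard prefixes
    labels = hostname.split(".")
    # Try the full hostname plus joined variants of its labels (minus the TLD).
    bases = {hostname, labels[0], "".join(labels[:-1]), "-".join(labels[:-1])}
    names = set()
    for base in bases:
        for suffix in SUFFIXES:
            name = re.sub(r"[^a-z0-9.-]", "-", base + suffix)
            if 3 <= len(name) <= 63:               # S3 bucket name length limits
                names.add(name)
    return sorted(names)

print(candidate_buckets("*.example.com"))
```

Feed each candidate to an unauthenticated S3 request and you have the skeleton of an enumerator – no word-list required, because certificate issuance does the discovery for you.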

Bucket Stream author Paul Price added a few useful tips for anybody using S3: randomise bucket names so they don't identify your company; set permissions and keep an eye on them; use separate buckets for private and public data; audit who can access the data (suppliers, for example); and use Amazon Macie to classify and secure sensitive data.
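The first tip – randomised, non-identifying bucket names – is trivial to follow. A minimal sketch (the `data` prefix is just a neutral placeholder; the point is the name carries no company branding):

```python
import secrets

def random_bucket_name(prefix="data"):
    """Generate a bucket name that doesn't reveal the owning company.

    secrets gives a cryptographically random suffix, so the name can't be
    guessed from a word-list or derived from a certificate either.
    """
    return f"{prefix}-{secrets.token_hex(8)}"

name = random_bucket_name()
print(name)
```

Note this only hides the bucket; permissions still have to be right, because a leaked or logged name is discoverable forever.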

A second project, Slurp, builds on the same idea – watching the Certstream feed of certificate transparency logs – but re-implements it in Go, which author “bbb31” says is faster and avoids Python dependencies.

It also adds a bit of UI goodness, like colour-coding to identify S3 buckets that are secured versus those that are public.
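That secured-versus-public distinction rests on how S3 answers unauthenticated requests to a candidate bucket's URL. Roughly: 200 means the bucket is publicly listable, 403 means it exists but denies access, 404 means no such bucket. A simplified sketch of that classification (real responses also include redirects for wrong-region requests, among other codes):

```python
import urllib.error
import urllib.request

STATUS_LABELS = {
    200: "public (listable)",
    403: "exists, but access denied",
    404: "no such bucket",
}

def classify(status_code):
    """Map an HTTP status code to a rough bucket-exposure label."""
    return STATUS_LABELS.get(status_code, "unknown/other")

def probe(bucket_name, timeout=5):
    """Unauthenticated GET against a candidate bucket (network required)."""
    url = f"https://{bucket_name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

for code in (200, 403, 404, 301):
    print(code, "->", classify(code))
```

Colour-coding the "public" label, as Slurp does, just makes the dangerous case jump out of the scroll.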

Those are just the newest of the enumeration projects, of course, but using certificate information to get bucket names is the new wrinkle.

The older Bucketeers, for example, needs AWS credentials to run its script – something neither Bucket Stream nor Slurp requires.

The much older AWSBucketDump from Jordan Potti needed a lot more work from the user since, as Potti described it, it's a word-list-based “brute forcer”.
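A toy version of that word-list approach shows why it needs more effort: the user has to supply good keywords and affixes up front, and the candidate space grows multiplicatively. The affix lists below are illustrative; real word-lists run to thousands of entries:

```python
from itertools import product

# Illustrative affixes; a real brute forcer loads these from user-supplied files.
PREFIXES = ["", "dev-", "prod-"]
SUFFIXES = ["", "-backup", "-logs", "-media"]

def wordlist_candidates(keyword):
    """Combine a target keyword with common affixes into candidate bucket names."""
    return sorted({f"{p}{keyword}{s}" for p, s in product(PREFIXES, SUFFIXES)})

print(wordlist_candidates("acme"))
```

Each candidate then gets probed the same way as before – the difference from the CT-log approach is that here the attacker is guessing, rather than harvesting names the target already published.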

Of course, pros like UpGuard's Chris Vickery and Kromtech's researchers probably have their own toolkits, but the big thing to remember is: if you secure your AWS bucket, bad actors will have to find another way to steal your secrets. ®