
AWS IAM Diagnostics: figuring out what role your script is using

I just spent the last hour of my day trying to figure out why my script was saying I didn't have access to a particular resource from within my Docker image. I had granted access to the IAM Role for my Docker instance (or so I thought), but it still said it had no access to write to a particular bucket. The response indicated that credentials were set, but it didn't say which role the request was made under.

At one point, the AWS SDK for Node.js would print out a very useful message like "User arn:..... is not allowed to perform action s3:PutObject on resource arn:...."

Now, however, it simply told me:

{ [AccessDenied: Access Denied]
  message: 'Access Denied',
  code: 'AccessDenied',
  region: null,
  time: Wed Sep 21 2016 13:51:55 GMT+0000 (UTC),
  requestId: 'REQUEST ID',
  extendedRequestId: 'LONG_REQUEST_ID',
  cfId: undefined,
  statusCode: 403,
  retryable: false,
  retryDelay: 13.531139446422458 }

That's great: I know the error is Access Denied, and I know it's not a retryable error, but what did it try to do, and who tried to do it?

After stumbling around for a while, I finally found this:

curl http://169.254.169.254/latest/meta-data/iam/info/

That told me the IAM role that scripts were assuming in my current EC2/Docker environment.
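For reference, that instance metadata endpoint responds with a small JSON document; a representative response looks roughly like this (the ARN, ID, and timestamp below are placeholders, not my actual values):

```json
{
  "Code" : "Success",
  "LastUpdated" : "2016-09-21T13:45:00Z",
  "InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/my-instance-profile",
  "InstanceProfileId" : "AIPAEXAMPLEID"
}
```

The InstanceProfileArn field is the piece you're after: it tells you exactly which instance profile (and therefore which role) your script's credentials are coming from, so you can go check that role's policies. If you have the AWS CLI available, `aws sts get-caller-identity` should similarly report the account and ARN of the credentials actually in use.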
