If you are a boto3 user with multiple AWS profiles defined in your shell, I am sure you have faced this issue at least once: boto3 loads the default profile if you don't specify one explicitly. In this post, I will explain how to specify an AWS profile while using boto3.

Let's say we want to use the profile "dev". We have the following options in boto3.

1. Create a new session with the profile

dev = boto3.session.Session(profile_name='dev')

2. Change the profile of the default session in code

boto3.setup_default_session(profile_name='dev')

3. Change the profile of the default session with an environment variable
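Boto3 honors the standard AWS_PROFILE environment variable, so the profile can be set once in the shell and every subsequent session picks it up:

```shell
# Set the profile for every boto3/aws invocation in this shell session.
export AWS_PROFILE=dev
```

Any boto3 session created afterwards in that shell uses the "dev" profile, with no code changes needed.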

The best practice is always to create IAM logins for each user in an organization rather than sharing the root account. Most companies follow this practice as part of their compliance regulations. But what if the root account is compromised in some way?

Yes, that can happen. It is recommended to enable two-factor authentication for the root account, and even for all the IAM users. But it is also wise to get notified when someone logs in to the console or makes API calls using the root credentials, so that we can act fast. This can be done with the following steps.

1. Enable CloudTrail for all regions.

2. Create a CloudWatch rule that checks for console logins or API access by the root user.

3. Select an SNS topic as the target for the "Matched event" and pick the topic you plan to subscribe to (assuming an SNS topic has already been created).
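The CloudWatch rule's event pattern can match root activity recorded by CloudTrail; a pattern along these lines (adapted from AWS's documentation on monitoring root user activity) should work:

```json
{
  "detail-type": [
    "AWS API Call via CloudTrail",
    "AWS Console Sign In via CloudTrail"
  ],
  "detail": {
    "userIdentity": {
      "type": ["Root"]
    }
  }
}
```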

This way, we get notified whenever the root user does something. If we want the notification by email, go to SNS and create an email subscriber on the topic.

The above is the basic way to check for root activity. But we can tune this better by bringing AWS Lambda into the picture.

The AWS blog already has very detailed documentation on how to do this, so I am not repeating it here. Please refer to the link, which includes a CloudFormation template and a Lambda function, meaning you can spin up the whole stack in a few minutes. I have used this personally and it works great.

I was totally unaware of the fact that even the master account doesn't have all privileges in an RDS (MySQL) database until I got stuck with this issue. Today, I was asked to create a secondary admin user with all privileges in one of our production databases. The MySQL instance was running in AWS RDS. I tried the following command.
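The statement was along these lines (the username and host here are illustrative):

```sql
-- Grant all privileges globally; on RDS this is rejected with access denied.
GRANT ALL PRIVILEGES ON *.* TO 'admin2'@'%';
```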

I got an "Access denied" error while trying to grant all privileges. I was sure about the command, because the same command worked fine on non-RDS MySQL instances. A few minutes of googling gave me the fix.

In order to protect the instance itself, RDS doesn't allow even the master account full access to the mysql database. The mysql.* system tables are considered off-limits: access to them is restricted by Amazon. I can't grant permissions on *.* since that would cover the mysql database, while `%`.* appears not to match those system tables, so it is allowed.

So, the quick fix is to use `%`.* instead of *.*.

The _ and % wildcards are permitted when specifying DB names in GRANT statements that grant privileges at the global or database levels.
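Concretely, the working statement looks like this (username and host again illustrative):

```sql
-- The `%` database wildcard skips the protected mysql.* system tables,
-- so RDS accepts this grant where *.* was rejected.
GRANT ALL PRIVILEGES ON `%`.* TO 'admin2'@'%';
```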

This is something I came across while tuning an nginx server that has multiple Tomcat instances as upstreams. We were trying to adjust the read timeout of the upstream proxies. It is hard to simulate this by stopping the backend, as that throws a 502 Bad Gateway instead. So, to simulate a slow upstream, we used a Node.js script.

This was an issue I faced while setting up this blog. I was getting 404 errors for all the post links in this blog when selecting a non-default permalink structure with SSL.

The first thing I tried was regenerating the .htaccess file: I removed the existing .htaccess file in the WordPress root folder and regenerated it by switching the permalink structure again. That didn't work for me. The fix turned out to be at the web server level. Finally, I found it.

Apache's SSL virtual host configuration needs the same <Directory> block as the HTTP (port 80) virtual host, so that WordPress's .htaccess file is allowed to override the rewrite rules.

Example

<VirtualHost *:443>
    # ... SSL certificate and other directives ...
    <Directory /var/www/html/devopslife.io/>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

The Amazon ECS container agent allows container instances to connect to your cluster. If this agent is down for some reason, deployments to the service won't be reflected on the instance, causing a discrepancy between the desired and running tasks.

Here is a one-liner to check whether the ECS agent container is running. If it is not running, we use the AWS SNS service to send a notification to a topic.
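A sketch of that check, wrapped in a function so it can be dropped into cron (the container name "ecs-agent" is the agent's default; the topic ARN is illustrative):

```shell
# Publish to an SNS topic if the ecs-agent container is not running.
check_ecs_agent() {
  topic_arn="$1"   # e.g. arn:aws:sns:us-east-1:123456789012:ecs-agent-alerts
  docker ps --filter "name=ecs-agent" --filter "status=running" --format '{{.Names}}' \
    | grep -q ecs-agent \
    || aws sns publish --topic-arn "$topic_arn" \
         --message "ECS agent is not running on $(hostname)"
}
```

Schedule it every few minutes from cron on each container instance, and subscribe to the topic for alerts.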

In some cases, we might need to return a custom/different error code for a specific issue. For example, we can return a distinct status code to the end user when the backend node is down. We can do that in nginx as in the example below.
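A sketch of the idea (the upstream name, paths, and the 590 code are all illustrative): intercept upstream errors and replace them with a custom status code.

```nginx
location / {
    proxy_pass http://backend;
    # Let nginx handle error responses returned by the upstream itself.
    proxy_intercept_errors on;
    # When the backend is down (502/503/504), answer with a custom 590.
    error_page 502 503 504 =590 /backend_down;
}

location = /backend_down {
    internal;
    default_type text/plain;
    return 200 "backend unavailable\n";
}
```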