By now everyone should have an SSL-enabled website thanks to the Let’s Encrypt project, which provides free SSL certificates that are good for three months at a time. If you don’t, check out their site and generate your free SSL certificate. The main problem with a three-month SSL certificate is remembering to renew it before it expires. Fortunately, Let’s Encrypt provides an application called Certbot that handles automatic certificate renewal. The instructions were simple: set up a daily cron job that calls certbot, and it will automatically renew your SSL certificate once it comes within thirty days of expiration. No problem, I set up the following:

0 0 * * * certbot renew --quiet --no-self-upgrade

Because certbot does not renew the certificate until it reaches the thirty-day threshold, there was no easy way to test it. Fortunately, I stuck a reminder on my calendar to check. Sixty days later I opened my website, clicked the padlock, and checked the certificate expiration date. It still had the same expiration date, i.e. it hadn’t been renewed. Puzzled, I ran the command manually:

# certbot renew --quiet --no-self-upgrade
#

That seemed to work fine. I checked the certificate’s expiration date and it was now ninety days in the future. There was nothing obvious in the logs, so I did a Google search and found a lot of other people reporting the same issue. It wasn’t until I saw this post that I understood the problem: the crontab had a PATH that did not include the location of certbot. When I downloaded certbot, I installed it in /usr/local/bin. By default, my crontab’s PATH did not include /usr/local/bin. There are a few ways to fix this.

Update the crontab to contain the complete path to the certbot application:

0 0 * * * /usr/local/bin/certbot renew --quiet --no-self-upgrade

Update the PATH variable to include the missing location. Note that cron does not expand variables like $PATH in crontab assignments, so spell out the full path list rather than appending to $PATH. If no PATH is defined in the crontab, simply add something like the following at the top:

PATH=/usr/local/bin:/usr/bin:/bin

Move the certbot application to a common location like /usr/bin

If you were struggling to figure out why your Let’s Encrypt SSL certificate was not renewing automatically with certbot, I hope you found this page before spending a lot of time debugging.

I have a Macbook Pro and use the media keys (play/pause, skip, volume) extensively when listening to iTunes. I recently noticed that the media keys stopped working. I tried the standard things like restarting iTunes and rebooting. A few weeks back I upgraded to El Capitan (OSX 10.11). I was a bit skeptical of that being the issue since iTunes is a core part of OSX, but you never know.

I did some Google searches that pointed at options in Accessibility solving the issue. None of those solutions worked. Finally I came upon this post on the Mac Rumors forums. I remembered installing the Google Music extension in Chrome a few weeks back, likely in a moment of weakness. I disabled that Chrome extension and BOOM, my media keys started working again.

Another day, another SSL vulnerability. Today’s SSL vulnerability is called POODLE and you can read more about it here. In a nutshell, SSLv3 needs to be disabled on all AWS ELBs. If you only have a single ELB, you can easily switch to the newest ELB policy, ELBSecurityPolicy-2014-10, via the console. Select your ELB in the console, click the Listeners tab, and then click Change under Cipher. Select ELBSecurityPolicy-2014-10 from the Predefined Security Policy drop-down.

If you have a large number of ELBs, you will need to use the CLI and a short script. To get a list of your ELB names, run:
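A sketch of that command, assuming the aws CLI and classic ELBs (the --query expression and output file are my reconstruction, not the original snippet):

aws elb describe-load-balancers --query 'LoadBalancerDescriptions[*].LoadBalancerName' --output text | tr '\t' '\n' > /tmp/lbs.txt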

This will parse the output and dump the ELB names into a text file called /tmp/lbs.txt

The CLI does not allow a new policy to be applied to an ELB that already has a policy. To work around this, I apply an empty policy and then the new policy. There is a potential for a second or two of downtime; I have not had an opportunity to verify this. I ran a for loop through my ELB names and made the policy change:
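A sketch of that loop with the aws CLI; the policy name my-ssl-policy is a placeholder, and the empty-list syntax for clearing the listener’s policies is an assumption:

# Loop over the ELB names collected in /tmp/lbs.txt and swap the SSL policy on port 443.
while read lb; do
  # Clear the existing policy from the HTTPS listener (empty policy list; syntax assumed).
  aws elb set-load-balancer-policies-of-listener --load-balancer-name "$lb" --load-balancer-port 443 --policy-names []
  # Create a listener policy that references the predefined ELBSecurityPolicy-2014-10.
  aws elb create-load-balancer-policy --load-balancer-name "$lb" --policy-name my-ssl-policy --policy-type-name SSLNegotiationPolicyType --policy-attributes AttributeName=Reference-Security-Policy,AttributeValue=ELBSecurityPolicy-2014-10
  # Apply the new policy to the HTTPS listener.
  aws elb set-load-balancer-policies-of-listener --load-balancer-name "$lb" --load-balancer-port 443 --policy-names my-ssl-policy
done < /tmp/lbs.txt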

In an earlier post, I outlined the steps to patch a Linux system and regenerate an SSL certificate in response to the Heartbleed bug. Amazon announced that the openssl code has been patched in the ELB service. If you are terminating SSL on an ELB, you need to regenerate your SSL certificate and upload it to AWS. Here I present the steps to do that.

First create a new private key and a CSR.
Generate an RSA private key:
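A sketch, assuming a 2048-bit key and the file names private-key.pem and csr.pem; this generates the key and the CSR in one step, and openssl will prompt for the certificate details:

openssl req -new -newkey rsa:2048 -nodes -keyout private-key.pem -out csr.pem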

Next upload the CSR to your SSL registrar. The details are different for each provider but you want to find the option to re-key or regenerate your certificate. When you select this option, your registrar will ask you to upload a new CSR. Copy and paste the CSR that you just created. Within a few minutes, the registrar will either email the new certificate or make it available on their website.

After you download the new certificate, log into the AWS Console and go to the Load Balancer section. Select the ELB and click on the Listeners tab. Click on the Change link next to your certificate name. Unfortunately you can’t simply overwrite the existing certificate with the regenerated certificate. Click Upload a new SSL Certificate. You will be presented with a box that looks like this:

Enter a new Certificate Name in the first box. Then copy the new private key that you created in the first step into the Private Key box. Finally, copy the newly generated certificate from your registrar into the Public Key Certificate box. If your registrar provided the certificate chain bundle, you can copy that into the Certificate Chain box; this last step is optional. Click Save and the new certificate should be in use on the ELB within a few seconds.

To verify that the new certificate is being used, open your website in a browser and click the lock icon in the address field. View the certificate details and verify that the issue date is today and not the original date the certificate was issued. Here is a snippet from my updated certificate:
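If you would rather check from a shell (a suggestion, not part of the original steps; substitute your own hostname for www.example.com), this openssl one-liner prints the certificate’s validity dates:

echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -dates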

Unless you are living under a rock, you have heard all of the hysteria surrounding the Heartbleed openssl bug. Due to the nature of the bug and the possible exposure of SSL private keys, the openssl package needs to be updated and the SSL certificate needs to be regenerated. I will present the procedure that I used to patch a CentOS Linux server.

First I updated the openssl package:

# yum update openssl

Next I regenerated my SSL certificate. I needed to create a new private key and a CSR.
Generate an RSA private key:
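A sketch, assuming a 2048-bit key; the file names match the ones mentioned below:

# Generate a 2048-bit RSA private key.
openssl genrsa -out private-key.pem 2048
# Generate a CSR from the private key; openssl will prompt for the certificate details.
openssl req -new -key private-key.pem -out csr.pem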

I am leaving out the details that I used as they are different for each certificate. It’s important that you set the Common Name correctly. The Common Name is the Fully Qualified Domain Name (FQDN) for the certificate. If you are creating a wildcard certificate for foobar.com, then the Common Name is *.foobar.com. I now have two files in my directory, private-key.pem and csr.pem.

Next I uploaded my CSR to my SSL certificate registrar. The exact details will be different between registrars. In my case, I used the User Portal for Geotrust. There was an option to Reissue Certificate. That opened a text box for me to copy and paste my CSR. A few minutes later Geotrust sent me an email with the new certificate. I regenerated a certificate with Godaddy and the option was called Re-Key. Instead of emailing me the new certificate, Godaddy made it available for download from their website.

The last step is to overwrite the existing private key and certificate with the newly created files and restart Apache. To verify that Apache is using the new SSL certificate, visit your site and click on the lock icon in the address bar. View the certificate details and verify that the issue date is today and not the original date the certificate was issued. Here is a snippet from my updated certificate:

Good luck and do not delay patching your systems. Please leave any comments or questions below.

This worked as expected. The dump.sql file contains table1, table2, and table3. Next, I created a shell script and defined a number of variables. The format for the date in the S3 bucket is year/month/day. Today is 3/24/2014, so the date path under the bucket s3://net.dcolon.backups/mysql/ is:

s3://net.dcolon.backups/mysql/2014/03/24

Using the date command, I get each of the values that I need and store them in variables in the script. I take the mysqldump, store it locally, and verify that the process completed without an error. After copying the mysqldump to S3, I rename the local copy, appending the date. You can also add some logic to keep a certain number of recent copies on local disk and delete everything older.
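A minimal sketch of such a script; the credentials, database name, tables, and local path are placeholders, and I am assuming the aws CLI for the S3 copy (s3cmd works just as well):

#!/bin/bash
# Date components for the S3 path, e.g. s3://net.dcolon.backups/mysql/2014/03/24
YEAR=$(date +%Y)
MONTH=$(date +%m)
DAY=$(date +%d)
BUCKET="s3://net.dcolon.backups/mysql/$YEAR/$MONTH/$DAY"

DUMPFILE=/var/backups/dump.sql

# Dump the tables locally and bail out if mysqldump fails.
mysqldump -u backupuser -pPASSWORD mydatabase table1 table2 table3 > "$DUMPFILE"
if [ $? -ne 0 ]; then
    echo "mysqldump failed" >&2
    exit 1
fi

# Copy the dump to S3, then rename the local copy with the date appended.
aws s3 cp "$DUMPFILE" "$BUCKET/dump.sql"
mv "$DUMPFILE" "$DUMPFILE.$YEAR$MONTH$DAY"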

Note: There is an inherent security risk of storing the password in clear text in a script or configuration file. mysqldump will mask your password while the process is running so another user can’t get the password from the process list.

If you spend a moderate amount of time creating and modifying AWS security groups, you will inevitably encounter the “Error deleting security group sg-12345678: resource sg-12345678 has a dependent object” error message.
Trying to find the security groups that reference the group you want to delete can be an exercise in futility in the console. Instead, I make use of the CLI. I dumped all of my security groups into a text file:

aws ec2 describe-security-groups > securitygroups.txt

From there, I opened the securitygroups.txt file in vim and searched for sg-12345678. One entry is for the security group itself and all other matches are for security groups that include the group I want to delete.
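As an alternative (a suggestion, not part of the original workflow), the ip-permission.group-id filter lets the CLI do the searching for you:

aws ec2 describe-security-groups --filters Name=ip-permission.group-id,Values=sg-12345678 --query 'SecurityGroups[*].{ID:GroupId,Name:GroupName}' --output table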

It’s also possible that the group is attached to a network interface. I found the solution for this situation here.

You created an AWS instance and decided to make a small root volume. Maybe you were shortsighted or just being cheap. Now the root volume is nearly full and you can’t expand it because you didn’t use LVM.

To add insult to injury, AWS recently cut the price of EBS storage in half. This issue can be fixed with a handful of commands from the CLI.

Before you begin, take note of a few important details about the instance. You need to know the instance-id and the availability-zone that your instance is running in. You can get the instance-id from the running instance by querying the instance metadata service. More details about the metadata service are available here.
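For example, from the running instance:

# Instance ID from the metadata service
curl http://169.254.169.254/latest/meta-data/instance-id
# Availability zone
curl http://169.254.169.254/latest/meta-data/placement/availability-zone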

Now that you have a snapshot (snap-e76a3b25) of the root volume, you can create an exact, larger copy of it. If the snapshot state is still pending, you will get an error. Do not forget to specify the size, or the new volume will be the same size as the old one.
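A sketch of that step with the aws CLI; the 50 GB size and us-east-1a availability zone are placeholders for your own values:

aws ec2 create-volume --snapshot-id snap-e76a3b25 --size 50 --availability-zone us-east-1a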

Problem: I have a central rsyslog server that is filling up. I used LVM on the log partition so that I can grow it on demand. Here are the exact steps that I took to grow the partition.

I allocate fifteen 25 GB volumes and write the ec2-create-volume output to a file disks.txt. Volumes must be allocated in the same availability zone (AZ) as the instance (server) that they will be attached to.
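A condensed sketch of the sequence, assuming the legacy EC2 API tools, the us-east-1a availability zone, and a volume group and logical volume named vg_logs and logs (all of those names are placeholders):

# Allocate fifteen 25 GB volumes in the same AZ as the instance.
for i in $(seq 1 15); do
    ec2-create-volume -s 25 -z us-east-1a >> disks.txt
done

# After attaching each volume to the instance (ec2-attach-volume), grow the LVM stack:
pvcreate /dev/xvdf                       # repeat for each new device
vgextend vg_logs /dev/xvdf               # add each new PV to the volume group
lvextend -l +100%FREE /dev/vg_logs/logs  # grow the logical volume into the new space
resize2fs /dev/vg_logs/logs              # grow the ext filesystem online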

I am an avid GNU screen user. I have been using it for about fourteen years and can’t live without it. About two years ago I started using Byobu, which adds some eye candy to screen. Recently I started using IRC again. A coworker introduced me to Weechat. If you still use IRC, check out Weechat. All three of these programs are available in the standard Debian and Ubuntu repos.

While I’m getting used to Weechat, I discover that the keybindings to scroll through the list of nicks in a channel are F11 and F12. I hit F12 and screen invokes lockscreen. After a lot of Googling, I find that the termcap name for F12 is F2. Digging through my .screenrc I can’t find any reference to F2. Then it occurs to me to check my Byobu settings. Sure enough I find the following in ~/.byobu/keybindings:

bindkey -k F2 lockscreen # F12 | Lock this terminal

I commented out that line. It took me a little while to figure out how to change the keybinding for my current session. There is no unbindkey command in screen; instead, you bind the key to nothing.
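Something like this, either in .screenrc or typed at the screen command prompt (Ctrl-a :), does the trick:

bindkey -k F2   # giving no command for termcap key F2 (the F12 key) removes the binding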