
Anyone responsible for hosting web services protected by SSL/TLS should be at least curious about how they might score against Qualys SSL Labs Server Test. I know I was when I first became aware of the tool. The results may surprise you, and you’ll probably learn a lot if you actually put the effort into securing and optimizing your configuration to get a higher score. I’d like to share some of my Apache configurations to hopefully save some folks out there a little time and raise awareness about web security.

I’ll start by removing all configurations I’ve added to achieve my A+ score, and we’ll slowly tighten the screws to see the effect each configuration has on the results of the test.

Ouch! If I’m being honest, I may have intentionally sabotaged my Apache config a little to get a score like this. It turns out that if you’re running a fully patched CentOS 7 web server with Apache 2.4.6, it does an OK job of being secure out of the box. I enabled all possible ciphers, excluded the secure ones, and used a 1024-bit certificate issued by an untrusted CA to add a little dramatic effect. I tried to make things worse by enabling SSLv2 and SSLv3, but they are no longer supported in the version of Apache I am using. Since I can’t use that as an example here, just make sure you have a line like this in your Apache configuration to ensure all insecure SSL/TLS protocols are disabled.

SSLProtocol all -SSLv2 -SSLv3 -TLSv1

For our first change, let’s fix that certificate by requesting one using Let’s Encrypt with at least a 2048-bit private key.
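With certbot, the request looks something like this (a sketch; the Apache plugin and the example.com domain are assumptions about your setup, and 2048 bits is already certbot’s default RSA key size):

```shell
# Sketch: request and install a Let's Encrypt certificate using certbot's
# Apache plugin. "example.com" is a placeholder for your domain.
certbot --apache -d example.com --rsa-key-size 2048
```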

That’s good progress. We’ve gotten rid of a few warnings, but we still have an ugly “F”. Next we’ll make some changes to the supported ciphers.

Excellent! In these results, we notice that the server does not support “Forward Secrecy.” I intentionally left out the ECDHE suite of ciphers just to bring attention to this and stress the importance of making sure these ciphers are enabled. For our final cipher hardening and to fully support perfect forward secrecy, we need to make sure the following lines exist in our config.
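A minimal sketch of those lines (the exact cipher string here is my example, not a canonical list; validate it against your client compatibility requirements and re-run the test):

```apache
# Prefer the server's cipher order and allow only ECDHE suites so every
# connection gets forward secrecy; tune the list for client compatibility.
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:!aNULL:!eNULL:!MD5:!RC4
```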

Looking good! Now, to finally get our server to score that A+, we need to enable HTTP Strict Transport Security (HSTS). This is simply an additional HTTP response header, stored by the browser for the amount of time specified in the header, that tells the browser to force the use of HTTPS. This prevents software like SSLStrip from intercepting web requests and convincing your browser to use HTTP instead. On a WordPress site this security feature can be enabled by just adding a plugin, but Apache gives us a great way to enable it within our config using the following line.
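A sketch of that line (mod_headers must be loaded; the max-age value of roughly six months is my choice and can be raised once you trust the setup):

```apache
# HSTS: tell browsers to refuse plain HTTP to this host for 15768000
# seconds (~6 months). Requires mod_headers.
Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains"
```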

Perfect! Now we can technically go one step further with our HTTPS security by creating some additional headers to support a feature called HTTP Public Key Pinning (HPKP). This will tell your browser to store at least one certificate in the chain upon its first visit to a given site. If the next visit doesn’t contain the cached certificate in the chain, it will prevent the user from being able to visit the site. This is extremely effective at preventing man-in-the-middle (MITM) attacks, but requires a strong understanding of how it works and lots of diligence to maintain it properly. Currently only Chrome, Firefox and Opera support HPKP, and Chrome has announced plans to remove support for it because of the possibility for an attacker to install malicious pins or for a site operator to accidentally block visitors. Given that, it’s not something I would recommend, but I want to at least touch on it for completeness.

I hope this article was helpful and informative. Please leave questions and comments below.

Convert a VHD image from a native Windows backup to raw format using qemu-img, and write it directly to a disk or partition with the Linux dd command

I’ve recently been evaluating native Windows Server Backup as an option for bare-metal backup and recovery for our remaining physical servers at work. The utility creates several XML files and a VHD image for each partition it backs up. It seems to work ok for the most part, but I ran into a problem when I came across a system that for some unknown reason had a 38MB boot partition with insufficient space to create a VSS snapshot, thus preventing the tool from properly backing up the partition. I’ve read all the articles about allocating storage on a separate partition to get VSS to behave, but I could never get it to function correctly.

This got me thinking… I have some personal trust issues with the reliability of Microsoft products to begin with, and these problems were only reinforcing my fear of something going wrong during the restore process. This led me to start researching restore options using the VHD files produced by native Windows Server Backup.

Option 1 is to do a restore using a Windows installation CD and select the restore option. Option 2 is to mount the VHD and manually copy files. This option is really only good for individual file restores; it’s obviously not something you’d want to do for a bare-metal restore. Option 3 is to restore the VHD image directly to disk. This is the option I was most interested in, and it made sense to me that there would be a straightforward way of doing this, since every other bare-metal backup solution I’ve used had this sort of option. While searching for a tool to write a VHD image to disk I found “VHD2Disk”. Unfortunately, this tool was designed to do just that… write a VHD to a DISK, with no option for writing to a partition. Feel free to correct me if I’m wrong, but I see no way of ever getting two partitions onto a single disk with this tool, which makes it useless for my purposes.

After finding no tool to do an image-to-partition write, I became curious whether there was a way to just use the dd command in Linux. After all, I would have instinctively turned to dd if this were something I were doing with Linux: dd does block-by-block copying, and a block is a block regardless of which OS you’re using. I quickly learned this is not something that can be done directly with a VHD, because VHDs are not in raw format, but the qemu-img utility supports the VHD format and can convert it to a raw image. Below you will find detailed instructions on how to convert the VHD and use dd to write your new image to a disk or partition. I’ll also give some details about getting the system to boot if, like me, you don’t have a good backup of the boot partition.

The backup directory created by Windows Server Backup will look like this. I’ve highlighted the “interesting files” that I’ll mention throughout the article. The one ending in “Components.xml” has useful information about the disk partition layout that can come in handy when recreating partitions on your new disk. The .vhd file is the actual image data.

The first thing we need to do is convert the VHD image to raw format. To do so, you’ll need access to a Linux environment with the qemu-img command. I’d recommend using either a Clonezilla or GParted liveCD, as they both have all sorts of utilities pre-installed for disk imaging and partitioning. Boot the CD on the system you will be restoring the image to. When it finishes booting, type the following commands to install qemu-img: (You may need to type sudo before each command if you’re not root. Keep this in mind for the remaining commands as well.)

apt-get update
apt-get install qemu-utils -y

My backups are on a windows file share so I’ll use the following command to mount them to the /mnt directory:
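The commands looked something like this (a sketch; the server name, share, credentials, backup path, and filenames are all placeholders for your environment):

```shell
# Mount the Windows share containing the backups (placeholders throughout).
mount -t cifs //fileserver/backups /mnt -o username=backupuser

# Convert a VHD to a raw image; "vpc" is qemu-img's name for the VHD format.
cd "/mnt/WindowsImageBackup/myserver/Backup 2016-01-01 000000"
qemu-img convert -f vpc -O raw myserver.vhd myserver.raw
```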

You’ll need to repeat this command for any additional partitions you need to convert. Be sure to change the .vhd and .raw filenames to those appropriate for your environment. To be clear, the .vhd filename should be the one that exists in this directory like the highlighted file in the screenshot above, and the .raw filename can be whatever you want to name it.

You’ll notice a new file is created that reflects the full size of the partition, regardless of how much data it actually contains. This is expected, given the nature of the raw image format.

The conversion process can take a long time depending on the size of the partition. You can use the following command to output the status of the process: (I’ve noticed there is often some delay before the command writes to stdout)

root@debian# kill -SIGUSR1 `pidof qemu-img`
root@debian# (22.03/100%)

Next you’ll need to create the 100MB boot partition (unless you’re restoring only a single partition and all others are fully intact) and any additional partitions the system originally had. I’ll assume you know how to do this, but you can use the output below for help if necessary. In the event that you don’t know the original partition layout, you can use the raw image size as a hint or the “Components.xml” file generated by Windows Server Backup in the backup directory for the server. With the values BytesPerSector, PartitionOffset, and PartitionLength contained in that file, you can re-create the exact partition table.
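As a rough outline, an fdisk session to recreate a typical two-partition layout might look like this (sizes are placeholders; compute the real start sectors as PartitionOffset / BytesPerSector from Components.xml):

```shell
# Outline of an interactive fdisk session (not a script):
#   n  -> new primary partition 1, size +100M  (System Reserved)
#   n  -> new primary partition 2, rest of the disk
#   t  -> change both partition types to 7 (HPFS/NTFS/exFAT)
#   a  -> toggle the boot flag on partition 1
#   w  -> write the partition table and exit
fdisk /dev/sda
```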

If you created the 100MB boot partition, format it as NTFS with the default “System Reserved” label:

mkfs.ntfs -f /dev/sda1 -L "System Reserved"

A VHD always stores its filesystem as a partition within the image, which means we have to find the offset where the data actually begins in the raw image before we can write it to disk. Use the following command:
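fdisk can read the partition table embedded in the raw image file directly:

```shell
# List the partition layout inside the raw image; note the reported sector
# size (e.g. 512 bytes) and the start sector of the partition (e.g. 128).
fdisk -l myserver.raw
```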

The values “512” and “128” are what we need from this output. This tells us that the block size is 512 bytes and the partition starts at sector 128. Now we have all the information we need to give dd to write the image to our physical disk using this command:

dd if=myserver.raw bs=512 skip=128 of=/dev/sda2

You’ll need to repeat this command for any additional partitions you need to restore. You can use the following command to get the status of the dd process:
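Like qemu-img above, dd reports progress when signaled:

```shell
# dd prints I/O statistics to stderr when it receives SIGUSR1
# (on newer coreutils you can instead launch dd with status=progress).
kill -USR1 $(pidof dd)
```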

After you’ve restored all of your partitions, it’s time to reboot. Don’t forget to make sure the boot flag is on for your boot partition. If you didn’t have a copy of the boot partition, you’ll need to use the Windows installation CD to repair the MBR. This usually involves a combination of the startup repair option available from the installation CD as well as some of the boot repair utilities that you can run from a command prompt on the Windows installation CD, like these:
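For reference, the usual suspects are (run these from the recovery environment’s command prompt, not Linux):

```bat
bootrec /fixmbr
bootrec /fixboot
bootrec /rebuildbcd
```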

UPDATE:

I recently found a cool new way of mounting the VHD image directly and imaging from the virtual block device, instead of waiting for the qemu-img conversion and using up your precious storage for both the VHD image you already have and a raw copy of the data. Below are the commands to load the NBD kernel module with the right arguments, mount the VHD image as a virtual block device, and perform a dd copy to your physical disk. This assumes you’ve already booted the liveCD, installed qemu-utils, mounted the media containing your backups, and changed directory to the path containing the VHDs.
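Here’s a sketch of those commands (the VHD filename and target partition are placeholders):

```shell
# Load the NBD module; max_part > 0 makes the kernel expose partitions
# inside the image as /dev/nbd0p1, /dev/nbd0p2, and so on.
modprobe nbd max_part=8

# Attach the VHD as a virtual block device.
qemu-nbd -c /dev/nbd0 myserver.vhd

# Copy the first (and only) partition in the image to the target partition.
dd if=/dev/nbd0p1 of=/dev/sda2 bs=1M
```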

You can see I’m using /dev/nbd0p1 as the source for the dd command. This is because, as I mentioned earlier in the article, each VHD image contains a partition; nbd0p1 references the first (and only) partition in the nbd0 virtual block device. Previously we had to give dd the block size and offset to specify where the partition started. Use the following command to remove the virtual block device for the VHD image.

qemu-nbd -d /dev/nbd0

If you have any questions or if you found this post useful, please leave a comment!

In my workplace, our helpdesk needs the ability to quickly and easily delete user profiles remotely. I did a little tinkering with wbemtest and found I could call the Delete() method on any of the WMI objects returned by the query “SELECT * FROM Win32_UserProfile”. It properly deletes the profile’s associated files and registry keys the same way the native Windows GUI tools do. The problem with the native tool, however, is that you need to be logged in to use it, it is fairly slow and clunky, and you can only select one profile at a time for deletion. This accounts for a lot of wasted time. So I took what I learned and created a little VBScript that made some WMI calls and deleted profiles. This worked great, but the helpdesk needed the ability to selectively choose which profiles get deleted through some form of user interface. I wanted the simplest possible solution that required no dependencies (.NET, AutoIt DLLs, etc.). I found the best way to do that was to make an HTA application.

The first version of my profile cleanup HTA was very basic but served its purpose well. The problem was everything was done using synchronous WMI calls. I’ve recently been playing with a lot of Node.js to understand this whole “non-blocking IO” asynchronous programming methodology, and it got me wondering if I could do the same with this HTA application. It’s not difficult to find examples online for creating WMI queries and calling methods asynchronously, but getting them to play nice in the HTA application proved to be a challenge. At least for me : ).

One problem I had was that certain things only worked using JScript, while other things only worked using VBScript. Fortunately, I found a way to use both and reference functions in both languages from either language. The next problem was finding a way to reference the “WbemScripting.SWbemSink” object within the HTA. The way I found to do it was by referencing the object by its class ID like so:

My first attempt at improving the UI was to make the function calls using the JavaScript setTimeout function, but that didn’t seem to change anything. To prevent the window from freezing I had to do everything asynchronously within WMI. I’m including links to both versions of the application. The old version should really only be used for educational purposes, for developers interested in a before-and-after demonstration of asynchronous WMI vs. standard synchronous WMI. The second version works quite well and is safe to use in production. Just be sure you don’t accidentally delete some important data in a user’s profile. Any comments, suggestions, improvements or questions are welcome!

At work we were evaluating different options to enable two-factor authentication for VMware Horizon View. They all cost more than we were interested in paying, and none had the ability to integrate with the communication platforms we wanted to use for delivering the PIN used as the “second factor”. Given that, my director gave me the opportunity to innovate and develop something custom.

Before we get started, you should know that I will not be providing a complete solution for two-factor authentication with freeradius. My intention in this post is to demonstrate a working example of freeradius issuing an Access-Challenge response to a VMware View authentication request to achieve two-factor authentication. Further development will be necessary to provide a full “solution” (integrating the freeradius perl module with LDAP or some other central authentication mechanism, as well as delivering and validating PINs). If you have any questions about how I achieved this, feel free to ask in the comments.

I had been looking for a good reason to play with freeradius and I finally had one. After some research within VMware’s documentation I knew I needed to learn how to get freeradius to send an “Access-Challenge” response.

The code above is extremely bare-bones and serves only as an example of using the perl module with freeradius to send an authenticator an Access-Challenge response to an authentication request. You will want to modify the “testusernamehere” and “testpasswordhere” strings to something more appropriate, and optionally the “1234” test PIN. This code first authenticates a user by validating their username and password. If that succeeds, an Access-Challenge response is sent to the authenticator and the “State” AVP (Attribute-Value Pair) is set to “challenge”. When the authenticator receives the Access-Challenge, it prompts for a PIN. When the PIN is entered, the request is processed by the first block of code, because the text value of the “State” AVP set in the previous request (“challenge”) now matches the hexadecimal string “0x6368616c6c656e6765” in the first if statement; “challenge” is simply the text equivalent of that hexadecimal string. The same User-Name is sent as before, but this time User-Password must match “1234”. Any other PIN will cause authentication to fail.

Here are screenshots of the Horizon View client authentication behavior using a freeradius server with this configuration.

I’ve been doing a lot of playing with multicast lately and I always have to google for a while to find these commands. I figured it was time to throw a post together for a quick reference. Hopefully someone else can benefit from this too.

Below you can find the commands to determine whether a system or switch port is a member of a multicast group on Cisco IOS, Windows, and Linux. Multicast uses IGMP to join these groups, and there is no way to join a group manually; the operating system does it automatically when an application requests it. These commands can come in handy when you’re trying to figure out why you’re not seeing the multicast traffic you’re expecting.
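For quick reference (the Cisco snooping command assumes IGMP snooping is enabled on the switch):

```
# Cisco IOS
show ip igmp groups
show ip igmp snooping groups

# Windows
netsh interface ip show joins

# Linux
ip maddr show
netstat -g
```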

I recently came up with a unique and free way to do screen recording and broadcasting by leveraging a few unrelated, open source software components. The intention is not brief screen captures but permanent recording: begin the recording on logon/unlock and stop it at logoff/lock, with the ability to monitor the session live, hear audio from the local microphone, and optionally activate the webcam and overlay it in a corner of the view.

Here’s a high-level overview of how everything will work:

NGINX is running with the RTMP module ready to receive RTMP AV streams and record them, making a new file every 5 minutes

FFmpeg launches at logon/unlock sending an RTMP stream to NGINX either locally or on a server remotely. It will use the UScreenCapture DirectShow filter and optionally connect to a local microphone and/or webcam.

During streaming, the session can be viewed live. FFplay, VLC, or flowplayer will work for this.

FFmpeg is killed at logoff/lock and the recording is stopped on NGINX.

I’m providing the NGINX build I found because it has the RTMP module compiled in, I’ve already put the stat.xsl file from the RTMP module in the html directory, and it already has the necessary configuration. It may not be the latest build out there, so feel free to use it as a reference for a better download you can probably find elsewhere.

To get everything in place, extract your ffmpeg download into C:\ffmpeg. This way the executable will be located at C:\ffmpeg\bin\ffmpeg.exe. Do a normal “next, next, finish” install of UScreenCapture. Finally, download the nginx zip and extract it to C:\nginx so that the executable is located at C:\nginx\nginx.exe. Feel free to install these components in alternative locations, but understand that you will need to modify the commands I provide accordingly.

Before we get ahead of ourselves, let’s make sure everything is working correctly. Start by opening a command prompt and typing “C:\ffmpeg\bin\ffmpeg.exe -list_devices true -f dshow -i dummy”. We need to make sure that the dshow filter “UScreenCapture” is listed in the output.
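Once that checks out, start nginx from its install directory (nginx for Windows resolves its paths relative to the current directory, so the cd matters):

```bat
cd C:\nginx
start nginx.exe
```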

That should start nginx in the background, and you should be able to browse to http://127.0.0.1:81/ and see “Welcome to nginx for Windows!” I used port 81 in the configuration in C:\nginx\conf\nginx.conf to avoid conflicts with other web servers that might be installed. If for some reason nginx isn’t working for you, check error.log located in C:\nginx\logs. For any sort of production configuration, I highly recommend compiling the latest build with the RTMP module on a Linux server.

Now, from a command prompt, enter the command "C:\ffmpeg\bin\ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -rtbufsize 1500M -f dshow -i video="UScreenCapture" -c:v libx264 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 40 -profile:v baseline -x264opts level=31 -pix_fmt yuv420p -preset ultrafast -f flv rtmp://127.0.0.1/view/%USERNAME%-%COMPUTERNAME%". If you’d like, you can use a streaming URL like rtmp://127.0.0.1/view/test. I like to use something that will be unique if multiple streams are being broadcast, but that is also meaningful.

If the stream is working properly, you should see some statistics at http://127.0.0.1:81/stats, and you should see recordings being generated within C:\nginx\recordings. Use VLC to play the recordings. To view the stream live with VLC, click Media->Open Network Stream and enter the network URL “rtmp://192.168.164.110/view/username-computername”. Keep in mind that the username and computername here are case sensitive and should match exactly what is shown on the statistics page at http://127.0.0.1:81/stats.

Be patient as it can take some time for VLC to detect the video codec before it begins displaying. You can press “q” or Ctrl+c to stop the ffmpeg stream.

I did my best to tweak the command so that there is a good balance of quality and efficiency, but if you’d prefer higher quality video, try changing the -crf parameter to a lower value like 23, or a slower -preset value like “fast”. A word of caution: the slower the preset you choose, the higher your CPU utilization will be. The “scale=trunc(iw/2)*2:trunc(ih/2)*2” part of the command avoids “not divisible by 2” errors when either the height or width of your stream resolution is an odd number. I ran into this in our VDI environment, because the client screen can be resized to any dimensions, so you will frequently hit this problem.

If video is all you need, at this point you can simply run the following vbs script using task scheduler with a logon and unlock event as the trigger:

To kill ffmpeg at logoff/lock, use task scheduler again with the appropriate triggers and run the command taskkill /f /im ffmpeg.exe.

When I first set out to get screen recording working for my purposes, I was originally attempting to save directly to an MP4 over a CIFS share, but I still had to kill the ffmpeg process, because obviously we want it running in the background and there is no way to interact with the process to stop it gracefully. Terminating the process that way would corrupt the MP4. With NGINX receiving the RTMP stream and handling all of the recordings independently of ffmpeg, you can kill the process without corrupting the video files.

Be sure to do some testing to make sure ffmpeg is terminating and launching correctly during the events you are using to trigger it. It is a good idea to set up an idle timeout/screensaver that locks your workstation and kills ffmpeg’s stream to avoid wasting storage on useless video.

I’ll try to post some more flexible/dynamic scripts later to demonstrate how to capture audio from the local microphone and overlay a webcam. If you have any input or questions, please comment below.

I’m working on setting up two fully redundant servers to host all sorts of services from the house. Most of the HA is automated via keepalived scripts, but I needed another one to automatically migrate all VMs from one host to another using libvirt. This is analogous to putting an ESXi host in “maintenance mode”. I thought I’d share the bash script I threw together.

First make sure you can successfully migrate manually then replace the $HOST variable with your target host and give it a shot. The script will first migrate all live VMs and then do an offline migration of all powered off VMs. Enjoy!
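Here’s a minimal sketch of such a script (the $HOST URI is a placeholder; it assumes shared storage and working passwordless SSH between the hosts, so test with a disposable VM first):

```shell
#!/bin/bash
# Sketch: evacuate all VMs from this host to $HOST, live-migrating the
# running guests first, then offline-migrating the powered-off ones.
HOST="qemu+ssh://otherhost/system"   # placeholder target URI

# Live-migrate every running VM.
for vm in $(virsh list --name); do
    virsh migrate --live --persistent --undefinesource "$vm" "$HOST"
done

# Offline-migrate every powered-off VM (copies the definition only).
for vm in $(virsh list --name --inactive); do
    virsh migrate --offline --persistent --undefinesource "$vm" "$HOST"
done
```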

I have a large video library and I’ve been on the lookout for the best device to access all this media. It must support DLNA, must not have Cinavia, and obviously I’d like it to support as many audio and video codecs as possible. That eliminates most Sony products, because they all seem to have Cinavia, including the PlayStation. I tried a Chromecast, and I won’t go into the details of how much I absolutely hated that useless piece of garbage. I still have a device running GoogleTV, which is definitely my favorite, but unfortunately it has been discontinued by Google.

After much research I bought a Roku. I like it a lot, but it can be pretty picky about audio and video codecs. When videos have multiple audio streams, whether it be DTS and stereo or multiple languages, the device will sometimes play no audio or the wrong language. Fortunately, it is generally pretty simple to demux the streams and remap them in a way that the Roku will tolerate, but the device does not support AVI. This means if I want to keep the Roku around, I’ve either got to run Plex or some other transcoding-capable DLNA server, or convert all of my AVIs to H264 MP4s. I like to be as efficient as possible, which rules out transcoding a video every time you watch it, so I developed a little bash script to convert all the AVI files in my video library to MP4.

To run the script, you’ll need to have the perl-based “rename” utility installed as well as ffmpeg.
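Here’s a sketch of the script (the ffmpeg flags are reasonable defaults rather than the exact originals, and this version derives the new filename directly instead of calling rename):

```shell
#!/bin/bash
# Sketch: re-encode every .avi under the library to H.264 video with 192k
# stereo AAC audio, then delete the original. The path is a placeholder.
find /path/to/your/video/library/ -type f -name "*.avi" | while read -r f; do
    # -nostdin stops ffmpeg from consuming the list of filenames on stdin.
    ffmpeg -nostdin -i "$f" -c:v libx264 -c:a aac -b:a 192k -ac 2 "${f%.avi}.mp4" \
        && rm -f "$f"
done
```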

Just change “/path/to/your/video/library/” to the real path to your video library and let the script do its thing. If you’d like to convert other video types, just change the search parameters “-name *.avi” to something that suits your needs. All videos will be re-encoded to H264 video, and 192k stereo AAC audio. It will then rename the file and delete the original file.

If anyone has any modifications or useful custom scripts you’d like to share, please leave them in the comments.

This one-liner doesn’t depend on a specific version of the rename utility. It also supports more versions of ffmpeg. The only flaw now is that it only supports the lowercase avi extension. I’m still working out the rename part of the script to handle that properly.
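The one-liner looks something like this (a sketch; as noted, it only matches the lowercase .avi extension, and the library path is a placeholder):

```shell
find /path/to/your/video/library/ -type f -name "*.avi" -exec sh -c \
  'ffmpeg -nostdin -i "$0" -c:v libx264 -c:a aac -b:a 192k -ac 2 "${0%.avi}.mp4" && rm -f "$0"' {} \;
```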

I have a cubieboard set up at a friend’s house as a VPN and a backup target. I went to SSH into it the other day and found that almost every command I entered returned “Input/output error”. So I did the obvious and attempted a reboot; however, both commands “reboot” and “init 6” returned “Segmentation fault” and did nothing. So I set out to find the most generic way to force reboot any Linux distro regardless of systemd, kernel version, or any other variable, and I found it! The method reminded me a lot of ALT + SysRq + REISUB, only not as gentle, considering the only signal it sends is a reboot. Maybe it can be modified to include the R, E, I, S & U, but here is the command I used in case someone else out there is in a hurry to get back online without concern for a safe shutdown!
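The trick is the magic SysRq interface, which the kernel exposes through /proc even when userspace is falling apart. These two writes force an immediate reboot with no sync or unmount, so expect a filesystem check on the way back up:

```shell
# Enable the magic SysRq interface, then trigger an immediate hard reboot.
echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger
```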