Today, I am working on a project that uses a Puppet template to create a Stackdriver config file: /etc/sysconfig/stackdriver

For some reason, this Puppet template (.erb) is in 'dos' file format (it came from a colleague on Windows), so the line breaks are CR + LF. Usually this is fine, because the config file is mostly just read by a Linux service. However, the Stackdriver service startup script contains the line '. /etc/sysconfig/stackdriver'; the service sources this config file to initialize certain variables. In this case, if the config file has Windows line breaks, you will see this error:

: command not found
: command not found

I then used Sublime Text to recreate the file in Unix format, and the problem went away. Hope this helps someone.
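If you would rather fix the file in place on the server, stripping the carriage returns with sed (or dos2unix, where it is installed) does the same job. A minimal sketch; the file path and variable name here are just for illustration:

```shell
# Simulate a DOS-format config file (CRLF line endings).
printf 'ENABLED="yes"\r\n' > /tmp/stackdriver.demo

# Strip the trailing CR from every line, converting CRLF to LF in place.
sed -i 's/\r$//' /tmp/stackdriver.demo

# The file can now be sourced cleanly by a startup script.
. /tmp/stackdriver.demo
echo "$ENABLED"
```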

These Sublime Text user preference settings are recommended, unless you are a Windows platform developer:

{
"default_line_ending": "unix",
"translate_tabs_to_spaces": true
}

Update: It turns out this happened because I copied the files from my Windows PC over to the Linux server. What I should have done is git clone the files on the Linux server; that way Git helps deal with the line-ending issue.
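If you move between Windows and Linux machines regularly, you can also tell Git explicitly how to handle line endings, so a clone on the Linux side always gets LF. A sketch of the usual setting on Linux:

```shell
# Convert CRLF to LF on commit, and never write CRLF into the
# working tree on this (Linux) machine.
git config --global core.autocrlf input
```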

With the DevOps concept in mind, I don't normally set up a server manually anymore. I either use Puppet (or Chef, Ansible) or write a quick Bash script to do the server provisioning. Sometimes, though, a package I need to install pops up a blue GUI asking for user input. This is annoying, as it can break my provisioning automation. No worries: I can use debconf-set-selections (Ubuntu) to preconfigure the user inputs.

Let's use an example: today I need to add newrelic-php5 to a server provisioning script written in Bash.

First let me see if I have any preconfigured values in place for newrelic-php5:

# debconf-get-selections | grep -c newrelic
0

Good, let me go ahead and install it.

# apt-get install newrelic-php5 -y

As you can see, it asks me for the New Relic license, followed by the application name. After filling in the details and finishing the installation, I run this command again to see if any preconfigured values have been saved in the debconf database:

This time I got something. Since I was asked for two inputs during the installation, one for the license and one for the application name, all I need to do to install this package headless is preseed these two values with debconf-set-selections.
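The end result looks something like this. Note that the debconf key names below are assumptions for illustration; the authoritative names are whatever debconf-get-selections printed after the manual install:

```shell
# Preseed the two answers so the install never prompts (key names are
# hypothetical -- copy the real ones from debconf-get-selections output).
cat <<'EOF' | debconf-set-selections
newrelic-php5 newrelic-php5/license-key string YOUR_LICENSE_KEY
newrelic-php5 newrelic-php5/application-name string "My PHP Application"
EOF

# Now the package installs headless, with no blue GUI.
apt-get install -y newrelic-php5
```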

I am working on an AWS EC2 instance provisioning script today. I am using Packer to provision the instance and build an AMI. The instance runs Ubuntu. There is a very annoying error when I use 'apt-get -y upgrade' or 'apt-get -y install':

debconf: unable to initialize frontend: Dialog
debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:

A quick Google search shows the fix is to add "DEBIAN_FRONTEND=noninteractive" to your script
(credit to link to stackoverflow)

However, it still didn't work for me until I realized I needed to add 'export':

export DEBIAN_FRONTEND=noninteractive

Apparently 'export' makes the variable available to any process started from your current shell (though not to the parent process). Without 'export', the variable's scope is restricted to the current shell, and it is not visible to any other process.
(credit to link to stackoverflow)
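The difference is easy to demonstrate: a plain assignment is invisible to child processes (such as apt-get, which runs as a child of your provisioning shell), while an exported variable is inherited. A quick sketch:

```shell
unset DEBIAN_FRONTEND

# Plain assignment: the variable exists only in the current shell.
DEBIAN_FRONTEND=noninteractive
sh -c 'echo "child sees: ${DEBIAN_FRONTEND:-nothing}"'   # child sees: nothing

# Exported: every child process inherits it.
export DEBIAN_FRONTEND
sh -c 'echo "child sees: ${DEBIAN_FRONTEND:-nothing}"'   # child sees: noninteractive
```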

As of today, an AWS WorkSpace (Standard) only comes with 50G of user data (the D drive). You can install the WorkDocs client on your WorkSpace to have your data synced. However, if your WorkDocs data exceeds 50G, it will fill up your WorkSpace D drive very quickly.

Here is a workaround:

1. Start an EC2 Windows instance with a big enough EBS volume, say 300G. This instance must be launched in the same VPC as your WorkSpace.

2. Join this instance to your AWS Directory (the same Directory as your WorkSpace).

3. Install the WorkDocs client on this EC2 instance and share the WorkDocs folder to Directory/Users.

4. Add this instance to the existing WorkSpace Controller security group, or make sure it is accessible to WorkSpaces via port 445.

5. In each WorkSpace, just map the shared drive on this EC2 instance: \\[ec2 instance computer name]\workdoc
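The drive mapping on each WorkSpace is the standard net use command (run from cmd; the computer name and drive letter below are placeholders):

```bat
REM Map the shared WorkDocs folder from the EC2 instance as drive W:
REM and keep the mapping across reboots.
net use W: \\EC2-COMPUTER-NAME\workdoc /persistent:yes
```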

There you have it: good speed and plenty of space for your WorkDocs data.

For how to join an EC2 instance to your AWS Directory, please refer to this doc (I am in the AWS Sydney region, and we can only do this manually): http://docs.aws.amazon.com/directoryservice/latest/adminguide/join_windows_instance.html

You will also be able to install the Active Directory management tools on this EC2 instance for management purposes.
http://docs.aws.amazon.com/directoryservice/latest/adminguide/install_ad_tools.html

Hope this helps. If you have other ideas and suggestions, please feel free to let me know.

I have been a CentOS fan for the last couple of years, but I have heard Ubuntu is a great system too, especially user-friendly.

Today, a client asked me to rebuild an Amazon server for his Zend Framework PHP web application due to its current stability issues. (Amazon Linux seems to have a problem with MySQL that randomly causes memory issues.) I figured it was a good time for me to try Ubuntu.

First, start up a new Amazon micro instance with Ubuntu LTS (LTS releases are more stable and have long-term support).

It is painful when you need to copy a large amount of data from one server to another and come across the "file path too long" error. It means you have to dig into each subfolder to either rename the file or zip it up. That would take all night if you have a lot of these errors. I had this problem today.

What I am doing is helping a client migrate from Windows SBS 2008 to Windows SBS 2011. There is over 200G of data to copy across. For the first couple of "file path too long" errors, I just noted them down, until I found there were too many of them.

A quick search on Google led me to Robocopy. I have to say this is an excellent tool to handle exactly this case. People actually use this tool to do server backups too.

robocopy "\\server1\folder1" "D:\folder2" /e /z /dcopy:T /XO

A simple command line came to the rescue: /e copies subdirectories (including empty ones), /z uses restartable mode, /dcopy:T preserves directory timestamps, and /XO skips files that are older than the copy at the destination. I can then sit back and enjoy my coffee.

Update: I just realised today that you can use Robocopy to delete files as well.
Create an empty folder and run this command:
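A sketch of the usual form of this trick (the paths below are placeholders): mirroring an empty folder onto the target makes Robocopy delete everything in the target.

```bat
REM Mirror an empty folder onto the target; /MIR makes the target
REM match the (empty) source, deleting everything in it. This is fast
REM for huge trees and immune to "file path too long" errors.
mkdir C:\empty
robocopy C:\empty "D:\folder-to-delete" /MIR
rmdir C:\empty
```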

CentOS 6.2 Minimal Installation

The ISO is only 200+MB. With this ISO, you can install a very, very basic running CentOS system, without even the base tools. You will most likely need to run 'yum groupinstall base' after the system boots. I would have gone with this option, but I don't want to waste too much time wondering which packages are missing and reinstalling them.

PS: During the installation, you won't be asked to choose packages to install. You can only install more packages with yum afterwards.

CentOS 6.2 Net Installation

The ISO is 100+MB. You will need network access for sure. During installation, it will ask you to configure the network (or just use DHCP). You will also need to enter the CentOS 6.2 online image URL: [your nearest CentOS mirror]/centos/6.2/os/i386/

The installation took me a while, and you will be asked to choose from a couple of installation types: Desktop, Minimal, Basic Server, and so on. For the differences, please see this link: CentOS 6 "Default" Installation Options

I chose “Basic Server”. After installation, the whole system is about 1.5G.