In this article, I will show how to build a very simple auto-scaling system on a Eucalyptus cloud using the wonderful Eucalyptus FastStart image. Afterwards you will appreciate how easy and configurable Eucalyptus is when it comes to running customization scripts on systems that are booted dynamically by auto-scaling triggers (like low CPU, RAM, etc.).

A little history: last year (2014), HP acquired a company called Eucalyptus, which I must admit surprised me after spending so much time with OpenStack. So I tried to get an idea of why this move happened and what main differences immediately come to mind when comparing the two.

… demo experience

Prerequisites:

The target requirements

1) Have a cloud system with capability to deploy a server quickly
2) Test basic systems like load-balancing
3) Check the network forwarding inside the cloud
4) Demonstrate auto-scaling system of Eucalyptus on example server system

If I check now in the webGUI, there is a new image available called Fedora20.

WebGUI NOTE: Access to the webGUI is running on port 8888, so I will use my http://192.168.125.3:8888/ , the account is “eucalyptus“, username “admin” and password is “password“.

Eucalyptus WebGUI, new Fedora20 image loaded

Next, the tutorial will show you how to change this image from private to public (so that all cloud users can deploy it), which can be achieved with this command:

# euca-modify-image-attribute -l -a all emi-0676ae2c

REMARK: There is a bug in the tutorial and the command there was missing the image ID.

You can see again the images also with the euca-describe-images command.

Now the last part is launching an instance with the image, which can be done simply with this command:

# euca-run-instances -k my-first-keypair emi-0676ae2c

REMARK: By default there is already one instance running since installation that is eating 2GB of RAM, so your second instance may fail with euca-run-instances: error (InsufficientInstanceCapacity): Not enough resources. If this happens, go to the Eucalyptus webGUI and terminate the default instance:

Terminate default instance running since install!

If you are doing this via the tutorial, you will get a nice extra output like this:

* Launching an instance and kicking off an automated application install

* Launching a hybrid install with AWS and Eucalyptus

* Adding a new Node Controller to increase cloud capacity

* ...and many more!

So what to do next?

Step 4: With tutorials missing, let’s play independently

Now this is where the fun starts: we have Eucalyptus, we have an image, but not much to do yet, as the tutorials are not really finished (December 2014/January 2015). So let's go independent and play around with Eucalyptus. I will not go into the API or AWS development in this tutorial; instead I will go for the auto-scaling feature.

But first, let's mess around and get a feeling for how to work with Eucalyptus a bit more, so let's list the basic commands for checking Eucalyptus without the webGUI:

Prerequisite: log in to Eucalyptus, which inside the FastStart image you can do by sourcing the provided credentials file with this command:

source /root/eucarc

euca-describe-images – shows all the system images loaded in the eucalyptus storage

In addition, please keep these commands in mind, as they are the best commands for troubleshooting during this tutorial. I give no example output here because at this point in the tutorial they mostly return empty results.

euca-describe-instances

euca-describe-instance-status

euscale-describe-auto-scaling-groups

euwatch-describe-alarms

Step 5: Start preparations before auto-scaling (security groups)

Here we will create a security group called "DemoSG" that allows basically the same traffic as the default group, plus port 443. So in total: ICMP, TCP/22, TCP/80 and TCP/443.

# euca-create-group -d "Demo Security Group" DemoSG
GROUP	sg-49d47746	DemoSG	Demo Security Group
# euca-authorize -P icmp -t -1:-1 -s 0.0.0.0/0 DemoSG
GROUP	DemoSG
PERMISSION	DemoSG	ALLOWS	icmp	-1	-1	FROM	CIDR	0.0.0.0/0
# euca-authorize -P tcp -p 22 -s 0.0.0.0/0 DemoSG
GROUP	DemoSG
PERMISSION	DemoSG	ALLOWS	tcp	22	22	FROM	CIDR	0.0.0.0/0
# euca-authorize -P tcp -p 80 -s 0.0.0.0/0 DemoSG
GROUP	DemoSG
PERMISSION	DemoSG	ALLOWS	tcp	80	80	FROM	CIDR	0.0.0.0/0
# euca-authorize -P tcp -p 443 -s 0.0.0.0/0 DemoSG
GROUP	DemoSG
PERMISSION	DemoSG	ALLOWS	tcp	443	443	FROM	CIDR	0.0.0.0/0

If we now look again at all the security groups, we will see both the default one and the new one (you can also double-check via the webGUI).

Step 6: Create a load-balancer

=====================================================================================
–OPTIONAL, BUT RECOMMENDED SECTION START–
Sometime in the future you will probably need to troubleshoot the load-balancer, and for that you need SSH access to the load-balancer instance. The problem is that by default Eucalyptus doesn't deploy SSH keys to the load-balancer instances, so we need a few steps to tell Eucalyptus to inject these SSH keys where needed. First generate a key with euca-create-keypair:

# euca-create-keypair euca-admin-lb > euca-admin-lb.priv

# chmod 0600 euca-admin-lb.priv

The cloud property 'loadbalancing.loadbalancer_vm_keyname' governs which keypair gets assigned, so we point it at our new keypair.
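A minimal sketch of that change, assuming the admin credentials are sourced (euca-modify-property is the property-editing tool shipped with the cloud controller; the keypair name matches the one generated above):

```shell
# Point load-balancer VMs at our keypair so that future LB instances
# get the public key injected at boot:
euca-modify-property -p loadbalancing.loadbalancer_vm_keyname=euca-admin-lb
```

Note this only affects load-balancer instances created after the change; an already-running load-balancer keeps its old (missing) key.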

To create a load-balancer, we will use the eulb-create-lb command. The parameters are very simple at this point, as we will only load-balance HTTP with default settings (more information about the settings can be found in the --help of the command, or in the Eucalyptus documentation).
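The exact invocation was not preserved here, but based on the description above it would look roughly like this (the listener string follows the standard eulb-create-lb syntax; the zone name "default" is an assumption for a FastStart install):

```shell
# Create an HTTP load-balancer named DemoLB in the default availability zone,
# forwarding port 80 on the LB to port 80 on the backend instances:
eulb-create-lb -z default \
  -l "lb-port=80, protocol=HTTP, instance-port=80, instance-protocol=HTTP" \
  DemoLB
```

The command prints the DNS name of the new load-balancer, which you can later resolve to its public IP.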

We also configure a health check for the load-balancer that polls the URL /index.html every 15 seconds with a timeout of 30 seconds; two consecutive failures mark a server as down, and two consecutive successful tests mark it as back up.
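A sketch of that health check with eulb-configure-healthcheck, mapping each value from the description onto a parameter (parameter names per the euca2ools conventions):

```shell
# Poll HTTP /index.html on each backend every 15s with a 30s timeout;
# 2 consecutive failures -> OutOfService, 2 successes -> InService again:
eulb-configure-healthcheck DemoLB \
  --healthy-threshold 2 --unhealthy-threshold 2 \
  --interval 15 --timeout 30 \
  --target HTTP:80/index.html
```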

Step 7: Server configuration scripts after booting (in auto-scaling)

If we want to do the auto-scaling demo, the freshly booted servers have to have some way to prepare themselves for real work after boot. Because we are working with HTTP servers here, we need a small script that will install an Apache web server and configure a basic index.html webpage.

This is the script that we will use as part of a "launch configuration" to do an example configuration of a server instance after start:

#!/bin/bash

#

# This small script will prepare a virtual image in eucalyptus to perform
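The full script did not survive in this copy, so below is a minimal hypothetical reconstruction of what such a user-data script could look like, based on the description in Step 7 (install Apache, publish a basic index.html) and the IP-derived hostname we observe in Step 11. The package name and metadata URL assume a Fedora image on an EC2-compatible cloud:

```shell
#!/bin/bash
#
# Hypothetical reconstruction of the launch-configuration user-data script:
# installs the Apache web server and publishes a basic index.html that
# identifies the instance it runs on.
set -e

# Derive a hostname from the instance's public IP via the EC2-compatible
# metadata service (matches the "instance-192-168-125-74" style seen later):
IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
hostnamectl set-hostname "instance-${IP//./-}"

# Install and start Apache
yum install -y httpd
systemctl enable --now httpd

# Publish a simple test page, also used by the load-balancer health check
echo "<h1>Hello from $(hostname)</h1>" > /var/www/html/index.html
```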

Step 8: Creating the auto-scaling group

Before we go further I want to show you something that was a problem for me when I first attempted to create this auto-scaling system. Despite having enough RAM on my Eucalyptus host (~8GB), I was not able to start more than 2 instances because of resource quotas, and the auto-scaling was simply failing quietly in the background. Therefore you should first check manually whether you can create at least 3 instances in the dashboard/webGUI (the one running on port 8888).

You can start creating new instances via the webGUI interface and wait until you hit this error:

Eucalyptus resource limit error after unsuccessful instance launch.

The problem was that I had enough RAM, definitely enough to run several t1.small instances (256MB RAM each), but something was blocking me. What I found out was that each Eucalyptus node (that is, a server registered in the control system as capable of hosting instances) has quota limits that can be viewed with the euca-describe-availability-zones verbose command. This is what I got when I had my problems:

Notice the free and max columns: this is the maximum number of instances your Eucalyptus node will allow you to launch, and a maximum of 1 instance is definitely not enough for the auto-scaling tutorial we are running here. So here is how to extend this limit, but note that you are then responsible for managing your own RAM limits.

EDIT the file /etc/eucalyptus/eucalyptus.conf and look for the parameter "MAX_CORES=0". Increase the value, then restart the Eucalyptus processes with # service eucalyptus-cloud restart and # service eucalyptus-nc restart (or reboot).

I, for example, changed it to MAX_CORES=4, and as such I get the following availabilities in the cloud:

Now we are going to prepare an auto-scaling group that will drive starting and shutting down servers as needed. The command used is euscale-create-auto-scaling-group, and we will reference both the load-balancer DemoLB and the launch configuration DemoLC that we created in the previous steps.
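A sketch of those commands under the assumptions of this tutorial. The creation of the launch configuration DemoLC was never shown explicitly, so it is included here with assumed flag names; the zone name "default" and the size limits 1-3 are illustrative:

```shell
# Launch configuration: which image/flavor/keypair/security-group to boot,
# plus the user-data script from Step 7 (saved here as ./demo-userdata.sh):
euscale-create-launch-config DemoLC \
  --image-id emi-0676ae2c --instance-type t1.small \
  --key my-first-keypair --group DemoSG \
  --user-data-file ./demo-userdata.sh

# Auto-scaling group: 1 to 3 instances, registered behind DemoLB:
euscale-create-auto-scaling-group DemoASG \
  --launch-configuration DemoLC \
  --availability-zones default \
  --load-balancers DemoLB \
  --min-size 1 --max-size 3 --desired-capacity 1
```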

Step 9: Creating scaling-policy for both increase and decrease of instance counts

With the euscale-put-scaling-policy command we will define policies for changing the scaling capacity; as the name suggests, in the second step we will make these policies fire based on CPU alarms.
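A sketch of the two policies (ChangeInCapacity with +1/-1 is the simplest form; the policy names are my own):

```shell
# Scale up by one instance when triggered:
euscale-put-scaling-policy DemoScaleUp \
  --auto-scaling-group DemoASG \
  --adjustment=1 --type ChangeInCapacity

# Scale down by one instance when triggered:
euscale-put-scaling-policy DemoScaleDown \
  --auto-scaling-group DemoASG \
  --adjustment=-1 --type ChangeInCapacity
```

Each call prints the policy ARN, which is exactly what the CPU alarms in the next step need as their action.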

Now the second part is to create an alarm that monitors CPU usage; for that we will use the euwatch-put-metric-alarm command, and in the --alarm-actions parameter we will use the scaling policy ARN from the previous command.
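A sketch of two such alarms, pairing each with the corresponding policy ARN from the previous step. The 50% threshold over 300 seconds matches what we test in Step 12; the 10% scale-down threshold is my own choice, and the ARNs are placeholders you must replace with your real output:

```shell
# Fire the scale-up policy when average CPU in the group stays >= 50%:
euwatch-put-metric-alarm DemoCPUHigh \
  --metric-name CPUUtilization --namespace "AWS/EC2" \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 50 --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions "AutoScalingGroupName=DemoASG" \
  --alarm-actions <arn-of-DemoScaleUp-policy>

# Fire the scale-down policy when average CPU drops below 10%:
euwatch-put-metric-alarm DemoCPULow \
  --metric-name CPUUtilization --namespace "AWS/EC2" \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 10 --comparison-operator LessThanThreshold \
  --dimensions "AutoScalingGroupName=DemoASG" \
  --alarm-actions <arn-of-DemoScaleDown-policy>
```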

Step 10: Creating a termination policy

One thing we omitted in the scale-down policy is saying which instance should be terminated from the group of running instances. For now we will simply choose one of the preset options, called OldestLaunchConfiguration. During a scale-down, this method shuts down the instance that has the oldest version of the configuration script from Step 7 (so it is expected that you will update these scripts over time).
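Setting the termination policy is an update on the existing group; a minimal sketch (flag name per euscale-update-auto-scaling-group):

```shell
# Prefer terminating instances started from the oldest launch configuration:
euscale-update-auto-scaling-group DemoASG \
  --termination-policies OldestLaunchConfiguration
```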

REMARK: This method actually has one additional use-case. Imagine that you are doing an application update (for example, a new version of a webpage rolled out to the instances). For something like this you can modify the server configuration script from Step 7; then simply increasing the load will launch a new auto-scaled instance with the new webpage, and after a while, when the system scales the instance cluster back down, it will shut down specifically those servers running the oldest version of the configuration script. This way you can technically do rolling updates across all your instances as a "trick".

Step 11: Verification that auto-scaling is running the first instance

OK, so everything is configured, and the auto-scaling group should have already created the initial instance. At this point I will show the webGUI view of the running instances, but I really recommend you re-run all the commands from Step 4 to give yourself the full view of how the auto-scaling and instance status look from the console perspective.

If you go to the webGUI and enter the "SCALING-GROUPS" view, you will see that two groups exist. One is the internal system group for load-balancer resources, a result of your DemoLB, which you do not have to care about; the second, however, is your DemoASG, and you should see its number of instances at 1! This is the view:

DemoASG showing the initial instance running!

Next we will check the details, select the gear icon and select View details…

In this view, select “Instances” tab and you should see your auto-scaled instance ID i-db9ead12:

Detail of the initial instance ID

Now that we have our ID, let's check the instance details in the main "Instances" view (go back to the dashboard and select Running instances there):

OK, now we have an IP address, so let's connect to it! If you followed my steps from the beginning, you should have the my-first-keypair.pem file in the /root directory. You can use it to connect to the Fedora image like this:

# ssh -i ./my-first-keypair.pem fedora@192.168.125.53
Last login: Wed Jan 28 22:14:20 2015 from 192.168.125.53
[fedora@instance-192-168-125-74 ~]$

Immediately notice that the hostname of the target system is "instance-192-168-125-74", which means that our configuration script has worked! It may take some time to finish the whole configuration (like the Apache installation), but let's check whether the HTTP service is already running with the netstat command.

[fedora@instance-192-168-125-74 ~]$ sudo netstat -tl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State
tcp        0      0 0.0.0.0:ssh        0.0.0.0:*          LISTEN
tcp6       0      0 [::]:http          [::]:*             LISTEN
tcp6       0      0 [::]:ssh           [::]:*             LISTEN
tcp6       0      0 [::]:https         [::]:*             LISTEN

As you can see, HTTP is running, so let's point our browser at it (using either the internal 192.168.125.53 or the external 192.168.125.74 IP) and see what we find:

Access to instance working via 192.168.125.53 (internal IP), including configuration script that configured a webpage!

Access to instance working via 192.168.125.74 (external IP), including configuration script that configured a webpage!

You should also check access via the load-balancer; if everything works, the page should be reachable through it as well. First find the IP of the load-balancer in the webGUI: go to Running instances again and select the details of the running load-balancer instance.

To test access, point your browser at the public IP of the load-balancer, which is 192.168.125.71, and you should reach one of the running instances, in this case the only one, 192.168.125.74:

Access to instance 192.168.125.74 web service VIA load-balancer with public IP 192.168.125.71

BONUS step: Troubleshooting the load-balancer if needed

When I tried accessing the test webpage via the load-balancer for the first time, it was not working. After double-checking everything I concluded that something must be wrong with the Eucalyptus load-balancer used in the auto-scaling. But how to troubleshoot this? From the Eucalyptus system, you can only check whether the load-balancer considers the server's HTTP service alive with the eulb-describe-instance-health command. This was exactly my problem: the server (despite running HTTP and the test page) was considered "OutOfService".

# eulb-describe-instance-health DemoLB
INSTANCE	i-7ddcfe12	OutOfService

OK, so we need to check the load-balancer's operation, and for that we need to get inside it. First list the instances and look for the load-balancer; in the webGUI you can find it among the running instances, then select the detailed view:

load-balancer instance SSH access details

Notice the Instance ID of i-b5d6412a in the GUI, we can find this also in the console instances view:

Right behind the "running" word is the keypair that the load-balancer instance is using, which is of course the euca-admin-lb that we created in the optional section of Step 6. If you didn't do this, you probably see "0" instead of a key, which means there is no SSH keypair deployed in the load-balancer and you cannot connect to it now! However, if you did the optional part of Step 6, you can now connect to the load-balancer with SSH like this:

# ssh -i euca-admin-lb.priv root@192.168.125.17

Once inside the load-balancer, the main cause in my case was that NTP was not synchronized.

Step 12: Verify the auto-scaling work with CPU stress tests

Now we have auto-scaling configured, with policies to increase and decrease the number of instances based on CPU load, so let's test it. Right now our group has a minimum of 1 running instance; let's try to push it to 2 by loading the CPU up a little.

To have a tool to push CPU usage up, install "stress" on the instance:

# yum install -y stress

Now, have a look at the auto-scaling group in the webGUI: there is a default cooldown period in seconds between scaling events, therefore we must produce CPU usage above 50% for more than 300 seconds in order to trigger it. For that we use the stress tool like this (running from inside the instance):

# stress -c 4 -t 600

This will generate a CPU load inside the instance that should trigger a scaling event.

top - 18:19:08 up 53 min,  2 users,  load average: 101.24, 85.11, 45.10
Tasks: 174 total, 102 running, 72 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 98.0 hi, 0.0 si, 95.0 st
KiB Mem:  245364 total, 240940 used,   4424 free,   7808 buffers
KiB Swap:      0 total,      0 used,      0 free, 124428 cached

  PID USER  PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
  866 root  20  0 7268 920     R 23.6  0.0 0:00.75 stress
  867 root  20  0 7268 920     R 23.6  0.0 0:00.75 stress
  868 root  20  0 7268 920     R 23.6  0.0 0:00.75 stress
  869 root  20  0 7268 920     R 23.6  0.0 0:00.75 stress

Alternatively, if stress is not generating enough CPU load, you can use superPi, or on 64-bit-only Linux this version of y-cruncher pi.

If successful, you will see two instances, one old and one new, launched under the auto-scaling group:

Auto scaling group triggered INSTANCE increase to 2

The details of the two instances now running

In summary

Now that all is finished and the auto-scaling is working, you technically have something like what is shown in the diagram below. To test and verify, I encourage you to use all of the commands that I presented during the tutorial (the euca*, eulb* and euwatch* ones) to verify the functionality. I understand that there are probably many other questions here, specifically about the load-balancer's internal functions, but that calls for actually starting to learn Eucalyptus for production deployment, which is beyond the scope of this quick introduction article. Feel free to check the external links below for more information on Eucalyptus (especially the administrator guide).
