Rohit Yadav: Blog posts and talks by Rohit Yadav.
http://rohityadav.cloud/

Build your own IaaS cloud with Apache CloudStack 4.11 and KVM on Ubuntu 18.04 LTS<p>This is a how-to install guide on setting up an Apache CloudStack based cloud, all
in a single Ubuntu 18.04 host that is also used as a KVM host.</p>
<p>Note: this should work for ACS 4.11.2 and above. This how-to post may become
outdated in the future, so please <a href="http://docs.cloudstack.apache.org/en/4.11.2.0/installguide">follow the latest docs</a>
and/or <a href="http://docs.cloudstack.apache.org/en/4.11.2.0/installguide/hypervisor/kvm.html">read the latest docs on KVM host installation</a>.</p>
<h1 id="initial-setup">Initial Setup</h1>
<p>First, install Ubuntu 16.04/18.04 LTS on an x86_64 system with at least
4GB RAM (preferably more) and Intel VT-x or AMD-V enabled. Ensure that the
<code class="highlighter-rouge">universe</code> repository is enabled in <code class="highlighter-rouge">/etc/apt/sources.list</code>.</p>
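<p>Before proceeding, it is worth a quick check that hardware virtualization is
actually exposed to the OS; a zero count below means VT-x/AMD-V needs to be
enabled in the BIOS/UEFI first:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># count of CPU virtualization flags; should be &gt; 0
egrep -c '(vmx|svm)' /proc/cpuinfo
# enable the universe repository if it is missing
add-apt-repository universe
</code></pre></div></div>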
<p>Install basic packages:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install openntpd openssh-server sudo vim htop tar
</code></pre></div></div>
<p>Optionally, if you have an Intel-based system, install/update the CPU microcode:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install intel-microcode
</code></pre></div></div>
<p>Allow the root user ssh access with a password by setting <code class="highlighter-rouge">PermitRootLogin yes</code>
in <code class="highlighter-rouge">/etc/ssh/sshd_config</code>. Then change and remember the <code class="highlighter-rouge">root</code> password:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>passwd root
</code></pre></div></div>
<h1 id="setup-networking">Setup Networking</h1>
<p>Set up Linux bridges to handle CloudStack’s public, guest, management
and storage traffic. For simplicity, we will use a single bridge <code class="highlighter-rouge">cloudbr0</code>
for all of these traffic types. Install the bridge utilities:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install bridge-utils
</code></pre></div></div>
<p>This guide assumes that you’re in a 192.168.1.0/24 network which is a typical
RFC1918 private network.</p>
<h3 id="ubuntu-1604">Ubuntu 16.04</h3>
<p>To configure the bridge on Ubuntu 16.04, make suitable changes to
<code class="highlighter-rouge">/etc/network/interfaces</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual

auto cloudbr0
iface cloudbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 1.1.1.1
    bridge_ports enp2s0
    bridge_fd 0
    bridge_stp off
</code></pre></div></div>
<p>Restart networking or reboot the host to apply the network settings.</p>
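<p>For example, on 16.04 either of the following should pick up the new bridge
configuration (restarting the networking service is usually enough):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>systemctl restart networking
# or simply
reboot
</code></pre></div></div>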
<h3 id="ubuntu-1804">Ubuntu 18.04</h3>
<p>Starting with Ubuntu Bionic (18.04), admins can use <code class="highlighter-rouge">netplan</code> to configure networking. The
default installation creates a file at <code class="highlighter-rouge">/etc/netplan/50-cloud-init.yaml</code> which
you can comment out; then create a file at <code class="highlighter-rouge">/etc/netplan/01-netcfg.yaml</code> with
your network-specific changes:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: false
      dhcp6: false
      optional: true
  bridges:
    cloudbr0:
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1,8.8.8.8]
      interfaces: [ens3]
      dhcp4: false
      dhcp6: false
      parameters:
        stp: false
        forward-delay: 0
</code></pre></div></div>
<p>Tip: If you want to use VXLAN based traffic isolation, make sure to increase the MTU of the physical NICs by <code class="highlighter-rouge">50 bytes</code> (the VXLAN header adds 50 bytes). For example:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ethernets:
  enp2s0:
    match:
      macaddress: 00:01:2e:4f:f7:d0
    mtu: 1550
    dhcp4: false
    dhcp6: false
  enp3s0:
    mtu: 1550
</code></pre></div></div>
<p>Save the file, apply the network config, and finally reboot:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netplan generate
netplan apply
reboot
</code></pre></div></div>
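<p>After the reboot, a quick sanity check that the bridge came up with the
expected IP (optional):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>brctl show cloudbr0
ip addr show cloudbr0
</code></pre></div></div>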
<h1 id="cloudstack-management-server-setup">CloudStack Management Server Setup</h1>
<p>Install the CloudStack management server and MySQL server (run as root):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-key adv --keyserver keys.gnupg.net --recv-keys 584DF93F
echo deb http://packages.shapeblue.com/cloudstack/upstream/debian/4.11 / &gt; /etc/apt/sources.list.d/cloudstack.list
apt-get update -y
apt-get install cloudstack-management cloudstack-usage mysql-server
</code></pre></div></div>
<p>Make a note of the MySQL server’s root user password. Configure the InnoDB settings
in the MySQL server’s <code class="highlighter-rouge">/etc/mysql/mysql.conf.d/mysqld.cnf</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[mysqld]
server_id = 1
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE"
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=1000
log-bin=mysql-bin
binlog-format = 'ROW'
</code></pre></div></div>
<p>Restart the MySQL server and set up the database:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>systemctl restart mysql
cloudstack-setup-databases cloud:cloud@localhost --deploy-as=root:&lt;root password, default blank&gt; -i &lt;cloudbr0 IP here&gt;
</code></pre></div></div>
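<p>For example, with a blank MySQL root password and the cloudbr0 IP used in
this guide (substitute your own values):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cloudstack-setup-databases cloud:cloud@localhost --deploy-as=root: -i 192.168.1.10
</code></pre></div></div>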
<h1 id="storage-setup">Storage Setup</h1>
<p>Install NFS server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install nfs-kernel-server quota
</code></pre></div></div>
<p>Create exports:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "/export *(rw,async,no_root_squash,no_subtree_check)" &gt; /etc/exports
mkdir -p /export/primary /export/secondary
exportfs -a
</code></pre></div></div>
<p>Configure and restart NFS server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed -i -e 's/^RPCMOUNTDOPTS="--manage-gids"$/RPCMOUNTDOPTS="-p 892 --manage-gids"/g' /etc/default/nfs-kernel-server
sed -i -e 's/^STATDOPTS=$/STATDOPTS="--port 662 --outgoing-port 2020"/g' /etc/default/nfs-common
echo "NEED_STATD=yes" &gt;&gt; /etc/default/nfs-common
sed -i -e 's/^RPCRQUOTADOPTS=$/RPCRQUOTADOPTS="-p 875"/g' /etc/default/quota
service nfs-kernel-server restart
</code></pre></div></div>
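<p>You can verify the exports are visible before moving on (optional):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>showmount -e localhost
</code></pre></div></div>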
<p>Seed systemvm template:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget http://packages.shapeblue.com/systemvmtemplate/4.11/systemvmtemplate-4.11.2-kvm.qcow2.bz2
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /export/secondary -f systemvmtemplate-4.11.2-kvm.qcow2.bz2 -h kvm \
-o localhost -r cloud -d cloud
</code></pre></div></div>
<h1 id="setup-kvm-host">Setup KVM host</h1>
<p>Install KVM and CloudStack agent, configure libvirt:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install qemu-kvm cloudstack-agent
</code></pre></div></div>
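<p>It is worth confirming that KVM acceleration is usable on the host; the
<code class="highlighter-rouge">cpu-checker</code> package (an extra, not required by this guide) provides a handy check:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install cpu-checker
kvm-ok
</code></pre></div></div>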
<p>Enable VNC for console proxy:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed -i -e 's/\#vnc_listen.*$/vnc_listen = "0.0.0.0"/g' /etc/libvirt/qemu.conf
</code></pre></div></div>
<p>Enable libvirtd in listen mode:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sed -i -e 's/.*libvirtd_opts.*/libvirtd_opts="-l"/' /etc/default/libvirtd
</code></pre></div></div>
<p>Configure default libvirtd config:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo 'listen_tls=0' &gt;&gt; /etc/libvirt/libvirtd.conf
echo 'listen_tcp=1' &gt;&gt; /etc/libvirt/libvirtd.conf
echo 'tcp_port = "16509"' &gt;&gt; /etc/libvirt/libvirtd.conf
echo 'mdns_adv = 0' &gt;&gt; /etc/libvirt/libvirtd.conf
echo 'auth_tcp = "none"' &gt;&gt; /etc/libvirt/libvirtd.conf
systemctl restart libvirtd
</code></pre></div></div>
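<p>A quick way to confirm libvirtd is accepting TCP connections (a sanity check,
assuming the settings above took effect):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh -c qemu+tcp://127.0.0.1:16509/system list
</code></pre></div></div>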
<p>Optional: some server vendors fail to make each server
unique, and libvirtd can complain that the hosts are not unique. To work around
this, set a host-specific UUID in the libvirtd config:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apt-get install uuid
UUID=$(uuid)
echo host_uuid = \"$UUID\" &gt;&gt; /etc/libvirt/libvirtd.conf
systemctl restart libvirtd
</code></pre></div></div>
<p>Note: In Ubuntu 18.04, the libvirt daemon process is named libvirtd, but a
libvirt-bin alias is also available.</p>
<h1 id="configure-firewall">Configure Firewall</h1>
<p>Configure firewall:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># configure iptables
NETWORK=192.168.1.0/24
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 32803 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p udp --dport 32769 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 892 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 875 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 662 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 8250 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 8080 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 9090 -j ACCEPT
iptables -A INPUT -s $NETWORK -m state --state NEW -p tcp --dport 16514 -j ACCEPT
apt-get install iptables-persistent
# Disable AppArmor for libvirtd
ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper
</code></pre></div></div>
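<p>The <code class="highlighter-rouge">iptables-persistent</code> package installed above offers to save the
current rules during installation; you can also save them explicitly at any time:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>netfilter-persistent save
</code></pre></div></div>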
<p>If your system uses <code class="highlighter-rouge">ufw</code> instead (you can check using <code class="highlighter-rouge">ufw status</code>), run the
following:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ufw allow mysql
ufw allow proto tcp from any to any port 22
ufw allow proto tcp from any to any port 1798
ufw allow proto tcp from any to any port 16509
ufw allow proto tcp from any to any port 16514
ufw allow proto tcp from any to any port 5900:6100
ufw allow proto tcp from any to any port 49152:49216
</code></pre></div></div>
<h3 id="launch-management-server">Launch Management Server</h3>
<p>Start your cloud:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cloudstack-setup-management
systemctl status cloudstack-management
tail -f /var/log/cloudstack/management/management-server.log
</code></pre></div></div>
<p>After the management server is up, open http://<code class="highlighter-rouge">192.168.1.10(cloudbr0-IP)</code>:8080/client
and log in using the default credentials: username <code class="highlighter-rouge">admin</code> and password
<code class="highlighter-rouge">password</code>.</p>
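<p>From the command line, a quick check that the UI endpoint is responding
(illustrative, not required; assumes you have curl installed):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl -sI http://192.168.1.10:8080/client | head -n 1
</code></pre></div></div>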
<h1 id="deploying-basic-zone">Deploying Basic Zone</h1>
<p><code class="highlighter-rouge">TODO</code></p>
<h1 id="deploying-advanced-zone">Deploying Advanced Zone</h1>
<p>The following is an example of how you can set up an advanced zone in the
192.168.1.0/24 network.</p>
<h3 id="setup-zone">Setup Zone</h3>
<p>Go to Infrastructure &gt; Zones, click the add zone button, select advanced zone, and
provide the following configuration:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name - any name
Public DNS 1 - 8.8.8.8
Internal DNS1 - 192.168.1.1
Hypervisor - KVM
</code></pre></div></div>
<h3 id="setup-network">Setup Network</h3>
<p>Use the default, which is the <code class="highlighter-rouge">VLAN</code> isolation method on a single physical NIC (on
the host) that will carry all traffic types (management, public, guest etc).</p>
<p>Note: If you have <code class="highlighter-rouge">iproute2</code> installed and the host’s physical NIC MTUs configured, you can use <code class="highlighter-rouge">VXLAN</code> as well.</p>
<p>Public traffic configuration:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Gateway - 192.168.1.1
Netmask - 255.255.255.0
VLAN/VNI - (leave blank for vlan://untagged or in case of VXLAN use vxlan://untagged)
Start IP - 192.168.1.20
End IP - 192.168.1.50
</code></pre></div></div>
<p>Pod Configuration:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name - any name
Gateway - 192.168.1.1
Start/end reserved system IPs - 192.168.1.51 - 192.168.1.80
</code></pre></div></div>
<p>Guest traffic:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>VLAN/VNI range: 700-900
</code></pre></div></div>
<h3 id="add-resources">Add Resources</h3>
<p>Create a cluster with following:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name - any name
Hypervisor - Choose KVM
</code></pre></div></div>
<p>Add your default/first host:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Hostname - 192.168.1.10
Username - root
Password - &lt;password for root user, please enable root user ssh-access by password on the KVM host&gt;
</code></pre></div></div>
<p>Note: <code class="highlighter-rouge">root</code> user ssh-access is disabled by default, <a href="https://askubuntu.com/questions/469143/how-to-enable-ssh-root-access-on-ubuntu-14-04">please enable it</a>.</p>
<p>Add primary storage:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name - any name
Scope - zone-wide
Protocol - NFS
Server - 192.168.1.10
Path - /export/primary
</code></pre></div></div>
<p>Add secondary storage:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Provider - NFS
Name - any name
Server - 192.168.1.10
Path - /export/secondary
</code></pre></div></div>
<p>Next, click <code class="highlighter-rouge">Launch Zone</code>, which will perform the following actions:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Create Zone
Create Physical networks:
- Add various traffic types to the physical network
- Update and enable the physical network
- Configure, enable and update various network provider and elements such as the virtual network element
Create Pod
Configure public traffic
Configure guest traffic (vlan range for physical network)
Create Cluster
Add host
Create primary storage (also mounts it on the KVM host)
Create secondary storage
Complete zone creation
</code></pre></div></div>
<p>Finally, confirm and enable the zone. Wait for the system VMs to come up, then
you can proceed with your IaaS usage. Happy hacking!</p>
Sun, 20 Jan 2019 00:00:00 +0530 | http://rohityadav.cloud/blog/cloudstack-kvm/ | cloudstack

VMware ESXi and vCenter on KVM with CloudStack<p>I have two KVM hosts based on Ubuntu 14.04 and 15.04 that I use for CloudStack
development and testing. The Ubuntu 14.04 based host is managed by CloudStack
and the Ubuntu 15.04 based host is my workstation. In order
to develop and test CloudStack with VMware, I always wanted a DevCloud-like
appliance that could run ESXi/vCenter; this post explains how you can build your
own DevCloud-VMware.</p>
<div class="post-image">
<img src="/images/cloudstack/vmware-on-kvm.png" />
</div>
<p>Enable nested virtualization on KVM host:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo options kvm ignore_msrs=1 &gt;&gt; /etc/modprobe.d/qemu-system-x86.conf
$ echo options kvm-intel nested=y ept=y &gt;&gt; /etc/modprobe.d/qemu-system-x86.conf
</code></pre></div></div>
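<p>After reloading the modules (or a reboot), you can verify that nested
virtualization is enabled on an Intel host:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cat /sys/module/kvm_intel/parameters/nested
</code></pre></div></div>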
<p>Next, we will need to patch and build our own qemu-system package. To do that,
either build qemu with the following patch or get the <a href="http://packages.ubuntu.com/trusty-updates/qemu">source qemu package from Ubuntu</a>
and build your own. I have some prebuilt packages <a href="http://home.apache.org/~bhaisaab/qemu">here</a>.</p>
<p>As an example, the following worked for me on Ubuntu 14.04.2:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ wget http://archive.ubuntu.com/ubuntu/pool/main/q/qemu/qemu_2.0.0+dfsg-2ubuntu1.15.dsc
$ wget http://archive.ubuntu.com/ubuntu/pool/main/q/qemu/qemu_2.0.0+dfsg.orig.tar.xz
$ wget http://archive.ubuntu.com/ubuntu/pool/main/q/qemu/qemu_2.0.0+dfsg-2ubuntu1.15.debian.tar.gz
$ tar xvfJ qemu_2.0.0+dfsg.orig.tar.xz
$ tar zxvf qemu_2.0.0+dfsg-2ubuntu1.15.debian.tar.gz
$ mv debian qemu_2.0.0+dfsg
$ cd qemu_2.0.0+dfsg
</code></pre></div></div>
<p>Install <a href="http://wiki.qemu.org/Hosts/Linux#Fedora_Linux_.2F_Debian_GNU_Linux_.2F_Ubuntu_Linux_.2F_Linux_Mint">qemu-kvm dependencies</a> or simply run <code class="highlighter-rouge">apt-get build-dep qemu-kvm</code>.</p>
<p>Next, apply this <a href="http://mattinaction.blogspot.in/2014/05/install-and-run-full-functional-vmware.html">patch</a>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gd">--- qemu-2.0.0+dfsg.orig/hw/i386/pc_piix.c
</span><span class="gi">+++ qemu-2.0.0+dfsg/hw/i386/pc_piix.c
</span><span class="gu">@@ -205,7 +205,7 @@ static void pc_init1(QEMUMachineInitArgs
</span> pc_vga_init(isa_bus, pci_enabled ? pci_bus : NULL);
/* init basic PC hardware */
<span class="gd">- pc_basic_device_init(isa_bus, gsi, &amp;rtc_state, &amp;floppy, xen_enabled(),
</span><span class="gi">+ pc_basic_device_init(isa_bus, gsi, &amp;rtc_state, &amp;floppy, 1,
</span> 0x4);
pc_nic_init(isa_bus, pci_bus);
</code></pre></div></div>
<p>Finally, commit the patch and build the package:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dpkg-source --commit
$ dpkg-buildpackage -uc -us -j4
</code></pre></div></div>
<p>Assuming you’re on x86 architecture, install the qemu-system-x86 package:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dpkg -i ../qemu-system-x86_2.0.0+dfsg-2ubuntu1.15_amd64.deb
$ apt-mark hold qemu-system-x86
</code></pre></div></div>
<p>In my case I set up an ESXi 5.5 VM with the NIC adapter set to <code class="highlighter-rouge">vmxnet3</code>.
In case of CloudStack, you can update the VM details
using CloudMonkey:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cloudmonkey update virtualmachine id=382ba742-125b-45fa-8c50-d0c8608c3b59 details[0].nicAdapter=vmxnet3
</code></pre></div></div>
<p>With CloudStack 4.5.1 and above, I recommend the following settings (at least <code class="highlighter-rouge">vmx</code>) in the <code class="highlighter-rouge">agent.properties</code> for maximum efficiency:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>guest.cpu.mode=host-passthrough
guest.cpu.features=vmx smx ept vnmi ht lm
</code></pre></div></div>
<p>Next, install ESXi 5.5 with at least 8GB RAM and 4 cores. Once done, enable SSH
and ESXi shell on the host.</p>
<p>SSH to the ESXi VM, add the following to <code class="highlighter-rouge">/etc/vmware/config</code> and reboot the host:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>hv.assumeEnabled="TRUE"
vhv.allow = "TRUE"
vhv.enable = "TRUE"
vmx.allowNested = "TRUE"
</code></pre></div></div>
<p>For some reason, vCenter 5.5 on Windows Server 2008 R2 never worked for me, so I
used the vCenter 5.5 OVA appliance instead. I also could not get
the vSphere client to upload the OVA to the ESXi host, but the ovftool tool worked
for me:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ovftool --diskMode=thin --name=vCenter55 VMware-vCenter-Server-Appliance-5.5.0.10200-1891314_OVF10.ova vi://root:password@192.168.1.58/
Opening OVA source: VMware-vCenter-Server-Appliance-5.5.0.10200-1891314_OVF10.ova
The manifest validates
Source is signed and the certificate validates
Enter login information for target vi://192.168.1.58/
Username: root
Password: ********
Opening VI target: vi://root@192.168.1.58:443/
Deploying to VI: vi://root@192.168.1.58:443/
Transfer Completed
Completed successfully
</code></pre></div></div>
<p>Once done, you can reduce the vCenter VM RAM to 2-3GB using the vSphere 5.5
Windows client. After the vCenter VM has started, you can open the vCenter URL in your
browser to configure it, and later use the vSphere client to add the host to the
vCenter VM. I had issues running certain ESXi 5.5 ISOs on Ubuntu 14.04, but
on Ubuntu 15.04 they worked out of the box, so I recommend using Ubuntu 15.04.</p>
<p>Update: For ESXi 6.0, you can simply use an E1000 based NIC. For qemu 2.2+, you don’t need
to patch the qemu-system-x86 package; instead, add the following to the VM’s
libvirt XML directly:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>virsh edit &lt;domain name&gt;

&lt;features&gt;
  &lt;kvm&gt;
    &lt;hidden state='on'/&gt;
  &lt;/kvm&gt;
  &lt;vmport state='off'/&gt;
&lt;/features&gt;
</code></pre></div></div>
Tue, 04 Aug 2015 00:00:00 +0530 | http://rohityadav.cloud/blog/vmware-esxi-vcenter-on-kvm-cloudstack/ | cloudstack

Fast Storage with RocksDB<div class="post-image">
<small>Note: Cross-posted from Wingify's engineering <a href="http://engineering.wingify.com/fast-storage-with-rocksdb">blog</a></small>
</div>
<p>In November last year, I started developing an infrastructure that would allow <a href="http://wingify.com">Wingify</a> to
collect, store, search and retrieve high volume data. The idea was
to collect all the URLs on which Wingify’s <a href="https://visualwebsiteoptimizer.com/split-testing-blog/geo-distributed-architecture/">homegrown CDN</a>
would serve JS content. Based on Wingify’s current traffic, we were looking to collect some 10k URLs per
second across four major geographic regions where Wingify runs their servers.</p>
<p>In the beginning I tried MySQL, Redis, Riak, CouchDB, MongoDB and ElasticSearch, but
none of them held up to that kind of high-speed write load. I also wanted the
system to respond very quickly, under 40ms between
internal servers on the private network. This post talks about how I was able to
make such a system using C++11, <a href="http://rocksdb.org">RocksDB</a> and <a href="http://thrift.apache.org">Thrift</a>.</p>
<p>First, let me start by sharing the use cases of such a system in VWO; the
following screenshot shows a feature where users can enter a URL to check if VWO
Smart Code was installed on it.</p>
<div class="post-image">
<img src="/images/wingify/rocks0.png" /><br />
<p>VWO Smart Code checker</p>
</div>
<p>The following screenshot shows another feature where users can see a list of URLs
matching a complex wildcard pattern, regex pattern, string rule etc. while
creating a campaign.</p>
<div class="post-image">
<img src="/images/wingify/rocks1.png" /><br />
<p>VWO URL Matching Helper</p>
</div>
<p>I <a href="http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis">reviewed</a>
several opensource databases, but none of them fit Wingify’s requirements except
Cassandra. In a clustered deployment, reads from Cassandra were too slow, and got slower
as the data size grew. After understanding how Cassandra worked under the
hood, in particular its LevelDB-like log-structured storage, I started playing with opensource
embeddable databases that use a similar approach, such as LevelDB and Kyoto Cabinet.
At the time, I found an embeddable persistent key-value store
library built on LevelDB called <a href="http://rocksdb.org">RocksDB</a>.
It was opensourced by Facebook and had a fairly active developer community so I
started <a href="https://github.com/facebook/rocksdb/tree/master/examples">playing</a>
with it. I read the <a href="https://github.com/facebook/rocksdb/wiki">project wiki</a>,
wrote some working code and joined their Facebook group to ask questions around
prefix lookup. The community was helpful, especially Igor and
Siying who gave me <a href="https://www.facebook.com/groups/rocksdb.dev/permalink/506160312815821/">enough hints</a>
around <a href="https://github.com/facebook/rocksdb/wiki/Prefix-Seek-API-Changes">prefix lookup</a>,
using custom <a href="https://github.com/facebook/rocksdb/wiki/Hash-based-memtable-implementations">extractors</a>
and <a href="http://en.wikipedia.org/wiki/Bloom_filter">bloom filters</a> which helped me
write something that actually worked in the production environment for the first time.
Explaining the technology and jargon is out of the scope of this post, but I would
encourage readers to read
<a href="http://google-opensource.blogspot.in/2011/07/leveldb-fast-persistent-key-value-store.html">about</a>
<a href="https://code.google.com/p/leveldb/">LevelDB</a> and to read the RocksDB <a href="https://github.com/facebook/rocksdb/wiki">wiki</a>.</p>
<div class="post-image">
<img src="/images/wingify/rocks2.png" /><br />
<p>RocksDB FB Group</p>
</div>
<p>For capturing the URLs with peak velocity up to 10k serves/s, I reused Wingify’s
<a href="/scaling-with-queues/">distributed queue based infrastructure</a>.
For storage, search and retrieval of URLs I wrote a custom datastore service
using C++, RocksDB and Thrift called <em>HarvestDB</em>. <a href="http://thrift.apache.org/">Thrift</a>
provided the <a href="http://en.wikipedia.org/wiki/Remote_procedure_call">RPC</a> mechanism
for implementing this system as a distributed service accessible by various
backend sub-systems. The backend sub-systems use client libraries
<a href="http://thrift.apache.org/tutorial">generated by Thrift compiler</a> for communicating
with the <em>HarvestDB</em> server.</p>
<p>The <em>HarvestDB</em> service implements five remote procedures - ping, get,
put, search and purge. The following <a href="http://thrift.apache.org/docs/idl">Thrift IDL</a>
describes this service.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>namespace cpp harvestdb
namespace go harvestdb
namespace py harvestdb
namespace php HarvestDB

struct Url {
  1: required i64 timestamp;
  2: required string url;
  3: required string version;
}

typedef list&lt;Url&gt; UrlList

struct UrlResult {
  1: required i32 prefix;
  2: required i32 found;
  3: required i32 total;
  4: required list&lt;string&gt; urls;
}

service HarvestDB {
  bool ping(),
  Url get(1:i32 prefix, 2:string url),
  bool put(1:i32 prefix, 2:Url url),
  UrlResult search(1:i32 prefix,
                   2:string includeRegex,
                   3:string excludeRegex,
                   4:i32 size,
                   5:i32 timeout),
  bool purge(1:i32 prefix, 2:i64 timestamp)
}
</code></pre></div></div>
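<p>Generating client/server stubs from this IDL is a one-liner per language. For
example, assuming the IDL above is saved as <code class="highlighter-rouge">harvestdb.thrift</code> (a hypothetical
filename):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>thrift --gen cpp harvestdb.thrift   # C++ stubs for the HarvestDB server
thrift --gen php harvestdb.thrift   # PHP client for the app backend
thrift --gen py harvestdb.thrift    # Python client for the purge cron job
</code></pre></div></div>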
<p>Clients use <code class="highlighter-rouge">ping</code> to check <em>HarvestDB</em> server connectivity before executing
other procedures. RabbitMQ consumers consume collected URLs and <code class="highlighter-rouge">put</code> them to
<em>HarvestDB</em>. The PHP based application backend uses custom Thrift based client
library to <code class="highlighter-rouge">get</code> (read) and to <code class="highlighter-rouge">search</code> URLs.
A Python program runs as a periodic cron job and uses the <code class="highlighter-rouge">purge</code> procedure to purge old entries
based on timestamp, which makes sure the storage
resources won’t be exhausted. The system has been in production for more than five months now and is
capable of handling a (benchmarked) workload of up to 24k writes/second while consuming
less than 500MB RAM. Future work will focus on replication, sharding and fault
tolerance of this service. The following diagram illustrates the architecture.</p>
<div class="post-image">
<img src="/images/wingify/rocks3.png" /><br />
<p>Overall architecture</p>
</div>
<p><a href="https://news.ycombinator.com/item?id=7899353">Discussion on Hacker News</a></p>
Fri, 13 Jun 2014 00:00:00 +0530 | http://rohityadav.cloud/blog/fast-storage-with-rocksdb/ | systems

Moving to a Bigger Disk<p>This post describes a painless disk migration strategy for moving your
partitions to a larger disk. My Thinkpad uses a 120G SSD which I wanted to clone
to a 480G SSD for my desktop, so I could migrate my existing setup without having
to reinstall Linux and the tons of packages on it, or deal with their custom
configurations. I use LVM on all my systems, which makes the cloning and migration
very simple. This post assumes a simple partitioning scheme where you have at least
one primary partition for <code class="highlighter-rouge">/boot</code> and another for <code class="highlighter-rouge">/</code> (second one could be an
extended partition with LVM partitions).</p>
<p>First of all, back up your important data, keys and whatnot. Attach
the disks to a computer (desktop in my case). Next, boot to Linux from your
source disk, in single user mode or recovery mode, which in my case was the OCZ
120G SSD. Identify the destination partition using <code class="highlighter-rouge">fdisk -l</code>.</p>
<p>Alright, let’s copy the data bit by bit using <code class="highlighter-rouge">dd</code>. For a readymade progress UX I pipe through <code class="highlighter-rouge">pv</code>;
others use Ctrl+T or signals (such as SIGUSR1) to track the
copied bytes.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dd if=/dev/sda | pv | dd of=/dev/sdb
</code></pre></div></div>
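<p>On newer coreutils (8.24+), <code class="highlighter-rouge">dd</code> can report progress natively, so an
equivalent without <code class="highlighter-rouge">pv</code> would be (a minor variation on the above):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dd if=/dev/sda of=/dev/sdb bs=4M status=progress
</code></pre></div></div>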
<p>After this is successful, run <code class="highlighter-rouge">sync</code> to force flush disk buffer and reboot to
the destination disk which in my case was the 480G SSD.</p>
<p>Next, boot to the destination disk (probably detach the source disk). Do <code class="highlighter-rouge">fdisk -l</code>
to find various partitions, depending on how you may have partitioned the
source disk you may have to adapt to the solution this post describes. In my
case there were two partitions, a primary <code class="highlighter-rouge">/dev/sda1</code> for the /boot partition
and an extended <code class="highlighter-rouge">/dev/sda2</code> partition which had one main LVM partition
<code class="highlighter-rouge">/dev/sda5</code>. We now simply need to alter the partition table so the partitions
can occupy the free space, then resize the primary volumes and the logical
volumes and finally resize the file systems.</p>
<p>Now, we’ll delete the partition table entries and resize the boundaries.
Don’t worry, the following does not really wipe your data; it simply
changes the partition entries (but be careful about what you’re doing):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ fdisk /dev/sda # note this is the new disk
Command (m for help): p
Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000ea999
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 499712 937703087 468601688 5 Extended
/dev/sda5 501760 937703087 468600664 8e Linux LVM
Command (m for help): d
Partition number (1-5): 2
Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): e
Partition number (1-4, default 2):
Using default value 2
First sector (499712-937703087, default 499712):
Using default value 499712
Last sector, +sectors or +size{K,M,G} (499712-937703087, default 937703087):
Using default value 937703087
Command (m for help): n
Partition type:
p primary (1 primary, 1 extended, 2 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (501760-937703087, default 501760):
Using default value 501760
Last sector, +sectors or +size{K,M,G} (501760-937703087, default 937703087):
Using default value 937703087
Command (m for help): t
Partition number (1-5): 5
Hex code (type L to list codes): 8e
Changed system type of partition 5 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000ea999
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 499712 937703087 468601688 5 Extended
/dev/sda5 501760 937703087 468600664 8e Linux LVM
Command (m for help): w
</code></pre></div></div>
<p>Finally resize the physical volumes and logical volumes after which we’re done:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pvdisplay
$ pvresize /dev/sda5
$ lvdisplay
$ lvresize -l+100%FREE /dev/volume-group-name/root
$ resize2fs /dev/volume-group-name/root
$ lvdisplay # verify LVM partition size
$ df -h # verify partition size
</code></pre></div></div>
Tue, 22 Apr 2014 00:00:00 +0530 | http://rohityadav.cloud/blog/lvm-cloning/ | linux

Scaling with Queues<div class="post-image">
<small>Note: Cross-posted from Wingify's engineering <a href="http://engineering.wingify.com/scaling-with-queues">blog</a></small>
</div>
<p>Our home-grown <a href="https://vwo.com/blog/geo-distributed-architecture/">geo-distributed architecture</a>
based CDN allows us to deliver dynamic JavaScript content with the minimum
latency possible. Using the same architecture we do data acquisition as well.
Over the years we’ve made a lot of changes to our backend; this post talks
about some scaling and reliability aspects and our recent work on building a fast and
reliable data acquisition system using message queues, which has been in production for
about three months now. I’ll start by giving some background on our previous
architecture.</p>
<p><a href="http://en.wikipedia.org/wiki/Web_bug">Web beacons</a> are widely used to do data
acquisition, the idea is to have a webpage send us data using an HTTP request
and the server sends some valid object. There are many ways to do this. To keep
the size of the returned object small, for every HTTP request we
return a tiny 1x1 pixel gif image and our geo-distributed architecture along with
our managed Anycast DNS service helps us to do this with very low latencies,
we aim for less than 40ms. When an HTTP request hits one of our data acquisition servers, <a href="http://openresty.org">OpenResty</a>
handles it and our Lua based code processes the request in the same process thread.
OpenResty is a <code class="highlighter-rouge">nginx</code> mod which among many things bundles <code class="highlighter-rouge">luajit</code> that allows
us to write URL handlers in Lua and the code runs within the web server. Our Lua code
does some quick checks, transformations and writes the data to a <a href="http://redis.io">Redis</a>
server which is used as fast in-memory data sink. The data stored in Redis is
later moved, processed and stored in our database servers.</p>
<div class="post-image">
<img src="/images/wingify/queue1.png" /><br />
<p>Previous Architecture</p>
</div>
<p>This was the architecture when I <a href="http://team.wingify.com/friday-engineering-talks-at-wingify">joined</a>
Wingify a couple of months ago. Things were going smoothly, but the problem was that we were
not quite sure about data accuracy and scalability. We used Redis as a fast
in-memory data storage sink, which our custom written PHP based queue infrastructure
would read from, our backend would process it and write to our database servers.
The PHP code was not scalable, and after about a week of hacking and exploring options
we found a few bottlenecks and decided to redo the backend queue infrastructure.</p>
<p>We explored many <a href="http://queues.io">options</a> and decided to use <a href="http://www.rabbitmq.com">RabbitMQ</a>.
We wrote a few proof-of-concept backend programs in Go, Python and PHP and
did a lot of testing, benchmarking and real-world <a href="http://loader.io">load testing</a>.</p>
<p>Ankit, Sparsh and I discussed how we should move forward and we finally
decided to explore two models in which we would replace the home-grown PHP queue
system with RabbitMQ. In the first model, we wrote directly to RabbitMQ from the
Lua code. In the second model, we wrote a transport agent which moved data from Redis
to RabbitMQ. And we wrote RabbitMQ consumers in both cases.</p>
<p>There was no Lua-resty library for RabbitMQ, so I wrote one using <code class="highlighter-rouge">cosocket</code> APIs
which could publish messages to a RabbitMQ broker over STOMP protocol. The library
<a href="https://github.com/wingify/lua-resty-rabbitmqstomp">lua-resty-rabbitmqstomp</a> was
opensourced for the hacker <a href="https://groups.google.com/forum/?fromgroups#!forum/openresty-en">community</a>.</p>
<p>Later, I rewrote our Lua handler code using this library and ran a <a href="http://loader.io">loader.io</a>
load test, which ruled this model out due to very <a href="http://ldr.io/154Xf1h">low throughput</a>;
we performed the load test on a small 1G DigitalOcean instance for both models.
For us, the STOMP protocol
and the slow RabbitMQ STOMP adapter were the performance bottlenecks. RabbitMQ was not
as fast as Redis, so we decided to keep Redis as the sink and work on the second
model. For our requirements, we wrote a proof-of-concept Redis-to-RabbitMQ transport
agent called <code class="highlighter-rouge">agentredrabbit</code> to leverage Redis as a fast in-memory storage sink and
use RabbitMQ as a reliable broker. The <em>POC</em> worked well in terms of performance,
throughput, scalability and failover. In next few weeks we were able to write a
production level queue based pipeline for our data acquisition system.</p>
<p>For about a month, we ran the new pipeline in production against the existing one,
to A/B test our backend :) To do that we modified our Lua code to write to two
different Redis lists: the original list was consumed by the existing pipeline, and the other was
consumed by the new RabbitMQ based pipeline. The consumer would process and write
data to a new database. This allowed us to compare realtime data from the two
pipelines. During this period we tweaked our implementation a lot, rewrote the
producers and consumers thrice and had two major phases of refactoring.</p>
<div class="post-image">
<img src="/images/wingify/queue2.png" /><br />
<p>A/B testing of existing and new architecture</p>
</div>
<p>Based on <a href="http://ldr.io/1565jPu">results</a> against a 1G DigitalOcean instance, as
for the first model, and on the realtime A/B comparison with the existing pipeline,
we migrated to the new RabbitMQ-based pipeline. Other issues of HA,
redundancy and failover were addressed in this migration as well.
The new architecture ensures no single point of failure and has mechanisms to
recover from failure and fault.</p>
<div class="post-image">
<img src="/images/wingify/queue3.png" /><br />
<p>Queue (RabbitMQ) based architecture in production</p>
</div>
<p>We’ve <a href="https://github.com/wingify/agentredrabbit">opensourced <code class="highlighter-rouge">agentredrabbit</code></a>
which can be used as a general purpose fast and reliable transport agent for
moving data in chunks from Redis lists to RabbitMQ with some assumptions and queue
name conventions. The flow diagram below has hints on how it works, checkout the
<a href="https://github.com/wingify/agentredrabbit">README for details</a>.</p>
<div class="post-image">
<img src="/images/wingify/queue4.png" /><br />
<p>Flow diagram of "agentredrabbit"</p>
</div>
<p><a href="https://news.ycombinator.com/item?id=6359786">Discussion on Hacker News</a></p>
Mon, 02 Sep 2013 00:00:00 +0530 | http://rohityadav.cloud/blog/scaling-with-queues/ | systems

Building CloudStack SystemVMs<p>CloudStack uses virtual appliances as part of its orchestration. For example, it
uses virtual routers for SDN, secondary storage vm for snapshots, templates etc.
All these service appliances are created off a template called a systemvm
template in CloudStack’s terminology. This template appliance is patched to create
the secondary storage vm, console proxy vm or router vm. There was an old way of building
systemvms in <code class="highlighter-rouge">patches/systemvm/debian/buildsystemvm.sh</code> which is no longer maintained,
and we wanted a way for hackers to build systemvms on their own boxes.</p>
<p><a href="mailto:jmartin@basho.com">James Martin</a> did a great job on automating DevCloud appliance
building using <a href="https://github.com/jedi4ever/veewee/">veewee</a>, a tool with
which one can build appliances on VirtualBox. The tool itself is easy to use, you
first define what kind of box you want to build, configure a preseed file and add
any post installation script you want to run, once done you can export the appliance in
various formats using <code class="highlighter-rouge">vhd-util</code>, <code class="highlighter-rouge">qemu-img</code> and <code class="highlighter-rouge">vboxmanage</code>. I finally landed a
solution to this <a href="https://issues.apache.org/jira/browse/CLOUDSTACK-1066">problem</a>
today and the code lives in <code class="highlighter-rouge">tools/appliance</code> on the master branch, but this post is
not about that solution; it is about the issues and challenges of <a href="http://jenkins.cloudstack.org/job/build-systemvm-master">setting up an
automated jenkins job</a>
and replicating the build job.</p>
<p>I used Ubuntu 12.04 on a large machine which runs a jenkins slave and connects
to <code class="highlighter-rouge">jenkins.cloudstack.org</code>. After a little housekeeping I installed VirtualBox from
<code class="highlighter-rouge">virtualbox.org</code>. VirtualBox ships with a command line tool, <code class="highlighter-rouge">vboxmanage</code>,
which can be used to clone, copy and export appliances. I used it to export to
ova, vhd and raw image formats. Next, I installed qemu, which gets you <code class="highlighter-rouge">qemu-img</code> for
converting a raw disk image to the qcow2 format.</p>
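<p>For example, the qcow2 conversion is a single <code class="highlighter-rouge">qemu-img</code> invocation
(the file names here are illustrative):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>qemu-img convert -f raw -O qcow2 systemvm.raw systemvm.qcow2
</code></pre></div></div>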
<p>The VirtualBox vhd format is compatible with the Hyper-V virtual disk format, but to
export a VHD for Xen, we need to export the appliance to the raw disk format and
then use <code class="highlighter-rouge">vhd-util</code> to convert it to a Xen VHD image.</p>
<p>Unfortunately, the vhd-util <a href="http://download.cloud.com.s3.amazonaws.com/tools/vhd-util">I got did not work for me</a>,
so I just compiled my own from an approach suggested on <a href="http://blogs.citrix.com/2012/10/04/convert-a-raw-image-to-xenserver-vhd/">this blog</a>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install bzip2 python-dev gcc g++ build-essential libssl-dev
uuid-dev zlib1g-dev libncurses5-dev libx11-dev python-dev iasl bin86 bcc
gettext libglib2.0-dev libyajl-dev
# On 64 bit system
sudo apt-get install libc6-dev-i386
# Build vhd-util from source
wget -q http://bits.xensource.com/oss-xen/release/4.2.0/xen-4.2.0.tar.gz
tar -xzf xen-4.2.0.tar.gz
cd xen-4.2.0/tools/
wget https://github.com/citrix-openstack/xenserver-utils/raw/master/blktap2.patch -qO - | patch -p0
./configure --disable-monitors --disable-ocamltools --disable-rombios --disable-seabios
cd blktap2/vhd
make -j 2
sudo make install
</code></pre></div></div>
<p>The last thing was to set up rvm for the jenkins user:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ \curl -L https://get.rvm.io | bash -s stable --ruby
# In case of dependency or openssl error:
$ rvm requirements run
$ rvm reinstall 1.9.3
</code></pre></div></div>
<p>One issue with <code class="highlighter-rouge">rvm</code> is that it requires a login shell, which I fixed in <code class="highlighter-rouge">build.sh</code>
using <code class="highlighter-rouge">#!/bin/bash -xl</code>. But the build job failed for me due to missing env variables:
<code class="highlighter-rouge">$HOME</code> needs to be defined and rvm should be on the PATH. The shell commands used to
run the jenkins job:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>whoami
export PATH=/home/jenkins/.rvm/bin:$PATH
export rvm_path=/home/jenkins/.rvm
export HOME=/home/jenkins/
cd tools/appliance
rm -fr iso/ dist/
chmod +x build.sh
./build.sh
</code></pre></div></div>
Tue, 19 Feb 2013 00:00:00 +0530 | http://rohityadav.cloud/blog/building-systemvms/ | cloudstack

DevCloud for CloudStack Development<p><a href="http://incubator.apache.org/cloudstack">Apache CloudStack</a> development is
not an easy task: for the simplest of deployments one requires a server where
the management server, mysql server and NFS server run; at least
one host or server to run a hypervisor (for virtual machines) or
to be used for a baremetal deployment; and some network infrastructure.</p>
<p>And when it comes to development, reproducing a bug can sometimes take hours or days
(been there, done that :) and moreover a developer may not have access to such
infrastructure all the time.</p>
<h3 id="the-solution">The Solution</h3>
<p>To solve the problem of infrastructure availability for development and testing,
earlier this year <a href="http://www.linkedin.com/pub/disheng-su/5/ab9/90b">Edison</a>,
one of the core committers and PPMC members of Apache CloudStack (incubating),
created <a href="http://wiki.cloudstack.org/display/COMM/DevCloud">DevCloud</a>.</p>
<p><code class="highlighter-rouge">DevCloud</code> is a virtual appliance shipped as an OVA image which runs on <a href="http://virtualbox.org">VirtualBox</a>
(an opensource type-2 or desktop hypervisor) and can be used for CloudStack’s
development and testing. The original DevCloud required 2G of RAM, and ran
Ubuntu Precise as dom0 over xen.org’s Xen server which runs as a VM on VirtualBox.</p>
<p>A developer would build and deploy CloudStack artifacts (jars, wars) and files
to DevCloud, deploy database and start the management server inside DevCloud.
The developer may then use CloudStack running inside DevCloud to add DevCloud as
a host and whatnot. DevCloud is now used by a lot of people; notably, it was used
for release testing during the first release of Apache CloudStack,
4.0.0-incubating.</p>
<h3 id="my-experiment">My Experiment</h3>
<p>When I tried DevCloud for the first time, I thought it was neat, an awesome all
in a box solution for offline development. The limitations were that only one host
could be used, and only in a basic zone, and it would run the mgmt server etc. all inside
DevCloud. I wanted to run the mgmt server and MySQL server on my laptop and debug with
IntelliJ, so I made <a href="https://cwiki.apache.org/confluence/display/CLOUDSTACK/DIY+DevCloud+Setup">my own
DevCloud</a>
setup which would run two XenServers on separate VirtualBox VMs, NFS running on
a separate VM and all the VMs on a host-only network.</p>
<p>The <code class="highlighter-rouge">host-only</code> network in VirtualBox is a special network which is shared by
all the VMs and the host operating system. My setup allowed me to have two hosts
so I could do things like VM migration in a cluster etc. But it would crash a lot
and the network wouldn’t work. I learnt how bridging in Xen worked and, using tcpdump,
found that packets were dropped while ARP requests were allowed; the fix was to
just set the host-only adapter’s promiscuous mode to allow-all. I also tried to
run KVM on VirtualBox, which did not work as KVM does not support PV and requires
HVM, so it cannot run on processors without Intel VT-x or AMD-V, neither of which is
emulated by VirtualBox.</p>
<h3 id="motivation">Motivation</h3>
<p>CloudStack’s build system was changed from Ant to Maven, and this required some
<a href="https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+devcloud+environment+setup">changes in DevCloud</a>
which made it possible to use the original appliance with the new build system.
The changes were not straight forward so I decided to work on the next iteration
of DevCloud with the following goals:</p>
<ul>
<li>Two network interfaces, <em>host-only</em> adapter so that the VM is reachable from
host os and a <em>NAT</em> so VMs can access Internet.</li>
<li>Can be used both as an all in one box solution like the original DevCloud but
the mgmt server and other services can run elsewhere (on host os).</li>
<li>Reduce resource requirements, so one could run it in 1G limit.</li>
<li>Allow multiple DevCloud VMs as hosts.</li>
<li>x86 dom0 and xen-i386 so it runs on all host os.</li>
<li>Reduce exported appliance (ova) file size.</li>
<li>It should be seamless, it should work out of the box.</li>
</ul>
<h3 id="devcloud-20">DevCloud 2.0</h3>
<p>I started by creating an appliance using Ubuntu 12.04.1 server, which failed for me:
the network interfaces would stop working after a reboot, and a few users reported
a blank screen. I never caught the actual issue, so I tried to create the
appliance using different distributions including Fedora, Debian and Arch.
Fedora did not work, and stripping it down to a bare minimum required a lot of work.
The Arch VM was very small in size, but I dropped the idea of working on it as Arch can be
unstable, and people may not be familiar with pacman and may fail to appreciate the
simplicity of the distribution.</p>
<p>Finally, I hit the jackpot with Debian! Debian Wheezy just worked, took me some
time to create it from scratch (more than ten times) and figure out the correct
configurations. The new appliance is available for download, <a href="http://home.apache.org/~bhaisaab/cloudstack/devcloud/devcloud2.ova">get DevCloud 2.0</a>
(867MB, md5checksum: 144b41193229ead4c9b3213c1c40f005).</p>
<p>Install VirtualBox and import the new DevCloud2 appliance and start it. In
default settings, it is reachable on ip <code class="highlighter-rouge">192.168.56.10</code> with username <code class="highlighter-rouge">root</code> and
password <code class="highlighter-rouge">password</code>. Next start hacking either inside the DevCloud appliance or
on your laptop (host os):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># ssh inside DevCloud if building inside it:
$ ssh -v root@192.168.56.10
$ cd to /opt/cloudstack # or any other directory, it does not matter
# Get the source code:
$ git clone https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git
$ cd incubator-cloudstack
# Build management server:
$ mvn clean install -P developer,systemvm
# Deploy database:
$ mvn -pl developer,tools/devcloud -Ddeploydb -P developer
# Export the following only if you want debugging on port 8787
$ export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=800m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
# Run the management server:
$ mvn -pl client jetty:run
# In Global Settings check `host` to 192.168.56.1 (or .10 if inside DevCloud)
# and `system.vm.use.local.storage` to true, restart mgmt server.
# Set the maximum number of console proxy vms to 0 if you don't need one from
# CloudStack's global settings, this will save you some RAM.
# Now add a basic zone with local storage. May be start more DevCloud hosts by
# importing more appliances and changing default IPs and reboot!
</code></pre></div></div>
<p>Make sure your mgmt server is running and you may deploy a basic zone using
preconfigured settings in <code class="highlighter-rouge">tools/devcloud/devcloud.cfg</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mvn -P developer -pl tools/devcloud -Ddeploysvr
# Or in case mvn fails try the following, (can fail if you run mgmt server in debug mode on port 8787)
$ cd tools/devcloud
$ python ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
</code></pre></div></div>
<h3 id="diy-devcloud">DIY DevCloud</h3>
<p>Install VirtualBox and get the <a href="http://www.debian.org/devel/debian-installer/">Debian Wheezy
7.0</a>. I used the netinst i386
iso. Create a new VM in VirtualBox with Debian/Linux as the distro, 2G RAM, 20G
or more disk and two nics: host-only with promiscuous mode “allow-all” and a NAT
adapter. Next, install a base Debian system with linux-kernel-pae (generic),
and openssh-server. You may download my <a href="http://home.apache.org/~bhaisaab/vms/debian-wheezy-basex86.ova">base system from
here</a>.</p>
<p>Install required tools and Xen-i386:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ apt-get install git vim tcpdump ebtables --no-install-recommends
$ apt-get install openjdk-6-jdk genisoimage python-pip mysql-server nfs-kernel-server --no-install-recommends
$ apt-get install linux-headers-3.2.0-4-686-pae xen-hypervisor-4.1-i386 xcp-xapi xcp-xe xcp-guest-templates xcp-vncterm xen-tools blktap-utils blktap-dkms qemu-keymaps qemu-utils --no-install-recommends
</code></pre></div></div>
<p>You may need to build and install mkisofs. Remove the MySQL root password:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mysql -u root -p
&gt; SET PASSWORD FOR root@localhost=PASSWORD('');
&gt; exit;
</code></pre></div></div>
<p>Install MySQL Python connector 1.0.7 or latest:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pip install mysql-connector-python
# Or, if you have easy_install you can do: easy_install mysql-connector-python
</code></pre></div></div>
<p>Setup Xen and XCP/XAPI:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo "bridge" &gt; /etc/xcp/network.conf
$ update-rc.d xendomains disable
$ echo TOOLSTACK=xapi &gt; /etc/default/xen
$ sed -i 's/GRUB_DEFAULT=.\+/GRUB_DEFAULT="Xen 4.1-i386"/' /etc/default/grub
$ sed -i 's/GRUB_CMDLINE_LINUX=.\+/GRUB_CMDLINE_LINUX="apparmor=0"\nGRUB_CMDLINE_XEN="dom0_mem=400M,max:500M dom0_max_vcpus=1"/' /etc/default/grub
$ update-grub
$ sed -i 's/VNCTERM_LISTEN=.\+/VNCTERM_LISTEN="-v 0.0.0.0:1"/' /usr/lib/xcp/lib/vncterm-wrapper
$ cat &gt; /usr/lib/xcp/plugins/echo &lt;&lt; EOF
#!/usr/bin/env python
# Simple XenAPI plugin
import XenAPIPlugin, time
def main(session, args):
    if args.has_key("sleep"):
        secs = int(args["sleep"])
        time.sleep(secs)
    return "args were: %s" % (repr(args))

if __name__ == "__main__":
    XenAPIPlugin.dispatch({"main": main})
EOF
$ chmod -R 777 /usr/lib/xcp
$ mkdir -p /root/.ssh
$ ssh-keygen -A -q
</code></pre></div></div>
<p>Network settings, /etc/network/interfaces:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

allow-hotplug eth1
iface eth1 inet manual

auto xenbr0
iface xenbr0 inet static
    bridge_ports eth0
    address 192.168.56.10
    netmask 255.255.255.0
    network 192.168.56.0
    broadcast 192.168.56.255
    gateway 192.168.56.1
    dns_nameservers 8.8.8.8 8.8.4.4
    post-up route del default gw 192.168.56.1; route add default gw 192.168.56.1 metric 100;

auto xenbr1
iface xenbr1 inet dhcp
    bridge_ports eth1
    dns_nameservers 8.8.8.8 8.8.4.4
    post-up route add default gw 10.0.3.2
</code></pre></div></div>
<p>Preseed the SystemVM templates in <code class="highlighter-rouge">/opt/storage/secondary</code>, follow directions
from
<a href="http://incubator.apache.org/cloudstack/docs/en-US/Apache_CloudStack/4.0.0-incubating/html/Installation_Guide/management-server-install-flow.html#prepare-system-vm-template">here</a>.
Configure NFS server and local storage.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ mkdir -p /opt/storage/secondary
$ mkdir -p /opt/storage/primary
$ hostuuid=`xe host-list |grep uuid|awk '{print $5}'`
$ xe sr-create host-uuid=$hostuuid name-label=local-storage shared=false type=file device-config:location=/opt/storage/primary
$ echo "/opt/storage/secondary *(rw,no_subtree_check,no_root_squash,fsid=0)" &gt; /etc/exports
$ #preseed systemvm template, may be copy files from devcloud's /opt/storage/secondary
$ /etc/init.d/nfs-kernel-server restart
</code></pre></div></div>
<p>Hope this post helps you understand how you can build your own DevCloud appliance. If you have any questions, please send an email to CloudStack’s Developer Mailing list: <a href="mailto:dev@cloudstack.apache.org">dev@cloudstack.apache.org</a></p>
Tue, 27 Nov 2012 00:00:00 +0530 | http://rohityadav.cloud/blog/devcloud/ | cloudstack

CloudStack CloudMonkey<p>About 2-3 weeks ago I started writing a CLI (command line interface) for <a href="http://cloudstack.apache.org">Apache CloudStack</a>. I researched some options and finally chose Python and cmd. Python comes preinstalled on almost all Linux distros and on Mac, and cmd is a standard Python package with which one can write a tool that works both as a command line tool and as an interactive shell interpreter. I named it <code class="highlighter-rouge">cloudmonkey</code> after the project’s mascot.</p>
<div class="post-image">
<img src="/images/cloudstack/cloudmonkey-mac.png" /><br /><p>CloudMonkey on OSX</p>
</div>
<p>At this time, Apache CloudStack has 300+ restful <a href="http://cloudstack.apache.org/api.html">APIs</a>, and writing API handlers (autocompletion, help, request handlers etc.) for each API seemed a mammoth task at first. Marvin (the ignored robot) came to the rescue. <br /></p>
<div class="post-image">
<img src="/images/cloudstack/cloudmonkey-ubuntu.png" /><br /><p>CloudMonkey on Ubuntu</p>
</div>
<p>I grouped the APIs based on the leading lowercase verb in their names; for example, for the API listUsers, the verb is list. Based on this pattern, I wrote code that groups APIs by such verbs, creates handlers on the fly, and adds them to the shell class. The handlers are closures, so every handler is a dynamic function in memory enclosed by the closure generator for a verb. In the initial version, when a command was executed for the first time based on its verb, the command class from the appropriate module in cloudstackAPI would be loaded and a cache dictionary populated on a cache miss. In a later version, I wrote a cache generator that precaches all the APIs at build time, cutting the runtime lookup overhead from O(n) to O(1). This cache contains, for each verb, the api name, required params, all params and help strings. The dictionary is used for autocompletion of the verbs, the commands and their parameters, and for help strings.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>grammar = ['list', 'create', 'update', 'delete', ...]

def add_grammar(rule):
    def grammar_closure(self, args):
        if not rule in self.cache_verbs:
            self.cache_verb_miss(rule)
        try:
            args_partition = args.partition(" ")
            res = self.cache_verbs[rule][args_partition[0]]
        except KeyError, e:
            self.print_shell("Error: invalid %s api arg" % rule, e)
            return
        if ' --help' in args or ' -h' in args:
            self.print_shell(res[2])
            return
        self.default(res[0] + " " + args_partition[2])
    return grammar_closure

for rule in grammar:
    # attach each closure to the shell class (class name assumed) as a do_&lt;verb&gt; handler
    setattr(CloudStackShell, 'do_' + rule, add_grammar(rule))
</code></pre></div></div>
<p>Right now <code class="highlighter-rouge">cloudmonkey</code> is available as a community distribution on the <a href="http://pypi.python.org/pypi/cloudmonkey/">cheese shop</a>, so <code class="highlighter-rouge">pip install cloudmonkey</code> already! It has a <a href="https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+cloudmonkey+CLI">wiki</a> on building, installation and usage instructions, or watch a <a href="http://www.youtube.com/watch?v=BjkGp3egv9g">screencast</a> (<a href="http://home.apache.org/~bhaisaab/cloudstack/cloudmonkey/cloudmonkey-screencast-user-transcript.txt">transcript</a>, <a href="http://home.apache.org/~bhaisaab/cloudstack/cloudmonkey/cloudmonkey-screencast-user.mov">alternate link</a>) I made for users. As the userbase grows, it will only get better. Feel free to reachout to me or the Apache CloudStack team on our IRC or on the mailing lists.</p>
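<p>A minimal first session looks something like this; the <code class="highlighter-rouge">set</code> keys and the
endpoint shown are assumptions based on a default local management server, so adjust them
to your setup:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pip install cloudmonkey
$ cloudmonkey
&gt; set host localhost
&gt; set port 8080
&gt; list users
&gt; list virtualmachines listall=true
</code></pre></div></div>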
Wed, 21 Nov 2012 00:00:00 +0530 | http://rohityadav.cloud/blog/cloudmonkey/ | cloudstack

CADE9<p>CADE9 is my small embedded project: an ATmega32 microcontroller with custom hardware which I hand soldered on a matrix board. It uses the open source <a href="http://www.cocoos.net">cocoOS</a> as the scheduler. On top of CADE9 one can implement classic arcade games like snakes, pong, bricks, breakout etc. (arCADE games with a max displayable score limit of 9, hence CADE9 :)</p>
<div class="post-image">
<img src="/images/cade9.jpg" />
</div>
<p>Hardware Specs:</p>
<ul>
<li>ATmega32 uC @ 16MHz</li>
<li>12 x 7 custom hand soldered (3mm) LED matrix</li>
<li>Two 7-segment displays to display scores.</li>
<li>Five buttons for input: Up, Down, Left, Right and Fire.</li>
<li>One buzzer for audio effects.</li>
</ul>
<p>In the video below, I play a game of 1-player pong; the left bat is the computer and top 7-segment display shows score of the computer. The player with a score of 9 dot wins! Because I did not use any good locking mechanism, it sometimes misses a good hit. The code is pretty naive.</p>
<div class="post-image"><iframe width="100%" height="450" src="http://www.youtube.com/embed/F4D6QbLzOpM?rel=0" frameborder="0" allowfullscreen=""></iframe></div>
Sat, 10 Apr 2010 00:00:00 +0530 | http://rohityadav.cloud/blog/cade9/ | systems