If you have enabled git information in your shell prompt (branch name, working tree status, etc.) [1], an upgrade to F18 breaks this functionality. What’s worse, __git_ps1 (a shell function) isn’t found, and a yum plugin goes looking for a matching package name to install, making every command on the shell *very* slow.
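A fix, assuming F18’s git packaging: __git_ps1 now lives in a separate prompt script that the completion script no longer pulls in, so source it from ~/.bashrc (the path below is where Fedora installs it):

# git 1.8 moved __git_ps1 out of the bash-completion script
if [ -f /usr/share/git-core/contrib/completion/git-prompt.sh ]; then
    . /usr/share/git-core/contrib/completion/git-prompt.sh
fi

And if the per-command slowness is the bigger annoyance, removing the PackageKit-command-not-found package stops the package lookup on every unresolved name.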

Avi Kivity announced he is stepping down as (co-)maintainer of the KVM project at the recently concluded KVM Forum 2012 in Barcelona, Spain. Avi wrote the initial implementation of the KVM code back at Qumranet, and has been maintaining the KVM-related kernel and qemu code for about 7 years now.

In his keynote speech, he mentioned he’s founding a startup with a friend, and hopes to create new technology as exciting as KVM. He also mentioned they’re in stealth mode right now, so questions about the new venture didn’t get any answers.

He returned to the stage on the second day of the Forum to talk about the new memory API work he’s been doing in qemu, and in his typical dry humour, he mentioned he was supposed to vanish in a puff of smoke after his keynote, but the special effects machinery didn’t work, so he was back on stage. Avi later rued the lack of laughter at this joke, and that made him very sad. To offer him some consolation, it was pointed out that not everyone knew of his departure, as many had missed his keynote. He quipped “that’s even worse than not getting laughs”.

His leadership, as well as his humour, will be missed. Personally, he’s helped me grow during the last few years we’ve worked together. But I’m sure whatever he’s working on will be something to look forward to, and we’re not really bidding him adieu from the tech world.

I’ve tried several RSS feed readers, offline as well as online; Akregator, Liferea and rss2email are the ones I used for extended periods. One drawback of the offline tools is that they can miss feed items when I’m offline for prolonged periods (travel, vacations, etc.). They’re also tied to one device: I can’t switch laptops and have the feeds stay in sync. I tried Google Reader for a while as well, as a solution in the ‘cloud’, which worked for a while, but not anymore.

So I started searching for an online feed reader, preferably one with hosting services, since I didn’t want to keep up with updates to the software myself. I found several free readers, and Tiny Tiny RSS seemed like a really good option. The developer hosts an online version of the reader, which I used for quite a while. (That online service is soon going to be discontinued.) I was quite content with that option, but when OpenShift was launched, I thought I’d try hosting tt-rss myself; it began as an experiment in using OpenShift. Then, when I moved this blog to OpenShift, I realised it didn’t take much effort to host the blog, and that I could switch my primary instance of tt-rss from the developer-hosted instance to my own. It turned out to be really easy, and here I’ll share my recipe.

After this initial setup, I copied all the files from the tt-rss source directory to the php/ directory of the OpenShift repo:

cp -r ~/src/Tiny-Tiny-RSS/* ~/openshift/ttr/php/

Next, add all the files to the git repo:

cd ~/openshift/ttr/
git add php
git commit -m 'Add tt-rss sources'

Now to set up the environment on the server for tt-rss to work in, e.g. creating the directories where tt-rss will store its feed icons, temporary files, etc. This is needed because the OpenShift git directory is transient: it’s deleted and re-created whenever ‘git push’ is done. So, to store persistent data between git pushes, we need to use the OpenShift data directory. Create an app build-time action hook to set up the proper directory structure each time the app is built (i.e. after a git push); a sketch of such a hook follows. Learn more about the different build hooks here.
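This is a minimal sketch, saved as .openshift/action_hooks/build. The cache/lock/ico directory names are assumptions based on what tt-rss stores (‘ico’ is the icons directory discussed below); OPENSHIFT_DATA_DIR and OPENSHIFT_REPO_DIR are environment variables OpenShift sets for each app:

#!/bin/bash
# Runs on every build, i.e. after each 'git push'.
for d in cache lock ico; do
    # create the persistent directory in the data dir, if not already there
    mkdir -p "$OPENSHIFT_DATA_DIR/$d"
    # replace the transient copy in the app tree with a symlink to it
    rm -rf "$OPENSHIFT_REPO_DIR/php/$d"
    ln -s "$OPENSHIFT_DATA_DIR/$d" "$OPENSHIFT_REPO_DIR/php/$d"
done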

The last icons bit is a modification from the default of ‘feed-icons’. If you’re setting up a new repo, there’s no need to deviate from the default; but when I had deployed my tt-rss instance, the default icons directory was ‘icons’, which unfortunately clashes with Apache’s idea of what $URL/icons is. So I used ‘ico’. Remember to modify the relevant bit in the build hook above to create the appropriate symlink if this ICONS_URL is changed.

These config settings are the ones specific to OpenShift. Modify the others to suit your needs.

Lastly, add a cron job to update the feeds at an hourly interval:

cd ~/openshift/ttr
mkdir .openshift/cron/hourly

I created a new file, called update-feeds.sh, in the new .openshift/cron/hourly directory, and added the feed-update command to it.
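In essence it’s a one-liner; a sketch, assuming tt-rss’s stock update.php script (its --feeds option updates all feeds that are due for a refresh):

#!/bin/bash
# Update all pending tt-rss feeds.
php "$OPENSHIFT_REPO_DIR/php/update.php" --feeds

Remember to make the script executable (chmod +x) before committing it, or cron won’t run it.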

The 2012 edition of the Linux Plumbers Conference concluded recently. I was there, running the virtualization microconference. The format of LPC sessions is to have discussions around current as well as future projects. The key words are ‘discussion’ (not talks; slides are optional!) and ‘current’ and ‘future’ projects: not work that’s already done, but unsolved problems and new ideas. LPC is a great platform for getting people involved in various subsystems across the entire OS stack into one place, so sticky problems tend to get resolved by discussing them face-to-face.

The virt microconf had A LOT of submissions: 17 topics to be discussed in a standard time slot of 2.5 hours for one microconf track. I asked for a ‘double track’, making it 5 hours of time for 17 topics. Still difficult, but by reducing a few topics to ‘lightning talks’, we could get a somewhat decent 20 minutes per topic. I deliberated between rejecting topics, thereby increasing the time each discussion would get, and keeping all the topics while asking people to wrap up in 20 minutes. I went for the latter: getting more stuff discussed (and hence, more problems and issues ‘out there’) is a better use of the time, IMO. That would also ensure that people stay on-topic and focussed.

There was also a general change in the way microconfs were scheduled this time: instead of one complete 2.5-hour slot, each microconf was given 3 slots of 45 minutes each. This let the schedule pages show which microconf topics were being discussed at any given time, so attendees could pick and choose the discussions they wanted to attend, rather than seeing a generic ‘Virtualization Microconf’ slot. I think this was a good idea. Individual microconf owners could request modifications to this scheme, of course, and some microconfs chose to run the entire session in one slot, or reserved one whole day in a room, etc. For the virt microconf, I went with six separate slots, scheduled so as to avoid conflicts with other virt-related topics in other sessions, giving a total of 4.5 hours for the 17 topics.

I segregated the CFP submissions so I could schedule related discussions in one slot, to avoid jumping between subjects and to help concentrate on specifics in an area. Two submissions, one on security and one on storage, were by themselves, so I clubbed them into one ‘security and storage’ session. The others aligned nicely, so we could have ‘x86’, ‘MM’, ‘ARM’, ‘Networking’ and ‘lightning talks’ topics in separate slots. Since there were 4 network-related talks, I asked for a double slot (two 45-min slots back-to-back), and clubbed the lightning talks into the same session, which was scheduled to be the last session of the virt microconf.

Given this, I would say the microconf went quite well: the notes and slides are up at the LPC 2012 virt microconf wiki, and we got good discussions going for most of the topics, given the time constraints. Of course, a major benefit of going to conferences is meeting people outside of the sessions, in the hallways and at social events, and the discussions continued there as well. I had, in fact, banked on this extra time when weighing the ‘reject vs. take all of them’ question mentioned earlier. From what I heard, the beer at the social events failed to stop technical discussions, so it all worked out for the best.

Each microconf owner (or a representative) had to do a short summary at the end of the LPC, for the benefit of the people not present for some sessions. I did the virt summary in roughly these words:

We had a quite productive virtualization microconference. We received a lot of submissions and accepted them all, which meant we had to limit the time for each discussion in the slots; but we could divide the slots by general topic, effectively increasing the discussion time for each larger topic.

We had a healthy representation from the KVM as well as Xen sides. For example, in the MM topic, we discussed NUMA awareness for KVM as well as Xen. Dario Faggioli presented the Xen side, and Andrea Arcangeli spoke on the Linux/KVM side, about AutoNUMA. It has been contentious on the mailing lists, but from the Kernel Summit discussions, it looked like some agreement would be reached soon. Xen uses an approach similar to AutoNUMA, and they would end up pushing their patches soon as well. Daniel Kiper spoke about integrating the various balloon drivers in the kernel to remove code duplication.

Both AMD and Intel publicly announced new hardware features for interrupt virtualization for the first time here, and it was interesting to see them compare notes and find out what the other is doing and how: for example, do they support the IOMMU? x2APIC? And so on.

New ARM architecture support work was presented by Marc Zyngier for the KVM effort, and by Stefano Stabellini for the Xen effort. Much of the work seems to be done, and patches are in good shape to be applied in the next merge window. A few open issues remain, and they were discussed as well.

We had quite a few talks in the networking session. Alex Williamson spoke about VFIO, which got mentioned a lot throughout the conference in multiple sessions. This is a new way of doing device assignment, and progress looks positive, with the kernel side already merged in 3.6, and qemu patches queued up for 1.3. Alex Graf then talked about ‘semi-assignment’, a way to do device assignment (or PCI passthrough) while also getting proper migration support. The effort involves writing device emulation for each supported device, and the approach wasn’t too popular. IBM and Intel folks have been doing virtio-net scalability testing, and John Fastabend spoke about some optimisations, which were generally well received. We should expect patches and more benchmarks soon. Vivek Kashyap spoke about network overlays, and how tunnelling VM network traffic can help with VM migration across networks.

We also had a session on security, by Paul Moore, who gave an overview of the various methods to secure VMs, specifically the new seccomp work.

Lastly, we had Bharata Rao talk about introducing a glusterfs backend for qemu, replacing qemu’s block drivers, which gives more flexibility in handling disk storage for VMs.

The organisers are collecting feedback, so if you were there, be sure to let them know of your experience, and what we could do better in the coming years.

The GNOME default of ‘hibernate’ (suspend-to-disk) on very low battery power isn’t optimal for many laptops: hibernate is known to be broken on several hardware setups, frequently results in file system corruption, and just causes pain. That, combined with the weird behaviour of the GNOME power manager putting the system into hibernate even when the battery isn’t low, annoyed me enough to go hunting for a way to change the default.

The GUI doesn’t expose a ‘sleep’ setting; it only offers hibernate and shutdown. So here’s a tip to make the system go to the sleep state (suspend to RAM) instead, which is a much better-behaved default for me.

Install dconf-editor, and go to

org.gnome.settings-daemon.plugins.power

and modify the

critical-battery-action

to suspend.
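Alternatively, the same key can be set from a terminal with gsettings:

gsettings set org.gnome.settings-daemon.plugins.power critical-battery-action 'suspend'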

For the curious, the weird behaviour of the GNOME power manager I mentioned above is noted in these bug reports:

Updating a Fedora 16 guest to Fedora 17 via preupgrade gave me the ‘Oh no, something has gone wrong!’ screen at GDM login. That screen is quite frustrating, because you can’t switch to a virtual terminal for troubleshooting, or even reboot or shutdown.

To send the key sequence Ctrl+Alt+F2 to the guest to switch to a virtual terminal, use the qemu monitor by pressing

Ctrl+Alt+2

and use sendkey to send the key sequence:

(qemu) sendkey ctrl-alt-f2

Then go back to the guest window by issuing

Ctrl+Alt+1
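If the guest runs under libvirt, virsh can inject the same key sequence without touching the qemu monitor (the guest name here is illustrative):

virsh send-key f17-guest KEY_LEFTCTRL KEY_LEFTALT KEY_F2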

After logging in as root, I poked around in the GDM log files in /var/log/gdm/ and saw that the fprint daemon was causing some errors. Removing the fprintd package fixed this, but that’s just a workaround, not a solution.

Some devices, like the Galaxy Nexus and the HP Touchpad* (via the custom Android ROMs), don’t expose themselves as USB storage devices. They instead use MTP or PTP to transfer media files (which limits what the device exposes to photos and audio/video files).

This happens because these devices have no separate sdcard, and ‘unplugging’ an sdcard from a running device to expose it to the connected computer could cause running apps on the device to malfunction. Android developer Dan Morrill explains this here. He also mentions why the Nexus S doesn’t have this problem.

There are several apps that can open shares on the device using one of several protocols (DAV, SMB, etc.). However, one quick way I’ve found to copy files between a computer and a USB-connected device is the adb tool. It’s available as part of the ‘android-tools’ package on Fedora.

To copy a file from the computer to an Android device connected via USB, use this:

adb push /path/to/local/file /mnt/sdcard/path/to/file

This will copy the local file to the device in the specified location. Directories can be created on the device via the shell:

adb shell

and using the usual shell commands to navigate around and create directories.
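Copying in the other direction uses adb pull, and one-off commands can be passed to adb shell directly; the paths here are just examples:

adb pull /mnt/sdcard/path/to/file /path/to/local/file
adb shell mkdir /mnt/sdcard/new-directory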

* On the Touchpad, WebOS can expose the storage as a USB Storage Media. The current nightly builds of CM9 can’t.

I moved this blog a while back from Blogger to WordPress. I was looking to move away from Blogger/Blogspot, to something self-hosted. I had come up with the following list to make the move seamless (for me as well as regular visitors):

Ability to use custom domains: Since I used blogger’s custom domains feature to redirect the blogger/blogspot links to my domain, I wanted to retain that functionality

Make the move seamless to site visitors

Preserve links and link structure. All earlier links, rss feeds, etc., should continue to work as they did with the earlier setup (helps in maintaining search engine rankings)

No dependence on 3rd-party servers/software for leaving comments: Some blogging platforms are simple and minimal; they however end up using other services for comments on blog posts. I didn’t want that: all the content should be on one server, without users needing any sort of registration elsewhere.

Easy to manage the software: Shouldn’t be too time-consuming to keep the blog up

Red Hat’s OpenShift PaaS had just announced support for domain aliases for applications, so I started looking at what would be involved in moving the blog to their platform.

Read on for my experiences and details on deploying this WordPress blog on OpenShift.

I had already played with OpenShift a bit, and loved their workflow of deploying apps using git. Deploying a wordpress install on OpenShift would mean I wouldn’t have to manage my own servers, operating systems, software updates, etc. It’s all on the stable and secure RHEL platform, with PHP managed by the RHEL team. So all I would need to worry about is the wordpress installation itself. As long as I routinely checked for security updates to wordpress, and pushed those updates to the site, I should be doing OK.

So I created a new php-5.3 app using ‘rhc-create-app’. mysql is needed for the database, so I also added an instance to the app with the command

rhc-ctl-app -e add-mysql-5.1 -a <appname>

To manage the mysql instance, a phpmyadmin cartridge is desirable too:
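The command follows the same pattern as adding mysql; the phpmyadmin cartridge version here is an assumption, so check the list of available cartridges for the exact name:

rhc-ctl-app -e add-phpmyadmin-3.4 -a <appname>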

I had used both log. and www. subdomains for the blog, so let’s continue using both so that both domains keep working. Of course, I changed the DNS CNAME entries for www. and log. over to <appname>-<domainname>.rhcloud.com via my name provider’s site.

Next, using the admin credentials for the mysql instance, I created a new db and a new user, and gave the user all permissions on that db. All this is quite simple using the phpmyadmin interface.

That’s it, all set with the app on OpenShift.

I then downloaded the latest wordpress release (3.2.1 at the time) zip file and extracted the files into a local directory.

Now here’s where I started using the power of git and OpenShift: I created a git repo in the wordpress directory, added all the files to it, and made an initial commit. This is the base from which I’ll use wordpress. New wordpress releases can be copied into this directory, with a new commit mapping to each upstream release. Any modifications I make to my wordpress installation (e.g. theme changes) are tracked in another branch in the same directory, with that branch being rebased on top of the latest release (the master branch).
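Roughly, the workflow looks like this (the branch name and the second version number are illustrative):

cd ~/src/wordpress
git init
git add .
git commit -m 'WordPress 3.2.1'   # master tracks pristine upstream releases
git checkout -b local-changes     # theme changes etc. live on this branch

# when a new upstream release arrives:
git checkout master
# ...extract the new release's files over this directory...
git add -A
git commit -m 'WordPress 3.2.2'
git checkout local-changes
git rebase master                 # replay local changes on the new release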

With this setup, I can just copy the contents of this directory into my app’s php directory and push the changes to OpenShift. The ‘php’ directory is where all the app code resides. I added all the files to the app’s git repo and committed the result. I then created the wp-config.php file as a copy of the wp-config-sample.php file, modified it to suit my installation, committed the change, and also added the file to the other wordpress directory created in the first step above. Then I just pushed the changes, the app was live on the cloud, and I could get started with wordpress’s wizard-based installation.

Now here’s one oddity of hosting apps on OpenShift: the app directory isn’t writable, or rather, isn’t the place where the app itself can make changes and assume they’d be preserved (I think this is a good thing). Since the app is deployed via git, any content written to the server app directory can be lost on the next git push. For wordpress, this means the ‘uploads’ directory has to be given a place where images, etc., can be uploaded without problems.

The OpenShift people have helpfully given us some environment variables and hooks in the app deployment process, which can be used to do this right.

The default wordpress uploads directory is ‘wp-content/uploads’. We can continue using this directory, with the following snippet placed in ‘.openshift/action_hooks/build’:
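A sketch of that snippet (OPENSHIFT_DATA_DIR and OPENSHIFT_REPO_DIR are environment variables OpenShift sets for each app):

#!/bin/bash
# Keep uploads in the persistent data directory, and symlink them into
# the transient app tree on every build.
mkdir -p "$OPENSHIFT_DATA_DIR/uploads"
rm -rf "$OPENSHIFT_REPO_DIR/php/wp-content/uploads"
ln -s "$OPENSHIFT_DATA_DIR/uploads" "$OPENSHIFT_REPO_DIR/php/wp-content/uploads"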

This ensures the ‘wp-content/uploads’ location is available for wordpress to put stuff into, and it also ensures the content goes into a place where OpenShift will not destroy the data on the next git push.

OK, having done all this, I was now ready to import my older blog posts. I installed the blogger-to-wordpress and livejournal-to-wordpress plugins (well, since I was doing this, I thought I might as well import my older lj entries), git push’ed them, and did the import from the web interface.

Comments from livejournal entries and some blogger posts didn’t get fetched. I don’t know why that happened. I tried the import a couple more times, but those posts didn’t show up. I just decided to not bother about that; if there was any frequently-visited post, I could always go back and import it by hand. Since I didn’t expect to do any more imports, I removed those plugins and pushed the result again.

There is a blogger-to-wordpress redirect plugin, but that plugin does a lot more than just redirecting: it imports images uploaded to blogger or picasaweb in the blogger posts, generates a blogger template to redirect blogger posts to wordpress, maps blogger posts to wordpress posts, etc. Most of this functionality is one-time; importing pictures, generating the blogger template for redirection, etc., don’t need to be present all the time (can’t be too careful with php apps and security). I used the plugin to import all the blogger/picasaweb pictures it could fetch, and then removed it as well.

I then enabled wordpress’s custom URL structure, which allows blogger-like post URLs, with the year and month as well as the post title in the URL. Enabling this needs .htaccess modifications, which wordpress can’t make directly in our setup (because it can’t write to the app directory). So I created a new .htaccess file in the php/ dir of the OpenShift app repo and included the snippet wordpress helpfully tells you it would have added if the directory were writable (the code is in the snippet below).
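This is the standard rewrite block wordpress suggests for pretty permalinks, reproduced from memory; verify it against what your wordpress version displays:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress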

I also took some hints from the blogger-to-wordpress plugin and created a minimal plugin that maps blogger URLs to wordpress URLs, and installed this plugin.

Next up was ensuring the older feeds kept working, and also that the contents of the wp-config file and directory listings weren’t displayed. I also searched for some wordpress hardening tips, and compiled a fun-looking .htaccess file; a snippet is included below:
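A flavour of it rather than the exact file (the Blogger-style feed path is illustrative, and the access-control stanza uses Apache 2.2 syntax):

# no directory listings
Options -Indexes

# keep the wordpress config file private
<Files wp-config.php>
Order allow,deny
Deny from all
</Files>

# keep old Blogger-style feed URLs working
RewriteEngine On
RewriteRule ^feeds/posts/default/?$ /feed/ [L,R=301]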

I also installed the WP-Piwik and Smart-404 plugins. WP-Piwik adds the Piwik javascript code, which gives me a summary of visits to the site and the search keywords people use to land on it. More on Piwik and its setup in a follow-up blog post. Smart-404 shows, on the 404 page, a list of pages with titles similar to the requested URL; I had noticed a few 404 page hits via Piwik.

I’ve enabled the Akismet plugin that comes with the wordpress distribution, and it has flagged over 600 comments as spam so far, with just 2 false positives. That’s impressive, but I intend to look into this further:

Is there a way to reduce spam comments?

Why do wordpress sites get spammed so much?

What I’ve seen so far is that people search for specific terms on the ‘net, land on some post, and post their spam comment. So these are actual humans, not bots. Since they’re investing enough effort into finding blogs and adding comments, spam-prevention techniques like CAPTCHAs aren’t going to work all the time. Akismet is working fine so far, so I’ll continue using it, but I’m going to think about, and search for, ways to mitigate spam.

Overall, the move was really painless and done within a weekend; most of the time was spent learning about WordPress and moving the existing posts to the new blog. There were hardly any OpenShift issues; it stayed nicely out of the way, and I really like that about the platform.

I still haven’t figured out a way to map Blogger labels to WordPress Categories/Tags; these are new concepts (to me), and I’ll probably get something done here with some more .htaccess trickery.

Apparently my initial submission was about 3x longer than the average article on opensource.com. I’ve covered events running up to the conference on this blog, and with the osdc article, I’ve covered the conf as well. There still might be a few things left which I’ll post about here in the coming days.

My second talk at FUDCon Pune was on Virtualization (slides), on day 2. While I had registered the talk well in advance, I wasn’t quite sure what to talk about: should I cover the basics of virtualization? What’s latest (coming up in Fedora 16)? How KVM works, in detail? My first talk, on git, had gone well, and as expected for this FUDCon, the majority of the participants were students. Expecting a similarly student-heavy audience for the second talk, I decided on discussing the basics of the Linux virt stack. Kashyap had a session on libvirt lined up after mine, so I thought I could give an overview of virt-manager, libvirt, QEMU and Linux (KVM).

And since my registered talk title was ‘Latest in Linux Virtualization’, I did leave a few slides on upcoming enhancements in Fedora 16 (mostly concentrating on the QEMU side of things) at the end of the slide deck, to cover those things if I had time left.

As with the previous git talk, I didn’t get around to making the slides and deciding on the flow of the talk until the night before, which left me with much less sleep than normal. The video of the talk is available online; I haven’t seen it myself, but if you do, you’ll find I was almost sleep-talking through the session.

To make it interactive, as well as to keep myself awake, I asked the audience to stop me and ask questions at any time during the talk. What was funny about that: the talk was also being live-streamed, and the live stream’s audio was carried on one mic, while the in-room audience and the recorded talk were on a different mic. So even though audience questions were asked on the audience mic, I had to repeat each question for the people catching the talk live.

I got some feedback later from a few people: I forgot to introduce myself, and I should have put some performance graphs in the slides, since almost all users would be interested in KVM’s performance vs. other hypervisors. Both good points. I hadn’t thought about performance slides earlier; I’ll try to incorporate such graphs in future presentations. Interestingly, I also hadn’t thought of introducing myself: previously, I was used to someone else introducing me and then picking up from there. At this FUDCon, we (the organisers) failed to collect speaker bios, and didn’t have volunteers introduce each speaker before their sessions. So no matter which way I look at it, I take the blame, as speaker and organiser, for not having done this.

There was some time before my session started, and there were a few people already in the auditorium (the room where the talk was to be held), so Kashyap thought of playing some Fedora / FOSS / Red Hat videos. (People generally like the Truth Happens video, and that one was played as well.) These, and many more, are available on the Red Hat Videos channel on YouTube. There was also some time between my session and Kashyap’s (to allow people to move around, take a break, etc.), so we played the F16 release video that Jared gave us.

Overall, I think the talk went quite well (though I may have just dreamed that). I tried to stay awake for Kashyap’s session on libvirt to answer any questions directed my way; I know I did answer a couple of them, so I must have managed to stay up.