The XO-1.75 is based on the Marvell Armada 610 SoC (armv7l, non-NEON), which promises countless hours of fun enumerating and obtaining the obscure pieces of software needed to make the laptop work.

One of these is the xf86-video-dove DDX for the Vivante(?) GPU: the most recent version, 0.3.5, seems to be available only as an SRPM in the OLPC rpmdropbox. Extracting it reveals a "source" tarball containing this:

At the Gentoo Miniconf 2012 in Prague we will install Gentoo on the OLPC XO-1.75, an ARM-based laptop designed as an educational tool for children. If you are interested in joining us, come to the Gentoo booth and start hacking with us!

As the quantity and quality of this year's entries will attest, Gentoo is
alive, well, and taking no prisoners!

We had 70 entries for the 2012 Gentoo screenshot contest, representing 11 different window managers / desktop environments. Thanks to all who participated, to the judges, and to likewhoa for the screenshot site.

In the last few weeks, the conference team has worked hard to prepare the conference. The main news items you should be aware of are the FAQ, which has been published; the party locations and times; the call to organize BoF sessions; and of course the sponsors who help make the event possible. We’re also happy to tell you that we will provide live video streams from the main rooms during the event (!!!), and we have announced the Round Table sessions during the Future Media track. Last but not least, there have been some interviews with interesting speakers in the schedule!

Speaking of video interviews, there will be more videos in those channels: the openSUSE video team is gearing up to tape the talks at the event. They will even provide a live stream of the event, which you can watch via Flash and on a smartphone at Bambuser, and via these three ogv feeds: Room Kirk, Room McCoy and Room Scotty. Keep an eye on the wiki page, as the team will add feeds for more rooms if we can get some more volunteers to help us out.

Despite all our work, this event would be nothing without YOUR help. We’re still looking for volunteers to sign up but there’s another thing we need you for: be pro-active and get the most out of this event! That means not only sitting in the talks but also stepping up and participating in the BoF Sessions. And organize a BoF if you think there’s something to discuss!

Party time!

Of course, we’re also thinking about the social side of the event. Yes, there will surely be an extensive “hallway track” as we feature a nice area with booths and the university has lots of hallways… But sometimes it’s just nice to sit down with someone over a good beer, and this is where our parties come in. As this article explains, there will be two parties: one on Friday, as warming-up (and pre-registration) and one on Saturday, rockin’ in the city center of Prague. Note that you will need your badge to enter this party, which means you have to be registered!

Sponsors

As we wrote a few days ago, all this would not be possible without our sponsors, and we’d like to thank them A LOT for their support!

Big hugs to Platinum Sponsor SUSE, Gold Sponsor Aeroaccess, Silver Sponsor Google, Bronze Sponsor B1Systems, supporters ownCloud and Univention and of course our media partners LinuxMagazine and Root.cz. Last but not least, a big shout-out to the university which is providing this location to us!

FAQ

On a practical level, we also published our Conference FAQ answering a bunch of questions you might have about the event. If you weren’t sure about something, check it out!

More

There will be more news in the coming days, be sure to keep an eye on news.opensuse.org for articles leading up and of course during the event. As one teaser, we’ve got the Speedy Geeko and Lightning talks schedule coming soon!

Be there!

Gentoo Miniconf, oSC12 and LinuxDays will take place at the Czech Technical University in Prague. The campus is located in the Dejvice district and is next to an underground station that gets you directly to the historic city center – an opportunity you can’t miss!

We expect to welcome about 700 open source developers, testers, usability experts, artists and professional attendees to the co-hosted conferences, working together to make one big, smashing event! Admission to the conference is completely free. However, for oSC a professional attendee ticket is available that offers some additional benefits.

All the co-hosted conferences will start on October 20th. Gentoo Miniconf and LinuxDays end on October 21st, while the openSUSE Conference ends on October 23rd. See you there!

It’s been a while since I’ve done anything visible to anyone but myself. So, what the heck have I been doing?

Well, for starters, in the past year I’ve done a serious amount of work in Python. This work was one of the reasons for my lack of motivation for Gentoo: I went from doing little programming / maintenance at work to doing it 40+ hours a week, which meant I didn’t really feel up to doing more of it in my limited spare time. So I took up a few new hobbies. I got into photography (feel free to look under links for the photo website). I feel weird with the self-promotion for that type of thing, but c’est la vie.

As the programming at work died down some, I started to find odd projects. I spent some serious time learning Go [1] and did a few small projects of my own in it. One of those projects will be open sourced soon. I know a fair few different languages, and I know C, Python, and Java pretty decently. While I like all of the ones on that list, I can’t say that I truly buy into their philosophies. Python is great: it’s simple, it’s clean, and it “just works.” However, I find that, like OpenSSL, it gives you enough room to hang yourself and everyone else in the room. The lack of strict typing, coupled with the fact that it’s a scripting language, are downsides (in my eyes). C, for all that it is awesome at low-level work, requires so much verbosity to accomplish the simplest tasks that I tend to shy away from it for anything other than what must be done at that level. Java… is, well, Java. It’s a decent enough language, I suppose, but being run in a VM is silly in my eyes. It, like C, suffers from being too verbose as well (again, merely my humble opinion).

Enter Go. Go has duck typed interfaces, unlike Java’s explicit ones. It’s compiled and strictly typed. It has other modern niceties (like proper strings), along with a strong tie to web development (another area C struggles with). It has numerous interesting concepts (check out defer), along with what I find to be a MUCH better approach to error handling than what exists in any of C, Java, or Python. Add in that it is concurrent by design and you have one serious language. I must say that I am thoroughly impressed. Serious Kudos to those Google guys for one awesome language.

I also picked up a Nexus 7 and started looking into how Android is built and works. I got my own custom ROM and Kernel working along with a nice Gentoo image on the SD Card. Can anyone say “Go compiler on my Nexus 7?” This work also led me to do some work as far as getting Gentoo booting on Amazon’s Elastic Compute Cloud. Building Android takes for-freaking-ever, so I figured.. why not do it in the cloud!? It works splendidly, and it is fast.

So that covers new tricks. You mentioned goals and ideas?!

First, it’s time to get myself off the slacker wagon and back to doing something useful. I’m no longer repulsed by the idea of developing when I get home. That helps =p. One of the first things I want to spend some time addressing is disk encryption in Gentoo. I wrote here pertaining to the state of loop-aes. Both Loop-AES and Truecrypt need to spend a little time under the microscope as to how they should be handled within Gentoo. I’ll write more on this later when I have all my ducks in a row. I have no doubt that this will be a fun topic.

I also want to look into how a language like Go fits into Gentoo. Go has its own build system (no Makefiles, configure scripts, or anything else) that DOES have a notion of things like CFLAGS. It also has the ability to “go get” a package and install it. If you’re curious, check out the Go website. All of this leads to interesting questions from a package management point of view. I am inclined to think that Go is here to stay. I hope it is. So we may as well start looking into this now rather than later. As my father used to tell me all the time, “Proper Prior Planning Prevents Piss Poor Performance.” Time to plan =).

That is, right after I sort out the fiasco that is my bug queue. *facepalm*

my main gentoo workstation is down. no more documentation updates from me for a while.

it seems the desktop computer’s video card has finally bitten the dust. the monitor comes up as “no input detected” despite repeated reboots. so now i’m faced with a decision: throw in a cheap, low-end GFX card as a stopgap measure, or wash my hands of 3 to 6 years of progressive hardware failure, and do a complete rebuild. last time i put anything new in the box was probably back in 2009…said (dead) GFX card, and a side/downgraded AMD CPU. might be worth building an entirely new machine from scratch at this point.

i haven’t bothered to pay attention to the AMD-vs-Intel race for the last few years, so i’m a bit at a loss. i’ll check TechReport, SPCR, NewEgg, and all those sites, but…not being at all caught up on the bang-for-buck parts…is a bit disconcerting. i used to follow the latest trends and reviews like a true technoweenie.

and now, of course, i’m thinking in terms of what hardware lends itself to music production — USB/Firewire ports, bus latency, linux driver status for crucial bits; things like that. all very challenging to juggle after being out of it for so long.

Not that long ago we had our monthly Gentoo Hardened project meeting (on October 3rd to be exact). On these meetings, we discuss the progress of the project since the last meeting.

For our toolchain domain, Zorry reported that the PIE patchset is updated for GCC, fixing bug #436924. Blueness also mentioned that he will most likely create a separate subproject for the alternative hardened systems (such as mips and arm). This is mostly for management reasons (as the information is currently scattered throughout the Gentoo project at large).

For the kernel domain, since version 3.5.4-r2 (and higher), the kernexec and uderef settings (for grSecurity) should no longer impact performance on virtualized platforms (when hardware acceleration is used of course), something that has been bothering Intel-based systems for quite some time already. Also, the problem with guest systems immediately reserving (committing) all memory on the host should be fixed with recent kernels as well. Of course, this is only true as long as you don’t sanitize your memory, otherwise all memory gets allocated regardless.

In the SELinux subproject, we now have live ebuilds allowing users to pull in the latest policy changes directly from the git repository where we keep our policy. Also, we will see a high commit frequency in the next few weeks (or perhaps even months) as Fedora’s changes are being merged upstream. Another change is that our patchbundles no longer contain all individual patches, but one merged patch. This reduces the deployment time of a SELinux policy package considerably (it is up to 30% faster, since patching now takes only a second or less). And finally, the latest userspace utilities are in the hardened-dev overlay, ready for broader testing.

grSecurity is still focusing on the XATTR-based PaX flags. The eclass (pax-utils) has been updated, and we will now be looking at supporting the PaX extended attributes for file systems such as tmpfs.

For profiles, people will notice that in the next few weeks we will be dropping the (extremely) old SELinux profiles, as the current ones were marked stable a long time ago.

In the system integrity domain, IMA is being worked on (packages and documentation) after which we’ll move to the EVM support to protect extended attributes.

And finally, klondike held a good talk about Gentoo Hardened at the Flossk conference in Kosovo.

All in all a good month of work, again with many thanks to the volunteers that are keeping Gentoo Hardened alive and kicking!

Why this is needed

While testing Linux bridging I ran into a problem that took me much longer to figure out than I feel comfortable admitting: you cannot break out the VLANs from a physical device and also use that physical device (attached to a bridge) to forward the entire trunk to a set of VMs. The reason is that once Linux starts inspecting an interface for VLANs in order to split them out, it discards all the VLANs you have not defined, so you have to trick it.

Setup

I had my trunk on eth1. What you need to do is attach eth1 directly to a bridge (vmbr1). This bridge now has the entire trunk associated with it.
Here's the fun part: you can break out VLANs on the bridge itself, so you would have an interface for VLAN 13 named vmbr1.13, and you can then attach that to another bridge, allowing you to have a group of machines exposed only to VLAN 13.

The networking goes like this.

                /-> vmbr1.13 -> vmbr13 -> VM2
eth1 -> vmbr1 ----> VM1
                \-> vmbr1.42 -> vmbr42 -> VM3

Example

Here is the script I used with Proxmox (you can set up the bridges in Proxmox, but not the source of the bridges' data, the 'input').
It is for VLANs 1-13 and assumes you have Vyatta set up the target bridges. I had it start at boot (via rc.local).
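The original script isn't reproduced above, so here is a minimal sketch of what it presumably does, based on the description (interface and bridge names follow the diagram; treat the exact commands as an assumption, not the author's script):

```shell
#!/bin/sh
# Sketch of the VLAN breakout described above (illustrative, not the
# original script). eth1 carries the trunk and is already attached to
# vmbr1. For each VLAN, create a tagged sub-interface ON THE BRIDGE
# (not on eth1, which would strip the trunk) and attach it to that
# VLAN's dedicated bridge (vmbr1 .. vmbr13, pre-created elsewhere).
for VLAN in $(seq 1 13); do
    ip link add link vmbr1 name "vmbr1.${VLAN}" type vlan id "${VLAN}"
    ip link set "vmbr1.${VLAN}" up
    brctl addif "vmbr${VLAN}" "vmbr1.${VLAN}"
done
```

Breaking out on vmbr1 rather than on eth1 is the trick: the full trunk stays visible to VMs attached to vmbr1, while each vmbr1.N feeds a per-VLAN bridge.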

The Keynote speaker for the Bootstrapping Awesome co-hosted conferences is going to be Agustin Benito Bethencourt. Agustin is currently working in Nuremberg, Germany as the openSUSE Team Lead at SUSE, and in the Free Software community he’s mostly known for his contributions to KDE and especially in the KDE eV. He is a very interesting guy, with a lot of experience about FOSS both from the community and the enterprise POV, which is also the reason I asked him to do the Keynote. I enjoy a lot working with him on organizing this conference, his experience is valuable. In this interview he talks a bit about himself, and a lot about the subject of his Keynote, the conference, openSUSE and SUSE, and about Free Software. The interview was done inside the SUSE office in Prague, with me being the “journalist” and Michal being the “camera-man”. Post-processing was done by Jos. More interviews from other speakers are about to come, so stay tuned! Enjoy!

The recently approved EAPI 5 adds a feature called "slot-operator dependencies" to the package manager specification. Once these dependencies are implemented in the portage tree, the package manager will be able to automatically trigger package rebuilds when library ABI changes occur. Long-term, this will greatly reduce the need for revdep-rebuild.
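As a sketch, a slot-operator dependency in an ebuild looks like this (the package atoms are illustrative, chosen to match the V8/Chromium example below):

```shell
# Fragment of a hypothetical EAPI 5 ebuild using slot-operator deps.
EAPI=5

# The ':=' operator records the slot/sub-slot of dev-lang/v8 that this
# package was built against. When v8 moves to a new sub-slot (i.e. its
# ABI changes), the package manager knows this package must be rebuilt.
DEPEND="dev-lang/v8:="
RDEPEND="${DEPEND}"
```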

If you are a Chromium user on Gentoo and you don't use portage-2.2, you have probably noticed that we are using the "preserve_old_lib" kludge so that your web browser doesn't break every time you upgrade the V8 JavaScript library. This leaves old versions of V8 installed on your system until you manually clean them up. With slot-operator deps, we can eliminate this kludge, since portage will have enough information to know it needs to rebuild chromium automatically. It's pretty neat.

I have forked the dev-lang/v8 and www-client/chromium ebuilds into my overlay to test this new feature; we can't really apply it in the main portage tree until a new enough version of portage has been stabilized. I will be maintaining the latest chromium dev channel release, plus a couple of versions of v8 in my overlay.

If you would like to try it out, you can install my overlay with layman -a floppym. Once you've upgraded to the versions in my overlay, upgrading/downgrading dev-lang/v8 should automatically trigger a chromium rebuild.

I originally posted the question on gentoo-hardened ML, but Sven Vermeulen advised me to file a bug, so there it is: bug #436474.

The problem I hit is that my ~/.config/chromium/ directory should have the unconfined_u:object_r:chromium_xdg_config_t context, but it has unconfined_u:object_r:xdg_config_home_t instead. I could manually force the "right" context, but it turned out that even removing the directory in question and allowing the browser to re-create it still results in the wrong context. It looks like something deeper is broken (maybe just on my system), and fixing the root cause is always better. After all, other people may hit this problem too. Here are the error messages that appear on chromium launch:
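Independent of the specific error messages, this is roughly how one can inspect and reset the context while debugging an issue like this (a sketch; restorecon relabels according to the loaded policy's file contexts):

```shell
# Show the current SELinux context of the directory (the -Z column).
ls -dZ ~/.config/chromium/

# Ask the policy what context it thinks the path should get.
matchpathcon ~/.config/chromium

# Relabel recursively per the active policy; with a correct policy this
# should yield chromium_xdg_config_t, per the expectation above.
restorecon -Rv ~/.config/chromium/
```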

All common questions regarding travelling, transportation, event details, sightseeing and much more are answered in this Frequently Asked Questions page. Feel free to ask more questions, so we can include them in the FAQ and make it more complete.

I was updating one of my boxens and ran into bug 434686. In the bug, Martin describes a simple way we as users can apply patches to packages that fail to build. This post is, more than anything, a reminder for me on how to do it. epatch_user has been blogged about before; dilfridge talks about it and calls it "a neat trick for testing patches in Gentoo (source-based distros are great!)".

As Martin explained in the bug and with the patch supplied by Liongene, here is how it works!
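In short, for ebuilds that call epatch_user you drop your patch under /etc/portage/patches/<category>/<package> and re-emerge. A sketch of the workflow (the package and patch name are placeholders):

```shell
# Put the user patch where epatch_user will look for it.
mkdir -p /etc/portage/patches/www-client/chromium
cp fix-build.patch /etc/portage/patches/www-client/chromium/

# Rebuild just this package; the ebuild's epatch_user call picks up
# and applies everything in that directory during src_prepare.
emerge --oneshot www-client/chromium
```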

I've just updated the text on the Gentoo Wiki page on Ruby 1.9 to indicate that we now support eselecting ruby19 as the default ruby interpreter. This has not been tested extensively, so there may still be some problems with it. Please open bugs if you run into problems.
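Switching the default interpreter is a one-liner; roughly:

```shell
# List the available Ruby interpreters and see which is the default.
eselect ruby list

# Make Ruby 1.9 the system default (updates /usr/bin/ruby and friends).
eselect ruby set ruby19

# Verify which interpreter now answers.
ruby --version
```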

Most packages are now ready for ruby 1.9. If your favorite packages are not ready yet, please file a bug as well. We expect to make ruby 1.9 the default ruby interpreter in a few months time at the most. Your bug reports can help speed that up.

On a related note, we will be masking Ruby Enterprise Edition (ree18) shortly. With Ruby 1.9 now stable and well-supported, we no longer see the need to also provide Ruby Enterprise Edition; this is also upstream's advice. On top of that, the last few releases of ree18 never worked properly on Gentoo due to threading issues, and these are currently already hard-masked.

Since we realize people may depend on ree18 and migration to ruby19 may not be straightforward, we intend to move slowly here. Expect a package mask within a month or so, and instead of the customary month we probably won't remove ree18 until after three months or so. That should give everyone plenty of time to migrate.

In portage-2.1.11.22 and 2.2.0_alpha133 there’s support for the experimental EAPI 5-hdepend, which adds the HDEPEND variable, used to represent build-time host dependencies. For build-time target dependencies, use DEPEND (if the host is the target, then both HDEPEND and DEPEND will be installed on it). There’s a special "targetroot" USE flag that will be automatically enabled for packages that are built for installation into a target ROOT, and will otherwise be automatically disabled. This flag may be used to control conditional dependencies, and ebuilds that use this flag need to add it to IUSE unless it happens to be included in the profile’s IUSE_IMPLICIT variable.
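A sketch of how an ebuild might use this (the package atoms and the gated dependency are purely illustrative):

```shell
# Hypothetical EAPI 5-hdepend ebuild fragment.
EAPI=5-hdepend

IUSE="targetroot"

# Tools that must run on the HOST during the build (e.g. a code
# generator invoked by the build system):
HDEPEND="dev-util/some-codegen-tool"

# Build-time dependencies for the TARGET. 'targetroot' is toggled
# automatically by the package manager, so it can gate dependencies
# that only matter when installing into a separate target ROOT.
DEPEND="dev-libs/some-lib
	targetroot? ( dev-libs/target-only-lib )"
```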

Second, I’d like to introduce a few enhancements I’ve made on these (some being merged upstream already).

Third, I’d like to turn this into a bit of a tutorial into getting started with EC2 as well since these scripts make it brain-dead simple.

I’ve previously written on building a Gentoo EC2 image from scratch, but those instructions do not work on EBS instances without adjustment, and they’re fairly manual. Edowd extended this work by porting to EBS and writing scripts to build a gentoo install from a stage3 on EC2. I’ve further extended this by adding a rudimentary plugin framework so that this can be used to bootstrap servers for various purposes – I’ve been inspired by some of the things I’ve seen done with Chef and while that tool doesn’t fit perfectly with the Gentoo design this is a step in that direction.

What follows is a step-by-step howto that assumes you’re reading this on Gentoo and little else, and ends up with you at a shell on your own server on EC2. Those familiar with EC2 can safely skim over the early parts until you get to the git clone step.

To get started, go to aws.amazon.com, and go through the steps of creating an account if you don’t already have one. You’ll need to specify payment details/etc. If you buy stuff from amazon just use your existing account (if you want), and there isn’t much more than enabling AWS.

Log into aws.amazon.com, and from the top right corner drop-down under either your name or My Account/Console choose “Security Credentials”.

Browse down to access credentials, click on the X.509 certificate tab, generate a certificate, and then download both the certificate and private key files. The web services require these to do just about anything on AWS.

On your gentoo system run as root emerge ec2-ami-tools ec2-api-tools. This installs the tools needed to script actions on EC2.

Export EC2_CERT and EC2_PRIVATE_KEY into your environment (likely via .bashrc). These should contain the paths to the files you downloaded in the previous step. Congratulations – any of the ec2-api-tools should now work.
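For example, in ~/.bashrc (the filenames here are placeholders; use whatever names your downloaded certificate and key actually have):

```shell
# Point the EC2 tools at the credential files downloaded earlier.
# The filenames below are examples, not the real generated names.
export EC2_CERT="$HOME/.ec2/cert-ABCDEF.pem"
export EC2_PRIVATE_KEY="$HOME/.ec2/pk-ABCDEF.pem"
```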

We’re now going to checkout the scripts to build your server. Go to an empty directory and run git clone git://github.com/rich0/rich0-gentoo-bootstrap.git -b rich0-changes.

chdir to the repository directory if necessary, and within it run ./setup_build_gentoo.sh. This creates security zones and ssh keys automatically for you, and at the end outputs command lines that will build a 32 or 64 bit server. The default security zone will accept inbound connections to anywhere, but unless you’re worried about an ssh zero-day that really isn’t a big deal.

Run either command line that was generated by the setup script. The parameters tell the script what region to build the server in, what security zone to use, what ssh public key to use, and where to find the private key file for that public key (it created it for you in the current directory).

Go grab a cup of coffee – here is what is happening:

A spot request is created for a half decent server to be used to build your gentoo image. This is done to save money – amazon can kill your bootstrap server if they need it, and you’ll get the prevailing spot rate. You can tweak the price you’re willing to pay in the script – lower prices mean more waiting. Right now I set it pretty high for testing purposes.

The script waits for an instance to be created and boot. The build server right now uses an amazon image – not Gentoo-based. That could be easily tweaked – you don’t need anything in particular to bootstrap gentoo as long as it can extract a stage3 tarball.

A few build scripts are scp’ed to the server and run. The server formats an EBS partition for gentoo and mounts it.

A stage3 and portage snapshot are downloaded and extracted. Portage config files (world, make.conf, etc) are populated. A script is created inside the EBS volume, and executed via chroot.

That script basically does the typical handbook install: emerge --sync, update world (which has all the essentials in it, like dhcpcd and so on), build a kernel, configure rc files, etc.

The bootstrap server terminates, leaving behind the EBS volume containing the new gentoo image. A snapshot is created of this image and registered as an AMI.

A micro instance of the AMI is launched to test it. After successful testing it is terminated.

After the script is finished check the output to see that the server worked. If you want it outputs a command line to make the server public – otherwise only you can see/run it.

To run your server go to aws.amazon.com, sign in if necessary, browse to the EC2 dashboard. Click on AMIs on the left side, select your new gentoo AMI, and launch it (micro instances are cheap for testing purposes). Go to instances on the left side and hit refresh until your instance is running. Click on it and look down in the details for the public DNS entry.

To connect to your instance run ssh -i <path to pem file in your bootstrap directory> ec2-user@<public DNS name of your server>. You can sudo to root (no password).

That’s it – you have a server in the cloud. When you’re done be sure to clean up to avoid excessive charges (a few cents an hour can add up). Check the instances section and TERMINATE (not stop) any instances that are there. You will be billed by the month for storage so de-register AMIs you don’t need and go to the snapshot section and delete their corresponding snapshots.

Now, all that is useful, but you probably want to tailor your instance. You can of course do that interactively, but if you want to script it check out the plugins in the plugin directory. Just add a path to a plugin file at the end of the command line to build the instance and it will tailor your image accordingly. I plan to clean up the scripts a bit more to move anything discretionary into the plugins (you don’t NEED fcron or atop on a server).

The plugins/desktop plugin is a work in progress, but I think it should work now (takes the better part of a day to build). It only works 32-bit right now due to the profile line. However, if you run it you should be able to connect with x2goclient and have a KDE virtual desktop. A word of warning – a micro instance is a bit underpowered for this.

And on a side note, if somebody could close bugs 427722 and 423855 that would eliminate two hacks in my plugin. The stable NX doesn’t work with x2go (I don’t know if it works for anything else), and the stable gst-plugins-xvideo is missing a dependency. The latter bug will bite anybody who tries to install a clean stage3 and emerge kde-meta.

All of this is very much a work in progress. Patches or pull requests are welcome, and edowd is maintaining a nice set of up-to-date gentoo images for public use based on his scripts.

EAPI 5 includes support for automatic rebuilds via the slot-operator and sub-slots, which has the potential to make @preserved-rebuild unnecessary (see Diego’s blog post regarding symbol collisions and bug #364425 for some examples of @preserved-rebuild shortcomings). Since this support for automatic rebuilds has the potential to greatly improve the user-friendliness of preserve-libs, I have decided to make preserve-libs available in the 2.1 branch of portage (beginning with portage-2.1.11.20). It’s not enabled by default, so you’ll have to set FEATURES="preserve-libs" in make.conf if you want to enable it. After EAPI 5 and automatic rebuilds have gained widespread adoption, I might consider enabling preserve-libs by default.

Sep 13th I stabilized net-analyzer/munin-2.0.5-r1 (security bug #412881). I use automated repoman checks and USE="-ipv6", and everything was fine at the time I committed the stabilization (also, see no mention of net-server in that security bug).

Sep 18th the repoman fix was released in portage-2.1.11.18 and 2.2.0_alpha129. Now the only remaining thing to do is to push the portage/repoman fix to stable. I especially like how quickly the fix for the root cause (the repoman check) was produced and released.

There are thousands of guides out there on this subject, however I still struggled to set up an IPSEC VPN at first. This is a HOWTO for my own benefit – maybe someone else will use it too. I struggled because most of the guides involved setting up the VPN on a NAT’d host and connecting to the VPN inside the network. I didn’t do that on my linode, which has a static public IP.

My objectives were clear:

Create a connection point that was semi-secure while connecting to open wifi networks

Where 1.1.1.1 is your public eth0 address and 10.152.2.0 is the subnet that xl2tpd will assign IPs from (this can be anything; I picked it at the advice of a guide because it is unlikely to be assigned by a router on a public network)

Remember that sysctl.conf is only evaluated at boot, so run sysctl -p to apply the settings now as well.
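A minimal /etc/sysctl.conf fragment for this kind of setup might look like the following; ip_forward is the essential bit, and the redirect settings are ones most L2TP/IPSEC guides also toggle (treat the exact list as an assumption):

```shell
# /etc/sysctl.conf -- let the VPN host forward client traffic onward.
net.ipv4.ip_forward = 1

# Commonly recommended alongside it for L2TP/IPSEC gateways:
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
```

Apply immediately with `sysctl -p`, as noted above.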

Step 5: Configure firewall (iptables):
This is the critical step that I wasn’t grokking from the existing guides in the wild. Even when bringing the firewall down to test, you need the NAT/forwarding rules:
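A sketch of the kind of rules meant here, using the addresses from the earlier step (1.1.1.1 as the public eth0 address, 10.152.2.0/24 as the xl2tpd pool; this is illustrative, not the author's exact ruleset):

```shell
# NAT the VPN clients' traffic out through the public interface.
iptables -t nat -A POSTROUTING -s 10.152.2.0/24 -o eth0 -j MASQUERADE

# Forward client traffic out to the internet...
iptables -A FORWARD -s 10.152.2.0/24 -o eth0 -j ACCEPT

# ...and allow the replies back in to the clients.
iptables -A FORWARD -d 10.152.2.0/24 -i eth0 \
    -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Without the POSTROUTING rule the clients connect but have no onward connectivity, which matches the symptom described in the conclusion below.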

Conclusion: The above examples should be enough to get the VPN working. There are some tweaking opportunities that I didn’t document or elaborate on; there are plenty of examples out there to look at or research, however. When this was all set up without the firewall configuration, the client would connect but there would be no onward internet activity. It acted just as if an invalid DNS server were configured; at that point I looked into setting up a NAT, dnsmasq on the local interface, and other weird things. In the end, I just needed to forward the traffic properly.

As you have probably seen in the schedule, we have multiple rooms with ugly university names like 107, 155 or 349. We would like to rename them for the conference so people can remember them more easily. So exercise your creativity and send us some ideas!

Gentoo Miniconf: It will take place on Saturday and Sunday with a plethora of amazing talks by experienced Developers and Contributors, all around Gentoo, targeting both desktop and server environments!

On Saturday morning Fabian Groffen, Gentoo Council member, along with Robin H. Johnson, member of the Board of Trustees, will give us a quick view of how those two highest authorities manage the whole project. Afterwards there are going to be a few talks on various topics, like managing your home directory, the KDE team workflow, the important topic of security, and a benchmarking suite, all given by people important to the project. A cool Catalyst workshop will be next, followed by a workshop on Gentoo Prefix, and at the end we’re going to participate in BoFs on the Infrastructure and Gentoo PR, which will cover hot topics like the Git migration and our website.

On Sunday we’ll see how a large company (IsoHunt) uses Gentoo, the tools it has developed and the problems it has encountered. Then a cool talk about 3D games and graphics performance will take place, followed by a presentation on SHA1 and OpenPGP, which is the precursor to the Key Signing Party!! The second part of the Catalyst workshop is next, along with a Puppet workshop. At the end there are again two BoFs, the first about automated testing and the second about how we can attract more contributors and enlarge our cool project.

And a sneak peek on the other co-hosted conferences:

Future Media, which will be held on Saturday, is a special feature track about the influence of developments in technology, social media and design on society. It will have talks like the future of Wikipedia and open data in general by Lydia Pintscher, and using FOSS and open hardware for disaster relief by Shane Couglan.

On the first day of the openSUSE Conference, Michael Meeks will tell you all about what’s new in LibreOffice, Klaas Freitag will give everyone a peek under the hood of ownCloud, and for the more technical users, Stefan Seyfried will show you how to crash the Linux kernel for fun and backtraces. Saturday night there’ll be a good party, and the next day musician Sam Aaron will talk about Zen and how to live-program music like he did during the party. Later, Libor Pecháček will explain the process of getting software from the community into commercial enterprises, and at the end of the day Miguel Angel Barajas Watson will show us how a computer could win Jeopardy using SUSE, Power and Hadoop. The openSUSE event continues on Monday and Tuesday with many workshops and BoF sessions planned, as well as a few large-room discussions about the future of the openSUSE development and release process.

On Saturday the LinuxDays track features a number of Czech talks, like an introduction to Gentoo by Tomáš Chvátal with his talk titled “If it moves, compile it!” (‘Pokud se to hýbe, zkompiluj to!’). Fedora is represented by Jiří Eischmann & Jaroslav Řezník later in the day. There are also a few real ninja-style talks: Petr Baudiš on low-level programming and Thomas Renninger on modern CPU power usage monitoring (both in English). During Saturday there will also be a track of graphics workshops in Czech (Gimp, Inkscape, Scribus) followed by a 3D printing workshop (RepRap!). Sunday is kicked off by Vojtěch Trefný explaining how to use Canonical’s Launchpad as a place to host your project (CZ). Those interested in networking will be taken care of by Pavel Šimerda (news from Linux networking) and Radek Neužil, who explains how to use networks securely (both CZ). You can also learn all about how to set up a Linux desktop/server solution for educational purposes (EN) and follow Vladimír Čunát talking about NixOS and the unique package manager this OS is built on. The LinuxDays track will be closed by Petr Krčmář (chief editor of root.cz) and Tomáš Matějíček (author of Slax) talking about the future of Slax (CZ).

i just finished hacking on our XML for the month. several months ago, sven mentioned the changes needed to get the handbooks updated with initramfs/initrd instructions for separate /usr partitions. it took me a few hours, but i finally closed bug numbers 415175, 434550, 434554, and 434732. thanks to raúl for the patches.

i initially started putting in the patches as-is, but then i noticed that the initramfs descriptions were just copied from the x86+amd64 handbook. so, i stripped them out, and rewrote them as an included section common to all affected architecture handbooks. that <include> is then dynamically inserted by our XML processor, dropping the instructions into the appropriate place, so that there’s no extraneous text duplication.

that bit about include href="hb-install-initramfs.xml" fills in the next subsection with whatever we put in the hb-install-initramfs.xml include, which is never viewed by itself. little tricks like this make it much easier to maintain the documentation…we make one change to an include, and it’s propagated to all documents that use it. same goes for things like <keyval> — that variable is set elsewhere in our documentation, so that as kernel versions or ISO sizes change, we can update that value in one place (handbook-$ARCH.xml). every instance of the variable is automatically filled in when you view the handbook in your web browser.
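for illustration, here’s roughly what those two constructs look like in GuideXML. i’m writing this from memory rather than copying the real handbook source, so treat element names and values as a sketch:

```
<!-- pull a shared section into every architecture handbook -->
<include href="hb-install-initramfs.xml"/>

<!-- a value set once in handbook-$ARCH.xml... -->
<key id="kernel-version">3.4.9</key>

<!-- ...and referenced wherever it's needed -->
<keyval id="kernel-version"/>
```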

not to say everything was smooth sailing while updating the handbooks…i ran into a few snags. i figured out why my initial commit attempts were blocked by our pre-commit hooks: it’s not that the xml interpreter was giving me spurious errors on each check. (“why you blocking me? i’m head of the project! DON’T YOU KNOW WHO I AM?!”) instead, i forgot a slash in a </body> element. THAT ruined the next 300 lines of code. solution: fix, re-run xmllint --valid --noout, add commit message, push to CVS.

the handbooks are now all set for the new initramfs/initrd mojo for those poor, poor souls mounting /usr on a separate partition/disk. my own partition layout is much simpler; i’ve never needed an initramfs.

I regularly use monit to monitor services and restart them if needed (and possible). An issue I’ve run into with Gentoo, though, is that openrc doesn’t act as I expect it to. openrc keeps its own record of the state of a service, and doesn’t look at the actual PID to see if it’s running or not. In this post, I’m talking about apache.

For context, it’s necessary to share what my monit configuration looks like for apache. It’s just a simple ‘start’ for startup and ‘stop’ command for shutdown:
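The snippet itself didn’t survive in this copy of the post; a minimal monit block matching that description (the pidfile path is an assumption) would look like:

```
check process apache2 with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
```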

When apache gets started, there are two things that happen on the system: openrc flags it as started, and apache creates a PID file.

The problem I run into is when apache dies for whatever reason, unexpectedly. Monit will notice that the PID doesn’t exist anymore, and try to restart it, using openrc. This is where things start to go wrong.

To illustrate what happens, I’ll duplicate the scenario by running the command myself. Here’s openrc starting it, me killing it manually, then openrc trying to start it back up using ‘start’.
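The terminal capture is missing from this copy; reconstructed from memory of openrc’s messages (exact wording may differ on your system), the sequence looks roughly like this:

```
# /etc/init.d/apache2 start
 * Starting apache2 ...                               [ ok ]
# kill -9 $(cat /var/run/apache2.pid)
# /etc/init.d/apache2 status
 * status: crashed
# /etc/init.d/apache2 start
 * WARNING: apache2 has already been started
```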

You can see that ‘status’ properly reports that it has crashed, but when running ‘start’, it thinks otherwise. So, even though an openrc status check reports that it’s dead, ‘start’ only consults openrc’s own internal record to determine the state.

This gets a little weirder: if I run ‘stop’, the init script will recognize that the process is not running, and resets openrc’s status to stopped. That is actually a good thing, and it makes running ‘stop’ a reliable command.

Resuming the same state as above, here’s what happens when I run ‘stop’:

# /etc/init.d/apache2 stop
* apache2 not running (no pid file)

Now if I run it again, it checks both the process and the openrc status, and gives a different message, the same one it would give if it were already stopped.

# /etc/init.d/apache2 stop
* WARNING: apache2 is already stopped

So, the problem this creates for me is that if a process has died, monit will not run the stop command, because it’s already dead, and there’s no reason to run it. It will run ‘start’, which will insist that it’s already running. Monit (depending on your configuration) will try a few more times, and then just give up completely, leaving your process completely dead.

The solution I’m using is to tell monit to run ‘restart’ as the start command, instead of ‘start’, because restart doesn’t care whether the service is stopped or started; it will successfully get it started again.

I don’t know if my expectations of openrc are wrong or not, but it seems to me like it relies on its internal status in some cases instead of seeing if the actual process is running. Monit takes on that responsibility, of course, and it’s good to have multiple things working together, but I wish openrc did a bit stricter checking.

I don’t know how to fix it, either. openrc has arguments for displaying debug and verbose output. It will display messages on the first run, but not the second, so I don’t know where it’s calling stuff.

If you need to convert .mts files to .mov (so that e.g. iMovie can import them), I found ffmpeg to be the best tool for the task (I don't want to install and run "free format converters" that are usually Windows-only and come from untrusted sources). This post is inspired by the iMovie and MTS blog post.

First I tried just changing the container:

for x in *.MTS; do ffmpeg -i "${x}" -c copy "${x/.MTS/.mov}"; done

But QuickTime could not play sound from those files because of the AC-3 codec. Also, the quality of the video playback was very poor. The other command I tried was:
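The second command did not survive in this copy of the post; a typical variant that copies the video stream but transcodes the AC-3 audio to AAC (the exact flags and bitrate here are my assumption, not necessarily what was originally used) would be:

```shell
# helper to map clip01.MTS -> clip01.mov
outname() { printf '%s\n' "${1%.MTS}.mov"; }

for x in *.MTS; do
  [ -e "$x" ] || continue   # no .MTS files present: do nothing
  # copy video as-is, re-encode audio from AC-3 to AAC so QuickTime can play it
  ffmpeg -i "$x" -c:v copy -c:a aac -b:a 192k "$(outname "$x")"
done
```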

Now QuickTime was able to play the sound, but problems with video remained. iMovie was unable to import the resulting files anyway (silently: I got no error message, just nothing happened when trying to import).

I’d like to announce a new initiative within the mips arch team. We are now supporting an xfce4-based desktop system for the Lemote Yeeloong netbook. The images can be found on any Gentoo mirror, under gentoo/experimental/mips/desktop-loongson2f. The installation instructions can be found here. The yeeloong netbook is particularly interesting because it only uses “free” hardware, i.e. hardware which doesn’t require any proprietary code. It is manufactured by Lemote in China, and distributed and promoted in the US by “Freedom Included”. It is how Richard Stallman does his computing.

I’m blogging because I thought it was important for Planet Gentoo to know that mips devices are currently being manufactured and used in netbooks as well as embedded systems. The gentoo mips team has risen to the challenge of targeting these systems and maintaining natively compiled stage4s for them. Why stage4s? And why a full desktop for the yeeloong? These processors are slow, so going from a stage3 to a desktop takes about three days on the yeeloong. Also, the yeeloong sports a little endian mips64 processor, the loongson2f, and we support three ABIs: o32, n32 and n64, with n32 being the preferred one. This significantly increases the time to build glibc and other core packages. I provide two images, a vanilla one and a hardened one. The latter adds full hardening (pie, ssp, _FORTIFY_SOURCE=2, bind now, relro) to the toolchain and userland binaries, as we do for amd64 and i686 in hardened gentoo. I have not ported over the hardened kernel, however.

I allude above to “other” targeted devices. I am also maintaining some mips uclibc systems (both hardened and vanilla), which are on the gentoo mirrors under experimental/mips/uclibc. But I will speak more of these later as part of an initiative to maintain hardened uclibc systems on “alternative” architectures such as arm, mips and ppc as well as amd64 and i686.

You can read the full installation instructions, but here’s a quick summary, since it doesn’t follow the usual Gentoo method of starting from a stage3:

Prepare either a pen drive or a tftp server with a rescue image: netboot-yeeloong.img

Turn on the yeeloong and hit the Del key multiple times until you get the firmware prompt: PMON>

If netbooting, add an IP address and point to the netboot-yeeloong.img. If using a pen drive, then point to the image on the drive and boot into the rescue environment.

Partition and format the drive.

Download the desktop image from a mirror via http or ftp. It’s about 350 MB in size.

Unpack the image. It contains not only the userland, but also a kernel.

Reboot to the PMON> prompt. Point it at the kernel on the drive. PMON will remember your choice and you will not have to repeat this step.

Once installed, you will log in as an ordinary user with sudo rights; the username and password are both “gentoo”. The root password is set to “root”. It is an ordinary Gentoo system, so edit your make.conf, emerge --sync and add whatever packages you like! File bugs to: blueness@gentoo.org with a CC to mips@gentoo.org.

If you have a Yeeloong or go out and buy one, consider trying out this image.

This is another (second) post about updating a system I rarely updated. If you're interested, read the first post. I recommend more frequent updates, but I also want to show that it's possible to update without re-installing, and how to solve common problems.

The idea for this really comes from the Unofficial ATI bugzilla at http://ati.cchtml.com, which appears to be successful. For NVidia issues the official way has been to email linux-bugs@nvidia.com, with the unofficial method being to post on http://nvnews.net and hope for a reply. Unfortunately I don’t find forums terribly useful for bug reports, and the search functionality is less than ideal for issues.

I’ve been thinking of spinning up a Bugzilla instance for an Unofficial NVidia Bugzilla and inviting all distros to use it as well as the NVidia Linux engineers. But obviously I’d need some user/developer interest in this.

If you’d like to experiment with EAPI 5_pre1, then you can refer to the corresponding portage documentation, and you may need to pay special attention to the new “Profile IUSE Injection” feature. Since the profiles currently aren’t configured for this feature yet, you’ll have to configure these variables yourself if your experimental ebuilds reference special flags (like x86, kernel_linux, elibc_glibc, and userland_GNU) without listing them explicitly in IUSE. Here’s an abbreviated example of what the variables should look like, which you can put in make.conf:
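The variable listing itself is missing from this copy of the post. Based on the profile-variable names that EAPI 5 introduces, a make.conf sketch would look something like the following; the values shown are illustrative, not exhaustive:

```
# sketch for EAPI 5_pre1 "Profile IUSE Injection" (values illustrative)
ARCH="x86"
IUSE_IMPLICIT="prefix selinux"
USE_EXPAND="ELIBC KERNEL USERLAND"
USE_EXPAND_HIDDEN="ELIBC KERNEL USERLAND"
USE_EXPAND_IMPLICIT="ARCH ELIBC KERNEL USERLAND"
USE_EXPAND_UNPREFIXED="ARCH"
USE_EXPAND_VALUES_ARCH="amd64 arm mips ppc x86"
USE_EXPAND_VALUES_ELIBC="glibc uclibc"
USE_EXPAND_VALUES_KERNEL="linux"
USE_EXPAND_VALUES_USERLAND="GNU"
```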

I have not populated all of the above variables exhaustively, but these values should be enough to get you started. If you need a more complete set of ARCH values to list in USE_EXPAND_VALUES_ARCH, then you can grab the exhaustive set of values from arch.list.

P.S. To avoid confusion, I’m reminding everyone that the Gentoo Miniconf and the Czech LinuxDays conference will be held on 20-21 October, while the openSUSE Conference has two extra days, so it will be held on 20-23 October.

P.S.2 Thanks a lot to Joanna Malkogianni and Triantafyllia Androulidaki for the pacman banner

There are a lot of changes from previous versions. In particular, some changes to existing directives may affect your existing traffic behaviour. So, please be sure to read the release notes at [1] and [2] before upgrading.

There are two new USE flags:

ssl-crtd: Adds support for dynamic SSL certificate generation in SslBump environments, which allows ICAP inspection of SSL traffic with no (or reduced) certificate mismatch errors in browsers. See [3] for further info.

qos: Adds support for Quality of Service by allowing one to select a TOS / DSCP / Netfilter mark value to mark outgoing connections with, based on where the reply was sourced. Also turns on the zero-penalty-hit config option, which used to be a separate patch but is now included in squid itself. Please see the qos_flows directive for further info [4].

One note regarding squid.conf: By default, Gentoo provided a huge squid.conf file with lots of comments. Upstream provides a small condensed squid.conf file which we will start to install as default from squid-3.2.1 onwards. I always found it difficult to see what the overall squid configuration was in the previous huge squid.conf file. Hopefully, this change will make life easier for squid admins. The old commented squid.conf file is still available as squid.conf.documented under /etc/squid directory. Please do try to migrate your settings to the new squid.conf file for ease of future upgrades.

As I migrated to a clean data layout (see previous post) I decided to be a cool & trendy guy and fire up my own lovely cloudy service.

At first my thinking was a bit off the regular setup, because even though we have an in-tree ebuild of owncloud, it hard-requires apache, which I find overkill here.

So I introduce to you a secret approach to make it work with nginx and sqlite3. Before you say that I should use *insertothercooldbname*, please consider that my deployment is only for a handful of users; I tested it with 5 users connected at once, each of them having access to a 1 TB shared datastore, and it proved fast enough.

Setting up the stuff

As nginx does not have any fcgi of its own, we will use the fpm from php directly. For that we need to add it to the default runlevel (rc-update add php-fpm default) and adjust the default number of spawned servers (the config is in /etc/php/fpm-php5.4/php-fpm.conf). Also remember to set the proper user/group there, or you won’t be able to store content in your cloud, just read from it.
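Along the lines described above, the relevant php-fpm.conf settings would look roughly like this (the user/group and pool sizes are assumptions; match them to whatever nginx runs as):

```
; /etc/php/fpm-php5.4/php-fpm.conf - sketch of the settings mentioned above
user = nginx
group = nginx
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
```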

Then we set up nginx (/etc/nginx/nginx.conf and /etc/nginx/fastcgi_params). To keep this short and easy I will just post the config I used and let you google for the other nginx variables.
First the conf file:
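The config itself is missing from this copy of the post; a minimal server block along the lines described (server name, document root and the fastcgi_pass address are assumptions) might look like:

```
server {
    listen 80;
    server_name cloud.example.com;
    root /var/www/owncloud;
    index index.php;

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;   # wherever php-fpm listens
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```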

After we start up the webserver and the fcgi provider, we should be up and running, able to open the thing in a web browser.

A few issues I didn’t manage to sort out in owncloud

The external module to load all system users into it does not pass the auth

Google sync just times out every time I try it (maybe I just have damn huge content here)

External storage support from within owncloud didn’t work for me; I just symlinked the data folder to the proper places under each user, logged into them in a browser, then waited for 3 hours (1 TB of data to index) and they were able to access everything.

I’ve been receiving a lot of questions lately from people wanting to use libvirt with virsh and not wanting to use a GUI (e.g. virt-manager). They’ll get gung-ho and install libvirt and start up virsh and be confronted with an error almost right away. Obviously from a user perspective, this is a bad experience so I think a little background is in order.

libvirt runs in two modes called system and session. These terms are identical to D-Bus, so if you are familiar with that, just think in those terms. If not: system is the instance that runs as a system daemon. It has an init script at /etc/init.d/libvirtd and runs as root. The session instance runs as a normal user. It is not started at boot time but dynamically by someone using virsh. The default when running virsh as root is to connect to the system instance. The default when running virsh as a normal user is to connect to the session instance. This is typically why people say their virtual machines have disappeared or they can’t connect. There are four ways to connect to the system instance as a normal user:

virsh -c qemu:///system

virsh and at the prompt connect qemu:///system

export LIBVIRT_DEFAULT_URI=qemu:///system and running virsh

edit /etc/libvirt/libvirt.conf and set uri_default=qemu:///system

Now if you haven’t built libvirt with PolicyKit support, by default only root will be able to communicate with the system instance. You will have to edit /etc/libvirt/libvirtd.conf and change unix_sock_rw_perms to something more open like 0770 or 0777 (the former will require changing unix_sock_group to a group your user is part of). Then restart libvirtd to get the new permissions.
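In libvirtd.conf terms, that change amounts to something like the following (the group name is an example; use one your user actually belongs to):

```
# /etc/libvirt/libvirtd.conf - sketch
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
```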

The last issue to befall people relates to libvirt’s recent switch to using XDG_RUNTIME_DIR and XDG_CONFIG_HOME from the XDG Base Directory Spec. The defaults for these are $HOME/.cache/ and $HOME/.config/ respectively. The issue that gets people is that your X session manager creates these directories for you if they don’t exist, but libvirt does not. So people logging into a user that never uses X won’t have these directories. As a result, when exiting virsh you will get an error that it couldn’t save your command history. Additionally, you will not be able to start a session instance without these directories present. The simplest fix is to just do mkdir $HOME/{.cache,.config} and all should be well. Note: This last issue is now resolved for the forthcoming 0.10.0 release.

Last wednesday Gentoo Hardened held its monthly online meeting to discuss the progress of the various subprojects, reconfirm the current project leads, talk about potential new projects and discuss some bugs that were getting on our nerves…

For the project leads, all current leads were reconfirmed: Zorry will keep a tight ship as Gentoo Hardened project lead, and will also continue as the lead for the toolchain-related projects. Blueness keeps tackling the kernel, pax, grsec and rsbac subprojects, klondike the documentation and media, and I will continue with the SELinux and integrity subprojects.

On the toolchain progress, Zorry is working on the 4.8 patches and hopes to be able to submit them upstream later this month. Blueness continues maintaining the uclibc architectures mentioned last month and is working on the documentation related to it.

On the kernel side, there were some reports submitted that were triggered by the integer overflow plugin. This plugin, called size_overflow, aims to detect integer overflows, where an increase of an integer value goes beyond its maximum and wraps around (resulting in either a negative or a small integer result). This is of course unwanted behavior, so a gcc plugin (by Emese Revfy) is used to detect such occurrences. Basically, this plugin recalculates whatever is done with the integers at double the precision and checks whether the logical result is the same. If it isn’t, then an overflow has most likely occurred. This is of course overly simplified, but from what I can find on the interwebs, not that far from the truth.

The reports are generally about network-related applications, like tor, which are terminated because something fishy occurred within the network handling code of the kernel (see for instance bug #430906).

In the SELinux camp, the documentation has been updated to inform users on how to create a new role (see also an earlier post of mine) and a few patches to the setools package have been added to support Python-2.7-only systems as well as systems using the latest swig. Also, all userspace utilities for SELinux should support both Python 2.7 and Python 3.x – the only remaining aspect is the SELinux code within Portage (see bug #430488).

Regarding grSecurity and PaX, blueness is working on the xattr PaX markings support in Gentoo, and a tracker bug has been opened to manage the changes needed. Vapier suggested moving towards xattr markings completely and dropping the PT_PAX ELF header support, but this cannot be done until all file systems support user-level extended attributes. That being said, it is a good idea to do this in the long run, as extended attributes give greater flexibility and don’t manipulate the binaries of an application.

On the integrity subproject, the concepts and introduction documentation is online. I’m working on a few ebuilds that are needed to support IMA/EVM and should hopefully hit the hardened development overlay the next week. The primary focus now is to support creating a “secure image” which, when uploaded to a hosting service, would detect if the hosting service tampered with the image outside (i.e. by manipulating the image file itself).

Finally, on documentation and media, we will need to look into updating the prelude/LIDS documentation (host intrusion prevention/detection documentation) as it is quite old and currently obsolete. Klondike also recently gave a talk about Gentoo Hardened (put the stuff online, Francisco!) but I don’t recall anymore where – I’ll update when I see the meeting log ;-)

Imagine you are a dumb guy like me: the first thing I did was to set up three 1TB disks into one huge LVM, copy the data onto it, and then find out that grub2 needs more free space before the first partition to be able to load the LVM module and boot. For a while I solved this with an external USB token plugged into the motherboard. But I said no more!

I bought two 3TB disks to deal with the situation, and this time I decided to do everything right and add UEFI boot instead of the good old normal booting.

So as you can see I created 4 partitions. The first is a special case and must always be created for EFI boot. Create it larger than 200 megs, up to 500, which should be enough for everyone.

The disk layout must be set up in parted, as we want a GPT layout (just google how to do it, it is damn easy to use). It accepts values like 1M and 1T as well as percentages like 4% to specify the resulting partition size.
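A sketch of what that looks like with parted; the device name, sizes and partition boundaries here are assumptions based on the description above, so adapt before running anything:

```
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart esp fat32 1MiB 512MiB    # EFI system partition
parted -s /dev/sda set 1 boot on
parted -s /dev/sda mkpart p2 512MiB 20%            # partitions 2-4 for the RAID
parted -s /dev/sda mkpart p3 20% 60%
parted -s /dev/sda mkpart p4 60% 100%
```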

Setting up the RAID

We just create simple nodes and plug /dev/sda2-4 and /dev/sdb2-4 into them. Prior to creating the RAID, make sure you have RAID support in your kernel.
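The actual commands are not preserved in this copy; assuming a mirror across the two disks (the RAID level and md numbering are my guess), they would be along these lines:

```
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
```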

After these commands are executed we have to watch mdstat until it is prepared (note that you can work with the md disks in the meantime; the syncing of the RAID will just be slower, as you will be writing to the member disks).

After we check the mdstat and see that all the disks are ready for play:

Now that we are ready, we will use rsync to transfer the living system and data (WARNING: shut down everything that tampers with data, like ftp/svn/git services). The only thing we are going to lose is a few lines of syslog and other log services.
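The rsync invocation itself didn’t survive here; a common form for copying a live root filesystem (the flags and the /mnt/newroot mountpoint are assumptions) is:

```
rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*","/run/*","/mnt/*"} / /mnt/newroot/
```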

After the transfer you need to edit /etc/fstab to reflect the new disk layout. Update the kernel (if needed, to support the new RAID layout) and update /etc/default/grub if you did RAID like me, so that the default kernel command line contains domdadm.
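For the mdadm case that amounts to a single line in /etc/default/grub (GRUB_CMDLINE_LINUX is the usual place; your file may split the options differently):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="domdadm"
```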

Preparing new boot over UEFI

We need to download the latest 64-bit archboot ISO (the Gentoo minimal CD didn’t contain this lovely feature).
Grab some USB disk and plug it into the machine. We will format it to FAT32: mkfs.vfat -F32 /dev/[myusb], mount it somewhere and copy the ISO image content over to the USB drive (you can enter the ISO in mc and just F5 it if you are lazy like me, but it also works with tar, p7zip or whatever else). Shut down the computer, unplug the old disks and with manic laughter turn the machine on again.

To boot over UEFI just open the boot list menu and select the disk which has UEFI in its name. It will open a grub2 menu where you just select the first option. We should then be welcomed by the lovely Arch installer. Not caring about it, switch to another console and open a terminal. Set up the arrays again using mdadm --assemble.

What should you do now? You should test the last stable version of tar (1.26 in this case) and check if you are able to reproduce these problems.

From the subsequent tests, you can see that tar-1.26 fails to respect CFLAGS (1), fails to sed one or more files (2), shows no test failures (3), and reproduces the extract issue (4).

Now, go to our bugzilla and check if there are open bugs about these problems. If not, please open the bugs, but pay attention to the blockers.

Since the first is reproducible in the last stable, it is not a regression, so it does not block.
Since the second is reproducible in the last stable, it is not a regression, so it does not block.
Since the third is not reproducible in the last stable, it is a regression, so it blocks.
Since the fourth is reproducible in the last stable, it is not a regression, so it does not block.

In this case, you should open a new bug about the test failures and mark it as a blocker of the current stabilization. Obviously, if there are already open bugs that should block, use those instead of opening new (duplicate) bugs.

Now, apart from the test failures and ignoring failures 1 and 2, the obvious question is: “Why should we mark tar-1.26-r1 stable if it fails to extract stuff?”.
Here you should learn the regression concept: imagine you are a user, you are using tar-1.26 and you can’t extract some archives; we mark 1.26-r1 stable and you still can’t. There is no change for you and no worsening. You couldn’t do it before and you can’t do it now either.

Probably this is documented elsewhere, but I hope it can help.

At work, I support three operating systems right now for ourselves and our clients: Gentoo, Ubuntu and CentOS. I really like the first two, and I’m not really fond of the other one. However, I’ve also started doing some token research into *BSD, and I am really fascinated by what I’ve found so far. I like FreeBSD and OpenBSD the most, but those two and NetBSD are pretty similar in a lot of ways, so I’ve been shuffling between focusing solely on FreeBSD and occasionally comparing the other two at the same time.

As a sysadmin, I have a lot of tools that I use that I’ve put together to make sure things get done quickly. A major part of this is documentation, so I don’t have to remember everything in my head alone — which I can do, up to a point, it just gets really hard trying to remember certain arguments for some programs. In addition to reference docs, I sometimes use shell scripts to automate certain tasks that I don’t need to watch over so much.

In a typical situation, a client needs a new VPS setup, and I’ll pick a hosting site in a round-robin fashion (I’ve learned from experience to never put all your eggs in one basket), then I’ll use my reference docs to deploy a LAMP stack as quickly as possible. I’ve gotten my methods refined pretty well so that deploying servers goes really fast; in the case of doing an Ubuntu install, I can have the whole thing set up in close to an hour. And when I say “setup” I don’t mean “having all the packages installed.” I mean everything installed *and* configured and ready with a user shell and database login, so I can hand over access credentials and walk away. That includes things like mail server setup, system monitoring, correct permissions and modules, etc. Getting it done quickly is nice.

However, in those cases of quick deployments, I’ve been relying on my documentation, and it’s mostly just copy and paste commands manually, run some sed expressions, do a little vim editing and be on my way. Looking at FreeBSD right now, and wanting to deploy a BAMP stack, I’ve been trying things a little differently — using shell scripts to deploy them, and having that automate as much as possible for me.

I’ve been thinking about shell scripting lately for a number of reasons. One thing that’s finally clicked with me is that my skill set isn’t worth anything if a server actually goes down. It doesn’t matter if I can deploy it in 20 minutes or three days, or if I manage to use less memory or use Percona or whatever else if the stupid thing goes down and I haven’t done everything to prevent it.

So I’ve been looking at monit a lot closer lately, which is what I use to do systems monitoring across the board, and that works great. There’s only one problem though — monit depends on the system init scripts to run correctly, and that isn’t always the case. The init scripts will *run*, but they aren’t very fail-proof.

As an example, Gentoo’s init script for Apache can be broken pretty easily. If you tell it to start, and apache starts running, but crashes after initialization (there are specifics, I just can’t remember them off the top of my head), the init script thinks that the web server is running simply because it managed to run its own commands successfully. So the init system thinks Apache is running, when it’s not. And the side effect is that, if you try to automatically restart it (as monit will do), the init scripts will insist that Apache is already running, and things like executing a restart won’t work, because running stop doesn’t work, and so on and so forth. (For the record, I think it’s fair that I’m using Apache as an example, because I plan on fixing the problem and committing the updates to Gentoo when I can. In other words, I’m not whining.)

Another reason I’m looking at shell scripting is that none of the three major BSD distros (FreeBSD, NetBSD, OpenBSD) ship with bash by default. I think all three of them ship with either csh or tcsh, and one or two of them have ksh as well. But, they all have the original Bourne shell. I’ve tried my hand at doing some basic scripting in csh because for FreeBSD it’s the default, and I thought, “hey, why not, it’s best to use the default tools that it ships with.” I don’t like csh, and it’s confusing to script for, so I’ve given up on that dream. However, I’m finding that writing stuff for the Bourne shell is not only really simple, but also portable to *all* the distros I use it on.

All of this brings me back to the point that I’m starting to use shell scripts more and more to automate system tasks. For now, it’s system deployments and system monitoring. What’s interesting to me is that while I enjoy programming to fix interesting problems, all of my shell scripting has always been very basic. If this, do that, and that’s about it. I’ve been itching to patch up the init scripts for Gentoo (Apache is not the only service that has strange issues like that — again, I can’t remember which, but I know there were some other funky issues I ran into), and looking into (more) complex scripts like that pushes my little knowledge a bit.

So, I’m learning how to do some shell scripting. It’s kind of cool. People always talk, in general, about how UNIX-based systems / clones are so powerful because of how shell scripting works .. piping commands, outputting to files, etc. I know my way around the basics well enough, but now I’m running into interesting problems that push me a bit. I think that’s really cool too. I finally had to break down the other day and try to figure out how in the world awk actually does anything. Once I wrapped my head around it a bit, it makes more sense. I’m getting better with sed as well, though right now a lot of my usage is basically clubbing things to death. And just the other day I learned some cool options that grep has as well, like matching an exact string on a line (without regular expressions … I mean, ^ and $ is super easy).
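For instance, the exact-line match I mention is grep’s -x flag, combined with -F to turn off regular expressions entirely:

```shell
# -F: treat the pattern as a fixed string; -x: match whole lines only
printf 'foo\nfoobar\nfoo bar\n' | grep -Fx 'foo'
# prints only: foo
```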

Between working on FreeBSD, trying to automate server deployments, and wanting to fix init scripts, I realized that I’m tackling the same problem in all of them: writing good scripts. When it comes to programming, I have some really high standards for my scripts, almost to the point where I could be considered obsessive about it. In reality, I simply stick to some basic principles. One of them is that, under no circumstances, can the script fail. I don’t mean in the sense of running out of memory or the kernel segfaulting or something like that. I mean that any script should always anticipate and handle any kind of arbitrary input where it’s allowed. If you expect a string, make sure it’s a string, and that its contents are within the parameters you are looking for. In short, never assume anything. It could seem like that makes writing scripts take longer, but for me it’s always been a standard principle that is just part of my style. Whenever I’m reviewing someone else’s code, I’ll point to some block and say, “what’s gonna happen if this data comes in incorrectly?” to which the answer is “well, that shouldn’t happen.” Then I’ll ask, “yes, but what if it *does*?” I’ve upset many developers this way. In my mind, could != shouldn’t.

I’m looking forward to learning some more shell scripting. I find it frustrating when I’m trying to google some weird problem I’m running into though, because it’s so difficult to find specific results that match my issue. It usually ends up in me just sorting through man pages to see if I can find something relative. Heh, I remember when I was first starting to do some scripting in csh, and all the search results I got were on why I shouldn’t be using csh. I didn’t believe them at first, but now I’ve realized the error of my ways after banging my head against the wall a few times.

In somewhat unrelated news, I’ve started using Google Plus lately to do a headdump of all the weird problems I run into during the day doing sysadmin-ny stuff. Here’s my profile if you wanna add me to your circles. I can’t see a way for anyone to publicly view my profile or posts though, without signing into Google.

Well, that’s my life about right now (at work, anyway). The thing I like the most about my job (and doing systems administration full time in general) is that I’m constantly pushed to do new things, and learn how to improve. It’s pretty cool. I likey. Maybe some time soon I’ll post some cool shell scripts on here.

One last thing, I’ll post *part* of what I call a “base install” for an OS. In this case, it’s FreeBSD. I have a few programs I want to get installed just to get a familiar environment when I’m doing an install: bash, vim and sometimes tmux. Here’s the script I’m using right now, to get me up and running a little bit. [Edit: Upon taking a second look at this -- after I wrote the blog post, I realized this script isn't that interesting at all ... oh well. The one I use for deploying a stack is much more interesting.]
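The script itself didn’t survive in this copy of the post, so here is a minimal sketch of what such a “base install” script might look like. It’s an assumption-laden stand-in, not the author’s actual script: `pkg_add -r` was the FreeBSD package fetch tool of this era, the package names come from the post, and `DRY_RUN` exists only so the commands can be previewed without a FreeBSD box.

```shell
#!/bin/sh
# Sketch of a FreeBSD "base install" script: fetch a familiar environment
# (bash, vim, tmux) and switch the login shell. Set DRY_RUN=1 to only
# print the commands that would run.
PACKAGES="bash vim tmux"

run() {
    # Print the command in dry-run mode, otherwise execute it.
    [ "${DRY_RUN:-0}" = 1 ] && echo "$@" || "$@"
}

install_base() {
    for pkg in $PACKAGES; do
        run pkg_add -r "$pkg"          # fetch and install from the FTP mirrors
    done
    run chsh -s /usr/local/bin/bash    # make bash the login shell
}
```

Running `DRY_RUN=1 sh script.sh` (with a final `install_base` call appended) shows the plan before committing to it.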

I have a separate one that is more complex that deploys all the packages I need to get a web stack up and running. When those are complete, I want to throw them up somewhere. Anyway, this is pretty basic, but should give a good idea of the direction I’m going. Go easy on me.

Edit: I realized the morning after I wrote this post that not only is this shell script really basic, but I’m not even doing much error checking. I’ll add something else in a new post.

Some may have noticed that the Gentoo Foundation has funded a bug bounty. This is something fairly new for the Foundation, and I wanted to offer some comments on the practice. Please note that while I’d love to see some of these make their way into policy some day, these are nothing more than my own opinion, and I reserve the right to change my opinion as we gain experience.

The recent bug bounty was for bug #418431, which was to address a problem with git-svn which was holding up stabilization of the latest version of git, which is a blocker for the migration of the Portage tree to git.

What follows are some principles for the use of bug bounties and how I think we fared in this particular case. I’d like to see the use of bounties expand, as right now I believe we under-utilize our donations. However, it is important that bounties be used with care as they have the potential to cause harm or be wasteful.

One more upfront note – I supported the git-svn bounty as it was ultimately worded, as did the other Trustees. Looking back I think we could have done things a little differently, but hindsight is always 20/20, and no doubt we’ll continue to learn as we experiment with this further.

1. Bounties Should Be Used Strategically
While the Foundation has money to spend, we aren’t swimming in it, so we can’t use bounties for any little bug that annoys us. Bounties should be reserved for matters where spending a little money has a large impact.

I think we did well here – the git-svn issue was going nowhere either within Gentoo or upstream, but the number of other blockers to the git migration are fairly small and within Gentoo’s control. Getting rid of this issue should open the way towards the git migration, which is of course of strategic importance to Gentoo.

2. The Solution Must Be Sustainable
This might also be stated as “consider the total cost.” Before agreeing to fund this bug there was some due diligence to ensure that upstream would carry forward any patches we generated. The problem was the result of changes on the SVN side, and the solution included some general cleanup and refactoring of code to make git-svn more maintainable upstream. Upstream also expressed an interest in accepting the fix, and it was the opinion of the package maintainer that this would be a one-time fix as a result.

When considering whether a solution is sustainable, we need to think about how we got where we are, and consider whether we’re just going to end up back in the same place again. If the solution won’t be maintainable, then any money spent is wasted unless it truly is a one-time event.

3. Gentoo Can’t Fix It With Volunteer Effort
Gentoo is a community distribution. We have some very talented developers. We can usually fix our own problems, and doing so as a volunteer community effort is usually the healthiest solution.

The sense for git-svn was that this was an upstream problem in a language our maintainers were not comfortable with. The bug languished despite attention by several developers and discussions in other forums. It was felt that offering a bounty would allow targeted expertise to tackle the problem, which otherwise was not of great interest to our community.

A policy to not offer bounties unless a bug has been open for some period of time except in unusual circumstances would be appropriate.

4. Be Ready To Capitalize On the Work
If the work is strategic (see #1), then we ought to have a plan ready for when the bug is closed. Otherwise there really should be no urgency to pay somebody to close the bug and it is basically a pig in a snake (clear the jam, and the problem just moves one step down the chain).

I think the jury is still out on how we’re doing here. I think there is a lot of enthusiasm about git but we could have a bit more organization here. None of this is intended as a slight to those who have been laboring hard to make this work – I hope getting this blocker cleared will inspire more to step up and resolve the other issues. (I won’t say more here as I don’t want to make this about the Git migration.)

5. Define the Problem and Success
A bounty is a contract. At the very least misunderstandings can lead to hurt feelings, and at worst they can lead to HIGHLY contentious, expensive, and distracting legal action. While a 10 page document shouldn’t be necessary for a token expense, any bounty should be very up-front about what exactly is to be done, and how success will be evaluated.

I think we could have done a little better in this regard, but there was some iteration on the wording of the bounty to clarify the “victory conditions.” I think it is important to focus on outcomes – in this case we wanted code that upstream was likely to accept. I’d actually have been happier making upstream acceptance a condition of payment, but the sense was that this would be inevitable but might delay payment unduly. I think the jury is still out on this one. What is important is that we don’t just achieve technical resolution of the bug, but that we fully realize the benefits we had in mind when we funded the bounty.

6. Cover Code Ownership and Licensing
This is a work for hire – we can dictate ownership of the code (yes, I realize that the legalities of this vary internationally, but the US is the only nation that legally recognizes the Gentoo Foundation at the moment, and the US will enforce this insofar as its jurisdiction allows). Per the Gentoo Social Contract, if we’re funding the creation of code, it ought to be free (generally GPL).

This was covered in the git-svn case. We didn’t insist on ownership of copyright, but we did ensure the code was licensed using the upstream license (GPLv2). My feeling is that if the bounty really represents payment for a majority of the work Gentoo should just own the code outright. If the bounty is really a token gesture for what is mostly a volunteer effort I think the author should retain copyright as long as the code is FOSS. In practice it doesn’t matter too much, so I think we should use discretion here.

7. Offers of Bounties Should Be Fair
This topic led to some internal debate, and I think that we can probably do a little better in terms of transparency in the future. The bounty was posted publicly on the bug, and anybody already interested in the bug and on the CC list would of course have gotten notification. In retrospect I think that bounties are a significant enough occasion that perhaps a proposal should be offered for comment on -dev or -nfp and the final version announced on -dev-announce. I think that the way we handled the git-svn case met all legal obligations, but I really want to make sure that the whole community has an opportunity to participate when they come up.

Another potential issue with bounties is that you can only pay one person (unless there is some side agreement to share it), and there can be resentment if work gets done but isn’t reimbursed. This was addressed in the present case by asking anybody working on the bug to state their intent. If a bounty is very large it probably would make sense to go through a more formal bidding process and just award a contract more conventionally.

I think that this last point of fairness is actually the most critical. While messing up on any of the others could cause us to waste a few hundred dollars, getting the fairness bit wrong could literally destroy the community. When you start paying people to do what used to be volunteer work the result can be demoralizing to the community. I think the key is to only do this when the community lacks the ability/desire to do the work itself, and especially when the work lies outside of our core expertise. Paying an accounting firm a reasonable fee to ensure our taxes are filed correctly isn’t viewed with much controversy. We should try to keep bug bounties limited to similar sorts of situations.

Trustees of course have duties both under the bylaws and under US law to properly manage conflicts of interest. These certainly apply to any kind of expenditure of money.

So, what do you think? I’m very open to criticism about how we handled our first bug bounty, and about how the community feels about this use of money. As is evident from the Treasurer’s Report at today’s Annual General Meeting, Gentoo currently receives more in donations than it spends, so I think making a little more use of this approach will allow our supporters to benefit Gentoo. Seeing donations in action will probably help encourage an increase in donations as well. However, I think we also need to tread carefully here, as the community matters far more than squashing a few bugs.

Finally, while I’d like to see policy around bounties formalized, I think doing so right away would be a mistake. I think we should try to consciously apply principles like these but wait until we see how they work in practice before trying to codify them.

After reviewing several solutions to a security problem regarding screen lockers, I’ve found that the easiest workaround for switching virtual terminals and killing the screen locker application is to start one’s X session with the following command:

exec startx

That way, even if someone switches to the virtual terminal that was used to start X and presses CTRL+C, he or she will only be presented with a login prompt (instead of having full rein over the user account responsible for starting the session). Now that there’s a reasonable workaround for that problem, I set out to make keybindings and menu shortcuts for Openbox that would take care of both locking the screen and putting my displays to sleep. Conceptually, this was a straightforward task, and I accomplished it with the following:
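(The original snippet didn’t survive in this copy of the post; below is a minimal sketch of the kind of wrapper such a keybinding can run. `lock_and_sleep` is a hypothetical name, and `RUN` exists only so the commands can be previewed on a machine without an X session.)

```shell
# Hypothetical wrapper for an Openbox keybinding: lock the screen with
# slock, then put the displays to sleep via DPMS. Set RUN=echo to
# preview the commands instead of executing them.
lock_and_sleep() {
    $RUN slock &               # start the screen locker in the background
    sleep 1                    # brief pause so the keypress itself doesn't wake the displays
    $RUN xset dpms force off   # ask DPMS to power the displays down
    wait                       # keep the wrapper alive until slock exits
}
```

In Openbox, a `<keybind>` entry in rc.xml with an `<action name="Execute">` can point at a script containing this function followed by a call to it.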

The only problem is that it doesn’t work every time. Though it tends to work nicely, there are times when slock will start but the displays will not honour the xset command to go to sleep (I guess that when it comes to bedtime, monitors are a bit finicky like children). I have tried adding a sleep before the commands, thinking that there was some HID activity causing the wake, but that didn’t rectify the problem. If anyone has a proposed solution to the seemingly random failure of xset putting the displays to sleep, please let me know by leaving a comment.

The following month is expected to be really exciting (and scary, eheh), for many reasons. Explanation below.

My life is going to change rapidly in roughly one month, and when these things happen in your life, you feel scared and excited at the same time. I have always tried to cope with such events by just being myself: an error-prone human being (my technical English teacher doesn’t like me to use “human being”, but where’s the poetry then?) who always tries to enjoy life and computer science with a big smile on his face.

So, let’s start in reverse order. I have the opportunity to do my university internship at Google starting in October, more precisely at Google Ireland, which is located in Dublin. I think many Googlers had the same feelings I currently have, scared and excited at the same time, with questions like “do I deserve this?” and “am I good enough?”. As I wrote above, the only answer I have found so far is that, well, it will be challenging, but do I like boredom after all? Relying on professionalism and humility is probably what makes you a good teammate all the time. Individuals cannot scale up infinitely; that is why scaling out (as in teamwork) is a much better approach.

It’s been two years since I started working at Weswit, the company behind the award-winning Lightstreamer Push technology, and next month is going to be my last one there. Even so, you never know what will happen next year, once I’m back from the internship at Google. The sure thing is, I will need a job again, and I will eventually graduate (yay!).
So yeah, during the whole university period I kept working, and while it’s been tough, it really helped me out in both directions. In the end, I kept accumulating real-world expertise during this time.
Nothing in my life has been risk-free, and I took the risk of leaving a great job position to pursue something I would otherwise have regretted missing for the rest of my life, I’m sure. On the other hand, I’m sure that at the end of the day it will be a win-win situation. Weswit is a great company, with great people (whom I want to thank for the trust they gave me), and I’m almost sure that next month might not be my last one there (in absolute terms, I mean). You never know what is going to happen in your life, and I believe there’s always a balance between bad and good things. Patience, passion and dedication are the best approach to life, by the way.

Before leaving for Dublin, we (as in the Sabayon team) are planning to release Sabayon 10: improved ZFS support, an improved Entropy & Rigo experience (all the features users asked me about have been implemented!), out-of-the-box KMS improvements, BFQ as the default I/O scheduler (I am a big fan of Paolo Valente’s work), a load of new updates (from the Linux kernel to X.Org, from GNOME to KDE through MATE) and, if we have time, more Gentoo-hardened features.

Let me mention here one really nice Entropy feature I implemented last month. Entropy has used SQLite3 as its repository model engine since day one (and it’s been a big win!), though the actual implementation has always been abstracted away so that upper layers never had to deal with it directly (and up to here, there is nothing exciting). Given that a file-based database like SQLite is almost impossible to scale out [1], and given that I’ve been digging into MySQL for some time now, I decided it was time to write an entropy.db connector/adapter for MySQL, specifically designed for the InnoDB storage engine. And 1000 LOC just did it [2]!

As you may have seen if you’re using Sabayon and updating it daily, the Entropy version has been bumped from 1.0_rcXXX to just XXX. As of today, the latest Entropy version is 134. It might sound odd or even funny, but I was sick of seeing that 1.0_rc prefix, which was starting to look ridiculous. Entropy is all about continuous development and improvement; when I fully realized this, it was clear that there will never be a “final”, “one-point-oh”, “one-size-fits-all, done && done” version. Version numbers have always been overrated, so f**k formally defined version numbers, welcome monotonically increasing sequences (users won’t care anyway; they just want the latest and greatest).

I know, I mentioned the “Equo rewrite” in the blog post title. And here we go. The Equo codebase is one of the first and longest-living parts of Entropy I wrote; some of the code has been there since 2007, and even though it went through several refinement passes, the core structure is still the same (crap). Let me roll back the clock a little bit first: when the Eit codebase [3] replaced the old equo-community, reagent and activator tools, it was clear that I was going to do exactly the same thing with Equo, so I wrote the whole thing in an extremely modular way, to the point that extra features (or “commands” in this case) could be plugged in by third parties without touching the Eit kernel at all. After almost one year, Eit has proven to be really powerful and solid, to the extent that its architecture is now landing in the much more visible next-gen Equo app.
I tell you, the process of migrating the Equo codebase over will be long. It is actually one of many background tasks I usually work on during rainy weekends. But still, expect me to experiment with new (crazy, arguable, you name it) ideas while I make progress on this task. The new Equo is codenamed “Solo”, but that’s just a way to avoid file name clashes while I port the code over. You can find the first commits in the entropy.git repo, under the “solo” branch [4].

Make sure not to miss the whole picture: we’re a team, and Sabayon lives on incremental improvements (continuous development, agile!). This has the big advantage that we can implement and deploy features without temporal constraints. And in the end, it’s just our (beloved) hobby!

The PostgreSQL Global Development Group today released security updates for all active branches of the PostgreSQL database system, including versions 9.1.5, 9.0.9, 8.4.13 and 8.3.20. This update patches security holes associated with libxml2 and libxslt, similar to those affecting other open source projects. All users are urged to update their installations at the first available opportunity.

This security release fixes a vulnerability in the built-in XML functionality, and a vulnerability in the XSLT functionality supplied by the optional XML2 extension. Both vulnerabilities allow reading of arbitrary files by any authenticated database user, and the XSLT vulnerability allows writing files as well. The fixes cause limited backwards compatibility issues. These issues correspond to the following two vulnerabilities:

This update also fixes a number of other issues, including:

- Fix syslogger so that log_truncate_on_rotation works in the first rotation
- Only allow autovacuum to be auto-canceled by a directly blocked process
- Improve fsync request queue operation
- Prevent corner-case core dump in rfree()
- Fix walsender so that it responds correctly to timeouts and deadlocks
- Several PL/Perl fixes for encoding-related issues
- Make selectivity operators use the correct collation
- Prevent unsuitable slaves from being selected for synchronous replication
- Make REASSIGN OWNED work on extensions as well
- Fix race condition with ENUM comparisons
- Make NOTIFY cope with out-of-disk-space
- Fix memory leak in ARRAY subselect queries
- Reduce data loss at replication failover
- Fix behavior of subtransactions with Hot Standby

Users who are relying on the built-in XML functionality to validate external DTDs will need to implement a workaround, as this security patch disables that functionality. Users who are using xslt_process() to fetch documents or stylesheets from external URLs will no longer be able to do so. The PostgreSQL project regrets the need to disable both of these features in order to maintain our security standards. These security issues with XML are substantially similar to issues patched recently by the Webkit (CVE-2011-1774), XMLsec (CVE-2011-1425) and PHP5 (CVE-2012-0057) projects.

As with other minor releases, users are not required to dump and reload their database or use pg_upgrade in order to apply this update release; you may simply shut down PostgreSQL and update its binaries. Perform post-update steps after the database is restarted.

All supported versions of PostgreSQL are affected. See the release notes for each version for a full list of changes with details of the fixes and steps.
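As a quick sanity-check sketch (the helper name and the idea are mine; the version numbers are the ones from this announcement), mapping a server’s branch to the fixed minor release makes it easy to script an “am I patched?” check:

```shell
# Given a PostgreSQL version string, print the minor release from this
# security update that the branch should be running.
fixed_release() {
    case "$1" in
        9.1.*) echo 9.1.5 ;;
        9.0.*) echo 9.0.9 ;;
        8.4.*) echo 8.4.13 ;;
        8.3.*) echo 8.3.20 ;;
        *)     echo unsupported ;;
    esac
}
```

Feeding it the output of something like `psql -tAc "SHOW server_version"` would tell you which release to update to.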

So, apparently pgpool-II did a bit of a switcharoo some time ago, which I wasn’t too careful about. But, can you really blame me? pgpool-II’s documentation is among the worst I’ve seen. It’s a good thing they’ve commented their code, or I wouldn’t have been able to do some things cleanly.

You get a much nicer initscript now that actually works. The ebuild actually installs the SQL scripts from the aforementioned terrible documentation. In general, I’m fairly happy with the results now.

The next thing to work on is getting pgpoolAdmin into the tree as well. And writing documentation, so that people can actually understand how to accomplish a task without first having to translate what’s been written. I’ve been working on this for a week. I need help from more experienced users of pgpool-II. I’ve started a rather bare wiki page.

Seriously. “Step 4. The file is confirmed.” What the hell is that supposed to mean? Who’s confirming it? Me? A program? Which file?!

At least it’s easier to read than MySQL’s documentation.

Addendum: I forgot to mention that you’ll need to do an emerge --sync, and that the package you’re looking for is dev-db/pgpool2-3.2.0-r1.

I wrote a small section on how to add additional roles to the SELinux policy offered by Gentoo Hardened. Whereas the default policy that we provide only offers a few basic roles, any policy administrator can provide additional roles for the system.

By using additional roles, you can grant users administrative rights over particular services without risking having them elevate their privileges to root (+ sysadmin). You can even allow them to get a root shell while remaining confined within their domain (and role).

Portage can’t handle the blockers without revbumps/rebuilds, so I updated it in the live/branch ebuild, and with the next releases (3.5 next week, 3.6 in two weeks) there won’t be any collisions, and you can enjoy comparing these two suites against each other. For the binary package I was just too lazy, so just re-emerge 3.5.5.3 if you want to enjoy this.

Note: plugin installation and handling is still not fully tested in situations where you have both implementations around, but the eclass was written with that in mind, so just try it and report bugs if it does not work. Although there is one case I didn’t test at all: what happens when one removes one of the implementations and tries to reinstall the extension? It should properly register itself under the only remaining one, but the files will still be kept in /usr/lib64/IMPLEMENTATION/…/extensions/install/ and registered in the user config dir. Maybe we could run this deregistration on package uninstall (portage can detect those)…