Saturday, December 04, 2010

Over time, the cache of apt, the Debian package installation application, keeps growing. Every time a package is installed or updated, the original package is stored in the cache. After a few months of updates, the cache grows quite big, because nothing is deleted from it automatically.
Keeping the original packages of all the software you have installed is certainly a good idea, but the packages that have been replaced by newer versions are kept as well.

The cache can be cleaned using this command :

apt-get autoclean

This removes all unneeded (old) package files from the apt cache.

If you want to clear the cache completely, thus removing all downloaded packages, use this command :

apt-get clean

Both commands are quite safe to use, because they only remove the downloaded packages from the apt cache. The installed files naturally remain untouched.
This way you can sometimes free a few hundred megabytes after a few months.
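If you want to see how much space you actually reclaim, you can check the size of the cache directory before and after cleaning. A quick sketch, assuming the default cache location /var/cache/apt/archives and that you run the cleanup as root :

```shell
# Check how much space the apt cache currently uses
du -sh /var/cache/apt/archives

# Remove only the obsolete package files (run as root)
apt-get autoclean

# Check again to see how much was freed
du -sh /var/cache/apt/archives
```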

Ubuntu is based on the Debian packaging system, so this applies to every flavor of Ubuntu as well.

Monday, November 08, 2010

Essentially, this plugin forces your browser to use the secure (encrypted) https protocol when visiting certain websites (like Facebook, Google, ...). This way, no data is sent in cleartext over the internet. When visiting these websites, everything is encrypted by your browser before it leaves your computer, and is then decrypted by the server (and vice versa, when the server sends the webpage back to your PC).

Why would you want this?

If you log in over a normal http connection, your username and password are sent over the internet, for everybody who can intercept that packet to read. The same goes for sensitive data that might be visible on websites you are visiting.

You are probably wondering who could catch your packets. Surely you would have to be some internet wizard to do that? Not true. Recently a Firefox plugin called Firesheep was published, which allows a computer to look around on your local (wireless) network, hijack Facebook and other social networking sessions, and actually see what those users are seeing.
For instance, when you are using a free wireless access point, someone with the Firesheep plugin can easily watch what you are doing. (With more sophisticated software this is also - and was already - possible, but that requires more skill.)

So, to protect yourself from these kinds of attacks, it's better to use the https protocol as much as possible. A nice tool to help you do this is a plugin like HTTPS Everywhere or Force TLS. The Force TLS plugin does about the same as HTTPS Everywhere, but is more difficult to set up, because you have to tell the plugin for which websites you want to use https. The downside of the HTTPS Everywhere plugin is that it only works for the preprogrammed websites.

Tuesday, October 19, 2010

This morning, one of my servers (a Dell PowerEdge R510, with a quad-core Intel Xeon E5520 CPU running Debian 5.0.4) crashed. It didn't respond to anything : ping, ssh, smb. It didn't even respond when I physically attached a screen and a USB keyboard to it.
After a reboot, the system started without a problem and everything seemed to work again.

After some Googling, I didn't really find a sound explanation, but one of the things mentioned was a bug in the Intel CPU, which could be solved by updating the CPU microcode.

I'm not sure this will solve my problem, which has only happened once since I started using the server about half a year ago, but as was mentioned, it carries little risk, doesn't slow down your machine and might solve a few problems.

So I installed two packages (you need contrib and non-free repositories):

apt-get install intel-microcode microcode.ctl

Package intel-microcode contains the updated microcode for Intel CPUs, while microcode.ctl performs the update. Because the update is done in memory, it is lost after a reboot, so it has to be done again every time, but this package takes care of that.
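To check whether the update was actually applied, you can look at the kernel log and at /proc/cpuinfo. A sketch; the exact wording of the messages varies between kernel versions :

```shell
# The kernel logs a line per CPU core when the microcode is updated
dmesg | grep -i microcode

# The currently loaded microcode revision is also listed per core
grep -i microcode /proc/cpuinfo
```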

Update 06Dec2010 : The microcode is automatically updated after a reboot. :)

Wednesday, October 13, 2010

For quite some time now, I've been thinking about an alternative social networking website where you, as a user, keep control of what you submit : who can see it, what can be seen, and most importantly, when you delete it, it is really gone (at least from the website you originally put it on, as one can never be sure that something published on the internet can be completely removed from it). Recently I got an insight into how it could be done. It's only a rough idea, with a lot of conceptual and practical things still to be sorted out :

A way to reach this goal is to use encryption. When encrypting a message, for instance with OpenPGP, you use the public keys of the people who should be able to decrypt the message. To decrypt it, they use their own private key, which is of course private and can only be used by its owner. So this way, only the people you intend to read your message will be able to read it.
So, in the case of this conceptual open source social networking website, every user has a PGP key. If you want to submit something to this website, for instance your place of birth, you encrypt it using the public keys of a selection of your friends, and submit that encrypted chunk of data to the website, where it is stored in the database.
If one of your friends accesses your page on this social networking website, all the data is requested from the website in encrypted form and decrypted using the private key of that friend. The data that was encrypted with that person's public key will be decrypted, while the rest remains unreadable, thus showing only the data that you intended for that person.
If at a later moment you decide to change the list of people that can consult your place of birth, you simply encrypt the same data with a different set of keys and replace it in the database.
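This multi-recipient encryption already works today on the command line with GnuPG. A minimal sketch, assuming two hypothetical friends, alice@example.org and bob@example.org, whose public keys are in your keyring :

```shell
# Encrypt a piece of profile data so that only two selected
# friends (identified by their public keys) can read it
echo "Place of birth: Ghent" > birthplace.txt
gpg --encrypt --recipient alice@example.org --recipient bob@example.org \
    --output birthplace.gpg birthplace.txt

# Each recipient decrypts the blob with their own private key;
# for everyone else it stays unreadable
gpg --decrypt birthplace.gpg
```

Changing who may read the data then comes down to re-encrypting the same file with a different set of --recipient keys and replacing the stored blob.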

The kind of data you can encrypt is of course not limited to short texts; it can also be a picture, a piece of video, a link to a website, a piece of your DNA, ...
The upside of storing an encrypted version of the data you share is that it is unreadable to anyone who doesn't have the right decryption key. So even if your data remains in the database of the website, it will only be readable by the persons it was originally intended for. It will not even be readable by the maintainers of the website, unless you include their public key when encrypting.

So, the bottom line is that YOU should be able to keep control over your data. Encrypting the data is one thing, but somewhat trusting the software that makes it happen, and of course also the ones hosting it all, is just as important.
And here the Open Source model comes in. The software is freely available, so the way it works can be checked and improved by the Open Source community.
And because the software is freely available, anyone can set up a social networking website. So as a user, you can choose who to trust when you join a group.

Of course, this is all just an idea and a very general concept.

One final remark : in order for this to work, the whole encryption mechanism should be invisible to the end user. And the encryption should be handled client side, for obvious reasons : otherwise the data you intend to be available to a limited audience might end up unencrypted on some kind of server.

Sunday, May 16, 2010

Last week, the new website of my department, that I've been working on for the last few weeks, was launched. In its first week, it got about 1200 visitors.
Although it is not completely finished, I thought it had reached a stage where it could be made public. I consider it a starting point anyway, a basis to improve on by adding new features.

The previous website was created about 4-5 years ago and was in need of a face lift. I could just have replaced the CSS file to refresh the layout, but the code running in the background was replaced as well.

After some comparing, I decided to use Drupal 6 as the CMF (content management framework) of the new website. I hadn't worked with Drupal before, but heard and read good things about it, with its PHP codebase and strong focus on modules as strong points.
First, I started developing a custom theme that reflects the corporate style of my university. I hadn't done this before either, so it took me some time to understand how a page is built in Drupal and what parts it consists of, but in the end I got it working.

The new website should be bilingual (English-Dutch), so I had to look for a way to do this. Luckily, Drupal provides some modules (Locale, Internationalization) to make a website multilingual, and tools to do the translation (core module Content Translation).

I wanted to integrate the login system with the single sign-on system (CAS) of my university, so that the users of the website wouldn't have to create yet another account with matching password for this website.
There is a CAS module available for Drupal, but I got a custom one from a colleague, which I modified further to take advantage (through CAS) of groups that are defined in the LDAP system of my university and of groups I defined in a separate database, thus assigning the right roles to each user on login.

With the different roles defined and assigned, I used the Taxonomy module (in Drupal core) to define different parts of the website. My department consists of three research groups, so each lab has its own section within the website. All sections have a public and a private (intranet) part, so by using Taxonomy Access Control, access to these pages could be linked to the defined roles.

Some pages are just static HTML, maintained using a WYSIWYG editor provided by the Drupal module CKEditor.
Others are dynamic, getting their data from a database table through a PHP script that generates the page. The Cache Exclude module was needed to keep these PHP-generated pages from being cached by Drupal; otherwise, changes made in the database would not show up.
And the core module PHP filter makes it possible to execute PHP code in a page.

As I mentioned before, I'm new to Drupal, so I'm still discovering possibilities, new modules and ways to do things every day. Still on my list : learning how to write modules, finding a way to integrate my current database management PHP code with Drupal (basically using the Form API not only to insert data into a database, but also to update and delete data), and expanding the features of the new website (both on the intranet and on the public pages). Luckily I've got a good guide for the upcoming development journey.

Thursday, April 01, 2010

A few days after the first collisions in LHC, something strange has happened at the new particle accelerator at CERN. Physicists noticed, this morning, some unexplainable activity in the detector of CMS, one of the experiments. Remarkable, because there were no active beams at the time. Investigations have started, to try and find an explanation for this fishy behaviour. LHC managed to collide two 3,5 TeV beams of protons, for the first time on the 30th of March 2010, only a few days ago. Something might have been created during those first collisions, that only now was detected.

Fascinating for some, troubling for others, as some critics feared that LHC might swallow the Earth. Other options are faulty hardware in the detector, causing these unexpected events. Only after some more tests, it will be certain if there is a flaw in the detection, or the discovery of something amazing. Later today, more news will be available.

Update (2Apr2010) : As far as I know, everything is working fine at LHC. All experiments detect collisions as they should, because there were beams colliding both then and now. If you are wondering what this post is ranting about, check the date it was published and the first letters of each sentence. It might give you a clue. :p

And now a small summary of the hurdles and how I overcame them to get it all working :
I got this new 'toy' a few months ago, in December 2009.
It doesn't have a CD-ROM drive (the housing of the server is too small to hold one), so I decided to try PXE to install the OS. I added a PXE service to my network and soon was able to boot a Debian netinstall.
But the installer didn't recognise the hardware (CPU, network, hard drive). It turned out the default kernel didn't support it, so I had to use a custom one.
I followed this manual on how to get and finally install the custom kernel. The only thing I did differently was using a PXE boot instead of the USB pen drive.

A small success, but the hard drive controller still didn't work, so I was only able to install Debian on a USB hard drive. That worked, but I wanted to use the internal hard drive and CF card.
Tinkering a bit with the BIOS settings didn't help much either. I found several possible settings on different websites. In the end, these settings worked for me :

PCI IDE Bus Master : Enable (HD doesn't work when Disabled)

Onboard IDE operate mode : Legacy (could be set to Native as well, but Legacy has given the best performance so far)

Then I stumbled upon the book Linux Kernel in a Nutshell by Greg Kroah-Hartman. In Chapter 7 (p. 52-56) he describes how to find the kernel module for the hardware you are using. I found that the module ata/pata_rdc.c was the one I needed to get the IDE controller (RDC Semiconductor, Inc. Device 1011 (rev 01)) working.
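The lookup itself can be done from the command line. A sketch of the procedure, assuming the controller shows up under the RDC vendor ID (the grep pattern below uses example IDs; take the ones lspci prints on your machine) :

```shell
# Find the vendor:device ID of the controller, shown in [brackets]
lspci -nn | grep -i ide

# Ask the module alias database which driver claims that ID
grep -i '17F3.*1011' /lib/modules/$(uname -r)/modules.alias

# Inspect the module that turned up
modinfo pata_rdc
```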

It was introduced in kernel 2.6.32.* and is still experimental. I get a lot of warnings when booting or accessing the CF card or hard drive, and it only runs at UDMA/33, which is much slower than the possible UDMA/100. But I hope the driver will get better in future kernel releases.
So, after building a custom kernel with all necessary modules included, I was able to boot and install Debian on the CF card, with the internal hard drive containing data.

After installing and configuring all services, copying data from my old server, and setting up the backup scripts, the new server is ready to replace the current one.
After more than 6 years of loyal service and an uptime of (currently) 90 days, it will retire, or at least, it will be used less frequently.

To summarize : the new server isn't more performant than the previous one, and it doesn't have that much more memory or storage, but : it's much smaller, consumes less power and makes less noise. I think it might even pass the wife test. ;)

Saturday, March 20, 2010

I've started creating a new website for my department, and I chose Drupal as the supporting framework. I hadn't used Drupal before, so it took me some time to get to know how it works, which also meant getting into the code, because I have to implement some custom things, like creating a new theme that reflects the corporate layout of my university. It took me some time, but I'm almost where I want to be, with only some minor tweaks left. Now I can get started on the structure of the website, and later I'll explore the code some more when I start building custom modules.

When I get home, I dig into the internals of phpMyAdmin, an Open Source project written in PHP that provides a web interface for the MySQL database. After attending a talk about phpMyAdmin at FOSDEM 2010 a few weeks ago, I decided to get involved in the project, so I sent an e-mail to the project admin. After some weeks of getting acquainted with the code, trying to solve some bugs and getting patches out, I was accepted last week as a developer on the phpMyAdmin project.

Besides digging into the code of both Drupal and phpMyAdmin, I'm learning to use vim, a powerful text editor, and git, a distributed version control system, because phpMyAdmin switched to git last week.

I almost forgot to mention that I'm trying to get my new server up and running as my current server is getting a bit outdated.

Thursday, March 18, 2010

Since 2005, Google has organised the by now well-known event Google Summer of Code (GSoC). This programming event gives students the opportunity to participate in popular Open Source projects. When they successfully complete their assignment by the end of the summer, they even get paid. :)

Sunday, February 07, 2010

Today I visited FOSDEM 2010, hosted at ULB in Brussels. It was a long day, but very interesting and rewarding. This is an overview of the talks and events I attended :

State of phpMyAdmin : overview of what phpMyAdmin is and what features will be included in the 3.3 release. Nice talk, good speaker, perfect to start the day with.

AHAH crash course : Matthias Vandermaesen, aka Netsensei, did this talk and announced on his blog that it would be a difficult subject.
He was right : I managed to understand the major concepts - it's a bit like AJAX, but then with HTML being sent by the server instead of XML that is parsed by JavaScript in the browser - but some things I didn't grasp fully (mainly the actual implementation in Drupal).
This was probably due to me not being up to speed with the Drupal API (but that will change soon, now that I'm starting to develop my department's website in Drupal), it being the first time I heard about AHAH, and the many interruptions from people walking into the room during the talk.
In the end I think Matthias did a good job, and it was nice to see him in real life for the first time. Until now I've only communicated with him by mail and other electronic means.

Keysigning Party : Summary : about 150 people checking each other's identity, during 1.5 hours in two rotating lines, outside in the cold, in front of the main building. Goal : expanding the GnuPG Web of Trust.
Funny experience I must say. Hopefully it will be a bit warmer next time.

Apache Hadoop : Very interesting presentation about processing large datasets.

Scaling Facebook with Open Source tools : Basically an overview of all the Open Source tools (MySQL, PHP, memcached, ...) and technology that are used to make Facebook work : what they have improved, what they've created themselves (HipHop, Hive, Thrift, Haystack, ...) and how they returned it to the community. Very informative.

Write and Submit your first Linux kernel Patch : The last talk of my day and effectively the last talk of FOSDEM 2010. Greg Kroah-Hartman was a very good speaker and showed during the talk how to make a small change in the kernel source code and submit it as a patch. He showed how easy it is and sent us home with the message that we were now all capable of doing it ourselves. :)

I'm glad I went, I learned some new things and was quite amazed by the number of people that were present. Unfortunately there were some talks I couldn't go to, because they overlapped with others. But I'm sure I'll learn more about those topics in the future.
I'm already looking forward to next year's edition. But for now I'll have to finish signing some keys.

Friday, January 01, 2010

First and foremost : A happy 2010!
May your wishes come true, may you be prosperous in your endeavors and have good health!

For me 2009 was a year with some fruitful events, like getting a new job, but there was some disappointment and misfortune - my grandfather passed away - as well. Hopefully I will learn from all that occurred and keep a fond memory of the nice things that happened.

And now for the resolutions! Last year I had one big resolution : smile more. And I must say I managed to do that! There is still room for improvement, but it went well.
So this year's resolutions are :