Over the past week, I have configured and used BtrFS, a copy-on-write file system, on my home server to share files with the rest of my housemates. Overall the experience has been good: configuration is easy and BtrFS gets the job done, but it gives the user very little feedback.

For initial testing, I gathered some old disks together to see how well BtrFS performs. My server is an AMD Athlon 64 X2 dual-core processor with 2GB of memory and three physical disk drives: 250GB (/dev/sda), 250GB (/dev/sdb), and 500GB (/dev/sdc). Since I was interested in learning more about Linux, I decided to give Arch a first shot. The first interesting feature of BtrFS that caught my attention is the ability to use either a whole drive, or just a partition of a drive, as part of a pool. In my setup, I was able to configure:

30GB with Ext4 on /dev/sda1, mounted at “/”

220GB with BtrFS on /dev/sda2, mounted at “/home”

250GB with BtrFS on /dev/sdb, part of the “/home” pool

500GB with BtrFS on /dev/sdc, part of the “/home” pool

And therefore the server has 30GB on “/” and 970GB on “/home”.
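For reference, the layout above could be created with something like the following. This is only a sketch: the privileged commands are printed via DRYRUN rather than executed, since they would destroy data on real disks.

```shell
# Sketch of the disk layout above. DRYRUN=echo prints the privileged
# commands instead of running them; drop it to actually format the disks.
DRYRUN=echo
$DRYRUN mkfs.ext4 /dev/sda1                      # 30GB, mounted at /
$DRYRUN mkfs.btrfs /dev/sda2 /dev/sdb /dev/sdc   # one BtrFS pool for /home
$DRYRUN mount /dev/sda2 /home                    # mounting any member mounts the pool
# The pool capacity is simply the sum of its members:
TOTAL=$((220 + 250 + 500))
echo "/home pool: ${TOTAL}GB"
```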

BtrFS has the option of transparent compression, in which files are automatically compressed as they are written to disk, completely invisibly to the user. The system normally compresses only files that actually shrink, but since this was mostly going to be a storage server, I decided to enable force-compression using zlib, so that all files are compressed.
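Compression is a mount option; a hypothetical /etc/fstab line for the pool above might look like the following (with plain “compress” instead, btrfs skips files that do not shrink):

```
/dev/sda2  /home  btrfs  compress-force=zlib  0  0
```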

Interestingly enough, I was able to send video files to the server at a full 91Mbytes/second over SMB/Windows network drives, while transfers over scp would usually cap at about 11Mbytes/second, probably due to the encryption overhead SSH adds. Compression does increase CPU load, but not enough to slow down a full gigabit line, even on an old machine such as the one I am using.

Later I decided to remove the 2TB drive from my main computer and add it to the Arch server. Without even turning the computer off, I was able to connect the drive and add it to the BtrFS pool at /home. I was surprised that just plugging in the drive and running “mkfs.btrfs /dev/sdd” and “btrfs device add /dev/sdd /home” transformed the 970GB into 2.9TB.

After adding the device, I thought it would be interesting to balance the devices, and I left the operation running overnight. The command literally emptied /dev/sda2 and /dev/sdb, left 58GB on /dev/sdc, moved ALL 800GB of the data onto /dev/sdd, the newly added 2TB drive, and then crashed with the message “18 enospc errors during balance”. The data on those disks does not seem balanced at all.
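The grow-and-rebalance sequence boils down to the following sketch (printed via DRYRUN rather than executed, since it needs root and real hardware):

```shell
# Add a new device to the mounted pool, then spread existing data across
# all members. DRYRUN=echo makes this a harmless dry run.
DRYRUN=echo
NEWDEV=/dev/sdd
$DRYRUN mkfs.btrfs "$NEWDEV"
$DRYRUN btrfs device add "$NEWDEV" /home
$DRYRUN btrfs balance start /home
```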

As mentioned earlier, one of the downsides of BtrFS is the lack of feedback to the user. On my current system, the following commands yield the following information:

du -sh /home

930G /home

Iterates through all files and sums their sizes.

df -h

/dev/sda2 2.9T 881G 1.9T 32% /home

System command to display free space.

btrfs filesystem show

Total devices 4 FS bytes used 879.28GB

BtrFS administration command.

Three different commands give three different reports of how much space is being used. The difference between the first and the other two is due to compression, but there is no way to tell which files actually compress well. There is a good answer in the FAQ, but I have since disabled “force-compression” and just use normal compression.
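For a somewhat more detailed breakdown than du/df, btrfs has its own reporting subcommands; a quick sketch (DRYRUN prints the commands, since there is no btrfs volume to query here):

```shell
# BtrFS-specific reporting commands. DRYRUN=echo for a side-effect-free sketch.
DRYRUN=echo
$DRYRUN btrfs filesystem show             # per-device allocation across the pool
OUT="$($DRYRUN btrfs filesystem df /home)"  # usage split into data/metadata/system
echo "$OUT"
```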

In summary, the good parts of BtrFS, from the perspective of a home user:

Long ago I read a blog entry from a Google employee on how to manage Gmail effectively. I am currently unable to find the original source, but the idea is easy to adopt. The method works by transforming the inbox into a “TODO” list, leaving only the active conversations in the inbox and labeling/archiving all other mail. I receive about 100 new messages per day, and with this method I am easily able to stay focused on only the important activity.

Before changing how you check mail, first archive all the mail in the inbox. Use the checkbox to select all messages, then, if necessary, click on “Select all 22492 conversations in Inbox”, and now click the “Archive” button you probably never used before. All the messages will now disappear from the inbox, but do not worry: it is all still stored in “All Mail.”

Next, enable keyboard shortcuts. Simply go to your settings and ensure “Keyboard shortcuts on” is checked. There are five main keyboard shortcuts you should learn first; write them down on a sticky note and keep it nearby if necessary:

J – Go to next message

K – Go to previous message

E – Archive

Enter – Open message (while in inbox)

Shift-? – Open help menu (to learn more shortcuts)

Now, every time you check your mail, you have a choice for each unread message: leave it in your inbox to take action on later, or archive it and never think about it again. The inbox should never really hold more than about 30 messages.

The advantage now is that your inbox contains only active conversations, and you can review all of them every time you open it; you know which messages still need action, and which ones you have responded to and are waiting on a reply for.

For example, here is my current inbox:

I usually have about 20 messages, but there is not much going on in my life right now. From just a quick look at those 6 messages, I know I will have to (1) reply to Stephen about the robot, (2) probably just read and archive the message about my homework grade, (3) remember that the Buddhist club election is next Tuesday, (4) send a message to the internal club about the Feijoada, (5) play the Tribes Ascend beta, and (6) buy one of the amateur radios from the list I asked my friend for.

My list of actions will consist of pressing “Enter” to open the first message, reading it, and replying to it; pressing “j” to go to the next message, reading it, and pressing “e” to archive it; and, since I have some free time now, taking another look at the list of radios, buying one, and then archiving it, leaving only 4 messages in my inbox.

For everyone who does not use Gmail, it is quite easy to merge your e-mail with another account. For example, if you want your WPI email to arrive in your inbox, simply go to your settings, open “Accounts and Import”, and follow the steps to add your WPI email using “Check mail using POP3.” If there are any problems with this, feel free to leave a comment asking for help.

This is the second time that AT&T has angered me with their illogical service. The first time, I wrote about AT&T’s poor choice of putting uninstallable applications on their Android phones, the limitations that come with them, and other things, but I decided never to share it since it was more of a rant.

I own an Android phone, a Samsung Galaxy S/Captivate, and I have always taken good care of it. I never rooted the device because I was afraid of losing the warranty, but one night about a week ago, my phone ran out of battery. It had happened before and I figured it would not be a problem.

When I arrived back home, I plugged the uncharged smartphone into the computer and booted it. Taking a look at the phone later, I noticed that the backlight was on, but all the pixels were black. I tried unlocking the device, but it would not respond to any input. Since it gave every sign of being frozen, I decided to take the battery out, put it back in, and boot again. This time I noticed that once the loud boot screen finished, the phone kept vibrating, like it does when a program crashes, and after a while it simply froze.

There is a key sequence to reset the Galaxy S, but apparently it does not work on AT&T’s version of the phone, the “Captivate.” Some friends also gave me a few hints on how to gather more information on why the phone was freezing, but it was near impossible to do anything, since the phone would freeze as soon as it booted and AT&T’s custom version of Android blocked me from doing anything else.

I took the phone to an AT&T store, and this is where the bigger joke began. The attention was good; as soon as I arrived, one of the workers asked me what I needed, and I explained that the phone froze as soon as it booted. She tried a factory reset, but the phone froze as soon as she booted it. She explained that I needed to call customer support, tell them the phone was neither physically nor water damaged, and have the phone replaced.

She takes me to the side of the store, dials the customer support number on one of the iPhone display phones, and hands me the device. After waiting some six minutes for an attendant, I quickly explain the situation, and he asks me for the last four digits of the SSN of the account owner. I simply have no idea what the last four digits of my aunt’s SSN are; I went to the AT&T store exactly to avoid this problem. I tell customer support to stay on the line while I grab a second display phone to call my aunt; nobody answers. Then I get my netbook, a Cr-48, out of my bag, and since the store’s wireless was horrible, I used Verizon’s 3G to call my aunt through Google Voice.

The customer support guy finally accepts my answer and transfers me to another line. After ten more minutes of waiting, I get to another guy who speaks really fast, explains to me the terms of exchanging my phone for a new one, and says that I will have to pay about $300 if I fail to follow the instructions. Not fully understanding what he said, I agreed anyway, since I had already spent about 40 minutes in the corner of the noisy store. Apparently they are mailing me a new phone and I have to send my old phone back; at least I hope so, because living without a phone for two weeks is complicated nowadays.

In summary, I am displeased with AT&T’s service. If I go to the store to have my phone looked at and fixed, I do not want to have to deal with somebody over the phone for 40 minutes.

These instructions will only work if you are using a Linux desktop. For those who are using the Fossil servers to create the virtual machine, here is another way to complete the homework and compile your kernel.

Open a terminal; you can press “ctrl+f2” to open the run window and type in “shell” to open an instance of a shell. Now connect to the fossil server by typing:

[nican@LNican]$ ssh -X (username)@fossilvm.cs.wpi.edu

Take a look at the ssh man page: the -X flag enables X11 forwarding, which basically means that programs with a graphical interface will be displayed on your local computer. It is a pretty nifty option.

You should be prompted for your password. Now that you are on the fossil VM server, you can create your own instance of the virtual machine. As given by the homework, type in:

nican@FOSSILVM:~$ sudo clone-vm.sh <your virtual machine name>

The sudo command executes the clone-vm.sh script with root privileges. The clone-vm.sh script is actually located at /usr/local/bin, but sadly read access is disabled on that file; I wanted to take a look at how it actually works.

You should be able to execute “sudo view-vm.sh <your virtual machine>” and see the screen of your guest machine, but it will probably be very slow. Remember this command, because you are going to need it in case you build a bad kernel and have to boot a previously compiled version.

Once the cloning of the virtual machine is done, it should say that the virtual machine has started. Now you can ssh into your virtual machine by typing:

nican@FOSSILVM:~$ ssh -X student@<your virtual machine name>.

(Do not forget the “.” after the machine name. I am unsure why the “.” is there, but I believe it is either a misconfiguration or a leftover from when the VM was part of a larger domain space that is now just an empty string.)

As before, still use the -X flag. Now X11 is forwarded to the fossil VM connection, which in turn forwards it to your local Linux computer. Just to show how cool X11 forwarding is, type in:

student@nicanvm:~> firefox &

A new instance of Firefox should open on your computer, but it is actually running inside fossil. It will be a bit slow, but you can browse and download files directly to your guest machine.

Do not forget to change the student and root passwords on your machine. On your virtual machine, type this to change the student password:

student@nicanvm:~> passwd

And type this to change the root password:

student@nicanvm:~> sudo passwd

You might also want to take a look and play around with YaST. In your virtual machine, type in:

student@nicanvm:~> su
root@nicanvm:~> yast

(“su” lets you work in the terminal as the root user, as opposed to “sudo”, which runs just one command.)

At this point you should be able to do the homework without any additional software.

Now for some other fun configurations you can do:

1. Password-less login

At your local machine, run:

[nican@LNican]$ ssh-keygen

This will create a private key on your machine that will grant you access to other machines without the need for a password. Now you need to copy the public key to the remote machine; run:

[nican@LNican]$ ssh-copy-id (username)@fossilvm.cs.wpi.edu

You will be prompted one last time for your password on the fossilvm. Now you can ssh into fossil without typing a password.

2. SSH configuration

Create or edit the file ~/.ssh/config on your local machine, and paste content like the following:
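The configuration snippet is missing from the original post; here is a minimal sketch of what it might have contained. The alias names “fossil” and “myvm”, and the ProxyCommand hop through fossilvm, are my guesses, not the author's actual file:

```
Host fossil
    HostName fossilvm.cs.wpi.edu
    User (username)
    ForwardX11 yes

Host myvm
    HostName <your virtual machine name>.
    User student
    ForwardX11 yes
    ProxyCommand ssh fossil -W %h:%p
```

With something like this in place, “ssh myvm” should reach the VM directly through the fossil server.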

Then you can log in directly to your virtual machine or the fossil server without much hassle.

3. Mount your VM as a local directory

Another awesome feature is the ability to mount a remote machine as a local folder on your computer, but for that you need a special piece of software called “sshfs.” Install it with your package manager, depending on your flavor of Linux.

Once installed, on your local machine, run: (To run this command properly, you need to do tip #2 first.)
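The command itself is missing from the original post; here is a sketch of what it likely was, assuming a “myvm” host alias exists in ~/.ssh/config (tip #2). DRYRUN=echo prints the commands instead of running them, since the remote machine is not reachable here:

```shell
# Mount the VM's root directory onto a local "myvm" folder over ssh.
DRYRUN=echo
$DRYRUN mkdir myvm
MOUNT_CMD="sshfs myvm:/ myvm"
$DRYRUN $MOUNT_CMD
$DRYRUN fusermount -u myvm   # unmount when done
```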

This will create the folder “myvm” and mount the root directory of your virtual machine in that directory. Be cautious when running commands inside that directory on your local computer, because every time a file is read inside that folder, it is actually downloaded from the remote machine.

Recently I was given the task of working with a PIC microcontroller. I had worked with the same processor many years ago and still had the hardware to program it. I remembered it being unnecessarily complicated to set up the IDE, the compiler, and the programmer together, and I see Microchip has not changed much.

In brief, I spent a whole day attempting to set up the environment to program the microchip, and ended up using an outdated version of the software, hosted on some university website, because the latest version on the official website would just not work. What I want to bring to attention is the comparison between the experience I had using Microchip's tools and the experience of working with the Arduino.

Arduino is easy to set up and understand, and it just works out of the box. Even if Microchip provided better professional tools, it would still be no match for the Arduino, because of how easy the Arduino is to get into and how much work has been done by its community. Microchip leaves a very bad first impression: the software is not easy to understand, and neither the documentation nor the website is very helpful. It is unlikely that I will go back to using their products, and that is a downturn for Microchip’s sales, because in the future, if there is a choice to make for some bigger product, I will probably try to stay away from their company.

A lesson for everyone: first impressions are very important in a product. Even though I am just one simple buyer of Microchip products, I now have a tendency to keep away from their products, to tell the people around me to keep away from them, and to use Arduino instead as the cheaper and more reliable choice.

Not so long ago, a version of uTorrent for Linux was released. uTorrent is a freeware, closed-source torrent client originally built only for Windows, and due to the high number of requests for a port to Mac OS X and Linux, the developers have answered. In my opinion, the program has the best web user interface of any torrent client, and it does not fall short in the features it provides.

My first mistake when using uTorrent was placing the “utserver” binary together with all my other .torrent files. For some reason, when I ran the application without any configuration, it deleted all the .torrent files, even though the application itself was working perfectly fine.

One odd behavior of the web interface: if you try to access it at http://localhost:8080/, it simply drops an error saying “invalid request”; you must access it at http://localhost:8080/gui. The default user is “admin”, with a blank password. The credentials, and most other settings, can be changed through the web interface, and the changes are saved to settings.dat, so you do not even have to touch the utserver.conf configuration file.

After a read through the manual, I decided to do something I had never done before: set it up as a service on my computer. I searched around for how to set the application up as a service, but the only thing I found was this post, which answered what I wanted to do; still, I wanted to learn more about the process. I entered the #opensuse IRC channel, and someone told me to take a look at the examples at /etc/init.d/skeleton and /etc/init.d/skeleton.compact.

I created the utorrent user (useradd utorrent), created the utorrent directory (mkdir /srv/utorrent), copied all the files into it, gave the user permission over all the configuration and torrent files (chown -R utorrent utorrent), and started tinkering with the skeleton file. Between the forum post and the instructions in the file itself, it was a pretty straightforward configuration. It was also useful to learn about the “startproc” command, to start a process as another user, and the “id” command, to get the user/group id of the utorrent user, two commands I was not aware of.
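For anyone curious, the modified skeleton would end up along these lines. This is a rough sketch, not the actual file: the paths, the -settingspath option, and the SUSE startproc/killproc helpers are assumptions on my part:

```
#!/bin/sh
# /etc/init.d/utserver -- sketch based on /etc/init.d/skeleton.compact
UT_BIN=/srv/utorrent/utserver
UT_USER=utorrent

case "$1" in
    start)
        # startproc launches the daemon; -u switches to the unprivileged user
        startproc -u $UT_USER $UT_BIN -settingspath /srv/utorrent
        ;;
    stop)
        killproc $UT_BIN
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
```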

Once I was finished with all my changes, I saved the modified skeleton file as /etc/init.d/utserver, gave it execute permission (chmod 755 /etc/init.d/utserver), and started the service (service utserver start). Everything seems to be running perfectly fine.

Google Music Beta is a new service that serves songs directly from the cloud, so that all music files are synchronized and available everywhere. It has been two weeks since the beta opened to the public, and other major competitors, such as Amazon and Apple, are also working on their versions of the service.

The first thing that Google asked me to do was install a piece of software on my computer. Being Google, it was a painless install with a simple interface. It asked which music source to sync, giving the options of the iTunes or Windows Media Player library, or selected folders. I chose to sync selected folders, and made the mistake of choosing my biggest folder of music, because it started scanning the whole folder and uploading every single song to Google. The manager ignores .flac files.

The music player is easy to understand, mostly coded in JavaScript, but requires Flash to play the songs (because Chrome itself cannot open .mp3 files). It has the same shortcoming as almost any other music player: it does not know how to organize the songs very well. For the 700 uploaded songs, there are 200 albums, and more than half of those albums contain a single song. There is also no easy way to select more than one album to play at once.

Oddly enough, when you play a song, the browser sends a simple page request, and Google returns a JSON string with the URL to the mp3 file at googleusercontent.com. The URL does not seem to be protected, and any other person seems able to open it, but there is a really short timeout, about 10 minutes or less, until the URL is no longer valid.

In the website settings, it shows there is a limit of 20,000 uploaded songs, and it also gives me the option to delete all my songs and withdraw from the beta. The application does not give the option to download music back to your computer.

There is also an Android app, which anyone can get without the beta. The app gives me the option to select songs to be available offline, and also the option to download songs only while connected to wi-fi. That option is off by default, but it is really important for me to leave it on, considering the limited 200MB/month that AT&T provides.

In conclusion, it is another step toward cloud services. All the basic functionality is there, but there is no amazing must-see piece of technology. I feel like we are turning technology around and going back to the age when computers were only terminals to a mainframe, which in some ways makes a lot of sense to me.

For the past 2 weeks I have been developing a simple application using node.js, an asynchronous, event-driven programming environment. The whole idea is to have one running thread that continually executes callback functions and can never be blocked by I/O calls. Every time the programmer has to query the database, read a file, or wait for a network response, he has to provide a callback function, so that when the system has finished getting the needed information, the function will be called.

The main advantage of such a program design is that it works extremely fast, and there are no multi-threading problems. Just to show an example of how a program might look, take a look at this snippet:

var http = require('http');
// "db" is assumed to be an already-connected database client with a
// node-mysql style query(sql, callback) interface.

// The callback passed to createServer runs once per incoming request
http.createServer(function (req, res) {
    // Called back when the database has finished getting the news
    db.query("SELECT * FROM `news` LIMIT 0,10", function (err, results) {
        if (err != null) {
            res.writeHead(500, {'Content-Type': 'text/plain'});
            res.end('Unable to select from database: ' + err);
            return;
        }
        res.writeHead(200, {'Content-Type': 'text/html'});
        // This is a plain javascript loop that is NOT called asynchronously
        results.forEach(function (news) {
            res.write("<p>" + news.title + "</p>");
        });
        // Actually close the stream; it can be left open as long as you want
        res.end();
    });
}).listen(1337, "127.0.0.1");

The interesting part about this code snippet is that new requests can be answered in between the two asynchronous calls; while the handler is waiting for the database, the thread is simply free, so no extra resources are used.

One good aspect of node.js is the active community. There are lots of libraries out there, with active developers updating modules on a daily basis. There is also a great tool, npm, which makes the installation of modules easy. Some of the libraries I am currently using are:

FlowJS: Instead of nesting functions one inside another, you just make a list of functions to be called, making the code a lot easier and cleaner to read. You can also specify a group of callbacks that need to complete before the chain of calls progresses.

Vows: A testing framework. A bit confusing to understand, but actually nice to use. It makes testing comprehensive, and it also has an easy-to-use command-line tool.

Mongoose: A MongoDB library and driver. It makes it easy to create schemas for documents in different collections.
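Installing modules like the ones above is a one-liner with npm. A sketch, with DRYRUN=echo so it does not actually hit the network (the package names are the registry names I would expect, which may differ for FlowJS):

```shell
# npm fetches packages from the registry into ./node_modules.
DRYRUN=echo
PKGS="vows mongoose"
$DRYRUN npm install $PKGS
```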

It takes a while to get used to the programming style. One aspect I find weird is how errors are handled. There are two ways that I know of. One is to have one callback for success and another for failure, but this usually makes the code hard to follow. The other, which I adopted, is to always make the first argument of the callback null if there was no error, and an error object otherwise, and to always check whether the error is non-null.

Overall, it is a very interesting and efficient way of programming. There is a lot to learn and use, and there are also quite a few javascript tricks to pick up.