Sunday, April 15, 2012

This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995

I explained in a previous post why I wanted to use rtorrent instead of the torrent client supplied with the Storcenter.
There is a new development: thepiratebay.se switched to magnet links only for file sharing, and the version of rtorrent previously installed did not support magnets...
The good news is that the new rtorrent (0.9.1) supports magnets and also natively supports IP filtering!

The problem is that it was difficult to compile for the Storcenter, as the gcc toolchain available on the Storcenter is very old... but no worries, I compiled it for you!

If you don't want to connect remotely to rtorrent to manage it from your computer, you can skip the rest of this section...
Install nTorrent on your computer http://code.google.com/p/ntorrent/
Install xml-rpc on the NAS:

Security warning: if you follow these steps, anybody who can access port 8081 of your NAS will be able to send commands to rtorrent! Make sure that this port is only accessible from your local network.

4. IP filtering
a. Download the filter file
IP filtering support is built into rtorrent-0.9.1, but you still need to configure the download of the filter files:

Chances are that you are trying to compile for ARM (or another exotic architecture) and your GCC version is too old for the source code you are trying to compile!
There is an easy fix: upgrade your GCC.

If you can't upgrade your GCC for any reason (for example, you are on embedded hardware you don't have full control over), follow the steps below!

1. Find the source code file that's right for the architecture you are compiling for
You are going to find it inside a GCC source tarball.
To find it, go into gcc/config inside your GCC source tree and run

grep '__sync_fetch' */*

to find the right file.
For ARM, it is:

gcc/config/arm/linux-atomic.c

2. Compile the source code file and link it into the program you are compiling

Friday, March 2, 2012

It is the digital age: the amount of personal data we produce keeps going up, and digital pictures, HD movies and documents take an ever increasing amount of space.
That's a lot of memories and information we don't want to lose (or can't afford to).

Companies have had backup plans for a long time, but the concept is now entering homes through cloud storage (and other means). When your data is lost, it is too late: you need to devise a plan now!

I will focus on the needs of individuals and deal with 3 different data types:
- photos
- videos
- documents (excel, doc, text, pdf...)

The cloud will shield you from hardware failure and physical destruction, but might not protect you against human error... An added bonus is that you can share your data with other people :)
You also take on additional risks, like the risk of your online account being hacked, or of your data becoming visible to everybody because of a misconfiguration on your side.

The good news is that each service usually has a free allocation, but you might have to pay for a feature you really need.

For movies, you have:
- Youtube: Videos can be uploaded for free (up to 20GB per video)
The problem is that the videos are automatically re-encoded (and reduced in quality), and it is not easy to download them back once they are in the cloud!

For documents, you have:
- Google Docs: storage of documents, presentations and spreadsheets in Google format is unlimited, and you get 1GB for other types of files. Additional space can be bought (and shared with Picasa); see the Picasa link above for pricing details. The problem is that there is no easy way to synchronize a local folder with Google Documents...
- Dropbox: 2GB of free storage. Local folders can be synchronized with Dropbox.

2. Use a backup Service

This will shield you from hardware failure (but it might be slow to recover the data), physical destruction and human error.

The idea is to send your compressed and encrypted data to a remote server where it is stored. You can usually access your backups from a website and from a dedicated piece of software.
The problem is that all your data goes through the internet, and the initial upload can be very slow if you have a lot of data. For example, if you have 1TB of data to back up, the initial backup can take months!
The same goes when you do a full restore: it will usually be an order of magnitude faster than the initial backup, but it can still take a few days.
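A back-of-envelope calculation makes the point. Assuming a 1 Mbit/s effective upstream (a typical home ADSL line): 1 TB is about 8,000,000 Mbit, so about 8,000,000 seconds of upload:

```shell
# days needed to upload 1 TB at 1 Mbit/s (86400 seconds per day)
echo $(( 8000000 / 86400 ))   # prints 92
```

Three months of saturated upstream, just for the first pass.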

If you also need to recover fast from an hard drive failure (the most common hardware failure) you can use a local redundant RAID configuration (like RAID 1 or 5). Please note that RAID alone will not prevent data loss: you are still vulnerable to other hardware failures (like RAID controller failure, destruction of the device and human error).

Let's compare the different plans out there. I will focus on 3 providers: Mozy, Carbonite and Crashplan.

Provider  | Plan name        | Yearly price | Space     | Number of computers        | Supported OS                 | Automated backup
----------|------------------|--------------|-----------|----------------------------|------------------------------|-----------------
Crashplan | +10GB            | 25 USD       | 10 GB     | 1                          | Windows, Mac, Linux, Solaris | All files
Crashplan | Unlimited        | 50 USD       | Unlimited | 1                          | Windows, Mac, Linux, Solaris | All files
Crashplan | Family unlimited | 120 USD      | Unlimited | 2-10                       | Windows, Mac, Linux, Solaris | All files
Carbonite | Home             | 59 USD       | Unlimited | 1                          | Windows, Mac                 | All except video
Carbonite | HomePlus         | 99 USD       | Unlimited | 1                          | Windows                      | All files
Carbonite | HomePremier      | 149 USD      | Unlimited | 1                          | Windows                      | All files
Mozy      | 50GB             | 72 USD       | 50 GB     | 1 (add computer +2 USD/mo) | Windows, Mac                 | All files
Mozy      | 125GB            | 120 USD      | 125 GB    | 1 (add computer +2 USD/mo) | Windows, Mac                 | All files

Whatever your usage, Crashplan always seems cheaper and has more features. I use it myself and I am very satisfied with it...

The free Crashplan software also allows you to back up to a friend's computer (running Crashplan as well). This means that you can back up your data without paying anything, provided a friend is ready to allocate you some space for backup.

3. Case studies

We just need to find the most cost effective combination of the above:

Profile A: 10GB of pictures and a few documents: 5 USD/year
Pay 5 USD/year for Picasa storage (20GB).
Use the Dropbox free allocation to store the documents.
Problem: the backup process is manual: if you forget to upload your pictures to Picasa, they are not backed up (unless you use the software I wrote to automatically upload to Picasa: see my post here). You are still vulnerable to human error.

Profile B: 100GB of pictures, 200GB of movies and 10GB of documents: 50 USD/year
The cheapest alternative is Crashplan Unlimited (50 USD/year).
The backup process is now automatic: no need to worry about forgetting to back up something. On top of that, you are protected against human error, as you can retrieve former versions of a file.

If your data is spread across different computers, you can buy a NAS and run Crashplan on the NAS (see my post on how to install Crashplan on an Iomega NAS here). Alternatively, you have the simpler option of buying Crashplan Family unlimited.

4. Conclusion

If you care about your data: take the time to devise your backup/data recovery plan now! You can always find a way that fits your budget.

You can get a reduced-quality backup of your pictures and videos for free. Trust me, it is better to have a reduced-quality backup than nothing!

Thursday, March 1, 2012

The goal of this tutorial is to install Vuze headless (as a command line application). Most of the tutorials found on the web suggest doing the configuration of Vuze in the UI before starting it in headless mode. Unfortunately, this is not possible on a NAS, where you have no X server and no screen...

Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995 but uses unsupported features on the hardware. Please use at your own risk.

Since Vuze is a Java program, the same steps should allow you to install Vuze as a headless client on any hardware running Java.

Unfortunately, I ran into a lot of JVM crashes with Vuze headless and the Oracle JVM ejre1.7.0 (for ARM). On top of that, Vuze is quite a heavy program in terms of CPU and memory usage, which is annoying for the type of hardware we are looking at (like a NAS). Therefore, I don't recommend installing Vuze on a NAS. I suggest you look at rtorrent, which is much more reliable (see my tutorial How to install rtorrent with IP filtering).

This also installs IP filtering. If you don't want that, just skip the set "Ip Filter Autoload File" command.

You are good to go now. To have vuze automatically start at boot, you need to create the script in /etc/init.d (you can adapt the azureus script provided inside the install).
If you have an Iomega NAS, look at this tutorial to see how to have the program run at boot.

You can now connect to the web Vuze UI from your web browser at
http://ip_of_nas:6883/. Please note that the web UI is not as rich as the regular UI (most options are not available in the web UI).

Sunday, February 26, 2012

I am a fan of Google Docs: I often need to access and edit my documents while I am away, and Google Docs offers a great way to do that.
The problem is that I have a lot of large PDFs there, and they can take a while to load: I would love to have a local copy when I am in the office...
On top of that, I always like to have a local copy of stuff... just in case! Call me paranoid, but what happens if your account is hacked? Or if Google unilaterally closes your account because they consider you don't respect the terms of use? Better safe than sorry...

I couldn't find an application anywhere that would do what I want (get a local backup of my Google documents and update it regularly).
There is the Google "takeout" application, but you cannot schedule regular downloads...
A project like google-docs-fs seems promising, but it only supports Google documents (and not any other file you may have uploaded if you have -like me- a premium account). Plus, my analysis is that there are too many possible points of failure if you rsync this file system... I need something more robust.

I decided to code what I need myself: a java command line application that can be used to schedule regular downloads of all your google docs documents.

1. Presentation of gdocsbackup.jar

The features implemented:
- reuse data from a previous backup to avoid re-downloading files that haven't changed
- rotating backups (for example, a maximum of 7 backups, backup.zip being the most recent one and backup.7.zip being the oldest one)
- zip archive or plain folder archive (takes more space but is easier to access)
- configurable document export mode (export Google spreadsheets as xls or as csv)
- download documents that are in multiple folders only once (gdocsbackup.removeduplicateddocuments)
- archive without folder structure (all documents in one zip, like Google takeout) or with folder structure (much easier to navigate)
- support for any type of file.

TODO:
- use hard links on operating systems that support them (that would substantially reduce the amount of disk space needed for multiple backups with a lot of unchanged documents)
- fix the bug that forces you to use a temp directory on the same partition as the destination directory

In my setup, I want to install gdocsbackup.jar as a daily cron job on my NAS, but you can install it anywhere.

The program is configured using the config file gdocsuploader.properties which reads as follows:

#use system defined proxy
gdocsbackup.usesystemproxy=true
#google account username and password
gdocsbackup.username=xxxx
gdocsbackup.password=xxx
#the path where we want to backup
gdocsbackup.backuppath=C:\\Users\\xxx\\Documents\\Data\\
#the name of the backup archive.
#the zip archives will be named: backuprootname.zip backuprootname.1.zip
#the folder archives will be named: backuprootname/ backuprootname.1/
gdocsbackup.backuprootname=gdocs_backup
#the number of backup files to keep
gdocsbackup.nbbackupfiles=7
#TRUE if you want to store the backup as a zip file.
gdocsbackup.usezip=FALSE
#zip compression level (0-9) with 9 being the most compressed (and most CPU intensive)
gdocsbackup.zipcompresslevel=6
#use hard links to link new data identical to older data. This does save a lot of space (you can't use this option with usezip)
#not supported yet!
gdocsbackup.usehardlinks=FALSE
#document export format: one of doc html odt pdf png rtf txt zip
gdocsbackup.documentexportformat=doc
#presentation export format: one of pdf png ppt txt
gdocsbackup.presentationexportformat=ppt
#spreadsheet export format: one of xls csv pdf ods tsv html (NB: first sheet export only for csv and tsv)
gdocsbackup.spreadsheetexportformat=xls
#try to replicate the directory structure in the zip
gdocsbackup.keepdirectorystructure=TRUE
#show documents that appear at different places in the folder tree only once (in the first folder where it is found)
gdocsbackup.removeduplicateddocuments=TRUE
#log file (for linux, good practice is to put it in /var/log/ or /opt/var/log (and make sure logrotate works correctly))
gdocsbackup.logfile=C:\\gdocsbackup.log

All options are self explanatory. You can customize it as required by your setup.

As the program is java, it can be run on any OS / Architecture supporting Java.

Please note that in order to "rotate" backups, the program will delete the oldest backup! Don't modify the backups or store anything else there!
The program only gets information from the Google server: it does not update or delete anything, so you are safe there!

To determine if a file was already downloaded, the last_update tag given by Google is checked. I suggest you do a full backup from time to time to avoid an error propagating from backup to backup (to do that, just add the full download option after the "properties" file when launching the jar).

2. Steps to install the gdocsbackup on a linux based NAS
The setup is easy to adapt to any machine running linux. I didn't do a tutorial for Windows or Mac as I lack some knowledge to do it, but it can of course be done... feel free to adapt it and post your results and hints in the comments!
This tutorial assumes some vi and Linux knowledge...

This is how I installed the gdocsbackup.jar on my NAS (an Iomega Storcenter ix4-200d). Please note that the procedure is unsupported by Iomega! use at your own risk!

a. Download and setup of gdocsdownload
First, you need to ssh into your NAS (see my other post if you have an Iomega Storcenter).
Then:

The program will need to be started from a script so that we can set correct folder permissions and the TMP folder.
You need to make sure there is enough space in your temp folder (my /tmp/ folder is way too small, that's why I use /opt/tmp/).

vi gdocsdownloader

and then type:

#!/bin/sh
#this is to have a backup that's readable by everybody
#but only writeable by the owner.
#change it to suit your needs
umask 022
#use a tmp file with enough space to fit all your docs
#NB: it seems like there is a bug somewhere and the tmp directory has to
#be on the same partition as the destination directory...
#please choose a tmp file respecting these conditions
#/opt/bin/java -Djava.io.tmpdir=/opt/tmp/ -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar $@
/opt/bin/java -Djava.io.tmpdir=/mnt/pools/A/A0/data/perso/gdocs/ -jar /opt/usr/local/gdocsdownload/gdocsdownload.jar $@
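To run it daily, you can then add a crontab entry along these lines (the time, and the exact name of the properties file, are assumptions from my setup above; adapt the paths and the crontab location to your system):

```
# run the Google Docs backup every night at 03:30
30 3 * * * root /opt/usr/local/gdocsdownload/gdocsdownloader /opt/usr/local/gdocsdownload/gdocsuploader.properties
```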

Saturday, February 11, 2012

I like to have a copy of my pictures on picasa to be able to share them with friends and family. I usually upload them in reduced resolution, to stay within the free storage space given by google.
The problem is that uploading them can be a pain: the Picasa software can be very slow to upload them, especially if you are accessing the pictures on your NAS over a wireless network.

Instead of using the Picasa software, I tried to use the googlecl tools (http://code.google.com/p/googlecl/), but it turns out I couldn't get them to do what I want (no sync folder option and no resizing of pictures on the fly).
There is an unsupported patch to synchronize folders with googlecl (http://code.google.com/p/googlecl/issues/detail?id=170), but that doesn't solve the problem of image resizing... I did not even test it...

1. Presentation of my solution: picasauploader.jar

To solve the problem, I wrote a small piece of Java code (picasauploader.jar) that:
- creates a new album in Picasa Web when a new folder is created on the disk
- uploads (and resizes if necessary) new pictures from the disk to Picasa Web

In my setup, I want to install picasauploader.jar as a daily cron on my NAS, but you can install it anywhere.

You just need to organize your pictures as
/path/albumname/picture.jpg
and use /path in the picasauploader.properties

The jar is configured using the config file picasauploader.properties which reads as follows:

#use system defined proxy
picasauploader.usesystemproxy=true
#picasa/google account username and password
picasauploader.username=xxxx
picasauploader.password=xxx
#semicolon-separated directories
picasauploader.diskpaths=/xxx/yyyy;/aaaa/bbbb
#can be either:
# private: accessible to anybody with the direct link but not without the direct link
# protected: not accessible except from your account
# public: available for everybody to see
picasauploader.albumcreationaccess=private
#if you want to resize images before uploading (aspect ratio is kept)
#Note: only JPEG images are resized...
#max Height in px
picasauploader.maxheigt=1600
#max Width in px
picasauploader.maxwidth=1600
#jpg quality when resizing
picasauploader.resizequality=85
#log file (for linux, good practice is to put it in /var/log/ or /opt/var/log (and make sure logrotate works correctly))
picasauploader.logfile=/opt/var/log/picasaupload.log

All options are self explanatory. You can customize it as required by your setup.

As the program is java, it can be run on any OS / Architecture supporting Java.

Please note that for safety, the program does not delete anything on Picasa Web (nor on the disk, of course). It is therefore very safe to use.

Known limitations:
- only supports the JPG, GIF, PNG and BMP image formats
- picture resizing is only supported for JPG images
- only the file name is used to determine if a picture was already uploaded: if a picture was already uploaded and then changed on disk, it won't be uploaded again.

2. Steps to install the picasauploader on a linux based NAS
The setup is easy to adapt to any machine running linux. I didn't do a tutorial for Windows or Mac as I lack some knowledge to do it, but it can of course be done... feel free to adapt it and post your results and hints in the comments!
This tutorial assumes some vi and Linux knowledge...

This is how I installed the picasauploader.jar on my NAS (an Iomega Storcenter ix4-200d). Please note that the procedure is unsupported by Iomega! use at your own risk!

a. Download and setup of picasauploader
First, you need to ssh into your NAS (see my other post if you have an Iomega Storcenter).
Then:
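The download commands themselves were lost from this page. As a sketch, the setup mirrors the gdocsdownload wrapper above (paths and file names are assumptions; I use /tmp here so you can try it anywhere, but on the NAS I would use /opt/usr/local/picasauploader):

```shell
# sketch: install a wrapper script for the jar (paths are assumptions)
DIR=/tmp/picasauploader            # on the NAS: /opt/usr/local/picasauploader
mkdir -p "$DIR"
# copy picasauploader.jar and picasauploader.properties into $DIR, then:
cat > "$DIR/picasauploader" <<'EOF'
#!/bin/sh
# backups readable by everybody, writeable by the owner only
umask 022
/opt/bin/java -jar /opt/usr/local/picasauploader/picasauploader.jar "$@"
EOF
chmod +x "$DIR/picasauploader"
```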

Thursday, January 26, 2012

EDIT: this post is outdated now, please see this post instead with a much newer version of rtorrent that works with magnets and natively supports ip filtering

This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
The torrent software supplied with the Storcenter doesn't work well for me: some torrents never load, some disappear, etc., and there is no IP filtering capability. The aim of this tutorial is to install rtorrent on the NAS, which seems the most logical choice for a NAS (lightweight and reliable), and to explain how to enable IP filtering directly within rtorrent. This is especially useful since peerguardian/moblock can't be installed on the NAS because some kernel modules are missing...

If you don't want to connect remotely to rtorrent to manage it from your computer, you can skip the rest of this section...
Install nTorrent on your computer http://code.google.com/p/ntorrent/
Install xml-rpc on the NAS:

Then, create the directories rtorrent/download and rtorrent/torrents inside the torrent share using regular NAS access (to have the right permissions)

4. Configure the software for remote access
This is only if you want to manage your rtorrent remotely:
Thanks to http://www.nslu2-linux.org/wiki/HowTo/RtorrentWithRemoteGUI for the setup.
Security warning: if you follow these steps, anybody who can access port 8081 of your NAS will be able to send commands to rtorrent! Make sure that this port is only accessible from your local network.
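For reference, the wiring between rtorrent and lighttpd looks like this (taken from the nslu2-linux wiki setup linked above; the port numbers match the defaults used in this post, adjust to taste):

```
# in ~/.rtorrent.rc: make rtorrent listen for XML-RPC commands over SCGI
scgi_port = 127.0.0.1:5000

# in the lighttpd config: forward /RPC2 to rtorrent
server.modules += ( "mod_scgi" )
scgi.server = ( "/RPC2" =>
  ( "127.0.0.1" =>
    ( "host" => "127.0.0.1",
      "port" => 5000,
      "check-local" => "disable" )
  )
)
```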

Did you install xml-rpc correctly? Is ld.so.conf updated correctly? Did you run ldconfig?
to connect to the running instance:

/opt/bin/screen -r rtorrent

and press Ctrl-a d to detach (or just kill the terminal, e.g. putty). For remote access, you can start lighttpd on the NAS:

/opt/etc/init.d/S80lighttpd start

and then start nTorrent on your computer and connect to your NAS on port 8081 (by default) on path /RPC2.

5. Get rtorrent to start automatically on reboot
Follow the tutorial How to run a program at boot on Iomega Storcenter. You just need to add the following lines to the script:

6. Get a peerguardian like protection
First, I tried to install peerguardian linux but ran into a wall: the LifeLine OS on the Iomega Storcenter does not have the right kernel modules. I tried to recompile the kernel from the sources given by IOMEGA on their website (available for download in the support section). I got it to compile, but insmod of the required module (x_tables.ko) freezes the kernel (hard reboot required). Since I could not think of a safe way to push further in this direction (without risking bricking the NAS), I investigated other possibilities... (post a comment if you want more details on the kernel compilation)
I thought about abandoning rtorrent altogether and trying Vuze (which has IP filtering). I got it to run, but it was pretty unstable (JRE crashes)...

Luckily, somebody wrote a patch for rtorrent so that it supports IP filtering. I got it to compile on my NAS (version 0.8.6) and here is the result. You just need to:

That will give you an rtorrent with IP filtering support. Note: this only works if you previously installed version 0.8.6 of rtorrent! The precompiled version will only work if you have an "armel" architecture. Otherwise, you need to recompile from source (see point 7 below).

That's it: you just need to restart rtorrent to enjoy IP filtering. The IP filter file will be updated every day thanks to the cron job (and rtorrent will reload it).
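If the cron part got lost in your setup, the entry can look something like this (the blocklist URL is a placeholder, and the destination path must match the file your rtorrent config loads):

```
# refresh the blocklist every night; rtorrent reloads it afterwards
45 4 * * * root wget -q -O /opt/etc/ipfilter.p2p.gz "http://example.com/level1.gz" && gunzip -f /opt/etc/ipfilter.p2p.gz
```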

7. In case you want/need to compile rtorrent with IP filtering yourself!
This is useful if you are compiling a different version or targeting a different architecture (please comment and report your success if you do so).

Then, take care of libtorrent. I recompiled libtorrent to install the correct headers, as I couldn't find them anywhere (I didn't look for a deb archive with the correct headers, but that might have done the trick...):

This one came from a mismatch between headers and libs. Recompiling the xml-rpc package from source fixed the issue.
8. What's next
In a different post, I will detail how to install Vuze headless (without graphical interface)... I don't recommend installing Vuze because I ran into stability issues while testing it (several JRE crashes). However, a new JRE version might solve the issue...
On top of that, you can't do much from the web UI: when you want to set something up, you often have to use the Vuze command line interface, and I did not find any proper documentation for it.
Note as well that RSS feed features don't work in headless mode.

Monday, January 23, 2012

This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! I understand it works (but is still unsupported by IOMEGA) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
This post is extracted from a previous post (how to install crashplan on Iomega storcenter). I am planning more tutorials on the Iomega storcenter ix4 and wanted to centralize this part in case it needs to evolve with future firmwares...

2. Create a script that runs at boot
The Iomega OS (EMC LifeLine) does not respect what is inside /etc/rcX.d/. If you have another brand of NAS, chances are that installing the script into /etc/rc2.d/ will result in the script running at boot...
If you already downloaded the scripts to have a command run at boot, you can just add the new command below the one you already have in /opt/init-opt.sh. Just make sure the commands all return immediately (or add a & at the end) so that the script does not get stuck before reaching the last command.
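A minimal illustration of why the & matters (the script path here is arbitrary): anything that blocks stalls every line after it, so a long-running service would prevent the boot script from ever reaching its last command.

```shell
cat > /tmp/init-demo.sh <<'EOF'
#!/bin/sh
# a long-running "service": background it with &, otherwise
# the next line would only run once it finishes (30 s later)
sleep 30 >/dev/null 2>&1 &
echo "reached the last command"
EOF
sh /tmp/init-demo.sh   # prints immediately instead of waiting 30 s
```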

This tutorial uses unsupported features of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! It should work (again, it is unsupported) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
The aim of this tutorial is to be able to add programs to your NAS without having to go too deep into the system. This also lets you compile natively on the NAS, without needing to cross compile for your architecture...

2. Directory Structure on the NAS
The LifeLine OS (Iomega's OS) puts most of the root file system in read-only mode. It is not much use trying to put stuff there anyway, because the partition is very small.
You can type:

to build the list of available packages. The problem with this setup is that the installation of some packages will fail because part of the filesystem is read-only.
Thanks to ipkg, there is an easy fix:

ipkg install ipkg-opt

This installs the binary /opt/bin/ipkg-opt. The idea is then to use this binary instead of the regular ipkg: as a result all packages will be installed in /opt/ and you won't run into problems with the read only filesystem.
The only drawback is that /opt/bin/ is not in your path... There is a simple remedy for that:

PATH=/opt/bin:$PATH

Note: this is not persistent (if you start another shell, you will need to do it again).
Also, as a one-time persistent thing, I recommend doing

vi /etc/ld.so.conf

and add

/opt/lib/

at the end. That's the main problem with software installed in /opt: you might end up with duplicate libraries between /lib and /opt/lib (ldd and ldconfig are your friends).

so that your config in /etc/ipkg.conf remains usable with /opt/bin/ipkg and /opt/bin/ipkg-opt.

Then type:

/opt/bin/ipkg update

to set up the list of available packages for /opt/bin/ipkg and /opt/bin/ipkg-opt.

4. Install utilities and optware-devel
First, install the utilities you are missing to do some actual Linux work:

ipkg-opt install zip unzip bzip2 gzip

If you want a full gcc toolchain to compile your own applications from source:

ipkg-opt install optware-devel

Compilation can be slow, but this allows you to compile natively on your NAS (I think it is simpler because there is no need to set up cross compiling on another box)...

5. Install armel/debian compiled software
Unfortunately, you will soon discover that some of the packages you want are not available through ipkg.
You can then either compile your own software (see next point) or get some ready-made Debian archives...
In this case, I suggest using the following kind of command (for example for libsigc++-2.0-dev):
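The exact command was lost from this page, but the idea is to unpack the .deb under /opt as if /opt were the root directory. Sketched below with a locally built stand-in .deb so you can try it anywhere; on the NAS you would first wget the real armel .deb from a Debian mirror into /opt/tmp:

```shell
cd /tmp && mkdir -p pkgroot/usr/lib fakeopt
echo "stub" > pkgroot/usr/lib/libdemo.so     # pretend payload
tar -czf data.tar.gz -C pkgroot .
echo "2.0" > debian-binary
ar rc demo.deb debian-binary data.tar.gz     # a .deb is essentially an ar archive
# --- the actual unpack step, as done on the NAS ---
ar x demo.deb data.tar.gz                    # pull the payload out of the .deb
tar -xzf data.tar.gz -C fakeopt              # on the NAS: -C /opt/ (as if /opt were /)
ls fakeopt/usr/lib                           # prints libdemo.so
```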

Note: do not use /tmp/, as the space available there is very small...
Note 2: be careful to choose packages compiled for your architecture (armel in my case)! The command above installs your software as if /opt/ were the root directory (you will end up with /opt/usr/lib directories and the like). As a result, you might need to add entries to your PATH or edit /etc/ld.so.conf.
Be careful not to make a mess of your system, or you will soon end up with the same library several times (in different versions) at different locations... You will then need to sort this out manually (ln, rm...).

6. Compile from source
For example, a very classic install for libnfnetlink:

Don't forget the --prefix=/opt to specify where you want to install your package.

When compiling from source, you run into the usual compilation problems you can get with Linux (libraries/includes not found, etc.). It gets even more annoying because the defaults don't work well anymore (the package manager is not where it is expected, etc.), and sometimes you end up having to specify the compile flags yourself.
For example, I recently had to edit the configure script of a source tarball to add the right flags by hand.

-dev packages can be difficult to find with ipkg; this is where you often need to get a .deb package or compile the library from source just to get the header files right...

7. Conclusion
As you noticed, it is just a matter of using the tools (and using them right). It just gets a little more complicated because the usual package manager does not work out of the box, the procedure is unsupported by the hardware vendor, and precompiled packages can be difficult to find for armel...

Thursday, January 12, 2012

This tutorial uses an unsupported feature of the IOMEGA Storcenter ix4-200d. It worked for me but use it at your own risk! I understand it works (but is still unsupported by IOMEGA) on the ix2 Storcenter as well.
Tutorial tested on IOMEGA Storcenter ix4-200d firmware 3.1.14.995
This post is extracted from a previous post (how to install crashplan on Iomega storcenter). I am planning more tutorials on the Iomega storcenter ix4 and wanted to centralize this part in case it needs to evolve with future firmwares...