2009/08/13

In this article, I would like to share the workflow I have developed to handle my photos using Linux and free software exclusively.

Preparing

I do not follow any specific routine; I do not even format the cards before I start shooting (except when I go on a trip of several days). Probably the only aspect worth mentioning is that I always shoot RAW, never JPEG.

Unloading

I use Rapid Photo Downloader to unload the photos: when I plug the card into the reader (it is faster to use a card reader than to connect the camera by USB), the program does all the work. Rapid Photo Downloader downloads the pictures to a folder inside "/home/eperez/Pictures" named after the date of the download, something like "20090727". It also stores a back-up copy on an external drive, and deletes the files from the card (once they have been downloaded and copied).

I use these folders as workspaces; this way, all the files belonging to a "session" live inside the same folder, which is very convenient when I need to locate pending work quickly and see how long it has been waiting for me. Besides, having this back-up copy eases the task of discarding the bad photographs, as I do not have to think twice before deleting a file.

Browsing

Sometimes I may need to export the photos quickly, perhaps because I want to show them to somebody or send them by email. In this case, I just extract the embedded JPEG image that each RAW file contains. I have the following script (associated with Nautilus's context menu), which creates a folder named EXP and exports a reduced version of each RAW file there:
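A minimal sketch of such a Nautilus script, assuming ufraw-batch is available (the article's own listing may differ; the ufraw-batch flags are the sketch's assumption):

```shell
#!/bin/sh
# Sketch of a Nautilus script: extract the embedded JPEG preview
# of each selected RAW file into a sub-folder named EXP.
# (ufraw-batch is an assumption; the original script may use
# a different converter.)
mkdir -p EXP
for f in "$@"; do
    # --embedded-image pulls out the JPEG the camera stored inside the RAW
    ufraw-batch --embedded-image --out-path=EXP "$f"
done
```

Nautilus passes the selected files as arguments, so the loop runs once per selected RAW file.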

Sorting

I tend to experiment and play a lot, and I frequently get home with several panos and HDRs on the memory card. I prefer to keep each batch of photographs in a different folder, to handle them separately. I have another script (also associated with Nautilus's context menu) that creates a new sub-folder (named "000", "001", ...) and moves the selected files there:
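A sketch of such a sorting script (the numbering logic is an assumption; the article's own listing may differ):

```shell
#!/bin/sh
# Sketch of a Nautilus sorting script: create the next free
# sub-folder named 000, 001, 002, ... and move the selected
# files (passed as arguments) into it.
n=0
while [ -d "$(printf '%03d' "$n")" ]; do
    n=$((n + 1))
done
dir=$(printf '%03d' "$n")
mkdir "$dir"
if [ "$#" -gt 0 ]; then
    mv -- "$@" "$dir/"
fi
```

Because the script scans for the first number that is not yet taken, running it repeatedly on new selections produces "000", "001", "002", and so on.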

Selecting

Now comes the selection of the photographs I will keep: I delete the duplicates and all those that are of no use to me (out of focus, motion-blurred, ...). I use Geeqie for this task, as I can quickly browse the pictures and delete the bad ones, or compare two similar photographs and keep the better one (very useful when I shoot bursts, for example).

Processing

Once I have a clean folder, it is time for the processing. I have tried several packages, and the one that best fits my needs is RawTherapee; the final result is really good, and it is very fast when dealing with lots of photographs, because you can copy and paste parameters from one file onto another. With my old computer I first edited all the files and then made a batch export; but now I have a faster computer, and I can edit and export at the same time. Either way, the outcome is a folder with all the files in JPEG format.

Before going on to the next step, I usually do a last review of the files: using Eye of GNOME, I open the folder as a slideshow and watch each file full-screen, basically trying to spot serious processing errors.

Retouching

Some photographs deserve more attention, perhaps because I need to correct a small flaw, or because I want to play with them a little more. I do not export those files to JPEG but to lossless TIFF. For the retouching I mostly use GIMP, with some plug-ins and brushes I have collected.

Panoramas

Next come the panoramas. I use Hugin to do the stitching; but first I need to convert the RAW files to a format the program can handle, like 16-bit TIFF. In this case I just need a direct conversion, without any adjustments, because I will make those after the stitching. The following script does all the work; it creates a new folder named EXP and performs the export there, using UFRaw:
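A sketch of such a conversion script, assuming ufraw-batch (the output flags are the sketch's assumption; check your version's man page, as older releases spell the output type differently):

```shell
#!/bin/sh
# Sketch: convert each selected RAW file to 16-bit TIFF, with no
# adjustments, into a sub-folder named EXP.
# (The ufraw-batch flags are an assumption of this sketch.)
mkdir -p EXP
for f in "$@"; do
    ufraw-batch --out-type=tiff --out-depth=16 --out-path=EXP "$f"
done
```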

Those TIFF files can now be opened with Hugin; I do the stitching and export the result as a new 16-bit TIFF file. I then open that file with RawTherapee, make the adjustments to color, curves, sharpness and so on, and export the result as a final JPEG file.

HDRs

I explained my workflow with HDRs in "HDR and Linux": basically, I just need to execute a script (you can read about it in "Script to make HDRs with Linux"), and then I can make the adjustments with GIMP using layers; finally, I export the result as a JPEG file.

HDR Panoramas

Sometimes I make HDR panoramas, taking several exposures at each camera position. With these, I first make an independent panorama for each exposure, and later produce the HDR from those panoramas. This procedure is explained in "HDR night panorama from Barcelona".

Storing

Once I have processed all the photos from an event, I store them using this scheme:

Inside my "Pictures" folder I have a folder for each year, and inside that, another folder for each event; I use a sequential number and the name of the event, like "/home/eperez/Pictures/2009/19-Cerdanya".

I store all the final JPEG files there, along with a folder holding the RAW originals and the PP2 side-cars used by RawTherapee, and another folder with the XCF files from GIMP (for the retouched photos); I tend to discard all the intermediate files used for panoramas and HDRs, because they take up a lot of space and are relatively easy to recreate.

The JPEG files get renamed (using pyRenamer) to "nnn_mmm.jpg", where "nnn" is a sequential number, different for each event, and "mmm" is the number of the file within the event; RAWs and their associated PP2s keep their original filenames, while XCFs get the same name as their corresponding RAW.
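The same renaming could also be done in the shell; here is a hypothetical stand-in for the pyRenamer step (the event number "019" and the IMG_* filenames are invented for the demo):

```shell
#!/bin/sh
# Demo of the nnn_mmm.jpg naming scheme: two placeholder files
# stand in for the exported JPEGs of a made-up event 019.
touch IMG_0001.jpg IMG_0002.jpg
event=019
m=1
for f in IMG_*.jpg; do
    # the glob expands in sorted order, so mmm follows the original sequence
    mv -- "$f" "$(printf '%s_%03d.jpg' "$event" "$m")"
    m=$((m + 1))
done
```

After running this, the folder contains 019_001.jpg and 019_002.jpg.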

Finally, to be able to find the folder containing any given file, I keep another folder named "/home/eperez/Pictures/0000", where I make symbolic links to each event folder, using the global number of the event; for example, "/home/eperez/Pictures/0000/090" points to "/home/eperez/Pictures/2009/19-Cerdanya".
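In shell terms, adding a new event to this global index is just one symbolic link (the paths are taken from the example above):

```shell
#!/bin/sh
# Create the global-index entry for event 090: a symbolic link
# inside Pictures/0000 pointing at the event's folder.
mkdir -p "$HOME/Pictures/0000"
ln -s "$HOME/Pictures/2009/19-Cerdanya" "$HOME/Pictures/0000/090"
```

Since the link stores only the path, the event folder can be created before or after the link without breaking anything.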

Cataloguing

Unfortunately, I have still not found a Linux-native package that meets my needs for cataloguing the photographs; I am currently using Picasa, just to tag the files and make searches easier.

6 comments

Anonymous said...

Hi Edu,

I'm interested in controlling a camera through a web interface. I run CHDK on a Canon G11. There are some basic control functions in gphoto2, which I imagine could be used to build a web interface, but I see little documentation about it.

If you search for "gphoto2" and "tethered", you will find several documents describing how to control a camera from your computer, using the command line. From there, you 'just' have to make a web interface to those commands.

I've just opened that post to look for more interesting techniques (after reading the 'aging' guide), and it's awesome: so much information to explore and possibly use. Thank you for that, and looking forward to reading more!

Thank you for the nice article: it was interesting to learn about your workflow; this kind of article is quite rare on the web, especially for Linux. For me it was a revelation that I can sort images by use of _. Before, I used something like YYYY/YYYY-mm/, which is also not bad, but your idea is much better. I will take it into account.

As for cataloguing: you can have a look at jBrout (http://jbrout.manatlan.com/; http://code.google.com/p/jbrout), which is very quick and deals with what is inside the photograph: EXIF, XMP, IPTC. It can browse RAW files in fullscreen very quickly, as it uses the JPEG thumbnail inside them. It also has nice features for tagging photos, but this is a really slow procedure if we are speaking about RAW files (it uses the jhead CLI to put tags inside the photo, and tagging one photo takes around 10 seconds on a Core 2 Duo), so for tagging and other EXIF operations I usually use exiftool; it is around 20 times faster! exiftool is also very useful for renaming/moving images into folders based on EXIF data.