Philip Hodgetts’ unique blend of business and production knowledge gives him insight into the current state of the industry, and a remarkably accurate look forward. Here he shares his thinking, and points to articles of interest from other sites, with context as to why they're interesting.

Archive for June 1st, 2009

Right now I’m in the middle of updating and adding to my digital photo library by scanning in old photos, negatives and (eventually) slides. Of course, the photos aren’t in albums (too heavy to ship from Australia to the US) and there are no extensive notes on any of them, because “I’ll always remember these people and places!” Except I don’t remember a lot of the people, and putting particular events in order is tricky when they’re more than “a few” years old, or when they’re from before my time entirely; a lot of those have been scanned in for my mother’s blog/journal. As a reminder, here are the four types of metadata from my original post:

Source Metadata is stored in the file from the outset by the camera or capture software, such as in EXIF format. It is usually immutable.

Added Metadata is beyond the scope of the camera or capture software and has to come from a human. This is generally what we think about when we add log notes – people, places, etc.

Derived Metadata is calculated using a non-human external information source and includes location from GPS, facial recognition, or automatic transcription.

Inferred Metadata is metadata that can be assumed from other metadata without an external information source. It may be used to help obtain Added metadata.
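To make the four types concrete, here is a minimal sketch of how they might sit side by side in a data model. This is purely illustrative: the field names (capture_date, decade, and so on) are my own assumptions, not any real schema, and the one inference shown (date implies decade) is just an example of deducing metadata without an external source.

```python
from dataclasses import dataclass, field

@dataclass
class PhotoMetadata:
    # Source: written by the camera or scanner at capture time (EXIF-style)
    source: dict = field(default_factory=dict)
    # Added: supplied by a human (log notes: people, places, events)
    added: dict = field(default_factory=dict)
    # Derived: calculated via an external source (GPS lookup, face recognition, transcription)
    derived: dict = field(default_factory=dict)
    # Inferred: assumed from other metadata, with no external source needed
    inferred: dict = field(default_factory=dict)

def infer(meta: PhotoMetadata) -> PhotoMetadata:
    """Fill in Inferred metadata from what is already known."""
    # Example inference: an ISO-format capture date implies a decade,
    # which can narrow down who might plausibly appear in the shot.
    date = meta.source.get("capture_date")  # e.g. "1985-03-12"
    if date:
        meta.inferred["decade"] = date[:3] + "0s"
    return meta

photo = PhotoMetadata(source={"capture_date": "1985-03-12"})
infer(photo)
print(photo.inferred["decade"])  # 1980s
```

The inferred decade could then prompt a human for the Added metadata ("who was around in the 1980s?"), which is exactly the assistive role described above.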

See the original post for a clearer distinction between the four types of metadata. Last night I realized there is at least one additional form of metadata, which I’ll call Analytical Metadata. The other choice was Visually Obvious Invisible Metadata, but I thought that was confusing!

Analytical metadata is information encoded in the picture about the picture, probably mostly relating to people, places and context. The most obvious example is a series of photos without any event information. By analyzing who was wearing what clothes and correlating between shots, the images from an event can be grouped together even without an overall group shot. Or perhaps only one shot clearly identifies the location, but it can be cross-correlated with the other pictures in the group by clothing.

Similarly, a painting, picture, decoration or architectural element that appears in more than one shot can be used to identify the location for all the shots at that event. I’ve even used hair styles as a general time-period indicator, but that’s not a very fine-grained tool! Heck, even the presence or absence of someone in a picture can pin down a time period: that partner is in the picture, so it must be between 1982 and 1987.
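This cross-correlation is really just grouping photos into connected components: two photos belong to the same event if they share a visual cue, directly or through a chain of intermediaries. A toy sketch of the idea, assuming the cues (clothing, paintings, decor) have already been tagged by hand; the photo names and cue labels are invented for illustration:

```python
from collections import defaultdict

def group_by_shared_cues(photos: dict) -> list:
    """Group photos into events: two photos are in the same event
    if they share a cue (clothing, painting, decor), directly or transitively."""
    # Link together all photos that share any cue.
    by_cue = defaultdict(set)
    for name, cues in photos.items():
        for cue in cues:
            by_cue[cue].add(name)
    adjacent = defaultdict(set)
    for members in by_cue.values():
        for photo in members:
            adjacent[photo] |= members - {photo}
    # Walk the connected components.
    groups, seen = [], set()
    for name in photos:
        if name in seen:
            continue
        stack, component = [name], set()
        while stack:
            photo = stack.pop()
            if photo in component:
                continue
            component.add(photo)
            stack.extend(adjacent[photo] - component)
        seen |= component
        groups.append(component)
    return groups

photos = {
    "img1": {"red-jumper", "beach"},
    "img2": {"red-jumper"},        # same event as img1, via clothing
    "img3": {"beach", "kite"},     # linked to img1 via the location cue
    "img4": {"christmas-tree"},    # a different event entirely
}
print(group_by_shared_cues(photos))
```

Note how img2 and img3 never share a cue with each other, yet both end up grouped with img1: the "only one shot clearly identifies the location" case above is exactly this kind of transitive link.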

I also discovered two more sources of metadata. Another source of Source Metadata is found on negatives, which are numbered, giving a clear indication of time sequence. (Of course digital cameras have this and more.) The other important source of metadata for this exercise has been a form of Added Metadata: notes on the back of the image! Fortunately, for long periods Kodak Australia printed the month and year of processing on the back, which has been most helpful for putting my lifetime of photos into some sort of order. At the rate I’m going, it will take me the last third of my life to organize the images from the first two-thirds.
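Combined, those two discoveries give a usable sort key: the processing stamp orders the rolls coarsely, and the frame number from the negative orders shots within each roll. A sketch of that two-level sort, with invented file names and dates standing in for real scans:

```python
# Order scanned frames: the Kodak processing stamp (year, month) sorts the
# rolls, and the negative's frame number sorts shots within each roll.
scans = [
    {"file": "scan_03.jpg", "processed": (1984, 6), "frame": 12},
    {"file": "scan_01.jpg", "processed": (1984, 6), "frame": 3},
    {"file": "scan_02.jpg", "processed": (1982, 11), "frame": 20},
]
scans.sort(key=lambda s: (s["processed"], s["frame"]))
print([s["file"] for s in scans])  # ['scan_02.jpg', 'scan_01.jpg', 'scan_03.jpg']
```

Tuple comparison does the work here: Python compares the (year, month) pair first, falling back to the frame number only when two scans came from the same processing batch.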

Another discovery: facial recognition in iPhoto ’09 is nowhere near as good as it seems in the demonstration. That’s not surprising, because most facial recognition technology is still in its infancy. I also think it prefers the sharpness of digital images to scans of prints, but even with digital sources it seems to attempt a guess at about one in five faces, and to be accurate about 30% of the time. It will get better, and it’s worth naming the identified faces and adding ones that were missed to gain the ability to sort by person. It’s also worthwhile going through and deleting the false positives – faces “recognized” in the dots of newsprint or the patterns in wallpaper, etc. – so they don’t show up when it’s attempting to match faces.
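Taking my rough figures at face value (a guess attempted on about one face in five, correct about 30% of the time), only a small fraction of faces end up correctly auto-named, which is why the manual naming pass matters so much:

```python
attempt_rate = 1 / 5   # guesses at roughly one in five faces (my rough estimate)
accuracy = 0.30        # of those guesses, roughly 30% are right (my rough estimate)
auto_named = attempt_rate * accuracy
print(f"{auto_named:.0%} of faces correctly auto-named")  # 6% of faces correctly auto-named
```

Both inputs are anecdotal observations from my own library, not published benchmarks, so treat the 6% as an order-of-magnitude figure.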