Talking about greatness to society and a little bit of skin. At university one of my projects was a system that used CBIR to try to diagnose skin cancer. The doctor would take an image of the suspect area, which would then be compared against a database of known cancers. The system would then return a suggested likelihood of the area being cancerous. It also let the doctor build a history of images, allowing easy comparison over time.
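For anyone curious, the core of a system like that can be surprisingly small. Here's a minimal sketch, with made-up two-number feature vectors standing in for whatever colour/texture features the real system extracted: the suggested likelihood is just the fraction of the k most similar reference images that were known to be malignant.

```python
import math

def nearest_likelihood(query, database, k=3):
    """Suggest a likelihood that `query` is malignant by looking at the
    k most similar reference images. Features here are hypothetical
    vectors (e.g. colour/texture statistics of the lesion)."""
    ranked = sorted(database, key=lambda rec: math.dist(query, rec["features"]))
    return sum(rec["malignant"] for rec in ranked[:k]) / k

# Toy reference database: feature vector plus known diagnosis
db = [
    {"features": (0.90, 0.80), "malignant": True},
    {"features": (0.85, 0.75), "malignant": True},
    {"features": (0.10, 0.20), "malignant": False},
    {"features": (0.15, 0.10), "malignant": False},
]

# A query close to the malignant examples scores high
print(nearest_likelihood((0.88, 0.79), db, k=3))
```

Real systems obviously need far better features and calibration, but the "compare against a database, report a score" loop is the same.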

I always felt good about working on projects like this; it gives you a warm fuzzy feeling.

In fact, there is a bar [sunsetbeachbar.com] located right in the flight path of the runway. I just met a guy who came back from there, and said it's quite interesting to have planes landing so close to you.

They still fly that approach every day. Here's a picture [airliners.net] taken this year from the same beach on St Maarten (SXM). That airport is famous for its low approach. That's a nude beach, by the way, and there are many photos to prove it if you dig around airliners.net. From Wikipedia [objectsspace.com]:

"The island is served by many major airlines that bring in large jets, including Boeing 747s, carrying tourists from across the world on a daily basis. This fuels the island's largest revenue source, tourism. The airport is famous

1. Don't be ridiculous. When you take a picture of a very fast moving object, you get a blurred picture. Unless the plane was hovering in mid-air, that photograph is impossible.

2. Have you ever been under a plane? It's very, very loud. A 'holy grail' where you're deafened every 10 minutes? I don't think so. Also, plane-spotters are worse than train-spotters. They all look the same anyway.

3. If I wanted to cause a terrorist atrocity, that's the beach I'd go to. A simple rocket-launcher, in combination with

1. Of course, you're right, it's simply not possible to take a clear picture of a fast-moving object like, say, a race car [img256.exs.cx], an airplane [af.mil], or a bullet [rice.edu].

2. When landing, their engines will be throttled back, and therefore much quieter than usual. Also, ever heard of earplugs?

3. First you have to get the terrorists and their rocket launchers TO the beach without being noticed. Might be kind of hard considering it's a popular tourist spot on a small island in the middle of the Caribbean.

1. A camera works by opening a shutter and letting light through the lens onto some film. Whilst the shutter is open, the plane is moving,

2. Have you ever been near an airport? It's not fucking quiet. Wear earplugs? Must be great having to wear earplugs 24/7. Do you realise those things hurt your ears?

3. Yeah, because terrorists are all dark-skinned and wear turbans and have names like Al-Sharaqa Jazeirain. You can see them a mile off. You can probably recognise them from a distance because they're the ones firing the mac

Yes it is. The 747's gear looks like that so that the rear tires hit before the front tires. If you look, the gear "post" is perpendicular to the plane, while the wheels sit at an angle and straighten out once the plane is actually on the ground.

I was just thinking about this the other day. I think content-based image search is one of the Next Big Things. Cameras are so ubiquitous now (for better or worse), but having to rely on metadata to give meaning to images requires lots of effort up front.

It will be interesting if we ever get to a stage where we can just search for a random object (or person) in a database of photos. Then you could take pictures of everything with an always-on camera and if you need to find where you put your car keys, just do a search.

Dunno about that. Here's what I get after clicking on a picture of an A-10 Warthog: a Tornado, a 767, a 747, a Fokker F-7 turboprop, a Dassault Falcon business jet, a Luftwaffe A310, a Harrier, an F-18 Hornet, another Tornado, a Lockheed P-3 Orion sub hunter, a Sikorsky Super Stallion helicopter, a Concorde... and so forth. No other A-10s. Hard to think of a more diverse crop of aircraft.

Most of these aircraft are airborne but a couple are on the ground. If I cli

Forget slashdotting Airliners.net, how long before the TSA shuts down that website? The trainspotting hobby has already died off following terrorism fears, I can't help but think that other enthusiast sites like Airliners.net will be next.

Now, I've never seen Trainspotting, but I find that this happens with a lot of movies/topics: trying to filter through reviews, DVD sales, video games, etc. when trying to find info on the subject. (I noticed this first when "The Mothman Prophecies" came out and I wanted to know more about these supposed mothmen.)

The TSA would probably have some difficulty shutting down Airliners.net, seeing as it's in Sweden. Furthermore, airliner photography is perfectly legal in the US, and even TSA reps have said so. Usually the only people with a problem with it are the rent-a-cops.

Some Applications of Our Research
1. Airliners.net
A site with almost 1,000,000 aviation images.

Wow !!! I tested their Sample search [airliners.net] and all the results were aeroplane photos !!! Ok, ok the site only has airplanes but still..:)

On a more serious note, the algorithms seem to look for similarity in the colors and lighting rather than the subjects (for example, it shows the interior of a cabin in the photos similar to a whole plane in the sky). To really see its effectiveness we need to test it in

The objective of this work is to recognize all the frontal faces of a character in the closed world of a movie or situation comedy, given a small number of query faces. This is challenging because faces in a feature-length film are relatively uncontrolled with a wide variability of scale, pose, illumination, and expressions, and also may be partially occluded. We develop a recognition method based on a cascade of processing steps that normalize for the effects of the changing imaging environment. In particular there are three areas of novelty: (i) we suppress the background surrounding the face, enabling the maximum area of the face to be retained for recognition rather than a subset; (ii) we include a pose refinement step to optimize the registration between the test image and face exemplar; and (iii) we use robust distance to a sub-space to allow for partial occlusion and expression change. The method is applied and evaluated on several feature length films. It is demonstrated that high recall rates (over 92%) can be achieved whilst maintaining good precision (over 93%).

Why bother making an algorithm that can recognise which images are porn and which are not when you can just set up a web site where people will do it for free? It reminds me of those "enter the characters in this image" tests that places like Yahoo use to ensure you can't sign up for a million email accounts a day. They're so easy to get around because all you have to do is present the image to a man who wants porn, and he'll happily provide his character recognition skills without charge.

I just did a quick search based on this [designer.am] image of a Qantas logo (that's the main Australian airline, in case you're wondering...) It's red, with a white kangaroo in the middle. My theoretical aim was to find photos of Qantas planes.

What I got was an awful lot of red planes - some of which were actually Qantas planes, but I think more by coincidence (i.e., they're red) than design. Many images had nothing to do with Qantas, or even a red plane - they simply had a lot of red in the image.

This is impressive in some ways, but in others it seems like it's simply looking for similar patches of colour. I haven't done enough testing to see what happens if, say, I gave it a half-red, half-green image.

Interesting, but not ready for public consumption just yet. A bit like A.L.I.C.E., the artificial intelligence system, actually - neat, but not practical. Yet!

I don't believe it claims to find photos that contain the image you reference. It's more like the system is searching for images that are similar overall to the one you give.

So the search system went looking for large red triangles on a white background. Obviously, there were none, so it settled for the next best thing - white non-triangular planes on a light blue background or something.

Programs like GQview (unix/linux) offer functions to search for similar images, mainly used to find duplicates.

It's not quite "put in an image and find me all the similar ones" but the underlying technology is the same, usually creating some kind of "signature" of each image and then comparing the signatures to find others visually similar.
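A toy version of that signature idea, assuming flat greyscale pixel lists rather than real image files: shrink each image down to a tiny grid of average intensities and compare the grids. Real tools use fancier signatures, but the shape of the approach is the same.

```python
def signature(pixels, w, h, grid=2):
    """Reduce an image (flat list of grey values, row-major) to a tiny
    grid of average intensities -- a crude visual 'signature'."""
    sig = []
    cw, ch = w // grid, h // grid
    for gy in range(grid):
        for gx in range(grid):
            cell = [pixels[y * w + x]
                    for y in range(gy * ch, (gy + 1) * ch)
                    for x in range(gx * cw, (gx + 1) * cw)]
            sig.append(sum(cell) / len(cell))
    return sig

def distance(a, b):
    """Smaller distance between signatures means more visually similar."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Two 4x4 images: a uniform bright one and a slightly noisy copy of it
img_a = [200] * 16
img_b = [198, 202] * 8

# The per-cell averages cancel out the noise, so the distance is tiny
print(distance(signature(img_a, 4, 4), signature(img_b, 4, 4)))
```

For finding duplicates, you compute every image's signature once, then compare signatures instead of full images, which is vastly cheaper.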

The Mona Lisa (famous and out of copyright) is often plagiarized, in whole or in part, as part of commercial or satiric artistic works. These types of visual database engines have frequently been explained to me as being able to take the Mona Lisa as input and return a list of images that used the entirety of the image or just a part (such as the highly-praised subtle smile).

The big problem to me is specifying input. I know the "look" of the Mona Lisa's smile, but even with the best pen input methods I'd never be able

I've not finished it, but I started a book called "Mind at Light Speed" by David Nolte a while back. He describes three stages of machines of light, and I can't do the book justice here.

However, he put forth the concept of replacing the bit as the common unit of data with actual images - best described as holographic images of light manipulated by light. A picture really _would_ be worth a thousand words in such a system!

Yeah, in my early undergrad days I took a 200-level course in discrete mathematics for a gen-ed requirement. Wang was the professor, and to this day I haven't had a course that was as difficult or completely freaking insane as the one he gave.
Glad he is doing more research and less teaching.

Pattern Recognition [williamgibsonbooks.com] is a novel by William Gibson, basically set in the present day or very near future. Image-based search plays a central role in the plot. It's a very good read.

I was looking at a picture of a plane on that web site and there was a link that said "Click for similar images". And what do you know? It brought up more pictures of planes. This is amazing stuff. How did it understand that I was looking at a picture of a plane?

Whatever algorithm they're using, it seems to be sensitive to the horizon line, colour, shading, orientation of the aircraft, etc. It seems to be operating at the level of a pigeon (pigeons have been shown to discriminate photos depicting trees, water, and particular people - as well as art by Picasso and Monet; see http://www.pigeon.psy.tufts.edu/avc/huber [tufts.edu] for other examples). It will be some time before algorithms can match on the basis of model numbers and such. It took humans quite a while to evolve a cor

The GIFT (the GNU Image-Finding Tool) is a Content Based Image Retrieval System (CBIRS). It enables you to do Query By Example on images, giving you the opportunity to improve query results by relevance feedback. For processing your queries the program relies entirely on the content of the images, freeing you from the need to annotate all images before querying the collection.

GIFT [gnu.org]
It worked pretty well for me in the demos they linked too. I have been waiting for this type of application to gain momentum.
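The relevance-feedback part is the interesting bit. GIFT's actual weighting scheme differs in detail, but the general idea can be illustrated with the classic Rocchio update: after the user marks results as relevant or not, nudge the query's feature vector toward the relevant ones and away from the rest, then search again.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """One round of relevance feedback: move the query vector toward
    images the user marked relevant and away from those marked not
    relevant (Rocchio's classic formula; GIFT's own scheme differs)."""
    def mean(vecs):
        if not vecs:
            return [0.0] * len(query)
        return [sum(c) / len(vecs) for c in zip(*vecs)]
    r, nr = mean(relevant), mean(nonrelevant)
    return [alpha * q + beta * ri - gamma * ni
            for q, ri, ni in zip(query, r, nr)]

q = [0.5, 0.5]
# User liked an image with feature vector [1, 0], disliked one with [0, 1]
updated = rocchio(q, relevant=[[1.0, 0.0]], nonrelevant=[[0.0, 1.0]])
print(updated)  # [1.25, 0.25] -- pulled toward [1, 0], pushed away from [0, 1]
```

A couple of feedback rounds like this is often enough to steer a colour-based search toward what the user actually meant.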

A.net is very stringent on the photos they accept. You can submit hundreds of photos, and get rejected for such things as 'badmotive' (a runway sign blocking a single tire), very mildly soft focus, and lots of other pretty anal things (IMHO). So while the image count they are dealing with is high, the obvious resulting similarity among images will result in a high number of matches.

Now, do this for something like Google Images or PBase or collections spanning infinite numbers of subjects and image sizes,

It seems that a favorite use of the image similarity search over there at airliners.net is for the spotters to run pix on airline and flightsim sites through the search, to see who on anet has been infringed upon copyright-wise.

Look up Bombardier in the forums on airliners.net, they have frequently asked a photog for permission to use their photos (for pay), then later say they elected not to use them (and therefore no payment to photog). But then they use the photos anyways without payment or acknowledge

They are doing it based upon the shades of color in the image. So if you query for an image of an aircraft in flight with a lot of white clouds behind it, you get more of the same, but you also get aircraft parked on snow-covered ground.

I've tried two different images of airplanes; one of a bright red flying car on bright green grass and one of SpaceShip One against a deep blue sky. Both times, the results looked surprisingly like my query images in color composition only. Red planes on grass and white planes against a blue sky. Inauspicious start.

Next experiment: I took a picture of a highly distinctive plane, a harrier, climbing at a steep angle and viewed in profile. I got, in return, a list of passenger jets, and even a helicopter. Hardly surprisingly, all of the result pictures had the same bluish white sky as my original image. That was literally the only similarity.

According to the introduction on the search page the heuristics used compares colors, contrast and shapes in the images themselves. I saw no correlation whatsoever between shapes, and any correlation in contrast seems to be to be the result of the search engine simply looking for images that contain the same colors in a similar ratio to the original. In short, nothing to see here, move along.
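That "same colors in a similar ratio" behaviour is easy to reproduce. A sketch with made-up greyscale pixel lists: a normalised colour histogram plus histogram intersection rates a plane and a helicopter against the same pale sky as near-identical, while a green field scores as unrelated, regardless of subject.

```python
from collections import Counter

def colour_ratio(pixels, bins=4, top=256):
    """Normalised histogram of coarsely-quantised intensities:
    what fraction of the image falls in each colour band."""
    step = top // bins
    counts = Counter(min(p // step, bins - 1) for p in pixels)
    n = len(pixels)
    return [counts.get(b, 0) / n for b in range(bins)]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical colour ratios."""
    return sum(min(a, b) for a, b in zip(h1, h2))

sky_plane = [230] * 90 + [40] * 10    # mostly pale sky, small dark plane
sky_chopper = [235] * 88 + [30] * 12  # different subject, same palette
green_field = [100] * 100             # different palette entirely

print(intersection(colour_ratio(sky_plane), colour_ratio(sky_chopper)))  # high
print(intersection(colour_ratio(sky_plane), colour_ratio(green_field)))  # zero
```

Nothing about the subject's shape enters the comparison at all, which is exactly the failure mode the harrier search shows.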

On the other hand, one of the projects listed under the Penn State University link looks fairly fascinating. The Riemann a-LIP project [psu.edu] (automatic linguistic indexing of pictures) doesn't allow user input of images, unfortunately, but it does show some fairly fascinating attempts at verbally qualifying image data. For example, it describes a blue and orange Mandelbrot as "pattern agate shimer abstract scene", and a sunset over a lake as "Berlin Devon Namibia landscape lake scene". Okay, it may still need some work, but it sure beats the hell out of "find the same color airplane engine".

There is a common computer vision story (actually it was a neural network, but it still applies).

Actually, this story (the veracity of which I do not know) predates our modern concept of "neural networks" - that is, multi-layer networks of nodes (typically three layers - input, output, and an intermediary layer), in which the nodes simulate neurons via weighted thresholds and other mechanisms for "firing" an output based on inputs aggregated over time and/or frequency - coupled with back-propagation "learning"...
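For the curious, the "weighted threshold" unit described above is easy to write down. A minimal sketch: a single step-threshold neuron trained with the classic perceptron rule on AND. Full back-propagation additionally needs multiple layers and differentiable activations, so this is just the building block.

```python
def step(x):
    """Threshold activation: the unit 'fires' when the weighted sum
    of its inputs crosses zero (the threshold is folded into b)."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=1):
    """Perceptron learning rule: nudge each weight in proportion to
    the error whenever the unit misfires on a training example."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            err = target - step(w[0] * x1 + w[1] * x2 + b)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

A single unit like this can only learn linearly separable functions (AND yes, XOR no), which is precisely why the multi-layer version with back-propagation mattered.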

Of course, given the usual course of things, it will instead be deployed at JFK's formerly-TWA terminal, assigned facial recognition tasks, and immediately declare everyone to be among the 10-most-wanted terrorists. I can't wait.

About a year or so ago, I and three other Masters students worked on a similar project at the University of Southampton.

I've not RTFA (not had the time), but our approach was to split the images into segments (based on colour and texture) which were assumed to be objects. The segments would then be analyzed for various feature vectors, such as shape, texture, colour etc. These vectors would then be added into a database of numbers, and finally the segments grouped, giving a collection of classified sections which (hopefully) represent similar objects.

From related metadata such as keywords, you could then hope to build up an idea of what keyword matches which section. You could also come up with a relevance between two images, and thus search for similar images.
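A rough sketch of the grouping step (not our actual code; the feature vectors here are just made-up mean-colour/size tuples, where the real system also used texture and shape): greedily assign each segment to the first existing group whose representative is close enough, otherwise start a new group.

```python
def group_segments(features, threshold=30.0):
    """Greedy grouping of segment feature vectors: each segment joins
    the first group whose representative is within `threshold` (L1
    distance), so similar-looking segments end up classified together."""
    groups = []  # list of (representative, members)
    for f in features:
        for rep, members in groups:
            if sum(abs(a - b) for a, b in zip(rep, f)) < threshold:
                members.append(f)
                break
        else:
            groups.append((f, [f]))
    return groups

# Hypothetical segment features: (mean R, mean G, mean B, area fraction x 100)
segments = [
    (200, 200, 210, 60),  # pale sky segment, image 1
    (205, 198, 215, 55),  # pale sky segment, image 2
    (30, 80, 30, 20),     # dark green field segment, image 2
]
print(len(group_segments(segments)))  # 2: a sky-like group and a field-like group
```

With the metadata keywords attached to whole images, the hope is that a keyword shared by many images containing the same group is really naming that group's object.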

We didn't have enough time to make it bulletproof by any means, but our limited results were very promising.

Sorry I can't find the paper, but we've got some screenshots of the application here [soton.ac.uk] and here [soton.ac.uk] (you can see false colouring applied to the original image to display the segments)

Image search will kinda work for airplanes in this database, as there is a very limited set of airplane model numbers, which are going to be attached to each photo.

But if the database didn't have these text clues, the image search is unlikely to see the similarity between a 747 in the air, as seen from the ground, and a head-on view of a 747, or one at the gate, or one in a hangar, or one in twilight, or one of a different color.

I was trying this out a bit, and have to admit that it's cool that something like this exists at all.

However, I think it would be better if it were able to realize what the 'background' was and filter it out. (Though I couldn't begin to guess how you'd do this.)

For example, I searched for this image [airliners.net]. Many of the results [airliners.net] are of something completely different, such as a white jet. Which is nothing like a camo helicopter. But the sky and the ground are pretty similar, and I think that's how it's matching.

In general image understanding is equivalent to general AI. We won't get a CBIR system that works well before we get an AI that works and vice versa, because people expect to be able to match the *content* of the image they submit as template and not the general appearance of the image. The problem is then too unspecified.

Even in the restricted context of aeroplanes this is not a trivial problem. Someone in the list of replies submitted an image of a warthog (A-10) and got nonsensical results. Somehow the

Because it still has problems - you'll note that the pictures seem to be compared simply based on color similarity. That's the same thing imgSeek [python-hosting.com] does (a great program for sorting and searching your photos) on photo searches. It works wonderfully if you're searching a very limited picture subset (say, airplanes), but if you search a wide variety of pictures, the results can be quite amusing.

I could be wrong, but I got the impression that imgSeek uses position as well. I have tried running it on a collection of photos from a concert, and it is very good at returning those with the stage in the centre. Then again, the object in the centre can affect the colour of everything on most cameras.
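Position sensitivity is easy to demonstrate with a toy comparison (if I recall correctly, imgSeek's search uses a wavelet-based signature, which would indeed encode coarse position): a plain histogram can't tell two images apart when the same colours are rearranged, while a grid-of-averages signature can.

```python
def grid_sig(pixels, w, grid=2):
    """Average intensity per cell of a coarse grid over a flat,
    row-major greyscale pixel list: a position-aware signature."""
    h = len(pixels) // w
    cw, ch = w // grid, h // grid
    return [sum(pixels[y * w + x]
                for y in range(gy * ch, (gy + 1) * ch)
                for x in range(gx * cw, (gx + 1) * cw)) / (cw * ch)
            for gy in range(grid) for gx in range(grid)]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# 4x4 images: bright band on top vs the same pixels moved to the bottom
top_bright = [255] * 8 + [0] * 8
bottom_bright = [0] * 8 + [255] * 8

# A pure histogram (sorted pixel values) sees them as identical...
print(l1(sorted(top_bright), sorted(bottom_bright)))  # 0
# ...but the positional grid signature separates them.
print(l1(grid_sig(top_bright, 4), grid_sig(bottom_bright, 4)))  # 1020
```

That would explain concert photos with a centred stage clustering together even when the colours alone are ambiguous.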

Google actually did take this technology and try it. The first version of their image search had a "find similar" link next to every image. These tended to work okay at first (they weren't great, but you usually got enough photos back that you could visually scan them and find something of interest that was related to the original image). After a few months, for some reason, the "find similar" links started returning increasingly nonsensical results. After it degenerated to the point of near uselessness, th