Photography All Mixed Up

artificial intelligence

In the continued blurring of borders between the art world and photojournalism, an innovative project has sprung to life at Tate Britain. As newsworthy photographs from around the world roll in, a computer brain mines the gallery's archives looking for similarities. The resulting matches can be beautiful…
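The core of such a project is image similarity matching. A minimal sketch of one common approach, a difference hash compared by Hamming distance, is below; the "images" are tiny hand-made grayscale grids standing in for real files, and the archive names are invented (the Tate's actual system is not public).

```python
# Toy sketch of matching an incoming news photo against an archive
# by perceptual hashing. Images are stood in for by small grayscale
# grids (lists of rows of 0-255 values); a real system would first
# resize actual image files down to a grid like this.

def dhash(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def closest_match(query, archive):
    """Return the archive key whose hash is nearest the query's."""
    qh = dhash(query)
    return min(archive, key=lambda name: hamming(qh, dhash(archive[name])))

# Tiny 3x4 "images": a bright-to-dark gradient, a near-duplicate of it,
# and a reversed gradient that should not match.
news_photo = [[200, 150, 100, 50]] * 3
archive = {
    "gallery_a": [[210, 140, 90, 60]] * 3,   # similar tonal structure
    "gallery_b": [[50, 100, 150, 200]] * 3,  # opposite gradient
}
print(closest_match(news_photo, archive))  # -> gallery_a
```

The appeal of hashing over pixel-by-pixel comparison is that two photographs with the same tonal structure match even when their exact pixel values differ.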

In 2011 Sabato Visconti removed the memory card from his digital camera to inspect his photographs and discovered an unusual glitch among the files: random zeroes had been added to the JPEG data. What should have been simple reproductions of a given scene turned out to be visualisations of his technology dreaming. It was accidental but profound, and this simple glitch led Visconti down a path he now embraces: breaking software to push the boundaries of photographic imagery.
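The kind of corruption described above can be reproduced deliberately. The sketch below inserts stray zero bytes into a JPEG's data while skipping the start of the file, so the markers the decoder needs survive and the image still opens, just wrongly; the file paths and the 512-byte "safe zone" are assumptions for illustration, not Visconti's actual method.

```python
import random

HEADER_SAFE_ZONE = 512  # skip the start of the file so JPEG markers survive

def glitch(data: bytes, n_zeros: int = 5, seed: int = 1) -> bytes:
    """Insert stray zero bytes into the compressed data of an image."""
    rng = random.Random(seed)
    out = bytearray(data)
    for _ in range(n_zeros):
        pos = rng.randrange(HEADER_SAFE_ZONE, len(out))
        out.insert(pos, 0)  # a single stray zero, like the accidental glitch
    return bytes(out)

# Usage (hypothetical paths):
# with open("photo.jpg", "rb") as f:
#     corrupted = glitch(f.read())
# with open("photo_glitched.jpg", "wb") as f:
#     f.write(corrupted)
```

Because JPEG is a compressed stream, a single misplaced byte shifts how everything after it decodes, which is why such small corruptions produce such dramatic visual results.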

Visconti’s latest project concerns the popular social networking app Snapchat, most famous for its ephemeral approach to photographs that ‘self-destruct’. Continue reading →

We’ve no shame in jumping on the Back to the Future Day train, even if it is nearly over. During our research, however, we couldn’t find any other sources reporting on the cameras used in the epic trilogy – and we’re not talking about the JVC camcorder. Those of you who remember the iconic sequel will recall a certain device the Doc is seen with early on. It’s known as the Binocular Card.

There have been plenty of efforts to analyse what the film got right and wrong, and despite every adolescent growing up with this imagination of the future in Continue reading →

NASA has just released another ‘selfie’ image taken by the onboard AI on the Curiosity rover currently exploring the Red Planet. As usual, though, a plethora of social media cretins logged on to denounce the photograph and raise suspicions about its authenticity.

However, one response so far has pretty much managed to silence the critics, and I like it.

If the scientists at NASA are clever enough to send a robot to Mars, I’m pretty sure they can get it to take a selfie without the arm in the image.

Exactly.

If you’re still not absolutely convinced, or would like a brief explanation of how it’s taken, you should watch this short animation.

It’s increasingly apparent that artificial intelligence’s inevitable ascension as the dominant species on our planet (and beyond) will not come, as some have predicted, in an instant, but as a slow, invisible growth. The latest advancement in AI comes in the subdued revelation by Facebook that it now has an algorithm that can tell us all apart from the backs of our heads. The announcement of DeepFace came and went mostly unnoticed.

The final algorithm was revealed and demonstrated by Facebook last week at the CVPR 2015 conference in Boston. Yann LeCun, head of Facebook’s artificial intelligence division, reportedly said it worked with an 83% success rate after 60,000 public photographs of 2,000 people from Flickr were run through a sophisticated neural network. This figure rises significantly, to 93.4%, if a frontal face is recognised, making it possibly as accurate as the human brain.

The algorithm works quite simply: it recognises silhouettes, clothes, hair colour and other distinguishing features by which a person may be identified, and compares them with other photographs. LeCun states that it easily recognises Mark Zuckerberg because he is always wearing the same grey T-shirt.
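The comparison step can be illustrated with a toy sketch: reduce each photo to a feature vector and identify a new photo by its most similar known vector. The feature names, numbers, and threshold below are invented for illustration; Facebook's real system learns its features with a neural network rather than using hand-made values like these.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# person -> averaged appearance features (hypothetical stand-ins for
# hair colour, clothing colour, silhouette width, ...)
known = {
    "zuckerberg": [0.9, 0.1, 0.5],  # the grey T-shirt dominates
    "lecun":      [0.2, 0.8, 0.6],
}

def identify(vector, threshold=0.9):
    """Return the best-matching person, or None if nothing is close."""
    name = max(known, key=lambda k: cosine(vector, known[k]))
    return name if cosine(vector, known[name]) >= threshold else None

print(identify([0.85, 0.15, 0.45]))  # -> zuckerberg
```

The threshold matters: without it, every photo would be assigned to whichever known person happened to be least dissimilar, which is how false identifications creep in.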

Thankfully, Yann LeCun recognises the romp to stardom AI is currently having and warns we must keep a watchful eye:

There is little doubt that future progress in computer vision will require breakthroughs in unsupervised learning, particularly for video understanding. But what principles should unsupervised learning be based on?

I for one would prefer not to be recognised by my behind; however, if this is the future our society holds, it’ll spur me on to dress better and certainly lose a few pounds to confuse those pesky Facebook neural networks.

This morning I stumbled onto the wonderfully engineered Word.Camera website via a PetaPixel blog post. The premise is simple: convert a JPEG into somewhat meaningful English.

To test it out, I uploaded this small JPEG of my son on top of an abandoned building, silhouetted by the Sun.

I gave the computer an unusual image with plenty of contrast to decipher.

It took the algorithm a few minutes to process an answer. Finally I was presented with several paragraphs of text giving a better-than-vague description of the image it received:

Of course, a barbed wire, a men, and an energy. Thus, the barbed wire remains unknown. The men evokes typing, and the energy is made from an enterprising or ambitious drive. Probably, the barbed wire remains unknown.

…Yet, a silhouette and a sunset: the silhouette evokes outlining, and the sunset is not the time in the evening at which the sun beginning to fall below the horizon.

OK, it’s a little jumbled, but understandable English for anyone with at least a slim grasp of the language. The prose has a familiar air of poetry about it, and with a little human refinement it could perhaps even be passed off as professional.
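That dreamlike register is easy to approximate: take labels a vision model has detected in an image and stitch them into loose prose with templates. The labels and templates in this sketch are invented, and Word.Camera's real pipeline is considerably more elaborate; this only illustrates why the output reads as poetic yet jumbled.

```python
# Toy label-to-prose generator in the spirit of the output quoted above.
TEMPLATES = [
    "Of course, a {a} and a {b}.",
    "The {a} evokes {b}.",
    "Thus, the {a} remains unknown.",
]

def describe(labels):
    """Turn a list of detected labels into template-driven prose."""
    sentences = []
    for i, label in enumerate(labels):
        nxt = labels[(i + 1) % len(labels)]  # pair each label with the next
        sentences.append(TEMPLATES[i % len(TEMPLATES)].format(a=label, b=nxt))
    return " ".join(sentences)

print(describe(["silhouette", "sunset", "barbed wire"]))
```

Because the templates are filled mechanically, the grammar sometimes bends ("a men", "an energy" in the real output above), which is exactly the jumbled-but-readable quality noted here.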

I spoke to David Phillips, who ran a poem-a-day blog in 2014, to ask his thoughts on how the computer algorithm could shake up the industry. Continue reading →