Zooming user interfaces, or zoomable user interfaces (ZUI, pronounced "zoo-ee"), are not exactly a new concept in the field of HCI/IxD. A ZUI can broadly be defined as a graphical environment in which users can change the scale of the viewed area in order to see more or less detail, and browse through different documents or objects. Despite all the work and research carried out in the space over the years, the ZUI has had somewhat limited success; finding an effective and, if you'll excuse the pun, scalable solution has proved elusive. That is not to say that ZUIs haven't been effectively implemented in certain scenarios: success stories such as Google Maps, Microsoft Live Labs' Seadragon and Prezi have capitalised on the obvious benefits of well-applied zoomable interfaces.

The term itself was coined by one Franklin Servan‐Schreiber while working for the Sony Research Lab in partnership with Ben Bederson and Ken Perlin. One of the longest running efforts to create a ZUI has been the Pad++ project started by Ken Perlin, Jim Hollan, and Ben Bederson at New York University and continued at the University of New Mexico under Hollan’s direction. More recent ZUI efforts include Archy by the late Jef Raskin, and the simple ZUI of the Squeak Smalltalk programming environment and language. Bederson developed Jazz and later Piccolo at the University of Maryland, College Park, which is still actively being developed in Java and C#.

ZUIs use zooming as the main metaphor for browsing through hyperlinked information. Objects are presented within a zoomed page or canvas and can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom.
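The core mechanic behind that metaphor is a simple view transform. As a minimal sketch (the class and names below are my own illustration, not from any of the toolkits mentioned), a zoomable viewport maps world coordinates to screen coordinates via a scale and a pan, and "zooming at the cursor" means adjusting the pan so the point under the cursor stays fixed while everything else scales around it:

```python
from dataclasses import dataclass

@dataclass
class ZoomView:
    """Minimal 2D zoomable viewport: world -> screen is scale, then pan."""
    scale: float = 1.0
    pan_x: float = 0.0
    pan_y: float = 0.0

    def world_to_screen(self, wx, wy):
        return (wx * self.scale + self.pan_x, wy * self.scale + self.pan_y)

    def zoom_at(self, sx, sy, factor):
        """Zoom by `factor`, keeping the screen point (sx, sy) fixed,
        so the spot under the cursor stays put while the canvas scales."""
        self.pan_x = sx - (sx - self.pan_x) * factor
        self.pan_y = sy - (sy - self.pan_y) * factor
        self.scale *= factor

view = ZoomView()
view.zoom_at(100, 100, 2.0)            # zoom in 2x around screen point (100, 100)
print(view.world_to_screen(100, 100))  # (100.0, 100.0) -- the fixed point is unmoved
```

Because the transform composes, nested objects need no special handling: zooming far enough into a child object simply keeps multiplying the scale, which is what makes the recursive, arbitrarily deep zoom possible.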

A good introductory read is the late, great Jef Raskin's passage on ZoomWorld in his seminal HCI tome The Humane Interface: New Directions for Designing Interactive Systems, in which he discusses his idea of using the ZUI as a solution to users' navigational dilemmas. It's also worth noting that he spent the latter stages of his career working with his research team on implementations of this new UI paradigm.

Dr Ben Shneiderman, another noted researcher in the HCI field, made the following observation, which nicely encapsulates the lure of zoomable interfaces:

“Humans can recognize the spatial configuration of elements in a picture and notice relationships among elements quickly. This highly developed visual system means people can grasp the content of a picture much faster than they can scan and understand text. Interface designers can capitalize on this by shifting some of the cognitive load of information retrieval to the perceptual system. By appropriately coding properties by size, position, shape, and color, we can greatly reduce the need for explicit selection, sorting, and scanning operations.”

The potential benefits of ZUIs are well documented and, as previously mentioned, recent applications such as Prezi and Microsoft's Deep Zoom technology have nicely demonstrated use cases in which ZUIs are a viable and cognitively acceptable model. However, the shortcomings are also well documented, with the most commonly cited bête noire being a phenomenon known as 'desert fog'. This occurs when a person becomes disorientated whilst using a zoomable interface and loses track of where they are; the resulting confusion leads to frustration and, ultimately, to the abandonment of whatever task they were trying to carry out. The user no longer has any on-screen landmarks or cues from which to work out where they are. Unquestionably, this is worse than most everyday, orthodox interfaces, where at the very least a user can often infer the context of their operations by looking at what is on screen. In 'desert fog' there is nothing on screen to aid this inference, and the user is left in a proverbial no-man's land. Wayfinding aids, assistive navigational maps and various other interface features have been employed to address this undesirable scenario, albeit with varying degrees of success. Perhaps seeking a singular solution is the wrong approach; the ZUI conundrum may well be a case of 'one size fits some'.
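One common mitigation is worth making concrete: detect when the viewport contains no landmarks at all and fall back to an overview of the whole content space. The sketch below is my own illustration of that idea, assuming objects and viewports are simple axis-aligned bounding boxes; it is not taken from any particular ZUI toolkit.

```python
def visible_objects(objects, left, top, right, bottom):
    """Return the objects whose bounding boxes intersect the viewport."""
    return [o for o in objects
            if o["x2"] > left and o["x1"] < right
            and o["y2"] > top and o["y1"] < bottom]

def check_desert_fog(objects, viewport):
    """If nothing is visible, the user has no landmarks -- classic desert fog.
    A common remedy is to snap back to an overview of all content."""
    left, top, right, bottom = viewport
    if visible_objects(objects, left, top, right, bottom):
        return viewport  # landmarks present; leave the view alone
    # Fall back to the bounding box of everything (the "overview" view)
    return (min(o["x1"] for o in objects), min(o["y1"] for o in objects),
            max(o["x2"] for o in objects), max(o["y2"] for o in objects))

docs = [{"x1": 0, "y1": 0, "x2": 50, "y2": 50}]
print(check_desert_fog(docs, (200, 200, 300, 300)))  # -> (0, 0, 50, 50)
```

Real systems tend to animate this recovery rather than jump, and to pair it with a persistent overview map, but the underlying check is the same: never let the viewport drift into empty space with nothing to infer position from.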

Every now and then, however, a demonstration or an advancement in technology comes along which reignites the buzz around zoomable interfaces. Yesterday I happened upon one of these demos, which inspired me to write this little piece. At this year's CESA Developers Conference in Japan, Sony revealed an upcoming technology which will shortly be available to developers as an SDK for both the PS3 and the PSP. Sony have christened it High-Resolution Image Enlargement Technology, and despite the rather long-winded name it does not fail to impress. Watching the demonstration video, I was taken aback by the speed and ease with which the system handled such resolution-intensive content.

The video below showcases a number of the demonstrations. The main demo appears to be a release calendar in which each entry contains high-resolution photos or a video of whatever is being released that particular day. Make sure you stick around for the mosquito – it's quite impressive. This is a genuinely astounding piece of technology that could well enable some pretty cool software applications; the real selling point for me, however, is that it will be available on widely used consumer products.

There have been many exciting developments in the field of HCI recently, with augmented reality, experimental sensory experiences and numerous other emerging technologies making the headlines. Over the past year, two in particular have stood out for me personally.

At TED this year, Dr Pattie Maes, a professor at MIT with the Fluid Interfaces Group, gave a mind-blowing demo under the moniker of SixthSense, featuring a wearable device that enables new interactions between the real world and the world of data.

Another technology I happened upon recently was featured at this year's SIGGRAPH: a demonstration called Touchable Holography, involving mid-air displays, holographics and actual tactile feedback. Normally we can "see" holographic images as if they were really floating in front of us, but we cannot "touch" them, because they are nothing but light. To address this problem, Takayuki Hoshi and Masafumi Takahashi of the University of Tokyo have ingeniously combined holographics with tactile feedback.

This project adds tactile feedback to the hovering image in 3D free space. Tactile sensation requires contact with objects, but including a stimulator in the work space dilutes the appearance of holographic images. The Airborne Ultrasound Tactile Display solves this problem by producing tactile sensation on a user’s hand without any direct contact and without diluting the quality of the holographic projection.

The potential applications of both of these technologies are huge, and I await further exciting developments with great interest.

RFID and NFC will both undoubtedly play a huge role in the field of interaction design in the coming years. The Institute of Design at the Oslo School of Architecture and Design in Norway have been carrying out some very interesting research in the field and have come up with some very innovative applications of these technologies. Their Touch initiative is a research project investigating Near Field Communication, a technology that, in short, enables connections between mobile phones and physical things.

For their Nearness project they put together a nice short video in collaboration with BERG which illustrates some of the potential applications of these technologies.

Whilst watching the clip I was immediately reminded of the mind-blowing Honda Accord Cog Commercial which was made a number of years ago. You can watch the advert below.

The Touch group have obligingly acknowledged their influences, including a mention of the art film by Swiss artists Peter Fischli and David Weiss called 'Der Lauf der Dinge', or 'The Way Things Go'. For their film they built an enormous, precarious structure, 100 feet long, out of common items. Using fire, water, gravity and chemistry, they created a mind-blowing chain reaction of physical and chemical interactions and precisely crafted chaos.

As a child I was fascinated by dominoes, and I can recall watching with amazement those videos in which thousands of carefully placed pieces of plastic ran meandering paths across massive high-school gym floors. There is something hugely captivating about watching a chain of self-triggering events, and all of these films use this technique to great effect.

From the world of art to the business of advertising, and eventually arriving in the field of interaction design, the 'visual chain of events' device has been used to great effect. Let's see where it turns up next.