When users step in front of the display screen, the system will ‘see’ them and invite them to take control of the display using an on-screen cursor.

Prof Roberto Cipolla, who, with Dr Bjorn Stenger at Toshiba and Tom Woodley at the Department of Engineering, has been pioneering the use of computer vision in human–machine interaction since 1992, said: ‘Gesture-control research is extremely exciting and is opening an array of possibilities for consumers, such as new interfaces for TVs and interactive displays in shop windows and information kiosks.’

Having developed the system, the researchers are now working on expanding its capabilities to ensure that the consumer will find it reliable and easy to use.

The researchers have also developed another vision-based interface that lets users switch the display’s language or content simply by showing it different picture cards.

The system uses a single camera mounted on top of the display; the interaction begins when a user raises a hand.

The software then tracks the person’s hand using multiple cues, including colour, motion and appearance. It can reliably recognise and track the user’s gestures even under rapid motion or changing light conditions. Because each cue becomes unreliable in different situations, this multiple-cue approach is key to making the human–computer interface work robustly.
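The article does not describe the team’s actual algorithm, but the idea of combining cues weighted by their current reliability can be sketched as follows. All the function names, scores and weights here are illustrative assumptions, not Toshiba’s or Cambridge’s implementation:

```python
import numpy as np

def fuse_cues(cue_scores, reliabilities):
    """Combine per-cue scores for candidate hand positions into one score each.

    cue_scores    -- dict mapping cue name to an array of scores, one per candidate
    reliabilities -- dict mapping cue name to a non-negative reliability weight
    """
    names = sorted(cue_scores)
    weights = np.array([reliabilities[n] for n in names], dtype=float)
    weights = weights / weights.sum()          # normalise so weights sum to 1
    stacked = np.stack([cue_scores[n] for n in names])
    return weights @ stacked                   # weighted sum over cues

# Three candidate positions scored by three (made-up) cues.
scores = {
    "colour":     np.array([0.9, 0.2, 0.1]),  # skin-colour match
    "motion":     np.array([0.3, 0.8, 0.1]),  # frame-difference energy
    "appearance": np.array([0.7, 0.4, 0.2]),  # template similarity
}
# Under changing light, colour is down-weighted and motion trusted more.
weights = {"colour": 0.2, "motion": 0.5, "appearance": 0.3}

fused = fuse_cues(scores, weights)
best = int(np.argmax(fused))  # index of the most likely hand position
```

In this toy example the colour cue favours candidate 0, but because lighting is assumed to be changing, the down-weighted colour score is outvoted by motion and the tracker settles on candidate 1 instead, which is the kind of robustness the multiple-cue approach is after.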

The team’s current research focuses on how to deal with multiple users (in a living room setting, for example), how to track with finer control and how to deal with difficult lighting conditions.
