Display as a Service

The goal of the project “Display as a Service” (DaaS) is to virtualize the connection between pixel sources and displays. While the predecessor project “Display Wall” investigated the software foundations for efficiently encoding pixels at a source, sending them over the network, and displaying them synchronously across independent output devices, DaaS builds on this infrastructure to research making the architecture available via standard service interfaces over the Internet, integrating it more closely with the hardware at both the source and the display, and scaling it up to more delivered pixels and more simultaneous users.

Instead of the classic 1-to-1 video connection, for example via DVI, HDMI, DisplayPort, or the wireless Intel WiDi or Apple AirPlay, DaaS provides resources for both pixel generation and pixel consumption (displays) as network-reachable services, which can be interconnected arbitrarily in an n-to-m fashion. DaaS accomplishes this entirely in software, without the need for expensive dedicated hardware for pixel distribution or synchronization. Pixels from one source (e.g., DVD playback) can be watched simultaneously and in full synchrony, for instance across a large tiled display wall consisting of multiple independent LCD screens and several mobile devices. The opposite scenario is possible as well: pixels from many independent sources in the network (e.g., a video plus multiple slide presentations from several WiFi-connected laptops) can be displayed in combination on a single stationary or mobile screen.
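The n-to-m interconnection can be pictured as a small routing broker that forwards frames from any registered source to all subscribed displays. The sketch below is purely illustrative; the class and method names (`DisplayBroker`, `subscribe`, `publish`) are hypothetical and do not reflect the project's actual service interfaces.

```python
# Hypothetical sketch of n-to-m pixel routing in software:
# any source can feed any number of display sinks, and any sink
# can subscribe to any number of sources.

class DisplayBroker:
    """Routes frames from registered sources to all subscribed sinks."""

    def __init__(self):
        # Maps a source identifier to the list of sink callbacks.
        self._subscriptions = {}

    def subscribe(self, source_id, sink):
        """Attach a display sink (a callable receiving frames) to a source."""
        self._subscriptions.setdefault(source_id, []).append(sink)

    def publish(self, source_id, frame):
        """Deliver one frame from a source to every subscribed sink."""
        for sink in self._subscriptions.get(source_id, []):
            sink(frame)
```

In this picture, a tiled wall is simply several sinks subscribed to the same source, and the combined-display scenario is one sink subscribed to several sources.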

A central research question in the DaaS project is the reduction of latency between pixel generator and pixel consumer. Pixels fed into the system by a pixel-producing application undergo operations such as scaling, color conversion, video encoding, and streaming, all of which need to be executed as fast as possible to keep the overall transport time from source to sink minimal. At the display end, the inverse operations are performed until the image is finally shown on the display device. The figure that actually matters is the end-to-end latency: the time needed to transport an interaction signal from the user to the pixel-generating application and the pixels affected by this interaction (e.g., a changed application state or virtual camera position) back to the user, who then sees the result. End-to-end latency needs to stay below a certain threshold so that users do not perceive interaction lag.
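A pipeline of this kind can be instrumented directly: timestamp a frame when it enters, run it through the stages, and read the clock again on the way out. The sketch below is a minimal assumed setup, with placeholder stage functions standing in for scaling, color conversion, encoding, and streaming; it is not the project's actual measurement code.

```python
import time

def run_pipeline(frame, stages):
    """Pass a frame through each pipeline stage in order.

    Returns the processed frame and the elapsed wall-clock time in seconds,
    measured with a monotonic high-resolution timer.
    """
    start = time.perf_counter()
    for stage in stages:
        frame = stage(frame)
    return frame, time.perf_counter() - start

# Placeholder stages; real stages would be scaling, color conversion,
# video encoding, and network streaming.
stages = [lambda f: f, lambda f: f]
result, elapsed = run_pipeline("frame", stages)
```

Measuring true end-to-end latency additionally requires closing the loop through the user's interaction signal and the return path to the display, so per-stage timings like these are a lower bound on what the user experiences.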

A further important research topic is the synchronization of content at the display end, across multiple independent output devices forming a larger contiguous wall. Pixels that belong to the same stream need to be displayed at exactly the same instant on each of the collaborating devices. This becomes especially important for timing-sensitive active stereo content, where synchronization must cover not only the instant pixels are drawn, but also the instant at which the displays perform their internal refresh, in sync with external shutter glasses and typically 120 times per second.
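One common scheme for this kind of synchronization (an assumed illustration, not necessarily the mechanism used in DaaS) is to attach a presentation timestamp (PTS) to each frame and have every device quantize it onto a shared 120 Hz refresh grid, so that all tiles flip at the same grid instant:

```python
import math

# Assumed scheme: frames carry a presentation timestamp (PTS) relative to a
# clock shared by all devices; each device presents a frame at the earliest
# refresh instant on the common 120 Hz grid that is not before the PTS.

REFRESH_HZ = 120
PERIOD = 1.0 / REFRESH_HZ  # one refresh interval in seconds

def next_refresh_slot(pts, now):
    """Return the earliest refresh instant >= max(pts, now) on the 120 Hz grid."""
    target = max(pts, now)
    return math.ceil(target / PERIOD) * PERIOD
```

Because every device rounds up to the same grid, a frame that arrives slightly earlier on one tile than on another is still presented at the identical instant, provided the devices' clocks and refresh cycles are themselves kept aligned.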

This year, the DaaS project produced three peer-reviewed publications, filed one European patent application, supported the construction of a tech-transfer DaaS display wall setup at Intel USA, and received the CeBIT Innovation Award 2013.