(web)Camera objects

-
Aude Genton (project assistant for object design) just sent me some documents/plans about the (web)cam objects. This is a shift in the objects' language for the (web)cameras: they don't look technological anymore and have a second, "domestic" function. They can be hung on walls or ceilings easily and bring "computer vision" into homes or offices. They can also become blogjects, Skype phone-cameras, as well as "AR" or "spatial configuration" trackers. They are challenging objects, as we know that bringing tracking cameras into our own homes raises some major questions.
In the current state, we have 3 mirrors (for walls, tables and hands), one of which is also a computer, and 1 light (ceiling).

"AR Ready" simple objects based on AR signs/patterns

-
-
Based upon the AR signs & patterns being developed by Tatiana Rihs (project assistant for graphic design), we will be able to revamp lots of old-tech or paper-based products by adding AR, media, dynamic/networked content & interaction functionalities to them. They will become "AR-ready" products.
We will use our XjARToolkit software for such extended rich media functionalities.
First samples by Tatiana include wallpapers, t-shirts, stationery, post-its, ink plugs, stickers, posters, badges, fabric, etc.

Some illustrated applications

---
---
---
Bram Dauw (project assistant for Media & Interaction Design) is working on potential applications for the "AR" technology and possibly for the "Video Tracking of Spatial Configurations" as well. They imply the use of the "AR" signs and patterns or "AR-ready" products, as well as the analysis and tracking of spatial configurations. Think of an "I Love TV" t-shirt that would light your TV up when you enter its field of vision or, depending on the day, an "I Hate TV" one that would on the contrary shut it down. By extension, you could imagine any interaction between a shirt or a sign in space, its location being tracked by the camera, and a contextually triggered event. Or "AR" memory stickers that would let you cover your suitcase all over, helping you stick AR media memory items to it. The same goes for badges on clothes, or even a strange t-shirt that would let you blur your image in front of monitoring cameras! Or post-it applications, or wallpaper ones, etc.

Video Tracking System of Spatial Configurations

The very first version of the "Video Tracking of Spatial Configuration (VTSC)" system developed by fabric | ch was delivered to Julien Nembrini from the EPFL. The system makes it possible to monitor an unlimited set of volumes in a given space and to detect whether a given volume is filled or not. A volume is obtained from a set of different USB webcam points of view (shooting the same room from distinct locations, for example). The intersections of the zones defined in these points of view define volumes.
.
The system works with a set of basic USB webcams. Each USB webcam is controlled by a dedicated application that manages the 2D zones to be monitored in the image obtained from its associated webcam: it detects whether a zone is activated (filled) or not. All these applications are networked (meaning that the monitored volumes can even be in distinct remote locations), and all information is centralized in a main controller application known as the moderator. The moderator filters the received information and decides whether a volume is activated (filled) or not.
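The moderator's decision rule can be sketched roughly as follows. This is an illustrative reconstruction, not the actual VTSC code: the "camera:zone" identifiers and method names are assumptions. A volume, being the intersection of zones seen from distinct points of view, is considered filled only when all of its zones report activation.

```java
import java.util.*;

// Hypothetical sketch of the moderator's decision logic.
public class Moderator {
    // volume name -> the set of 2D zones whose intersection defines it
    private final Map<String, Set<String>> volumes = new HashMap<>();
    // zone ids currently reported as activated by the networked webcam apps
    private final Set<String> activatedZones = new HashSet<>();

    public void defineVolume(String name, Set<String> zones) {
        volumes.put(name, zones);
    }

    // called whenever a webcam application reports a zone state change
    public void reportZone(String zoneId, boolean filled) {
        if (filled) activatedZones.add(zoneId);
        else activatedZones.remove(zoneId);
    }

    // a volume is filled only if ALL of its defining zones are activated
    public boolean isVolumeFilled(String name) {
        return activatedZones.containsAll(volumes.get(name));
    }
}
```

With two cameras watching the same corner of a table, the volume would only count as filled once both of their zones are activated, which is what filters out activations caused by an object crossing a single camera's line of sight.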
.
Within the framework of this project, a volume activation will prompt EPFL's e-puck robots to organize themselves in a given configuration.
-

As the system is network-based, it can be deployed in a very convenient way. The number of USB webcams involved, the number of computers needed and the location of these computers can be adapted very easily to any kind of project/configuration. As mentioned previously, it is even possible to combine the monitoring of volumes that are not at the same location in order to control something else in yet another distinct location. It can also be easily integrated into the Rhizoreality system developed by fabric | ch.
.
The tests we have made raised a set of limitations/observations to take into consideration when deploying a VTSC configuration. We successfully plugged 4 USB webcams into the same computer (PC laptop or desktop) using a USB hub. Of course, the application in charge of controlling a given webcam must be installed and run on the same computer as the webcam (the video stream is not broadcast). So one basic computer was able to host 4 video image analysis applications plus the moderator without any major frame rate loss.
One must keep in mind some USB limitations linked to cable length (around 10 m max.). It should be possible to connect the webcam with a longer cable through the use of a USB repeater or a USB-to-RJ45 converter, but these options were not tested or used within the frame of this project.
Of course, the more powerful the host computer is, the more webcams it should be possible to connect, keeping in mind that the USB bus has its own bandwidth limitation, which will of course restrict the number of cameras that can be connected to the same computer without video signal loss or major frame rate loss.
.
The system is based on Java, but video signals are accessed through DirectX, so VTSC is for now condemned to run under Windows. This can evolve in time, by changing the webcam's video signal access module.
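One way that change could stay cheap is to keep the webcam access behind a small interface, so that the DirectX-based module could later be swapped for a cross-platform one without touching the zone analysis code. A minimal sketch, with illustrative names that are not the actual VTSC API:

```java
// Hypothetical abstraction over the webcam's video signal access module.
public interface FrameGrabber {
    int getWidth();
    int getHeight();
    // returns the current frame as 8-bit grayscale pixels, row by row
    byte[] grabFrame();
}

// The Windows implementation would wrap DirectShow capture;
// here, a stub that returns black frames, for illustration only.
class StubGrabber implements FrameGrabber {
    public int getWidth() { return 320; }
    public int getHeight() { return 240; }
    public byte[] grabFrame() { return new byte[320 * 240]; }
}
```

The analysis applications would then depend only on `FrameGrabber`, and porting VTSC off Windows would mean writing one new implementation of that interface.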

VTSC - Tech. Review

Video tracking systems are usually set up for object motion tracking or change detection. These systems are expected to run in real time, i.e. analyzing a live video stream and delivering the expected result straight away, without delay.
The obvious main purposes of such systems are usually linked to video surveillance (persons, vehicles) or even object guidance (missiles).

A large set of academic (http://citeseer.ist.psu.edu/676131.html) and commercial references exists, exploiting a well-known set of distinct methods. Usually, the better the algorithm, the heavier its CPU footprint.
Commercial solutions usually deliver very good results by using dedicated hardware, making it possible to run high-performance algorithms in real time.

In the framework of this project, a set of pre-defined constraints must be taken into account:

-> The tracking system must interact with an existing robot control system developed at the EPFL
-> Low cost hardware may be used for cameras and computers (video streams analysis)
-> Several tracked area activations, issued from several distinct cameras, may be combined to make one decision validated or not
-> The number of cameras must be maximized (in order to obtain a maximum of tracked configurations), while the set of computers needed to perform the video analysis must be minimized

This set of constraints excludes the use of any commercial solution, which may come at an important cost as well as prove difficult to adapt to the described experiment scope.

It disqualifies as well the open-source or freely available video analysis systems because of their lack of functionalities: none of the tested projects was able to deal with several cameras connected to the same host computer, for example.
Some of them also require a particular type of camera, compatible with some specific drivers only (WDM for JMyron).

By developing a highly networked system based on commonly used technology (Microsoft DirectShow), we will be able to use any Windows-compatible webcam without any particular limitation. It also means being able to access several camera video streams through USB from the same host computer.
The network layer will ensure that all video analysis data can be centralized in a dedicated application in charge of validating a given decision (e.g. "3 persons are sitting around the table, true or false?") as well as making this information available to the robots' controller application (EPFL), again through the network.
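The handoff to the robot controller could be as simple as a one-line plain-text message over a socket. The host, port, and message format below are assumptions for illustration, not the project's actual protocol:

```java
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical sketch of the moderator notifying the EPFL robot
// controller once a decision has been validated.
public class DecisionNotifier {
    // assumed plain-text wire format: "<volume>;FILLED" or "<volume>;EMPTY"
    public static String formatDecision(String volume, boolean filled) {
        return volume + ";" + (filled ? "FILLED" : "EMPTY");
    }

    // opens a TCP connection to the controller and sends one decision line
    public static void send(String host, int port, String volume, boolean filled)
            throws Exception {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(formatDecision(volume, filled));
        }
    }
}
```

Keeping the format this simple means the controller side, whatever language it is written in, only has to read lines and split on the separator.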

The video analysis itself can be freely based on methods described in the numerous research papers found in the literature, making it possible to choose one method or another according to the CPU footprint we can allow for the application.

New video tracking methods may even be included later, making it possible to have a set of networked video tracking applications each running a different video analysis algorithm.

Webcam sphere (spycam sphere)

-
In (distant) relation to the webcam objects & mirrors by Aude for the *Variable environment/* project, this is a monitoring sphere that tracks all the space around itself. Made out of traditional video cameras and b&w screen displays, it can apparently be rolled. No software behind it. Project by Jonathan Schipper.
See the video HERE.

AR & Mirror (kind of)

-
An 'AR' project by Adam Somlai-Fischer where they seem to project images (augmented reality with brain images) on a mirror, or something looking like a mirror. Again, something to look at for our project, as we wanted to have some kind of screen-based or projection-based mirror for augmented reality situations.
Check out the video HERE.

Tracking camera & lights

-
Camera tracking (vision) at the scale of a huge hall, transmitted to a grid of rotating robotic neon lights. The behavior of the lights and the variability of the overall pattern depend on the number of users being tracked.
Video HERE. Project by Rafael Lozano-Hemmer.