When we apply sensors to measure a workspace, some interesting questions arise about the information content of the examined system:
• How can we detect that something important has happened in that space?
• Can we estimate the degrees of freedom of the space of events?
• By measuring the events in the space, can we estimate the effect of an external impact?
• If we directly manipulate some processes in the events' space, can we measure the effect, i.e., the change that the intrusion induces in the events' process?
• Without a priori knowledge about the space, how much of its geometrical structure or physical processes can be estimated by evaluating the sensor network?
These questions pose exciting challenges in surveillance systems, traffic control, alarm systems, embedded medical devices and security applications. When measuring a scene with a network of sensors, what is the measure of valuable information that can serve as a feature set for specific problems, such as indexing and retrieval? In a given scene (surveillance, industrial testbed, medical supervision), the valuable information does not exist in isolation; it must be related to joint scenes and time instants. We should compare the measured parameters to those of other scenes and time instants, which may have a different set of significant parameters. For this reason, the task of comparison is not simple indexing and retrieval, but the chaining of partly overlapping sets of parameters. Climbing to a higher level of abstraction, the question is no longer ''What is it similar to?'' but ''Is it similar to anything so much that we should consider it?''
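As an illustration of the degrees-of-freedom question, the effective dimensionality of the space of events can be estimated from the eigenvalues of the sensor readings' covariance. The sketch below is a minimal, hypothetical example (not the project's actual method), assuming NumPy and synthetic readings from five sensors driven by two latent processes:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical setup: 5 sensors whose readings are driven by 2 latent processes
latent = rng.normal(size=(1000, 2))
mixing = np.array([[1.0, 0.0, 0.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0, 0.0, 0.0]])
readings = latent @ mixing + 0.01 * rng.normal(size=(1000, 5))

# eigenvalues of the sample covariance reveal the effective dimensionality:
# count the components needed to explain 95% of the variance
eigvals = np.sort(np.linalg.eigvalsh(np.cov(readings, rowvar=False)))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()
dof = int(np.searchsorted(explained, 0.95) + 1)
print(dof)  # → 2
```

The 95% threshold is an arbitrary illustrative choice; in practice the cutoff must be tuned to the sensor noise level.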

We can manipulate the measured space of events. This control can affect the measured information content, resulting in a more structured association among objects. The scene may contain geometrical, physical and causality relations, which can be exploited to better evaluate the scene.
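One simple way to quantify the effect of such a manipulation is to compare the distribution of a measured quantity before and after the intervention. The following sketch, under the assumption of synthetic scalar readings and a plain effect-size statistic (Cohen's d), is only an illustration of the idea, not the project's evaluation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical readings of one sensor before and after manipulating the scene
before = rng.normal(loc=0.0, scale=1.0, size=500)
after = rng.normal(loc=0.8, scale=1.0, size=500)

# Cohen's d: mean shift normalized by the pooled standard deviation
pooled = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
d = (after.mean() - before.mean()) / pooled
print(d)  # close to the true shift of 0.8
```

A value of d well above zero indicates that the intrusion produced a measurable change in the events' process.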

We have built several measurement environments for the project's purposes, and we have obtained results by evaluating experiments in these setups:
1. Multicamera system: motion tracking, recognition of object behavior, and the structural geometry derived from scene events,
2. Devices for depth measurements: images and point clouds from LIDAR and Time-of-Flight cameras for motion tracking and shape detection,
3. Aerial and medical images/image series: detection of changes, finding characteristic structures.
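The motion-tracking and change-detection tasks above can be illustrated in their simplest form by frame differencing: marking pixels whose intensity changes between consecutive frames. This is a crude, hypothetical stand-in for the multicamera pipeline, not the project's actual tracker:

```python
import numpy as np

def motion_mask(prev, curr, thresh=10):
    """Mark pixels whose intensity changed by more than `thresh`
    between two consecutive frames (crude frame differencing)."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200          # a bright object appears in a 2x2 region
print(motion_mask(prev, curr).sum())  # → 4 changed pixels
```

Real systems replace the fixed threshold with background modeling and connect the changed pixels into tracked objects.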
During the project the following important theoretical results have been published in leading conferences and journals:
1. Change detection and structure recognition of the given scene,
2. Improved feature point set for low resolution pattern recognition and enhanced active contour detection,
3. Unusual motion-flow pattern and crowd-behavior detection in video sequences,
4. Depth information filters in 2D (graphs, deconvolution) and in 3D (LIDAR, TOF).
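As a small illustration of 2D depth filtering, dropout pixels in a ToF depth map (pixels with no valid return) can be filled from the median of their valid neighbors. This sketch assumes NumPy and a zero-valued invalid marker; it is a minimal stand-in for the filters listed above, not the published method:

```python
import numpy as np

def fill_invalid_depth(depth, invalid=0.0):
    """Replace invalid depth pixels with the median of the
    valid values in their 3x3 neighborhood."""
    out = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if depth[y, x] == invalid:
                patch = depth[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
                valid = patch[patch != invalid]
                if valid.size:
                    out[y, x] = np.median(valid)
    return out

depth = np.full((5, 5), 2.0)   # flat surface 2 m away
depth[2, 2] = 0.0              # dropout pixel (no ToF return)
print(fill_invalid_depth(depth)[2, 2])  # → 2.0
```

The same idea extends to 3D point clouds, where neighborhoods are defined geometrically rather than on the pixel grid.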