A real-time prototype of the framework is developed which is able to perform full reconstruction of the human body (and objects) in a large scene. The real-time performance is achieved by using a parallel processing architecture on a CUDA-enabled GP-GPU [AAMD11].
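The CUDA implementation itself is not reproduced here. Assuming a silhouette-based (visual-hull) volumetric reconstruction, the following minimal NumPy sketch shows the per-voxel silhouette test that the GPU parallelizes, one thread per voxel; all function and variable names are illustrative.

```python
import numpy as np

def carve_voxels(voxel_centers, silhouettes, projections):
    """Mark a voxel as occupied only if it projects inside the
    foreground silhouette of every camera (visual-hull carving).

    voxel_centers : (N, 3) world coordinates of voxel centers
    silhouettes   : list of (H, W) boolean foreground masks
    projections   : list of (3, 4) camera projection matrices
    """
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    occupied = np.ones(len(voxel_centers), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        pix = (P @ homog.T).T                       # project all voxels at once
        uv = (pix[:, :2] / pix[:, 2:3]).round().astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        fg = np.zeros(len(uv), dtype=bool)
        fg[inside] = mask[uv[inside, 1], uv[inside, 0]]
        occupied &= fg                              # carve voxels outside any silhouette
    return occupied
```

Because every voxel is tested independently, the loop body maps directly onto one GPU thread per voxel, which is what makes the real-time rate attainable.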



A two-point-based method to estimate translations among virtual cameras in the framework is proposed and verified [AD12a] [AD11a] [AD10a] [AFQ+11].

The uncertainties of the homography transformations involved in the framework, and their error propagation on the image planes and Euclidean planes, have been modeled using statistical geometry.
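The exact statistical-geometry derivation is in the thesis; as a hedged sketch of the general technique, a pixel covariance can be pushed through a fixed homography to first order as Σ' = J Σ Jᵀ, with J the Jacobian of the projective mapping at the point (names illustrative):

```python
import numpy as np

def propagate_homography_cov(H, p, cov_p):
    """First-order propagation of a 2x2 pixel covariance through a
    3x3 homography H: cov' = J @ cov_p @ J.T, where J is the Jacobian
    of the projective mapping evaluated at p = (x, y)."""
    x, y = p
    num_u = H[0, 0] * x + H[0, 1] * y + H[0, 2]
    num_v = H[1, 0] * x + H[1, 1] * y + H[1, 2]
    den = H[2, 0] * x + H[2, 1] * y + H[2, 2]
    # Jacobian of (num_u/den, num_v/den) w.r.t. (x, y), quotient rule
    J = np.empty((2, 2))
    J[0, 0] = (H[0, 0] * den - num_u * H[2, 0]) / den**2
    J[0, 1] = (H[0, 1] * den - num_u * H[2, 1]) / den**2
    J[1, 0] = (H[1, 0] * den - num_v * H[2, 0]) / den**2
    J[1, 1] = (H[1, 1] * den - num_v * H[2, 1]) / den**2
    return J @ cov_p @ J.T
```

For the identity homography the covariance is unchanged, and for a pure scaling by 2 it grows by a factor of 4, which gives a quick sanity check of the Jacobian.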


Within the context of the proposed framework, a genetic algorithm is developed to provide an optimal coverage of a polygonal object (or a scene) by the camera network.

The quality of reconstruction using a camera network depends mainly on three parameters:

1. The number of cameras
2. The quality of the applied background subtraction technique
3. The camera configurations (e.g., positions)
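To make parameter 2 concrete, here is a deliberately naive frame-differencing background subtractor (the thesis's actual technique is not specified in this section); it produces the kind of binary silhouette mask that the volumetric reconstruction consumes:

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Naive background subtraction: a pixel is foreground when its
    absolute difference from a static background model exceeds a
    threshold. Real systems use adaptive models (e.g. mixtures of
    Gaussians), but the output is the same kind of boolean mask."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return diff > threshold
```

Errors in this mask propagate directly into the reconstruction, which is why the subtraction quality appears alongside camera count and placement in the list above.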

[Figure: two cameras C1 and C2 observing a convex polygon with edges e1–e5 in the world frame {W}, with reference plane π_ref]

An exemplary convex polygon with 5 edges is observed by two cameras. The problem is how to arrange the cameras to obtain optimal registration of the polygon with the most completeness.

After registering with the present camera configuration, an extra part (colored in red) is registered as a part of the object!


Solution: use geometry (e.g., the normals of the edges), define cost functions, and apply a GA.
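As a hedged sketch of this idea (the thesis's actual cost functions are not reproduced here), the following toy GA places cameras on a circle around a convex polygon and scores an individual by how many edges are front-facing to at least one camera, using the edge normals. Selection is by truncation with Gaussian mutation; crossover is omitted for brevity, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_data(poly):
    """Midpoints and outward unit normals of a counter-clockwise polygon."""
    nxt = np.roll(poly, -1, axis=0)
    mid = (poly + nxt) / 2
    d = nxt - poly
    normals = np.stack([d[:, 1], -d[:, 0]], axis=1)   # outward for CCW order
    return mid, normals / np.linalg.norm(normals, axis=1, keepdims=True)

def coverage(angles, mid, normals, radius=10.0):
    """Fitness: edges front-facing to at least one camera placed on a
    circle of the given radius around the origin."""
    cams = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    seen = np.zeros(len(mid), dtype=bool)
    for c in cams:
        seen |= ((c - mid) * normals).sum(axis=1) > 0
    return seen.sum()

def ga_place_cameras(poly, n_cams=2, pop=40, gens=60):
    """Evolve camera angles to maximize the number of covered edges."""
    mid, normals = edge_data(poly)
    population = rng.uniform(0, 2 * np.pi, size=(pop, n_cams))
    for _ in range(gens):
        fit = np.array([coverage(ind, mid, normals) for ind in population])
        parents = population[np.argsort(fit)[-pop // 2:]]         # keep the best half
        children = parents[rng.integers(0, len(parents), pop // 2)]
        children = children + rng.normal(0, 0.3, children.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    fit = np.array([coverage(ind, mid, normals) for ind in population])
    return population[fit.argmax()], fit.max()
```

For a square, the GA converges to roughly opposite viewing directions so that all four edges are front-facing to some camera, which is the behaviour the red-artifact example above is meant to avoid.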


LRF is an active sensor which can be used as a complementary sensor to the cameras.

Comparison table:

                                 Camera       LRF
    Sensitivity to illumination  Very high    N/A
    Occlusion handling           Weak         Fair
    Sensitivity to texture       High         N/A
    Precision in range sensing   Fair         Very good
    Color sensing                Very good    N/A

10=31)()()(xLCLCLCtRTEstimation of the rigid transformation,CT

L(α), among a stereo camera and a LRF
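The thesis's own estimation procedure for $^{C}T_{L}(\alpha)$ is not reproduced here. For illustration, given 3D point correspondences between the two sensors, one standard closed-form solution for the rigid part (R, t) is the Kabsch/Umeyama SVD method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    via SVD of the centered cross-covariance (Kabsch/Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Stacking the returned R and t into a 4×4 homogeneous matrix yields exactly the block form of the transformation shown above.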


[Figure: setup and scene. The statue observed together with the first, second, and 47th virtual planes.]


Empirical analysis of the effects of IS noise on the translation estimation method: input noise in degrees (roll, pitch and yaw of the inertial sensor); output uncertainty in cm (on the three elements of the estimated translation vector).
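One generic way to produce such input-noise versus output-uncertainty curves is Monte Carlo perturbation. The sketch below is not the thesis's analysis, only an illustration of the procedure: perturb the inputs with zero-mean Gaussian noise, rerun the estimator, and report the empirical spread of the outputs.

```python
import numpy as np

def monte_carlo_uncertainty(estimator, inputs, sigma, trials=2000, seed=0):
    """Empirical output uncertainty: perturb `inputs` with zero-mean
    Gaussian noise of std `sigma`, rerun `estimator` each time, and
    return the per-component standard deviation of the outputs."""
    rng = np.random.default_rng(seed)
    outs = np.array([estimator(inputs + rng.normal(0, sigma, inputs.shape))
                     for _ in range(trials)])
    return outs.std(axis=0)
```

Sweeping `sigma` over a range of roll/pitch/yaw noise levels and plotting the returned standard deviations reproduces the shape of curves like the ones reported here.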

[Plot: output uncertainty (cm) versus input noise (cm)]


[Plot: output uncertainty (cm) versus input noise (pixels)]


The uncertainties for pixels of the virtual camera's image plane are demonstrated by covariance ellipses, scaled 1000 times for clarity.
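For reference, the axes and orientation of such a covariance ellipse follow from the eigen-decomposition of the 2×2 covariance; the `scale` argument mirrors the magnification used for display (names illustrative):

```python
import numpy as np

def covariance_ellipse(cov, n_std=1.0, scale=1.0):
    """Semi-axis lengths and orientation of the covariance ellipse of a
    2x2 covariance matrix; `scale` magnifies the ellipse for display."""
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    axes = scale * n_std * np.sqrt(vals[::-1])  # major semi-axis first
    angle = np.arctan2(vecs[1, 1], vecs[0, 1])  # direction of the major axis
    return axes, angle
```

A diagonal covariance diag(4, 1) gives semi-axes (2, 1) with the major axis along x, which matches the intuition that the standard deviation is the square root of the variance.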


The uncertainties for different registered points on the Euclidean inertial plane, demonstrated by covariance ellipses. The blue and red ellipses stand for points registered by the first and second camera, respectively. For the sake of clarity, the covariance values are scaled 500 and 600 times for the first and second cameras, respectively.


Uncertainties for an exemplary pixel x = [450 450 1]^T, where s = [π/2 −π/2 0]^T.


1200 × 1200 cm²


10=31)()()(xLCLCLCtRTReprojection of LRF data on the image(blue points)

[Figure: image combined with range data to produce the result]

[Results for α = 2°, α = 12°, and α = 23.2° (during 6 months)]


•We investigated the use of IS for 3D data registration by using a network of cameras and inertial sensors.

•A volumetric data registration algorithm was proposed.

•Normally, the volumetric reconstruction of a scene is time consuming due to the huge amount of data to be processed. In order to achieve real-time processing, a prototype was built using a GP-GPU and CUDA.

•A method to estimate the translation among cameras within the network was proposed. The certainty of the method has been evaluated in the presence of different noise levels.

•The issue of sensor configuration, particularly the cameras' positions in the scene, was investigated, and a geometric method to find an optimal configuration using a genetic algorithm was proposed.

•A method to estimate the extrinsic parameters between a camera and an LRF was proposed as a step towards applying range data in the framework.

Integration of range data within the proposed inertial-based data registration framework.