Detailed Description

The sensor is responsible for acquiring real-world data and feeding it into the reconstruction volume. The lifetime of a reme_sensor_t is usually as follows:

Create a new sensor (reme_sensor_create) using either a backend name or a specific sensor configuration file. This method usually returns with success when the necessary drivers for the sensor were found. That does not necessarily mean the sensor is actually present; we say usually, since you can force a test-open of the sensor at this point using the require_can_open flag.
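A minimal sketch of this lifetime, assuming the usual SDK header and open/close/destroy counterparts to reme_sensor_create (their exact names may differ in your SDK version):

    #include <reconstructmesdk/reme.h>
    #include <stdbool.h>

    int main(void) {
      reme_context_t c;
      reme_context_create(&c);

      /* Try a list of backends in order; the first one whose drivers are
         found wins. Passing true forces a test-open (require_can_open). */
      reme_sensor_t s;
      reme_sensor_create(c, "openni;mskinect;file", true, &s);
      reme_sensor_open(c, s);

      /* ... grab frames, track the position, and update the volume ... */

      reme_sensor_close(c, s);
      reme_sensor_destroy(c, &s);
      reme_context_destroy(&c);
      return 0;
    }

The later sketches on this page continue with the context c and sensor s created here.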

When dealing with sensor data, ReconstructMe offers two types of views. The first view, REME_SENSOR_VIEW_RAW, corresponds to the data passed as raw input to ReconstructMe. The second view type, REME_SENSOR_VIEW_RECONSTRUCTED, corresponds to a synthetic view generated by raytracing the volume from the current sensor position.

Enumeration Type Documentation

Each sensor might provide different frame types, all of which are 2D images. Not all sensors support all frame types, and the number of supported frames can be configuration dependent.

Enumerator

REME_IMAGE_AUX

Auxiliary image, if provided by the sensor. Depending on the sensor type and its configuration, the auxiliary image can be of any type; commonly it is either RGB or IR. Usually RGB, 3 channels, 1 byte per channel.

REME_IMAGE_DEPTH

Depth image. RGB, 3 channels, 1 byte per channel.

REME_IMAGE_VOLUME

Rendered image of the volume as viewed from the current sensor perspective. RGB, 3 channels, 1 byte per channel.

When dealing with sensor data, ReconstructMe offers two types of views. The first view, REME_SENSOR_VIEW_RAW, corresponds to the data passed as raw input to ReconstructMe. The second view type, REME_SENSOR_VIEW_RECONSTRUCTED, corresponds to a synthetic view generated by raytracing the volume from the current sensor position.

Defines the basic tracking strategy to find the sensor position based on current and past sensor data.

Enumerator

REME_SENSOR_TRACKMODE_AUTO

Automatic mode. Try local search first. If that fails, attempt a global search followed by a local search. If the last tracking attempt was unsuccessful, start with global search immediately.

REME_SENSOR_TRACKMODE_LOCAL

Local search. Use local search only. Local search is fast and succeeds when the camera movement between two subsequent frames is small.

REME_SENSOR_TRACKMODE_GLOBAL

Global search. Use global search followed by a fine alignment using local search. Global search is slower than local search but succeeds in cases where the camera movement between two subsequent frames is rather large.
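As a sketch, selecting a strategy could look like this, continuing with c and s from the lifetime example and assuming a setter named reme_sensor_set_trackmode:

    /* Favor robustness for fast handheld movement: always run a global
       search followed by local refinement. */
    reme_sensor_set_trackmode(c, s, REME_SENSOR_TRACKMODE_GLOBAL);

    /* Or let ReconstructMe pick the strategy per frame. */
    reme_sensor_set_trackmode(c, s, REME_SENSOR_TRACKMODE_AUTO);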

REME_SENSOR_TRACKHINT_USE_GLOBAL

Temporarily switch to global search until tracking is found again. This hint is automatically cleared when tracking is found. It is a convenient way of letting the tracking module know that tracking should be done using global search until we are sure that tracking is found again.

REME_SENSOR_TRACKHINT_DONT_TRACK

Temporarily skip tracking for the current invocation. This is useful to avoid growing code complexity when tracking should not occur in a certain frame. Setting this track hint and invoking reme_sensor_track_position yields the same result as not calling reme_sensor_track_position for that frame.

A tracking hint is external, user-supplied information that supports the camera tracking module. Any tracking hint given remains active until the next call to reme_sensor_track_position.

Supplying tracking hints becomes useful when the caller has external knowledge unknown to the tracking module. For example, the caller might set REME_SENSOR_TRACKHINT_USE_GLOBAL to indicate that the tracking module should resort to global tracking in the next iteration.
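A sketch of this pattern, assuming the hint is applied through a setter named reme_sensor_set_trackhint:

    /* We know the camera was moved abruptly (e.g. the user paused and
       resumed scanning), so request a global search for the next attempt.
       The hint is cleared again by the call below. */
    reme_sensor_set_trackhint(c, s, REME_SENSOR_TRACKHINT_USE_GLOBAL);
    reme_sensor_track_position(c, s);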

Get the sensor recovery position with respect to the world coordinate frame.

Whenever the sensor loses track, it puts itself into the recovery pose and waits there for tracking to succeed. The recovery pose is updated automatically during ongoing tracking; i.e. when there is sufficient confidence that the last n frames were tracked successfully, ReconstructMe generates a new recovery pose.

REME_SENSOR_POSITION_INFRONT Assume, without loss of generality, that the sensor is held horizontally, pointing towards the target. The position is then chosen so that the z-axis of the world coordinate system is the natural up-direction, the sensor looks towards the positive y-axis of the world coordinate system, and the sensor is located at the center of the front face of the reconstruction volume, moved back (along the negative y-axis) by 300 units.

REME_SENSOR_POSITION_CENTER The sensor is placed in the center of the volume.

REME_SENSOR_POSITION_FLOOR The sensor is placed such that the volume is pinned to the floor according to reme_sensor_find_floor. This type makes use of the current depth map to determine the floor.

Assume, without loss of generality, that the sensor is held horizontally, pointing towards the target. The position is then chosen so that the z-axis of the world coordinate system is the natural up-direction, the sensor looks towards the positive y-axis of the world coordinate system, and the sensor is located at the center of the front face of the reconstruction volume, moved back (along the negative y-axis) by offset units.

Returns a transformation that positions the sensor in such a way that the volume is centered around the marker, with the volume bottom plane (xy-plane) offset units away from the xy-plane of the marker pose. A positive offset will therefore lift the volume above the marker, which is useful when you don't want data from the plane the marker resides on to be reconstructed.

Parameters

c

A valid context object

s

A valid sensor object

markerPose

Position of marker (reme_marker_detector_get_position) or any external coordinate frame.

offset

Amount of space between the bottom plane of the volume and the marker coordinate system. Positive values allow you to cut away data from the plane the marker resides in.
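A sketch of the intended usage; the entry point name reme_sensor_set_marker_position below is hypothetical (the real name is not shown in this excerpt), and the marker pose is assumed to be a 4x4 transform stored as 16 floats:

    /* Marker pose as obtained, e.g., from reme_marker_detector_get_position. */
    float marker_pose[16];
    /* ... fill marker_pose ... */

    /* Hypothetical entry point name; parameters follow the list documented
       above. Lift the volume 20 units above the marker plane. */
    reme_sensor_set_marker_position(c, s, marker_pose, 20.f);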

Set the sensor recovery position with respect to the world coordinate frame.

Whenever the sensor loses track, it puts itself into the recovery pose and waits there for tracking to succeed. The recovery pose is updated automatically during ongoing tracking; i.e. when there is sufficient confidence that the last n frames were tracked successfully, ReconstructMe generates a new recovery pose.
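A short sketch, assuming both accessors exchange the pose as a 4x4 transformation stored in 16 floats:

    /* Read the current recovery pose, adjust it if needed, write it back. */
    float recovery_pose[16];
    reme_sensor_get_recovery_position(c, s, recovery_pose);
    /* ... optionally modify recovery_pose ... */
    reme_sensor_set_recovery_position(c, s, recovery_pose);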

Position the sensor and volume with respect to each other using a predefined position.

Initially, the sensor position is the identity for all sensors. Calling this method changes both the sensor position and the recovery position to an auto-calculated sensor position based on the value of reme_sensor_position_t.

This method is a convenience helper: it computes the position corresponding to the given reme_sensor_position_t value and then applies it as both the current sensor position and the recovery position.
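Assuming the helper is invoked as reme_sensor_position (the exact entry point name is not shown in this excerpt), a one-line sketch:

    /* Place the sensor in front of the volume; this pose also becomes
       the recovery position. */
    reme_sensor_position(c, s, REME_SENSOR_POSITION_INFRONT);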

Each sensor might provide different frame types. Not all sensors support all frames, and the number of supported frames can be configuration dependent. See reme_sensor_image_t for a complete enumeration of available image types.

Memory Management Rules Exception

The returned image remains valid until the sensor is destroyed or the dimension of the image changes. The pointer is recycled internally, which means that it will point to different values each time the sensor images are updated.
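A sketch of fetching an image under these rules, assuming the pointer-returning variant of reme_sensor_get_image implied by the note above (the exact signature may differ in your SDK version; requires <stdlib.h> and <string.h>):

    /* Borrow the internal pixel buffer; length is the number of bytes. */
    const void *pixels = 0;
    int length = 0;
    reme_sensor_get_image(c, s, REME_IMAGE_AUX, &pixels, &length);

    /* The pointer is recycled on the next image update, so copy the data
       if it must outlive that update. */
    unsigned char *copy = (unsigned char*)malloc(length);
    memcpy(copy, pixels, length);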

Each sensor might provide different frame types. Not all sensors support all frames, and the number of supported frames can be configuration dependent. See reme_sensor_image_t for a complete enumeration of available image types.

Test if a specific image type was updated since the last call to reme_sensor_prepare_images or reme_sensor_prepare_image.

Sensor implementations are allowed to not provide an updated image for a specific type at all times. A notable example is the network sensor and its associated streaming service, which might opt to send a color image only every n-th frame to save bandwidth.

Parameters

c

A valid context object

s

A valid sensor object

it

Image type to access

result

Whether the image type was updated since the last call to reme_sensor_prepare_images or reme_sensor_prepare_image.
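Assuming the query is named reme_sensor_is_image_updated and follows the parameter list above, a sketch (requires <stdbool.h>):

    /* Redraw the color view only when a fresh auxiliary frame arrived. */
    bool updated = false;
    reme_sensor_is_image_updated(c, s, REME_IMAGE_AUX, &updated);
    if (updated) {
      /* ... fetch via reme_sensor_get_image and display ... */
    }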

The points are represented as an array of floats where each point consists of 4 coordinates: Px Py Pz Pw Px Py Pz Pw ... The w component is always zero. The i-th point starts at index i * 4 of the returned coordinate array.

The number of points returned corresponds to the number of pixels of the native sensor. That is, if your sensor has a resolution of 640x480 (cols x rows), the number of returned points is 640 * 480. The points are returned in row-wise order. Points that do not represent valid data are marked with a sentinel value (NAN) in their x-coordinate. To access the point corresponding to the pixel at row i and column j, use index i * cols * 4 + j * 4, where cols is the number of pixel columns of the native sensor.

Parameters

c

A valid context object

s

A valid sensor object

v

view type specification

coordinates

A mutable pointer to constant point data.

length

The number of coordinates returned. To get the number of points divide this value by 4.
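A sketch of the indexing scheme, assuming reme_sensor_get_points follows the parameter list above; reme_sensor_get_point_normals (documented next) uses the same layout (requires <math.h> for isnan):

    /* Borrow the point buffer of the reconstructed view. */
    const float *points = 0;
    int length = 0; /* number of floats = 4 * number of points */
    reme_sensor_get_points(c, s, REME_SENSOR_VIEW_RECONSTRUCTED, &points, &length);

    /* Access the point at pixel (row i, column j) of a 640x480 sensor. */
    int i = 240, j = 320, cols = 640;
    const float *p = points + (i * cols + j) * 4;
    if (!isnan(p[0])) {
      /* p[0], p[1], p[2] hold Px, Py, Pz; p[3] is always zero. */
    }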

The point normals are represented as an array of floats where each normal consists of 4 coordinates: Nx Ny Nz Nw Nx Ny Nz Nw ... The w component is always zero. The i-th normal starts at index i * 4 of the returned coordinate array.

The number of normals returned corresponds to the number of pixels of the native sensor. That is, if your sensor has a resolution of 640x480 (cols x rows), the number of returned normals is 640 * 480. The normals are returned in row-wise order. Normals that do not represent valid data are marked with a sentinel value (NAN) in their x-coordinate. To access the normal corresponding to the pixel at row i and column j, use index i * cols * 4 + j * 4, where cols is the number of pixel columns of the native sensor.

Parameters

c

A valid context object

s

A valid sensor object

v

view type specification

coordinates

A mutable pointer to constant normal data.

length

The number of coordinates returned. To get the number of normals divide this value by 4.

Colors are only available if the current compiled context supports colorization of vertices.

The point colors are represented as an array of floats where each color consists of 4 channels: r g b a r g b a ... The a component is always zero. The i-th color starts at index i * 4 of the returned array. The range of each channel is [0..1].

The number of colors returned corresponds to the number of pixels of the native sensor. That is, if your sensor has a resolution of 640x480 (cols x rows), the number of returned colors is 640 * 480. The colors are returned in row-wise order. To access the color corresponding to the pixel at row i and column j, use index i * cols * 4 + j * 4, where cols is the number of pixel columns of the native sensor.

Parameters

c

A valid context object

s

A valid sensor object

v

view type specification

channels

A mutable pointer to constant color data.

length

The number of channels returned. To get the number of colors divide this value by 4.
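A sketch, assuming the accessor is named reme_sensor_get_point_colors with the parameter list above:

    /* Borrow the color buffer; channels are floats in [0..1]. */
    const float *colors = 0;
    int length = 0; /* number of floats = 4 * number of colors */
    reme_sensor_get_point_colors(c, s, REME_SENSOR_VIEW_RECONSTRUCTED, &colors, &length);

    /* Convert the color at pixel (row i, column j) to 8-bit RGB. */
    int i = 240, j = 320, cols = 640;
    const float *rgba = colors + (i * cols + j) * 4;
    unsigned char r = (unsigned char)(rgba[0] * 255.f);
    unsigned char g = (unsigned char)(rgba[1] * 255.f);
    unsigned char b = (unsigned char)(rgba[2] * 255.f);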

In case REME_IMAGE_AUX or REME_IMAGE_DEPTH is passed, this method fetches the data into internal memory. In case REME_IMAGE_VOLUME is passed, the previously prepared REME_IMAGE_DEPTH is uploaded to the computation device for subsequent processing (reme_sensor_track_position, reme_sensor_update_volume).

This method is especially useful (compared to reme_sensor_prepare_images) when only depth and auxiliary image data are required. For example, when recording there is no need for REME_IMAGE_VOLUME, and it should be skipped so no time is wasted waiting for data to be uploaded to the computation device.
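A recording-style loop as a sketch, assuming a per-frame grab call named reme_sensor_grab and the REME_SUCCESS convenience macro for testing return codes:

    /* Record raw data only: skip REME_IMAGE_VOLUME and thus the upload
       to the computation device. */
    while (REME_SUCCESS(reme_sensor_grab(c, s))) {
      reme_sensor_prepare_image(c, s, REME_IMAGE_DEPTH);
      reme_sensor_prepare_image(c, s, REME_IMAGE_AUX);
      /* ... write both images to disk ... */
    }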

Tries to track the sensor movement by matching the current depth data against the perspective from the last position. Initially, the sensor position is the identity, unless otherwise specified.

Uses the current sensor position as the perspective to update the volume. If color support is enabled, this method also updates the colors. Use reme_sensor_update_volume_selectively to change that behaviour.
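Together with reme_sensor_track_position this yields the typical reconstruction loop; a sketch, again assuming reme_sensor_grab and REME_SUCCESS:

    /* Grab, prepare, track, and integrate frames while the sensor delivers. */
    while (REME_SUCCESS(reme_sensor_grab(c, s))) {
      reme_sensor_prepare_images(c, s);
      if (REME_SUCCESS(reme_sensor_track_position(c, s))) {
        /* Tracking succeeded: fuse the current depth frame into the volume. */
        reme_sensor_update_volume(c, s);
      }
    }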

The algorithm works best when a large portion of the image is covered by floor data and the sensor is held without roll (i.e. no rotation around the sensor's z-axis). Note that vertical, front-facing walls can be erroneously detected as floors if they make up the major part of the sensor image.

The floor is returned as a coordinate frame with the following properties:

the origin is located at the intersection of the sensor view direction and the estimated floor plane

the z-axis is normal to the floor plane and points towards the natural ceiling
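A sketch of querying the floor, assuming reme_sensor_find_floor returns the coordinate frame as a 4x4 transform in 16 floats (the exact signature may differ in your SDK version):

    /* Detect the floor from the current depth map. */
    float floor_frame[16];
    if (REME_SUCCESS(reme_sensor_find_floor(c, s, floor_frame))) {
      /* Origin: sensor view direction intersected with the floor plane.
         z-axis: floor normal pointing towards the ceiling. */
    }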