Today, it’s our pleasure to announce a completely new scanning strategy that improves hand-held scanning in many ways. Although we are still at the prototype stage, we wanted to share the latest achievements with our readers.

Limitations of hand-held scanning

One of the major issues with hand-held scanning is that the set of tolerated scan motions does not match your natural sequence of movements. This means that you are often constrained to move more slowly than intended, and you also have to make sure that the scanner points at areas of interest and maintains a certain distance from the object being scanned. Violating any of these constraints leads to ‘tracking lost’ errors and corrupted data. We’ve seen inexperienced users frustrated by these implicit scanning assumptions more than once. Moreover, this frustration quickly turned into rejection of 3D scanning technology altogether.

Improving usability

So, we thought about ways to improve the usability of the system and came up with the following. In the video linked below, you can see a new low-cost 3D scanning device that does not lose track, no matter how jerky the movements are.

Features at a glance

Robustness
The new system is robust to any kind of jerky movement. Move naturally and never lose track again. If you put the scanner aside for a break, you can immediately resume scanning from any location within the scanning area.

High accuracy
The system offers consistent error behaviour across the scanning area. Accumulation of errors due to drift is suppressed, and the tracking accuracy is largely independent of the surface material and the geometric structure of the scene.

We’ve just released a new version of the ReconstructMe SDK that supports all currently available Intel RealSense models (F200, R200, SR300). reme_sensor_create now accepts a librealsense driver argument that will try to open the default Intel RealSense camera. More options can be set via sensor configuration files, and multiple-camera support is also available. See reme_sensor_bind_camera_options for a list of available options.
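As a rough sketch of how driver selection might look in such a sensor configuration file, consider the hypothetical fragment below. The option names here are illustrative assumptions only; the authoritative list of options comes from reme_sensor_bind_camera_options.

```
# Hypothetical sensor configuration sketch -- the option names are
# illustrative only. Query reme_sensor_bind_camera_options for the
# options your SDK version actually exposes.
driver: "librealsense"   # open an Intel RealSense camera via librealsense
index: 0                 # which camera to use when several are connected
```

With multiple cameras connected, a per-camera configuration file like this would let you assign each sensor explicitly rather than relying on the default device.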

This summer, ReconstructMe is launching SelfieHD. SelfieHD is a tech preview of a new product that simplifies generating high-resolution 3D busts like the one shown above. If you are interested in an early access account, please fill out the following form. Note that the number of early access tickets is limited and participants will be selected manually based on the input provided.

Good morning everyone! We had a great Long Night of Research in April this year, with more than 300 people visiting us at PROFACTOR. We scanned more than 80 people using a turntable and a single Intel SR300 camera running ReconstructMe. Here’s a good example of what those scans look like.

All models have been generated automatically, without any manual interaction whatsoever! Please note that the models are uncompressed and take a while to load in your browser. Head over to the entire scan collection.

License

Unless otherwise stated, all 3D model files are licensed under CC BY-NC 4.0. This means you can share or adapt them as long as you give appropriate credit and don’t use the material for commercial purposes.

Our team has been working hard over the past couple of months to improve the overall reconstruction quality of ReconstructMe. We’ve focused much of our attention on generating photo-realistic 3D scans using low-cost consumer sensors.

What we’ve come up with is a unique texturing pipeline that runs fully automatically and is able to compensate for most of the artifacts caused by illumination, motion and other sources of error. The interactive 3D viewer below shows a 3D bust generated using this work-in-progress technology.

The setup consists of a single Intel sensor and a standard desktop PC running ReconstructMe. The bust was generated automatically, with no manual interaction whatsoever.

Please be patient while loading, as the geometry and textures are uncompressed.

As you know, ReconstructMe already supports a variety of commodity 3D cameras, and we work hard to integrate new and exotic ones as soon as we take notice of them. We felt it was about time to put the details into perspective. Therefore, we are kicking off a camera review series covering sensor specifications, installation instructions and more.

On behalf of the ReconstructMe team, I’m proud to announce ReconstructMe v2.4.1016. This update improves support for the following sensors:

Intel RealSense F200

Intel RealSense R200

You can grab the latest version from our download page. We are releasing this version free of charge for non-commercial projects as announced recently.

Usage

To use Intel RealSense cameras on your computer, you will need to install the Intel RealSense camera drivers and use the correct ReconstructMe sensor configuration files. For your convenience, you can download both below.

Once you have installed the necessary components, open ReconstructMe and set the path to the configuration file as shown in the screenshot below.

Troubleshooting

Please note that Intel recommends connecting the sensors directly to a dedicated USB 3 port. Avoid using hubs or extension cables. If your sensor does not respond for a longer period of time, restarting the Intel depth camera services might help. You can easily find these services in the local services management console, as shown below.

From now on, ReconstructMe – our user interface for digitizing the world in 3D – is available to everyone for free!

We offer ReconstructMe free of charge and without limitations for private and non-commercial projects. This means you can download ReconstructMe and use it for everything from scanning for 3D printing to architecture, documentation and animation. For commercial purposes, we continue to offer royalty-based licenses of ReconstructMe and the ReconstructMe SDK.

Head over to the download area and grab the latest version to set it free. If you already have ReconstructMe licensed but your license has expired, simply re-open ReconstructMe and it will run in non-commercial mode instead of unlicensed mode.

As announced in our previous post, we added a new color tracking feature to the SDK and promised to release a new UI frontend version supporting it. Today, on behalf of the ReconstructMe team, it is my pleasure to announce this new frontend release.

In the video below you can see ReconstructMe UI in action. Both scenes are tracked mainly through color information, as the geometric information alone (a planar shape in the first scene and a cylindrical shape in the second) does not suffice to estimate the camera position accurately.

Color tracking is currently enabled for all sensors that support an RGB color stream. Algorithm settings are chosen automatically, so you don’t have to configure anything. If your sensor does not support RGB, the algorithm gracefully falls back to geometric tracking only. Note that scanning in color is not a requirement for the color tracking algorithm to work properly.

Here are some tips for best results:

Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with a lack of information in either stream, we need at least some information to be present in the scene.

Try to achieve around 25–30 frames per second. Color tracking requires small increments in the camera transformation between frames; otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly increased runtime footprint.

Try to avoid fast camera motions that potentially blur color images.

Try to avoid reflective materials. Although a reflection appears as texture, it changes visually when the camera moves.

After weeks of hard work, we are proud to announce a new upcoming feature called color tracking. Color tracking incorporates color information into camera motion estimation. This allows ReconstructMe to maintain tracking over planar regions, cylindrical shapes and other primitive shapes. The following video shows some challenging reconstructions that succeed with the help of color tracking.

The new tracking algorithm seamlessly blends geometric and color information together, leading to an overall improved tracking performance in almost all situations. During development we’ve paid attention to robustness and runtime. As far as robustness is concerned, we’ve made sure that fast variations in illumination or camera auto exposure do not affect the tracking performance.

From a developer’s and user’s point of view, you should be aware of the following points to maximize tracking stability.

Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with a lack of information in either stream, we need at least some information to be present in the scene.

Try to achieve around 25–30 frames per second. Color tracking requires small increments in the camera transformation between frames; otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly increased runtime footprint.

Try to avoid fast camera motions that potentially blur color images.

Discard the first few camera frames, as we have observed cameras varying their exposure significantly in these frames.

Make sure that the color camera is aligned with the depth camera in both space and time.

In case tracking fails, we’ve also added a recovery strategy that takes color information into account. This global color tracking allows you to recover by bringing the sensor to a position close to the recovery position shown in the following video.

Our roadmap foresees that we first release a new end-user UI version supporting color tracking in the coming days. This will allow many people to test the current state of the algorithm and provide us with valuable feedback.

We have just released a new version of ReconstructMe. This is a bug-fix release that resolves immediate tracking-lost issues when starting a scan on NVIDIA cards. Our users reported the issue mostly on the following models: GTX750, GTX970, GTX960, GTX840M, GTX850M. If you are affected, please try the latest version.

We are happy to announce the release of ReconstructMe 2.3.952 today. The latest version can be downloaded here.

I’d like to briefly introduce the new SDK / UI features here and provide in-depth information in upcoming blog posts. The SDK / UI now supports the Intel RealSense F200 camera, and we’ve reworked the sensor positioning API to allow more fine-grained control over the scan start position of the sensor with respect to the volume.

The UI now supports a rich set of sensor positioning options, including positioning the sensor based on a special marker seen by the sensor. This feature allows you to easily position the volume in world space. The following video shows a turntable reconstruction of a toy horse using marker positioning and the Intel RealSense F200 camera.

If you would like to try out the new Intel RealSense F200 camera, please download this sensor configuration file. You will need to specify the path to this file in the UI at Device / Sensor.

If you want to give marker positioning a try, please download this marker image, print it and measure the printed size in millimeters. Make sure to leave a generous white border around the marker. You will need to set the correct marker size in the UI at Volume / Volume Position. We usually print the marker at a size of 90 millimeters. When using marker positioning, make sure the sensor captures the entire marker.

When you notice that the sensor position starts to vary as you move the marker, you know that ReconstructMe has detected it. Once ReconstructMe has found the marker, you can use the Offset slider to adjust where the volume starts.