
Today, it’s our pleasure to announce a completely new scanning strategy that improves hand-held scanning in many ways. Although we are still at the prototype stage, we wanted to share our latest achievements with our readers.

Limitations of hand-held scanning

One of the major issues with hand-held scanning is that the set of tolerated scan motions does not match your natural sequence of movements. This means you are often constrained to move more slowly than intended; in addition, you have to make sure that the scanner points at the areas of interest and keeps a certain distance from the object being scanned. Violating any of these constraints leads to ‘tracking lost’ scenarios and corrupted data. We’ve seen inexperienced users frustrated by these implicit scanning assumptions more than once. Moreover, this frustration quickly turned into rejection of 3D scanning technology altogether.

Improving usability

So, we thought about ways to improve the usability of the system and came up with the following. In the video linked below you can see a new low-cost 3D scanning device that does not lose track no matter how jerky the movements are.

Features at a glance

Robustness
The new system is robust to any kind of jerky movement. Move naturally and never lose track again. If you put the scanner aside for a break, you can immediately resume scanning from any location within the scanning area.

High accuracy
The system offers constant error behaviour across the scanning area. Accumulation of errors due to drift is suppressed, and tracking accuracy is largely independent of the surface material and the geometric structure of the scene.

We’ve just released a new version of the ReconstructMe SDK that supports all currently available Intel RealSense models (F200, R200, SR300). reme_sensor_create now accepts a librealsense driver argument that will try to open the default Intel RealSense camera. More options can be set via sensor configuration files. Multiple-camera support is also available. See reme_sensor_bind_camera_options for a list of available options.
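As a rough sketch of how the new driver argument might be used, the C fragment below opens the default RealSense camera via librealsense. Only reme_sensor_create and reme_sensor_bind_camera_options are named above; the surrounding context and error-check calls are assumptions based on the usual SDK workflow, so verify the exact signatures against the installed SDK headers.

```c
/* Illustrative sketch only: opening the default Intel RealSense camera
 * through the new librealsense driver. Requires the ReconstructMe SDK
 * headers and library; call names beyond those mentioned in the post
 * are assumptions and may differ from your SDK version. */
#include <stdbool.h>
#include <reconstructmesdk/reme.h>

int main(void) {
  reme_context_t c;
  reme_context_create(&c);

  /* Request the default RealSense camera via the librealsense driver. */
  reme_sensor_t s;
  if (REME_SUCCESS(reme_sensor_create(c, "librealsense", true, &s)) &&
      REME_SUCCESS(reme_sensor_open(c, s))) {
    /* ... grab frames and reconstruct as usual ... */
    reme_sensor_close(c, s);
  }

  reme_context_destroy(&c);
  return 0;
}
```

Per-camera options (resolution, streams, multiple devices) would then be tuned through sensor configuration files rather than in code.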

This summer ReconstructMe is launching SelfieHD. SelfieHD is a tech preview of a new product that simplifies generating high-resolution 3D busts like the one shown above. If you are interested in an early access account, please fill out the following form. Note that the number of early access tickets is limited and participants will be selected manually based on the input provided.

Good morning everyone! We had a great Long Night of Research in April this year, with more than 300 people visiting us at PROFACTOR. We scanned more than 80 people with ReconstructMe, using a turntable and a single Intel SR300 camera. Here’s a good example of what those scans look like.

All models were generated automatically, without any manual interaction whatsoever! Please note that the models are uncompressed and take a while to load in your browser. Head over to the entire scan collection.

License

Unless otherwise stated, all 3D model files are licensed under CC BY-NC 4.0. This means you can share or adapt them as long as you give appropriate credit and don’t use the material for commercial purposes.

Our team has been working hard over the past couple of months to improve the overall reconstruction quality of ReconstructMe. We’ve focused much of our attention on generating photorealistic 3D scans using low-cost consumer sensors.

What we’ve come up with is a unique texturing pipeline that runs fully automatically and is able to compensate for most of the artifacts caused by illumination, motion and other sources of error. The interactive 3D viewer below shows a 3D bust generated using this work-in-progress technology.

The setup consists of a single Intel sensor and a standard desktop PC running ReconstructMe. The bust was generated automatically, with no manual interaction whatsoever.

Please be patient while loading as the geometry and textures are uncompressed.

We are proud to announce ReconstructMe v2.5.1034. This update simplifies and improves the configuration of your sensor. Either select a supplied configuration, tailor-made for every supported sensor, or write and tweak your own configuration as before.

Additionally, we made improvements to the user interface as we refactored parts of the UI code – most importantly, ReconstructMe is now DPI aware and can be used out of the box on high-resolution displays.

We also improved the rendering code, resulting in less overhead and more efficient usage of the GPU.

As you know, ReconstructMe already supports a variety of commodity 3D cameras, and we are working hard on integrating new and exotic ones as soon as we take notice of them. We felt it was about time to put the details into perspective, so we are kicking off a camera review series covering sensor specifications, installation instructions and more.

On behalf of the ReconstructMe team, I’m proud to announce ReconstructMe v2.4.1016. This update improves support for the following sensors:

Intel RealSense F200

Intel RealSense R200

You can grab the latest version from our download page. We are releasing this version free of charge for non-commercial projects as announced recently.

Usage

To use Intel RealSense cameras on your computer you will need to install Intel RealSense camera drivers and use the correct ReconstructMe sensor configuration files. For your convenience, you can download both from below.

Once you have installed the necessary components, open ReconstructMe and set the path to the configuration file as shown in the screenshot below.

Troubleshooting

Please note that Intel recommends connecting the sensors directly to a dedicated USB 3 port. Avoid using hubs or extension cables. If your sensor does not respond for a longer period of time, restarting the Intel depth camera services might help. You can find these services in the local services management console, as shown below.

From now on, ReconstructMe – our user interface for digitizing the world in 3D – is available to everyone for free!

We offer ReconstructMe free of charge and without limitations for private and non-commercial projects. This means you can download ReconstructMe and use it for everything from scanning for 3D printing to architecture, documentation and animation. For commercial purposes we continue to offer royalty-based licenses of ReconstructMe and the ReconstructMe SDK.

Head over to the download area and grab the latest version in order to set it free. If you already have ReconstructMe licensed but your license has expired, re-open ReconstructMe and it will run in non-commercial mode instead of unlicensed mode.

Use the opportunity to discuss with numerous international experts from 10 countries and 3 continents, both from science and industry, what 3D printing technologies offer today and what they can be expected to offer in the future.

What is special about Add+it 2015?

Workshops provide the opportunity to interact with participants and experts, discuss relevant 3D printing issues and initiate possible further business cooperation.

As announced in our previous post, we added a new color tracking feature to the SDK and promised to release a new UI frontend version supporting it. Today, on behalf of the ReconstructMe team, it is my pleasure to announce this new frontend release.

In the video below you can see ReconstructMe UI in action. Both scenes are tracked mainly by color information, as the geometric information alone (a planar shape in the first scene and a cylindrical shape in the second) does not suffice to estimate the camera position accurately.

Color tracking is currently enabled for all sensors that support an RGB color stream. Algorithm settings are chosen automatically, so you don’t have to configure anything. If your sensor does not support RGB, the algorithm gracefully falls back to geometric tracking only. Note that scanning in color is not a requirement for the color tracking algorithm to work properly.

Here are some tips for best results:

Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with lack of information in both streams, we need at least some information to be present in the scene.

Try to get around 25–30 frames per second. Color tracking requires small increments in the camera transformation between frames, otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly higher runtime footprint.

Try to avoid fast camera motions that potentially blur color images.

Try to avoid reflective materials. Although a reflection appears as texture, it visually changes when moving the camera.

After weeks of hard work we are proud to announce a new upcoming feature called color tracking. Color tracking incorporates color information into camera motion estimation. This allows ReconstructMe to maintain tracking over planar regions, cylindrical shapes and other primitive shapes. The following video shows some challenging reconstructions that succeed with the help of color tracking.

The new tracking algorithm seamlessly blends geometric and color information together, leading to an overall improved tracking performance in almost all situations. During development we’ve paid attention to robustness and runtime. As far as robustness is concerned, we’ve made sure that fast variations in illumination or camera auto exposure do not affect the tracking performance.

From a developer and user point of view you should be aware of the following points to maximize tracking stability.

Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with lack of information in both streams, we need at least some information to be present in the scene.

Try to get around 25–30 frames per second. Color tracking requires small increments in the camera transformation between frames, otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly higher runtime footprint.

Try to avoid fast camera motions that potentially blur color images.

Discard the first few camera frames, as we have observed cameras varying their exposure vastly during these frames.

Make sure that the color camera is aligned with the depth camera in both space and time.

In case tracking fails, we’ve also added a recovery strategy that takes color information into account. This global color tracking allows you to recover by bringing the sensor into a position close to the recovery position shown in the following video.

Our roadmap foresees that we first release a new end-user UI version supporting color tracking in the coming days. This will allow many people to test the current state of the algorithm and provide us with valuable feedback.