For the last year or so, I’ve been researching what would be needed to produce something in the domain of 3D scanning comparable to what we already have in the RepRap project. In the current state of hardware, there are a lot of great one-off projects to learn from (though they aren’t designed with easy repeatability in mind). The corresponding state of scanning software is not so good - people are forced to use proprietary software due to the lack of quality free software or open source alternatives.

Aleph Objects, Inc. has recently committed to developing a libre [hard/soft]ware 3D scanner, a project I’ve been put in charge of. What follows below are my notes on the project, primarily on the software that will be developed for it.

I’d like to lay down some constraints for this project:

1) The hardware must be libre hardware (with the acceptable allowance here being for standardized components that are easily sourced or replaced, e.g. CMOS cameras).

2) The software must be free software.

3) The product needs to cost under $1000.

And, because LulzBot has a reputation to live up to about producing totally awesome stuff that is actually usable by normal people, I’m going to add:

4) The software’s output must be a printable STL file in the vast majority of cases. E.g. generating a point cloud and telling the user to get cracking with MeshLab is not OK.

With those four constraints, there is literally nothing out there in terms of existing projects that we can just manufacture and sell.

The Hardware
Early in the development process, the hardware is going to be simulated (likely using Blender), so it is less important to describe now. To be brief, it is going to be a modification of the laser-line scanner. What is different about this design is that the scan area is contained in an enclosure.

The gist of the hardware is this:

- a box
- a laser + clear dowel
- a turn table
- two cameras, along the same radius from the turn table - one looking down slightly from above, one looking up slightly from below
- a micro controller, maybe a BBB or an Arduino

Nothing hard to build here.

The Software
For about two or three years now I’ve been working on a voxel-based slicing engine. The algorithm was originally conceived as a means of converting solid-appearing (yet totally non-manifold) 3D meshes into manifold ones. While not originally intended for it, the method adapts well to scanning, because a sufficiently dense point cloud is an example of a “solid-appearing, non-manifold object”.

The algorithm for scanning assumes there is a model on a turn table, with one or two cameras and a laser line in fixed locations with known orientations.

Step 0:

Calibration is done with the scan chamber empty. For each camera, take a picture and save it for future reference. This will be referred to as the [bg reference].

Step 1:

An object is placed in the scan chamber. A solid voxel model representing the scannable area is instanced. This will be referred to as the [scan positive].

Step 2:

For each step on the turn table, for each camera:

A: Take a picture, with the laser line off but the object lit - this will be referred to as the [scan sample].

B: The [bg reference] and the [scan sample] are used to make the [contour mask]. In a perfect world, this would be achieved by subtracting the [bg reference] from the [scan sample] and thresholding the result to produce a black and white bitmask, the [contour mask]. In reality, OpenCV might be useful here. Remove noise from the [contour mask], if necessary.
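A minimal Pillow sketch of the perfect-world version of step 2B, assuming two same-sized RGB renders (the function name, threshold value, and demo images are hypothetical stand-ins):

```python
from PIL import Image, ImageChops

THRESHOLD = 30  # grayscale difference from the [bg reference] that counts as "object"

def make_contour_mask(bg, sample, threshold=THRESHOLD):
    """Black/white bitmask: white where the [scan sample] differs from the [bg reference]."""
    diff = ImageChops.difference(bg.convert("RGB"), sample.convert("RGB"))
    gray = diff.convert("L")  # collapse the three channels so a single threshold applies
    return gray.point(lambda v: 255 if v > threshold else 0)

# Tiny stand-in for real captures: a gray "empty chamber" render and the
# same scene with a bright square "object" pasted in.
bg = Image.new("RGB", (64, 64), (40, 40, 40))
sample = bg.copy()
sample.paste((200, 180, 50), (16, 16, 48, 48))
mask = make_contour_mask(bg, sample)
```

Per the step’s convention, black pixels in the mask are background and white pixels belong to the object.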

C: For each black pixel in the [contour mask]: assuming the [contour mask] is the back of the camera’s viewing frustum, cast a ray from the camera location through the pixel in the mask. Delete all voxels in the [scan positive] that intersect with the ray. This will capture all convex details of the object. Scan resolution is directly determined by the camera’s resolution.
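Step 2C amounts to space carving. A heavily simplified sketch, assuming an orthographic camera so each background pixel carves a straight column of voxels (a real implementation would cast perspective rays from the camera location):

```python
import numpy as np

def carve(scan_positive, contour_mask):
    """Delete every voxel 'behind' a background (False) pixel of the mask.

    scan_positive: 3D bool array (x, y, z); True means solid.
    contour_mask:  2D bool array (x, z) as seen by a camera looking along y;
                   True = object pixel, False = background pixel.
    """
    for x in range(contour_mask.shape[0]):
        for z in range(contour_mask.shape[1]):
            if not contour_mask[x, z]:
                scan_positive[x, :, z] = False  # the whole "ray" behind the pixel
    return scan_positive

grid = np.ones((4, 4, 4), dtype=bool)  # the scannable area starts fully solid
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                  # a 2x2 object silhouette
carve(grid, mask)                      # only the 4 silhouette columns survive
```

Repeating this for every turn table step carves the model from many angles, which is what makes the convex details fall out of the silhouettes alone.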

D: Take a picture, with the laser line on but other lighting off - this will be referred to as the [line sample].

E: For every white pixel in the [contour mask] whose corresponding pixel in the [line sample] is close in color to the expected color of the laser line: use parallax to calculate the pixel’s 3D coordinates relative to the camera and laser line. Project a line from the camera through the pixel’s calculated 3D coordinates to the first voxel along that vector past the pixel. Draw voxels along the line segment between the collision and the pixel. It may be necessary to generate a frustum or cone instead of a line. These new voxels are added to the [scan positive]. This will capture all concave details of the object, at a somewhat lower resolution than the convex details.
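The parallax calculation in step 2E reduces to classic laser-line triangulation. A sketch under idealized pinhole assumptions (camera at the origin looking along +z, laser offset by `baseline` along x, laser plane tilted by `laser_angle` toward the optical axis; all names and the geometry are illustrative, not the final calibration model):

```python
import math

def triangulate(u_px, f_px, baseline, laser_angle):
    """3D position (x, z) of a laser-lit pixel relative to the camera.

    u_px:        horizontal pixel offset of the lit pixel from the image center
    f_px:        focal length expressed in pixels
    baseline:    camera-to-laser distance (output units follow this)
    laser_angle: tilt of the laser plane from the optical axis, in radians

    The laser plane satisfies x = baseline - z * tan(laser_angle), and the
    pinhole projection gives x = u_px * z / f_px; solving the two for z.
    """
    z = baseline * f_px / (u_px + f_px * math.tan(laser_angle))
    x = u_px * z / f_px
    return x, z

# With the laser plane parallel to the optical axis (angle 0) and the lit
# pixel exactly 45 degrees off-axis (u_px == f_px), depth equals the baseline.
x, z = triangulate(500, 500, 100.0, 0.0)
```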

Step 3:

At this point, the [scan positive] is a voxel model that closely resembles the topology of the object being scanned. However, interior voxels need to be removed. Thinking in terms of a cubic grid, the outermost shell of the grid needs to be completely empty. This should be done either by deleting the outermost grid “shell” from the [scan positive] (not to be confused with the outermost layer of the actual voxel data) or by doing the 3D equivalent of increasing the “canvas size” by 2 on all axes in GIMP and centering the image.
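With the voxel data in a 3D array, the “canvas size” trick from step 3 is a single padding call (a sketch assuming NumPy and a toy model):

```python
import numpy as np

scan_positive = np.ones((3, 3, 3), dtype=bool)  # toy stand-in for the real model

# Grow the grid by one voxel on every side, filled with empty space: the 3D
# equivalent of increasing the canvas size by 2 on all axes and centering
# the image. The outermost shell of the grid is now guaranteed empty.
padded = np.pad(scan_positive, 1, mode="constant", constant_values=False)
```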

Step 4:

Define a new, empty voxel model - called the [scan negative] - and do a flood fill on the [scan positive] starting from one of the corners to determine the volume of space around the scanned object. This data is added to the [scan negative].
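Step 4 is a plain 6-connected flood fill. A sketch (NumPy arrays, breadth-first fill from one corner; assumes step 3 already emptied the outer shell):

```python
from collections import deque
import numpy as np

def scan_negative(scan_positive):
    """Flood-fill the empty space around the model, starting from a corner."""
    neg = np.zeros_like(scan_positive)
    queue = deque([(0, 0, 0)])
    neg[0, 0, 0] = True
    while queue:
        x, y, z = queue.popleft()
        for nx, ny, nz in ((x+1, y, z), (x-1, y, z), (x, y+1, z),
                           (x, y-1, z), (x, y, z+1), (x, y, z-1)):
            if (0 <= nx < neg.shape[0] and 0 <= ny < neg.shape[1]
                    and 0 <= nz < neg.shape[2]
                    and not scan_positive[nx, ny, nz]
                    and not neg[nx, ny, nz]):
                neg[nx, ny, nz] = True
                queue.append((nx, ny, nz))
    return neg

# Toy model: a 3x3x3 solid block with a hidden 1-voxel cavity at its center.
pos = np.zeros((5, 5, 5), dtype=bool)
pos[1:4, 1:4, 1:4] = True
pos[2, 2, 2] = False
neg = scan_negative(pos)
result = ~neg  # step 5: the inversion of the [scan negative] is the [scan result]
```

Note how the hidden cavity is never reached by the fill, so it is absorbed into the inverted [scan result] - exactly the property that makes the final mesh manifold.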

Step 5:

Delete the [scan positive] (or perhaps keep it as a reference for color values). The inversion of the [scan negative] is the [scan result]. From this point, the exterior voxels may be easily identified (by their adjacent neighbors or lack thereof), and a high poly STL may be generated via the marching cubes algorithm.
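Identifying the exterior voxels “by their adjacent neighbors” can be done with six shifted comparisons (a NumPy sketch; a real implementation would then hand these voxels to marching cubes):

```python
import numpy as np

def surface_voxels(solid):
    """Solid voxels with at least one of their 6 face neighbors empty."""
    p = np.pad(solid, 1, constant_values=False)
    all_neighbors_solid = (
        p[2:, 1:-1, 1:-1] & p[:-2, 1:-1, 1:-1] &   # +x / -x neighbors
        p[1:-1, 2:, 1:-1] & p[1:-1, :-2, 1:-1] &   # +y / -y neighbors
        p[1:-1, 1:-1, 2:] & p[1:-1, 1:-1, :-2]     # +z / -z neighbors
    )
    return solid & ~all_neighbors_solid

cube = np.ones((3, 3, 3), dtype=bool)
shell = surface_voxels(cube)  # every voxel except the fully enclosed center
```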

Step 6:

Save the result as an STL file.

So, that’s the basic algorithm for scanning. The “casting” that happens in steps 3 and 4 might not be necessary - it mostly depends on how noisy the laser line generated voxel data is.

I mentioned before that the hardware of the scanner was going to be simulated at first, in Blender of all things. What is meant by that is that the basic geometry of the scanner (as seen from the cameras’ perspectives) will be built out in Blender. The object to be scanned will be parented to the turn table, and Python scripting will be used to turn lights on and off, rotate the object, and capture data from the cameras. Noisy rendering settings may also be used to make things interesting.

Simulating it in this way will allow me to tweak the design of the scanner (e.g. what backdrop works best for a variety of objects). It also would be fun software to release for people to play with once the algorithm is implemented, and could be useful for repairing models.

If you decide to start investigating the projector based reflection scanner approach as well, let me know. I may be able to assist in testing, etc as I have access to various microprojectors, including some autofocus equipped units. Anyways, sounds neat, can’t wait to see what you come up with!

I would be interested in building one if the cost of prototyping isn’t too prohibitive.

Yes. The software that will be developed for this will be available on GitHub as I’m developing it. The hardware will be on dev.lulzbot like all of our other products when development on it starts.

By design, this should be a DIY friendly device - the overall BOM cost is relatively low as far as these things go, and won’t rely on anything too difficult to source. You’ll be able to test out the product as we develop it.

I put together a basic simulation of a laser line scanner, using Blender. I’ve uploaded a rendering of the scan data to youtube.

A python script will be able to control the simulated scanner. For example, picking which camera to pull data from, and activating / deactivating light sources. The fake scanner will be used to provide fake scan data which I can use to start putting together the software process described above. This will also be useful for tinkering with the conceptual layout of the hardware to see what effect it has on scan quality.

So far, I’m getting enough mileage out of just using Pillow ( http://pillow.readthedocs.org/en/latest/ , fork of “PIL” - the Python imaging library) to put together proofs of concept for what I’m after. OpenCV will undoubtedly be useful for doing things like de-noising, and other things to normalize for “real world” conditions. Right now, I’m working on putting together the basic scanning pipeline.

By the way, I threw together a quick script that takes two rendered “scan” images (one with the object to be scanned, and one with just the background) and creates a mask image of where it thinks the object is.

Note that the “autocontrast” method determines the lightest and darkest colors in the image and adjusts the color curve so that those colors become black and white respectively. The global variable “THRESHOLD” determines the color distance from (0,0,0) that counts as “black”, and thereby where the mask is clamped. The image map “hack” is there because Pillow provides an easy and FAST way to apply a per-channel map to every pixel, but not a per-pixel map across all channels at once. Ideally at this point, I’d just select everything that isn’t black and make it white, but there wasn’t a clear method for that which was also fast. Converting the image to grayscale and boosting the contrast a ton accomplished the same effect quickly.

Note that the above method does NOT account for noise; in reality the difference map won’t conveniently be perfectly black where the background is. This is fine for now to build the rest of the pipeline on, but the virtual scanner will need to be changed to produce noisier images (specifically using indirect lighting etc. so that some of the object bleeds onto the turntable, plus an overlaid noise field that differs from scan to scan to make things interesting). OpenCV will come in handy here for de-noising at the very least. Pillow will still be useful, but some statistical analysis of the images will need to be done to determine the correct thresholds for the filters.

Also, as I’m sure some are wondering, the code example runs pretty much instantly, because Pillow is super awesome.

[edit]
No idea why one of the images isn’t scaled - all of them are the same dimensions. shrugs

Out of curiosity, I ran a test to see what would happen when the source data was really dirty. In Blender, I added ambient occlusion to the render, and via the node editor, a different noise map is mixed in for the scan and the bg, so theoretically none of the pixels are perfectly identical between the two images, despite looking similar. To make things interesting, the images are also given a slight Gaussian blur (prior to application of the noise map), and saved as 90% quality JPEGs. I also changed the scanner’s materials so that the inside is matte black, and the lights are much brighter.

I tried a couple of PIL-only methods for dealing with the noise, but the results were unusable for my purposes.

Also, the threshold and noise settings are hard coded - if the object being scanned doesn’t affect this too much, we might be able to get away with just having these values hard coded to whatever makes sense for the scanner.

[edit]
Also, here is another version that doesn’t use OpenCV, is faster, and produces somewhat cleaner output:

Note that there is distortion below the object because the table’s surface is kind of glossy. The scan interior would likely be either matte black or a (possibly retroreflective) surfacing of a very specific color (e.g. lime green or safety orange). Translucent objects most certainly will not work unless the background is textured (but that is problematic with a turn table). Reflective objects might be scannable. It may also be the case that the scanner could have different settings for different types of objects, so that more things are scannable, albeit with differing overall quality. Also worth noting that the scanner is planned to be enclosed, not open like most 3D scanners.

I will look to pick up some brightly colored construction paper and try this experiment again later on.

I have used Prang tempera craft paint to coat parts that were subject to thermal testing with an infrared inspection system. In the testing case, we used black because it provided a uniform emissivity value across the part. That is necessary to obtain accurate temperature readings. In your case, any color should work. The paint provides a nice flat finish.

Quick update: I’ve been building out some tools for working with voxel data, which will be used in the scanner software. You can find my code so far here: https://github.com/alephobjects/libvoxel. Nothing photogenic, however.

In a meeting today we discussed what would be needed to start building prototype hardware, which will happen in parallel with me building the software for it.

I’ve used Blender’s bisect feature to show the difference between the two models. The model on the left is a hollow sphere; the model on the right is the cast version. They both have an identical outer topology, but the cast version lacks the internal topology.

This is an essential component for the scanner to be able to output manifold stl files instead of messy point clouds.