I’m getting into the fancy new Core depth scanner and SDK. I’ve been able to capture a point cloud from a single depth frame and process it using the Point Cloud Library (PCL), with the end goal of generating a model/mesh of the scene (I’m close, but surface reconstruction is still elusive).

I’m scanning a stationary object (a human leg) with a Core at a fixed location (on a tripod, angled down at the leg). My goal is to create as accurate a scan of the leg as possible, model it in 3D modelling software to create AFOs (Ankle Foot Orthotics, basically leg braces), and then 3D print those.

I’ve set up a similar proof of concept with the Sensor before using OpenNI. It was close but the quality was just a tad shy of what we needed. I’m hoping the Core’s improved quality will cross the line and make this a viable option.

I’ve noticed that the depth data is not always consistent across frames. If I process, say, 20 consecutive frames from the same scan (repeatable using an OCC recording), the depth at the same x, y point can vary by a few millimetres from frame to frame. Since my goal is accuracy, I’m planning to average the depth over about 1 second’s worth of scan data to get what I hope is the most accurate depth value for that x, y.
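The averaging I have in mind looks roughly like this sketch (my assumptions, not SDK code: each frame is a row-major buffer of float depths in millimetres, and 0/NaN marks an invalid reading, which is how depth sensors commonly report dropouts):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-pixel temporal average over N depth frames. Each pixel is
// averaged only over the frames in which it had a valid reading,
// so dropouts don't drag the average towards zero.
std::vector<float> averageDepthFrames(const std::vector<std::vector<float>>& frames,
                                      std::size_t width, std::size_t height)
{
    std::vector<float> averaged(width * height, 0.0f);
    std::vector<int> validCount(width * height, 0);

    for (const auto& frame : frames) {
        assert(frame.size() == width * height);
        for (std::size_t i = 0; i < frame.size(); ++i) {
            const float d = frame[i];
            if (d > 0.0f && d == d) { // skip zero and NaN dropouts
                averaged[i] += d;
                ++validCount[i];
            }
        }
    }
    for (std::size_t i = 0; i < averaged.size(); ++i)
        if (validCount[i] > 0)
            averaged[i] /= static_cast<float>(validCount[i]);
    return averaged;
}
```

The output is a single depth buffer I’d then want to unproject, which is where I’m stuck.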

I was able to do this before with the Sensor and OpenNI using this method:

However, the only method I can see in the Structure SDK that gives me something similar is:

depthFrame.unprojectPoint(x, y)

That method uses the depth value already stored at that x, y in the depth frame, and doesn’t let me supply my averaged depth.

Is there any method I can use to do this? Alternatively, would it be possible to get the code or the basic algorithm used under that method, so I can replicate it in my own code?
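To frame the question: my assumption is that unprojectPoint is doing the standard pinhole back-projection with the frame’s intrinsics, something like the sketch below (the function, struct, and intrinsics parameters here are mine, not the SDK’s, and I’m ignoring lens distortion):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Standard pinhole back-projection: given pixel (px, py), a depth d,
// focal lengths (fx, fy) and principal point (cx, cy), recover the
// 3-D point in the camera frame. My guess at what an
// unprojectPoint-style call does internally, minus distortion.
Vec3 unproject(float px, float py, float d,
               float fx, float fy, float cx, float cy)
{
    return { (px - cx) * d / fx,
             (py - cy) * d / fy,
             d };
}
```

If that’s all it is, I just need the Core’s depth intrinsics to feed in; confirmation (or the real algorithm) would be great.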

I considered instantiating my own DepthFrame and populating it with my averaged depths, but that looked risky given internals I don’t know enough about. If that is a viable option, some pseudocode for it would work too.

I’m also open to being told that the averaging out of depths is a bad idea and there are better ways to get the most accurate scans.

```cpp
std::vector<GLKVector3> averagedPoints; // element type is my guess; the original was lost to forum formatting
// Should probably ensure every frame you add to `frames` is a valid frame using STDepthFrame::isValid()
// and that the width and height are the same for all frames
averagedPoints.resize(height * width);
```

If I’m reading your code correctly, it unprojects every point in each frame and then averages the x, y and z components separately across frames.

I don’t think this will give the same result as averaging the depth at each x, y across frames and then converting (unprojecting) that final averaged depth to a point? I suspect that code will give some wild points that aren’t very reflective of the model.
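I tried a quick numeric sanity check of the two orderings under a toy pinhole model (made-up intrinsics, no distortion, all depths valid). Since back-projection is linear in depth at a fixed pixel, the two seem to coincide per pixel when every reading is valid, so my guess is the wild points come from invalid zero depths sneaking into the point average:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Toy pinhole back-projection with made-up intrinsics.
Vec3 unproject(float px, float py, float d)
{
    const float fx = 500.0f, fy = 500.0f, cx = 320.0f, cy = 240.0f;
    return { (px - cx) * d / fx, (py - cy) * d / fy, d };
}

// Ordering A: unproject this pixel's depth in every frame, then average the points.
Vec3 averageOfPoints(float px, float py, const std::vector<float>& depths)
{
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (float d : depths) {
        const Vec3 p = unproject(px, py, d);
        sum.x += p.x; sum.y += p.y; sum.z += p.z;
    }
    const float n = static_cast<float>(depths.size());
    return { sum.x / n, sum.y / n, sum.z / n };
}

// Ordering B: average the depths first, then unproject once.
Vec3 unprojectAverageDepth(float px, float py, const std::vector<float>& depths)
{
    float sum = 0.0f;
    for (float d : depths) sum += d;
    return unproject(px, py, sum / static_cast<float>(depths.size()));
}
```

If a zero-depth dropout slipped into `depths`, both averages would be dragged towards the camera, which is where I suspect the wild points come from.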