Turn Medical Imaging From 2D Into 3D With Just $10

One of the modern marvels in our medical toolkit is ultrasound imaging. One of its drawbacks, however, is that it displays 2D images. How expensive do you think it would be to retrofit an ultrasound machine to produce 3D images? Try a $10 chip and pennies worth of plastic.

While — of all things — playing the Wii with his son, [Joshua Broder, M.D.], an emergency physician and associate professor of surgery at [Duke Health], realized he could port the Wii’s gyroscopic sensor to ultrasound technology. He did just that with the help of [Matt Morgan, Carl Herickhoff and Jeremy Dahl] from [Duke’s Pratt School of Engineering] and [Stanford University]. The team mounted the sensor onto the side of the probe with a 3D-printed collar. The sensor relays its orientation data to a computer running software that sutures the 2D images together into a complete 3D image in near real time, turning a $50,000 ultrasound machine into its $250,000 equivalent.
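To get a feel for how the pieces fit together, here is a minimal sketch (a guess at the principle, not the team's actual software) of how an orientation reading from the IMU can place a pixel from a 2D ultrasound slice into 3D space. The roll/pitch/yaw convention and the probe-frame axes are assumptions for illustration:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X Euler angles (radians) to a 3x3 rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def pixel_to_world(u_mm, v_mm, orientation, origin):
    """Place a pixel from the 2D image plane into 3D world coordinates.

    u_mm: lateral position in the slice, v_mm: depth into the body.
    orientation: (roll, pitch, yaw) from the IMU; origin: probe position.
    Assumes the image plane is the x-z plane of the probe frame.
    """
    plane_point = np.array([u_mm, 0.0, v_mm])
    return origin + rotation_matrix(*orientation) @ plane_point
```

With the probe held level (`orientation = (0, 0, 0)`), a pixel stays in its own plane; rotate the probe 90 degrees about the vertical axis and the same pixel sweeps into the perpendicular plane — which is exactly the information the 2D machine throws away and the IMU restores.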

[Dr. Broder] is eager to point out that it compares to MRI and CT imaging in quality, but with fewer issues: it reduces error in interpreting the images, and makes advanced imaging available in rural or developing areas. This is also useful when MRIs and CTs are risky due to medical history or for newborn children, and in critical situations where prep for an MRI or CT would take too much time.

The photo above is an STEVAL-MKI121V1 (which has a 6/9-axis module mounted in the middle), but I suspect that the $10 reference is just to the use of IMUs in general rather than any specific kit of parts.

This is just an add-on to an existing kit which already has testing, certification, and liability insurance. No modification is made to the active components of the machine which is where any worry about danger to the patient would be involved. That’s what makes it interesting and why it could be transformative. You don’t have to be so cynical about an actually clever hack.

I’ve thought of the exact same thing, using cheap bedside ultrasound (which does a 2D slice) to construct 3D images. Glad someone has done it, and I hope it becomes standard everywhere.

No reason 3D ultrasound imaging couldn’t become as standard and cheap as a stethoscope. It really ought to be, and would remove a bunch of guesswork from normal, everyday procedures.

It’d be REALLY cool if you could pair this with augmented reality (either using a display on the portable ultrasound or perhaps on the doctor’s glasses) allowing the doctor or nurse to easily see into the patient in real time without having to stare at a separate display. The technology should be seamless and transparent, not require a separate technician.

I’m not being cynical. It’s simply true. See the ISO comments and others.
And this isn’t a simple add-on when it comes to certification. I haven’t looked too closely at the 3D output, but what if the error on the additional sensors is too high and the algorithm gets things wrong, or otherwise drifts and a tumour is rendered smaller because of it? Whose fault is that?
Also, I’ve seen this hack with laser depth scanners (and thermal cameras) and the reconstruction wasn’t something I’d bet a person’s life on.

Incorrect. Would still require FDA / National Body assessment and approval where used as a medical diagnostic tool. Read the scope of the EU’s medical device directive. Read the scopes of your national version of the general standard IEC60601-1 and the particular standard IEC60601-2-37.

At least I am smart enough to know that I am stupid. And there are those too stupid to know that they are stupid.

This. The product affects the diagnostic images, therefore it is a medical device: it would need to be verified and validated under IEC 60601 (or equivalent), designed and manufactured under ISO 13485 (or equivalent), and the software would need to be developed and verified under IEC 62304 (or equivalent).
It’s a fantastic hack and could be really useful in the developing world, where budgets are limited and regulations take a back seat to on-the-ground pragmatism. Don’t expect to see one in a western hospital any time soon though!

1) How easy is the 3D-printed collar and electronics to sterilize, or to cover with a suitable protective sleeve between patients?

2) How easy is it to explain to the coroner (and ambulance-chasing personal injury lawyers, if in the US): “Sorry the diagnosis of the abdominal aortic aneurysm with an already poor prognosis was delayed; there was a cold solder joint in our lead-free add-on board that has no scheduled maintenance regimen or diagnostic mode.”

Question 1 is silly. Make the collar from injection-molded plastic and it will survive an autoclave. Polypropylene (PP) and polypropylene copolymer (PPCO) can be autoclaved without any issues. You can make a hundred collars and keep them around.

I think you didn’t watch the video before asking question 2… It’s an add-on, so a cold solder joint just turns your enhanced ultrasound machine back into a regular one. Nobody will die. Add-on board dead? Take it off and proceed like it never existed at all.

Is there no way to access the raw data coming out of the probe? The ultrasound software has to be doing some processing to turn the ultrasound information into an image, and that’s going to destroy some information. There has to be a better way to get from an ultrasound measurement+orientation to a 3D model than going through an intermediate 2D stage, and point clouds have worked well in other areas of computer vision.

dahud says: “Is there no way to access the raw data coming out of the probe?”
The raw data coming out of modern probes is baseband RF, typically around 40 MHz, 128 channels, 16 bits. It’s possible to catch it, but not easy, and almost impossible without the explicit cooperation of the manufacturer to gain access to the internal data busses. You can imagine the NDA involved.
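Taking those figures at face value, a quick back-of-the-envelope calculation shows why catching the raw stream is hard:

```python
# Raw probe data rate from the figures quoted above:
# ~40 MHz sample rate, 128 channels, 16-bit samples.
sample_rate_hz = 40e6
channels = 128
bytes_per_sample = 2  # 16 bits

rate_bytes_per_s = sample_rate_hz * channels * bytes_per_sample
print(f"{rate_bytes_per_s / 1e9:.2f} GB/s")  # → 10.24 GB/s
```

Roughly ten gigabytes per second — far beyond what a frame grabber or any hobbyist capture setup handles, which is part of why grabbing the processed video output is the pragmatic route.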

There’s also an enormous amount of very esoteric and proprietary processing done to modern ultrasound data before it gets to the screen. It’s extremely difficult to come even close to the OEM image quality if you try to process the data yourself.

The ultrasound image data is intrinsically 2D (in most probes), so it’s far simpler to grab the image data (literally, with a frame grabber) and build the 3D volume out of the 2D slices. These are finite-thickness slabs of intensity data, not surfaces, so pointcloud representation is not appropriate, and inefficient to boot.
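As a rough illustration of that slice-to-volume approach (my own sketch, not the team's code), one could splat each grabbed frame into a voxel grid using the IMU-derived pose, averaging intensities where slabs overlap:

```python
import numpy as np

def insert_slice(volume, counts, slice_img, positions_mm, voxel_mm):
    """Splat one 2D slice's intensities into a voxel grid.

    volume, counts: running intensity sums and hit counts per voxel.
    slice_img: (H, W) grabbed frame intensities.
    positions_mm: (H, W, 3) world coordinates of each pixel,
    e.g. from the IMU-derived probe pose.
    Returns the volume averaged over however many slices hit each voxel.
    """
    idx = np.round(positions_mm / voxel_mm).astype(int)
    H, W = slice_img.shape
    for i in range(H):
        for j in range(W):
            x, y, z = idx[i, j]
            # Skip pixels that land outside the grid.
            if all(0 <= a < s for a, s in zip((x, y, z), volume.shape)):
                volume[x, y, z] += slice_img[i, j]
                counts[x, y, z] += 1
    return np.divide(volume, np.maximum(counts, 1))
```

Nearest-voxel splatting like this is the crudest option; real freehand-3D reconstruction interpolates across the slab thickness and fills gaps between sweeps, but the principle is the same: intensities land in a dense grid, not a point cloud.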

It’s ironic that in the segment around 0:30-0:35, he is scanning a phantom (a dummy) with a dummy probe, using a teaching instrument. *That* probe is non-functional, and contains a 6-DOF position and orientation sensor that tells the display where the probe is, relative to the body. The display then presents a synthetic image in real time, drawn from a 3D volume image.

That 3D real-time technology dates from around 1990. Back then we needed an Ascension Technologies Flock of Birds sensor and a dedicated $20k workstation to do it, though.

I vote this best hack of the year. I’ve wanted a 3D home medical scanner for the past few years to do self-diagnosis and to show my kids scans of themselves. That’d be amazing! We also make experimental lightfield interfaces in the startup I’m part of, and it’d be incredible to get near-real-time volumetric scans into those displays. But it’s always been such a pain to get the data. I can’t wait to try this out!

The software is indeed free, or near enough. The problem is simple trigonometry, with a bit of noise filtering thrown in. Throw in some point-cloud processing techniques from projects like CloudCompare and you’re golden.
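For the “bit of noise filtering”, even something as simple as an exponential moving average over the raw IMU angle samples would take the edge off the jitter before the angles are used for slice placement (the alpha value here is just an assumed tuning constant):

```python
def ema_filter(samples, alpha=0.2):
    """Exponential moving average: smooths jittery IMU readings.

    Lower alpha = smoother output but more lag behind the true motion.
    """
    out, state = [], samples[0]
    for s in samples:
        state = alpha * s + (1 - alpha) * state
        out.append(state)
    return out
```

For example, `ema_filter([0.0, 10.0], alpha=0.5)` returns `[0.0, 5.0]`: a sudden 10-unit jump is halved on the first sample, so single-sample glitches can’t throw a slice wildly out of place.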

I suppose we could add $0.35 to the end of the price tag for the filament, if that’d make you feel better.

Can we just stop for a moment to appreciate the elegance of just hijacking the VGA stream to get the images, too? So many people would be invested in trying to gain access to the software, or do a screen grab on the host PC, and these guys are all “Nah, man, just snarf the VGA.” Also, his eyebrows DGAF.

Damned unlikely, since similar implementations have existed in the literature since the nineties, and exactly this approach was published in 2001-2002, and some current “3D enabled” commercial probes have accelerometers already built into them.