Posted by kdawson on Friday July 23, 2010 @09:59AM
from the healthy-green-glow dept.

Gwmaw writes with news out of the University of California, San Diego, on the use of GPUs to process CT scan data. Faster processing of noisy data allows doctors to lower the total radiation dose needed for a scan. "A new approach to processing X-ray data could lower by a factor of ten or more the amount of radiation patients receive during cone beam CT scans... With only 20 to 40 X-ray projections in total and 0.1 mAs per projection, the team achieved images clear enough for image-guided radiation therapy. The reconstruction time ranged from 77 to 130 seconds on an NVIDIA Tesla C1060 GPU card, depending on the number of projections — an estimated 100 times faster than similar iterative reconstruction approaches... Compared to the currently widely used scanning protocol of about 360 projections with 0.4 mAs per projection, [the researcher] says the new processing method resulted in 36 to 72 times less radiation exposure for patients."

How is that remarkable? Just about every major function of a typical computer can be done with a low-end Celeron and 2 GB of RAM. Games are about the only thing that low-spec system can't do. And gamers enjoy paying lots and lots of money for the best. If I can do everything that I do now on a $300 computer, why would I pay $800 for a quad-core unless I'm a gamer? Yes, there are a few other areas such as CAD and the like that need high-powered systems, but in the cost-conscious world, it is gamers that drive demand for high-end hardware.

I'm pretty sure the comment was about general usage, which is normally just email and web browsing with some office apps thrown in. That is what a Celeron with 2 GB of RAM would be sufficient for.

Yes, there are many, many programs used in many fields that would not fit into the Celeron-with-2-GB comment. I work in an office environment; we don't need massive processors, we don't need massive video cards. All we need is a low-end processor with a good amount of RAM.

That is what I got from reading his comment, but apparently I am in the minority.

You and a couple of others in this sub-thread are defining the problem backwards. As near as I can tell, your approach is to look at computer A and computer B and then say "B is five times faster than A, therefore I need B." The correct way is to lay out your requirements: technical, financial, and SLAs for delivery of your "product." Then identify the system you need.

While it's nice to be able to cache gigabytes of data, the reality is that 2 GB is a fuckload of memory. Say you have a 21 MP camera: even a full-resolution 8-bit RGB framebuffer is only about 63 MB uncompressed.
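For the record, the back-of-the-envelope math for that 21 MP figure looks like this (my own illustration; the bytes-per-pixel assumptions are mine, covering both an 8-bit RGB image and the fatter 32-bit-float RGBA working buffers an editor might use internally):

```python
# Framebuffer size estimate for a 21 megapixel image.
megapixels = 21
pixels = megapixels * 1_000_000

rgb8 = pixels * 3            # 8-bit RGB: 3 bytes per pixel
rgba_f32 = pixels * 4 * 4    # 32-bit float RGBA: 16 bytes per pixel

print(rgb8 / 1e6, "MB")      # prints 63.0 MB
print(rgba_f32 / 1e6, "MB")  # prints 336.0 MB
```

Even the float working buffer fits several times over in 2 GB; it's the undo history and layers that multiply it.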

You have never done anything more than red-eye reduction in the GIMP. You calculated the framebuffer size properly, yes, but there are many problems with using that as the basis for your estimate of required memory.

I've done plenty of graphic design; I just don't use crappy tools. If your tool requires a full-size copy of what the image was before every single change, then your tool is hopelessly naive in its implementation.

If you're going to refute my claim, then refute my claim. All you did was say "in my opinion..."

Most people don't do heavy video, audio, or photo editing. The most people usually do is crop and resize photos for Facebook or MySpace and make small edits to video for YouTube. I used to do video editing on my old 466 MHz Celeron with 64 MB of RAM. Sure, it was slow, but most people don't need to edit and preview in real time.

Well, the market for actual high-performance computers is way too small to fund the R&D necessary to build those crazy GPUs. The high-performance computing folks should thank their lucky stars that games went in a direction that required more and more processing power (well in excess of what CPUs can provide) and that the GPU companies didn't decide to just leave them out in the cold. There have already been stories of some jagoff putting a few GPUs in a box and outperforming million-dollar supercomputers.

They've already been left behind, or else they would not have to pay developers not to use CUDA. Also, nVidia has better OpenCL support than ATI in terms of performance and stability, despite the fact that it's (obviously) not their first-choice language for GPU development.

What ATI actually needs to do is stop treating software development like some minor aspect of their GPU production that can be haphazardly tossed together. They have much, much better hardware than nVidia on paper, and yet in practice they are merely playing catch-up.

So, they pump in all that radiation because the processor is too slow? Doesn't seem right to me. I would think that if they could have simply put another $10,000 into the machine (adding CPU cycles) to lower the required radiation, they would have done that a long time ago. So is the use of a GPU just a side effect of some new technology that allows the machine to estimate or predict the image with a lower radiation dose? That GPUs are more efficient for some operations is nothing new — what's the real breakthrough here?

The reconstruction time ranged from 77 to 130 seconds on an NVIDIA Tesla C1060 GPU card, depending on the number of projections — an estimated 100 times faster than similar iterative reconstruction approaches, says Jia.

So in essence they have built a parallel-optimised calculation system rather than a serial one, and we all know the one thing CUDA and OpenCL do VERY well is parallel processing.

It seems the real win here is the new code: it could run on a TI-82 calculator and still only require that level of radiation; it's just that it's very well suited to GPU number-crunching.

The TFA says that this tech is usually used prior to treatment, while the patient is in the treatment position.
Because processing a limited number of scans into a useful model previously took several hours, they were forced to perform many more scans to get a more accurate picture with which to build their model - because they don't want to leave the patient lying in the scanner for 6 hours prior to treatment. With this improvement in processing power, they can produce the model from limited data in a feasible time.

So the summary does actually describe the breakthrough quite well: it's not a new image-processing technique for working with limited data, it's just new hardware allowing that process to run much faster. Yes, they're using a slightly new algorithm, but I doubt that is a massive breakthrough in itself.

I think it's being driven by recent work which suggests the risks associated with the scans are a bit higher than previously thought. There's a perceived medical need to reduce the radiation. I'm afraid I can't put my finger on a citation, though.

I have to imagine that there are all kinds of people working on software and hardware upgrades all over medical science/engineering. Decreasing the risk to patients might be a nice reason to upgrade these scanners in particular, but you sorta sound like 'if it wasn't for the risk to the patients, this upgrade wouldn't be needed anytime soon.'

Engineers want to make better products, both to contribute and to make sales. Doctors want better products, both to decrease risk and to make their work easier and more efficient.

Ah, interpolation, a.k.a. making up data. This doesn't seem like a brilliant idea for purposes where accuracy is important.

I do acknowledge however that if your bullet is 10 mm in diameter and your target is 5 mm in diameter, you probably don't need a precise surface map of the target as long as you know where it's at within three or four mm.

Ah, interpolation, a.k.a. making up data. This doesn't seem like a brilliant idea for purposes where accuracy is important.

Doing the scan quickly and then filling in the missing data computationally is becoming better than doing the scan slowly, due to movement. People cannot remain perfectly still (breathing, etc.), so if you do the scan more quickly, you get less motion and less blurring.
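The "fill in the missing data" idea, in its very simplest form, is just this (a toy NumPy sketch of my own; the actual reconstruction is far more sophisticated than linear interpolation):

```python
import numpy as np

# Sample a smooth "true" signal sparsely (the quick, low-dose scan),
# then interpolate the gaps computationally.
t_full = np.linspace(0.0, 1.0, 101)
signal = np.sin(2 * np.pi * t_full)       # the ground truth

t_sparse = t_full[::10]                   # keep only every 10th sample
measured = np.sin(2 * np.pi * t_sparse)   # the sparse measurements

recovered = np.interp(t_full, t_sparse, measured)
print(np.max(np.abs(recovered - signal))) # small: smooth data fills in well
```

Smoothness is the key assumption; the sparser the sampling relative to how fast the data varies, the worse this does.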

Because processing a limited number of scans into a useful model previously took several hours, they were forced to perform many more scans to get a more accurate picture with which to build their model - because they don't want to leave the patient lying in the scanner for 6 hours prior to treatment.
With this improvement in processing power, they can produce the model from limited data in a feasible time.

Good lord. Am I the only one who is terrified by the idea that they take several scans, try to come up with a vague model of how your organs tend to move, and then fire a rather large dose of ionizing radiation at their best guess? I was under the misapprehension that image-guided radiation therapy was somewhat real-time up until now.

At some point, adding more generic processors stops paying off because of the overhead and costs it introduces. A Tesla C1060 costs ~$700 for these types of projects and has 240 processors designed to efficiently process this type of data; compare that to the cost and maintenance of the half-rack cluster of generic processors this would otherwise take.

My guess is that each scan requires a considerable amount of processing to render into something we can read on the screen. Probably billions of FFTs or something. You can make a tradeoff between more radiation (cleaner signal) and more math, but previously you would have needed a million-dollar supercomputer to do what you can do with $10k worth of GPUs these days, which is how they're saving on radiation.

What's going on is that instead of taking a clear picture, they take a crappy picture and have the ludicrously fast GPU clean it up for them. While you could have done that by just putting 50 CPUs in parallel, the GPU makes it quite simple.

The speed is important because their imaging is iterative: with the GPU they're apparently waiting 1-2 minutes; without it, it takes 2-3 hours, which is a rather long time to wait between scans.

The technique is called iterative backprojection. The reconstruction process assumes an array of pixels which, at the beginning, are of some uniform value. It then looks at a ray of attenuation data from the CT projection (along this ray, the tissues in the target result in this degree of attenuation of the X-ray beam), and asks "how must the pixels along this ray be adjusted, so that their attenuation along the ray matches the data from the CT beam?" It does this for every measured ray taken during the acquisition.
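That adjust-along-each-ray step can be sketched in a few lines (a toy Kaczmarz/ART iteration of my own in NumPy; the rays and phantom here are random stand-ins, not real CT geometry):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                        # image is n x n pixels, flattened to a vector
phantom = rng.random(n * n)   # "true" attenuation values
# Each row of A marks which pixels a given ray passes through.
A = (rng.random((200, n * n)) < 0.1).astype(float)
b = A @ phantom               # measured attenuation along each ray

x = np.zeros(n * n)           # start from a uniform image
for sweep in range(20):       # repeated passes over all rays
    for i in range(A.shape[0]):
        a = A[i]
        norm = a @ a
        if norm > 0:
            # Adjust the pixels on this ray so its total
            # attenuation matches the measurement b[i].
            x += (b[i] - a @ x) / norm * a

print(np.linalg.norm(A @ x - b))  # residual shrinks toward zero
```

Each ray update is independent of the image layout, which is why this kind of loop parallelises so naturally onto a GPU.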

They (NVIDIA) say that you could have a very cheap supercomputer for just $10k, made with Nvidia GPUs only. A pretty impressive achievement, and BTW they also say that their GPUs are in fact faster than normal Intel/AMD CPUs. I don't know about you, but once my piggy bank is full, I will get one of these supercomputer monsters.

more like 20-40k

Each 4-GPU node costs us about $5k; the thing is, with a 4-node GPU cluster you can do what would normally take 50-100 CPUs, or about 10-15 nodes.

It's true that GPUs are faster than normal CPUs for some operations. If you have programs that are nearly pure linear algebra and you're looking for single-precision FLOPS, then the GPU will leave a CPU in virtual dust. If you have a lot of branching, conditionals, double or integer operations, and care about MIPS, then not so much. Image processing is one place where linear algebra is king, so just think about what you want to do with a "super-computer" before you break open Hammy.
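The two workload shapes being contrasted look roughly like this (my own toy illustration in NumPy; on a GPU the second pattern causes divergent threads, which is what kills throughput):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float32)
b = rng.standard_normal((256, 256)).astype(np.float32)

# GPU-friendly: dense single-precision linear algebra, where every
# element undergoes exactly the same operation in bulk.
c = a @ b

# GPU-unfriendly: data-dependent branching, where each element's
# control flow differs depending on its value.
total = 0.0
for v in a.ravel():
    if v > 0:
        total += float(v) * float(v)
    else:
        total -= float(v)
```

The matrix multiply maps onto thousands of identical threads; the branchy loop does not, which is the distinction drawn above.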

Neat. Does this also reduce the running costs of the machines, or would that be a negligible benefit compared to not irradiating your patients?

From the point of view of the hospital? It's the other way around; increasing the lifetime of the expensive X-ray tube (which this will indeed do) is the important benefit, and not irradiating your patients as much is just a side effect.

Certainly not from the perspective of a physician. I continually bear in mind the cancer risk of the CT scans that I order... the problem is that what I'm scanning for is an immediate threat to life, so I have to take on a long-term potential risk to offset a more immediate, more probable, and higher risk.

As for saving time... it is negligible. Most new scanners (64-slice and up) process the images as quickly as the machine can scan. And even if there is a delay (e.g., on 16-slice machines), most scans are put into a reading queue anyway.

Eventually it might.
The exact technique they are using is for planning a radiation _treatment_ (cone beam CT), not a _diagnostic_ (helical scan) CT. They are quoted at the bottom that it _might_ be applicable. There are probably 100 to 1000 diagnostic scans for every treatment protocol.

"CT dose has become a major concern of the medical community. For each year's use of today's scanning technology, the resulting cancers could cause about 14,500 deaths."
"Our work, when extended from cancer radiotherapy to general diagnostic imaging, may provide a unique solution to solve this problem by reducing the CT dose per scan by a factor of 10 or more," says Jiang.

There are currently protocols that are used to lower the radiation dose for pediatric patients... the problem is that not all hospitals use them. Except in a life-threatening emergency, the parents should ask before a routine scan.

As a physics engineer experienced in the field of radiotherapy, familiar with the techniques mentioned in the /. article, and certified in radiation safety, I am sorry to say that although the radiation dose is reduced, it is only reduced in very specific cases, where it is not a real benefit.
This technique is not used for normal diagnostic CT scanning in your average hospital.
This technique is used for radiotherapy (and mainly for position verification of the organ to be irradiated).

These patients are about to get RADIATION THERAPY. This CT scan will be delivered immediately before they are to receive a lethal radiation dose at the same location to kill their tumor. Reduction of dose in diagnostic CT (not cone-beam) is a much more valuable accomplishment.

These patients are about to get RADIATION THERAPY. This CT scan will be delivered immediately before they are to receive a lethal radiation dose at the same location to kill their tumor. Reduction of dose in diagnostic CT (not cone-beam) is a much more valuable accomplishment.

LOL...if it is a _lethal_ dose, why treat the patient?

They are going to get a _therapeutic_ dose of directed radiation to target a specific tumor bed. The reduction in the imaging scan portion will lower _total_body_ dosing.

Not all body tissues deal with radiation the same way. Thyroid and small bowel mucosa are the most radio-sensitive tissues, while areas like bone and muscle are much more tolerant...If you can avoid thyroid cancer or radiation enteritis, you'll have or be a much happier patient.

"Our work, when extended from cancer radiotherapy to general diagnostic imaging, may provide a unique solution to solve this problem by reducing the CT dose per scan by a factor of 10 or more," says Jiang.
It's probably applicable to diagnostic cone beam scans, which are the hot item in implant dentistry. The reason it's first applied to therapy scans is that the tissue surrounding the tumor suffers radiation from scattering of the therapeutic beam, making dosage reduction highly desirable.

As has been said elsewhere in this thread, the real breakthrough here is due to compressed sensing, but here is some extra information:

1- Compressed sensing basically uses the idea that it is not necessary to sample an image (or a projection, in this case) everywhere, because natural data is fairly redundant. This is why you can capture a 10 Mpixel image in a digital camera and have it compressed to a 2 Mbyte JPEG file without losing much visible information. Compressed sensing basically does the compression *before* the sampling, not after. Researchers at Rice University, for instance, built a working one-pixel camera [rice.edu] using this brilliant principle.

2- Compressed (or compressive) sensing was proposed by Emmanuel Candes [stanford.edu] and Terence Tao [ucla.edu], at Stanford and UCLA respectively. Tao is a recent Fields medalist. I recommend reading his blog if you like mathematics.

3- This field is really less than 10 years old, and it has completely turned classical ideas about sampling-limited signal processing (Nyquist, Shannon, etc.) on their head. It is a brilliant combination of signal and image processing with recent advances in combinatorial and convex optimization.

4- However, this is only the beginning. Because compression happens before sampling, you need to make so-called sparsity assumptions about the signal; in other words, you need to know a great deal about what you are going to try to image. In interventional therapy, precise imaging of the patient is done beforehand in a classical way (CT or MRI), and this kind of technique is only used to make fine adjustments as therapy is ongoing. This is extremely useful and safe, because of the lower radiation output and because the physicians know what to expect.

5- Here the GPU is useful because it makes the processing fast enough to actually be used. It is an essential building block in the application, but of course not in the theory.
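The core compressed-sensing claim in point 1 - that a sparse signal can be recovered from far fewer measurements than samples - can be demonstrated in miniature (my own NumPy toy using a simple greedy solver, orthogonal matching pursuit, rather than the convex-optimization methods from the literature):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 40, 3            # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                                # only m << n measurements

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-fit on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x_true))  # near zero if the support was found
```

The sparsity assumption (k = 3 nonzeros) is doing all the work here, which is exactly the "you need to know a great deal about what you are imaging" caveat in point 4.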