'Using NVIDIA Tesla P100 GPUs with the cuDNN-accelerated TensorFlow deep learning framework, the team trained [its] system on 50,000 images in the ImageNet validation set,' says NVIDIA in its announcement blog post.

What's incredible about this particular AI is its ability to know what a clean image looks like without ever actually seeing a noise-free image. Rather than training the deep-learning network by giving it a noisy image and a clean image to learn how to make up the difference, NVIDIA's AI is trained using two images with different noise patterns.

'It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars,' say the researchers in a paper published on the findings. The paper goes so far as to say '[The neural network] is on par with state-of-the-art methods that make use of clean examples — using precisely the same training methodology, and often without appreciable drawbacks in training time or performance.'
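The paper's core trick is that, for zero-mean noise, the expected loss against a second noisy copy of an image has the same minimiser as the loss against the clean target. A minimal numerical sketch of that claim, using a one-parameter "denoiser" on synthetic Gaussian data (everything here is invented for illustration and is far simpler than the paper's convolutional networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)        # clean signal (never shown to the "noisy" fit)
y1 = x + rng.normal(0.0, 0.5, n)   # noisy observation used as input
y2 = x + rng.normal(0.0, 0.5, n)   # second copy with independent noise

# One-parameter "denoiser": x_hat = w * y1; closed-form least-squares fit of w.
w_clean = (y1 @ x) / (y1 @ y1)     # trained against the clean target
w_noisy = (y1 @ y2) / (y1 @ y1)    # trained against another noisy copy only

print(w_clean, w_noisy)  # both close to Var(x)/(Var(x)+Var(noise)) = 0.8
```

Because the independent noise in `y2` averages out of the cross term, both fits converge to the same optimal shrinkage weight; that is the Noise2Noise observation in miniature.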

In addition to being used on photographs, the researchers note the AI will also be beneficial in scientific and medical fields. In particular, they detail how magnetic resonance imaging (MRI) scans — which are very susceptible to noise — could be dramatically improved using the program, leading to improved diagnoses.

MRI scans are an example where this AI should not be applied. MRI offers incredible resolution, but acquisition takes a long time. Even supposing we somehow ended up with such a terribly noisy scan, the AI would produce a very clean-looking image like the one shown. The problem is that we want to see pathological formations, not a nice image. I can see plenty of suspicious objects in the noisy image that might be cancer, but they are neatly cleaned away in the AI image. One could, of course, train on scans containing cancer, but then the AI may start seeing cancer where there is none (false positives).

In your example, what the research is aiming at is "superhuman" performance, which has already been demonstrated in a number of cases. Yes, the AI would be trained on both normal and pathological examples. Given the same number of examples, a human would (as of today) always outperform the AI. But the AI may eventually be trained on so many examples that humans can't compete anymore. Because humans, and groups of humans, make errors too.

Then you could actually measure the threshold at which noise starts to degrade results, and perhaps shorten MRI capture times, reduce radiological impact, etc.

In this case, I would run several studies comparing MRI sequences acquired in the traditional fashion against quick acquisition sequences performed with lots of noise and then processed through AI. The main reason to apply noise suppression in MRI and CT is to optimise the time needed to form usable images; it can be shortened tremendously. That could expand the range of exams, and the number of diseases, that can be diagnosed with these methods.

I started astro imaging in 2011, and one of the techniques used there is to take lots of images of the same subject, plus multiple sets of calibration images. You use all of those to clean up the poor-quality light frames, then stack them together to build up the signal, then process further to bring out the image. It gave me a similar sense that when they "enhance" an image in programs like '24', it is actually feasible; before, I thought it was mostly made up!

@fotomotovfr I presume a security camera is a video camera, so you have 24 images for every second of footage. Just stacking those 24 frames would give you a much better image, let alone computational photography algorithms.
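As a toy illustration of how much plain stacking buys (a synthetic "scene" and 24 noisy frames in NumPy, with no claim about real security-camera pipelines): averaging N frames cuts random noise by roughly √N.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.uniform(0, 1, (64, 64))                # stand-in for the true scene
frames = clean + rng.normal(0, 0.2, (24, 64, 64))  # 24 noisy "video frames"

stacked = frames.mean(axis=0)  # the simplest possible stack: average the frames

rms_single = np.sqrt(np.mean((frames[0] - clean) ** 2))
rms_stacked = np.sqrt(np.mean((stacked - clean) ** 2))
print(rms_single / rms_stacked)  # ≈ sqrt(24) ≈ 4.9
```

Real stacking software additionally aligns the frames and rejects outliers, but the √N noise reduction is the underlying payoff.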

Noise reduction in images containing statistical noise is nothing new. Algorithms based on a combination of Bayes' theorem and the maximum entropy principle (Shannon) were developed in the 1980s, and these can be implemented efficiently on GPUs.

I do not see AI here. "AI" and "machine learning" are big buzzwords that do not make anything better by themselves. Some of these novel approaches reinvent the wheel, since there is only a fixed amount of information in the image. Whether AI or MaxEnt, they are just algorithms for generating the most likely original image underlying the noisy shot.
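For flavour, here is the classical Bayesian route the comment alludes to: with Gaussian priors on signal and noise, the MAP estimate reduces to per-frequency Wiener shrinkage S/(S+N) of the spectrum. This sketch cheats by using the true signal spectrum as the prior, which a real method would have to estimate from the data:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)               # smooth underlying signal
noisy = clean + rng.normal(0, 0.5, t.size)

spec = np.fft.rfft(noisy)
power = np.abs(np.fft.rfft(clean)) ** 2         # oracle signal power per bin
noise_power = 0.5 ** 2 * t.size                 # expected white-noise power per bin

# MAP / Wiener shrinkage: keep bins where the prior says signal dominates.
denoised = np.fft.irfft(power / (power + noise_power) * spec, n=t.size)
```

The shrinkage suppresses bins dominated by noise and passes the signal bin nearly untouched, which is exactly the "most likely original image" logic, no neural network required.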

In fact, when you open the full-res pictures, these results aren't very impressive. They certainly do not come close to recovering the detail in the true image. Which again is expected, because noise is an absence of information: information that physically can't be recovered, only guessed at.

All true, but what if you are at the limit of what is possible with current tech, or even of what is physically possible? Then you need to start including other data: that could be data from a second camera, from the same camera at a different time (stacking bursts), or from a database of how the model's eyelashes should look based on other images. Merging all that data in a convincing way is incredibly difficult, but neural networks seem to do a surprisingly good job here, and the research above is just the start.

OP, congratulations on owning a camera that produces "real" data in any shooting conditions... but I am afraid it does not exist. Even in good lighting, a camera with a Bayer sensor recreates the missing color values for 50% of green, 75% of blue and 75% of red... In fact, more than 50% of your picture data is made of "fake", "estimated" data ;-)

KristinnK, "information that physically can't be recovered, only guessed at." That's exactly where AI can do a better job (than averaging or filtering): guessing.

@photoMEETING "In my photography, I love to deal with real data rather than with estimated ones."

Then digital photography is not for you. Your choice of cameras that don't require "estimation" to create final image is limited to two monochrome models (one very expensive and one extremely expensive) and Merrill (or earlier) generation of Sigmas.

@photoMEETING "I understand what you mean. But sometimes the best idea may be simply not taking a picture."

That is one standpoint you can take, but I don't think we all share it. Also, the process is not black and white. What about an image you took and like, but which is noisy enough that the noise distracts from the story/memory? Why not use some of these techniques to "rescue" it, or remove just enough noise that the image works again? With good algorithms, such "minor" improvements of 1-2 stops shouldn't give you any weird artifacts. This has been done for years with color noise, since color noise is extremely distracting, even more so than luminance noise. The standard process is to convert it into luminance noise by desaturating the high-frequency layer. This is a standard part of denoising and does alter the image, but I don't think anyone argues that this kind of NR shouldn't be used.

Karroly explained it nicely. So if you cannot deal with your photos being the result of guesstimation, there really is not much in digital that meets your high standard of capturing "real data". It rather shows that you don't fully understand how digital cameras work.

If you don't understand that demosaicing a raw image involves interpolation (which is basically "guessing" based on limited data from surrounding pixels), then you don't understand how digital cameras work. Only monochrome and some Foveon sensors don't require demosaicing. There are also some high-res modes (e.g. Olympus, Pentax) that capture full colour information for every photosite by using sensor shift.
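A minimal sketch of that interpolation step: bilinear green-channel demosaicing on a synthetic RGGB mosaic. Everything here is invented for illustration; real raw converters use much smarter, gradient-aware methods.

```python
import numpy as np

ii, jj = np.indices((8, 8))
green = (ii + jj) / 14.0              # synthetic true green channel (smooth ramp)

# RGGB Bayer pattern: green is only sampled at half the photosites.
mask = np.zeros((8, 8), dtype=bool)
mask[0::2, 1::2] = True
mask[1::2, 0::2] = True
sampled = np.where(mask, green, 0.0)

# Bilinear demosaicing: estimate each missing green value as the mean of
# its (up to four) sampled green neighbours -- a guess from nearby pixels.
padded = np.pad(sampled, 1)
pmask = np.pad(mask, 1).astype(float)
neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
neigh_cnt = (pmask[:-2, 1:-1] + pmask[2:, 1:-1] +
             pmask[1:-1, :-2] + pmask[1:-1, 2:])
estimate = np.where(mask, green, neigh_sum / np.maximum(neigh_cnt, 1))
```

On this smooth ramp the guess is exact in the interior; on real images with edges and texture, the same averaging produces the familiar demosaicing artifacts.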

> That's exactly where AI can do a better job (than averaging or filtering) : guessing.

I don't think that "subtlety" escaped anyone. But that's not the point of this chain of comments. Rather it's the truism that however advanced these replacement algorithms are, they will never equal having the actual information.

It was you who said "In my photography, I love to deal with real data rather than with estimated ones." I just pointed out that you deal with estimated data all the time. I agree that Bayer interpolation is more straightforward, but that's a difference of degree. You still aren't getting 100% real data.

KristinnK, "Rather it's the truism that however advanced these replacement algorithms are, they will never equal having the actual information." This is not what I understood from the OP's first comment. He did not write "I would love to deal with real data" but "I love to deal with real data", implying, at least for me, that he believes his camera provides him with real data, which is not true...

photoMEETING, I am not splitting hairs; the reason for my first comment is exactly as I explained just above. But maybe you are not aware that the way you said things in your first post is unclear to many people, since they all reacted in the same way... In fact, you are just unable to understand the consequences of your equivocal post...

It did have relevance to the way I was reading your post. But you have since explained that you were making a different point than some of us believed.

"It's the same as if you would claim that analog photography is digital in reality, because there is 0 or 1 photon, but no 0.5 photon. It is true, but has no relevance in real life."

No, it's not the same at all. Demosaicing has relevance in real life, as it creates a result that is subtly, but visibly different from capturing full colour data at each pixel. It creates artifacts, simple as that. It may not impact our images in a way we care about, and may have nothing to do with what you wanted to say, but it still means that we aren't dealing with 100% real data. And this limitation isn't inherent in digital photography, as there are ways to capture light digitally that would give us more real data.

It's really just a matter of time until large sensors are largely obsoleted by AI algorithms and computational photography. Hopefully camera makers see the potential here to improve their cameras, rather than pretending those developments do not affect them.

There is no reason AI could not be applied to create beautiful bokeh, for example. Researchers have already shown an AI with spatial awareness so good that it can "imagine" how the scene would look from a different angle, or even deduce what should be in places that are not part of the input data. It's quite impressive.

There's no reason those couldn't be used for AF, object recognition and tracking, and probably a few other areas I can't think of off the top of my head.

Improving the already good will always be better than improving the bad. The best will be the combination of large-sensor, high sensitivity, good optics, high-resolution and Intelligent Image Processing.

I wasn't exactly thinking of 1/2.3" sensors dethroning small and medium formats.

But combine capabilities of even current 1", 4/3 and APS-C cameras with that kind of powerful image processing and you could achieve new heights in terms of what is possible. Without the necessity for very costly FF setups.

Take Panasonic and their DFD technology. With it, the camera basically knows the depth of objects in the scene. Feed that to an AI algorithm to intelligently create realistic-looking bokeh and suddenly it doesn't matter that Panasonic cameras have a 4/3 sensor and that you're using an affordable f/1.7 prime. Your portraits will look as if taken with an FF camera and an f/1.4 prime. Or an f/0.5 prime. It can be anything, since it's going to be artificial.

Obviously this can be applied to anything. The thing is, tricks like that mean that a cheaper and smaller camera with a smaller sensor and slower lens could effectively give you images that look as if captured by something much more expensive.
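To make the depth-to-bokeh idea concrete, here is a toy depth-dependent blur in NumPy. It has nothing to do with Panasonic's actual DFD implementation: the scene is two flat depth planes, and a box blur stands in for a real lens-blur kernel.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur via padded sliding-window means."""
    if radius == 0:
        return img.copy()
    k = 2 * radius + 1
    p = np.pad(img, radius, mode='edge')
    h = np.stack([p[:, i:i + img.shape[1]] for i in range(k)]).mean(0)
    return np.stack([h[i:i + img.shape[0], :] for i in range(k)]).mean(0)

rng = np.random.default_rng(4)
image = rng.uniform(0, 1, (32, 32))               # synthetic textured scene
depth = np.zeros((32, 32))
depth[:, 16:] = 5.0                               # subject at 0 m, background at 5 m

# Depth-dependent blur: radius grows with distance from the focal plane.
focal = 0.0
radius = np.clip(np.abs(depth - focal), 0, 3).astype(int)
layers = [box_blur(image, r) for r in range(radius.max() + 1)]
bokeh = np.choose(radius, layers)                 # pick the layer matching each pixel
```

The in-focus half is untouched while the background is smoothed; a production pipeline would instead blend per-pixel with a lens-shaped kernel and handle occlusion edges, which is where the hard problems live.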

Well, it's not that far off. I go to the stores here and the vinyl stands get larger every year. And we're talking major mass-retail chains, not dedicated music record stores. It was totally unthinkable just 10 years ago.

Streaming services and the iTunes of the world have greatly reduced the appeal of the CD. I can imagine a scenario where CDs share the same fate vinyl faced 20-30 years ago.

Seriously, I think it's a good analogy: analogue audio has plenty of downsides compared to digital audio, but at least it has a few unique features and a certain appeal. I think digital imaging will go the same way: clumsy "full frame" digital cameras will become the SACD of the future, superseded by small-sensor cameras coupled to great algorithms. Film cameras will probably survive in a niche.

So, as I understand it, this sort of work is being done to make real-time ray tracing more feasible. Of course, the application to noisy images is also clear (heh), but I'd really like to see the network's results on a not-so-noisy image, to see how much it changes from the ground truth. That is: does the denoiser eliminate detail, misidentifying it as noise?

@wlad Oh, you bet we have "AI" algorithms to upres those. :D That said, nobody will be interested in your pictures if they can just ask their voice assistant to create a better one while they eat breakfast.
