
Geographic, you're right, it's hard to tell currently how well it works.
I tried it, but I don't find it easy to adjust the images so that the neural one has the same color / contrast / lightness.
Maybe someone can help me?
This is my best effort at it.

Well, not entirely sure... but I begin to think that my current neural network can't do what I hoped it would do.
I will think about different ways to do it. For example, the network might count similar pixels (using standard deviation or so)
and based upon that adjust the setting for a "simple" blur operation (the neural net trains the blur radius, for a single-pixel blur).
Currently I let it directly set the lum value, but I don't think that's optimal... well, not sure... but things have to change radically.
The network does train, I believe, but as it currently stands it is not aware of borders; it can only slightly adjust a single pixel, with the result that it learns a blur ratio that would be best for all pixels. That might get rid of the pepper noise, but it adds a slight blur everywhere. I must think this over.
The good news is that I got a trainable network (I reduced training time a lot today), but now I need to give it the right information and the right tools too...
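The per-pixel idea above (the net controls how strongly each pixel gets blurred) could be sketched roughly like this. This is only a sketch of my reading of the post, not the author's actual code: `predict_ratio` is a hypothetical stand-in for the trained regression net, and the 3x3 box blur stands in for whatever "simple" blur is used.

```python
import numpy as np

def mean_blur3(image):
    """Simple 3x3 box blur (edge-padded), pure NumPy."""
    h, w = image.shape
    p = np.pad(image, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0

def denoise_with_predicted_ratio(image, predict_ratio):
    """Blend each pixel between the original and a blurred version,
    with the blend ratio chosen per pixel by a (hypothetical) trained
    regression net that looks at the pixel's 5x5 neighbourhood."""
    blurred = mean_blur3(image)
    h, w = image.shape
    pad = np.pad(image, 2, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            # The net sees the 5x5 patch and outputs a blur strength 0..1.
            r = float(np.clip(predict_ratio(pad[y:y + 5, x:x + 5]), 0.0, 1.0))
            out[y, x] = (1.0 - r) * image[y, x] + r * blurred[y, x]
    return out
```

A net that always outputs 0 leaves the image untouched; one that always outputs 1 reproduces the plain blur, which matches the "learns one blur ratio for all pixels" failure mode described above.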

I'm using a regression neural network with just 3 layers, with multi-threaded training.
Essentially, while normal neural networks answer binary questions or classify, regression nets are optimized to return continuous values in -1..0..+1 or larger.
It has about 22 inputs surrounding a single pixel, and code to randomly create lots of training data out of images with those 22 features.
It could have been a lot more features, but... it takes time to code them.
Originally I had the plan to later stack them (some deep networks are based upon that).
As I don't have the means (hardware) to train deep networks, nor the code, my take on it was to split tasks over smaller nets.
Small neural nets can be very fast, and combining them keeps them small and fast.
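A minimal sketch of that training-data generation: sample random pixels from a noisy render, take the surrounding neighbourhood as inputs, and pair them with the matching pixel of a clean (converged) render as the target. The 24 raw neighbour values here are an assumption standing in for the ~22 hand-coded features mentioned above.

```python
import numpy as np

def make_training_pairs(noisy, clean, n_samples, seed=None):
    """Build random training data from a noisy/clean image pair.

    Returns (X, y): X has one row of 24 surrounding-pixel values per
    sample (a 5x5 window minus its centre), y the clean centre pixel
    that the regression net should learn to predict.
    """
    rng = np.random.default_rng(seed)
    h, w = noisy.shape
    pad = np.pad(noisy, 2, mode="edge")     # so border pixels get full windows
    X = np.empty((n_samples, 24), dtype=float)
    y = np.empty(n_samples, dtype=float)
    for i in range(n_samples):
        py = int(rng.integers(0, h))
        px = int(rng.integers(0, w))
        win = pad[py:py + 5, px:px + 5].ravel()
        X[i] = np.delete(win, 12)           # drop the centre pixel itself
        y[i] = clean[py, px]                # target: the converged render
    return X, y
```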

I know about deep neural networks, though I have not yet coded them; I've got a pretty good idea of how they work, but essentially they're a bigger black box
(often made out of multiple stacked smaller nets, which makes the error-feedback calculation really complex, and that adds to their training time). The industry is all in on deep nets: Google made TensorFlow while Microsoft improved CNTK. The latter might be better, but I didn't study either; it's just that I made a few neural nets at work and got amazed by what they can do on industrial machines.
Based upon that I'm 100% sure a neural network could do this... but well, it has to be made first.

@razorbade
I once saw a very simple but extremely good noise filter (I often wondered why it got so little attention).
They used 3x3 or 5x5 blocks, from which they removed the brightest and the darkest pixel.
Then they averaged the remaining pixels. They demonstrated it on Lena; you should know Lena.
Many denoising articles are written about her; it's this girl: lena.jpg
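The filter described above (drop the single brightest and single darkest pixel of each block, then average the rest) is easy to sketch in plain NumPy; this is my reconstruction from the description, not the original article's code:

```python
import numpy as np

def minmax_trimmed_mean(image, size=3):
    """Denoise by averaging each size x size window after discarding its
    single brightest and single darkest pixel, as described above."""
    r = size // 2
    h, w = image.shape
    pad = np.pad(image, r, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            win = np.sort(pad[y:y + size, x:x + size].ravel())
            out[y, x] = win[1:-1].mean()   # drop min and max, average the rest
    return out
```

Because an isolated hot or dark pixel is always the window's max or min, it gets discarded before averaging, which is exactly why this works so well on salt-and-pepper noise.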

What I was thinking is that maybe your regression network could be trained like the filter I mentioned earlier.
Although it would be easier to do it without a neural network; this could just be a layer in your stacked network, maybe?
Well, I don't know much about deep learning, but for that filter I wish I had the article link, to show you what it can do.
But there are so many noise articles about her that it's not easy to find a specific article again.
That filter, though, was special because it was so simple, yet a lot better than many complex filters.
I once coded it in OpenCV (but that's years ago; I don't know where that code went).

Well, I think I'll take a small break. My research into this will not stop, but for the moment I'll review what I've learned so far and think about the next attempts at this; I still have a few methods in mind. I've not given up (I rarely give up).
So maybe one week of no coding or so... taking a few steps back to get fresh insights; working as a coder at my day job and then coding in the evenings costs me a lot of energy.

@Geographic
Ah, Lena... yeah, I've seen lots of articles with her.
But I've not heard of the denoiser you talk about. I can imagine something like that might work, though. If I understand you correctly, you could do this in steps and each time repeat the function, (with or without?) altering the darkest/lightest spot. (And what if that spot is the current center pixel?)

If anybody else knows what Geographic is talking about, feel free to post a link, because he doesn't seem to have it anymore.
It might be something. (I thought I had read all articles about Lena denoising, so apparently there is one I missed.)

Hi RazorBlade,
How are you doing? I hope you're OK and in good health.
Your last post is about a month ago, and I was wondering how the progress on this is going.
Meanwhile, Disney has published a paper about a deep neural network that does denoising.

I've been trying to read it. It seems to be based upon the non-local means filter (as in OpenCV), but made 'smart' by, I guess, a smarter way of picking similar-looking squares. It doesn't sound like something you tried, so maybe it can be of use to you?
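For reference, the non-local means idea mentioned above works like this: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar, with weights falling off exponentially with the squared patch difference. A tiny pure-NumPy sketch (OpenCV's `cv2.fastNlMeansDenoising` is the practical, optimized version; the parameter values here are illustrative guesses):

```python
import numpy as np

def nl_means(image, patch=1, search=3, h=0.1):
    """Toy non-local means: average over a (2*search+1)^2 neighbourhood,
    weighting each candidate pixel by how similar its (2*patch+1)^2
    patch is to the patch around the pixel being denoised."""
    H, W = image.shape
    pr, sr = patch, search
    pad = np.pad(image, pr + sr, mode="reflect")
    out = np.empty((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            cy, cx = y + pr + sr, x + pr + sr
            ref = pad[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]
            num = den = 0.0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    qy, qx = cy + dy, cx + dx
                    cand = pad[qy - pr:qy + pr + 1, qx - pr:qx + pr + 1]
                    # Similar patches get weight near 1, dissimilar near 0.
                    wgt = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    num += wgt * pad[qy, qx]
                    den += wgt
            out[y, x] = num / den
    return out
```

The "smart" part of the Disney approach, as I read the post, would be in replacing this fixed similarity weighting with something learned.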

Oh, hi.
I have not been online here for a while; well, social life kicks in, and I've got to plan my time a bit differently.
I have not been writing code for the project, but my thoughts on how to use neural nets have grown a lot.

The reason for that is that in my free time (not that much time a day) I read lots about the subject, while at work I use them too now.
And this gives me a lot of insight, into deep neural nets as well.
In my opinion, despite some good results achieved with TensorFlow or CNTK,
it's much more about how to apply a neural net to what kind of input.
Understanding your data and understanding your goal is a pre-task that eases a lot of pain in the training of a NN.

These days that seems not to be an important topic: just add more hidden layers, more training... sure, at some point the NN will understand it.
And personally I think the Disney solution falls into that category. OK, it looks good; it looks like how I want my end result to be.
But I don't believe that such a network is required. I didn't go deeply into their math, though, but if it's indeed based upon an improvement of the non-local means filter... then that's not the true power of what a NN could do.

Something I also notice quite often is that it can be a 'bad' thing to try to understand neural nets through tons of math statistics.
Partly they are based upon statistics, sure, that's true, but I regularly see people draw wrong conclusions about neural nets. It's very tricky to work with neural nets, and hard to explain as well.

Today, for example, I read a related article where someone improved computer listening (voice to text).
His solution, unlike all other current solutions, is able to learn all kinds of languages, and not just the most popular ones.
He stumbled upon something that others had failed to see, something plain in the data, that was there all the time.
Hence he could reduce the calculation time so much that his net could run directly on an Android phone.
(I don't think that's why Google created TensorFlow, but it's a great example of rethinking neural nets and the data.)

Essentially I still don't think this is a deep-network problem, but it might get quite complex, and it would probably be easier to use a deep network, or some other architecture (there are others not talked about that much).

So yes, I'm working on it, but for the moment it's all in my mind, not in code.
I'm currently thinking about whether I could use an unsupervised deep network to do it,
and wondering about what-ifs... what if NN (a) tries to eliminate lighter noise and (b) darker noise, like in a battle? Could that be a playground for such a neural net... hmm (my deep thoughts).

1) Your network seems to be getting stuck in local minima (brightness issues). Are you using a Stochastic neural network?
2) Using multiple images with different seeds may be a good way to help the input, maybe? Or (more for animations) consecutive frames warped to match the current frame via optical flow?
3) What is your heuristic for how well the images match?
4) Would you consider posting a link to your source code, so that we can see if we can improve it, too?

In the longer term...
With Eevee up and coming, will that become a good reference source for neural network denoising in the future?

1) No, not using a stochastic network. So far I've used a regression BP (backpropagation) network.
2) Actually I take 2 large images, and from those I build random training data by randomly picking groups of pixels.
3) The heuristic is how well the pixels fit: a pixel calculated from the raw noisy image is compared against a perfect render.
4) Well, currently it's part of a larger neural network framework with several types of networks I made myself.
And with some tricks that are new in this field (neural nets are a field of math where there are a lot of inventions).
While some say I should publish, as a coder I have some different ideas about that (I need to make a living too).
However, I kind of promised myself that when I get this solved, I'll publish the neural net as free software. It's my way of saying thanks to the people who wrote Blender.
Note, though, that it would be only the "trained" net (and some tricks related to input and output), but not the training code itself.
Once trained, though, it is usable, and people might later even add other training routines to it.
Currently it's a private repo (as I use some parts of this framework in commercial software too).
I would get into problems at work if I shared the training methods.

5) I don't think Eevee is better than Cycles; it's a render engine optimized towards gaming, it seems.
-----
For the longer run, I think neural networks are just one method (and in theory pretty good for such tasks); however, I wouldn't rule out other tricks. Non-local means is currently hard to improve upon, and Disney improved it a bit, but I can imagine that maybe a more specific function could reduce noise too, because the Blender noise is quite specific; it's not random as in purely random.
I've also been thinking maybe code + statistics could solve it:
- take a pixel,
- check for similar pixels (H,S,L), and lower the differences between similar pixels with a certainty (weight) based upon the number of similar pixels found.
> This is close, by the way, to a filter that is used for improving FAX images (they average after removing the darkest and brightest pixel of a small area, 3x3 or 5x5). Improvements to such filters could also work well in Blender.
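The steps above could be sketched like this (luminance only rather than full H,S,L, and the window size and similarity tolerance are guesses; this is an illustration of the idea, not a tested denoiser):

```python
import numpy as np

def similarity_denoise(image, radius=2, tol=0.1):
    """For each pixel, find nearby pixels within `tol` of its value and
    pull the pixel toward their mean, weighting the pull by how many
    similar pixels were found (more evidence -> stronger correction)."""
    h, w = image.shape
    pad = np.pad(image, radius, mode="edge")
    size = 2 * radius + 1
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + size, x:x + size].ravel()
            centre = image[y, x]
            similar = win[np.abs(win - centre) <= tol]  # always includes centre
            weight = len(similar) / win.size            # certainty from pixel count
            out[y, x] = (1 - weight) * centre + weight * similar.mean()
    return out
```

A lonely outlier finds few similar neighbours, so its weight stays low and it is barely corrected, which is the main weakness of this naive version; the FAX-style min/max removal handles that case better.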

The way you wrote it made me think: so neural networks could be deployed as "trained" data machines.
Without further need for learning, that kind of sounds like a new programming paradigm where, instead of code, we put in blocks of brains for specific tasks. And if that's possible, then we only need to change blocks of memory for different tasks.
If that could solve lots of today's complex data tasks... then will there be a new era of a new kind of chips?

Chips that will be good at massive matrix computations, with some extra hardware-based logic for doing neuron-like emulations. Something people now often do on a GPU, but GPUs are not ideal, requiring a lot of watts (while I've heard that some neuron emulators use ridiculously few watts, just like our own brains).

@Geographic, well, you wouldn't believe what this stuff is already capable of at some research centers.
I've seen things, but those would take an hour for me to explain. I don't think you're far off with your ideas for new processors; it's likely to happen.
I already saw a neural net on a USB stick, as an additional processing unit.

However, for now I'm taking a long coding break; time to prepare for vacation.
No updates before mid-August, likely September.