It's currently improving my 128^2 tile... 78 hours spent so far. I've added a progress counter and resume functionality. I'll report when I have something running, but it's going to be after the weekend; the tracer is broken at the moment due to some BSDF experiments.

Good catch on that instruction toxie, I've pushed up a fix for it. I hadn't thought about MSVC support, but it's a good point; that should also be working now. That said, I don't have a Windows environment to test on, so if anyone does and tries compiling the source, let us know the result.

I've also tried simulating a 128x128 sample mask at a depth of 4, and got results comparable to those shown in the paper. This took 12 hours with 131072 iterations, although that was on a relatively modern Xeon with 8 cores (16 threads). I'd be interested to see what we get with more iterations; let us know what you find, jbikker.

I'd imagine the benefit of the blue-noise properties gives diminishing returns rather quickly with greater depth. And as you say, a higher depth value also produces a sample mask of lower quality. This approach is probably best suited to early integration problems such as motion blur or spectral sampling, although they did give an example of good results with light sampling.

If higher dimensions are required, we might see better results by padding together multiple low-depth sample masks, each using a different seed value.
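To make the padding idea concrete, here is a minimal sketch. The mask contents are placeholders (random values); a real mask would come out of the optimization process discussed above. `make_mask` and `padded_sample` are made-up names for illustration:

```python
import numpy as np

def make_mask(size, depth, seed):
    # Placeholder: random values stand in for an optimized
    # blue-noise sample mask of shape (size, size, depth).
    return np.random.default_rng(seed).random((size, size, depth))

def padded_sample(masks, px, py, dim):
    """Look up dimension `dim` for pixel (px, py), routing each
    depth-sized slice of dimensions to its own independently
    seeded mask."""
    depth = masks[0].shape[2]      # depth of each individual mask
    mask = masks[dim // depth]     # which mask serves this dimension
    size = mask.shape[0]
    return mask[px % size, py % size, dim % depth]

# Three depth-2 masks padded together to cover 6 dimensions:
masks = [make_mask(32, 2, seed) for seed in (1, 2, 3)]
value = padded_sample(masks, px=100, py=45, dim=4)  # served by masks[2]
```

Since each mask is optimized independently, the padded dimensions are uncorrelated across slices, which is exactly the point: no single high-depth optimization has to cover them all.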

To be honest, I don't even know if it can bring any real benefit for anything that is not directly visible from the camera. After all, one exploits the human visual system here, so the higher dimensions/bounces should be better tackled via a properly distributed sample set (e.g. most likely some modern QMC set, or another "hand-optimized" set with a fixed number of samples, especially if one targets interactive/realtime usage).


toxie wrote:To be honest, I don't even know if it can bring any real benefit for anything that is not directly visible from the camera. After all, one exploits the human visual system here, so the higher dimensions/bounces should be better tackled via a properly distributed sample set (e.g. most likely some modern QMC set, or another "hand-optimized" set with a fixed number of samples, especially if one targets interactive/realtime usage).

What dithering aims to improve is the correlation between the pixel estimates, so that the distribution of the error is visually pleasing. It does not address the quality of these estimates, i.e. the amount of error. Of course, for each pixel you want to use a good integration pattern, e.g. a QMC one, to lower the amount of error, but that's an orthogonal objective. The two objectives can be combined.
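One standard way to combine the two objectives is a Cranley-Patterson (toroidal) shift: every pixel integrates with the same good QMC sequence, and the mask only offsets it per pixel. A sketch, where `mask_value` stands in for the optimized per-pixel mask entry (not code from the paper):

```python
def radical_inverse_base2(i):
    """Van der Corput sequence in base 2 (a simple 1-d QMC pattern)."""
    result, f = 0.0, 0.5
    while i:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def dithered_sample(sample_index, mask_value):
    """Cranley-Patterson rotation: shift the shared QMC point by the
    per-pixel blue-noise offset, wrapping on [0, 1)."""
    return (radical_inverse_base2(sample_index) + mask_value) % 1.0

u = dithered_sample(sample_index=1, mask_value=0.75)  # (0.5 + 0.75) % 1 = 0.25
```

The wrap-around shift preserves the low-discrepancy structure within each pixel while the mask decorrelates the error between neighboring pixels.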

For example, while motion blur and dispersion can be classified as "directly visible from the camera", direct illumination and ambient occlusion cannot. And dithering helps with those too.

True, unless one uses a distribution over the screen instead of per pixel, for example (and then merges the two schemes). So maybe I was a bit distracted here by my "own" use cases and experiments, sorry. And then you sacrifice that part, in addition to potential quality loss for these "directly visible" dimensions (as one optimizes the sample-set offsets for a larger set of dimensions, whereas the lower ones are (most likely?) the more important).

Which brings me to a simple idea: what about weighting the dimensions/bounces in the optimization process, so that early dimensions matter more than later ones? I.e. using a kind of custom vector length that favors lower dimensions over higher ones?
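If the optimization energy is built from distances between per-pixel sample vectors, that weighting could look something like the sketch below: a plain Euclidean distance replaced by a weighted one, where a made-up `decay` parameter shrinks the influence of higher dimensions (the function name and decay scheme are my own illustration, not from the paper):

```python
import numpy as np

def weighted_toroidal_distance(a, b, decay=0.5):
    """Distance between two sample vectors on [0,1)^n where
    dimension d is weighted by decay**d, so early dimensions
    dominate the optimization energy."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.abs(a - b)
    d = np.minimum(d, 1.0 - d)        # wrap-around distance per dimension
    w = decay ** np.arange(a.size)    # 1, decay, decay^2, ...
    return float(np.sqrt(np.sum(w * d * d)))

dist = weighted_toroidal_distance([0, 0, 0, 0], [0.5, 0.5, 0.5, 0.5])
```

With `decay=1.0` this reduces to the unweighted case, so it would be easy to A/B against the original energy.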

But maybe that's not even true. Thinking more about it, it could also be that the lower dimensions are only important at the beginning/low sample counts, while the higher dimensions become more important as the sample count grows? All very scene dependent, of course.

EDIT: as for the example: yes, that's why I wrote dimensions/bounces, so basically everything past the first hit, including collecting the stuff (direct light, pre-computed data, or something like AO) there.


- Applying the method to direct light sampling yields the results presented in the paper.
- Applying the method to the first diffuse bounce yields no perceivable improvement in quality.

In general, the number of dimensions is a problem: I tried 6 dimensions (sampling direct light on the first diffuse surface, then the first diffuse bounce, and finally direct light on the second diffuse surface), but this already seems to decrease the quality of the penumbras compared to using just 4 dimensions. This would suggest that using just 2 dimensions could yield the best quality; that way, additional dimensions do not affect the quality of the distribution of the first two. It could also be that slightly more converged tiles yield better results; I had high-quality 128x128 / d=10 tiles, but produced the 32x32 / d=6 ones in just a few minutes (I didn't expect to need them).

So that's pretty much what everyone expected.

That being said, the method obviously improves image quality for the first couple of samples, it's straightforward to implement, and it should have only a tiny impact on performance.

EDIT: maybe, also due to the semi-magical weighting function, smaller tiles are better? If that's the case, one could also have several different small tiles that are then used "randomly" over the screen to get rid of the tiling patterns.
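A cheap way to get that "random over the screen" selection deterministically is to hash the tile coordinates. A sketch with a made-up integer hash (the constants are arbitrary; any decent hash works):

```python
def tile_choice(tx, ty, n_masks, seed=0):
    """Pick one of n_masks small sample masks for screen tile (tx, ty)
    via an integer hash, so the choice is stable frame-to-frame but
    varies across the screen and breaks up visible tiling."""
    h = (tx * 0x9E3779B1 ^ ty * 0x85EBCA77 ^ seed) & 0xFFFFFFFF
    h ^= h >> 16
    h = (h * 0x7FEB352D) & 0xFFFFFFFF
    h ^= h >> 15
    return h % n_masks

def mask_value(masks, x, y, tile_size=32):
    """Route pixel (x, y) to its tile's mask and index locally."""
    m = masks[tile_choice(x // tile_size, y // tile_size, len(masks))]
    return m[x % tile_size][y % tile_size]
```

Whether mixing several independently optimized small masks this way hurts the blue-noise property along tile borders is an open question; it would need the same kind of visual comparison as the depth experiments above.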
