I would change some of those settings based on recent experience, but it probably wouldn't hurt anything if you wanted to try those settings. I can't see any reason to use debanding with 10-bit, 4K sources. And NGU Sharp does seem to benefit from using soften edges 1 or 2 depending on the scaling factor. I noticed this when upscaling still images. NGU Sharp is a good chroma upscaler, but I don't know if it's the best chroma upscaler.

Originally Posted by ryrynz
Someone needs to compile a set of profiles for various cards to avoid these same old questions.

If there is interest in a global database for madVR profiles, then send your profiles to me and I'll upload them with a proper description, credits and, of course, a small explanation. I would see this as a step forward, since people could simply download those profiles and compare. When something is wrong, you can save a lot of time explaining each toggle you changed by just downloading the specific profile/settings, replacing yours and then restarting madVR.

It's not that this goes against the existing guides, but who really reads them? Beginners especially lose patience quickly, and everyone ends up testing everything to suit their own needs anyway. I also doubt that such profiles would use much bandwidth. Just PM me and I'll see what I can do.

I would change some of those settings based on recent experience, but it probably wouldn't hurt anything if you wanted to try those settings. I can't see any reason to use debanding with 10-bit, 4K sources. And NGU Sharp does seem to benefit from using soften edges 1 or 2 depending on the scaling factor. I noticed this when upscaling still images. NGU Sharp is a good chroma upscaler, but I don't know if it's the best chroma upscaler.

I'd be interested to hear what other elements of those profiles you no longer apply?

I'm interested to hear if anyone is successfully using a 3D LUT for HDR in their current setup?

At present I can only get HDR looking right if I use passthrough. If I create a LUT using madTPG and DisplayCAL with the madVR HDR template, the resulting image is a total mess, with highlights going red.

If upscaling 1080p to 4K, you can indeed add detail to the image. Image upscaling approximates what a low-resolution image would look like if it were a high-resolution image. A 1080p source upscaled to 4K has four times as many pixels. Unless you use Nearest Neighbor for upscaling, those new pixels will add detail to the image that was not there previously.
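A toy illustration of why scaled-up pixels carry values that never existed in the source: with nearest neighbor each sample is merely repeated, while any interpolating scaler (bilinear here, as the simplest possible case; NGU is far more sophisticated) synthesizes in-between values. The function names are made up for this sketch.

```python
def upscale_nearest(row, factor):
    """Repeat each sample - no new values are created."""
    return [v for v in row for _ in range(factor)]

def upscale_linear(row, factor):
    """Linearly interpolate between neighbours - the inserted samples
    take values that were never present in the source row."""
    out = []
    for i in range(len(row) - 1):
        a, b = row[i], row[i + 1]
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(row[-1])
    return out
```

Here `upscale_nearest([0, 10], 2)` only repeats the existing samples, while `upscale_linear([0, 10], 2)` inserts the value 5, which never existed in the source: that synthesized information is what reads as "added detail."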

I am a big fan of your knowledge and have used a lot of the information and lessons you have passed on, but I have to disagree with your statement. This is like saying all 2K masters upscaled to 4K UHD are at most slightly better in detail, and that is after a studio, its technicians and mastering equipment work on them. Are you telling me an app, as wonderful as it is, is capable of adding detail to a 1080p image to make it as good as 4K? I get the idea for 720p to 1080p, but 1080p to 4K? I don't think so.

If you scale chroma separately, the chroma layer will be slightly lower quality. This is not overly important when doing a large downscale, so you can check this option if you need to save resources. The only thing that really matters is the quality of the image downscaling; I would recommend SSIM 1D 100%. If you select this, you may be able to leave the chroma-quality checkbox unchecked, depending on your graphics card.

I think I tried that; it was my original setting. I messed about with DXVA yesterday and never saw much of a difference, but I ticked the trade-quality option and went back to lots of high settings again anyway.

As for HDR -> SDR conversion, it is completely up to experimentation. There are many settings because it is a work in progress. The setting "dumb - convert gamut late" is popular, while there doesn't seem to be a consensus on the best scientific method. The target nits setting is like a brightness adjustment, and you can use any value you want, as long as the image looks good to your eyes; there is no scientifically accurate value. Higher target nits will crush the low end in an attempt to increase the detail of the specular highlights at the top end, and this can lead to black crush at high enough values.
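The target-nits trade-off can be sketched with a toy extended-Reinhard curve (an assumption for illustration; madVR's actual tone mapping curves are different and configurable). Scaling the scene luminance by the target nits means a higher target pushes shadow values further down, which is the black crush described above.

```python
def tonemap_pixel(nits, target_nits, master_peak=1000.0):
    """Map a scene luminance in nits into display range [0, 1].
    Extended Reinhard curve: maps master_peak to exactly 1.0 and
    smoothly compresses everything brighter than target_nits.
    Purely illustrative - not madVR's actual curve."""
    x = nits / target_nits              # scale so target_nits -> 1.0
    peak = master_peak / target_nits    # brightest value we must fit
    return x * (1.0 + x / (peak * peak)) / (1.0 + x)
```

With a 1,000-nit master, a 1-nit shadow pixel comes out noticeably darker at a 400-nit target than at a 100-nit target, illustrating how raising target nits trades shadow visibility for highlight detail.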

Understood. But I was hoping there were explanations of what each part does (not that I would understand them, to be honest). I think you'd agree that the effect of most of the options we have is barely discernible at best, so just a general "this does this" would help most people, I think.

I don't know what you mean by having madVR settings on top of everything all of the time. Just close it if it is stealing focus from other windows.

I hear you, but when messing with different settings to see the differences and the like, it would be a lot easier to keep it in focus while doing this, be it custom resolutions, HDR settings and so on. I thought I was missing an easy fix, that's all. Then again, I only recently worked out that double-clicking the madVR icon brings up the settings, rather than click after click, so I was just checking whether I'd missed another trick.

Yes, I have had good results with an HDR 3DLUT. I have to switch my TV out of its PC mode, though, to get better gamut coverage. In PC mode the gamut gets clipped, resulting in odd off-magenta banding in highlights.

Yes, I have had good results with an HDR 3DLUT. I have to switch my TV out of its PC mode, though, to get better gamut coverage. In PC mode the gamut gets clipped, resulting in odd off-magenta banding in highlights.

If it's not too much trouble could you explain your process for producing a working HDR 3DLUT? I think I'm making a mistake somewhere down the line, but I just can't work out where.

I'd be interested to hear what other elements of those profiles you no longer apply?

I already use soften edges and grain with NGU sharp.

Are you saying you no longer use NGU Sharp for chroma?

You don't have to take my word for it, as everyone will have a different opinion on what looks best.

NGU Anti-Alias, NGU Sharp and Reconstruction are likely the best chroma upscalers, but this is pretty difficult to test with anything besides chroma upscaling test patterns. So, I use NGU Sharp blindly.

I don't really do anything special besides that. I use soften edges 1 with NGU Sharp when luma doubling. Add grain can add noticeable noise to solid black textures, so I don't use it, but it probably helps as much as it harms.

Neural network scalers are supposed to be able to find small detail like eyelashes and hair textures and reconstruct them. I find NGU Sharp does the best job of finding and enhancing these small details, with NGU Anti-Alias in second place. I haven't seen any examples where NGU Standard is better than NGU Sharp or NGU Anti-Alias. And NGU Soft is too soft compared to NGU Anti-Alias.

I toggle the free variant of RCA for certain content with a keyboard shortcut. Everything goes back to the defaults when the video is over. If a source is too soft, I enable a profile with image enhancements, because some content is just shot that way. Again, everything reverts to its defaults when the video is over.

That's really about it. I did look up banding in 4K UHD and found some technical information about how the combination of HEVC, a 10-bit master and high bitrates improves compression, and concluded that banding isn't as common as it is on 8-bit Blu-ray. I have only seen limited 4K content on my 1080p display, but I'd guess that banding in 4K UHD is probably uncommon.

If there is interest in a global database for madVR profiles, then send your profiles to me and I'll upload them with a proper description, credits and, of course, a small explanation. I would see this as a step forward, since people could simply download those profiles and compare. When something is wrong, you can save a lot of time explaining each toggle you changed by just downloading the specific profile/settings, replacing yours and then restarting madVR.

It's not that this goes against the existing guides, but who really reads them? Beginners especially lose patience quickly, and everyone ends up testing everything to suit their own needs anyway. I also doubt that such profiles would use much bandwidth. Just PM me and I'll see what I can do.

This is not a bad idea at all. Having decided to support madVR two years ago myself, I can only warn you that you will end up spending a lot of free time on projects like these. If it's all in fun, go ahead. Those profiles will only last a few months, as things are always changing, so you will have to replace them on an ongoing basis. It might also be valuable to start a thread in the Software players forum, because that GitHub link will be hard for many users to locate.

I just read your profile on GitHub and now know that you worked for Microsoft and Nvidia for over 15 years. Perhaps you should be posting here more often, unless of course you were responsible for driver development related to HTPC use. In that case, you can go somewhere else...

I missed your bold reply. I don't think anyone would dispute that a true 4K master is superior to a 1080p upscale. But that doesn't mean upscaling has no significant value. And good, sharp upscaling can sometimes be superior if the 4K source is soft and lacking detail. Not all 4K content is razor sharp.

HDR -> SDR. The tone mapping curve reduces the luminance (Y) of all pixels to fit the value set in target peak nits. The default curve compresses everything into the available luminance of the display. You will obviously lose some highlight detail by doing this, so there is an option at the bottom to sharpen these pixels to make them stand out a little more at a lower luminance.

The gamut mapping algorithm corrects any RGB pixels that don't fit into the gamut after tone mapping. By focusing on luminance reduction and then creating colors, you will end up with some pixels that are out of gamut because each RGB color contains different amounts of white, so they won't scale linearly with reductions in white (luminance). They need to be corrected (estimated) into a value that can be shown by the display's available colors.

How the pixels that are too bright and too saturated are corrected depends on a balancing act between luminance and saturation. This only applies to the scientific tone mapping algorithms; the dumb method simply clips the offending pixels to fit into the gamut. You can't have a perfect balance of hue, saturation and luminance, so you have to adjust each by a certain amount to make the pixel fit. This method of estimating the pixel color is called hue-preserving tone mapping, as the goal is to preserve the hue while manipulating saturation and luminance to find the best balance between the two. Some people don't like the hue-preservation method and instead prefer dumb tone mapping. It is all subjective, as there is no way to perfectly recreate the original highlight color; it won't scale linearly with luminance. Preserving the hue when tone mapping RGB pixels is recommended by Dolby and other white papers written on tone mapping techniques, hence madshi's desire to use this method.
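The difference between dumb clipping and hue preservation can be sketched with a deliberately simplified example. This version preserves hue by scaling the whole pixel by its largest channel so the R:G:B ratios stay fixed; madVR's actual balancing of luminance and saturation is more elaborate, and the function names here are made up.

```python
def clip_rgb(rgb):
    """'Dumb' approach: clamp each channel independently.
    The channel ratios - and therefore hue and saturation - can shift."""
    return [min(max(c, 0.0), 1.0) for c in rgb]

def hue_preserving(rgb):
    """Scale the whole pixel by its largest channel so the ratios
    between R, G and B (and therefore the hue) are preserved,
    at the cost of reduced luminance."""
    peak = max(rgb)
    if peak <= 1.0:
        return list(rgb)        # already in range, leave untouched
    return [c / peak for c in rgb]
```

For an out-of-gamut pixel [2.0, 0.5, 0.5], clipping returns [1.0, 0.5, 0.5] (the 4:1 channel ratio collapses to 2:1, so the color desaturates), while the hue-preserving version returns [1.0, 0.25, 0.25] (same ratios, lower luminance).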

The option to measure each frame's peak luminance should eventually lead to the creation of dynamic tone mapping like Dolby Vision or HDR10+, where the brightness changes based on the max luminance of the scene rather than using a global value like current HDR10. I don't think this is working yet, but it should be a significant advantage when it is available.
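The idea behind per-frame measurement can be sketched like this (a hypothetical helper, assuming per-pixel luminances in nits are available): instead of compressing every scene against the global HDR10 metadata peak, each frame is compressed only as far as its own measured peak requires.

```python
def dynamic_scale(frame_nits, target_nits):
    """How much one frame must be compressed to fit the display.
    frame_nits: iterable of per-pixel luminances for the frame, in nits.
    A dim scene (peak <= target) needs no compression at all; a bright
    scene is compressed only as far as its own peak requires."""
    peak = max(frame_nits)
    return max(peak / target_nits, 1.0)
```

A 2,000-nit metadata value would force heavy compression everywhere, but a frame whose measured peak is only 80 nits needs no compression at all at a 100-nit target, which is why dynamic tone mapping preserves so much more of dark scenes.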

Someone can correct me if any of that is not 100% accurate.

Edit: To add to that, here is some quick math showing how far off the colors can be:

- If colors are calculated as floating point values where 0 = black and 1.0 = white, then the PQ gamma states 1.0 = 10,000 nits.

- A 2,000 nit master then has an approximate (likely not exact) maximum value of 0.2 (0.2, 0.2, 0.2), which is pure white.

- If the target is 100 nits, then the scaling factor is 10,000/100 = 100 times.

- Multiplying: 0.2 x 100 = 20. This is 20 times the maximum displayable value of 1.0, so this pixel has to be tone mapped.

- If the max value is twenty times larger than the target gamut (0.0 to 1.0), then you can imagine how many values have to be reduced to fit a 100 nit gamut.

So tone mapped images will never look 100% identical to the 4K master because so many values have to be changed to fit into the gamut.
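The arithmetic above, spelled out in code (using the same simplification as the post: treating values as linear light normalized to the 10,000-nit PQ ceiling, ignoring the actual nonlinear PQ transfer function):

```python
PQ_PEAK_NITS = 10000.0  # PQ defines code value 1.0 as 10,000 nits

def normalized(nits):
    """Linear-light value normalized so that 1.0 = 10,000 nits."""
    return nits / PQ_PEAK_NITS

master_peak = normalized(2000)      # 0.2 for a 2,000-nit master
scale = PQ_PEAK_NITS / 100          # a 100-nit target -> factor of 100
overshoot = master_peak * scale     # 20x the displayable maximum of 1.0
```

The `overshoot` of 20 is the quantity the tone mapping curve must squeeze back into the 0.0 to 1.0 range, which is why so many pixel values end up changed.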

I own an OPTOMA UHZ 65 laser projector that handles HDR very well; in fact, with madVR I use passthrough. Yesterday, out of curiosity, I wanted to try the HDR to SDR conversion. The image is brighter, but the colors appear washed out, even with "this display is already calibrated" set to BT.2020. What could this depend on? I use MPC-BE. I would also like to hear from owners of AMD video cards which driver is the most stable for HDR.