This comes down to the very definition of dynamic range: the ratio between the brightest and darkest areas. In your case, HDR1000 refers to the peak brightness of the monitor (1000 nits), and the specification requires black areas to stay under 0.03 nits. So if you lower the peak brightness, you lower the dynamic range and fall out of the HDR1000 specification, hence the brightness lock.
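To put rough numbers on it (using the figures above): 1000 / 0.03 ≈ 33,000:1 of usable contrast at full peak; cap the peak at, say, 600 nits with the same black floor and you are down to 600 / 0.03 = 20,000:1, which is why the monitor won't let you do it while staying in HDR mode.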

So would the solution be implementing colour management into Corona & 3ds Max and rendering straight to ACEScg? I didn't realise that even on certified HDR displays they were still adding lots of processing on top, but I guess it's obvious when I think about it. I suppose it's there to compensate for shit panels, or for discrepancies across a panel batch, so they all look about the same?

What do you mean here? The solution to what? If you're talking about the color discrepancies that occur during the colorspace switch, then yes, we should render straight into ACEScg to get a proper color-managed workflow. But it's not that simple though, as it may introduce other caveats.

I think stuff like that will get more unified once the standards start converging with each other along the way. And it isn't any different from people using "Vivid" mode on their display/TV right now.

The only person seeing the image as the creators (us) intended... is, well, us :- ). Just something to live with.

For sure! It's just that this HDR thingy introduced a load more of those post-processes. What's more, HDR in its HDR10 form is stuck with a fixed curve set at the beginning of playback, which ends up losing detail in very bright or dark scenes. Things are going in the right direction though: the HDR10+ and Dolby Vision specifications introduce dynamic metadata that lets the brightness boundaries change on the fly (per scene) rather than staying constant for the whole experience.
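
Just to illustrate the static-vs-dynamic metadata point with a toy roll-off (this is not the real PQ / HDR10+ transfer math, only a sketch of the idea):

```python
# Toy illustration: a simple roll-off whose assumed "content peak" either stays
# fixed for the whole stream (static metadata) or is updated per scene (dynamic).
def tone_map(scene_nits, content_peak_nits, display_peak_nits=1000.0):
    x = scene_nits / content_peak_nits
    return display_peak_nits * x / (1.0 + x)

# A 100-nit highlight inside a dim night scene:
print(tone_map(100.0, content_peak_nits=4000.0))  # ~24 nits: crushed by the whole-movie peak
print(tone_map(100.0, content_peak_nits=120.0))   # ~455 nits: a per-scene peak keeps the detail
```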

The thing I'm not looking forward to is the day the 30 fps standard disappears and we have to start rendering animations at much higher frame rates.

Yeah, 4k/60 would be a nightmare :- ) But the same happened to still-frame rendering: I used to do renders that took 2 hours on a single quad-core with V-Ray at 4k resolution, and now that same 4k resolution easily takes 2 hours on 200 cores... Quality standards constantly grow. Anyway, apparently half of Samsung's TVs in 2019 are 8k. And what was that ideal VR clarity? 8x4k per eye at 90 FPS?

Yeah, sadly displays are evolving faster than compute hardware. Pushing transistor density further is getting tedious for semiconductor manufacturers. As for VR, sales don't seem to be rising that much, and without mass adoption I guess we won't see high-density panels anytime soon. It's a shame, because we're starting to see some interesting technologies, like foveated rendering, that could make ray tracing feasible on that hardware. The Varjo technology is even more enticing: a small ultra-high-density panel that follows your gaze, backed by a standard-resolution panel. They claim it's equivalent to a 70k display.

But oh boy, I believe it will be a massive revolution once it becomes widespread across all industries. Apparently right now SDR content looks terrible in HDR mode, and vice versa; how HDR content looks on an SDR display we've known for years :- ). So if someone wanted to jump on the hype train for archviz right now, what would it look like in practice? Double the post-production? Does anyone already do it in some form for their clients? I can imagine some real-estate people showcasing such content on top-grade TVs in their showrooms to wow the clients. HDR content on a 100" 8k TV sounds more impressive to me than nausea-inducing VR.

That's actually the whole point of ACES! Keep scene-referred data along the whole pipeline and work in a wide-gamut space so you can deliver to whatever display-referred space you need. So basically: pick your display transform from a dropdown list and deliver to the intended platform. You can already (kind of) do that in any serious post-production package that supports OCIO.
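
For illustration, here is a minimal sketch of what that dropdown amounts to with the OCIO Python bindings. It assumes one of the standard ACES OCIO configs; the config path and the colour-space names ("ACES - ACEScg", "Output - sRGB") are placeholders that depend on which config you actually load:

```python
import PyOpenColorIO as OCIO

# Load an ACES OCIO config (path is a placeholder for your own install).
config = OCIO.Config.CreateFromFile("/path/to/aces_1.2/config.ocio")

# Scene-referred ACEScg in, display-referred sRGB out. Swapping the destination
# for another "Output - ..." transform from the config is the "dropdown" part.
processor = config.getProcessor("ACES - ACEScg", "Output - sRGB")
cpu = processor.getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]  # mid-grey in scene-referred ACEScg
print(cpu.applyRGB(pixel))  # same data, now encoded for an sRGB display
```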

The only issue is that most renderers use sRGB primaries, and that can cause some discrepancies during the conversion from linear sRGB to the intended working space (mostly AP1 for us; that's the set of primaries defining the ACEScg gamut, which encompasses the Rec.2020 one). So it would be better to render straight into ACEScg from scratch. Since a renderer is almost colorspace-agnostic, you should already be able to do so, except for spectrum-related stuff (everything driven by Kelvin temperature), because those correspond to defined RGB triplets in the targeted colorspace (D65/6500K is the white point of sRGB, for example, but ACEScg has a D60 white point).
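
To make that conversion step concrete, here is a small sketch in NumPy. The coefficients are the commonly published Bradford-adapted linear sRGB (D65) to ACEScg (AP1, D60) matrix, quoted from memory, so verify them against your own colour-management reference before relying on them:

```python
import numpy as np

# Commonly quoted Bradford-adapted matrix: linear sRGB (D65) -> ACEScg (AP1, D60).
SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
])

def srgb_linear_to_acescg(rgb):
    """Re-express a linear sRGB triplet in ACEScg primaries (no tone mapping involved)."""
    return SRGB_TO_ACESCG @ np.asarray(rgb, dtype=np.float64)

# A fully saturated sRGB red lands well inside the wider AP1 gamut, which is
# exactly the kind of shift that bites when textures are not converted properly.
print(srgb_linear_to_acescg([1.0, 0.0, 0.0]))  # ~[0.613, 0.070, 0.021]
```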

The real issue here is all that HDR shit, to be honest. Every manufacturer applies a whole load of post-effects to make the image "look better" with no respect for the content creator's original intent. The only thing we should get out of that technology is the wider gamut; it should not have any impact on the dynamic range of the displayed medium (software-wise). All the stuff they pile on top of that is a massive load of crap, and a lot of film producers are starting to raise their voices against those marketing trends.

Ondra, would you consider moving your office to Mars once humans establish a colony on the red planet? I know, 37 additional minutes isn't that much, but it would still be an edge over the competitors :]

FStorm still does volume displacement, and it's fuc***g awesome. https://fstormrender.ru/manual/displacement. Ever wondered why FStorm users can use displacement on every single plant in their interiors, and why the displacement only takes something like 400 MB instead of 200 gazillion terabytes? This is why.

Looks like the exposure inconsistency is related to the gamma 2.2 part of the IOR mask generation. Why did you choose to perform those operations in gamma 2.2 only to put the result back into gamma 1.0 afterward? In my experience, playing with gamma while doing arithmetic in a node tree that is supposed to be plugged into a linear input always ends badly, especially with data as sensitive as IOR.
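
To make the point concrete, here is a quick numeric sketch (plain Python, made-up values in the 1.1 to 1.8 range discussed below) showing that mixing values in gamma 2.2 and converting back does not match mixing them in linear space:

```python
# Mixing two IOR-like values 50/50 in linear space vs. in gamma 2.2 space.
a, b = 1.1, 1.8
gamma = 2.2

linear_mix = 0.5 * a + 0.5 * b                                   # 1.45
gamma_mix = (0.5 * a ** (1 / gamma) + 0.5 * b ** (1 / gamma)) ** gamma

print(linear_mix)  # 1.45
print(gamma_mix)   # ~1.43 -- close but not equal, and the error depends on the values,
                   # which is enough to throw off something as sensitive as an IOR input.
```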

For people reading this later: feel free to simplify the material with pure float values, as Fluss mentioned.

There is a really strange behavior I cannot explain. After further testing, it looks like it's the 1/IOR part that is not working here, at least in the render result, since the material preview and the rendered material do not behave the same way. Here are some examples demonstrating the issue:

No IOR map plugged in (IOR 1.5 in the material):

1.5 IOR as a linear float Corona Color plugged in, works as intended:

Mixing 1.1 to 1.8 as linear floats using your node graph: the material preview is fucked up but the render looks fine to me:

Mixing using the 1/IOR method: the material preview looks fine but the render does not:

I have the feeling there is some weird gamma-related stuff behind this.

As for the glossiness darkening and the plasticky bump feeling, I'm not sure we are talking about the same thing (see my previous posts).

Wouldn't it be better to put the effort into Geopattern instead, which has way more uses?

If I had to choose which of Geopattern and this tech gets implemented first, I'd definitely go for Geopattern, because it is indeed way more versatile! It is not limited to fabrics, so the cloth shader is not the priority TBH.

But Geopattern will still take much more effort than this tech, simply because you have to model the pattern by hand. What's more, Geopattern is useful for small repetitive cloth patches, but as soon as the yarn pattern is not consistent across the full mesh, it's another story. The real benefit of this is its procedural nature: you can reproduce patterns in no time.

Here is a small example (the patterns don't match between screenshots, but you get the idea):

Build the pattern with the pattern editor:

And you're done! This pattern will be used by the shader to build the yarn structure:

Then each yarn is replaced by a fully simulated fiber model with both dense fibers and flyaway fibers (those are what create the fuzziness). And you get control over a few other parameters:

That would be the ultimate solution for cloth, which has been a tedious part of CGI for a long, long time.