Every single time I do look development these days, I first set my tone mapping to a response curve that somewhat resembles a digital camera's: a bit of highlight compression, a bit of contrast boost. This is very important for correctly seeing HDRI environments, and for correctly replicating materials from photographs in CG, because photographs are usually captured by a camera, which has such a response curve.

It also seems that FStorm's experiment of defaulting to a camera-like film response rather than LWF/sRGB has met with great success and overall positive feedback.

Now, there are some valid reasons why most renderers still default to LWF/sRGB, the two major ones being:

1, As soon as your output stops being linear, you can no longer correctly composite individual render elements.

2, If you apply some sort of tone mapping and bake it into the final output, you destructively lose a bit of dynamic range data.
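Reason 1 is easy to demonstrate in a few lines. This is only a sketch: it uses a simple Reinhard curve (x / (1 + x)) as a stand-in for any non-linear tone-mapping operator; Corona's actual curve differs, but the argument holds for any non-linear mapping.

```python
def tonemap(x):
    """Reinhard-style compression, standing in for any non-linear tone map."""
    return x / (1.0 + x)

diffuse, reflection = 0.6, 0.9        # linear radiance contributions
beauty = diffuse + reflection         # linear ADD compositing: exact by definition

# Tone-mapped beauty vs. sum of individually tone-mapped elements:
lhs = tonemap(beauty)                          # 1.5 / 2.5 = 0.6
rhs = tonemap(diffuse) + tonemap(reflection)   # 0.375 + 0.9/1.9 ~ 0.849
print(lhs, rhs)  # the two no longer match, so ADD compositing breaks
```

Because the tone map does not distribute over addition, elements saved after tone mapping can never be re-summed into the beauty pass exactly.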

Nonetheless, I think these reasons have become historical by now, because:

1, Compositing of separate render elements has become a rather rare and niche workflow. It was essential back in the day, when rendering was not physically based by default, so things needed to be "made to look right". That is not the case anymore. It would be more reasonable if those who use rare workflows had to go the extra step and make their renders linear when they want to do advanced compositing, because the majority of Corona users do not.

It could even be implemented as a one-button solution, simply called "force linear output" or something like that. But there's not much reason to abstain from the joy of having Corona behave like a digital camera by default just because of the few who still use workflows that are now becoming legacy.

2, The main reason people choose not to do tone mapping in the VFB, but in post instead, is arguably that "they can bring back highlights in post". The thing is, if you apply some sort of tone mapping, such as highlight compression, you do it mainly to bring those highlights back in the first place.

If you save a tone-mapped image whose highlights are not completely clamped, just slightly compressed by tone mapping, in at least a 16-bit format, you will still be able to go back in post and adjust, for example, the tonal contrast of the highlights without getting any banding or artifacts. Yes, the gradient won't be as precise as it would be in a linear image, but then again, neither would footage from a real movie camera.
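A small sketch of why compressed (rather than clipped) highlights stay recoverable: a smooth compression curve is invertible, so values above display white survive into the file and can be re-graded later. Again, the Reinhard curve here is only an illustrative stand-in, not Corona's actual highlight compression.

```python
def compress(x):
    """Reinhard-style highlight compression: maps [0, inf) into [0, 1)."""
    return x / (1.0 + x)

def uncompress(y):
    """Exact inverse of compress(), so highlights can be re-graded in post."""
    return y / (1.0 - y)

hdr_highlight = 4.0                  # linear value well above display white
stored = compress(hdr_highlight)     # 0.8 -- fits comfortably in a 16-bit file
recovered = uncompress(stored)       # back to 4.0 (up to quantization error)
print(stored, recovered)
```

Had the highlight been clipped to 1.0 instead, no inverse could bring it back; that is the practical difference between compression and clamping.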

The main idea is that if you went into compositing, you would already start off with something that's a lot closer to movie camera footage than to linear render output, so you could skip the entire step of first making it look tonally realistic before proceeding to some creative, moody grading.

Many people praise Corona for being a lot like a point-and-shoot camera rather than a cumbersome technical tool, so I propose pushing Corona even closer to that ideal digital camera behavior. I think it's time to enter a new era of rendering, where renderers become complete simulators of movie/photoshoot sets, simulating most real-world phenomena. Not just simulators of light transport and surface and volume shading, which then get printed onto a pixel-perfect radiometric grid (a digital image), but also of optical effects and the digital film's response to the light that reaches the camera's film back: contrast, glares, subtle blurring and sharpening, possibly even lens flares, and so on. Basically a state where, if you had a near-perfect representation of a real-world scene, with scanned geometry and shaders, you would get an image indistinguishable from an actual photo, without needing to work hard for it in Photoshop or other compositing software.

Therefore, I'd like to know your opinion on breaking the old habit of linear being the default, in exchange for the greater good of the future. :)

Totally agree! The whole linear thing is just confusing and a pain in the ass. Especially since the Corona 1.5 VFB controls, most of the images I make won't even see the post-production part, so having even more realism in Corona itself would be awesome!

Coincidentally, Blender Guru posted a video about sRGB in Blender yesterday, about why it sucks. I'm not really at home in the whole technical stuff, so I don't know if it also applies to Corona:

Yep, actually I've been thinking about it for a year or so already, but just today I saw that video, and it finally pushed me to post about it. The video actually contains a lot of inaccuracies and sometimes nonsense, but regardless, the overall point he's making aligns with mine :)

Great post, I've had similar feelings about this for some time. I think rendering software needs to evolve to suit how people use it day to day. I've been eyeing up FStorm for some time now due to how beautifully photographic the images it produces look out of the box/with minimal post-processing. Daniel Reutersward's images are a clear example of this: https://www.facebook.com/danielreuterswardvisualisation/

Another prime example is some of the stuff from JakubCech on here and how he talks about emulating a real camera in post: https://corona-renderer.com/forum/index.php/topic,14288.msg91657.html

Regarding the mapping, I would not like to be very specific, as for me it's like the Coca-Cola formula, but I can say that I have been polishing and developing it for a few years now and finally have it in a compact, ready-to-use-every-time form. It's based on post-processing the raw 32-bit linear image using software emulation of some of the real photographic processes. Complicated stuff made easy: the core is to save in linear 32-bit, then apply some processes (like bleach bypass etc., but precisely). Two years ago I managed to bake it into a LUT and used VFB+ for a long time, but finally Corona comes with LUT support, thank god :)

Jakub

Some renders from him that really blew me away: https://www.behance.net/gallery/23707939/The-Ranch

1, Compositing of separate render elements has become a rather rare and niche workflow. It was essential back in the day, when rendering was not physically based by default, so things needed to be "made to look right". That is not the case anymore.

As a 20-year veteran of the CG/VFX industry, I can categorically state that the entire VFX industry still comps using layers/passes/elements. Every single film you see with VFX is done this way.

Keeping the data pure (linear) is the only real standard across the entire industry. Not doing so will deeply hurt any inroads Corona wants to make beyond whatever currently small user base wants what you are requesting.

Please do not change this.

I've heard quite the opposite in recent years... And from several independent high profile sources.

It's not about removing an option to render linearly. It's just about linear output not being the default.

I think the major roadblock here is old CG veterans, who often do things just because "that's the way it has always been done", without ever stopping to think about whether things could be done better.

Can someone explain for dumbass me: are we talking about changing default tone mapping values, like HC, contrast, curves, or something entirely different, like changing the colour space from wideRGB to something else? I got confused by the original post and this video from Blender Guru.

It would not be buttons, just one button. The point here is not to have a set of presets for everyone to pick from. The point is something completely different: fundamentally changing what we perceive as the default image.

Right now, we perceive linear sRGB as the default, the starting line, and we then work with some parameters to bring that sRGB close to photorealism. We manually have to twist knobs in order to take a picture which by default is not realistic to our eyes, and, using some controls, turn it into an image our eyes perceive as photorealistic. So why not just skip this process and have renderers output the same ranges that cameras do by default? If you take a picture with your camera, you don't tweak it to look more photorealistic, because it already is a photo; it is realistic. You tweak just the mood, using some artistic controls. There's no significant reason why a renderer should not work the same way. Not by having a dropdown where you can pick from numerous response curves, one of which is called photorealistic, but by having it default to a camera, with an option to switch to a very special mode which will make your output less realistic, but composable in post.
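To make the contrast concrete, here is a small sketch comparing the standard sRGB transfer function, which clips everything above 1.0 to the same white, with a generic filmic curve that rolls highlights off smoothly. The Hejl/Burgess approximation is used purely as an illustrative stand-in for a "camera-like" response; it is not Corona's curve or any specific camera's.

```python
def srgb_encode(x):
    """Standard sRGB transfer function; anything above 1.0 simply clips to white."""
    x = min(max(x, 0.0), 1.0)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def filmic(x):
    """Hejl/Burgess-style filmic approximation (gamma already baked in)."""
    x = max(0.0, x - 0.004)
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06)

for v in (0.18, 1.0, 2.0, 8.0):   # mid-grey up to a strong highlight
    print(v, round(srgb_encode(v), 3), round(filmic(v), 3))
# sRGB maps 2.0 and 8.0 to the same white; filmic still separates them
```

This is the whole point: under plain sRGB, every highlight above display white is indistinguishable, while a camera-like curve preserves highlight gradation, which is what our eyes are used to seeing in photographs.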

This is not just a discussion about some feature design. It requires some out-of-the-box thinking, some thinking about the future of CG imagery in general. You can't really perceive it properly if your mind stays within the bounds of established workflows.

I think the major roadblock here is old CG veterans, who often do things just because "that's the way it has always been done", without ever stopping to think about whether things could be done better.

This is incorrect. Linear is the way it is done because of maths. All of the operations to reconstruct the various colour components are based on simple operations that can be accurately reproduced in any renderer and compositor.

We manually have to twist knobs in order to take a picture which by default is not realistic to our eyes, and, using some controls, turn it into an image our eyes perceive as photorealistic. So why not just skip this process and have renderers output the same ranges that cameras do by default? If you take a picture with your camera, you don't tweak it to look more photorealistic, because it already is a photo; it is realistic.

One major thing I've learnt in the VFX industry is that there is no standard for 'photorealism'. As a lighter by trade, I've many times output what I think are 'photoreal' setups, and backed them up with real-world data, only to be told that it doesn't look 'real' by the client (many times the best directors in the industry).

Quite simply, 'photorealism' is purely subjective, a moving target that can never be pinned down.

I think there has been a misunderstanding. I've never claimed that images straight out of the VFB are being used. What I've claimed is that the workflow of rendering Diffuse, Reflection, Refraction, Indirect GI, SSS and Self-Illumination passes, and then compositing them back in post using an ADD operation just to reconstruct what in the end becomes a 1:1 beauty pass, is not used much anymore. Mostly because, now that we have physically based rendering, color correcting separate light path components does not make things look better anymore. If anything, it actually makes them look worse.

But people still render out lots of passes and masks, and those can be composited together even without the output having a perfectly linear curve. Non-linear image output only removes the possibility of compositing separate shading components so that they add up to an exact, pixel-perfect 1:1 beauty pass.

Precisely as you said, linear has been the standard because compositors needed to reconstruct color components using simple mathematical operations (add and multiply) to get the beauty pass. But that was mainly so that they had separate control over those individual color components. And they needed that control mostly because they needed to make bad-looking CG pop. Nowadays, thanks to physically based lighting, shading and rendering, unless a very unskilled artist is involved, it's very hard to produce bad CG in a way that can be fixed by tweaking separate shading components. If someone sets up a bad material, you can only rarely fix it magically by selecting, for example, the reflection component of the beauty pass and boosting the reflection on a certain object. Yes, it may improve it slightly, but nowhere near the amount of improvement achieved by actually going back to the 3D scene and fixing the material there, which changes the illumination on the surrounding surfaces based on the material's new properties.

As for there being no standard for photorealism: I am talking only about a standard for displaying shading and lighting from the renderer on an average screen. I am talking about this: http://acescentral.com/

And I think photorealism is far from subjective. Actually, it can be defined quite easily: a computer-generated image/video that the majority of people cannot distinguish from a photographed image/shot video.

Compositing is much more complicated than simply adding together a bunch of layers. What if you needed to reduce the specular component on a specific object, or desaturate a specific object, or boost the SSS of a specific character? Or what would you do if you couldn't render every object in a single pass due to memory limits, or had to combine passes from different renderers? All of this happens on practically every single shot, on every single show I've worked on. You would be amazed by how much detail is sculpted into the final comp. All of this is made possible by rendering out the necessary layers.

Saying that grading individual passes and combining the result looks worse just proves my point that photorealism is subjective. While you might think it looks worse, the director thinks it looks better.

Also, many places render out deep images, hence bypassing the need to render masks, which makes the individual components even more important.

If photorealism weren't subjective, everyone who takes the following test would get a 100% score: http://area.autodesk.com/fakeorfoto

For the 3 years I worked at the PFX VFX studio, it was only on very rare occasions that we ever needed to do things like boosting specular or SSS in post. Actually, if you conform to PBR rendering, doing such things simply breaks the shaders. I almost never had a supervisor, director or client request something to be boosted specifically in post, as it almost always looked worse. We got away with it on some very cartoony things, but in photorealistic scenes, things always looked far better when tweaked right in 3D. Even simply boosting a reflection layer caused a mismatch between the directly visible surface and the reflected surface. The bounced light then usually did not add up either.

I would also recommend this article from Blur, who are known for their high-quality work on fully CG projects: http://www.creativebloq.com/blur-studio-elder-scrolls-online-cinematic-2123047

A few quotes from the article: "To streamline production we strive to render as many elements together as possible because it's the fastest way to achieve realistic results with V-Ray. This minimizes the risk of inferior results trying to reconstruct various passes during compositing."

"Five or six years ago we could produce elaborate breakdowns of simple shots into hundreds of passes all cleverly composited back together. It's still somewhat true when we deal with cartoony or stylized projects, but I find that the key to a realistic project like Elder Scrolls is to spend most of your time inside the 3D package."

"Ornatrix and V-Ray can now render hair strands as splines at render time, which allowed us to render our characters and their hair in the same pass and using the same GI lighting solution."

Regarding the Fake or Foto challenge, I wanted to bring it up myself, since, like you, I consider it proof of my point: that photorealism is hardly subjective, and that the vast majority of people find the same few staff-picked images photoreal. The fact that it is hard to get a 100% result in this challenge, even for a trained eye, shows that CG images which fool the majority of people are quite commonly produced. The challenge is not about people voting on how photoreal they consider a particular image to be, but about whether they can distinguish CG images scattered among actual photographs.

Rawalanche, do you have some specific values you'd like to become the defaults? If so, can you share them with us? I'd like to try them and see if they magically make my renders more photoreal :] Right now I have to tweak tone mapping for each scene individually. If your proposal takes place, I will have a different starting point, but I will still have to tweak those controls. Would that change anything?

Not yet, I haven't even tried it myself. I am asking Dubcat to do some of his LUT magic to try ACES in Corona.

That being said, you will always need to tweak something on a per-scene basis, in the same way most photographers still take their photos into Photoshop to fine-tune them. You would not necessarily get a super pretty image right out of the box, but you would get something a lot closer to what would happen if you converted your 3D scene into the real world and actually took a picture of it with a digital camera.

It also makes eyeballing material properties from photos and translating them into CoronaMTL settings a lot easier :)

I was thinking about this a lot lately too. We will almost certainly do it for 1.6. Now the question is what the defaults should be. And would you consider taking this further, with perhaps auto-exposure/auto-contrast? That is the real difference that cameras make.

Rawalanche, I'm telling you: on all of the projects I've worked on, whether it's X-Men, Wolverine, Hunger Games, Game of Thrones, etc., and whether the objects conform to PBR or not, the comments about what looks real and what doesn't have all been subjective to whoever is commenting on whatever submission is being commented on.

I use and have used many renderers professionally in my career: PRMan, 3Delight, Mantra, Arnold, V-Ray, Mental Ray, Maxwell, and more, and I really do hope that Corona doesn't break with what the entire CG/VFX industry puts out by default. It will just cause unnecessary confusion.

I'd suggest giving this a watch:

I still don't think we understand each other. You won't lose the ability to render linear. It will still be there. It just won't be the default; it will be one click away.

I saw that video today, and I was disappointed to redo the test and see the "bad" behavior on light saturation. I'm a total idiot regarding color, but I felt like the video was onto something. I was unsure what to even say or propose, so I'm glad someone else took the time to post it, and that Ondra actually recognizes it as something worth doing :)

I was thinking about this a lot lately too. We will almost certainly do it for 1.6. Now the question is what the defaults should be. And would you consider taking this further, with perhaps auto-exposure/auto-contrast? That is the real difference that cameras make.

But what would be the standard for auto contrast/exposure? Auto-exposure in DSLRs can be pretty shitty at times...

I would agree on some default filmic tone mapping with an option/checkbox to export as linear (in the render settings, and maybe also in the VFB).

Why fix something that's not broken? What's the point of having a few parameters set to arbitrary values when it's a matter of seconds to set them myself? I'd rather control this myself, and honestly I don't want to lose any of the parameters of the VFB post-processing.

Do we really need another change in default behavior for 1.6? And then yet another 'legacy' checkbox in order to achieve the same result as in prior versions when re-rendering a scene? Do you realize that with every point release we get a new default somewhere, and that it takes effort to make sure the result is exactly the same when re-rendering an old scene, which introduces uncertainties?

I should add that having an auto-exposure/auto-levels button wouldn't hurt. But I really would hate it if any of the defaults that are in place right now changed and became yet another thing to keep in mind when trying to match an older render.

You see, this is what I meant by completely changing the mindset. You still perceive camera response as some sort of post-processing option, and you still perceive linear sRGB as the right default, the baseline. But the point here is that sRGB is simply not the right color space to display linearly rendered light in. You have some input, in this case a computer-generated image, and you want to display it on a monitor in a way that resembles what the human eye sees in the real world as closely as possible. Digital cameras are already very good at it, but most renderers are not, yet...

Back in the day, people were rendering in the wrong, linear Gamma 1.0 space; then someone came up with the linear workflow, and lots of people popped up saying "Why change something that works, why introduce new confusion?", and then LWF slowly became the standard. This is simply another step in the evolution of displaying computer-generated lighting and shading data in a way that is most natural to the human eye.
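For anyone who missed that transition, here is a tiny sketch of the kind of error gamma-space math produces, which is exactly what LWF fixed. Simple power-law gamma is used here as an approximation of the sRGB transfer curve; the exact curve doesn't change the conclusion.

```python
GAMMA = 2.2

def encode(x):
    """Linear light -> gamma-encoded value (approximation of sRGB)."""
    return x ** (1 / GAMMA)

def decode(x):
    """Gamma-encoded value -> linear light."""
    return x ** GAMMA

black, white = 0.0, 1.0

# Averaging a black and a white pixel (e.g. a 50% blur) should yield
# 50% of the light:
correct = (black + white) / 2                         # 0.5 in linear light
# Doing the same average on gamma-encoded values and letting the display
# decode the result lands noticeably darker:
naive = decode((encode(black) + encode(white)) / 2)   # ~0.218
print(correct, naive)
```

This is why light-transport and blending math has to happen in linear space; the debate here is only about how the result is displayed afterwards.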

You are still too preoccupied with mathematical data rather than visual data. You want the output to be linear by default because you are used to adding the final photorealism to the output yourself. But it's also important to think about non-technical users, newbie users and future users. You hardly find anyone trying to remove all image processing algorithms from the digital camera they bought in order to get mathematically pleasing data out of it.

I think every step taken towards reducing the number of manual steps newbie/migrating users need to achieve ultimate photorealism is a good step.

If we got new defaults, there would still probably be some legacy mechanism to render scenes exactly as they are.

Matching old renders is a rather rare request, definitely not done on a daily basis by most users. Why would you match something old when you can make it look better? I can understand it being a client request, but if someone has a very niche client base with very specific requests, general renderer defaults should not be built around that.

On a more general note, I'd dare to say that over 95% of the Corona user base does not produce images by compositing separate shading elements (Refl, Refr, Diffuse, SSS, Self-illum, etc...), so nothing would be lost. Specialized render elements like velocity, world position, normals and such are already excluded from tone mapping by default. This could probably be taken one step further by excluding all CESSENTIAL render elements from tone mapping too, so that only the beauty pass would be affected.

But if 95% of the user base does not do shading element compositing, and a similar portion uses Corona as a virtual digital camera, why should we conform the defaults to the ~5% minority?

Anyone who has ever done successful shading element compositing knows that things need to be kept linear, so anyone who does it will know to linearize the image (remove tone mapping) before proceeding to the compositing stage. Whereas most new and future users who aren't technically minded, but are perhaps ex-photographers, will not enter the realm of CG rendering knowing there are extra steps you need to take to achieve photorealism. If we make it behave more like a digital camera by default, good results out of the box will be a lot closer for them.

Right now, Corona is not yet a very VFX-capable renderer. And by the time it becomes one, I am quite confident that viewing rendered images through a camera response rather than plain sRGB will be a well-established standard (in the same way I predicted 8 years ago that PBR would become the standard, even in games, while all the Blinn, Phong and ambient occlusion heroes were mocking me :) )

I haven't used 32-bit channels and probably never will. That's not to say others don't need them, as it is one of the standard ways of working, and yes, experiences vary from person to person. Please respect this and never judge based on your personal experience only.

The problem with a camera response is that there isn't one camera response across all camera models - quite the contrary, every camera has its own set of algorithms, and sometimes they're made different only to make sure the price tag is justified. Camera vendors: same thing, each of them has a processed look to make sure your clients know what they're buying (Nikon and Canon have distinctive looks, kept that way artificially so as not to alienate their customers when introducing new sensors). So what kind of progress is it to arbitrarily impose a certain way of processing rendered images, when all we need to customize them to our liking is already there?

I'm all for new tone mapping algorithms; in fact, Filmic Shadows was a wonderful addition. I'd only want this if I could get back to the old way with a click or a setting in the defaults. Again, the simple solution would be to introduce a 'Make it photorealistic' button that sets the parameters according to the algorithm you come up with, and leave everything else as it is.

As for legacy settings - well, this is a mess. 1.2, 1.3, 1.4 and 1.5 all introduced new things that need to carry code from earlier versions in order to render legacy results, and I assume it must be a nightmare to maintain that code.

And as for always-photorealistic-out-of-the-box: that's a holy grail promise you will not be able to live up to, as it always relies on the artist's skill and eye to properly set up a scene, lights, materials etc. How many renders have we seen from top-end renderers that are simply crap because some people don't go the extra mile and polish their materials, or actually work on their image to make it really good? That's something you'll not be able to overcome with some math behind it.

I really don't want to dismiss the idea just for the sake of keeping everything as it is. It's just that it doesn't convince me why it's better than what we have.

I don't get the point about cameras. Yes, they all have different curves, but they are all superior to CG-rendered light and shading interpreted in sRGB; that's the issue here. And while the image processing curves of different cameras are there to make an already real image pop, in CG we don't even have that reality baseline, because we display synthetic, generated light and shading through a response curve that is different from the way the human eye is used to seeing reality captured by digital devices in the form of digital photographs.

The whole idea of the button is problematic in that you basically have a wrong way of displaying something, with the right way hidden behind a button. You wouldn't expect any rendering software these days to come with Linear Workflow disabled by default, with a button somewhere in the settings that says "click here to enable LWF", now would you?

These days, LWF is simply the standard, and camera response is an evolution of that standard. Again, it's wrong to perceive it as an additional option. It's intended to be an update of the default.

I also did not claim that this change would make photorealistic images out of the box. What I (obviously) intended to say is that whatever anyone renders would still, by default, be closer to photorealism than with the old workflow. Even if it was a complete noob who just put a gray cube on a gray plane, lit by a Corona spherical light, it would still look a bit closer to what the same scene would look like if it was re-made in the real world and shot with a digital camera.

What's also important is that most people do not realize that in order to successfully translate material properties from photos into 3D, you first need to at least roughly match your tonality to an average camera response; otherwise it gets really hard to nail the material properties when translating them from a photo to a CoronaMTL by eye. Even I found this out relatively recently, and ever since, I start off with Highlight compression at 1.75, filmic shadows at 0.5 and contrast at 2 whenever I create a scene. And this is knowledge most people do not have. It would simply increase their success rate without them having to actively research it and come to this conclusion, which personally took me several years. I wish I had known this earlier... No, actually... I wish there had been some renderer that would do it for me earlier :)

Ok. Let's presume you replace the current state (which I expect) and someone saves out a linear image for comp - how will they know what Corona did to the image, to be able to reproduce it in comp? Will it be a black box, without info on what it did behind the scenes? I assume it will be. So instead of users asking how to use tone mapping, you'll now get questions from people asking how to match the VFB.

Now what I'd really like to see is comparisons of the old way vs the new way. That would really help.

Rawalanche, at the end of the day Corona is your software and the direction it goes in is ultimately yours. If you think (as well as the majority of others in this thread) that this is the way things should be done then please do it! As a general rule people don't take to change kindly, even if the newer option is objectively better.

As you've noted 95% of people in the CG business are only concerned with producing visually pleasing images with the least amount of time/effort required to get there. Streamlining that process is a plus and not a negative. People who are hung up on antiquated workflows will simply have to adapt to changing times (and you've even stated there will be a legacy pure linear option for compositing so I have zero idea why anyone is fighting you on this).

Haha, definitely not mine, but Ondra's. I just occasionally talk into UI :)

Isn't the LUT section already pointing in the direction that Fstorm and Octane etc. are working in? It would just be consistent: if photorealism is the ultimate goal, why shouldn't the rendered image behave like a camera picture?

tbh1: I don't actually care which "workflow" is behind the image I am working on, as long as it looks good in the end - and I would appreciate nothing more than an idiot-proof solution. There are so many things you can screw up in an image/project; render settings shouldn't be among them (I hear the old hares cry "you make rendering too easy" already)...

tbh2: most people don't work on Iron Men, X-Men or Wolverine-men - in large studios, with long pipelines, where it might make sense not to go 5 steps back into the shading department to change the specular value of some random item and rather "fix it in post", no matter if that breaks the image or not, just from a practical point of view... I would assume most users are sitting in small teams doing fast turn-around jobs (not in X-Men quality) and would be happy if there were a "render-cool-button".

so would that be the said button? the "instant-photoreal-button" we all dreamt of all those long SAD years? Make rendering finally great again guys! :D

Ok. Let's presume you replace the current state (which I expect) and someone saves out a linear image for comp - how will they know what Corona did to the image, to be able to reproduce it in comp? Will it be a black box, without info on what it did behind the scenes? I assume it will be. So instead of users asking how to use tone mapping, you'll now get questions from people asking how to match the VFB.

Now what I'd really like to see is comparisons of the old way vs the new way. That would really help.

If someone saves out an EXR image for compositing with camera-response tone mapping baked in, and opens it in Fusion, or Nuke, or anything that loads EXRs with correct gamma, they will get a result exactly 1:1 with what they had in the Corona VFB. They won't need to know what happened to the image as long as they get the same thing as in the Corona VFB. The problem would happen only if they tried to compose CESSENTIAL render elements, where they would get a different result.

Now, first of all, I doubt most new users will ever encounter this. As I already mentioned, this workflow is becoming obsolete. Secondly, new users most likely won't be able to tonemap in post, because no compositing software ships by default with a node containing tone mapping similar to the Corona VFB's. The Corona VFB has quite refined tone mapping tools compared to Fusion/Nuke/AE, and so on. There won't be any black box; users will know exactly what's going on just by looking at the tone mapping settings in the Corona VFB.
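The compositing objection is easy to demonstrate: a nonlinear curve doesn't distribute over addition, so tone-mapped elements no longer sum to the tone-mapped beauty. A toy sketch, with a plain Reinhard curve standing in for any tone mapper and made-up element values:

```python
def tonemap(x):
    # a plain Reinhard curve, standing in for any nonlinear tone mapper
    return x / (1.0 + x)

diffuse, reflection = 0.8, 0.6          # made-up element values
beauty = tonemap(diffuse + reflection)  # what a tone-mapped VFB would show
recomposed = tonemap(diffuse) + tonemap(reflection)  # naive add of mapped elements
# beauty is ~0.583 while recomposed is ~0.819: once the curve is baked in,
# CESSENTIAL-style elements no longer sum to the beauty pass
```

This is exactly why the proposal above excludes CESSENTIAL elements (or offers a one-click linear mode) rather than baking the curve into everything.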

This is such an interesting topic but also one very very difficult to fully understand.

In fact it's so hard on the brain cells that many, many cinema production professionals still don't understand that when they get their 12/14/16-bit RAW files from their 60-thousand-dollar cameras, they just get an image that has not yet been debayered but has already been heavily modified by the firmware magic in the camera - the secret sauce of every camera manufacturer (just like pokoy wrote earlier). They actually believe what they get is the direct signal from the sensor, which is very far from the truth.

Canon and Nikon DSLRs perform amazingly in studio lighting (light that is usually very "white" in kelvin temps) and really rival analog film in these situations. But when you switch to outdoors or different types of lights, the image usually falls apart and does not look good anymore (while analog did) - it needs to be brought back in Photoshop or another image processing application. Canon, even with their understanding of color, have failed at creating a proper cinema camera (C100, C300, C500) - they have created a sort of bland, depressing look which actually works for documentaries but not for cinema. Their C700 camera has colors and "tone-mapping" almost identical to their DSLR range, which in my opinion won't work for cinema - this only shows that they are trying to backtrack to something that worked for them in the past... they are out of ideas. A multibillion imaging hardware corporation has really run out of ideas, which is something really amazing.

Arri Alexa - the first camera ever to provide a very durable "cinematic" look right out of the camera. Great dynamic range and beautiful color - very nice desaturation of highlights. Their color processing and dynamic range are still unrivaled, and the camera hardware is over six years old already.

Maxwell render - Maxwell render during beta and the first version had a very special tone-mapping and color response.. it made images really look good without any post work.

So really.. it'd be amazing if someone ever would get his hands on the direct firmware code of the cameras to see what color math acrobatics they do. LUTs are simple linear transforms which do not catch all of the intricacies of what goes on in the firmware.

Getting to know what the cameras really do (from sensor signal to raw file) would really help the renderer developers too - if you could quickly match your renderings to live action footage, that would be insanely helpful for VFX.

The Blender video is all about tone mapping, right? As far as I can see, it has nothing to do with sRGB, it's just being mentioned as the culprit but it's actually tone mapping he's talking about.

Similarly, I'm not sure why sRGB is mentioned in the thread title. From what I understand, the initial idea is to add a default tone mapping preset replacing the linear display we have in the VFB now (though we can't be sure there isn't a tone mapper already present that just isn't exposed to the user). sRGB is the color space Max displays in; it's not able to use anything else and falls back to the Windows default color space, which is sRGB. So without adding color profile support to Corona's VFB, it doesn't really make sense to mention it.

However, if you're about to add color profile support to Corona's VFB, we might actually achieve a more natural look. The current widely used standard in professional photo workflows is eciRGB v2, which is meant to reliably reproduce colors present in nature, with an emphasis on blue-cyan/orange-yellow tones. This would indeed help in the VFB, as sRGB is pretty dull. However, it's something entirely different from tone mapping - if that was the original idea - and claiming to overcome the limits of the sRGB color space by adding a default hidden tone mapping curve is misleading.

I guess there's a reason why you want to add this. More or better tone mapping options will certainly not hurt. And as long as we get a legacy behavior checkbox so comp departments get their channels right, I'm all for it. Also, please consider adding color profile support (per scene, not as a global default) as this can really have an impact on how the values are mapped to final colors after tone mapping and it's a very important factor indeed.

Back in the day, people were rendering in wrong linear Gamma 1.0 space, and then someone came up with linear workflow, and there were lots of people popping up saying "Why change something that works, why introduce new confusion?", and then LWF slowly became standard.

What in the world are you talking about? It seems you are misinformed about the history of colour reproduction on digital devices.

What's more you seem to be throwing around CG related buzzwords at will without knowing what they do, why they exist or what the relationships they have with one another.

Now, first of all, I doubt most of the new users will ever encounter this. As I already mentioned, this workflow is becoming obsolete.

You keep saying this, and I'm telling you straight out that you are wrong. There is, and always will be, a group, usually hobbyists and smaller boutique studios, that wants a final image from a single render, but there will always be the rest of us, from hobbyists all the way up to the highest-end VFX studios, who want, and need, to render out in passes. Please stop spreading misinformation.

Well, then set it straight, if you know better. If you have been in CG for so many years already, you should remember quite well the period many years ago when LWF was not yet standardized, and was even off by default in many of the mainstream DCC packages. Actually, in most of them. And people had no idea that they were viewing images in the wrong color space, and were doing terrible things in post to fix the problems.

You seem to be constantly reminding me how much more you know, because you were around back in the age when dragons were wreaking havoc across fields of our kingdoms, but you still haven't gotten into specifics.

I am telling you, already for the third time, that the image being tone mapped by default won't result in passes being unusable. Most of the specialized render elements, such as world position, normal and velocity, are excluded from tone mapping by default. If this solution were implemented, only CESSENTIAL render elements would be affected (actually, it is quite possible only the beauty pass will end up being affected), and fixing it will be one click away for anyone who needs to do compositing.

You have a completely biased view of how the userbase divides. It's not 50:50 between those who composite separate render elements and those who render everything together. It is more like 5:95, and it's shifting further towards one-pass rendering every year. And do not misunderstand this as not using passes like world position, velocity and such. I am strictly talking only about the outdated process of compositing shading elements together (CESSENTIAL elements).

The entire point is taking a virtual computer light and surface simulator and making it output images that are a bit closer to how the human eye perceives the real world by default. That's all. This will have more and more priority in the future.

Also, since you mentioned working on all sorts of Hollywood blockbusters, I could not help myself but to search for some of your actual work, to be blown away. Only thing I found was the Cyan Eyed project, and coincidentally, only 3D renders from you I saw seem to be suffering heavily from the lack of proper tonemapping, showing the usual pathological burned out highlights and oversaturated color range. A bit ironic :)

The entire point is taking a virtual computer light and surface simulator and making it output images that are a bit closer to how the human eye perceives the real world by default. That's all. This will have more and more priority in the future.

Then you really should consider adding color profile support to Corona's VFB. Do some tests with VFB+ (free since a few days, thanks Rotem!) and see why it might be a good thing.

I understand why it makes a few users nervous to mess with the way render elements are handled. Even if the 95/5 ratio works out, I'm pretty sure the Corona team will be glad to hear that a larger studio used it on a blockbuster film in the future, and since these happen to have dedicated comp departments, they will need the elements untouched. If it's really a click away and the hot new feature will not hurt the pros out there (I don't consider myself one), then fire away.

One thing I'd still like to know is what exactly will this new thing be? I mean, technically?

Well, Ondra said it could be possible to exclude CESSENTIAL render elements from tone mapping, like the other elements. That would be the ideal scenario. The only elements you would see tone mapping on would be beauty and LightMix.

And yes, I am sure people in big VFX houses are smart enough to click that one button and then save it as default, as Corona allows for that.

As for technical solution, it's not decided yet. There's a discussion about a few different options at the moment.

Also, since you mentioned working on all sorts of Hollywood blockbusters, I could not help myself but to search for some of your actual work, to be blown away. Only thing I found was the Cyan Eyed project

So you want to get personal, do you? Well then, not only do you not have an understanding of CG principles, but you also seem to fail at Googling. I'm not going to hold your hand all of the way, but as a single example, I was the lead lighter on this: http://www.cinemablend.com/new/How-Did-They-Film-Quicksilver-Amazing-X-Men-Days-Future-Past-Scene-43157.html

I've also made two animated short films that have played in a combined total of over 100 film festivals, one of which was accepted into Siggraph's Electronic Theatre, which only showcases the world's best work.

and coincidentally, only 3D renders from you I saw seem to be suffering heavily from the lack of proper tonemapping, showing the usual pathological burned out highlights and oversaturated color range. A bit ironic :)

You also fail to understand what a work in progress is, how much work actually goes into a project like Cyan Eyed, and how the CG pipeline works (first make the assets, then layout, then animate, etc.). So, to put it simply enough for you to understand: Cyan Eyed hasn't finished the asset build and animation stage yet. Judging the look at this point completely undermines your credibility, since no frame of the film has had a proper first pass of lighting and comp yet. Everything you say just proves to me that you don't know what you are talking about.

Well maybe I don't. I mean really, I am still waiting for actual example of how it should work, instead of saying why it won't.

I mean, if I wanted to present something to a wide audience in order to get funding, I would have probably spent at least a few minutes in some sort of compositing software to present something that a wider audience can perceive as pretty, instead of pointing at limitations of the pipeline process.

I mean, that's exactly the point of this thread. It was just WIP, as you said, but if I am not mistaken, those pictures were rendered in Corona, right? Well, Corona already has quite capable tone mapping right inside the VFB. Nothing prevents you from presenting images with some basic, nice post processing straight out of the VFB, even at the work-in-progress stage.

That is exactly it: a new approach that will allow you to see something a lot closer to the end result, without waiting ages before it actually reaches the end of the pipeline through numerous people. Or, when working as a freelancer or in a smaller team, allowing you to see something that is nearly final output without waiting until you get to the compositing package.

If It sounded too personal, then I genuinely apologize. I was not proving you are not skilled, you most definitely are. I was just trying to point out that you yourself may have actually encountered a situation this solution would help with.

Ondra said in another thread that Corona DOES NOT SUFFER FROM THIS. I'm glad :) but I tried the desaturation example shown in the video in Corona 1.5 hotfix 1 and it seems to suffer from the same "problem" described in the video. I won't pretend to understand any of the underlying technologies or processes, but shouldn't the colors desaturate as exposure/light intensity increases?

Maybe I'm doing something wrong? Can you clarify?

Passes might be essential in composition and VFX situations, however, I think the ultimate goal of a render engine is to be as easy to use as a camera. It comes down to market size. I think there's a lot more people who would use a virtual camera than people working in the VFX industry.

Architects, Designers, Engineers, Marketers, Hobbyists etc..

I know that when you are inside of an industry, it seems like the whole world should conform to that reality. But the world is quite diverse and most people don't have time or interest in specializing in composing, or understanding the workflow.

Should a render behave ONLY as a camera? of course not. Once you done all that work, you might as well offer the possibility for people to draw outside the box. But should it be the standard? I think it should.

Ondra said in another thread that Corona DOES NOT SUFFER FROM THIS. I'm glad :) but I tried the desaturation example shown in the video in Corona 1.5 hotfix 1 and it seems to suffer from the same "problem" described in the video. I won't pretend to understand any of the underlying technologies or processes, but shouldn't the colors desaturate as exposure/light intensity increases?

Ondra said it because Corona already has tone mapping. Blender folks see it as something so revolutionary because they did not have any tone mapping at all until now. No highlight compression, and no filmic mapping. In Corona, you have highlight compression as well as filmic highlights. You can use those to compress the highlight range and desaturate it at the same time.
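The desaturation falls out of applying a curve per channel: the brightest channel is compressed the hardest, so bright saturated colors drift towards white, much like overexposed areas in a photo. A toy sketch, with a plain Reinhard curve standing in for highlight compression and a made-up color value:

```python
def reinhard(x):
    # per-channel Reinhard curve, standing in for highlight compression
    return x / (1.0 + x)

def saturation(rgb):
    # simple max/min saturation measure
    mx, mn = max(rgb), min(rgb)
    return (mx - mn) / mx if mx else 0.0

red = (4.0, 0.5, 0.5)                     # a bright, saturated red (made up)
mapped = tuple(reinhard(c) for c in red)
# the dominant channel is compressed the most, so saturation drops
# as intensity rises, just like highlights desaturating in a photo
```

So the "problem" in the video isn't a Corona bug; whether you see the desaturation depends on whether a per-channel compression curve is actually enabled.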

Guys (especially Rawa), please restrain from personal fights, or switch to private messaging. This thread is to discuss specific topic, there is no point in adding the extra spam.

Funny, I see this exactly the opposite way :) Rawalanche is trying to bring up a case with which everyone seems to agree (myself included), presenting rational arguments, while Njen goes "trust me, I'm an engineer, what you say is wrong and I know this cuz I'm an engineer" :D absolutely failing to acknowledge that Rawalanche is not proposing to make Corona unusable in a classical VFX pipeline BUT instead to have it work by default like a DSLR, with the option to work like it does now after activating a checkbox. A win-win situation. No offence guys, with all respect to your amazing knowledge and what you achieved and worked on (this is not irony), leave your egos outside the door please :)

Personally after watching that lecture about ACES I'm all for the change. It simply looks so much better and easier. If it would help even a tiny bit in fighting constantly too dark renders then please do it :)

In my personal opinion this is a complex subject that can help Corona conquer a specific sector of the market (people that need/want easy, good results out of the box) but will leave Corona outside other markets like VFX (yes, most VFX studios use LWF). It looks to me like this is already your strategy, and it's working really well so far, but I'm also sure you know about the side effects.

ArchViz studios will probably love this change, because right now we are achieving this (not everyone) using compression + custom LUTs (not the best way to do it, in my opinion), and it can be confusing for a lot of people new to the industry. But I personally think it's a mistake if you can't choose. I prefer linear renders because they are important not only for multi-pass compositing: you need linear data in comp in order to do a lot of things properly, like DOF, MB, glints, glares, etc. We usually use OCIO or LUTs just to previsualize the end result while still working in linear through the entire pipeline; comp operations stay linear. If the user can choose, I think it's a good move xD

But I can understand why this can help Corona conquer the ArchViz market, since most ArchViz artists don't really understand what LWF is or how to use tone mapping properly.

And what do you want to use instead of linear? Because I suppose Corona will keep making light calculations linearly, so are we talking about some sort of internal post-production I can't control? Something like a shaper log LUT + filmic contrast LUT? Will we have several contrast or camera response choices?

About the video from BG, I think it's not really accurate and can make some people misunderstand the problem/solution, because most people are shocked by the "magic" button instead of really understanding what's going on.

Of course this is only my personal opinion.

PS: if you want to replicate this "magic button" from the video, just download the OCIO config and use it for color management of a Nuke viewer; or, if you really want to break your 32bpc file and transform it into a 16bpc tonemapped one, transform your render into log using Nuke's lin2log and apply the LUT from the OCIO pack. You can also download OCIO profiles from Sony from their website. This is nothing new to the industry, as Rawalanche pointed out - this is old stuff xD

Some people asked me about this on my channel and I made these examples to explain the process (they are in Spanish, sorry):

This might be a silly question, but are you able to compile the above processes into one singular LUT file that we can use in the Corona VFB so we can test this out? Would make life a lot easier than having to fire up Nuke. Is this even possible using LUT's in Corona? I'm still kind of noobish with LUT's and their limitations.

Cheers

Hi

You can't. LUTs are only able to map 0-1 values; that's why you need to apply a "shaper" LUT (lin2log) first, to remap the linear values into a logarithmic 0-1 format, and then apply the contrast LUT to that 0-1 logarithmic file. The OCIO "magic trick" applies these two LUTs in a row.

But you don't need to do this in Corona; you already have amazing tone mapping inside Corona. You can "skip" the lin2log step because you can compress dynamic range using highlight compression, for example. Try this LUT, but you need to apply heavy highlight compression in order to use it. I recommend understanding what's going on before you start using this kind of stuff xD

Or you can tick "input LUT in Log space" and use the LUT from the OCIO package.

I've edited the post to add the "Heavy contrast" LUT from the original Filmic Blender OCIO config, transformed into a Cube LUT so you can use it (this needs to be applied in log space).
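The two-step process described above (shaper to log, then a 0-1 contrast LUT) can be sketched like this. Note the hedge: Nuke's actual lin2log is the Cineon transform, so the generic log2 shaper below, and its stop range, are stand-ins to show the idea, not a match for any shipped curve:

```python
import math

def lin_to_log(x, lo_stop=-10.0, hi_stop=6.5):
    # Generic log2 "shaper": maps scene-linear values onto 0-1 by stops.
    # Nuke's real lin2log is the Cineon transform; this is illustrative,
    # and the stop range is an assumption.
    if x <= 0.0:
        return 0.0
    t = (math.log2(x) - lo_stop) / (hi_stop - lo_stop)
    return min(max(t, 0.0), 1.0)

def apply_lut_1d(v, lut):
    # Piecewise-linear lookup in a 1D LUT defined on the 0-1 domain:
    # roughly what applying a contrast .cube "in log space" boils down
    # to per channel.
    pos = v * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    f = pos - i
    return lut[i] * (1.0 - f) + lut[i + 1] * f
```

The point of the shaper is visible in the clamp: without `lin_to_log`, any scene-linear value above 1.0 would fall outside the LUT's domain and be clipped, which is exactly the "kill your render" failure described above.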

The Blender tutorial about Filmic Blender did offer explanations for some issues I have had with lighting and materials - the reason why I use lights for my scenes rather than HDRIs, plus some tweaking to get the colors to look right in certain contexts. This is actually a good thing, as a lot of people are now discussing it and the devs are looking into it as well.

As for the thread topic: if we can have both, with no loss of the quality or features that come with sRGB (I do admit it gives you some sort of control), then it can be done. If not, maybe two versions of Corona: one filmic and another sRGB.

Thanks for taking the time to explain this further and for providing the files! I think I've got my head round this a bit more now.

*edit*

We are definitely onto something here. I've just plugged in the medium high contrast cube file in log colour space into a test scene I'm working on and wow what a difference! I reset all of my tone-mapping settings that I thought looked pretty good and damn, the burnouts on some wooden shutters in the window have been completely subdued (you can actually see the individual shutters now and the exterior brightness has been tamed) while retaining great contrast, colour saturation and shadow depth elsewhere in the scene. No "murky grey" tones anywhere.

Will need to do some serious testing of this workflow as now some of my materials are a bit too vibrant.

At the end of the day this is really just a discussion about tonemapping, right? (excluding all the comping discussion)

Just the idea that an HDR/32-bit linear render benefits from some form of tone mapping before being passed to the sRGB display transform? I know it can get deeper than that, but at its essence that is what all of this is about, including all the hubbub from that Blender Guru guy?

Photoshop's Auto Tone, Auto Levels and Auto Color buttons are for non-professional users. It's not me saying that, it's Den Margolis ;) As I understand it, you want to make such a button the behavior by default. A strange way of thinking, to me.

You could add such a button (as Photoshop does), or at least give the user the possibility to choose which default they would like.

I tried Adanmq's LUTs on a few of my scenes and I have mixed feelings about them. The results I got are pretty unreliable - in some scenes these LUTs work well, in others not so much. Besides, it seems that once I use Adan's LUTs, Corona's tonemapping controls become useless. I rarely feel the need to tweak my renders further outside of the VFB after I use Corona's tonemapping, but with these LUTs I have to send every picture to Photoshop to tweak something. Maybe it's just my inexperience. I'll try to play with them more and see if I manage to get more consistent results.

Curious what others think about Adan's LUTs?

Hi. The LUTs that come with Corona are just a color response; they are not standardized for any specific workflow, so they are meant to be used in an "artistic" way, and you usually need to do exposure, compression and contrast separately to achieve final results.

LUTs are not a good way to grade a linear image, to be honest. I personally only use contrast ones + Corona's highlight compression in order to better see the linear render on an sRGB monitor, so I can work on lighting/materials with something similar to the final post in mind, but I usually do my post by hand. LUTs are great for color management between apps and platforms. LUTs need some sort of compression applied before them, or you will kill your render and clamp the results.

The ones I posted here are just a conversion of the original Filmic Blender thing; they only give you a contrast reference, you need to use them in log, and they help you previsualize your linear image on an sRGB monitor so you can better plan your materials/light.

I'm working on a new pack designed to be used over compressed linear renders, because even if LUTs are not designed for that purpose, a lot of people are using them for it, and it's better to have properly designed ones xD

sRGB has of course nothing to do with tonemapping (it doesn't do any; it just applies a gamma curve), or with dynamic range at all.

sRGB is still shit, but because of its limited color gamut; it's an old color space, and 3ds Max makes users with quality display devices suffer because it lacks color management. But that has nothing to do with tonemapping or photorealism.
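For reference, the sRGB curve in question is just a fixed piecewise transfer function (IEC 61966-2-1), with no shoulder or highlight rolloff at all, which is why it can't be blamed for (or credited with) any tone mapping. A minimal sketch of the encoding direction:

```python
def linear_to_srgb(x):
    # The sRGB encoding function from IEC 61966-2-1: a linear toe below
    # 0.0031308 and a 2.4-exponent power segment above. It is a fixed
    # transfer curve, not a tone mapper: there is no shoulder, so
    # scene-linear values above 1.0 just keep growing and get clipped
    # by the display.
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055
```

For example, linear 0.5 encodes to roughly 0.735, but linear 2.0 encodes to well above 1.0, which a display simply clamps to white; that clamping is the burned-out-highlights look this thread keeps coming back to.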

The implemented version of ACES in Blender is imho not ideal, and I wouldn't consider it the ultimate solution, as it relies on presets. A full version of ACES filmic would make sense, like Unreal 4 did, but it's sadly quite complicated, as it has a lot of parameters that are not very intuitive for the average person used to a single highlight compression value, although it can still keep a default look. The issue with presets is what Romullus wrote above: they are situational and already quite defined looks, not a starting point for post-production.

But still, I do agree with this thread: a nicely tone-mapped look (but not a post-processed one) should be the starting point in a modern renderer. It's like raw vs JPEG from a DSLR: even raw is interpreted as quite non-linear by default in most packages (although you totally can extract a fully flat or linear look through dcraw), just lacking the aggressive s-curve and HSL mods applied by each manufacturer as their preferred tonal curve.

Some practical idea: Ditch the current filmic, it's just not working at all. It's very weirdly behaving, the shoulder is not very pleasing and the shadows are somehow oddly exposure dependent.The current Reinhard could be improved a lot, it flattens the mids too much but worst it just washes the blacks away completely.

And let's fix the white balance first :- ) No one asked for tint to be additive overlay. Without it every image other than 6500K in Corona suffers from green or purple tint.
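
To illustrate the complaint (this is a toy sketch, not Corona's actual code): photographic white balance is normally a multiplicative per-channel gain, while an additive tint applies the same offset regardless of pixel brightness, so dark pixels pick up a much stronger relative color cast. The gain/offset values below are made up for the demonstration:

```python
import numpy as np

def wb_multiplicative(rgb, gains):
    """Photographic white balance: scale each channel, so a grey stays
    grey at any brightness (gains chosen so reference white -> 1,1,1)."""
    return rgb * gains

def wb_additive(rgb, offset):
    """Additive 'tint overlay': the same offset is added regardless of
    pixel brightness, so dark pixels shift hue far more than bright ones."""
    return rgb + offset

grey_dark  = np.array([0.05, 0.05, 0.05])
grey_light = np.array([0.80, 0.80, 0.80])
gains  = np.array([1.10, 1.00, 0.90])   # hypothetical warming gains
offset = np.array([0.02, 0.00, -0.02])  # hypothetical additive tint

# Multiplicative: the R/B ratio is identical for the dark and light grey.
# Additive: the dark grey ends up at R/B = 0.07/0.03 (a strong cast),
# the light grey only at 0.82/0.78 (barely visible).
```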

So what should be the replacement?

My idea would be if you could basically modify all of the curves Photoshop-style (by adding/subtracting points on the curve and adjusting the bezier handles) that filmic uses for its color voodoo + having the ability to save this as a LUT (for when you're playing around with colors in DaVinci Resolve, Nuke, etc.) or just as some preset for later re-use.
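
The "save it as a LUT" part is already easy to prototype outside any renderer. Here is a minimal sketch of baking an arbitrary scalar tone curve into a 1D `.cube` LUT (the plain-text format Resolve and Nuke read); the smoothstep curve is just a placeholder example:

```python
import os
import tempfile

def write_cube_1d(path, curve, size=1024):
    """Bake a scalar tone curve into a 1D .cube LUT.
    `curve` maps [0,1] -> [0,1] and is applied identically to R, G, B."""
    with open(path, "w") as f:
        f.write("LUT_1D_SIZE %d\n" % size)
        for i in range(size):
            v = curve(i / (size - 1))
            f.write("%.6f %.6f %.6f\n" % (v, v, v))

# Example: bake a mild contrast S-curve (smoothstep) for later re-use.
path = os.path.join(tempfile.gettempdir(), "contrast.cube")
write_cube_1d(path, lambda x: x * x * (3 - 2 * x))
```

Note a 1D LUT can only capture per-channel curves; the cross-channel "color voodoo" part would need a 3D LUT (`LUT_3D_SIZE`), which is the same idea with a 3-axis lattice.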

I agree that the filmic highlights are weird and very unpleasant. Filmic shadows, however, are great and really help; I'd miss them a lot, as they manage to look way better than Contrast alone. There's really no alternative (yet).

Here you go, just copy and paste it into your graph. Let me know what you think :) I used the function described here: http://forums.odforce.net/topic/25019-hable-and-aces-tonemapping/#comment-145839 . With some fooling around you can make it less contrasty but I used the values as described in the thread as default.
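
For anyone who can't open the macro: the function in the linked odforce thread has its own tA/tB-style constants, which are not reproduced here. As a rough stand-in with the same overall shape, this is Krzysztof Narkowicz's widely quoted analytic fit of the ACES filmic curve (a sketch, not the macro's exact math):

```python
import numpy as np

def aces_fitted(x):
    """Narkowicz's analytic fit of the ACES filmic tone curve.
    Input: linear scene-referred value (can exceed 1.0).
    Output: display-linear value in [0,1]; the sRGB gamma encode
    still has to be applied afterwards."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip(x * (a * x + b) / (x * (c * x + d) + e), 0.0, 1.0)
```

Unlike a plain sRGB clip, this rolls highlights off smoothly: an input of 1.0 lands around 0.80 instead of slamming into white, leaving headroom for overbright values.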

Thanks a million for that. I'm very sorry if I gave the impression of being very familiar with tonemapping in general and specifically with the ACES stuff. But in a nutshell: where can I read a description of what the controls (tA, tB, ...) do?

Here's how that can look in a scene: (test done by Dubcat in Fusion, macro by Deadclown, scene from my current project)

The default might not be ideal for every situation, but it is a far superior solution even as a base. It should not be ignored that the full solution (with 5 parameters) lets you tweak this look to your liking (like overall contrast, overall brightness, HL compression or black crush), but that might be too complicated for the regular user.
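
As an illustration of what such a parameterized filmic operator can look like, here is John Hable's (Uncharted 2) curve, where every knob has a nameable meaning. This is a generic sketch, not Corona's or the macro's internals, and its knobs don't map 1:1 onto the "5 parameters" above:

```python
def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30, W=11.2):
    """John Hable's filmic tone curve.
    A = shoulder strength, B = linear strength, C = linear angle,
    D = toe strength, E/F = toe shape, W = linear white point."""
    def f(v):
        return ((v * (A * v + C * B) + D * E) / (v * (A * v + B) + D * F)) - E / F
    # Normalise so an input of W maps exactly to 1.0.
    return f(x) / f(W)
```

Raising A compresses highlights harder, raising D crushes blacks, and lowering W burns out highlights sooner, which is roughly the vocabulary (HL compression, black crush, brightness) the full solution exposes.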

None of these quite match the intended look, which I attach as an illustration, though the filmic gets very close to what I would use as a base for post-production. It just gets the brightness and black levels perfect.

Kinda related. I've done some tests comparing the curves controls in the 1.6 daily builds to a curves adjustment in After Effects, and found some interesting results that may line up with some of the tone mapping issues brought up in this thread. Method breakdown:

1.) Rendered a simple image in linear format (no changes to tone mapping controls in the VFB) and exported it as a 32-bit EXR to be loaded into After Effects.

2.) Applied a curves adjustment in the Corona VFB to brighten the image, and saved the curves .acv file.

3.) Saved both images, from the Corona VFB and from After Effects with the same curve adjustment applied. (see attachments)

4.) When I compared the two images, I strongly preferred the output from After Effects.

Things I noticed:

It seems like Corona is pushing highlights and shadows towards more middle values, creating a more washed-out grey look (this can be seen on the rear wall next to the opening and in the immediate shadows of the red box).

The reflected light from the box takes a hit in vibrancy and variation, as can be seen on the right wall and on the floor to the right of the box. Warm, vibrant tones from the red box get flattened.

I've attached the images for you guys to take a look and compare (I recommend opening them both in separate browser tabs so you can toggle back and forth easily). What do you think of how they look? Maybe I missed something in my process.

To me, the best option is to have the ability to save both the raw (32-bit EXR) and the processed render in one go, as I would like to keep a raw version of my render just in case. But that's not possible through network rendering, and that sucks. It's absolute nonsense to clamp data, since you have to re-render to retrieve the lost information. Saving both would be useful under tight deadlines, as I'm pretty sure most of us do not have a renderfarm.

It should not be ignored that the full solution (with 5 parameters) lets you tweak this look to your liking (like overall contrast, overall brightness, HL compression or black crush), but that might be too complicated for the regular user.

As always, this is the cruel dilemma. I'd go for the complicated one. Making things easy and effective is great, but some concepts need to be assimilated to get full control. 5 controls won't kill anybody. But that's obviously a matter of personal taste.

Damn, what a difference between the first two images. The first one suffers from the "murky greys" that I experience when using Corona's highlight compression, and the second has those nice ethereal highlights with controlled, natural burnouts.

I totally agree, the ACES filmic mapping looks way closer to what we should see through a camera with great dynamic range.

I have not really informed myself about ACES yet, so what is the workflow here ?

I have a very simple question for those who are familiar with Fusion, which I am not... obviously. So, basically, I just need to do the following steps, and the tutorials I've found are much more complex than this, and it is hard for me to find what I really need...

This is a super interesting thread. I can see rawalanche's point that the user base *generally* works in a way that would benefit from a 'true to camera' look, because the final output from us in the archviz world is usually the final deliverable, and we take on the roles of lighting, colour grading etc. ourselves, so from a workflow standpoint it would speed things up a lot. Whereas when there are a lot of people further down the pipeline after the initial render, it becomes an issue for it to be removed entirely. Hence why the checkbox would remain.

I'm all for it if it means my renders look truer to what a camera would produce and I don't have to rely on LUTs and LightMix tweaking to get the ideal look and feel in my renders. I also never use highlight compression in Corona, as it tends to flatten everything.

I think these days the tide is changing from the Swiss-army-knife style (AKA gazillions of AOVs), and I am one of those, don't get me wrong, to a neater "this is your beauty, and these are your masks" period. Especially guys like WETA; their head of dev is a hard-liner about that, and he is right in many ways. In studios, mostly on multi-shot productions, the nemesis is heterogeneous output from the comp department and/or CG as well, meaning that the director at the end of the day has a heart attack when switching from shot to shot: the color is not flowing correctly, and that takes a lot of time to manage and/or control. So in the end, their thinking is: why the fuck use a PBR engine and all the pain it brings if we kill the result at comp time? That's what it is about. I agree 100% with that. But it requires an education cycle again in the film industry, so that we don't throw it all away with just one little email saying that you just realized the shot was supposed to be night time and not daylight, for example. It will take a bit of time to lose those hard-coded habits we have in the industry, because they are well founded, in the end, but not really making sense. AKA 2nd amendment, if you follow me.

And indeed there is the question and need of keeping linearity (not linear workflow... ahah) between all the bricks of the pipeline. A typical pipeline involves lookdev (Maya/Max/Houdini/C4D), comp (Nuke/Fusion/Smoke), Katana/Clarisse, Avid/Lightworks/Premiere and whatnot. All of those are required these days to display, at viewport level, the exact same shading and texture appearance, probably adding a bit of a challenge as well.

I've been on a nearly 4-month FStorm vacation and liked the results it gave me. I haven't been paying attention to Corona during that time and have a lot of catching up to do, so first I wanted to ask: has this discussion been picked up by the devs?

Here you go, just copy and paste it into your graph. Let me know what you think :) I used the function described here: http://forums.odforce.net/topic/25019-hable-and-aces-tonemapping/#comment-145839 . With some fooling around you can make it less contrasty but I used the values as described in the thread as default.

Is it possible to make this script for After Effects? I'm also very interested in this. I always use highlight compression, but it always felt like the images would lose depth; seeing the ACES tonemapping blows me away, such a big difference!

Yes please write some comparisons. I wanted to look into it too but the lawsuit surrounding it has put me off testing it in a production environment

I've been testing out FStorm on a few product shots and have to say it's really amazing. I don't know what they are doing, but the way bitmaps are handled is really something: textures appear to come out much sharper (and not artificial "sharpen tool" sharper). For example, at grazing angles and in fine reflection/glossiness maps, textures are much clearer.

More on topic: the tone mapping in FStorm is godly. It's as close to DSLR-like as you can get from any render engine I've used. Everything just looks so "real" and photographically punchy; it's actually quite a task to produce a badly tone mapped image in FStorm.

Now if only Corona could copy FStorm's tone mapping 1:1 (and FStorm's geopattern), that would be insane. Corona has so many great features and such ease of use that it will still be my daily driver (until 32 GB+ GPUs start coming out...).

So if FStorm is one of the few (or the only one) with this insane tone mapping, would I be correct to assume that it's technically very difficult to achieve? Or is there another reason why other renderers are using an "old" tone mapping system?

I have a single GTX 1080. Do you reckon I could handle small interior shots?

I've mostly been using it for product shots and animations with heavy DOF/motion blur, and for those applications it's working out faster than Corona by a long shot. But I'm not willing to risk it on a complex commercial interior scene. My setup: 12-core Xeon with dual Titan X's (Maxwell). I have to say that the prospect of being able to scale performance linearly with add-on GPUs is really nice/cost effective.

For example, you'd otherwise have to go with either a multi-core Xeon system + render nodes (and all of the costs associated with a new system) or a single Threadripper v2. With FStorm you could simply stick as many GPUs as you want into your existing low-cost system.

The only issue here is the GPU memory limitation, but this will soon be a thing of the past with either out-of-core rendering or next-gen GPUs with 32 GB+ RAM. FStorm also seems to be insanely memory efficient and even has a bitmap compression algorithm that further cuts RAM usage. Corona had better keep up, as I can easily see FStorm overtaking it in the future once it irons out a few kinks and adds more compatibility. Crazy to think that it's just one Russian guy who made it!

Personally, I would have liked to have seen a Corona/FStorm merger rather than one with V-Ray. The new tech FStorm brings to the table is insane.

The stuff that Johannes Lindqvist/Illusive Images puts out is what made me stray from Corona somewhat, but as mentioned before there are a lot of features it lacks that I use in almost all projects (multi-light, material map compatibility, Corona Scatter etc.).

How does one go about getting a trial or purchasing FStorm? I tried it long ago when it first came out, but since the lawsuit ramped up I haven't been able to track anything down. I would like to give my 1080 Ti a spin on a test scene.

Just change the tonemapping defaults to something non-linear, and at the same time add a button that sets everything back to linear with a single click (for when you need to do pass compositing).

I couldn't agree more. It is time for something like that. If users need to do heavy compositing work, let them have a checkbox that forces linear output. But the default should be some tonemapping that is as close to a camera as possible.

Did anything ever come of this? I recently jumped onto the opposite track, working with linear 32-bit EXR. The possibilities are so much greater in compositing, especially with the Corona Image Editor. I am a Blender user, and Blender's Filmic tonemapper didn't quite give me what I wanted compared to Corona's Image Editor. I have been trying to wrap my head around this topic for a few months now, and it's difficult to fully grasp.

This thread kinda burned out, but it was really interesting to read. Did this ever get picked up as a potential feature request? It probably should.

I sure hope it did. I'm about to switch to FStorm for interior renders for its superb tonemapping and shading. Keep up, Corona! I would hate to switch to another renderer, because I love the ease of use of Corona, but FStorm is just more realistic, and I often feel like Corona's output still feels "rendery". If I look at the roadmap, I see all kinds of improvements that are interesting, but it's not the things users want (I think). When was the last time we had a poll about what users want?

Tom, do you think reworking tonemapping could be added to the poll? I see it's on the future ideas board on the Trello, but it seems from this thread that there's a huge appetite for things to be changed on that front sooner rather than later.

I think when you said "it's not the things users want (I think)", you actually meant "it's not the things I want".

I'll be blunt. The whole race, since the beginning of CGI, has been to make the most realistic image possible. That's what you see people do and admire when somebody makes images that are next-level in realism (and also in beauty, but that depends on the user, not the engine). So the MOST important thing for a render engine is to keep up with what's technically possible. At the moment FStorm is producing more realistic images than Corona; don't ask me why or how, I'm not a technician, it just does. So I think the most important thing for the Corona team is to improve the engine to be on par with the best. I think THAT is the most important thing to work on now, and it's not in the poll.

The whole reason I switched from Maxwell Render to Corona a few years ago is that Corona is better (read: more realistic and faster). If you ask me, Maxwell is nice, but it's dying out because they did not improve their render model and stuck to the physically correct rendering of light. I love Corona(!), so I would hate to see the same thing happen to them, and I would really like to see them keep improving their render quality so it stays on par with the best, as it was a few years back.

I know a lot of people are not going to agree with me, because this might be too bluntly said, but I think that at its core this is true. People will eventually choose whatever has the best quality. :)

Right now I think FStorm and Corona are on par. It might take a little more effort to achieve the same results in Corona. What I miss in FStorm, which I think is crucial, is triplanar mapping. And it's not a planned feature in the near future (I asked Andrey). Is there another way to achieve that in FStorm without unwrapping?

I know I'd much rather see an improvement to my workflow and realism right out of the VFB, is all, and it seems like there's a pretty strong opinion from this now 9-page thread that this 'feature request' is something the user base really does want. But I agree with the above: it's not on the poll, and it should be. I've mentioned in the poll thread that stuff like 'speed improvements' and 'memory improvements' shouldn't be on the poll, because version to version I'm pretty certain we all expect those things. I'd rather see some new tonemapping than be able to render ever so slightly faster, if I'm completely honest.

I added tone mapping to the list. It is high on my list of features to do; the problem is that we are talking about very subtle differences and, as Tok_Tok said, we don't even know why some tone mappers look more realistic than others. What can definitely be improved, though, is usability: replacing the current controls with something like shadows/midtones/highlights controls.

Oh, it's definitely subjective for sure, but thank you for letting us know. I'm sure a lot of people would be very pleased. I wonder if there's a way of measuring the effects of various tonemappers. I'm sure dubcat or someone could find a way haha

Why remove those features? I think they are useful. But I am glad to hear that it's high up on your list. And definitely talk to Dubcat about that, I think he can offer a lot of insight into this :-) please :D

They would be more useful if they worked well. I often find filmic shadows and filmic highlights to work in a weird, unpredictable fashion. Solid highlight/midtones/shadows controls are much needed in Corona. Can't wait for them.

...we don't even know why some tone mappers look more realistic than others.

A user here on the forums (dubcat, I believe) posted a way to analyze two images. The result was a curve that represents the difference between the two. So if you have a raw linear render from FStorm, for example, and another tonemapped one, you could try to analyze what is going on by looking at the output curve. Just an idea, maybe ask dubcat... I am highly interested in this as well, because I feel the 32-bit linear workflow is not suited to the needs of most users here.
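
That kind of analysis can be sketched quite simply (this is a generic approach, not dubcat's actual method): bin the pixels of the linear image by value and average the corresponding pixels of the tonemapped image, which yields an empirical response curve you can plot. It assumes the two images are pixel-aligned and single-channel (e.g. luminance):

```python
import numpy as np

def estimate_response(linear_img, mapped_img, bins=64):
    """Empirical tone response: for each bin of linear values, return the
    mean tonemapped value of the same pixels.
    Both inputs: aligned single-channel float arrays of equal shape."""
    lin = linear_img.ravel()
    out = mapped_img.ravel()
    edges = np.linspace(lin.min(), lin.max(), bins + 1)
    idx = np.clip(np.digitize(lin, edges) - 1, 0, bins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    sums = np.bincount(idx, weights=out, minlength=bins)
    counts = np.bincount(idx, minlength=bins)
    # Empty bins come back as NaN rather than a fake zero.
    means = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return centres, means
```

Plotting `means` against `centres` for a linear render vs the FStorm output would show the shoulder/toe shape directly, including any channel crosstalk if run per channel.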

There is no need to analyze anything here; we know what happens on the FStorm side. It's using the ACES output transform instead of the sRGB one. The tonemapping controls can be whatever you want before applying the output transform (which must be the final step); it's just a matter of taste.

The Filmic HL/Shadows (in their current implementation) should be removed. I understand some people use them to get an interesting effect when they occasionally do what people 'think' they should do, but they never actually do that. The implementation is wrong, so I would not hesitate to remove them fully.

This one is actually very interesting! So a wide-gamut color space as the internal computation space can actually produce problematic output when used with a common lighting engine? Not sure how this transfers to Corona, but it's interesting, as I never thought it could have that much of an effect.

Um, I am not quite sure if it really is that simple. On which sources are you relying to make that statement?

OK, this thread explains why there is a new feature in the poll called "rework tone mapping"

I totally agree with Ondra on reworking the usability. I would start by taking a good look at Camera Raw and how it works. The fact that you can change values with the arrow keys is very convenient. I also like the curves they have implemented in Camera Raw. The curves we have now in Corona have too many options, and you can't do anything very precisely.

Even white balance should work like in PS, with a blue-yellow slider and a green-magenta one. This was already pointed out by Juraj.

That being said, the color wheels some programs use are awesome. Especially for some color grading.

On top of that I would really like to see a Histogram and maybe even a Spectrogram and other useful tools for video.

This might seem a little overboard, but I feel that doing post in Corona should be the goal. I don't want to open up a separate piece of software just to do 3 things because I can't do them in Corona.

This would give people a very fast way to send WIPs to clients, and since we have Corona's CIE, you can always open the render and change any configuration.

But of course, for this to work, the tools provided have to be as good as the ones you get from your usual post software. This means that Corona has to become aware of color space, or at least give you the option to load your monitor's color correction.

That brings me to the filmic highlights and shadows. Just like Juraj said, they shouldn't be in Corona if they don't work properly.

As far as usability goes, I would definitely appreciate a change to the whole VFB. Some buttons should be implemented or separated. Finally, the history should NOT be deleted, and should work just like the one in V-Ray, where you can load your post configuration.

I guess spectral rendering is the future otherwise we will always rely on cheats/artistry ;)

Really nice article, thanks for sharing, I missed that one! Conclusion: there is no perfect colorspace for rendering.

You are right, spectral rendering is the way to go if you want pure realism, but it introduces some flaws too. Spectral rendering is quite a bit slower, and it's hard to convert a standard RGB triplet to spectra, as you have to deal with metamerism. I've read some recent papers that introduce some nice solutions to improve those aspects, btw:

https://www.ci.i.u-tokyo.ac.jp/~hachisuka/rgb2spec.pdf
https://graphics.tudelft.nl/Publications-new/2018/PBE18/PBE18.pdf (It looks like Ondra provided the pool scene for that one 😊)

Anyway, if we have to use RGB triplets, the colorspace that provides the most accurate results on average (compared to a ground-truth spectral render) is ACEScg, see this:

https://www.colour-science.org/anders-langlands/

And this around 18 min.
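
Getting render colors into ACEScg is just a 3x3 matrix away. Below are the commonly published coefficients for linear sRGB/Rec.709 (D65) to ACEScg (AP1) with Bradford chromatic adaptation; treat them as a sketch and verify against the `colour` Python package before relying on them in production:

```python
import numpy as np

# Linear sRGB (D65) -> ACEScg (AP1), Bradford-adapted.
# Coefficients as commonly published; rows sum to 1 so white stays white.
SRGB_TO_ACESCG = np.array([
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
])

def srgb_lin_to_acescg(rgb):
    """rgb: (..., 3) array of *linear* (already gamma-decoded) sRGB values."""
    return rgb @ SRGB_TO_ACESCG.T
```

Note how a pure sRGB red ends up at roughly (0.61, 0.07, 0.02) in AP1: wide-gamut primaries "desaturate" the numbers even though the color itself is unchanged, which is exactly the kind of effect the post above is asking about.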

That said, the Academy Color Encoding System introduces a whole bunch of stuff, and what was discussed in the first place was the tonemapper itself. Using ACES for rendering is not mandatory. We can still use just the tonemapper, converting linear sRGB to display output, and benefit from the nice filmic curve it provides. That would still be miles ahead of what we have now, imho.

The tool they use in the video to explain the tonemapper controls can be accessed here: https://www.desmos.com/calculator/h8rbdpawxj

@Devs you really should look at the UE4 post controls (around 45min in the video), that would be a good starting point for the tonemapping and grading tools rework.

This was very interesting. So we basically have all we need in Corona but the UI is a bit all over the place.

We can already do this with the curves, or use a LUT from Adan Martin which emulates the behaviour of different film stocks.

The only thing different in UE4 is that they use a specific curve with base values, in this case ACES.

So to sum up, we get rid of the Filmic highlight and shadows and replace it with a new section called tonemapping or film stock. Of course you can deactivate it like all the sections in the Corona VFB. ;)

I'm new to Corona for Max but I've been a long time C4D Corona user. C4D natively saves PSD files, and if you choose the 16bit option, the file with all the pass layers in it is linear. This gives a nice subtle addition for painting in extra reflection etc. I've noticed that with having to save as 16bit Tiff files in Max, the files are sRGB and you lose that subtlety, having to turn the layer transparency right down and having highlights blown out. Is there something I'm doing wrong? Can I get 16bit linear out of Max?

Yes you can: in the Save As dialog, choose override gamma and set it to 1.0.

I believe this still gives you output-referred integer Tiff, not linear half-float file. It will just contain the data correctly, but with no leeway as it will clamp.

And that is outside of some weird idiosyncrasies 3dsMax has with Tiff files in general. If you want a linear half-float file for compositing, just choose .exr in half-float; it will have all the data and will be substantially smaller in size.

Rhodesy, there is also a .PSD plugin that saves all passes for 3dsMax, if you want to avoid saving to .exrs and needing ProEXR in Photoshop.

Is that the Cebas PSD manager you refer to, Juraj? I'm trying to reduce extra subscriptions where I can now we have Max. Does it do anything other than write multipass PSD files? The half-float .exr might be an option. The 32bit version is a non-starter for me with the file sizes and the minefield of 32bit, so a 16bit variant might work best. Is that what you use? The exr plugin works well, I seem to remember, when I tried it last.

What do you mean 'minefield of 32bit'? Mind you, 16bit .exr acts de facto as a 32bit file, only with half the data. 16bit .exr has nothing in common with integer 16bit files like Tiff, PNG, etc.

To composite render passes together with correct math you need to be in linear space, which always happens to be in 32bit-per-channel mode anyway. The Cebas PSD manager would create a 32bit PSD file as well; Photoshop can work in linear only in this mode. I think you know all this, but just to make sure I am not misunderstanding you :- ).

I use 16bit Tiff, but I don't do any compositing, nor do I have access to the dynamic range anymore. The advantage of 16bit Tiff over 8bit is only better tonal gradation, so a wider range of tones to avoid posterization or other artifacts. Nothing else, no highlight or shadow recovery. All of that I do beforehand, in the Corona framebuffer.
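
A minimal numpy sketch of why half-float and integer 16bit behave so differently. The value is made up; the point is only that float16 keeps data above 1.0 while a 16bit integer format has to clamp it away first:

```python
import numpy as np

# A linear HDR pixel value well above 1.0, e.g. a bright highlight.
hdr_value = 37.5

# Half-float (what a 16bit .exr stores): values above 1.0 survive,
# so exposure/highlight recovery is still possible later.
as_half = np.float16(hdr_value)

# 16bit integer (what a Tiff/PNG stores): 0..65535 maps to display
# 0.0..1.0, so anything above 1.0 must be clamped before encoding.
as_int = np.uint16(round(min(hdr_value, 1.0) * 65535))

print(as_half)           # the highlight survives
print(as_int / 65535.0)  # 1.0 - the highlight is gone
```

That is the whole "nothing in common" point: same bit count, completely different meaning of the bits.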

That's very interesting! So essentially unless you have a 32bit .exr you lose the ability to accurately step down exposure etc?

No, it can be many other scene-referred file formats, like various camera raw formats, which are often 12bit at best. But for CGI there are fewer: 32bit full-float .exr/.hdr/.tiff and half-float 16bit .exr/.tiff. I do not believe 3dsMax supports 16bit half-float Tiff; this is a very obscure format, I think only used internally for Adobe's DNG or something along that line.

16bit integer Tiff with an embedded gamma curve (regardless of which gamma you write those colors in) is not the same thing as a linear 16bit half-float .exr.

Then it matters how you extract the information from that format. If you open a 16bit .exr in Photoshop, it will open it in the linear 32bit environment and you will be able to extract shadows/highlights, manipulate exposure, or tonemap. The file itself (16bit .exr) will obviously have less information than a 32bit file, but you rarely need that for post-production (while you do need it for image based lighting with 16+ stops of dynamic range, i.e. sunlight for example).

If you change the environment to 16bit in Photoshop, it will instantly clamp (or tonemap, it gives you options) and you won't be able to extract any further dynamic range. 16bit will still give you the advantage of a wider tonal gradient to avoid artifacts and posterization. 16bit Tiff/PNG/etc. will open in this mode directly, unlike .exr. But even if you open them in the 32bit environment, they will still be clamped.

TL;DR :- )

16bit Tiff from 3dsMax if you already used highlight compression in Corona and only want to adjust local contrast, colors, etc. Edit in 16bit PS mode. (Save as 8bit for web/print at the end.)
16bit .exr if you want to composite or do dynamic range operations (exposure, highlights, shadows, tonemapping, glare/bloom, etc.). Edit in 32bit PS mode.
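
To illustrate why the compositing has to happen on linear data, here's a small sketch with made-up pass values; a plain 2.2 power curve stands in for the exact sRGB encode:

```python
import numpy as np

def to_srgb(linear):
    # Simple 2.2 gamma encode, a stand-in for the exact sRGB curve.
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

# Two hypothetical render elements in linear space (e.g. diffuse + reflection).
diffuse    = np.array([0.18, 0.05, 0.30])
reflection = np.array([0.10, 0.02, 0.15])

# Correct: add the passes while linear, encode once for display at the end.
correct = to_srgb(diffuse + reflection)

# Wrong: encode each pass first, then 'Add' the gamma-encoded values.
wrong = to_srgb(diffuse) + to_srgb(reflection)

print(correct)
print(wrong)  # noticeably brighter / washed out - the math no longer holds
```

Adding after the gamma encode always overshoots, which is exactly the washed-out look described in the thread.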

When I say '32bit minefield' I mean potential issues with very large file sizes on high-res files with lots of passes, but mainly the fact, like you explained, that you only have a few options and tools you can use in PS in 32bit mode, so you have to tone and then downgrade to 16bit. The problem comes if you need to re-render for any reason and you can't remember the PS tone mapping settings and tweaks you made before saving down to 16bit, so your base renders might not match your previously finished image.

I've done a very crude test in Corona C4D, exporting as a layered 16bit PSD and opening in PS. The first image shows the base rendered image - note PS identifies the image as being in linear space. The second shows it with the reflection pass switched on (linear dodge (add) mode as the default blending style), still linear. The final one shows what happens when converted to sRGB (still 16bit), the same as I get out of Max's Tiff format. Any subtlety in the reflection pass has gone and it's completely blown out.

I don't use passes to rebuild an image from scratch, but I do use them to embellish the image, especially the reflection and refraction passes. If I used a low opacity soft brush on that middle image to mask in some glancing angle reflections, it would give an enhanced look, but with the sRGB version you lose the nice gradients and tones.

I will need to do some experimenting in MAX but do you know if this is possible for a 16bit output that PS will recognise as linear?

Yes, I wrote that above, half-float .exr is 16bit internally. But to work with them properly, you need to be in linear environment, and Photoshop only offers this properly in their 32bit mode. And yeah...Photoshop doesn't give you all the tools in their 32bit mode, because Adobe doesn't care.

But Affinity does, I suggest you give it a try. It's not perfect yet but they have much better 32bit support.

Also, do you really need linear information for this kind of post-production? You could simply stay in the normal environment and use 'Screen' mode instead of 'Linear (Add)', whose name implies what it does :- ).

Doh, I'll have to give Screen a go. I'm just so used to working with the linear PSD files from C4D I didn't think to try switching the blending modes; I just thought it was a limitation of linear vs sRGB. Thanks for the prompt!

Yeah I have Affinity and I can see its potential, but I perhaps didn't give it long enough to get used to the different personas. I like how Photoshop just has all the tools in one place. Also the brush editor had some great features, but not all the PS ones last time I tried, so some of my presets didn't transfer over too well from PS. I was also reluctant to jump into another 'non industry standard' software full time.

Same here! I have a hard time transitioning. But for certain 32bit tasks, like working with our 360 HDRIs, Affinity is a godsend.

Screen is the sRGB (I prefer to call it 2.2 gamma, as it can be any color space, not just sRGB in particular) version of LinearAdd. It does the same, albeit for non-linear files. It is a hack of course, sorta, so it won't create super-hot levels of light, but I rarely find I need that. I use Screen mode to overlay all kinds of lens glare effects and it recreates the look very successfully.
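
The difference between the two blend modes can be sketched in a few lines (these are the standard formulas; the scalar values are just for illustration):

```python
def linear_add(a, b):
    # 'Linear Dodge (Add)': meant for linear data; can exceed 1.0 ("super hot").
    return a + b

def screen(a, b):
    # 'Screen': the display-referred cousin of Add; result always stays below 1.0.
    return 1.0 - (1.0 - a) * (1.0 - b)

base, glare = 0.5, 0.25
print(linear_add(base, glare))  # 0.75
print(screen(base, glare))      # 0.625 - gentler, never clips
```

Screen approaches but never reaches 1.0, which is why it "won't create super-hot levels of light" but still reads as an additive glow on gamma-encoded images.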

Always top knowledge, Juraj! Yes, I use Screen quite a lot for blending stuff, along with Soft Light and Overlay. I just didn't think to try it with the raw passes, but your info makes absolute sense! Thanks again.

Yes, just done another crude test, this time out of Max with a 16bit half-float .exr, opened in PS with EXR-IO.

The ball on the left is the converted-down 16bit version with the reflection pass set to Screen. On the right is the 32bit original set to Add. The ball on the right gives better tonal range with less washing out. I also ran a test from C4D outputting 32bit and 16bit PSD files, and visually they looked the same as the 32bit .exr version with the reflection set to Add, i.e. more natural tonal range and less washed out than the sRGB variants.

OK, so I just installed PSD manager and output a 16bit PSD. Interesting result. It has the file as sRGB, and it must set the gamma to 1 as it saves out, because it then adds a 2.2 gamma LUT at the top of the PS stack. This gives the nice tonal range I was getting from the native C4D 16bit PSD output, which PS reads as linear in its colour profile. It also matches the other 32bit tests from just now. I realise that the LUT is just an exposure adjustment layer setting the gamma to 2.2, so I've done that and it works.

So as I think Romullus pointed out, you can do the gamma 1.0 output in Max for saving out the 16bit Tiff. So I've tried this along with the exposure adjustment layer fix in PS, and it gives the same result, which is great, if a little disconcerting, as it seems like a bit of an unorthodox step, going against the 'recommended' setting.

My only real concern is, am I degrading my image by adjusting the gamma so severely, or is the gamma in this case just a straight visual conversion/reassignment? So it's not actually boosting the pixels artificially like an aggressive curves layer would, leading to banding and image degradation?

Photoshop has a horrible way of handling data, I wouldn't trust it one bit. The best thing you can do is use a program like Nuke to combine images to test.

Also, when someone says '16bit', not everyone is talking about the same thing. There is '16bit integer' and '16bit half float'. Generally speaking, 16bit integer files usually default to saving in sRGB (unless specifically set), and 16bit half floats are linear.

If you use 'Add' on sRGB images, you are incorrectly combining the data. An inverse gamma curve must first be applied before you can use Add to get a physically accurate representation.

16bit half float and 32bit full float formats will look more or less exactly the same to the eye when using any kind of merge.

More good info, thanks Njen. There are a lot of people with a lot more knowledge than me!

I'm going to give the 1.0 gamma output setting with the 2.2 gamma correction in PS a try on a couple of projects, as I really want to recreate that type of subtlety in the passes. I don't rebuild images from scratch, but I do use them to boost bits here and there.

Do you know if there will be any image degradation from adding the gamma offset from 1.0 to 2.2 as an adjustment layer in PS?

Fully agree with that, I wrote the distinction between Integer and Half-float Tiff above, like three times :- ).

And that is imho the issue with Rhodesy's current workflow. If you write data into an integer Tiff with gamma 1.0, it's baked down, and you lose the range in the blacks. Try this quick experiment to prove it: change your gamma (in 8 or 16bit mode in PS) to 0.454 (to simulate writing it to 1.0), bake it down, and then bring it back to 2.2. The result will be slightly wrong, noticeably in the blacks. This will not happen in PS's 32bit mode with either 16 or 32bit .exr/.hdr files.
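
A rough numeric analogue of that experiment (numpy, with a simple 2.2 power curve standing in for the PS gamma adjustments, and rounding to 65536 steps standing in for baking to 16bit integer):

```python
import numpy as np

def bake_16bit(x):
    # Quantize float values to 16bit integer steps, like saving an integer Tiff.
    return np.round(np.clip(x, 0.0, 1.0) * 65535) / 65535

# Dark display-referred (gamma 2.2) tones, where the damage shows first.
blacks = np.array([0.01, 0.02, 0.05])

# The experiment above: take the image to gamma 1.0 (linearize), bake it
# down to integer precision, then bring it back to 2.2.
linearized = blacks ** 2.2
restored = bake_16bit(linearized) ** (1.0 / 2.2)

print(blacks)
print(restored)  # close, but no longer identical - the blacks have shifted
```

Linear values near black fall between very coarse integer steps, so the round trip lands on slightly wrong tones, exactly the "slightly wrong, noticeably in blacks" result; in float formats no such quantization happens.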

Njen is right that compositing software, heck, even Adobe After Effects, offers you the option of linear compositing regardless of whether you're in 8/16/32bit depth mode. In AE you even have a check-box to blend colors linearly in an sRGB environment, and vice versa (and you can interpret each pass/layer as sRGB/linear individually). But Photoshop is strict and doesn't offer this distinction: 8/16bit mode in PS is the clamped, output-referred workflow and 32bit is the linear, scene-referred workflow.

Looking at the posted examples, almost all the results look wrong to me. I get the feeling you're abusing the compositing workflow to get an artistic effect. Basically working outside the box, and that is completely fine, people get fantastic results with all sorts of strange workflows they invented :- ). But the point of LinearAdd in compositing is the exact math of recreating how the passes add up in the renderer for a physically correct result. It's not to provide any gradients or range for artistic purposes. All the images seem to lack any form of tonemapping and feature oversaturated, overblown highlights.

Photography retouchers don't work in linear and they can get any effect they want.

Look at my quick (boosted 300% in size) example. The left is the original 2.2 16bit Tiff. The right is one which was converted to gamma 1.0 and taken back to 2.2 (this should be identical to writing it to 1.0 directly).

Just a short note to those using 32bpc in PS - Photoshop will shift tones (especially shadows) when converting from 32bpc to 16 and 8 bits - be aware that you can't get the same output as from Max/VFB/CIE. This means any EXR image converted from 32bits to 16/8 bits in PS will not match a 16/8 bit image saved from Max/CIE. This may not be noticeable most of the time, but if you work on darker images it'll drive you crazy. The only workaround is to stay away from PS for any 32bpc work; as Juraj mentioned, AE works correctly (as do other comp apps).

I see what you're saying Juraj, yes, that must bake it down. This is what PSD manager does natively, which is odd as it must be tampering with the image. I just wish there was PSD export in Max that behaves the same as C4D's, which seems more advanced.

Below is a crop of an image I rendered last night on one of our final C4D projects. On the left is the base render and on the right is the base + reflection pass (Add). This is at 100%, so I would just brush in highlights where needed at lower opacities, as it is overblown in parts. But I find it a really handy tool to enhance bits here and there artistically, even if it's not physically correct. I just need to find a way to get Max to write the Tiffs as half float and for PS to recognise them as 16bit like C4D does.

Maybe we should all switch to Affinity! I might dust it off.

EDIT: crap, just seen your example image Juraj. That is annoying! More experimentation required then! Thanks for all your help.

- more commonly accepted in all of its versions (16bit & 32bit Tiff, whereas 16bit PNG (PNG-48) is less so)
- more compression options
- more flexible alpha use (lots of applications read PNG with baked alpha, while Tiff always gives interpretation options)
- can save layers, good support for color profiles

It's a good format to save from Corona, a good format to store layers, a good format to send to clients / print.

I'm really trying to get my head around ACES and tonemapping and general colour management. I've been using the settings dubcat posted a while ago, but I'm only now getting into the details about what it does and why. I'm very confused about LUTs and the ACES workflow right now. Could anyone perhaps help me understand it a little more, as some of you seem to have a good grasp of things when it comes to colour and Corona?

I have a couple of questions about which LUTs can be used with the ACES tonemapping settings to simulate various camera responses etc., rather than to just produce a 'look'. I'd like to have a couple of LUTs to bring the data back into what a real world camera might produce, right in the VFB. Or am I completely misunderstanding the workflow, and I'm supposed to not use a LUT and then do my grading in Lightroom? I've tried using dubcat's colour chart in ACES scenes without a LUT, then matching in 3DLUT Creator, which gives a great result. Would I then save out this LUT and use it in the VFB for other images in this light/camera setup?

There's not much information out there for beginners on this, but I'm finding the reading incredibly interesting. That Blender video only confused me more, but it helped me to take an interest in colour space and tonemapping.

Why go through all that trouble? Would the result be THAT much better? Isn't it better to wait for the developers to work on the new built-in tonemapping?

Personally, the results I have using a colour checker look a lot more natural than without. It's probably adding maybe 5 minutes tops to my post processing, and the colours look great.

You mean, you add a virtual colour checker to your scene and that enables you to get more natural results? o_O Would love to see an example, if you will.

Yeah, so a while ago dubcat posted a virtual colour checker model with the correct sRGB values for the X-Rite colour passport, but the thread didn't really get much attention. I believe you've already commented in his hideout thread where I asked him about it. It seems to work sometimes but not all the time. I was just curious, and it's possible that the scenes I tested on were just happy accidents.

Title: Re: Time to ditch sRGB/Linear as default (?)
Post by: James Vella on 2019-02-09, 20:05:05

Wouldn't it be easier, if you wanted to stay in Photoshop, to have your 32bit .exr file as a smart object within your 16bit post file? This way you can always readjust the exposure/gamma without crush or burn within the smart object.

I'm curious how this workflow compares to Affinity, as I have not used it before.

Yes, as long as I do all changes in the smart layer container (and don't apply them via filters like Camera Raw directly in-file), that is a viable choice indeed. As long as you are content with having a nested file. You would have to be fully non-destructive and not mix your color grading with any image manipulation, and I work somewhat dirty.

But over the past two years, Adobe has changed so much about how 32bit mode works, for example excluding direct access to Camera Raw as a filter (can be bypassed by direct loading), etc., that I just concluded it's futile to use it at all.

Title: Re: Time to ditch sRGB/Linear as default (?)
Post by: James Vella on 2019-02-11, 10:16:36

What version of Photoshop are you using? They removed the Camera Raw filter?

As you mentioned, I usually keep all 32bit render layers in a smart object for relighting, and then in the 16bit master file (color grading file) put a Camera Raw filter on the smart object. You are saying this is removed now?

Direct access to a 32bit file. In previous CC versions (<2018) you could have applied Camera Raw in 32bit mode. (Of course, it would apply auto-leveling in its default mode, but you could switch to the 2013 process and zero it out, and use Camera Raw on actual HDR data.) This can still be done by auto-loading into Camera Raw with "Open As" of a 32bit file as a .raw file. But no longer as a filter.

Applying it in 16bit mode still works just fine, but there isn't much point to it without access to any dynamic range, so it's just a different GUI.

Title: Re: Time to ditch sRGB/Linear as default (?)
Post by: James Vella on 2019-02-11, 11:08:31

Fair enough, I suppose my Camera Raw method is just as dirty :) I'm not doing full dynamic range stuff with it, just a few slight adjustments, which is easier than white balancing/vignetting without extra plugins. I also like the sharpen in Camera Raw, and having the ability to switch it on/off as a smart layer filter; depends on your workflow I suppose.

Yeah, the whole thing (CamRaw) is crafty, which is why I lament every time that Adobe just couldn't make it fully functional. It would be a blessing to have it work on linear files without any hassle, as that would make the post-pro workflow absolutely identical to photography: one smooth process, without the unnecessary division between "big" changes in linear 32bit and "small" changes after.

Now, you originally mentioned Affinity, and I have to contend that it isn't as good as it appears... we bought two licences but almost kinda regret it, as while it is a lot more ambitious (much better 32bit support, simultaneous layer adjustments (super good for textures), ...), it's just not even stable. At the moment I am fully back on Photoshop for almost everything. It's not just habit... it's the same stuff as 3dsMax: might not be ideal, but still the best choice.

Title: Re: Time to ditch sRGB/Linear as default (?)
Post by: James Vella on 2019-02-11, 12:52:11

Cheers for the info on Affinity, stability is a concern, I might wait it out. Another thing that concerns me is I use Lightroom for my photography, so the Adobe suite suits my pipeline, as well as the terrific categorizing/tagging etc. and the similarities between Adobe tools.

Regarding what you said about not being ideal but the best choice: I see you did a test on the ACES workflow using DaVinci with dubcat - out of curiosity, did you implement this workflow? Are you working in a higher gamut space or just sRGB in post? I ask because I have read the tech papers and done some tests on my own. I also work in Adobe 98 space for my photography on a wide gamut monitor for print (output to sRGB for screen), but for 3D archviz I just don't see any major advantage currently - mostly because Photoshop is the main post production tool (with the limitations we spoke about), it's easy to swap PSD files with other studios, and if it's kept in sRGB (even if it's a low gamut space - which still looks great, even with photography) there's no confusion along the way for input, output, space conversion, color matching, etc. I won't go into print for now, just curious about your view on the topic currently.

edit: Not to mention I won't be remastering any of my old work, which I doubt many people do in archviz.

Juraj Talcik: You can still apply Camera Raw to 32bit via the Merge to HDR dialog, but it will convert to 16bit, and you need to enable this feature in preferences. They removed the old behavior because, as they said, the result differed too much between the Camera Raw window and the final image.

Affinity is good, I don't have any problems with stability, but it works a bit differently with editing masks (applying levels, for example), so it needs some time to learn, and it still lacks smart layers for every filter; I need smart layers for making templates with the warp tool like in Photoshop. But Photoshop has many problems too: how many years did they torture users with the Ctrl+Z command, only fixing it in the last version, and they can't add saving 32bit full-float for .exr and .hdr formats. I'm also curious why the same tools work differently in other Adobe software like Illustrator. So I am dreaming of leaving Photoshop for something like Affinity or Krita.

Yeah the whole thing (CamRaw) is crafty, which is why I lament every time why Adobe just couldn't make it fully functional. It would be blessing to have it work on linear files without any hassle as that would make the post-pro workflow absolutely identical to photography, one smooth process without unnecessary division between "big" changes in linear 32bit, and "small" changes after.

Now, you originally mentioned Affinity, and I have to contend that it isn't as good as it appears. We bought two licences but almost kind of regret it: while it is a lot more ambitious (much better 32bit support, simultaneous layer adjustments (super good for textures), ...), it's just not stable. At the moment I am fully back in Photoshop for almost everything. It's not just habit... it's the same as with 3dsMax: it might not be ideal, but it's still the best choice.

Cheers for the info on Affinity; stability is a concern, so I might wait it out. Another thing is that I use Lightroom for my photography, so the Adobe suite suits my pipeline, as do the terrific categorizing/tagging etc. and the similarities between Adobe tools.

The stability might be fine for most but I tried it mainly for 32bit (HDR editing of large files) and that was where it wasn't that great.

No, I never even tried DaVinci Resolve; I simply can't force myself to adopt a node-based approach to post-pro.

I use an sRGB-clamped monitor because neither 3dsMax nor Corona are color managed, and there is little point setting up correct colors only in Photoshop when I spend 99% of my time in 3dsMax. I might switch to wide gamut when at least Corona becomes color managed; without that, it's insanity to do so imho. I also edit my photos in AdobeRGB via Camera Raw in either PS or Lightroom and only clamp when saving for web, but I do this while still being sRGB-clamped on my display, so I only get the advantage of some of the mixing without really being exposed to the wider spectrum. Switching to and managing a wide-gamut pipeline would stress me, so I gave preference to the lowest common denominator, sRGB, because of my CGI work.

(I've been advocating for color management in 3dsMax and/or Corona for years here, but it has been rather slow to get traction. I see it was potentially moved to version 5.0, so maybe next year? That is, if 3dsMax doesn't come with it sooner. Honestly, they have to: soon everyone will be using DCI-P3, since AdobeRGB is dead and HDR will become a common space. Color management must come.)

Oh, I know there are a few workarounds, but... it's not ideal :- ). The one I mentioned is "Open As", which will preserve the dynamic range without clamping it. But both are shit solutions.

I am still very ignorant of the whole HDR thing, especially since the standards (both software and hardware) are still so ever-changing these days. But I believe it will be much earlier than 10 years; after all, all the TVs have it, even the cheapest ones. And can you even still buy a classic SDR prosumer display (the consumer/professional intersection like the Dell Ultrasharp, etc., anything short of NEC/EIZO)? Almost everything coming to market new right now is some kind of HDR 144Hz PVA panel oriented toward gaming and media; they're like 99% of what you can buy. From next year on, I presume the majority of cell phones will be enabled as well.

But oh boy, I believe it will be a massive revolution when it becomes widespread across all industries. Apparently right now SDR content looks terrible in HDR mode, and vice versa; how HDR content looks on an SDR display we've known for years :- ). So if someone wanted to jump on the hype train for archviz right now, what would it look like in practice? Double the post-production? Does anyone already do it in some form for their clients? I can imagine some real-estate people could showcase such content on top-grade TVs in their showrooms to wow the clients. HDR content on a 100" 8k TV sounds more impressive to me than nausea-inducing VR.

That's actually the whole point of ACES! Keeping scene-referred data along the whole pipeline and working in a wide-gamut space so you can deliver to whatever display-referred space you need. So basically: select your display transform in a dropdown list and deliver to the intended platform. You can already (kind of) do that in any serious and valid post-production package that supports OCIO.

The only issue is that most renderers use sRGB primaries, which can cause some discrepancies during the conversion from linear sRGB to the intended working space (mostly AP1 for us; this is the space that defines the ACEScg gamut and encompasses the Rec.2020 one). So it would be better to render straight into ACEScg from scratch. As a renderer is almost colorspace agnostic, you should already be able to do so, except for spectra-related stuff (everything driven by Kelvin temperature), because those correspond to defined RGB triplets in the targeted colorspace (6500K/D65 is the white point for sRGB, for example, but ACEScg has a D60 white point).
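For linear data, the colorspace switch described above is just a 3x3 matrix multiply. A minimal sketch, using the widely published Bradford-adapted linear sRGB (D65) to ACEScg (AP1, D60) matrix; treat the exact coefficients as an assumption to verify against your own OCIO config:

```python
import numpy as np

# Widely published linear-sRGB (D65) -> ACEScg (AP1, D60) matrix,
# chromatic adaptation (Bradford) included. Verify against your
# production OCIO config before relying on these coefficients.
SRGB_TO_ACESCG = np.array([
    [0.6130973, 0.3395229, 0.0473793],
    [0.0701942, 0.9163556, 0.0134502],
    [0.0206156, 0.1095698, 0.8698151],
])

def srgb_linear_to_acescg(rgb):
    """Convert a linear (not gamma-encoded!) sRGB triplet to ACEScg."""
    return SRGB_TO_ACESCG @ np.asarray(rgb, dtype=float)

# Each row sums to ~1.0, so sRGB's D65 white maps onto the ACEScg
# (D60) white point, which is exactly the adaptation the post mentions.
print(srgb_linear_to_acescg([1.0, 1.0, 1.0]))
```

Note the input must be linear: decode the sRGB gamma first, multiply, and only then apply whatever view transform you need.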

The real issue here is all that HDR shit, to be honest. Every manufacturer is applying a whole load of post-effects to make the image "look better" without any respect for the initial vision of the content creator. The only thing we should benefit from in that technology is the wider gamut; it should not have any impact on the dynamic range of the displayed medium (software-wise). All the stuff they add on top of that is a massive pile of shit, and a lot of film producers are starting to raise their voices against those marketing trends.

So would the solution be implementing colour management into Corona and Max, and rendering straight to ACEScg? I didn't realise that even certified HDR displays were still adding lots of processing on top, but I guess it's obvious when I think about it. I guess it's to compensate for shit panels, or discrepancy across a panel batch, so they all look about the same?

I think stuff like that will be more unified once the standards start to be similar to each other along the way. And it isn't any different from people using "Vivid" mode on their display/TV right now.

The only person seeing the image as the creators (us) intended...is well, us :- ). Just something to live with.

The thing I'm not looking forward to is the day the 30fps standard disappears and we have to start rendering animations at much higher framerates.

Yeah, 4k/60 would be a nightmare :- ) But the same happened to still-frame rendering: I used to do a 4k render that took 2 hours on a single quad-core with Vray, and now that same 4k resolution is easily 2 hours on 200 cores... Quality standards constantly grow. Anyway, apparently half of Samsung's TVs in 2019 are 8k. And what was that ideal VR clarity? 8x4k per eye at 90 FPS?

If you're asking whether rendering straight into ACEScg is the solution to the color discrepancies that occur during the colorspace switch, then yes, that would give a properly color-managed workflow. It's not that simple though, as it may introduce other caveats.

For sure! It is just that this HDR thingy introduced a load more of those post-processes. What's more, HDR and HDR10 are stuck with a fixed curve set at the beginning of media playback, which ends up with detail loss in very bright or very dark scenes. Things are going in a good direction though: the HDR10+ and Dolby Vision specifications introduce dynamic metadata that allows the brightness boundaries to change on the fly (per scene) rather than remaining constant for the whole experience.
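The "fixed curve" referred to here is the SMPTE ST 2084 PQ transfer function that HDR10 uses: one static mapping from signal values to absolute luminance, regardless of scene content. A minimal sketch of its EOTF, using the constants from the specification:

```python
# SMPTE ST 2084 (PQ) EOTF constants, straight from the specification.
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.84
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.85
C3 = 2392 / 4096 * 32      # ~18.69

def pq_eotf(signal: float) -> float:
    """Map a normalized PQ signal (0..1) to absolute luminance in nits.

    HDR10 applies this one static curve to the whole stream; HDR10+ and
    Dolby Vision add per-scene metadata on top rather than changing it.
    """
    e = signal ** (1 / M2)
    numerator = max(e - C1, 0.0)
    denominator = C2 - C3 * e
    return 10000.0 * (numerator / denominator) ** (1 / M1)

print(pq_eotf(0.0))  # 0.0 nits (signal floor)
print(pq_eotf(1.0))  # 10000.0 nits (the PQ peak)
```

Because the curve tops out at an absolute 10,000 nits while real panels peak far lower, the display has to decide on its own how to squeeze the range, which is exactly where the per-scene metadata helps.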

Yeah, sadly displays are evolving faster than computer hardware. Higher transistor density is starting to be tedious for semiconductor manufacturers. As for VR, sales do not seem to be rising that much, and without mass adoption I guess we won't see high-density panels anytime soon. It's sad, because we're starting to see some interesting technologies, like foveated rendering, to be able to render RT on that hardware. The Varjo technology is even more enticing: a small ultra-high-density panel that follows your sight, backed by a standard-resolution panel. They claim it to be equivalent to a 70k display.

You are talking about colors, but can you tell me why HDR monitors all have backlights so bright that you can't actually use them for content creation? I have an HDR monitor, and enabling HDR locks the brightness setting; I can't stare at it for long at monitor distance (it's 1000cd in HDR1000 mode). In sRGB I work at brightness level 2. And common monitors are still 8bit+2FRC, so how does that work with BT.2020 10bit? I am sure most people here set a comfortable brightness of ~120cd for working. I doubt that those making HDR content work in HDR1000 mode.

AFAIK this is because there isn't a brightness control standard across monitors, which is annoying; the stupid 1-100 slider for brightness should actually be 0 to whatever max nits your display is. You should be able to subtract nits (or cd) from the brightness rather than use some arbitrary dimming setting, because luminance affects colour just as much as anything else, so in terms of colour calibration it's a bit difficult.

This is because of the core fundamental of dynamic range: the ratio between bright and dark areas. In your case, HDR1000 refers to the peak brightness of the monitor (1000 nits). Black areas for the specification have to be under 0.03 nits. So if you lower the peak brightness, you lower the dynamic range and fall out of the HDR1000 specification, hence the brightness lock.
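Plugging in the numbers from this post shows why the lock matters: the ratio of 1000 nits peak to a 0.03 nit black floor is the dynamic range the certification guarantees.

```python
import math

peak_nits = 1000.0   # HDR1000 peak brightness
black_nits = 0.03    # black-level ceiling cited above

contrast = peak_nits / black_nits
stops = math.log2(contrast)
print(f"{contrast:,.0f}:1 contrast, ~{stops:.1f} stops")

# Halving the peak without lowering the black floor halves the ratio,
# dropping the panel below the certified dynamic range:
print(f"{(peak_nits / 2) / black_nits:,.0f}:1 at 500 nits peak")
```

That is roughly a 33,333:1 ratio, about 15 stops, which is why the brightness slider gets locked once HDR1000 mode is engaged.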

But monitors have dynamic contrast, which should guarantee color quality at all brightness levels. So even at low brightness a monitor can show the full color spectrum; color will only be cut if I change the brightness in the video card driver. For sRGB mode on my monitor model, a brightness of 2 was recommended with the calibrator.

But I was saying that HDR content creators can't use HDR1000 mode for work; it's too bright and harmful for the eyes. And how can this content work on monitors with 8bit real colors? (I don't count the fake 2bit FRC, as it's just like dithering in Photoshop.) Also, 8bit video looks dull in HDR, but I can change player settings so it looks very similar to 10bit. So I am thinking it's more advertising than a real advantage in colors. Maybe for video it gives more colors with compression, but in CG we render every frame in full color, so our renders should have more colors than these fake HDR videos.

There is also the current issue of 10bit output being restricted to professional cards, regardless of the physical input on the monitor. nVidia gonna nVidia. I wonder if HDR will force them to reconsider this stupid policy.

When you talk about colour, though, you actually mean chromaticity, which doesn't take luminance into account. By reducing the monitor brightness you're restricting the luminance values it can represent, and therefore restricting the colours available on your shiny new HDR monitor.

This has changed. As you mentioned, HDR forces them to output at higher bit depth. But they are not stupid XD; it seems that you only get 10bit when playing games. So basically it only works when launching DirectX.