- Camera Tips -
* Capture textures when there is diffuse light, like an overcast day.
* Always shoot in manual.
* Always shoot in RAW.
* Set a custom White Balance. (Optional)
* Use the "Faithful" profile on your camera. (Optional)
* Try to keep the Macbeth chart as parallel to the surface as possible. (Sometimes I put tape on the back.)
* Don't overexpose; you do not want to clip the Macbeth chart.
* If you are cross polarizing, set the shutter speed to 1/200 or faster; this will cancel out the ambient light.
* When you use a polarizing filter you will lose around 1 stop of light, and the white balance will shift.

- Creating the Camera Profile -
Take a picture of your Macbeth chart in the same lighting conditions as your textures.
If you are using a Macbeth chart from X-Rite, download the "ColorChecker" app.
Start the "ColorChecker" app and load the picture.
The "ColorChecker" app will do its thing.
That's all, you are done.

- Calibration -
I use Lightroom most of the time, but you can do the same stuff in Photoshop with CameraRAW. I'm using Photoshop in this guide.
Remember that these are RAW images.

If you open the RAW image in Photoshop without changing anything it will look like this. eeeeew nasty!

Before we can calibrate the image, we need to linearize it.

Open the "Camera Calibration" tab.
Change the "Process" to 2010.
If you made a "Camera Profile", select it.

I have made this little cheat chart to show you what the values should be.
The RGB values are for Photoshop and the percentages are for Lightroom.

You want to get Sampler #1 and #6 correct first. That's 243 and 53.
Start by adjusting the "Exposure" slider until you get 243.
If #6 is too dark, use "Fill Light"; if it's too bright, use "Blacks".
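For the curious, those 243 and ~53 targets are just the sRGB-encoded 8-bit values of the grey patches' reflectances. A quick sketch, assuming approximate published ColorChecker reflectances (roughly 90% for the white patch, about 3.4% for the black patch); check the values for your own chart:

```python
# Where the 243 / ~52-53 sampler targets come from: sRGB-encode the
# patch reflectances. Reflectances below are approximate assumptions.

def srgb_encode(linear):
    """Linear reflectance (0-1) -> sRGB-encoded value (0-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def to_8bit(linear):
    return round(srgb_encode(linear) * 255)

print(to_8bit(0.90))    # white patch -> 243
print(to_8bit(0.034))   # black patch -> 52
```

So the targets are not magic numbers; they are the chart's measured reflectances pushed through the sRGB transfer curve.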

#1 and #6 are correct, but the other swatches still need some tweaking.
This is where we use the "Targeted Adjustment Tool".
Select the tool and drag your mouse up and down over the swatches.
You need to fudge each swatch multiple times before each one is correct.
You can still adjust "Exposure", "Blacks", etc.

When you are done, click on the "Presets" tab and create a preset.

Open the picture without the macbeth chart and click on the preset.

Here is the calibrated picture, niiice.

The picture we just calibrated was shot without any polarizing filter.
If you want to capture really good textures, you will need to invest in cross polarization.

Here is the albedo texture from cross polarization.
That's one dirty concrete wall! What is that? Diarrhea splatter? Guess we will never know.

Interesting, correct me if I'm wrong, but if you shoot in RAW, why choose the "Faithful" profile in camera? It's only good for the in-camera preview; you can even shoot in monochrome, it won't make the RAW monochrome anyway. Or does the setting change something for the RAW?

Hi dubcat, just one quick question. If the final texture with the correct albedo is "brighter" than the original one, I suppose the final render will look washed out. So what would be the next step once the render is done? Apply a contrast curve or something in post? Apologies if I'm not getting things right. My mother language isn't English, so I'm doing my best to understand this topic.

On another note, I recommend that you underexpose your textures. I've captured a lot of bright kitchen textures, and when I've processed them in Photoshop/Lightroom they got really overexposed/wrong. Lately I've been calibrating my stuff with 3D Lut Creator, kickass tool.

Really awesome guide! Looked into cross polarisation; it never came to my mind to use a polarizer like that!
Found this resource from a game dev on that topic: http://filmicgames.com/archives/233

@Dubcat: Since a simple photo is technically diffuse + specular, wouldn't the correct path be to polarize the simple photo as well and calibrate the image through the polarizer with the chart? Or, since we aim to shoot when overcast, does the specularity not matter enough with textures to influence the resulting image?
Or do you already use the polarizer for the albedo image and I didn't read it correctly in your guide?

Dubcat, have you thought of building a personal scanner? It would give you normal/height from the cross-polarization. Not sure how, but I've seen a few of those setups recently; they didn't seem that complicated at all.

My little prototype box had LED strips on each side. I took 4 pictures and combined them in Photoshop like this.
Image 1: Top light in Green Channel and Left light in Red Channel
Image 2: Bottom light in Green Channel and Right light in Red Channel
Then I combined the two images with Overlay blend and filled the Blue Channel with #8080FF (and adjusted it afterwards).
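For anyone who wants to reproduce this outside Photoshop, here is a rough numpy sketch of the same idea. It is not the exact Overlay-blend stack described above; it expresses the four shots as signed slope differences instead, which is the standard photometric shortcut, and the function name is mine:

```python
# Approximation of the 4-light normal trick: four grayscale shots of
# the same surface lit from top, bottom, left and right (values 0-1).
import numpy as np

def normals_from_lights(top, bottom, left, right):
    # Opposing lights roughly cancel albedo; the residual encodes slope.
    x = (left - right) * 0.5 + 0.5   # red channel
    y = (top - bottom) * 0.5 + 0.5   # green channel
    z = np.ones_like(x)              # blue channel: flat #8080FF-style base
    n = np.stack([x, y, z], axis=-1)
    # Renormalize so each pixel is a unit-length tangent-space normal.
    n = (n - 0.5) * 2.0
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5             # back to the 0-1 range of a normal map
```

A perfectly flat surface (all four shots identical) comes out as (0.5, 0.5, 1.0), i.e. #8080FF, which matches the blue-channel fill in the Photoshop recipe.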

Hey DubCat, your posts are amazing, +1 from me to keep posting such stuff :)
I would love to ask you: can you point me at some good sources of this PBR related stuff? Or some PBR/Disney/game/whatever related stuff / channels / videos you think are very interesting?
One dumb question: why shoot RAW? What if someone uses a smartphone with manual settings (not talking about resolution or just better sensor capabilities)?
Thank you.

Can you point me at some good sources of this PBR related stuff? Or some PBR/Disney/game/whatever related stuff / channels / videos you think are very interesting.

Hey man. I've linked to a couple of guides and materials that I know are proper PBR.

There are two types of PBR.
* Cheap ass "Bitmap2Material 3" materials that most texture stores are selling. (They are still PBR, just not calibrated to anything in the real world.)
* Properly calibrated materials. (This can be color values, 3D scanned normals/displacement, or shadow cancellation by capturing an HDRi of the environment.)

Color Values
- You can calibrate the values like I did here in Photoshop/Lightroom, or you can use 3D LUT Creator. 3D LUT Creator will auto-generate a correct matrix, change the exposure and linearize the picture.
They have two official tutorials covering this topic.
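The "matrix" part of that match step can be sketched as an ordinary least-squares fit: find the 3x3 matrix that maps your sampled (linear) patch colors onto the reference (linear) patch values. This is an illustration of the idea, not 3D LUT Creator's actual solver, and the function names are mine:

```python
# Least-squares 3x3 color matrix fit from measured chart patches.
# `sampled` and `reference` are (N, 3) arrays of linear RGB patch means
# (N = 24 for a classic ColorChecker).
import numpy as np

def fit_color_matrix(sampled, reference):
    """Solve sampled @ M ~= reference for a 3x3 matrix M."""
    M, *_ = np.linalg.lstsq(sampled, reference, rcond=None)
    return M

def apply_matrix(image, M):
    """Apply the fitted matrix to any (..., 3) linear RGB image."""
    return image @ M
```

Because the fit happens in linear light, the matrix handles white balance and channel crosstalk in one go; exposure is just an overall scale folded into M.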

Normals/Displacement
- Most people use PhotoScan for 3D scanning, but I've heard good stuff about this new badboy, Capturing Reality.
This tutorial series about PhotoScan is great, check them out.

Shadow Cancellation
- You need a good amount of equipment to do this.
* Lens and tripod mount to shoot HDRi
* Grey ball (a cheap solution would be to use the ColorChecker grey card)
* Chrome ball (you use this ball to align the HDRi inside 3dsMax; it is not for capturing the HDRi, like in the old days)

* You re-create the grey/chrome ball in 3dsMax.
* Rotate the HDRi until the HDRi reflections in the chrome ball match the real life reference picture.
* Change the exposure of the HDRi until the grey ball matches the real life reference picture.

* Bake a map that contains the shadows and divide it out of the diffuse texture in linear space.
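That divide step is simple enough to sketch in a few lines of numpy, assuming both maps are already linear floats (`eps` is just a guard against division by zero; the function name is mine):

```python
# Shadow cancellation: divide the baked lighting/shadow map out of the
# captured diffuse texture, in linear space.
import numpy as np

def cancel_shadows(diffuse, shadow, eps=1e-4):
    # diffuse ~= albedo * lighting, so dividing out the baked lighting
    # leaves (approximately) the albedo.
    return diffuse / np.maximum(shadow, eps)
```

This only works in linear space; dividing gamma-encoded images would mix the transfer curve into the result.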

One dumb question, why shooting raw? What if someone uses a smartphone with manual settings (not talking about resolution or just better sensor capabilities)?

Camera manufacturers want their pictures to look good out of the box, so they apply white balance, contrast, sharpening, de-noising etc. and save the result as JPG.
When you take a picture in RAW, you get the raw sensor information. Nothing is baked in.

Another important thing is that JPG is 8-bit, while RAW is typically 12- or 14-bit (and opened as 16-bit).
With RAW you can make major adjustments without messing up the picture.
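A quick way to see why: push a tonal ramp down 3 stops and back up, once with 8-bit quantization in between and once in float. Only a fraction of the levels survive the 8-bit round trip:

```python
# Why "major adjustments" hurt 8-bit JPGs: a -3 stop / +3 stop round
# trip, with and without 8-bit quantization in between.
import numpy as np

ramp = np.arange(256, dtype=np.float64) / 255.0

# 8-bit round trip: quantize after the downward adjustment.
down_8bit = np.round(ramp / 8.0 * 255.0) / 255.0
back_8bit = np.clip(down_8bit * 8.0, 0, 1)

# Float round trip: no quantization in between.
back_float = np.clip(ramp / 8.0 * 8.0, 0, 1)

print(len(np.unique(np.round(back_8bit * 255))))   # 33 levels left
print(len(np.unique(np.round(back_float * 255))))  # all 256 survive
```

The surviving 33 levels are spread across the full range, which is exactly the banding you see when heavily pushing a JPG.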

Tons of useful information, as always. Big thanks! I have one question about lighting information removal, though. If the HDRi panorama is needed only for lighting removal, wouldn't capturing it with the old school method (chrome ball) be more than enough? That would save a lot of time when capturing and would be much less expensive.

Wouldn't capturing it with old school method (chrome ball) be more than enough for that?

I've never tried with a mirror ball before. Right now I use a scratched up steel ball for HDRi alignment at home; it's not clear enough to be used for mirror ball HDRi capture.
I'm planning on purchasing the Lighting Checker "Twins". (I guess that steel ball is clear enough to do a proper test.)

Would be cool if someone with a sexy chrome/steel ball could give it a try. It could be a great poor man's alternative if it works.

I sampled some albedo values today.
When I shot the last picture it was starting to get dark and blueish outside.
We can use this underexposed blue picture to compare RAW and JPG, for people who want to see the difference.

Open them in a new tab for better comparison.

It's the same shot, but the camera has applied Lens Correction to the JPG.

Thank you for all the useful information, dubcat. I ran through more or less all the info and your posts, very useful. Comparing RAW to JPEG, which one's final look better resembled the real appearance you saw? Also, I saw you posting the link for the akromatic gadget; do you happen to know any source where all the different materials are shot together with this chart? That would make a great reference for material creation.

Comparing RAW to JPEG, which one's final look better resembled the real appearance you saw?

It was starting to get dark and blue outside, so my eyes saw something like "RAW as Shot".
That's why we have to calibrate the texture with the ColorChecker: because we want the original albedo value.

I kinda messed up in this example. There had just been a major update to 3DLUTCreator, and they made some big changes to the Linear/New Matrix process.
You can see that the shadows in the RAW image are kinda purple; this is because I used the old method that I was used to with the new 3DLUTCreator.
I can post new proper comparison shots.

I had some spare time to run out and do a new comparison. This time I included cross polarization.
In the new 3DLUTCreator update you just press "Match" and everything is done for you.

If you have a ColorChecker Passport like me, don't include the top portion of the checker when you match; it will give you worse results.

With Cross Polarization
You can see that it has a hard time with the blacks; this is because there is no specular.
I'm working on a new Match preset that will work with cross polarized pictures.

You can see that the polarizing filter has a brown/green tint.

Without Cross Polarization

Here is a little test I did with Specular on the ColorChecker.

And this is how you create the Specular and Albedo texture.
I say 100-50% here; this depends on your filter. Only the ratio matters.
Here is my layer stack (Linear 32bit).

When you shoot cross polarization, you get two pictures.
One with 50% Albedo, and one with 100% Specular and 50% Albedo.

Here is the 50% Albedo picture.

Duplicate this picture and change the Blend Mode to "Add".

This is the 100% Specular and 50% Albedo picture.

Duplicate one of the 50% Albedo pictures, put it above the Specular picture and change the Blend Mode to "Subtract".
You will notice that everything that has SSS has colored specular. Leaves/skin have blue specular.
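The whole Add/Subtract layer stack boils down to two lines of linear math. A numpy sketch, assuming both shots are already linearized and exposure-matched (the function name is mine):

```python
# The cross-polarization split described above, in linear space.
# `cross`    : cross-polarized shot  (~50% albedo, no specular)
# `parallel` : non-cross shot        (100% specular + ~50% albedo)
# The exact 50% depends on your filter; only the ratio matters.
import numpy as np

def split_albedo_specular(cross, parallel):
    albedo = cross + cross                          # "Add": 2 x 50% = 100% albedo
    specular = np.clip(parallel - cross, 0, None)   # "Subtract": strip the 50% albedo
    return albedo, specular
```

The 32-bit/linear requirement in the layer stack exists precisely so these adds and subtracts behave like physical light; in gamma space the arithmetic would be wrong.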

I just have one or two questions. I'm trying to lock all this down in my mind, and have been reading everything I can get my hands on. I'm shooting my own textures, I have a Macbeth chart and have been experimenting with cross polarization. I'm very happy with the results, but I still have many questions.

1) You use the term linearize. I have seen others use the same term in reference to texture/albedo, but I'm not sure we are all using it in the same way when it comes to texture processing (nothing to do with LWF or gamma). Some seem to mean A) basically that Lightroom/Camera RAW is at all its default settings; others B) the idea of bringing the grayscale of the Macbeth chart into the proper luminance ranges with the targeted adjustment tool.

I'm mainly asking as I'm curious what exactly 3D LUT does in this area. Is it like a RAW image processor that will do all this adjustment work for you (in the sense of adjusting luminance ranges)?

2) This brings me to the part where I'm finding very little information: the in-built (Nikon, for me) tone curve of our cameras.

Now I know this may not be important, but I wish I could develop a DNG profile from/for my Nikon that was truly "linear". It seems that much contrast is added to the RAW images somewhere in the camera image processing. I have begun to think of the process of adjusting the Macbeth grey scale patches as removing this in-built tone curve.

Does this make sense to anyone else, or am I just imagining things? If our cameras had a truly linear/neutral response, wouldn't our photos come out a lot closer to the "linearized" (luminance adjusted) versions of our textures? I find that when I'm adjusting my images to fit the proper Macbeth luminance range I am always making very similar adjustments; somehow it's an inverse S-curve, always with a very steep dip between the first and second white patches. Am I wrong in thinking that I'm fighting the Nikon tone curve? If so, is there a way to remove it more automatically without buying 3D LUT?

Also, some other thoughts. I use a very similar approach to yours for adjusting the luminance patches in a targeted manner; however, after setting my basic black and white patches in ACR I go into Photoshop and use one curves adjustment set to Color blending mode, then a second set to Luminosity blending mode. In the first I white balance each color range with separate RGB curves; in the second I adjust the luminance. I found doing it all with a single curve adjustment was messing with my colors, specifically because correcting for the Nikon tone curve (whether or not my thinking is correct) involves some pretty steep corrections for the second patch, and getting the luminance right meant my colors would start to go loopy.

This is of course time consuming, so if 3D LUT will take care of this work, that would be great. Is that in fact what it does?

Okay guys, also full disclosure: I haven't tried Corona, though I can see people are getting fantastic results. I just found myself reading the forum, as there are so many great discussions.

PS. The thread showing the curve response of all the 3ds Max adjustments was brilliant... ahh, the mysteries are unlocked... too bad it was mostly bad news, hehe.

I know exactly what you are talking about! I actually think this is the RAW file decoder's fault.
I noticed that CameraRAW / Lightroom gave me super contrasty pictures, even though I used calibrated camera profiles, a linear curve, and 0 on everything.
After some googling around I found a free program called "dcraw" that gave me better results.
Not long after that I began using 3D LUT Creator. 3D LUT Creator uses "libraw" as a decoder, and "libraw" is based on "dcraw".
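For anyone wanting to try the dcraw route, a typical "flat" invocation looks something like this (flags per the dcraw documentation; the filename is just a placeholder, and the exact flag choice is a starting point rather than gospel):

```shell
# -4   : linear 16-bit output (no gamma curve, no auto-brighten)
# -T   : write a TIFF instead of a PPM
# -w   : use the white balance recorded by the camera
# -o 1 : output in sRGB primaries
dcraw -4 -T -w -o 1 IMG_1234.CR2   # writes IMG_1234.tiff next to the raw
```

The `-4` flag is the important one here: it is what gives you the flat, un-toned starting point discussed in this thread.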

This is how CameraRAW decodes the RAW file, super contrasty.

When you open a RAW file with 3D LUT Creator it decodes the file with "libraw" and saves it as a 16bit LogC tiff file. This way you keep the dynamic range.

You can customize how "libraw" decodes the RAW file here, but default is pretty good.

The viewport will load with a LogC LUT, so you will never actually see the LogC version.
This version has a more "linear" look, if you ask me.
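As a side note, a LogC curve looks something like the sketch below. The constants are the published ARRI LogC (v3, EI 800) ones; whatever curve 3D LUT Creator uses internally may differ, but the idea is the same: spend the 16-bit codes roughly evenly per stop instead of piling them onto the highlights.

```python
# ARRI LogC (v3, EI 800) encoding curve, for illustration of why a
# log intermediate preserves dynamic range in a 16-bit file.
import math

def logc_encode(x):
    """Linear scene value -> LogC-encoded value (0-1-ish range)."""
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    if x > cut:
        return c * math.log10(a * x + b) + d
    return e * x + f         # linear toe below the cut point

print(round(logc_encode(0.18), 3))  # 18% grey lands around 0.391
```

Middle grey sitting near 0.39 instead of 0.18 is what gives log footage its washed-out "flat" look before a viewing LUT is applied, which is exactly what the LogC viewport LUT hides.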

This is what you get when you align the checker pattern and click Match.

3D LUT Creator does this by first adjusting Exposure/White Balance, and then creating a new matrix.
No curves are used.

In some cases using the "Linearize with curves" tool after "Match" will give you slightly better results, and in other cases it will ruin the whole picture.
It's a trial and error thing.

When you are done calibrating, you click "Send LUT to Photoshop" and apply it to the pictures without a ColorChecker.

You are 100% right. It's the RAW decoding that introduces the contrast I have been fighting against. I have been starting from the Adobe Standard camera calibration in ACR. This has not been a great starting point, I now realize.

3D LUT's LogC starting point got me thinking. I understand that log is used by video guys to shoot "flat" and preserve maximum dynamic range ("flat" = "linear"). So really, what is needed is a flat starting point.

Being too cheap to purchase 3D LUT for the moment (though it looks like the perfect tool for processing textures), I looked into a few other options. Here is what I found.

1) dcraw, your suggestion. It definitely gave me a much flatter starting point. Great! In fact, after setting my white and black points (243/52), all the other patches were pretty much in the right places. First time I've seen that. Also, interestingly, using dcraw I got about 30 extra pixels of resolution out of my sensor. Hilarious; I guess Nikon feels those edge pixels are not fit for consumption.

I'll look into this further, but I must say the lack of a GUI is a bit of a bummer. Hey, do you remember what settings you were using, by chance?

2) LOG vision camera profile for ACR. It's essentially a very flat conversion, perhaps too flat. Also, looking at the histogram, it's almost as though something has been clipped; if I try to stretch the black and white points to the edges of the histogram, it's like the data hits an invisible wall. I also found the colors to be a little off. I have to do more playing here. It may be an option, but it may also have problems.

3) Nikon Capture NX "Flat" profile. After stumbling across something about Nikon's Flat profile online, I decided to download their free RAW developer. Paydirt! Though this software is a horrendous piece of crap, the initial conversion using the Flat profile is pretty damn good. From there I can export to TIFF and keep going. This gives me the best colors and results, I think. This will likely be my new texture workflow.

I'll post a comparison of the three images when I get a chance. Just wanted to follow up and share what I'd found. Of course 3D LUT is clearly the way to go, but for now I'm definitely getting a much better initial conversion. I wonder if I can convert with Flat, then create a color matrix profile with the Adobe DNG editor to apply back to the TIFFs in ACR. Haven't tried that yet; will post back.

PS. I don't know if you saw, but Episcura, the texture site, is giving a month of free pro access for every 8 textures you submit (3 if they are tiled). Not bad. Now if only they had a section of textures shot with Macbeth charts, that would be really amazing.

I have been using a similar workflow:
- shoot RAW images with the chart (no polarization)
- get a linearized TIFF from dcraw
- white balance and expose in Photoshop
-> all swatches match the chart pretty well

I recently started using cross-polarization, and I noticed I couldn't use the same workflow:
- when I white balance my images at the end of the process, the patches of the chart don't all fall into place as they do with non-polarized images
-> if I expose according to the 3rd grey patch, as I usually do, I get too bright whites and too dark blacks

It feels like cross-polarization, which I use to remove specular lighting, also removes specular from the chart. It doesn't seem to remove the same amount of specular on all the patches, and thus I can no longer calibrate against the chart values.
Matching the cross-polarized patches to the reference values using something like 3D LUT Creator might try to compensate for that missing specular part and do some odd color transformations.

Did anyone come across the same situation?
Do you think that the specular part on the chart is negligible, so we can consider there is no spec in the chart?

Thanks for the answer, I feel less alone in the world ^^
May I ask how you came up with these values?

In my case, I tried to calibrate my polarized chart based on the brightest grey first, since I found it's the one that might have the least specular (because of the low glossiness); and even if there is a little specular in it, it should not be much compared to the value of the bright grey itself. I expect a bigger difference on the black.

My samples are still not perfect; I'm improving my camera/light setup every month.
The secret is to have everything calibrated and then recreate the scene in 3dsmax.
I have HDRis of my light boxes and ring light (Diva Ring Light Nebula); these are calibrated to give me the same intensity/falloff as in real life. I've measured the distance/angle of these lights and recreated them in 3dsmax.
Then I shoot something with and without the polarization filter. You can then create a glossiness map from the specular map you get from cross polarization (you need to light the object from all angles, or else the specular map will not be correct). I don't have any fancy app that generates the glossiness right now, so I adjust the map until the cross polarization diffuse/glossiness matches the non-polarization picture.

Edit: I want to mention that I adjust my white balance in camera. I shoot the grey card of my ColorChecker Passport, go into custom white balance on my Canon and select the picture. You might think: "but I'm shooting in RAW, it doesn't matter". Well, it does matter when you run the RAW through the script I just posted. The polarizing filter will tint your pictures; just white balance in camera and be done with it.

"And this is how you create the Specular and Albedo texture.
I say 100-50% here, this depends on your filter. Only the ratio matters.
Here is my Layer stack (Linear 32bit)

When you shoot cross polarization, you get two pictures.
One with 50% Albedo and one with 100% Specular and 50% Albedo"

Can you explain the 50% albedo issue? When I do X-pol I tend not to extract a spec pass and just end up making bump/spec from the albedo. But when I X-pol capture my albedo, I expose as close as possible to the ColorChecker values and then bring them in line 100% with curves. Do you see a problem with this? Also, what is the significance of the 32-bit space here?

PS. Also, as we discussed before, in terms of getting a linear/flat RAW, I have found a great free open source RAW developer. It's called RawTherapee. It has a flat default which does not add any toning, and so is perfect for developing textures from RAW. I would highly recommend it to anyone looking to develop their own textures. It accepts custom camera color calibration (a la Adobe DNG profiles) and lens correction profiles. It is really fantastic and is my go-to texture-from-RAW tool.

This is where the cross polarization headache comes in.
The polarization filter will lower the exposure a little; you will only get 50% albedo max.
You can try to calibrate the camera without the filter, since the ColorChecker values are 100% albedo + 100% specular, and then turn up the exposure to compensate for the filter.
Or, the optimal solution would be to tether the camera and shoot both polarization pictures. Combine them in 32bit with Add, and see if the checker value is correct. If not, just remember to adjust both pictures with the same values.
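That tethered sanity check can be sketched like this, assuming both shots are linear floats ("32bit"); the patch box coordinates, the 0.18 reference value, and the function names are made up for illustration:

```python
# Tethered check: Add the two polarization shots in linear float and
# see whether a sampled checker patch lands on its reference value.
import numpy as np

def patch_mean(image, y0, y1, x0, x1):
    """Mean value of a rectangular patch sample."""
    return image[y0:y1, x0:x1].mean()

def check_patch(cross, parallel, ref_linear, box, tol=0.02):
    combined = cross + parallel          # "Add" blend in linear space
    measured = patch_mean(combined, *box)
    return abs(measured - ref_linear) <= tol
```

If the check fails, adjust exposure on both shots by the same factor, as the post says, so the albedo/specular ratio between them is preserved.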

Hey Dubcat (or anyone else who might know the answer), sorry to reignite an old thread, but I'm keen to know why, in the original post from 2015, you change the Lightroom Process to 2010 initially for calibration?

I'm just looking into this properly now, and have been messing around in Lightroom with those Process options and find myself wondering what exactly they're doing and why you'd choose one over the other when it comes to texture creation.

I've just opened a random RAW photo in Lightroom and "linearized" it according to the steps in the first post. I then changed the process back to Version 4 (Current) and noticed it changes the tone curve and a few of the tone settings, but maintains a similar look, so I found myself asking how critical this step was.

The reason I used 2010 is that Adobe moved away from "Recovery" after 2010. Before ACES, CameraRAW's Recovery was my go-to tone mapper; it's just too good. Adobe removed 32-bit support from CameraRAW in the 2018 update, but you can still manually download CameraRaw 9.9 from their site.

If you select 2010 and do adjustments and then select Version 4, CameraRAW will auto convert the 2010 settings to version 4.

Like many others, I've made the change from Photoshop to Affinity Photo, because Affinity Photo has 32-bit floating point support and its 360 degree HDRi mode is 1000 times faster than the 2017 Photoshop feature. Photoshop clamps floating point values above +16, and values below 0 are clamped to 0. If you try my ACES tone mapper Photoshop script, you might notice black pixels; this is because of the poor 32-bit support.

I have to edit the original post and update it to 2018 standards, with a homemade cheap Megascans setup.

If you select 2010 and do adjustments and then select Version 4, CameraRAW will auto convert the 2010 settings to version 4.

Makes sense.

I'm actually using Lightroom. The camera I use has a "Flat" Picture Mode built in, which looks very linear (RAW). I've found that if I load the RAW file into Lightroom and just switch the Profile to Camera Flat, it looks pretty much like an original RAW file to me, but I wouldn't really know otherwise.

If I'm correct, the workflow you were using with the 2010 process was purely to linearize the RAW, so you were actually seeing the original raw data, right? So I'm wondering, with the Camera Flat profile, whether any of that 2010 process is relevant to me anyway.

Also, if you can get the same results from the 2014 process, I'm assuming you chose 2010 as it's what you were comfortable with through experience, and you could in fact use either?