Introduction

ft-SSIBL, short for “Screen Space Image Based Lighting”, is based on a topic I covered in a previous post about Roy Stelzer’s “2.5D Relighting inside of Nuke”. In this shader I tried to reproduce a few approaches found in his Nuke script. So with a normal pass (object or world space), you will be able to do some relighting with an HDR map. Note that the shader won’t compute the 9 coefficients (spherical harmonics) for you, as described in this paper: http://graphics.stanford.edu/papers/envmap.
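For context, here is a small sketch (not the actual shader code) of how those 9 spherical-harmonic coefficients get used, following the irradiance formula from the Stanford paper linked above. The dictionary layout `L[(l, m)]` for one color channel is my own assumption; the constants come from the paper.

```python
# Constants c1..c5 from the Ramamoorthi/Hanrahan irradiance environment map paper.
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance(L, x, y, z):
    """Evaluate diffuse irradiance for a unit normal (x, y, z).

    L maps (l, m) -> SH coefficient for one color channel, as produced
    by prefiltering the HDR map (the step this shader does NOT do).
    """
    return (C1 * L[2, 2] * (x * x - y * y)
            + C3 * L[2, 0] * z * z
            + C4 * L[0, 0]
            - C5 * L[2, 0]
            + 2.0 * C1 * (L[2, -2] * x * y + L[2, 1] * x * z + L[2, -1] * y * z)
            + 2.0 * C2 * (L[1, 1] * x + L[1, -1] * y + L[1, 0] * z))
```

With only the constant term `L[0, 0]` set, the lighting is the same for every normal direction, which is a quick sanity check.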

My tutorial for AETuts+ is finally out!!
It covers time remapping, waveform-to-keyframe conversion, expressions, … I got really inspired by watching all those reels from motion designers and filmmakers, most of the time set to music from Hecq. I was wondering how they did their editing and cuts, so I came up with this idea. I don’t know if this is how they did it, but this is my approach.

Introduction

Yeahh, Matt Ebb just committed my patch (SVN r27733) for the “Color Balance” node in the Blender 2.5 compositing nodes!!! Now it should be much easier to work with.

You can get a version of Blender at Graphicall.org (any build above revision 27733).

There are still some precision issues on the color wheels; I guess some day it will be possible to move the color picker more slowly.

How to use it?

First, I would recommend un-checking the “Color Management” setting in Blender 2.5, or it will make the blacks really hard to control.

If you are not so familiar with color grading and push-pull techniques, I would really recommend watching Stu Maschwitz’s (Prolost) video tutorial using Magic Bullet Colorista. The settings won’t be exactly the same, but the approach is quite similar!

How does it work?

I described Lift/Gamma/Gain in a previous post, and this node is mostly based on the formulas specified there. We just slightly modified it so that the three parameters’ default values are equal to 1.0, just like in Colorista, which makes it much easier to control the blacks. Actually, the “Color Balance” node before this revision used the same formula, but with a lift default value of 0.0.
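To make the “defaults at 1.0” idea concrete, here is a minimal sketch of a Lift/Gamma/Gain operation in that style. This is my simplified reading, not the node’s exact code: Blender’s implementation also wraps the math in sRGB/linear conversions, and the internal lift remap is an assumption on my part.

```python
def lift_gamma_gain(v, lift=1.0, gamma=1.0, gain=1.0):
    """Simplified Lift/Gamma/Gain; all defaults at 1.0 leave v unchanged."""
    lift_internal = 2.0 - lift                      # remap so lift pivots at 1.0
    x = ((v - 1.0) * lift_internal + 1.0) * gain    # raise blacks, then scale
    x = max(x, 0.0)                                 # clamp before the power
    return x ** (1.0 / gamma)                       # gamma as inverse exponent
```

With `lift=1.2`, pure black (0.0) is lifted to 0.2 while pure white stays at 1.0, which matches the “lift pivots around the highlights” behaviour of this family of controls.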

Some presets!

While making some comparison tests between Colorista in After Effects and the “Color Balance” node in Blender, I tried to mimic some of Colorista’s presets.
You can download the “Blender Color Grading Presets” here : http://code.google.com/p/ft-projects/downloads/list

Even if I’m aware of what gamma and linear workflow are, I’m not quite sure I’m always using them in the correct way. So I decided to dive into documentation and forums again to refresh my memory and close a few gaps at the same time.
Since so many people, even in the industry, still don’t know what it is and how it works, I thought I would keep a kind of diary of what I found during my research over those couple of days.

Introduction

To get started, there is this great example from AEtuts+.com talking about linear workflow in AE. It is not the deepest explanation out there, but it will give you a nice overview, with simple words and explicit examples, of what linear workflow is and why it is so important!

Most graphics software is written for a linear color model, i.e. it makes the simple assumption that 255 is twice as bright as 128. But since the monitor is non-linear, this is not true. In fact, for most monitors (with gamma = 2.2), you need to send the pixel value (0.5^(1/2.2))*255 ≈ 186 if you want 50% of the brightness of 255. The commonly used value of 128 will only produce about (128/255)^2.2 ≈ 22% brightness.
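The two numbers above are easy to verify yourself; this little snippet just redoes the arithmetic from the paragraph:

```python
GAMMA = 2.2  # typical monitor gamma

# Code value you must SEND to a gamma-2.2 monitor to display 50% brightness:
half_brightness_code = (0.5 ** (1.0 / GAMMA)) * 255

# Brightness the monitor actually produces when you send 128:
apparent_128 = (128.0 / 255.0) ** GAMMA

print(round(half_brightness_code))  # ~186
print(round(apparent_128 * 100))    # ~22 (percent)
```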

Digital cameras have a (roughly) linear response to light intensity, but since their images are intended for display on computer monitors, they embed the non-linearity (gamma) in the image. (This is true for .jpg files, whereas RAW files are just that, raw, i.e. linear data that only becomes non-linear when converted to JPG, for example.)

Therefore, if you feed .jpg images taken with a camera into graphics software, you need to compensate for the image’s gamma (by applying the inverse gamma: 1/2.2 ≈ 0.455).

And if you display (linear) data generated by a graphics algorithm, you need to compensate for the display gamma (apply a 2.2 gamma to the picture).

A few facts:

When creating a texture in Photoshop, you’ll see its colors with a 2.2 gamma applied (because screens are big liars :p). Meaning that when you think you’ve got the right “brightness”, you have actually made it twice (or more) brighter than it is supposed to be in the real world.
While for painting or montage this might not be important, for textures it really is!!! Because, as said above, your renderer/shader/… will assume the picture is linear and will do its math accordingly.
So the only solution to bring the picture back to a “linear color space” is to set the gamma to the inverse of what the monitor shows you. As we know, on PC the gamma is 2.2 (I think it’s 1.8 on Mac OS X). So the gamma value of your texture before saving it should be 0.455 (1/2.2).

Tip: in Photoshop, on top of your layers, add a “Levels” adjustment layer and set the gamma value (mid-tone) to 0.455.

With most of today’s software I don’t think it is necessary to do that any more, but to be honest, this really depends on how the software you are using integrates linear workflow. For instance, in 3ds Max you can enable gamma correction in the “Gamma and LUT” tab of the preferences panel.

Because renderers work in linear space, your render will seem to look darker on your screen. So if you are saving it to an 8-bit file type (such as JPG), you should set the output gamma parameter to 2.2. But if you are saving it to a floating-point file (HDR, RAW, EXR, …), this parameter should remain 1.0. Because all the dynamic range of your picture is preserved in those raw files, you apply the gamma only in post-production (compositing).
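The rule above can be sketched as a tiny (hypothetical) helper: bake the 2.2 display gamma into 8-bit outputs, and leave floating-point formats linear. The function names and the format list are mine, just for illustration.

```python
FLOAT_FORMATS = {"exr", "hdr"}  # floating-point formats: keep linear

def output_gamma(ext):
    """Gamma to bake into the file: 1.0 for float formats, 2.2 otherwise."""
    return 1.0 if ext.lower() in FLOAT_FORMATS else 2.2

def encode(value, ext):
    """Apply the display gamma only when writing to an 8-bit format."""
    return value ** (1.0 / output_gamma(ext))
```

So `encode(0.5, "exr")` leaves the value untouched, while `encode(0.5, "jpg")` brightens it for display, exactly the asymmetry the paragraph describes.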

In the case of After Effects above, as long as you activate the linear workflow, it will take care of that for you, so you don’t have to change the gamma at all; just leave it.

Since there are no masks in Blender’s “Compositing Editor” yet, I found a simple trick which can work pretty well, especially for color grading.

You’ll see nothing really fancy here, since the mask can only be a square (or something pretty close to a square shape). But if you check out Colorista, for instance, the two available shapes are ellipse and rectangle.

I bumped into this website a few days ago. It looks pretty old, but it’s the first time I’ve heard about it, and I think it is worth a look for open source software development (Ramen has already implemented it).

Introduction

From their website:

“OpenFX is an open standard for visual effects plug-ins. It allows plug-ins written to the standard to work on any application that supports the standard. This avoids the current per application fragmentation of plug-in development and support, which causes much heartache to everyone, plug-in developers, application developers and end users alike”

Who uses it?

Well, this is the interesting part! Major plug-in development companies use it, a few of which appear in the following list:

So besides the bullshit talk about how “Blender should remain free and shouldn’t mix with closed-source or commercial third-party apps”, this could be really useful for those of us who want to keep Blender as our main tool but still be able to use great external (sometimes commercial) plug-ins!

Take for instance The Foundry’s Keylight, which in my opinion is one of the best keying plug-ins ever (and to the first one who tells me Blender can do the same job: I’ll give him 12 shots to do in a week and expect them to be perfect 🙂 ).
If you want to use it today, you’ll need to buy the plug-in (175€) plus one of the compatible applications such as Nuke, Shake or so (which I believe run around 2000-3000€), or even buy an After Effects licence, because Keylight comes bundled with it now (about 700€). Pretty expensive just to do keying, don’t you think?

Whereas with OpenFX you could spend only 175€ on the plug-in and use it with your favourite app (even your own, if you’d like).

I’ve been waiting for this forever! Blender 2.5 now manages a linear workflow (gamma correction, …).
Matt Ebb just made a great update where the UI (color picker, …) gives you feedback in linear color and properly presents the linear workflow.

Here is the commit message of the SVN update (revision 25065):

Changes to Color Management

After testing and feedback, I’ve decided to slightly modify the way color
management works internally. While the previous method worked well for
rendering, was a smaller transition and had some advantages over this
new method, it was a bit more ambiguous, and was making things difficult
for other areas such as compositing.

This implementation now considers all color data (with only a couple of
exceptions such as brush colors) to be stored in linear RGB color space,
rather than sRGB as previously. This brings it in line with Nuke, which also
operates this way, quite successfully. Color swatches, pickers, color ramp
display are now gamma corrected to display gamma so you can see what
you’re doing, but the numbers themselves are considered linear. This
makes understanding blending modes more clear (a 0.5 value on overlay
will not change the result now) as well as making color swatches act more
predictably in the compositor, however bringing over color values from
applications like photoshop or gimp, that operate in a gamma space,
will give identical results.

This commit will convert over existing files saved by earlier 2.5 versions to
work generally the same, though there may be some slight differences with
things like textures. Now that we’re set on changing other areas of shading,
this won’t be too disruptive overall.

How, why, cool?

So, really basic programming, but I thought it would look cool. Actually, what was going to be a cool-looking animation turned out to become a cool visualisation tool!
I found out that just by showing pixels in a 3D space based on their RGB values, you can see several dimensions at once:

Red value : X axis

Green value : Y axis

Blue value : Z axis

Luma value: the vector from black (0,0,0) to white (255,255,255). The closer the point cloud is to the white corner, the brighter the picture is (… no kidding 🙂 ).

Saturation value: the distance perpendicular to the luma vector. The more saturated the picture, the wider the point cloud; the more desaturated, the thinner it gets. A black & white picture would only show particles along the luma vector.
This one was the least obvious to me (but I’m not really smart :p).
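The geometry above can be sketched in a few lines: project an RGB point onto the grey axis to get a “luma”, and take the perpendicular distance to that axis as the geometric “saturation”. This is a sketch of the idea (using a plain average along the axis, not a perceptual Rec. 709 luma), not the actual Processing code.

```python
import math

def luma_and_saturation(r, g, b):
    """For an RGB point in 0..1, return (projection onto the grey axis,
    perpendicular distance to that axis)."""
    axis = 1.0 / math.sqrt(3.0)               # component of the unit grey axis
    proj = (r + g + b) * axis                 # dot((r, g, b), axis direction)
    cx = cy = cz = proj * axis                # closest point on the grey axis
    sat = math.dist((r, g, b), (cx, cy, cz))  # distance from the axis
    return proj, sat
```

A grey pixel like (0.5, 0.5, 0.5) sits exactly on the axis (saturation 0), while pure red (1, 0, 0) is as far from it as a primary can get, which is why a black & white image collapses into a line and a saturated one balloons outward.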

Examples

Saturated picture = wider point cloud

Desaturated picture = finer point cloud

Black & white = point cloud shows only as a straight line along the luma vector

Brighter = point cloud closer to the white corner

Darker = point cloud closer to the black corner

You will have to add a QuickTime movie (.mov) named “vid.mov” in the “data” folder.

Conclusion:

For sure, all this sounds pretty obvious, and I’m pretty sure I’ve seen people doing this kind of thing before, but I’m surprised I haven’t seen it in any video editing software (or maybe I missed it).

I think it could be a really helpful tool to get a quick overview of your picture and, in a snap, be able to tell if it’s too saturated, too red, too blue, too bright or too dark…

Feel free to leave any comments about this, and if you know of something similar, just drop a line in the comments. By the way, this is my very first complete project with Processing, so I probably did some things the wrong way; you are welcome to correct me 🙂