In ZBrush, alphas are a very useful way to sculpt detail into our models. We can create alphas from any photograph or image. In this tutorial, I want to show how I create alphas for ZBrush, first in Photoshop and then in CrazyBump, and at the end I will compare both alphas in ZBrush.

When I was experimenting, I found that CrazyBump is a great tool for creating alphas with depth compared to the alphas created in Photoshop. I don't deny that there are more detailed methods for creating precise alphas in Photoshop, and the same result can surely be achieved in Photoshop with a different procedure, but here I am just comparing two different one-minute methods.

For this tutorial, I chose an Antique Marble texture. I want to create an alpha to sculpt detail for some assets I am currently working on.

First of all, it is important to have a square image, as ZBrush will stretch the image into a square if it has a different aspect ratio. I try to keep the image size at 512 or 1024 pixels. Larger images will be heavy on ZBrush and harder to use.

Turn the image into greyscale and make a curves adjustment to bring up detail.

Then we will create a gradient mask on a new layer. We will use this gradient to mask our image so that we can use it as a brush alpha.
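Since a gradient mask is just a brightness falloff, the idea can be sketched in a few lines of code. This is a hypothetical illustration in pure Python (not part of the actual Photoshop workflow): a radial mask that is white at the center and fades to black towards the corners, which is what lets the alpha blend smoothly when used as a brush.

```python
# Hypothetical sketch of a gradient mask (the real one is painted in
# Photoshop): white (255) at the center, fading to black (0) at the
# corners, so the brush alpha has no hard edge.
import math

def radial_mask(size):
    """Return a size x size grid of 0-255 falloff values."""
    center = (size - 1) / 2.0
    max_dist = math.sqrt(2) * center  # distance from center to a corner
    mask = []
    for y in range(size):
        row = []
        for x in range(size):
            dist = math.hypot(x - center, y - center)
            value = max(0.0, 1.0 - dist / max_dist)
            row.append(int(round(255 * value)))
        mask.append(row)
    return mask

mask = radial_mask(512)
print(mask[256][256])  # near the center: close to white
print(mask[0][0])      # corner: black
```

In practice the same falloff is painted with Photoshop's gradient tool; the sketch just shows why the mask makes the alpha usable as a brush.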

Make sure to convert the RGB image into greyscale so that ZBrush can recognise it as an alpha. We need to save it as a .psd file into the ZBrushRootFolder/ZAlphas directory so that the newly created alpha will be recognised in the ZBrush Lightbox.

ZBrush Lightbox works as a file browser and is a quick way of accessing native or custom-made brushes, alphas, models, noisemakers etc. It can be activated with " , " (the comma). When you click on the Alphas tab, you will see the alpha you have just saved. I have the habit of saving with my initials as a prefix, so it is easier for me to browse my custom-made alphas. Just double-click on the alpha of your choice and it will be set as the alpha of the current brush.

Now I will create the same alpha in CrazyBump and afterwards compare the alphas.

CrazyBump is a small but powerful application for creating and editing normal and displacement maps. We load our AntiqueMarble photo from the start-up menu. The software then tries to evaluate the depth of the image and asks us to choose the correct peaks and valleys.

Then we edit the normal map and the displacement map with the sliders on the right-hand side. It is pretty straightforward and simple to use. The preview window gives very good feedback on the overall adjustment, so I tend to keep my eye on it.

When we are done editing, we save our displacement map to the clipboard or to a folder, repeat the same masking procedure in Photoshop, and then save our new file to the ZBrushRootFolder/ZAlphas directory again.

Now both alphas are ready for testing. Just open up ZBrush and load the two alphas one by one. You can see my results below. Remember that both alphas are applied at the same Z-intensity and the same brush size.

You will notice that the alpha created with Photoshop applies a surface displacement, while the alpha created with CrazyBump has a better depth range.

I hope this tutorial has been helpful. Your comments and feedback are always welcome.

Gamma correction is calculated during rendering but is not baked into the image. It is displayed with the sRGB button.

I use Linear Workflow, but I tend not to bake gamma correction into the output. In Vray, by using the sRGB button on the VrayFrameBuffer (VFB), we can display the output in a gamma-corrected way while still keeping the image non-baked.

Why Not Bake The Gamma?

A single image might be a mixture of:

3D object

Lighting Passes

A matte painting

Photos

Masks

Live action footage

Additional effects

Not all of this is created by one person or workstation. Data can come from many sources. The advantage of a non-baked image is that it allows the compositor to work on the raw image and apply the gamma curve at his own discretion. This way, the true 32-bit values are stored in the image.

Vray Render Settings For Non-Baked Output

What is 32bit Image?

In a JPEG image, the dynamic range is from 0 to 255: 0 is pure black and 255 is pure white.

This means:

White wall (properly lit) = value 255
Sun = value 255

Both the white object and the sun will have the same value on the histogram. This is because of the limitation of 8-bit images, which display the brightest white point as 255. Both whites get the same value in a JPEG photo or a render.
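The clipping described above can be made concrete with a tiny sketch (the sun's linear value below is an arbitrary assumption, chosen only to show the idea):

```python
# Illustration of 8-bit clipping (a sketch, not any renderer's code):
# both a bright wall and the sun map to 255 in an 8-bit image, while
# float (32-bit) storage keeps them apart.

def to_8bit(linear_value):
    """Quantize a linear brightness (1.0 = display white) to 0-255."""
    return min(255, int(round(linear_value * 255)))

wall = 1.0   # a properly lit white wall, exactly at display white
sun = 50.0   # the sun: vastly brighter; the exact value is illustrative

print(to_8bit(wall), to_8bit(sun))  # both clip to 255
print(wall, sun)                    # float values keep the difference
```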

However, in real life there is a huge difference in brightness between a white shirt and the sun. 32-bit images, or High Dynamic Range Images, allow a higher range of brightness for different objects. The images can be in HDR format or OpenEXR format (by Industrial Light and Magic).

In a 32-bit high dynamic range image, there is a huge tonal difference between a white shirt and the sun, although they look the same on the monitor.

You can check this in the VFB by right-clicking on different points of the rendered image and reading the pixel values. You will notice that areas which receive an excessive amount of light (burnt areas) have very high pixel values (using Linear Color Mapping).

32-bit images allow:

Precision in calculations and color

Greater tonal range (of highlights and shadows), like in real life.

Wider range of colors (visible on a high-end monitor) for color grading

Ability to work in detail on dynamic range

Ability to restore the original range after manipulating the histogram

There are many long forum threads and tutorials about Linear Workflow (LWF), but it still remains a confusing subject for many people, especially newcomers. In this tutorial, I would like to share my understanding of LWF to simplify the concept; I am not an expert on electronics, so instead of a technical, scientific explanation, this is my point of view as an artist.

Why Do We Use LWF?

Both 3dsMax and Vray work in gamma space 1. However, our monitors do not work in the same gamma space. What does this mean? It means that we don't see correctly what 3dsMax + Vray renders for us unless we adjust our software.

While 3dsMax and Vray work in gamma space 1, most LCD displays work in gamma space 2.2. There are even some CRT monitors which work in gamma space 2.5.

The output (render) displayed on our monitors is the result after the application of a 2.2 gamma curve, which is applied automatically by our monitors.

So we need to adjust our software and render engine to work in the same gamma space as our monitor, and this is referred to as LWF (Linear Workflow).

Although it may sound complicated at the beginning, it is in fact simple. In theory, it is just about adjusting the transition curve of midtones from black to white. You can think of it as a falloff curve, defining the transition of greys from black to white.

As 3D artists, we are familiar with falloff maps and fresnel curves, and all of us adjust falloff curves to create different effects. We use the curves tool to adjust the histogram of our images in Photoshop too. These are all different transition curves controlling various settings as functions of the transition from white to black, or in other words from yes to no.

LWF is the same use of curves: it helps us adjust the gamma space, which controls the midtone curve in the transition from black to white.
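That midtone curve is just a power function. As a rough sketch (assuming the common pure-power approximation of the 2.2 display curve, not the exact sRGB formula), encoding a linear value for the display brightens the midtones while leaving black and white fixed:

```python
# Sketch of the LWF midtone curve, assuming a pure power-law display
# (an approximation; real sRGB has a small linear segment near black).

def gamma_encode(linear, gamma=2.2):
    """Apply the display transfer curve to a 0..1 linear value."""
    return linear ** (1.0 / gamma)

# Black and white stay put; everything in between is lifted.
for v in (0.0, 0.25, 0.5, 1.0):
    print(v, round(gamma_encode(v), 3))
```

A linear 0.5, for example, encodes to roughly 0.73, which is exactly the "midtones look too dark without gamma correction" effect the workflow exists to fix.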

Move back slightly from the screen and gaze at the following images with your eyes half-closed.

Visually compare the square outlines and the stripes around them, looking for patterns that appear to have the same tone of gray (brightness).

The pattern for which the square frame and the striped pattern around it appear closest in brightness represents the rough gamma value to which the monitor is currently configured.
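The math behind this half-closed-eyes trick can be sketched as well (assuming an idealised pure power-law display): the alternating black and white stripes average to 50% in linear light, so the solid gray that visually matches them reveals the monitor's gamma.

```python
# Why the calibration chart works (sketch, idealised power-law display):
# stripes of pure black and white average to 0.5 in linear light, and
# the 8-bit gray that looks equally bright depends on the gamma.

def matching_gray(gamma):
    """8-bit gray value that matches 50% black/white stripes."""
    return round(255 * 0.5 ** (1.0 / gamma))

for g in (1.0, 2.2, 2.5):
    print(g, matching_gray(g))
```

On a gamma 2.2 monitor the matching solid gray is around 186, not 128, which is why the chart can distinguish 2.2 from 2.5.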

Adjusting The Software For LWF (3dsMax & Vray)

The next step is simple. We will tell our software to adjust to our monitor's display gamma. We need to change the Gamma & LUT preferences in 3dsMax and the Color Mapping settings in Vray as shown below:

With these settings, we are telling 3dsMax and Vray that we want to apply a curve adjustment to our output so it is displayed correctly on our monitor. 3dsMax then applies the same correction to all input images, such as the texture maps used in the scene, so that everything in the scene is in one consistent gamma space.

If you look under the color mapping settings, you will see that I use the "color mapping only" mode. This setting keeps the gamma from being baked into the image, so the image stays in gamma space 1. I will talk about baking versus not baking the gamma, and its uses, in the next tutorial.

I hope I was able to simplify the concept of Linear Workflow and its usage, and that it is useful to anyone who is confused by the long discussion threads on forums.

When I was modeling a sofa the other day, I collapsed one of the meshes that had a TurboSmooth modifier in Isoline Display mode. I hadn't noticed until then that there is a bug, and I didn't even notice the buggy result at that moment. I was probably in edge mode, and by the time I noticed the outcome it was too late to undo.

So here is how the problem was looking:

isoline

As you can see, the new edge loops are not there, only the vertices. It is a very easy fix on a simple box model, but I had a complex mesh which was really hard to fix and would have taken a lot of time. I was very lucky to find this script, which saved my day:

Vertex Killer by Mauricio B. Gehling
http://www.scriptspot.com/3ds-max/scripts/vertex-killer
This tool will remove any SELECTED vertices that are shared by ONLY and EXACTLY 2 edges. I thought it would be very useful to share this in case anyone else collapses a TurboSmooth in isoline display.

Slides of my presentation at Academy Day 3 are ready for download from the State of Art site:
http://www.stateofartacademy.com/ad3-interviews-pixela/?lang=en
The pack consists of all the slides I presented for the Making Of of the Boho scene, with all the modeling, texturing and lighting stages.

After I published my Rosso scene, many people were asking me to prepare a tutorial about how to create a curtain shader. Here is the mini-tutorial I promised for the Ronen Bekerman Forum, one of the leading and most creative arch-viz sites, which I am happy to take part in.
http://www.ronenbekerman.com/material-curtain-v-ray/

For the past few days I have been excited about being able to use dual monitors. My second monitor doesn't have great contrast or color depth (in fact it is a cheap monitor), but it is great to be able to move all the secondary windows to it while working. It feels a bit dizzying and confusing at the beginning; the hardest part is that there is no taskbar on the second monitor, due to a Windows 7 limitation. So I did some searching, discovered some very nice tools, and thought it would be useful to share them with those of you who are new to using dual monitors like me.

Windows Shortcuts:

If you are like me and prefer to use keyboard shortcuts and hotkeys instead of the mouse, then you will love these handy Windows system shortcuts.

It is very handy to have the taskbar on your secondary monitor so that you don't have to go back to your primary monitor any time you want to do something with the taskbar; it saves a great deal of time.

Winsplit :


This is another freeware program that lets you organise your windows with keyboard shortcuts. I installed it and I like how it works very much. You can download it from: http://www.winsplit-revolution.com/home

Today is a special and lucky day for me, because it is very nice to have two of my tutorials published in one day. This is a tutorial I prepared for Evermotion: The Making Of Story Of Bronte Scene. I hope you find my tutorials useful.

Vray Lens Effects is a new tool in Vray 2.0. Last week I had time to test this new feature on the night view of my Rosso scene, and to share my experiments with you I prepared a tutorial, published on my friend Matt Guetta's blog. The tutorial is in English; only the introduction is in French. Here is the link for the tutorial:

I have prepared a new tutorial on the 3dsMax Hair & Fur modifier for Ronen Bekerman Architectural Visualisation Forums, one of the most innovative sites of the community. This article tutorial explains how to create a furry object with the 3dsMax Hair and Fur Modifier, the one that is in my scene "My Bedroom Concept."

You can check out the tutorial on Ronen's site, where it is hosted; I hope you find it useful:

This is a Making Of tutorial that I created for Evermotion, one of the leading forums of the industry. Inside the tutorial you can find the various stages of the project (modelling, texturing and lighting) as well as tips on how to create specific effects.

This is the 2nd part of the Ground Elements tutorial. You can find the 1st part here. In this part, I will try to explain how I scatter pebbles with Particle Flow. (Please click on the images to view them in higher resolution.) First, I modeled a few different little stones, of varying sizes and shapes. 19 different stones were enough to build the variance.

[singlepic=93,550,,,left]

Then I converted these meshes into VrayProxy objects to save RAM. VrayProxy is an excellent feature of Vray. Neil Blevins has written a very good tutorial about scattering objects at www.neilblevins.com; after reading it, I decided to use a similar method.

I created a Particle Source. You can control all the parameters of your Particle Flow when you go into Particle View. Please look at the 1st part of the tutorial, Grass With Particle Flow, for details of the Particle Flow settings. The ShapeInstance operator chooses the particle geometry object; in my case it is the sphere. The PositionObject operator chooses the emitter object. With the Birth operator, you can adjust the number of particles. If you want to scatter different types of geometry, you can make a parent dummy object and link the child objects to it. In my case, this is not necessary.

[singlepic=92,550,,,left]

[singlepic=94,550,,,left]

Proxy objects lose their proxy characteristics if they are scattered directly, so I replaced these sphere meshes with the proxied pebble models in the later steps. After distributing the spheres with Particle Flow, I baked the particles into a mesh using Bobo's BakePFlowToObjects script. This turns all the particles into geometry so that we can replace them. You can find this very useful script at www.scriptspot.com.

[singlepic=91,550,,,left]

To replace the particles with VrayMeshes (the proxy objects of the little stones), I used another great script, ObjectReplacer by Neil Blevins (www.neilblevins.com). I replaced all the dummy spheres with the meshes. These steps were necessary to retain the proxy characteristics of the scattered objects. Position, rotation and scale properties are adjustable.

Actually, this is not a new tutorial; I created it as part of the Making Of story for my scene Time Under Trees. But I have been asked a few times to publish this part as a separate tutorial, so it can get listed in the search engines and include a grass version of the same method. This first part is about how to create grass using the Particle Flow method. (Please click on the images to view them in higher resolution.)

First, I modeled a few different grass strands to scatter around.

[singlepic=95,550,,,left]

Then I created a Particle Flow System.

[singlepic=99,550,,,left]

When you click on Particle View, you can see the settings of Particle Flow in a new window. As a first step, I set the Emit Start and Emit Stop values to 0, as I am not animating but using it for a still image. You can specify the number of particles. I choose my emitter object with the Position Object operator; this selects the object that we scatter our particles on (the emitter).

[singlepic=97,550,,,left]

In the Rotation operator, it is important to choose Random Horizontal. This setting limits the rotation to the surface plane, without rotating in the volume.

[singlepic=98,550,,,left]

To choose the particle geometry object (the meshes that will be scattered around), we will use the Shape Instance operator. To scatter different objects, you can group them and enable the "Group Objects" setting, or you can make a parent dummy object and link the child objects to it. I used a basic primitive box object because my goal is to make use of VrayProxy objects for my particles; I will talk about this in the next steps. Now we will distribute the box object.

[singlepic=96,550,,,left]

After distributing the boxes with Particle Flow, I baked the particles into a mesh using Bobo's BakePFlowToObjects script. This turns all the particles into a mesh so they can be replaced. You can find this very useful script at www.scriptspot.com.

[singlepic=91,550,,,left]

To replace the particles with VrayMeshes (the proxied grass models), I used another great script, ObjectReplacer by Neil Blevins (www.neilblevins.com). I replaced all the dummy boxes with the meshes. These steps are necessary to retain the proxy characteristics of the scattered objects. Position, rotation and scale properties are adjustable.

I have prepared a mini tutorial about how to activate the VFB History Window. This is a very useful but hidden feature of VRAY and it is one of my favorite tools.

[singlepic=45,320,240,,] It is a very handy feature to have your render history: it is very easy to switch between previous renders with one click, with no need to save them as files, give them separate file names, or open them in an image viewer anymore! You can view all your previous renders in the 32-bit VFB. To activate this feature, you need to create an environment variable:

1. Right-click on My Computer. Go to the Advanced tab. Click on the "Environment Variables" button at the bottom of the window.

2. Click on "New"

[singlepic=48,360,,,]

3. Make a new Environment Variable as shown in the image:

Now when you start 3dsMax and render into the VrayFrameBuffer, at the bottom right of the VFB window you will see a new button which opens and closes the history window. You will be asked to assign a temp path on first use. And that is all :) From now on, you can compare your renders by clicking on the thumbnails in your VFB History.