So if 1 UV unit equals 10 world units, you have to multiply by 10 so the displacement matches exactly. Also, in this scene I wanted to displace along the x and z directions, which is why u is connected to x and v to z.
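A minimal sketch of that scaling idea, assuming the 10:1 UV-to-world ratio from above (function and constant names are illustrative, not from the original setup):

```python
import numpy as np

# Assumed scale: 1 UV unit spans 10 world units in this scene.
UV_TO_WORLD = 10.0

def displace(P, disp_u, disp_v):
    """Apply a UV-space displacement to a world-space point P.
    u maps to the world x axis, v to the world z axis."""
    P = np.asarray(P, dtype=float)
    P[0] += disp_u * UV_TO_WORLD  # u -> x
    P[2] += disp_v * UV_TO_WORLD  # v -> z
    return P

print(displace([0.0, 0.0, 0.0], 0.5, -0.2))  # x and z shifted, y untouched
```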

For the shader I ended up with a completely different solution. At first this approach was not bad, but for a simulation, displacing in global space is not a good idea (you have to displace each piece in its own space). So I created three vectors per point (two are also enough) and stored them during the simulation. Then in the shader it was easy to construct a matrix for each point and displace in any direction I want. Maybe this is not the best solution, but in my case it was good enough.
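The per-point frame idea can be sketched roughly like this (a hedged illustration, not the original shader code; all names are mine): store two tangent vectors per point during the sim, rebuild the third with a cross product, and use the resulting matrix to carry a local-space displacement into the piece's current orientation.

```python
import numpy as np

def make_frame(t, b):
    """Build a 3x3 frame matrix from two stored per-point vectors.
    The third axis is reconstructed with a cross product, which is
    why storing only two vectors is also enough."""
    t = np.asarray(t, dtype=float); t /= np.linalg.norm(t)
    b = np.asarray(b, dtype=float); b /= np.linalg.norm(b)
    n = np.cross(t, b)          # third axis from the stored two
    return np.stack([t, b, n])  # rows are the point's local axes

def displace_local(P, local_disp, frame):
    # local_disp is expressed in the piece's own space; multiplying by
    # the frame carries it into the piece's current world orientation.
    return np.asarray(P, dtype=float) + np.asarray(local_disp, dtype=float) @ frame

frame = make_frame([1, 0, 0], [0, 1, 0])
print(displace_local([0, 0, 0], [0, 0, 1], frame))  # moved along the local normal
```

The point is that the displacement stays attached to the piece: if the sim rotates the stored vectors, the same local-space displacement rotates with them.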

The tricky part was figuring out which parameter (point or vector) lives in which space. Some of them are in shader/camera space, others in global space, so you have to set up conversions to make sure everything works correctly.
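A small sketch of why those conversions matter, using a made-up camera-to-world transform (the matrix and function names are illustrative): points and vectors convert differently, because only points pick up the translation part.

```python
import numpy as np

# Example 4x4 transform: camera sits 5 units along +z in world space.
cam_to_world = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 5],
    [0, 0, 0, 1],
], dtype=float)

def to_world_point(p_cam):
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # points get w = 1
    return (cam_to_world @ p)[:3]

def to_world_vector(v_cam):
    v = np.append(np.asarray(v_cam, dtype=float), 0.0)  # vectors get w = 0: no translation
    return (cam_to_world @ v)[:3]

print(to_world_point([0, 0, 0]))   # picks up the camera's translation
print(to_world_vector([0, 0, 1]))  # direction only, unchanged here
```

Mixing a camera-space vector with a world-space point without a conversion like this is exactly the kind of bug that is hard to spot in a displacement shader.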


That's right. I have one transformation matrix per point, so when objects are driven by a simulation/DOPs, the displacement stays in the right space. Think of it as a matrix per simulated piece, where every point of that piece has the same matrix.
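The "same matrix for every point of a piece" setup might look like this in a minimal sketch (piece ids, matrices, and positions are all made-up examples):

```python
import numpy as np

# One transform per simulated piece, shared by every point of that piece.
piece_xform = {
    0: np.eye(3),                                             # piece 0 unmoved
    1: np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float),   # piece 1 rotated 90 deg about z
}

points = [([1.0, 0.0, 0.0], 0), ([1.0, 0.0, 0.0], 1)]  # (position, piece id)

for P, pid in points:
    M = piece_xform[pid]  # lookup by piece id: all points of a piece share one matrix
    print(M @ np.asarray(P))
```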

I know that DOPs have this kind of data per piece, but I'm not experienced enough to use it.

A high-poly model is not required, because RenderMan-style renderers have a step called dicing, where geometry is subdivided into polygons smaller than one pixel.

Doing the displacement in the shader means that attributes are transferred from points to micropolygons, and the shader is executed per micropolygon, so you get this extra high-poly detail for free.

Edited June 21, 2012 by rayman



I haven't been able to try implementing the matrix-per-point approach because I got caught up in another project, and this was for a personal project, so no time right now. But I just wanted to stop by and thank you for your kind help.