Then copy the Z line to the points of X and then copy the result to the points of Y. So far we have a 3D grid of points.

Now we are going to assign random velocities on the nodes. Lay down an attribute wrangle node and copy the following code:

Create random velocities

VEX

v@v = rand(@ptnum);
float w = fit(rand(@ptnum), 0, 1, 0.2, 0.8);
v@v = v@v * w;

The v@v is read as "declare a vector named v equal to". The wrangle node is set to run over points, and @ptnum is the point id, which is used as a seed for the rand function that generates a random vector.

The second line generates a random float value which is remapped to the range 0.2 to 0.8. Finally, the velocity vector is scaled by the w value.
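The remapping can be sketched in plain Python; the helper below is hypothetical and only mirrors the behavior of the VEX fit function (linear remap with clamping):

```python
def fit(x, omin, omax, nmin, nmax):
    """Remap x linearly from [omin, omax] to [nmin, nmax], clamping like VEX fit()."""
    t = (x - omin) / (omax - omin)
    t = min(max(t, 0.0), 1.0)  # clamp to the source range
    return nmin + t * (nmax - nmin)
```

For example, an input of 0.5 on [0, 1] maps to the middle of [0.2, 0.8], and values outside the source range are clamped to the end colors of the target range.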

We can visualize the velocity field using a visualize node. Connect the output of the attribute wrangle node to a new visualize node and, under the Visualizers tab, change the type from Color to Marker, the style to Vector, and choose v as the attribute value. You may need to scale the vectors down a bit. (Note also that sometimes you need to go one level up to see the velocity field.)

Finally, select all the nodes used to generate the random velocity field, create a subnet (Shift+C), and rename it to, for example, Velocity field.

Generate Random initial particles

This is another process which most of the time is not random, as the initial particle positions are defined by the problem. However, here we will generate random initial particles within the unit cube [0, 1]. This involves just one attribute wrangle node with the following code:

VEX

int N = chi('Number_of_particles');
float seed = ch('Random_Seed');
for (int i = 0; i < N; i++) {
    vector pos = rand(seed * (float(i) + 1));
    addpoint(0, pos);
}
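The same seeding idea in plain Python, as a minimal sketch (the helper name and the use of Python's random module are assumptions, not part of the Houdini setup):

```python
import random

def seed_particles(n, seed):
    """Scatter n deterministic pseudo-random points in the unit cube."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
```

Seeding the generator makes the scatter reproducible, which mirrors the role of the Random_Seed slider.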

The first two lines generate GUI elements for this node. After inserting these two lines, press the highlighted button that looks like sliders. This will generate two sliders below the code area.

With these two sliders we are able to change the number of particles and the random seed without writing code.

Velocity interpolation

Now it's time to write the code for the velocity interpolation. Just to stay organized, lay down a null node (which does absolutely nothing) and, with the null node selected, press Shift+C to create a subnet; rename the subnet Particle tracer. Also change the input labels as follows:

and the network so far looks like the following:

Double click to enter the Particle tracer node and delete the null node.

The first input is expected to be a set of points with the initial particle positions. It is possible that these points form primitives, which we don't want; therefore we will make sure that the primitives are deleted. Lay down an attribute wrangle node, rename it to delete primitives and set its Run Over option to Primitives. There is only one line of code for this wrangle node:

removeprim(0, @primnum, 0);

Now that we are sure that there are no primitives, we will create one primitive per particle: a polyline of the trajectory of the particle within the velocity field. Add another attribute wrangle node, rename it to create polylines and leave the Run Over option at Points. Paste the following two lines which, for each point, add a primitive and a vertex. At this point the primitives cannot be displayed; we need at least two vertices to display a line.

int iprim = addprim(0, "polyline");
addvertex(0, iprim, @ptnum);

Setup solver node

Finally, it's time to do the interesting part. Lay down a solver node and double click to enter inside the solver node. What makes the solver special is that it gives us access to the geometry at the previous time step. Connect the Prev_Frame node with a new attribute wrangle node. Rename the new node to find next position, make sure to set its Run Over option to Primitives, and connect the second input of particle tracer to the second input of the new wrangle node. The network should look similar to this:

Since the find next position node will do all the work, there is going to be a lot of code here.

First we are going to create a slider that will help us choose the number of points to be taken into account during velocity interpolation, and a slider to define at what distance points will be considered. Here we also define a scale velocity slider that will be used later. The if statement ensures that even if one enters a wrong number of interpolation points the code will still work; however, the interpolation will then degenerate to something similar to nearest-neighbor interpolation.

int Nint = chi('N_points_velocity_interpolation');
float search_radius = ch('Search_radius');
float scale_vel = ch('Velocity_scale');
if (Nint <= 0)
    Nint = 1;

The next step is to find how many vertices the primitive has.

int Nverts = primvertexcount(0, @primnum);

There should be at least one vertex, and the following lines get the point index and the position of the last vertex in the primitive:

int pt_last = primpoint(0, @primnum, Nverts - 1);
vector pos_last = point(0, 'P', pt_last);

Now we can search for the Nint nearest points within a sphere of radius search_radius:

int closept[] = pcfind(1, "P", pos_last, search_radius, Nint);

The array closept contains the ids of the velocity points that satisfy the above criteria. After we initialize some helper variables, we loop through the velocity points. (Note that the code uses Inverse Distance Weighting (IDW) interpolation.)

For each velocity point, get the velocity vector and calculate the distance between the current trajectory point and the velocity interpolation point.

float w[];
float sum_w = 0;
vector vel[];
foreach (int pt; closept) {
    int out;
    vector interp_vel = pointattrib(1, 'v', pt, out);
    float d = distance(pos_last, point(1, 'P', pt));

According to IDW, if the distance is 0 we set the velocity directly equal to the velocity of that point. When this condition is true, the code discards any other weights and velocities, appends only the values of this velocity point, and exits the loop.

    if (d == 0) {
        resize(w, 0);
        resize(vel, 0);
        sum_w = 1;
        append(vel, interp_vel);
        append(w, 1);
        break;
    }

If the distance is not zero, the weight of this velocity point is set equal to the inverse distance:

    else {
        append(w, 1 / d);
        sum_w += 1 / d;
        append(vel, interp_vel);
    }
}

Now we can calculate the velocity of the point as a weighted average:

vector my_vel = {0, 0, 0};
for (int i = 0; i < len(w); ++i) {
    my_vel.x += w[i] * vel[i].x;
    my_vel.y += w[i] * vel[i].y;
    my_vel.z += w[i] * vel[i].z;
}
my_vel = my_vel / sum_w;
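The weighting logic can be checked with a standalone sketch in Python; the helper below is hypothetical and only mirrors the IDW scheme, including the exact-hit shortcut:

```python
import math

def idw_velocity(pos, pts, vels):
    """Inverse-distance-weighted average of sample velocities at pos."""
    weights, sum_w = [], 0.0
    for p, v in zip(pts, vels):
        d = math.dist(pos, p)
        if d == 0.0:              # exact hit: use this sample directly
            return v
        weights.append(1.0 / d)
        sum_w += 1.0 / d
    # weighted average, component by component
    return tuple(
        sum(wi * v[i] for wi, v in zip(weights, vels)) / sum_w
        for i in range(3)
    )
```

A point equidistant from two samples simply averages them, while a point sitting exactly on a sample reproduces that sample's velocity.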

Finally, we can compute the position of the next point in the particle trajectory. Note that we multiply the calculated velocity by a scalar, which has the effect of speeding up or slowing down the particles.

vector new_pos = pos_last + my_vel * scale_vel;
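Repeating this update every frame is what traces out the trajectory. A minimal Python sketch of the same explicit-Euler stepping (the function and the constant velocity field in the test are hypothetical):

```python
def trace(pos, velocity_at, scale, steps):
    """Grow a path by repeatedly stepping along the local velocity."""
    path = [pos]
    for _ in range(steps):
        v = velocity_at(path[-1])
        # next position = last position + velocity * scale
        path.append(tuple(p + vi * scale for p, vi in zip(path[-1], v)))
    return path
```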

Although the following is not strictly needed, it will help us during visualization. The first line sets the calculated velocity as a point attribute of the previous point, and the next two lines calculate the normal of the previous point so that it points toward the new position.

setpointattrib(0, 'v', pt_last, my_vel, 'set');
vector normal_pt = new_pos - pos_last;
setpointattrib(0, 'N', pt_last, normal_pt, 'set');

So far we have found the new position, but we still have to add it to the primitive.

int new_pt = addpoint(0, Nverts);
setpointattrib(0, 'P', new_pt, new_pos, 'set');
addvertex(0, @primnum, new_pt);

That concludes the code of this wrangle node. Here is the full code without interruptions:


// Number of points involved in the velocity interpolation
int Nint = chi('N_points_velocity_interpolation');
float search_radius = ch('Search_radius');
float scale_vel = ch('Velocity_scale');
if (Nint <= 0)
    Nint = 1;

// find how many vertices exist in the primitive
int Nverts = primvertexcount(0, @primnum);
if (Nverts >= 1) {
    // get the id of the last one
    int pt_last = primpoint(0, @primnum, Nverts - 1);
    //i@temp = pt_last;
    // position of last vertex
    vector pos_last = point(0, 'P', pt_last);
    // Find the nearby velocity points
    int closept[] = pcfind(1, "P", pos_last, search_radius, Nint);
    //i@Nfind = len(closept);
    // Calculate the distance from each velocity point
    float w[];
    float sum_w = 0;
    vector vel[];
    foreach (int pt; closept) {
        int out;
        // get the velocity of the pt point
        vector interp_vel = pointattrib(1, 'v', pt, out);
        // find the distance between the velocity point and the point in question
        float d = distance(pos_last, point(1, 'P', pt));
        // If the distance is zero then assign this velocity,
        // ignore the other values and exit the loop
        if (d == 0) {
            resize(w, 0);
            resize(vel, 0);
            sum_w = 1;
            append(vel, interp_vel);
            append(w, 1);
            break;
        }
        else {
            append(w, 1 / d);
            sum_w += 1 / d;
            append(vel, interp_vel);
        }
    }
    // Next compute velocity as weighted average
    vector my_vel = {0, 0, 0};
    for (int i = 0; i < len(w); ++i) {
        my_vel.x += w[i] * vel[i].x;
        my_vel.y += w[i] * vel[i].y;
        my_vel.z += w[i] * vel[i].z;
    }
    my_vel = my_vel / sum_w;
    // assign the velocity as attribute to the previous point
    setpointattrib(0, 'v', pt_last, my_vel, 'set');
    // find the next position
    vector new_pos = pos_last + my_vel * scale_vel;
    // calculate the normal vector for the last position
    vector normal_pt = new_pos - pos_last;
    setpointattrib(0, 'N', pt_last, normal_pt, 'set');
    // add the point in the primitive with id the next number
    int new_pt = addpoint(0, Nverts);
    setpointattrib(0, 'P', new_pt, new_pos, 'set');
    addvertex(0, @primnum, new_pt);
}

Make sure that the blue display flag is set on the wrangle node and exit the solver node. Pressing play will show the particle trajectories as lines.

Particle trajectory appearance

The computation of the particle tracking was finished in the previous section. Here we will work just on the appearance. First we will add color to the points according to velocity. To this end, add another attribute wrangle node, rename it to set color, add the following lines, and press the button to generate the GUI elements. This snippet calculates the velocity magnitude and assigns a color between green and red.

float min_vel = ch('Minimum_value');
float max_vel = ch('Maximum_value');
vector red = {0.9, 0, 0};
vector green = {0, 0.9, 0};
float l = length(@v);
@Cd = lerp(green, red, fit(l, min_vel, max_vel, 0.0, 1.0));
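The same ramp in plain Python, as a sketch (the helper names are assumptions; they only mirror VEX lerp and fit):

```python
def lerp3(a, b, t):
    """Component-wise linear blend of two RGB triples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def speed_color(speed, vmin, vmax):
    """Map a velocity magnitude to a green-to-red color, clamped to [vmin, vmax]."""
    t = (speed - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)
    return lerp3((0.0, 0.9, 0.0), (0.9, 0.0, 0.0), t)
```

Slow points come out green, fast points red, and anything in between is a blend.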

Convert polylines to tubes

Depending on the velocity field, the polylines may be jagged. Therefore we lay down two nodes: a Smooth node and a Resample node. Both of these nodes have their own parameters that define the smoothing and resampling amounts. It's best to play around with those values to find what looks best.

It is very common to display the particle trajectories as tubes. Achieving this in Houdini is trivial. First, create a Circle node and scale it down to something that makes sense. In my example I set the Uniform Scale parameter to 0.004.

Now we are going to loop through the primitives (at the moment the number of primitives is equal to the initial number of particles). We will do that using the For-Each Primitive node, which lays down two nodes: a for-each begin and a for-each end. These loop through the primitives and execute whatever nodes are in between. Add the Copy to Points and Skin nodes as shown in the figure:

Finally, we add a Normal node, which corrects the face normals so that the polygon faces are lit correctly.

At large scales, rivers are delineated as lines. Typically, river network data come in the form of a polyline shapefile. Converting a polyline network into a consistent polygon network is quite a challenging task. In GIS, the buffer approach does not return the desired results, as it produces a rounded buffer around the polyline end points.

For example, using the fixed distance buffer in QGIS with one segment the result is the following:

where each polyline segment creates a buffer polygon that overlaps with the others.

The proposed workflow is not ideal but tackles some of the issues, while returning a continuous representation of the river using polygons.

However, the method is quite involved, as it requires some MATLAB coding and some experience with Houdini. A great advantage of the workflow is that it is non-destructive, which means that modifications of the original input shapefile propagate to the end result without additional work.

Starting with MATLAB, first we read the polyline shapefile:

MATLAB

cvhm_stream = shaperead('gis_data/CVHM_streams');

Each river consists of multiple polyline segments. Therefore we have to group them. Here we use the field NAME to group the rivers.

Group river polylines by name

MATLAB


CVHMSTRM(1,1).Name = cvhm_stream(1,1).NAME;
CVHMSTRM(1,1).segments(1,1).X = cvhm_stream(1,1).X;
CVHMSTRM(1,1).segments(1,1).Y = cvhm_stream(1,1).Y;
CVHMSTRM(1,1).segments(1,1).CELLNUM = cvhm_stream(1,1).CELLNUM;
CVHMSTRM(1,1).segments(1,1).row = cvhm_stream(1,1).ROW;
CVHMSTRM(1,1).segments(1,1).col = cvhm_stream(1,1).COLUMN_;
for ii = 2:size(cvhm_stream,1)
    new_river = true;
    for jj = 1:size(CVHMSTRM,1)
        if strcmp(CVHMSTRM(jj,1).Name, cvhm_stream(ii,1).NAME)
            % add a segment to an existing river
            Nseg = size(CVHMSTRM(jj,1).segments,1);
            CVHMSTRM(jj,1).segments(Nseg+1,1).X = cvhm_stream(ii,1).X;
            CVHMSTRM(jj,1).segments(Nseg+1,1).Y = cvhm_stream(ii,1).Y;
            CVHMSTRM(jj,1).segments(Nseg+1,1).CELLNUM = cvhm_stream(ii,1).CELLNUM;
            CVHMSTRM(jj,1).segments(Nseg+1,1).row = cvhm_stream(ii,1).ROW;
            CVHMSTRM(jj,1).segments(Nseg+1,1).col = cvhm_stream(ii,1).COLUMN_;
            new_river = false;
            break;
        end
    end
    if new_river
        % start a new river group
        Nriv = size(CVHMSTRM,1);
        CVHMSTRM(Nriv+1,1).Name = cvhm_stream(ii,1).NAME;
        CVHMSTRM(Nriv+1,1).segments(1,1).X = cvhm_stream(ii,1).X;
        CVHMSTRM(Nriv+1,1).segments(1,1).Y = cvhm_stream(ii,1).Y;
        CVHMSTRM(Nriv+1,1).segments(1,1).CELLNUM = cvhm_stream(ii,1).CELLNUM;
        CVHMSTRM(Nriv+1,1).segments(1,1).row = cvhm_stream(ii,1).ROW;
        CVHMSTRM(Nriv+1,1).segments(1,1).col = cvhm_stream(ii,1).COLUMN_;
    end
end

The grouped rivers are stored into a temporary CVHMSTRM variable. Each polyline segment has shared nodes with other polyline segments, which are supposed to be connected. Here we make a unique list of river nodes.

Graphs are powerful data classes that provide many functionalities. An example is the ability to plot them.

Plot a graph

MATLAB

ii = 8;
p = CVHMSTRM(ii,1).G.plot;
p.XData = CVHMSTRM(ii,1).ND(:,1);
p.YData = CVHMSTRM(ii,1).ND(:,2);
p.NodeLabel = [1:size(CVHMSTRM(ii,1).ND,1)];

With the river graphs at our disposal we are able to identify the main rivers and separate them from the tributaries. The following lengthy code does two things. First, it identifies the longest paths, which are assumed to be the main rivers, and the remaining paths connected to the longest ones. Second, it calculates normal vectors for each river node. For nodes that are connected to only one segment, the normal is equal to the segment normal. For nodes that are connected to two segments, the normal is the average of the two segment normals. The normals will be used later to create the river polygons.
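The per-node normal rule described above can be sketched in Python for the 2D case (the helper names are hypothetical, and the sign convention of the 90-degree rotation is an assumption):

```python
import math

def seg_normal(a, b):
    """Unit normal of segment a->b (direction rotated 90 degrees CCW)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    return (-dy / length, dx / length)

def node_normal(n1, n2=None):
    """End node: the segment normal; interior node: normalized average of the two."""
    if n2 is None:
        return n1
    nx, ny = n1[0] + n2[0], n1[1] + n2[1]
    length = math.hypot(nx, ny)
    return (nx / length, ny / length)
```

Normalizing the averaged normal keeps the later width offset consistent even at sharp bends.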

Finally, for each river a width and a flow rate are added. While the width is required information for the method, the flow rate could be omitted. However, when I export the river polygons I would like to have the flow rate associated with each polygon. The code to add the width depends on the application and is not shown here.

Finally, we write the river information into files for reading into Houdini. Note that the coordinate values are divided by a large constant, 100000. This is because software like Houdini, Blender, etc. does not handle very large coordinate values well.

Next we are going to build a small graph to achieve the desired result.

Right after the Python code we have to remove duplicate nodes using a Fuse node. Then we use the Point-Old node to set the normals:

I have also added a Null node that does nothing other than improve visibility and organize the graph.

If we turn on the display points and display normals options we can see the river polyline, the points and the point normals.

Next we lay down two Attribute VOP nodes. These nodes offset the main river by the half-width amount along the normal direction. In practice, I created the positive offset and then duplicated it to make the negative offset.

By double clicking the attribute VOP (dive inside the node) we can modify the geometries at a different level. In addition we can see that the available nodes now are very different.

First we lay down a Get Attribute node to read the width information. For the Get Attribute node to work we have to set Float as the Signature, First Input as the Input method, Point as the Attribute Class, and type width for the Attribute. As shown above, ptnum has to be connected to i1. In addition, we lay down a constant float equal to 2 and a divide node to calculate the half width.

There is also a Normalize node connected to the N value of the geometry, which normalizes the node normals. The following figure shows the continuation of the Attribute VOP graph:

The half-width outcome is further divided by the same large constant we used when writing the Houdini files. It is then multiplied by the normalized point normals and added to the point position values. Finally, the result of the add operation is connected to the P geometry output node, which shifts the nodes by the half-width amount. The negative offset node is identical, with the exception that between the divide1 and multiply1 nodes we have added a negate node.
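In code form, the per-point offset amounts to the following Python sketch (the function name, and the assumptions that the width is stored per point and the normals are unit length, are mine):

```python
SCALE = 100000.0  # the same constant used when exporting the coordinates

def offset_point(p, n, width, sign=1.0):
    """Shift point p along its unit normal n by half the river width."""
    half = (width / 2.0) / SCALE
    return tuple(pi + sign * half * ni for pi, ni in zip(p, n))
```

Calling it with sign=-1.0 gives the negative offset, which is the role of the negate node.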

Next, outside the Attribute VOP node, we merge the two offset nodes and then add a Skin node. The following figures show the result of the merge and the result of the skin node. For the skin to work properly we have to set some rules about which lines to connect. Set Quadrilaterals for Connectivity. For Skin pick Skip Every Nth Primitive, and for the N value use the expression detail("../python2/","Nseg",0), which sets the number of stream segments as the N value. Note that in the initial Python code we set a global attribute Nseg.

We can see that in areas where the river has sharp bends some weird artifacts may be produced. These will be taken care of later. However, note how the river width changes smoothly between segments of different widths.

Somehow the Skin operator does not transfer the Qrate attribute to the polygons and sets all Qrates to 0. A simple Attribute VOP takes care of that: a Get Attribute and a Set Attribute node do exactly that.

Then a Fuse node with a certain tolerance resolves the artifacts displayed above.

Finally, we use a Python node to write the polygons into a file, exporting them into a suitable format with the following Python code.

However, there are cases where the default colormaps are not sufficient. For example, the figure below shows the hydraulic head trend over an 80-year simulation. In the areas with positive values the groundwater table is rising, while in the areas with negative values the water table drops. Using the default colormap, however, it is not easy to distinguish these areas.

MATLAB

clf
hold on
plot(study_area.X, study_area.Y, 'k', 'LineWidth', 2)
trisurf(TR, p(:,1), p(:,2), baseslopedecade, 'edgecolor', 'none', ...
    'FaceColor', 'interp', 'FaceLighting', 'phong');
colorbar
axis equal
axis off
hold off

To improve the figure we are going to modify the colormap so that the color at value 0 is white, while the positive values will have a blue gradient from deep blue to white, and the negative values a gradient from white to deep red.

After we have plotted the figure we can obtain information about the color axis as follows

ax = gca;
[cmin, cmax] = caxis;

In my example cmax = 7.2 and cmin = -5.4. We can also set our own custom limits for the colors with ax.CLim = [-5 8];, which makes cmin = -5 and cmax = 8. This is very useful if one wants consistent colormaps between different figures with different value ranges.

Next we'll create a variable with values ranging between cmin and cmax, and identify the last row that is negative. In my case that was the 28th row (id = 28).

MATLAB

lcol = linspace(cmin, cmax, 64)';
id = find(lcol > 0, 1) - 1;

Next, for each row of the variable lcol we have to assign a color. The lcol(1) row corresponds to cmin, so we assign deep red (130, 0, 0); row id corresponds to the value 0, which should be white (255, 255, 255); and the last row corresponds to cmax, which gets deep blue (0, 0, 130). For the remaining rows we interpolate the colors as follows:
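A pure-Python sketch of that interpolation (0-based indexing instead of MATLAB's 1-based; the linspace helper is a stand-in for MATLAB's linspace, and colors are built in 0-255 then scaled to 0-1 as MATLAB colormaps expect):

```python
def linspace(a, b, n):
    """n evenly spaced values from a to b inclusive."""
    step = (b - a) / (n - 1)
    return [a + i * step for i in range(n)]

cmin, cmax = -5.4, 7.2
lcol = linspace(cmin, cmax, 64)
# index of the first positive value (MATLAB's find(lcol > 0, 1), 0-based)
idx = next(i for i, v in enumerate(lcol) if v > 0)

deep_red, white, deep_blue = (130, 0, 0), (255, 255, 255), (0, 0, 130)

cmap = []
for c in range(3):  # build one RGB channel at a time
    neg = linspace(deep_red[c], white[c], idx)        # cmin -> just below 0
    pos = linspace(white[c], deep_blue[c], 64 - idx)  # 0 -> cmax
    cmap.append([v / 255.0 for v in neg + pos])
cmap = list(zip(*cmap))  # 64 rows of (r, g, b)
```

The first row comes out deep red, the row at the zero crossing white, and the last row deep blue, with linear gradients in between.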