Merge
geometry node

Merges geometry from its inputs.

This node merges together the geometry from its inputs
into a single stream of geometry, which you can then send through other nodes.
If you need to continue distinguishing different parts
of the merged geometry later, try using the Group node
to put the input geometries into groups before merging them.

You can merge together a maximum of 9999 inputs.

Note

Bypassing this node causes it to output only the first input; the merged geometry is not computed.

Examples

The Merge SOP applies all incoming attributes to all input geometry. Each input geometry may have its own set of attributes.

Three spheres are wired into a Merge SOP. The first has no attributes applied. The second has a color attribute (Cd[3]) applied by a Point SOP. The third has a normal attribute (N[3]) applied by another Point SOP.

The Merge SOP does NOT know how to build attribute values; it only applies existing attribute definitions to every input. As a result, attributes applied to inputs that did not originally have them are set to zero.

This is why the first two spheres display and render black: they have a normal attribute applied, but its values are set to zero.

In addition, the first and last spheres have a color attribute applied, but their values are set to zero.

It is better to set attributes explicitly, instead of relying on the Merge SOP to do so.
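
The zero-fill behaviour can be sketched outside Houdini. The following Python snippet is illustrative only (it is not the hou API); it models how merging unifies the attribute sets of its inputs:

```python
def merge(*inputs):
    """Merge point streams, zero-filling attributes an input did not carry."""
    # Union of all attribute names across every input.
    names = sorted({name for pts in inputs for p in pts for name in p})
    merged = []
    for pts in inputs:
        for p in pts:
            # Zero-fill any attribute this point did not carry.
            merged.append({n: p.get(n, (0.0, 0.0, 0.0)) for n in names})
    return merged

sphere_a = [{}]                               # no attributes
sphere_b = [{"Cd": (1.0, 0.0, 0.0)}]          # colour attribute only
sphere_c = [{"N": (0.0, 1.0, 0.0)}]           # normal attribute only

out = merge(sphere_a, sphere_b, sphere_c)
# out[0] carries both Cd and N, zero-filled, so it would render black
```

As in the spheres example above, the points that gain an attribute during the merge end up with zeroed values, which is why setting attributes explicitly is preferable.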

This example shows pieces of cloth with different
properties colliding with spheres. By adjusting the
stiffness, bend, and surfacemassdensity values, we can
give the cloth a variety of different behaviours.

The setup creates an army of agents following two created paths. The middle part of the army starts moving and then splits into two formations: one goes to the left, while the other group keeps marching forward and slowly changes formation into a wedge shape.

To keep the agents in formation, a custom geometry shape is used; its points are used as goals for individual agents. Using blendshapes, the shape can change, allowing for different formation changes. Dive inside the crowdsource object to see the construction.

Note

The animation clips need to be baked out before playing the scene. This should happen automatically if the example is created from the Crowds shelf. Otherwise, save the scene file to a location of your choice and click Render on the '/obj/bake_cycles' ropnet to write out the files. The default path for the files is ${HIP}/agents.

The setup creates two groups of agents. The yellow agents are zombies which follow a path along the street. The blue agents are living pedestrians that wander around until they come into proximity of the zombies, at which point they switch into a running state.

Triggers to change agent states are set up in the crowd_sim dopnet. The zombies group uses proximity to the stoplights and the color of the light to transition into a standing state when the lights are red. The living group transitions into a running state when they get close to the zombie agents.

Note

The animation clips need to be baked out before playing the scene. This should happen automatically if the example is created from the Crowds shelf. Otherwise, save the scene file to a location of your choice and click Render on the '/obj/bake_cycles' ropnet to write out the files. The default path for the files is ${HIP}/agents.

This example creates a torus of paint which is dropped on the Grog
character. The Grog character is then colored according to the paint
that hits him. This also shows how to have additional color
information tied to a fluid simulation.

This example demonstrates how the Up Res Solver can now be used
to re-time an existing simulation.
The benefit of this is that one can simply change the speed
without affecting the look of the sim.
On the up-res solver there is a tab called Time. The Time tab
offers various controls to change the simulation’s speed.

This example uses the Pyro Solver and a Smoke Object which
emits billowy smoke up through a turbine (an RBD Object). The blades
of the turbine are created procedurally using Copy, Circle, and Align
SOPs.

This example demonstrates how the Shatter, RBD Fractured Object,
and Debris shelf tools can be used to create debris emanating from
fractured pieces of geometry.

First, the Shatter tool (from the Model tool shelf) is used on the
glass to define the fractures. Then the RBD Fracture tool is used
on the glass to create RBD objects out of the fractured pieces.
Then the Debris tool is used on the RBD fractured objects to
create debris.

This is an example of how to use the RBD Glue Object node to create
an RBD object that automatically breaks apart on collision. It also
demonstrates one technique for breaking a model into pieces appropriate for
this sort of simulation.

This example actually includes eight examples of ways that
you can use voronoi fracturing in Houdini. In particular, it
shows how you can use the Voronoi Fracture Solver and the
Voronoi Fracture Configure Object nodes in your fracture
simulations. Turn on the display flags for these examples
one at a time to play the animation and dive down into
each example to examine the setup.

Ambient occlusion is a fast technique for producing soft, diffuse lighting in open spaces by using ray tracing. It is computed by determining how much of the hemisphere above a point is blocked by other surfaces in the scene, and producing a darker lighting value when the point is heavily occluded. This technique can be useful when you need a GI-like effect without paying the price for full global illumination.
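
The computation described above can be sketched as a Monte Carlo estimate (illustrative Python, not Mantra's actual sampler): cast random rays over the hemisphere above the point and count how many escape the scene.

```python
import math, random

random.seed(0)  # deterministic for illustration

def ambient_occlusion(point, normal, hit_scene, samples=256):
    """Estimate the unoccluded fraction of the hemisphere above `point`."""
    open_count = 0
    for _ in range(samples):
        # Uniform random direction, flipped into the hemisphere around `normal`.
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        d = [c / length for c in d]
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-c for c in d]
        if not hit_scene(point, d):  # the ray escaped, so sky is visible
            open_count += 1
    return open_count / samples      # 1.0 = fully open, 0.0 = fully occluded

# Toy scene: every direction leaning toward +X is blocked by a wall.
blocked_by_wall = lambda p, d: d[0] > 0.0
ao = ambient_occlusion((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), blocked_by_wall)
# ao is roughly 0.5, since the wall occludes about half of the hemisphere
```

The heavily occluded side of the point receives the darker lighting value, which is exactly the soft-shadow effect described above.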

With this particular example, an Ambient Occlusion light and some geometry are provided in the form of a Digital Asset. An Environment Light was used, and its parameters were promoted for easy access.

Decreasing the sample count allows you to improve render time at the expense of some additional noise in the render. The following render uses the same shader as the image above but decreases the samples from the default of 256 to 16. This value is set with the Sampling Quality parameter under the Render Options tab of the light.

Environment Maps

If you have a smooth environment map, it is possible to replace the global background color (white) with the value from the environment map by enabling Sky Environment Map under the Sky Environment Map tab.

Volume rendering is a rendering approach that allows high-quality, integrated rendering of volumetric effects like smoke, clouds, spray, and fire.

Volume rendering is suitable for rendering many types of volumetric effects. Scenes that are particularly suited to rendering with mantra volumes include:

Detailed "hero" clouds, smoke, or fire

Fields of instanced clouds, smoke, or fire

Scenes where volume rendering may not be quite so applicable include:

Scenes with a single uniform fog

In this particular example, a bgeo file (1 frame only) was exported from a fluid simulation of smoke and is now referenced using the File SOP. A material using VEX Volume Cloud is assigned to this volumetric data at the top level of the Volume Object. To see this scene in shaded mode, ensure that HOUDINI_OGL_ENABLE_SHADERS is set to 1 in the environment variables.

Controlling Quality/Performance

Volume rendering uses ray marching to step through volumes. Ray marching generates shading points in the volume by uniformly stepping along rays for each pixel in the image. There are two ways to change the quality and speed of the volume ray marching:

The samples parameter on the Sampling tab of the mantra ROP. More pixel samples will produce more ray marches within that pixel leading to higher quality. Using more pixel samples will also improve antialiasing and motion blur quality for the volume.

The volumesteprate parameter on the Sampling tab of the mantra ROP. A larger volume step rate will produce more samples in the volume interior, improving quality and decreasing performance. A separate shadow step rate can be used for shadows.

Which parameter you should change will depend on your quality requirements for pixel antialiasing. In general, it is better to decrease the volume step size rather than increase the pixel samples because a smaller volume step size will lead to more accurate renders.
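
The effect of step size on accuracy can be sketched numerically (illustrative Python, not Mantra's ray marcher). Transmittance along the ray is accumulated segment by segment with the Beer-Lambert law, and shrinking the step converges on the exact integral:

```python
import math

def transmittance(density, depth, step):
    """March along a ray with fixed `step`, accumulating Beer-Lambert opacity."""
    trans, t = 1.0, 0.0
    while t < depth:
        dt = min(step, depth - t)
        # Sample density at the middle of this step (midpoint rule).
        trans *= math.exp(-density(t + 0.5 * dt) * dt)
        t += dt
    return trans

# Quadratically increasing density; the exact answer is exp(-8/3).
rho = lambda t: t * t
exact = math.exp(-8.0 / 3.0)
coarse = transmittance(rho, 2.0, 0.5)   # large steps: cheap but less accurate
fine = transmittance(rho, 2.0, 0.05)    # small steps: slower but converges
# abs(fine - exact) is much smaller than abs(coarse - exact)
```

This mirrors the advice above: a smaller step (a higher volume step rate) directly improves accuracy inside the volume, whereas extra pixel samples only average more coarse marches together.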

This render uses 2×2 samples and volume step rate of 1. Notice the detail in the shadows.

This render uses the same scene with 4×4 samples and a volume step rate of 0.25. The fine detail in the shadow has been lost and the volume is somewhat more transparent. The quality level is approximately the same.

In this file we create a downhill lava flow with crust gathering and hardening at the base of the slope. All of the animation is achieved through the shader itself, and all of the geometry is completely static.

Note

Most of the parameters for the lava material are overridden by point attributes created in the surface nodes.

No geometry is animated in this file.
All animation is achieved by animating the textures.

Flames are grids so that UV textures can easily be applied; they are then warped around a metaball using a Magnet SOP. The flames are then assigned to either a yellow or blue Flames texture. The Flames' opacity mask wrap is set to Decal to prevent the texture from repeating and showing a single pixel ring at the top of the flame geometry. I'm also using a mask file named flameOpacMap.jpg to enhance the flames' shape at the top. The noise offset has been animated over $T with a greater emphasis on the Y axis so that the flames look like they are rising. This is also why the Noise jitter is larger for the Y axis.

The coals are spheres that have been copy stamped onto a deformed grid. Using Attribute Create surface nodes I am able to override and copy stamp the lava texture’s parameters at the SOP level so that local variables, such as $BBY, can be used to animate the texture. This way the texture’s crust and its crust values can be used only to form the tops of the coals. This reserves the lava aspect of the texture to be used on the bottoms of the coals. The lava intensity (Kd attribute) is then stamped and animated to create the look of embers on the bottom of coals glowing.

This network demonstrates the many uses of the Add SOP to build and manipulate geometry:

It is used to create points in space which can then be used to create polygons using designated patterns. These polygons can be open or closed. Furthermore, each point can be animated through expressions or keyframes.

It is used to both create points and grab points from other primitives. These points may be used in polygon creation.

The Add SOP may be utilized to create a polygon using points extracted from another polygonal object. A Group SOP allows for the creation of the point group that will be referenced by the Add SOP.

The Add SOP is used to create a polygon from a group of animated Null objects. An Object Merge SOP references the null points in SOPs, which are then fed into an Add SOP for polygon generation. A Fit SOP, in turn, is used to create an interpolated spline from the referenced null points. The result is an animated spline.

The Add SOP is used to generate points without creating any primitives. Also, points from other objects can be extracted through the Add SOP.

Finally, the Add SOP can be used to procedurally create rows and columns.
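
Conceptually, the Add SOP's point-then-polygon workflow looks like this (plain Python sketch, not the hou API): points are created first, then a polygon is defined as an ordered list of point indices, open or closed:

```python
def add_polygon(points, indices, closed=True):
    """Join existing points into a polygon by an ordered list of indices."""
    poly = [points[i] for i in indices]
    if closed:
        poly.append(points[indices[0]])   # repeat the first point to close it
    return poly

# Four points in a square, joined in order into one closed polygon.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
quad = add_polygon(pts, [0, 1, 2, 3])
open_curve = add_polygon(pts, [0, 1, 2], closed=False)
```

The index list plays the role of the Add SOP's polygon point patterns; reordering it yields different polygons from the same points.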

The Attribute Transfer SOP can be used to transfer color attributes from one geometry to another. The effective field of transfer can be controlled through the various parameters in the Attribute Transfer SOP.
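
A minimal sketch of distance-weighted transfer (plain Python; the SOP's actual falloff kernel and parameters differ): each destination point blends the colours of source points within a radius:

```python
def transfer_cd(dst_points, src, radius):
    """Blend source Cd onto each destination point with linear falloff."""
    out = []
    for p in dst_points:
        total_w, cd = 0.0, [0.0, 0.0, 0.0]
        for sp, scd in src:
            d = sum((a - b) ** 2 for a, b in zip(p, sp)) ** 0.5
            if d < radius:
                w = 1.0 - d / radius        # weight falls to zero at the radius
                total_w += w
                cd = [c + w * s for c, s in zip(cd, scd)]
        # Points outside every source's radius keep a zero (black) colour.
        out.append(tuple(c / total_w for c in cd) if total_w else (0.0, 0.0, 0.0))
    return out

src = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))]          # one red source point
result = transfer_cd([(0.5, 0.0, 0.0), (5.0, 0.0, 0.0)], src, 1.0)
# result[0] picks up the red colour; result[1] is out of range and stays black
```

The radius here stands in for the SOP's distance threshold parameter, which is the main control over the effective field of transfer.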

This example shows how to set up the Bake Volume SOP to compute the
lightfield created by the shadowing of a fog volume. It then exports
the fields properly to be rendered in Mantra by a constant volume
shader.

The Box SOP is used for more than just creating boxes. It can also envelop existing geometry for specific purposes.

The Box SOP can either create a simple six-sided polygon box, calculate the bounding box size for geometry, or be used in conjunction with the Lattice SOP.

There are two objects within the box.hip file that are examples of this:

animated_bounding_box

The animated_bounding_box object shows how you can envelop an object and surround it with a simple box, even if it is animated. This can be useful when displaying complicated geometry, in which case you would put the display flag on the box object and the render flag on the complicated geometry.

box_spring_lattice

This is an example of a Lattice SOP used in conjunction with the Box SOP. The Box SOP is used to envelop some geometry, in this case a sphere. Divisions is checked to create the proper geometry by referencing the number of divisions in the Lattice SOP.

The top points of the box are grouped by a Group SOP. The Spring SOP uses these points as the Fixed Points from which to create the deformation.

Using the Box SOP in this way allows you to change the incoming geometry (the basic_sphere in this case) and have the box and lattice automatically re-size for you.

This network is a demonstration of how the Carve SOP can be used to extract
various elements of the surface geometry.

Depending on the type of geometry, the Carve SOP may be used to extract
points from polygonal objects or curves from NURBS surfaces.

Furthermore, the Carve SOP uses the surface U and V information to extract
the various elements, and by animating the U and V values we can create
various effects as the points and curves move on the geometry surface.

This network contains an example of how the Carve SOP can extract 3D Isoparametric Curves from a surface, and how those curves may be used as a copy template.

The Carve SOP can be used to slice a primitive, cut it into multiple sections, or extract points or cross-sections from it.

In this example, the Extract option has been used to Extract 3D Isoparametric Curve(s). A series of disk-like shapes are created as the Carve SOP extracts curves from points around the surface with the same V Directional value.

It then uses the points along those curves as a template on which to copy sourced geometry.

This demonstration contains four different examples of applying the creaseweight attribute to polygonal geometry utilizing the Crease SOP, Vertex SOP, Attribute Create SOP, and Subdivide SOP.

It also points out some of the differences between rendering with Mantra vs. RenderMan. It is important to know that Mantra cannot render the creases directly due to copyright laws.

Note

Rendering creases with Mantra requires the addition of a Subdivide SOP to calculate the geometry. The Render tab’s Geometry parameter at the object level should be set to: Geometry As Is.

If RenderMan is being used, the Subdivide SOP is only for previewing the result; RenderMan calculates creases during the render. The Render tab’s Geometry parameter at the object level should be set to: Polygons as Subdivision Surfaces.

This example shows two different ways in which particles can be crept on a surface. In this case, the surface is a contorted tube.

One version shows how particles are crept inside the surface, the other shows how particles are crept outside the surface. This is done by changing the z scale in the Creep SOP, which offsets the particles perpendicular to the surface.

The particles are birthed from a circle that is carved from the tube geometry.

This example shows how to create a low-res/high-res setup to support RBD objects.
The two main methods are to reference-copy the DOP Import SOP and feed in the
high-res geometry, or to use point instancing with an Instance Object.

The Fillet SOP is used to create a bridge between two NURBS surfaces with control over its parameterization. The fillet uses the original surface uv information for bridging.

Fillet types include Freeform, Convex, or Circular. The Freeform fillet usually provides a smooth, natural form. Parameters such as the left and right UV, Width, Scale, and Offset may be used to control the fillet’s location between the surfaces.

This is an advanced example of how to use the FindShortestPath SOP to prefer "central" paths, based on centrality measures computed using FindShortestPath and AttribWrangle. This helps the path avoid staying too close to walls where avoidable.

Turn on Display Options > Optimization > Culling > Remove Backfaces to see inside the space more easily. Try visualizing the different centrality measures using the switch node. The same example without considering path centrality is demonstrated in a side branch of the SOP network, so you can see the difference.

This example demonstrates using the Fractal SOP to deform geometry to get a random, jagged subdivision surface. This is a useful tool in creating things such as bumpy terrains, landscapes, rocks, or debris.

The Fractal SOP is applied to each geometry type to show how the displacement changes based on the geometry type.

This example demonstrates how the Fur SOP and Mantra Fur Procedural can be
applied to an animated skin geometry. CVEX shaders are used to apply a
custom look to the hairs based upon attributes assigned to the geometry.

This example shows the ability of the Particle SOP to define a default Size for any given birthed particle.

A simple grid can be used to create a dynamic solution of particles streaming off as if blown by the wind. As these particles leave the grid, their size slowly diminishes as they age and die.

The given example file takes a grid, and using the Particle SOP in combination with the Metaball and Force SOPs, creates a dynamic animation.

A metaball ship jets through space driving particles out of its path along the wake of the ship. With the help of the Force SOP, the metaballs are given the properties necessary to make this reaction possible.

The Particle SOP enables the creation of particles at the SOP level and allows those particles to directly interact with geometry. Furthermore, these particles are in turn treated as point geometry.

In this example, particles are both crept along and collided with a collision tube object. It is possible to also manipulate and control particles in SOPs through the adjustment of point normals (including those of the particles).

The Platonic Solids SOP generates platonic solids of different types. Platonic solids are convex polyhedra whose faces are congruent regular polygons, with the same number of faces meeting at every vertex. There are only five such solids, which form the first five choices of this operation.

This example shows all seven of the different polyhedron forms that can be made using the Platonic Solids SOP.

Using the Point SOP, a simple displacement is created and applied to a portion of a spherical surface.

A point normal is simply a vector; adding a scaled copy of it to the point’s position displaces the point in that direction. With a Merge and Skin SOP, the displaced surface is then connected back to the original.
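
The displacement reduces to one line of vector math. A minimal sketch (plain Python):

```python
def displace(points, normals, amount):
    """Move each point `amount` units along its normal: P' = P + amount * N."""
    return [tuple(p + amount * n for p, n in zip(pt, nrm))
            for pt, nrm in zip(points, normals)]

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
nrms = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
moved = displace(pts, nrms, 0.5)
# both points rise 0.5 units along their +Y normals
```

In the Point SOP this is simply the position expression P + scale * N evaluated per point.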

This example file uses the Point SOP to turn a regular line into a spiral.

There are two different approaches used in this example. The first uses the point numbers of the line to define the expression calculations. The second uses the position of the points in the line’s bounding box for the expression.
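
The first approach can be sketched as a function of the point number (illustrative Python; in the Point SOP this would be an expression using $PT):

```python
import math

def line_to_spiral(n_points, height, turns, radius):
    """Swing each point of a straight line around the Y axis by an angle
    proportional to its point number, keeping its height along the line."""
    pts = []
    for i in range(n_points):
        t = i / (n_points - 1)             # normalized point number, 0..1
        angle = t * turns * 2.0 * math.pi  # total revolutions = `turns`
        pts.append((radius * math.cos(angle),
                    height * t,
                    radius * math.sin(angle)))
    return pts

spiral = line_to_spiral(9, 1.0, 2.0, 0.5)
# the first point sits at (radius, 0, 0); the last is two full turns higher
```

The bounding-box variant works the same way, except that t comes from the point's relative position in the box rather than its point number.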

This example demonstrates the various options for joining polygons using the PolyKnit SOP. The PolyKnit SOP is useful for filling in holes, gaps, or to re-define edges on polygonal geometry.

PolyKnit can be used to manually knit joining polygons between existing polygons. Polygons are created by specifying a list of input points from which to "knit" the new polygons.

PolyKnit will yield different results, depending on the pattern by which the points are selected or listed. Please see the Helpcard documentation for more information on how the PolyKnit SOP builds new polygons.

This example demonstrates how the PolyStitch SOP can stitch together or refine seams between polygonal surfaces with incongruent U and V divisions. This is useful for smoothing and eliminating cracks at seams.

This example demonstrates the use of the Resample SOP on three types of curves (Polygon, NURBS, and Bezier).

The Resample SOP rebuilds the curve by converting it into a series of Polygon Line Segments.

The curve may be rebuilt "Along Arc" or "Along Chord". "Along Arc" uses the hull information as the basis of reconstruction, and can be defined by a Maximum Segment Length and/or a maximum segment number. "Along Chord" can only be defined by Maximum Segment Length.

Resampling the curve based on Maximum Segment number divides the line into segments of equal, but unspecified length, spanning from start to endpoint. Line detail is directly proportional to the Segment number.

Resampling the curve based on Maximum Segment Length will rebuild the entire line into equal length segments except the last segment. If the Maintain Last Vertex option is on, the last segment will be less than or equal to the Maximum Segment Length value, depending on its distance to the endpoint. With the option off, the endpoint is disregarded and the line is created out of equal lengths.

Turn on Points in the display to see how the Resample SOP resamples line segments.
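
Resampling by Maximum Segment Length can be sketched on a polyline (illustrative Python, not the SOP's implementation): walk the curve emitting points at equal arc-length intervals, keeping the shorter final segment when Maintain Last Vertex is on:

```python
def resample(points, max_len, keep_last=True):
    """Rebuild a polyline from segments of equal arc length `max_len`."""
    def lerp(a, b, t):
        return tuple(x + t * (y - x) for x, y in zip(a, b))
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    out, carry = [points[0]], 0.0
    for a, b in zip(points, points[1:]):
        seg = dist(a, b)
        t = max_len - carry              # arc length left until the next sample
        while t <= seg:
            out.append(lerp(a, b, t / seg))
            t += max_len
        carry = seg - (t - max_len)      # distance already covered toward next
    if keep_last and out[-1] != points[-1]:
        out.append(points[-1])           # final segment may be shorter
    return out

line = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
result = resample(line, 0.3)
# samples at 0.0, 0.3, 0.6, 0.9, plus the kept endpoint at 1.0
```

With keep_last off, the endpoint would be dropped, matching the behaviour described above for Maintain Last Vertex.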

The Rest Position SOP creates an attribute based on the surface normals that allows a shader to stick to a deforming surface.

All primitives support the rest attribute, but, in the case of quadric primitives (circle, tube, sphere and metaball primitives), the rest position is only translational. This means that rest normals will not work correctly for these primitive types either.

Use the Rest Position SOP only when you are deforming your geometry and you are assigning volumetric or solid materials/patterns in your shader.

Rest normals are required if feathering is used on polygons and meshes in Mantra. NURBS/Bezier surfaces will use the rest position to compute the correct resting normals.

This example demonstrates the Revolve SOP’s ability to create geometry by spinning curves and surfaces around any described axis. Simple objects, such as a torus and a vase, are generated by the Revolve SOP and user-defined inputs.

This file also shows off how different geometry types react to different Revolve SOP parameter changes.

This example demonstrates how you can use the Scatter SOP to
scatter points that stay consistent through topology changes
like remodelling the input geometry or breaking it. It does
this by using the option to scatter in texture space.

This example demonstrates how you can use the Scatter SOP to
scatter points that stay consistent when separate pieces are added or
removed. It does this by using the option to use custom random seeds
for each primitive.

This network utilizes three SOPs (Bound, Spring and Lattice) that commonly work together to simulate certain physical dynamics.

We have created a simple polygonal sphere to act as the source geometry. The sphere is then fed into a Bound SOP, which will act as a deforming reference. The Bound SOP also behaves as reinforcement for the deforming object.

Then the bounding box is wired into the Spring SOP with a group of grids as collision objects. The Spring SOP simulates the dynamics by calculating the proper deformations and behaviours of our source geometry as it collides with other objects. The Spring SOP is where we can apply external forces along with various attributes (characteristics such as mass and drag) which influence how the object deforms.

Finally the Lattice SOP takes the deformation information from the Spring SOP and applies it to the source sphere geometry.

This example demonstrates a simple ray-traced shader
using a VOP VEX network. To modify the shader properties,
create a properties shader in the material and connect
it to the output shaders node. You can then add
rendering parameters to the properties node.
For example, to control the number of reflection bounces,
you would add the Reflect Limit parameter.