Pixelkram / Moritz S. (of Entagma) and I are proud to announce MOPs: an open-source toolkit for creating motion graphics in Houdini!
MOPs is both a suite of ready-to-use tools for solving typical motion graphics problems, and a framework for building your own custom operators easily.
More information is available from our website: http://www.motionoperators.com
Enjoy!

Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in MacOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but instead will be focusing on the APIs themselves.
OpenGL
OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it gets a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL features and replace them with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added these new features alongside the old ones, while the core profile completely removed the old ones. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL for the 3D viewport in v12.0, and stricter core-profile-only support in v14.0 (for the remaining UI and other viewers).
Modern GL implies a lot of different things, but the key ones are:
geometry data and shader data must be backed by VRAM buffers,
shaders are required, and
all fixed function lighting, transformation, and shading is gone.
This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore; instead it's kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes.
OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to OpenCL as well).
Bottlenecks
As GPUs began getting faster, what game developers in particular started running into was a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX's), but eventually you get to a point where the driver code prepping the draw becomes significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to sit idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.
The first big attempt to address this was in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver. Bindless does away with the ID and replaces it with a raw pointer.
The second was to move more work to the GPU entirely, and GLSL Compute shaders (GL4.4) were added, along with Indirect draw calls. This allows the GPU to do culling (frustum, distance based, LOD, etc) with an OpenCL-like compute shader and populate some buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU.
Finally, developers started batching up as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became more obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution is required, the current APIs (GL and DX11) were reaching their limit.
Mantle, Vulkan, and DX12
AMD recognized some of these bottlenecks, and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed off all the optimization work to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, who develops the OpenGL and OpenCL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences).
Vulkan requires that the developer be a lot more up-front and hands on with everything. From allocating large chunks of VRAM and divvying it up among buffers and textures, saying exactly how a resource will be used at creation time, and describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all its resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc), and command buffers built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but also is more efficient to thread (though the dev is also completely responsible for synchronization of everything from object creation and deletion to worker and render threads). Vulkan also requires all shaders be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging.
In short, it's a handful and can be rather overwhelming.
Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that from its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.
Apple and OpenGL
When OSX was released, Apple adopted OpenGL as its graphics API. OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than on Windows or Linux. Because of this, GPU vendors did not install their own OpenGL implementations as they do for Windows or Linux. Apple created the OpenGL frontend, and the driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since.
Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it lacked other features like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 Apple addressed this situation and released their OpenGL 3.2 implementation... but with a bit of a twist.
Nvidia and AMD's OpenGL implementations on Windows and Linux supported the Compatibility profile. When Apple released their GL3.2 implementation it was Core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem being that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).
Performance was another issue. Perhaps because of the division of responsibility between the GPU maker's driver developers and the OpenGL devs at Apple, or perhaps because the driver developers added specific optimizations for their products elsewhere, OpenGL performance on MacOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal.
Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance for modern GPUs and portability, and OpenGL is currently the only cross-platform API, since Apple has not adopted Vulkan (a third-party library, MoltenVK, exists that layers Vulkan on Metal, but it currently implements only a subset of Vulkan).
Enter Metal
Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform specific, and it has its own shading language.
If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross platform, you've got a bit of a dilemma. You can continue on business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to MacOS, port all your shaders to Metal SL and maintain them both indefinitely (Houdini has about 1200). Or, you can drop the platform entirely. None of those seem like very satisfactory solutions.
I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or if it will end). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.
And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago to a small explosion of APIs. Never a dull moment in the graphics world.

I thought it fitting to post this here too ;). For better or worse, I'm launching a VFX and animation studio at the end of the week. Some of you may recognize the name (if you squint and look at it just right).
http://theodstudios.com

Hi everyone,
Here's a little personal project I did over the last year. No keyframes were used for the animation. Each movement is generated through physical simulation or procedural noise.
The bananas and pears were done in H16.5 using CHOP-controlled bones fed into an FEM simulation. All the other fruits were done using H17 and Vellum.
ÖBST: "How would fruits move if they could?"
Hope you like it.

I've wanted to tackle mushroom caps in pyro sims for a while. Might as well start here...
Three things contribute greatly to mushroom caps: coarse sub-steps, the temperature field, and the divergence field.
All of these together will comb your velocity field pretty much straight out and up. Turning on the velocity visualization trails will show this very clearly. If you see vel combed straight out, you are guaranteed to get mushrooms in that area. If you are visualizing the velocity, it's best to adjust the visualization range by going forward a couple of frames and adjusting the max value until you barely see red. That's your approximate max velocity value. An off-the-shelf Pyro explosion on a hollow fuel source sphere at frame 6 will be about 16 Houdini units per second, and the max velocity coincides with the leading edge of the divergence field (if you turn it on for display, you'll see that).
So divergence drives the expansion, which in turn pushes the velocity field and forms a pressure front ahead of the explosion because of the Project Non-Divergent step, which assumes the gas is incompressible across the timestep wherever divergence is 0.
I'm going to get the resize field thingy out of the way first as that is minor to the issue but necessary to understand.
Resizing Fields
Yes, if you have a huge explosion with massive velocities driven by a rapidly expanding divergence field, you could have velocities of 40 Houdini units per second or higher! Turning off the Gas Resize forces the entire container to evaluate, which is slow; it may be necessary in some rare cases, but I don't buy that. What you can do instead is, while watching your vel and divergence fields in the viewport, raise the Padding parameter in the Bounds field high enough to keep ahead of the velocity front, as that is where you hope for some nice disturbance, turbulence and confinement to stir around the leading edge of the explosion.
or...
Use several fields to help drive the resizing of the containers. Repeat: Use multiple fields to control the resizing of your sim containers.
Yep, even though it says "Reference Field" and the docs say "Fluid field..", you can list as many fields in this parameter as you want to help with the resizing. In case you didn't know.
Diving into the Resize Container DOP, there is a SOP Solver that contains the resizing logic. It constructs a temporary field called "ResizeField", imports the fields with a ForEach SOP (by expanded string name from the simulation object, which is why vector fields work), each field in turn, then does a volume bound with the Volume Bounds SOP on all the fields together, using the Field Cutoff parameter.
Yes there is a bit of an overhead in evaluating these fields for resizing, but it is minor compared to having no resizing at all, at least for the first few frames where all the action and sub-stepping needs to happen.
The default is density, and why not: it's good for slower-moving sims.
Try using density and vel: "density vel".
You need both, as density ensures that the container will at least bound your sources when they are added. Then vel will very quickly take over the resizing logic, as it expands far more rapidly than any other field in the sim.
Then use the Field Cutoff parameter to control the extent of the container. The default here is 0.005. This works for density, as that field is really a glorified mask: either 0 or 1, and not often above 1. Once you bring the velocity field into the mix, you need to adjust the Field Cutoff. Now that you have vel defined alongside density, this Field Cutoff reads as 0.005 Houdini units per second with respect to the vel field.
Adjust the Field Cutoff to suit. Start out at 0.01 and then go up or down. Larger values give you smaller, tighter containers. Lower values give you larger padding around the action. It all depends on your sim, scale and the velocities present.
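If you want to eyeball where that cutoff lands, one option (just a sketch, assuming you've brought the sim back into SOPs with a DOP Import) is a Volume Wrangle over a scalar "speed" volume, with the imported vel field wired into input 1:
// Fill a scalar "speed" volume with |vel| so you can see
// where the Field Cutoff threshold falls in the viewport.
@speed = length(volumesamplev(1, "vel", v@P));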
Just beware that if you start juicing the ambient shredding velocity with no Control Field (it defaults to temperature with its own threshold parameter, so leave that there) to values above the Field Cutoff threshold, your container will zip to full size, and if you have Max Bounds off, you will promptly fill up your memory; after a few minutes of swapping death, Houdini will run out of memory and terminate. Just one of the things to keep in mind if you use vel as a resizing field. Not that I've personally done that...
The Resolution Scale is useful to save on memory for very large simulations, so expect to adjust it for those. The Gas Resize Field DOP creates a temporary field called ResizeBounds, and the Resolution Scale sets this container's resolution compared to the reference fields. Remember from above that this parameter is driving the Volume Bounds SOP's Bounding Value. Coarser values lead to blurred edges, but that is usually a good thing here.
Hope that clears things up with the container resizing thing. Try other fields for sims if they make sense but remember there is an overhead to process. For Pyro explosions, density and vel work ok. For combustion sims like fire, try density and temperature where buoyancy contributes a lot to the motion.

Hi,
Just posting some of my recent art. Most of it is Houdini. Some is a mix of Houdini, Daz and Marvelous Designer. If you see a character, that's definitely from Daz. Everything is rendered in Octane.
regards
Rohan

A few tips and tricks to manipulate gas simulations.
1. Independent resolution grids. E.g. overriding the vel grid size independently of the density grid.
2. Creating additional utility fields. E.g. gradient, speed, vorticity, etc., which can be used to manipulate forces.
3. Forces via VEX, with some example snippets (see the sketch below).
smokesolver_v1.hipnc
P.S. Some of these techniques are not OpenCL-friendly, though.
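As a taste of point 3, here's a minimal Gas Field Wrangle sketch (the frequency and amplitude are placeholder values, not what's in the hip):
// Gas Field Wrangle bound to the vel field: add a divergence-free
// curl-noise force, scaled by the timestep.
vector force = curlnoise(v@P * 0.5) * 2.0; // frequency/amplitude are guesses
v@vel += force * f@TimeInc;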

what if the shelf buttons created DOP setups inside one SOP network, instead of having a Geometry node,
a DOP network for simulation and another Geometry node to import the data and save to disk.
It makes much more sense to see the data flow from top to bottom in one network without having to jump to different levels for no reason.
maybe it's just me...
grains.hipnc

Hi! Here is something I've been working on, on and off, for some time. I was searching for something that I could use for plants colliding near the camera. I ended up using the Bullet solver with packed geometry. I'm not sure if it's the right way to go for this kind of thing, but at least I learned a lot about the constraints.
Thinking about it afterwards, I probably should have used a few more substeps, but I hope you can enjoy it anyway! Suggestions for improvements are welcome.

I'm working on something related to art directing the swirly motion of gases. It's an implementation of a custom buoyancy model that lets you art direct the general swirly motion of gases very easily, without using masks, vorticles, temperature sourcing to get more swirly motion in specific zones, etc. It also gets rid of the "mushroom effect" for free with a basic turbulence setup.
Here are some example previews. Some with normal motion, others with extreme parameter values to stress the pipeline.
For the details it's just simple turbulence + a bit of disturbance in the vel field, nothing complex; because of this the sims are very fast (for constant sources: average voxel count 1.8 billion, voxel size 0.015, sim time 1h 40min for 160 frames; for burst sources: voxel size 0.015, sim time 28min).
I'm working on a Vimeo video to explain this new buoyancy model in more detail.
I hope you like it!
Cheers,
Alejandro
constantSource_v004.mp4
constantSource_v002.mp4
burstSource_v004.mp4
constantSource_v001.mp4
burstSource_v002.mp4
burstSource_v003.mp4
burstSource_v001.mp4
constantSource_v003.mp4

Hi everyone!
This past week I worked on a personal project to learn something about hair and Vellum. It's my first project ever with hair, so I guess it's nothing special, but several people asked to see the hip file, so here it is.
Final result:
Hip file (I had to recreate it but it should be pretty much the same):
groom_clumping_03.hipnc

"The Tree"
Another R&D image from the above VR project:
The idea for the VR-experience was triggered by a TV-show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty...
(I strongly recommend this (german) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren )
It's really unbelievable how little we know about these beings.
So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three dimensional painting through which you could walk. In the centre of the room, there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest.
Originally the idea was that you would touch the tree (hands were tracked with a Leap Motion controller), this would "load up" the touched area, and the tree would start to become transparent and alive; you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything.
Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched.
For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry.
Since Unity does not really import lines, Sascha from Invisible Room created a JSON exporter for Houdini and a JSON importer for Unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini-generated attributes for colour, thickness etc.
To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc.
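The short-line cleanup part of an optimiser like that can be tiny - not the actual setup used here, just a sketch: a Measure SOP writing @perimeter, followed by a primitive wrangle:
// Primitive Wrangle after a Measure SOP: delete stroke
// primitives shorter than a threshold.
if (f@perimeter < chf("min_length"))
    removeprim(0, @primnum, 1); // 1 = also remove the points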
In Unity, one important thing was to find a way to antialias the lines, which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable.
I also created plants, hands, rocks etc. in a fitting style.
The team at Invisible Room took over from there and did the Unity part.
The final result was shown with a Vive Pro with attached Leap Motion Controller fed by a backpack-computer.
I was rather adverse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level.
Interesting times :-)
Cheers,
Tom

OK, here is the example file with 4 ways (cache the instance geometry first, both blue nodes):
1. (Purple) rendering points with instancefile attrib directly through fast instancing
2. (Green) overriding the unexpandedfilename intrinsic for any packed disk primitive copied onto points, without stamping (see the sketch below)
3. (Red) just for comparison, the Instance SOP, which uses copy stamping inside, so it will be slower than the previous methods
4. (Yellow) copying a static Alembic without stamping and overriding abcframe, in this case to vary time for each instance independently (if you need various Alembics you can vary abcfilename as well)
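For method 2 (green), the override itself is one call in a primitive wrangle - a sketch with made-up file names, not the exact code in the hip:
// Primitive Wrangle over packed disk prims: vary the file per instance.
// The string is stored unexpanded, so $F4 expands at load/render time.
int variant = @primnum % 5; // pick one of 5 cached variants (assumption)
string path = sprintf("geo/instance_%d.$F4.bgeo.sc", variant);
setprimintrinsic(0, "unexpandedfilename", @primnum, path);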
ts_instance_and_packed_examples_without_stamping.hip

Project Non-Divergent Step and Mushrooms
The Project Non-Divergent DOP is responsible for 99.9% of the simulation's behaviour. Yes, there are hundreds of DOPs inside the Pyro Solver all playing a part, but they all funnel through that single non-divergent step.
This means that if you don't like the look of your sim and the mushrooms, it's ultimately because of the Non-Divergent step creating a vel field that doesn't do it for you.
If you want to see for yourself, unlock the Pyro Solver, dive in, find the Smoke Solver, unlock that, dive in and find the projectmultigrid DOP and bypass it, then play. Nothing.
For almost all Pyro sims, this is the Project Non-Divergent Multigrid, as it is the fastest of the non-divergent microsolvers. This specific implementation takes only the vel and divergence fields and, assuming the gas is incompressible across the timestep wherever divergence is 0, creates a counter field called pressure, then applies that pressure field to the incoming vel to remove any compression or expansion. That gives you your velocity: nice, turbulent and swirly, or combed straight out.
Just tab-add a Project Non-Divergent Multigrid DOP in any dop network and look at the fields: Velocity Field, Goal Divergence Field and Pressure Field (generated every timestep, used, then removed later on).
All the other fields in Pyro are there to affect vel and divergence. Period. Nothing else. At this point I don't care about rendering and the additional fields you can use there. It's about vel and divergence used to advect those fields into interesting shapes, or mushrooms.
If you want to create your own Pyro Solver taking in say previous and new vel, density, temperature, and then in a single Gas Field VOP network, create an interesting vel and divergence field, then pass that straight on to the Project Non-Divergent Multigrid microsolver, then advect density, temperature and divergence afterward, go for it.
Knowing that only vel and divergence drive the simulation is very important. All the other fields are there to alter the vel and divergence field. So if you have vel vectors that are combed straight, divergence (combustion model in Pyro) or buoyancy (Gas Buoyancy DOP on temperature driving vel) have a lot to do with it. Or a fast moving object affecting vel...
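If you want to see what the projection is fighting, you can measure the divergence yourself after importing the sim into SOPs - a sketch, with volume names assumed:
// SOP Volume Wrangle over a scalar "div" volume, with the imported
// vel field wired into input 1: divergence is the sum of the
// partial derivatives of each vel component.
f@div = volumegradient(1, "vel.x", v@P).x
      + volumegradient(1, "vel.y", v@P).y
      + volumegradient(1, "vel.z", v@P).z;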

Hello everyone!
My name is Daniele, I'm new to Odforce. This is my first post, nice to meet you!
I've been a character animator for a few years but recently I started learning Houdini. I'm learning using resources I find online, including some super useful posts from this forum.
I would love to use this post to keep track of my progress and share my results with you.
My first project is a procedural building, hip is attached.
Cheers!
Daniele
procedural_house_01.hipnc

it's pretty straightforward out of the box
just use v@N, v@up or p@orient on your instancing points in such a way that the resulting reference frame has Y pointing in the up-down direction of your ocean (so in the normal direction of the ball) and X in the direction you want the wind to blow in
in your file, since v@N is pointing outwards and v@N defines the Z axis, your ocean deforms in a tangential direction and therefore you are seeing weird deformation
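for example, a minimal point wrangle sketch that builds such a frame (the wind direction is just an assumption here):
// Point Wrangle: build an orient where Y is the ball normal
// and X points along the wind.
vector up = normalize(v@N); // outward normal of the ball becomes Y
vector wind = {1, 0, 0}; // world wind direction (assumption)
vector x = normalize(wind - up * dot(wind, up)); // project wind onto the tangent plane
matrix3 m = set(x, up, cross(x, up));
p@orient = quaternion(m);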
here is the modified file
ts_ocean_on_ball.hip

Hi roberttt!
I did that specific fracture before the Houdini 16+ booleans were available, using a custom voronoi cutters technique. Basically, I used boolean-style cutter geometry to guide a voronoi fracture.
1) Scatter lots of points on the cutter geo, point-jitter them for width, and create cluster attributes on those points to define small clumps
2) Create a band of voronoi points a bit further from the cutter geometry, to define the large chunks. These points all get the same cluster value, and make sure that cluster value isn't used in the small-chunks clusters.
3) Run the fracture with clustering... although the new H17 voronoi fracture doesn't seem to have clustering built in. So I believe you need to do the clustering post-fracture in H17, which unfortunately doesn't have an option to remove the unnecessary internal faces, so the geo can be a bit heavy with the new workflow. (Unless I'm missing something obvious!)
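The post-fracture clustering itself can be as simple as rewriting the name attribute from a cluster value - a sketch, assuming you've transferred an i@cluster attribute onto the pieces:
// Primitive Wrangle: give every piece in a cluster the same name,
// so packing / RBD tools treat each cluster as one chunk.
s@name = sprintf("cluster_%d", i@cluster);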
I don't think I've used this voronoi fracture workflow at all since the H16+ booleans were released, and I've removed that technique from my CGMA destruction class. Nowadays I would handle this in one of these ways:
- Running a primary boolean fracture to define the main chunks, and then running a secondary pass where I generate additional fragments on the edges of the main pieces. There are various ways to generate those secondary boolean edge cuts, and it's always a bit experimental.
- Fracture everything at once into lots of small pieces, and use noise or geometry-grouping to define the larger shapes from the smaller fracture. Then once those large chunks are defined, use constraints or the name attribute or double-packing to get them to behave as individual large pieces.
Hope this helps! :-)

Hey All!
The first part of this tutorial has been available for almost a year now, but because of the sad news that hit CMI, I was unable to upload the 2nd "half" there.
So instead I just made it available on Youtube for everyone, to make up for that I suppose. I'm considering putting the first part on there as well, if enough people want that.
This tutorial covers the following, using Houdini:
* Generating water meshes
* Updating the terrain based on the water
* Generating walking paths on the terrain
* Creating some basic instances
* Building a flexible system using external files to place these instances.
Built on Houdini 16.5, but should work on 16 too, or all the way back to H14 if you skip the heightfield part.
Recommended specs: at least 16GB of RAM, reduce the terrain size if you have less.
Disclaimer: Work files are as-is and do not contain the cached geometry (to save on space);
this may explain node errors on the various "File Cache" nodes.
They do, however, also contain the work done in the first half of the tutorial, albeit mostly undocumented:
https://tinyurl.com/y89egjvq
Hopefully it's of some use!
Twan

I have been exploring how constraints work and I have put together a basic RBD car rig. The vehicle/car rig supports front and rear wheel drive, a spring suspension, motor speed, adjustable wheel size with front and rear axle offsets and a switchable front/back engine block mass.
Dive inside and look for the node named Controls to play with the various settings.
I have a first-draft attempt at steering, but it does not really work yet. If anyone has any ideas on how to link steering to the constraints, I'd love to see them.
Thanks to Richard Lord and Julian Johnson for posting their constraint systems, and to Matt Estela's CGWiki. Dissecting their work helped me build this rig.
ap_basic_vehicle_090318.hiplc

Last week a guy asked in the Brazilian Houdini group on Facebook how to simulate colored smoke. I believe there are lots of hip files with this kind of effect, but while thinking about it, the possibility of using CMYK instead of RGB came to mind, since CMYK is more suitable for mixing colored things other than light.
I couldn't spend more time testing it or improving the file, but it seems to work, so here's the hip file.
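For reference, the RGB-to-CMYK round trip is only a few lines of VEX - a naive sketch, the hip may do it differently:
// Convert Cd to CMYK, mix there, then convert back to RGB.
vector cmy = set(1, 1, 1) - v@Cd; // RGB -> CMY
float k = min(cmy.x, min(cmy.y, cmy.z)); // pull out the black component
vector c = (k < 1.0) ? (cmy - k) / (1.0 - k) : set(0, 0, 0);
// ... mix colors in CMYK space here ...
v@Cd = set(1, 1, 1) - (c * (1.0 - k) + k); // CMYK -> RGB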
colored_smoke_V002.hip

Hi all, I have been doing an R&D project lately on how to generate knitted garments in Houdini. One of my inspirations was a project done by Psyop using Fabric Engine, and the other is by my friend Burak Demirci. Here are the links:
http://fabricengine.com/case-studies/psyop-part-2/
https://www.artstation.com/artist/burakdemirci
Some people asked me to share my hip file, and I was going to do it sooner, but things were a little busy for me. Here it is; I also put in some sticky notes to explain the process better, hope it helps. This hip file is identical to the one I used to create this video, except for the rendering nodes: https://vimeo.com/163676773 . I think there are still some things that can be improved and maybe done in a better way. I would love to see people developing this system further. Cheers!
Alican Görgeç
knitRnD.zip

Here is another slight variation. Instead of generating a fixed-length line segment that moves through time, generate the full path over the entire time range. Then add a primitive attribute @path_pos to each line primitive. Drive the offset-along-path value of the path deformer with this attribute. Then you can have some geometry leading others as they each flow along their own path.
float frame_end = 200.0;
// The deform_path node expects input in the range 0-1,
// so remap the current frame into that range.
f@path_pos = fit(@Frame, 1, frame_end, 0, 1);
// Now offset each path based upon its index.
float delta = 0.05; // per-line delay time can be set here.
f@path_pos -= (delta * @primnum);
ap_ps_Cardume_Odforce_v3.04.hiplc

You're losing sight of the bigger picture here, which is to create art. FX TDs are by definition going to be on the technical side of things, but their goal is to facilitate the creation of art. The final image is what matters, 99% of the time. People with engineering mindsets sometimes like to get caught up in the "elegance" or "physical correctness" of their solutions, but that stuff rarely (if ever) matters in this field.
Rotating an object is conceptually a simple thing, but it turns out that there's quite a bit of math involved. Is it really insulting one's intelligence to not assume that every artist is willing to study linear algebra to rotate a cube on its local axis? I do know how to do this, and I still don't want to have to write that code out every single time. It's a pain in the ass! Creating a transform matrix, converting to a quaternion, slerping between the two quaternions, remembering the order of multiplication... remembering and executing these steps every time gets in the way of exploration and play. Besides, all of that is only possible because SESI wrote a library of functions to handle this. Should we be expected to also write our own C++ libraries to interpolate quaternions? Should we be using Houdini at all, instead of writing our own visual effects software? Who engineered the processor that you're using to compute all this? This is a rabbit hole you'll never escape from.
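For the curious, here's roughly what that dance looks like in a point wrangle - a sketch, not MOPs' actual code, and it assumes p@orient already exists on the points:
// Rotate packed prims toward a local-axis rotation via quaternion slerp.
matrix3 m = ident();
rotate(m, radians(ch("angle")), {0, 1, 0}); // build the transform matrix
vector4 target = quaternion(m); // convert it to a quaternion
p@orient = slerp(p@orient, target, ch("bias")); // slerp between the two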
Anyways, Entagma and MOPs are not officially affiliated at all, so Entagma's core mission of reading white papers so that you don't have to is unlikely to change.

Check out my latest project - creating an open library full of learning resources about various areas of VFX. It has many Houdini-related presentations and theses.
library: https://github.com/jtomori/vfx_good_night_reading
blog post: https://jurajtomori.wordpress.com/2018/06/11/learning-resources-about-vfx-and-cg/

Hello,
I am an FX/CG generalist. Here you can watch my new showreel compiling some work for feature film, commercials, personal work and R&D.
I am available for freelance work, remote or for Berlin-based studios.
Breakdown:
https://www.dropbox.com/s/94tmwcmkmk279c2/breakdown_reel_2018_tomfreiag.pdf?dl=0

Spammers will spam; that's the nature of the interwebs. We've turned on first-post moderation for now, which will shield everyone from the spam. Hopefully it dies down soon and we can turn it off again.
M

I would use the wind tunnel option on the Pyro Object DOP. Enable the wind tunnel option and set the vector as the direction and magnitude of the wind tunnel.
Make sure that it is a smoke sim.
Disable anything to do with temperature. Set buoyancy to 0. Disable any temperature field in the source volume and the Fluid Source VOP.
Get rid of gravity.
Make the container size just big enough to run the sim. Very narrow.
On the Solver, disable all shaping including diffusion except for confinement. Use lots of confinement to add nice swirling detail.
See the attached hip file for one example setup.
Comparing my sim to the reference footage, a lot of finessing is going on: anything from a localized sink in the shoe, to velocity sculpting, to invisible colliders around the shoe to get the streamers to do what they are doing. That is to be expected; the streamers would be art directed far beyond a simple physical simulation.
wind_tunnel_shoe.hip

Try this...
Put down a Measure SOP and set it to measure the perimeter of your curves.
After that, add a primitive wrangle and write:
#include <groom.h>
// rescale each curve from its current length to a target length
adjustPrimLength(0, @primnum, @perimeter, @perimeter * @dist);
groom.h is an include file containing some functions used in the grooming tools, and one of those functions is...
void adjustPrimLength(const int geo, prim; const float currentlength, targetlength)

Methods to Stir Up the Leading Velocity Pressure Front
We need to disturb that leading velocity pressure front to start the swirls and eddies prior to the fireball. That, and have a noisy, interesting emitter.
Interesting Emitters and Environments
I don't think a perfect sphere exploding into a perfect vacuum with no wind or other disturbance exists, except in software.
One thing to try is to pump some wind-like swirls into the container to add some large forces that shape the sim later on as it rises.
The source by default already has noise on it by design. This does help break up the effect, but the Explosion and Fireball presets have so much divergence that it very quickly turns into a smooth glowing ball. It doesn't hurt, though, and it certainly does control the direction of the explosion.
Directly Affecting the Pressure Front - Add Colliders with Particles
One clever way is to surround the exploding object with colliders: points set large enough to force the leading velocity field to wind through them and cause nice swirls.
There are several clever ways to proceduralize this. The easiest is with the Fluid Source SOP: manipulate the Edge Location and Out Feather Length, scatter points in there, then run the Collide With tool on the points.
Using colliders to cut up the velocity over the first few frames can work quite well. This will try to kick the leading pressure velocity wave about and hopefully cause nice swirling and eddies as the explosion blows through the colliders.
I've seen presentations where smoke and dust walls flow along the ground through invisible tube colliders just to encourage the swirling of the smoke.
You can also advect points through the leading velocity field and use these as vorticles to swirl the velocity about.
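The advection loop for such vorticle points is tiny - a sketch of a Point Wrangle inside a SOP Solver, with the sim's vel field wired into input 1:
// Advect control points through the imported vel field so they
// ride the leading edge of the explosion.
vector vel = volumesamplev(1, "vel", v@P);
v@P += vel * f@TimeInc;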
The one nice thing about using geometry to shape and control the look is that, as you increase the resolution of the sim, it has a tendency to keep its look intact, at least in the bulk motion.
As an aside, you could add the collision field to the resize container list (density and vel) to make sure the colliders are always there if it makes sense to do so.
Colliders work well when you have vortex confinement enabled. You can use this but confinement has a tendency to shred the sim as it progresses. You can keyframe confinement and boost it over the first few frames to try and get some swirls and eddies to form.
Pile On The Turbulence
Another attempt to add a lot of character to that initial velocity front is to add heaping loads of turbulence to counter the effect of the disturbance field.
You can add as many Gas Turbulence DOPs to the velocity shaping input of the Pyro Solver to do the job. Usually the built-in turbulence is set up to give you nice behaviour as the fireball progresses. Add another net new one and set it up to only affect the velocity for those first few frames. Manufacturing the turbulence in this case. In essence no different than using collision geometry except that it doesn't have the regulating effect that geometry has in controlling the look of the explosion, fireball or flames, or smoke.
As with the shredding, turbulence has its own visualization field so you can see where it is being applied. Again, the problem is that you need a control field or the resize container will go to full size, but if it works, great. Or use both colliders and turbulence pumped in for the first few frames and resize on the colliders. Up to you.
But you could provide some initial geometry in /obj and resize on that object if you need to.
Hope this helps...