This dedicated thread describes the main changes of version 3.07. Some parts are not completed yet. Please bear with us until that is done.

Object controls for the OctaneRender viewport

We added the ability to modify placement and scatter nodes in the viewport using handles for rotation, scaling and translation. Since OctaneRender is not a 3D modeling application, this will not modify any mesh geometry, but we may add an additional transformation pin to mesh and volume nodes in the future. That way you wouldn't have to explicitly create placement or scatter nodes to be able to move meshes around.

You can choose the move, rotate or scale tool in the render viewport:

handles all.png

You can display a small representation of the world coordinate axes in the top left corner:

handles world axis_cr.png

You can also specify whether the coordinate system that is used is aligned with the world axes or with the local axes:

handles move local world.png

The move tool allows movement along each axis, or movement constrained to the plane defined by two axes, using different parts of the control. We will also add a handle to allow free movement of the object while keeping its camera depth.

The rotation tool allows rotation around each axis via the axis rotation bands, free rotation via the inner orange circle and rotation around the camera-object axis via the outer yellow circle.

The scale tool allows scaling along each axis by selecting one of the axis lines, scaling constrained to two axes by selecting one of the corners near the origin, and uniform scaling by selecting one of the axis end handles. We will change this behaviour by adding a new control at the origin for uniform scaling and by combining each axis line and end handle into one unit that controls scaling along that axis.

New emitter options

We re-labelled the emitter option "Cast illumination" to "Visible on diffuse", because that's what it does. We also added the complementary option "Visible on specular" to the emission nodes, which controls the visibility of emitters on specular surfaces. And last but not least, we added an option "Cast shadows" to the emission nodes. If this option is disabled, no shadow rays are traced during the direct light calculation, causing objects to lose their shadows.
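To make the interplay of these three options concrete, here is a minimal sketch (our illustration based on the description above, not Octane's actual code) of how the flags could gate an emitter during shading:

```python
# Illustrative sketch of the three emitter flags; the function names and
# the "lobe" parameter are our assumptions, not Octane API names.

def emitter_visible(lobe, visible_on_diffuse, visible_on_specular):
    """Does the emitter show up in a reflection of the given lobe type?"""
    if lobe == "diffuse":
        return visible_on_diffuse
    if lobe == "specular":
        return visible_on_specular
    return True

def direct_light(emission, occluded, cast_shadows):
    """Direct light contribution: with "Cast shadows" disabled, no shadow
    ray is traced, so occlusion is ignored and objects lose the shadows
    this emitter would otherwise cast."""
    if cast_shadows and occluded:
        return 0.0
    return emission
```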

This example scene consists of a simple ground plane, an emitter and three spheres with a specular, a diffuse and a glossy material:

visible on specular on.png

Disabling "Visible on specular" makes the specular reflections of the emitter disappear, but keeps the diffuse reflections:

visible on specular off.png

Disabling "Visible on diffuse" causes the diffuse reflections to disappear and only keeps specular reflections:

visible on diffuse off.png

Disabling "Cast shadows" removes the shadows of the spheres:

cast shadows off.png

Double-sided emitters

We also added an option to make emitters double-sided. The emitter on the left of the example below has the option "Double-sided" enabled:

double sided emitter.png

Transparent emission

We added a new option "Transparent emission" to the emission nodes, which allows you to choose whether the emission power should be scaled with opacity or not. Until now, emitters were always taken fully into account even if they were transparent, i.e. they behaved as if "Transparent emission" were enabled. This is useful if you want to control the light in your scene without your emitters being directly visible. But there are cases where transparent emitters should not emit light, for example if you would like to modulate an emitter surface using opacity. Take this opaque emitter, for example:

transparent emission opaque.png

If you then give it a fancy opacity map with "Transparent emission" enabled (the default), you can see that the reflection doesn't really match the emitter:

transparent emission on.png

To solve this, you can disable the option "Transparent emission", which reduces or eliminates the emission of the transparent parts of the emitter:

transparent emission off.png
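The interaction between emission and opacity can be summarised in a tiny sketch (assumed behaviour, derived from the description above, not Octane source):

```python
# "Transparent emission" enabled: full emission power regardless of
# opacity. Disabled: emission is scaled down by the surface opacity, so
# fully transparent parts stop emitting.

def effective_emission(power, opacity, transparent_emission=True):
    return power if transparent_emission else power * opacity
```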

We also improved the rendering of transparent emitters in general. Until now, transparent emitters were not taken into account if they were hit by an indirect (eye) ray, which resulted in weird artifacts like in the rendering below, where the left emitter has an opacity of 1 and the right emitter has an opacity of 0.01 (so you can still see it):

fixed transparent emission 3.06.png

As you can see, the floor close to the transparent emitter is darker than the floor further away from it, which of course doesn't make a lot of sense. In 3.07 we fixed the problem by properly taking the transparent emitter into account when it is hit by an indirect (eye) ray:

fixed transparent emission 3.07.png

Please be aware that this fix can cause older scenes to render differently than in previous versions, and you may have to adjust the brightness of some emitters.

Improved rendering of non-uniformly scaled emitters

Until now, non-uniformly scaled emitters were rendered incorrectly: the direct light sampling was done non-uniformly, too, resulting in artifacts in some corner cases like the example below:

stretched emitter 3.06 ann.png

You could work around the issue by setting the sample rate of the emitter to 0, thus disabling direct light sampling for it. Unfortunately, this often results in an excessive amount of noise:

stretched emitter 3.06 no DL ann.png

With the fix, stretched emitters are now sampled correctly:

stretched emitter 3.07 ann.png

As you can see above, the difference can be quite large in some cases, which means that older scenes may render slightly differently, but we don't see a way to avoid this.
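To illustrate why the scale matters, here is a hedged sketch (our illustration, not Octane's implementation) of area-proportional emitter sampling: triangle areas have to be computed in world space, because a non-uniform scale changes the relative areas of the triangles, so picking triangles proportional to their object-space areas would over- or under-sample parts of the emitter.

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tri_area(p0, p1, p2):
    """Area of a triangle: half the length of the edge cross product."""
    c = cross(sub(p1, p0), sub(p2, p0))
    return 0.5 * math.sqrt(c[0] ** 2 + c[1] ** 2 + c[2] ** 2)

def triangle_probs(triangles, transform):
    """Probability of picking each emitter triangle for direct light
    sampling, proportional to its world-space (post-transform) area.
    Using object-space areas instead goes wrong as soon as the transform
    scales non-uniformly."""
    areas = [tri_area(*[transform(p) for p in tri]) for tri in triangles]
    total = sum(areas)
    return [a / total for a in areas]
```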

Importance sampled texture environment for all textures

Until now, only image textures could be used for importance-sampled environments, but now you can mix and match different textures while still using importance sampling. Since importance sampling of the environment only makes sense if the environment is not constant, Octane tries to figure out whether the input texture actually is constant and implicitly disables importance sampling if it is. Still, it's probably best to manually disable the "importance sampling" option if you don't want to use it.
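For the curious: importance sampling an environment typically means building a discrete distribution over texel luminance and drawing texels proportional to it. A minimal sketch (our illustration, not Octane's code) also shows why a constant texture makes the whole exercise pointless:

```python
import bisect

def build_env_cdf(luminance):
    """Cumulative distribution over environment texels. Returns None for
    a constant texture, where importance sampling degenerates to uniform
    sampling and can simply be disabled."""
    if max(luminance) == min(luminance):
        return None
    total = float(sum(luminance))
    cdf, running = [], 0.0
    for lum in luminance:
        running += lum
        cdf.append(running / total)
    return cdf

def sample_texel(cdf, u):
    """Draw a texel index for a uniform random number u in [0, 1):
    brighter texels cover a larger slice of the CDF and are picked
    proportionally more often."""
    return bisect.bisect_left(cdf, u)
```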

User defined instance IDs

We added the possibility for plugins (or Lua scripts) to explicitly define instance IDs in all geometry nodes except the volume node. This alone isn't terribly useful, but combined with two new textures, plugins can now explicitly define colours for instances. In this simple example, a Lua script graph creates a grid of instances of some input geometry (a cube) and assigns them instance IDs that match the image used in the instance colour texture:

instance color_cr.png

This example uses the instance color texture, which holds an image and maps instance IDs to pixels of the image, starting at the bottom left and counting in row-major order to the top right. After that, it wraps around and starts at the bottom left again. This texture stores explicit colors in an image and thus requires additional data storage. As an alternative you can use the instance range texture, which converts the instance ID into a greyscale colour by mapping it to the range 0..maximum ID. This has the advantage that no additional data needs to be stored. Plugging this texture into a gradient texture allows you to use the instance IDs for some useful effects:

instance range_cr.png
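The two ID-to-colour mappings described above can be sketched as follows (the exact indexing convention is our reading of the text, not a documented formula):

```python
def instance_color_uv(instance_id, width, height):
    """Instance color texture: map an instance ID to a pixel of the
    image, row-major from the bottom left to the top right, wrapping
    around after width * height IDs."""
    i = instance_id % (width * height)
    return (i % width, i // width)  # (column from left, row from bottom)

def instance_range_grey(instance_id, max_id):
    """Instance range texture: map the ID linearly into [0, 1] greyscale,
    with no extra image data required."""
    return instance_id / max_id if max_id > 0 else 0.0
```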

Giving instances an ID and then assigning a colour to that ID via textures allows you to specify more than one colour per instance, which wouldn't be possible if the colour were stored directly with the geometry.

Some more technicalities: the user instance ID is set to -1 by default, and all negative instance IDs are considered invalid. When the scene graph is traversed, parent geometry nodes that define a valid instance ID (i.e. >= 0) override the instance IDs of their children. This way you can group geometry and give it one ID. If no valid ID is defined for an instance, it falls back to 0.
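These rules can be sketched as a small resolution function over the path from the scene-graph root down to an instance (our illustration of the stated rules, not Octane code):

```python
def resolve_instance_id(path_ids):
    """Resolve the effective instance ID along a root-to-leaf path of
    per-node user IDs. The closest valid ID (>= 0) to the root wins,
    because parents override their children; if no node on the path
    defines a valid ID, fall back to 0."""
    for node_id in path_ids:  # root first
        if node_id >= 0:
            return node_id
    return 0
```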

Baking texture

We added a texture that allows you to bake an arbitrary texture into an image. We implemented this mainly to allow the use of procedural textures in displacement mapping, but it may have other uses, too. The baking uses the texture preview system, and the result appears like an image texture to the rest of the system. The baking is done on-the-fly whenever an input changes. The internal image is not stored in the project and thus needs to be recalculated whenever the project is loaded.

With this texture node, you can utilize the full power of procedural textures and combine them with displacement, which makes it super easy to create alien landscapes like this:

baking texture_cr.png

UVW transform texture

On request of the Unity plugin developers, we also added a UVW transform texture node. It allows you to specify a UVW transform that is applied in addition to the UVW transform of its input texture (tree). One use case is to combine different scales/orientations/translations of the same image texture to create a larger detail range without creating patterns that are too obvious. Take this image for example:

uvw transform ori.png

If you scale it down by a factor of 2 to create finer details, you end up with an easy-to-spot repetitive pattern:

uvw transform 2x2.png

But combining these two makes the pattern less obvious:

uvw transform overlay.png

This is the corresponding node graph for the above image:

uvw transform node tree.png

The nice thing about it is that you can now replace the one texture with another in a single step. Obviously, you can mix and match things even more to make the result less regular, stick it all into a nested node graph and keep just the texture and maybe some parameters as inputs.
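The idea behind the node graph above can be sketched in a few lines (illustrative only; the texture function and blend weight are our assumptions):

```python
def tiled(tex, u, v):
    """Sample a texture with simple repeat wrapping."""
    return tex(u % 1.0, v % 1.0)

def detail_blend(tex, u, v, scale=2.0, weight=0.5):
    """Mix the texture with a copy of itself sampled at a different UV
    scale (the UVW-transformed input), which breaks up the obvious
    tiling of either scale on its own."""
    base = tiled(tex, u, v)
    fine = tiled(tex, u * scale, v * scale)
    return (1.0 - weight) * base + weight * fine
```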

Baking UV transform

The object layer node now supports a new transform that affects the way the UVs from that object layer are projected into UV space when rendered using the baking camera.

This makes it possible to bake entire scene light maps, including all render passes, in one single render without any additional compositing.

Next to the already existing "Baking group ID", we added a new pin "Baking UV transform" to the "Baking settings" group of the object layer node.

ol_baking_settigs.png

The value specified as the baking UV transform will be used by the baking camera to place all UVs that belong to the geometry in that object layer in UV space.

As an example, if you wanted to bake each object of this scene:

cornell_bake.png

Previously, each of them would occupy the entire UV space, so each would have to be baked independently. Now they can be arranged into their own areas:

ol uv transform combine.png

This allows baking a full scene lightmap atlas, which can be mapped back to the geometry using the exact opposite transform.

ol uv transform result.png
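Conceptually, the baking UV transform places each object layer's [0,1]&#178; UVs into its own atlas region, and the "exact opposite transform" is its inverse. A minimal 2D scale-plus-offset sketch (our simplification; Octane's pin takes a full transform):

```python
def apply_uv_transform(uv, scale, offset):
    """Place an object layer's [0,1]^2 UVs into its own atlas region."""
    return (uv[0] * scale[0] + offset[0], uv[1] * scale[1] + offset[1])

def invert_uv_transform(uv, scale, offset):
    """The exact opposite transform: map atlas coordinates back to the
    object's original UV space when applying the baked lightmap."""
    return ((uv[0] - offset[0]) / scale[0],
            (uv[1] - offset[1]) / scale[1])
```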

Direct configuration of net render daemons

In the net render settings you can now specify the host names or IP addresses of net render daemons directly, which is helpful if they are located outside of the subnet the net render master is connected to:

net render daemon config.png

To avoid having to enter all the daemons on every master computer, you can export/import this daemon list and share it between computers.

Support for FBX and glTF files

We added support for loading FBX and glTF files. Both file formats load - similar to Alembic - as a geometry archive, i.e. a node graph with "lots of stuff" inside, providing material and object layer input linkers as well as camera and geometry output linkers.

Although we added support for bones (see below), we don't support inverse kinematics (IK) animations, which means you will have to convert your IK animations to forward kinematics (FK) to make your FBX files work in Octane.

We also developed a geometry exporter for the plugin API to allow plugins to export to FBX. This work isn't completely finished yet, but it should be completed soon. Originally, we hoped to get a better and faster alternative to Alembic, but it turns out that FBX has its own suite of issues and idiosyncrasies, which you only figure out when you do the actual implementation. For example, we had to add our own extensions to allow the export of all the geometry features Octane currently supports (e.g. hair). These will of course not be loaded (or not loaded correctly) when such an FBX file is opened in some other, non-Octane application.

Another problem is that vertex animations are not stored in the FBX file itself, but in separate vertex cache files. That's not necessarily a problem, but causes some issues when FBX files are packed in ORBX packages. Since those vertex cache files can only be loaded from the file system and not directly from the package, we have to unpack them on-the-fly while an ORBX package with vertex cache files is open. These temporary files will be deleted when the package is closed.

Having learned that lesson, it seems we will have to implement our own geometry format if we ever want to allow seamless exchange of scenes and assets between different Octane applications. It will be based on our node system and thus should be faster to export and import, and it won't require us to jump through hoops to make it work somehow. It's not clear yet when that is going to happen, but we hope it won't be too far in the future.

Bone deformations

To support FBX and glTF we needed to add support for bone deformations. This way, character animations can be stored much more compactly than if the deformed geometry had to be baked out, as in Alembic. It will also allow some potential optimizations in the geometry compilation in the future. Since you can't set up or edit bone deformations in the Octane node graph editor, you can ignore the two geometry nodes you can find under "Geometry|Bone deformation" and "Geometry|Joint".
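Bone deformations are commonly implemented as linear blend skinning, where each vertex is moved by a weighted sum of its bones' transforms. A minimal sketch of that standard technique (our illustration, not Octane's implementation):

```python
def skin_vertex(rest_pos, influences):
    """Linear blend skinning sketch. influences is a list of
    (bone_matrix, weight) pairs, where bone_matrix is a row-major 3x4
    matrix (rotation + translation) and the weights sum to 1."""
    rx, ry, rz = rest_pos
    x = y = z = 0.0
    for m, w in influences:
        # Transform the rest position by this bone's matrix...
        px = m[0][0] * rx + m[0][1] * ry + m[0][2] * rz + m[0][3]
        py = m[1][0] * rx + m[1][1] * ry + m[1][2] * rz + m[1][3]
        pz = m[2][0] * rx + m[2][1] * ry + m[2][2] * rz + m[2][3]
        # ...and accumulate it, weighted by the bone's influence.
        x += w * px
        y += w * py
        z += w * pz
    return (x, y, z)
```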

