What’s new in Qt 3D with Qt 5.9?

I am pleased to announce that there are lots of new goodies coming along with Qt 3D as part of the Qt 5.9.0 release as well as the usual round of bug fixes. Engineers from KDAB and The Qt Company have been hard at work over the last months (and will be for some time to come) adding in some of the most frequently requested features. In this article, we will give a brief overview of some of the bigger features coming your way.

Using Qt Quick within Qt 3D

A very common request that we have had since we first released Qt 3D is to have the ability to embed Qt Quick scenes within a Qt 3D application. The use cases for this are wide ranging, but a typical example is to place a 2D user interface onto the surface of some planar piece of geometry in your 3D world to simulate an “in-3D-universe-control-panel”. As VR/AR become more popular, this use case is likely to increase in importance.

I’m happy to say that embedding Qt Quick into a Qt 3D scene is now easily possible. Even better, it is possible to interact with such 2D UIs within the 3D world! The Scene2D QML Example shows how to use the new Scene2D type to render Qt Quick to a texture, apply it to a 3D object, and ensure that events are forwarded to the QQuickItems to allow interactivity.

In short, we can render a Qt Quick 2 scene into a texture with Scene2D and register a Qt 3D entity, here called “cube”, to receive the forwarded events.

An ObjectPicker component on the “cube” entity is picked up by the Scene2D’s “entities” property and used as a source for events.
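A condensed QML sketch of this setup (the texture size, the Qt Quick content, and the import versions are illustrative placeholders; see the Scene2D QML Example for a complete program):

```qml
import Qt3D.Core 2.0
import Qt3D.Render 2.9
import Qt3D.Extras 2.9
import QtQuick 2.7
import QtQuick.Scene2D 2.9

Entity {
    // Render a Qt Quick scene offscreen into a texture
    Scene2D {
        output: RenderTargetOutput {
            attachmentPoint: RenderTargetOutput.Color0
            texture: Texture2D {
                id: offscreenTexture
                width: 512
                height: 512
            }
        }
        // Entities whose ObjectPicker events are forwarded to the Qt Quick scene
        entities: [ cube ]

        // The Qt Quick content that ends up on the cube's faces
        Rectangle {
            width: 512; height: 512
            color: "steelblue"
            Text { anchors.centerIn: parent; text: "Hello from Qt Quick" }
        }
    }

    Entity {
        id: cube
        components: [
            CuboidMesh {},
            TextureMaterial { texture: offscreenTexture },
            // Picked up by Scene2D's "entities" property as the event source
            ObjectPicker { hoverEnabled: true; dragEnabled: true }
        ]
    }
}
```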

The counterpart to Scene2D is Scene3D which allows embedding a Qt 3D scene inside of a Qt Quick 2D UI. Hopefully, in the future, we will be able to get a seamless blend between Qt Quick and Qt 3D.

Physically Based Rendering

Modern rendering engines have largely adopted Physically Based Rendering (PBR) to improve the visual appearance of their results and to make it easier for artists to get predictable results. PBR encompasses a family of techniques, but essentially it means grounding the shading calculations in physically plausible models rather than the ad hoc lighting models of the past. For more information and background on PBR, please take a look at this talk from QtCon 2016.

Qt 5.9 adds two new materials to Qt 3D: QMetalRoughMaterial and QTexturedMetalRoughMaterial, which implement a PBR algorithm with Blinn-Phong specular highlights. There is also a new light type, QEnvironmentLight, that can be used in conjunction with the above materials to provide nice environmental reflections and image based lighting (IBL).
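As a condensed QML sketch of how these pieces might fit together (the environment map file names and the material property values are illustrative placeholders; see the Qt 3D Extras documentation for the full API):

```qml
import Qt3D.Core 2.0
import Qt3D.Render 2.9
import Qt3D.Extras 2.9

Entity {
    // Environment light providing image based lighting for the PBR materials.
    // The .dds paths stand in for pre-filtered environment map textures.
    components: [
        EnvironmentLight {
            irradiance: TextureLoader { source: "qrc:/env_irradiance.dds" }
            specular:   TextureLoader { source: "qrc:/env_specular.dds" }
        }
    ]

    Entity {
        components: [
            SphereMesh { radius: 1 },
            MetalRoughMaterial {
                baseColor: "red"
                metalness: 0.7   // 0 = dielectric, 1 = metal
                roughness: 0.2   // 0 = mirror-like, 1 = fully diffuse
            }
        ]
    }
}
```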

The source code for these examples is available on KDAB’s GitHub.

Key Frame Animations

Qt Quick has long had support for easily specifying animations using an intuitive API. This can be used along with Qt 3D to animate properties. A frequent request from digital content creators is for support of key frame animations. Also, given the highly threaded architecture of Qt 3D, it would be nice if we could find a way to improve how animations scale compared to having them evaluated by the main thread. The technology preview of the new Qt3D Animation module achieves exactly this.

Whereas the Qt Quick animation API (and the QPropertyAnimation API in C++) specifies an animation curve between the start and end times, with key frame animations we instead specify the property values at specific times known as key frames. To evaluate the animation at times that do not exactly correspond to key frames, we use interpolation (or extrapolation). For now, Qt 3D Animation implements Bezier curve interpolation, but we will expand this with other types for Qt 5.10. However, with the key frames acting as Bezier curve control points, it is already possible to achieve a wide range of animation curves.

The animation curves are specified using QAbstractAnimationClip, which boils down to either building the data programmatically with QAnimationClip or loading baked animation data exported from a digital content creation (DCC) tool with QAnimationClipLoader. Qt 3D provides a handy add-on for Blender to export the animation data of objects into the JSON format consumed by Qt 3D.

The actual playback of animation data is handled by a new QComponent subclass called QClipAnimator. As with all other QComponents, the clip animator must be aggregated by an entity to give behaviour to that entity. In addition to the raw animation data, the clip animator also needs a way to specify which properties of which target objects should be updated by the animation. This is done with the QChannelMapper and QChannelMapping classes. Please take a look at this blog post showing a simple application that animates a character using the Qt 3D Animation framework. The source code for this can be found on KDAB’s GitHub.
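Putting these pieces together in QML might look roughly like this (the clip file name, channel name, and target property are illustrative; the clip data would come from a DCC export as described above):

```qml
import Qt3D.Core 2.0
import Qt3D.Extras 2.9
import Qt3D.Animation 2.9

Entity {
    components: [
        CuboidMesh {},
        PhongMaterial {},
        Transform { id: cubeTransform },
        ClipAnimator {
            running: true
            // Baked key frame data exported from a DCC tool
            clip: AnimationClipLoader { source: "qrc:/cube-animation.json" }
            // Map the clip's "Location" channel onto the transform's translation
            channelMapper: ChannelMapper {
                mappings: [
                    ChannelMapping {
                        channelName: "Location"
                        target: cubeTransform
                        property: "translation"
                    }
                ]
            }
        }
    ]
}
```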

The clip animator, animation data, and channel mapper types represent different concepts which are all merged in the familiar Qt Quick and QPropertyAnimation APIs. Separating these out allows us more control and reuse of the individual parts but we will look at adding some convenience API on top for common use cases.

The QClipAnimator class simply plays back a single animation clip. But what if we wish to combine multiple animation clips? We could drop down a level and craft new animation data that combines the clips, but that’s tedious, error-prone, and doesn’t work well when we frequently change how the clips are combined. Enter QBlendedClipAnimator. Instead of a single animation clip, the blended clip animator takes a pointer to the root of an animation blend tree: a data structure describing how an arbitrary number of animation clips can be blended together. The blend tree consists of leaf nodes representing animation clips and interior nodes representing blending operations, such as linear interpolation or additive blending. For more information on blend trees, please take a look at the Qt 3D Overview documentation.
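As a rough sketch of the QML shape of such a blend tree, blending a walk clip and a run clip could look something like the following (type and property names are taken from the technology preview and may still change; the clip files are placeholders):

```qml
BlendedClipAnimator {
    running: true
    channelMapper: ChannelMapper { /* mappings as for ClipAnimator */ }
    // Root of the blend tree: linearly interpolate between two clips
    blendTree: LerpClipBlend {
        blendFactor: 0.5   // 0 = only startClip, 1 = only endClip
        startClip: ClipBlendValue {
            clip: AnimationClipLoader { source: "qrc:/walk.json" }
        }
        endClip: ClipBlendValue {
            clip: AnimationClipLoader { source: "qrc:/run.json" }
        }
    }
}
```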

In order to have performance that scales, the Qt 3D Animation framework evaluates the animation curves on the thread pool utilised by the Qt 3D backend. This allows us to scale across as many CPU cores as are made available. Also, it is often the case that the object tree on the main frontend thread doesn’t need to know or care about the intermediate property updates during an animation. Therefore, by default, only the final change from an animation is sent to the frontend objects. If you do need the intermediate values, perhaps because you have property bindings depending upon them, then you can subscribe to them via the new QNode::defaultPropertyTrackingMode property. The default behaviour is consistent with the Qt Quick Animator types.

In addition to key frame animations, the Qt 3D Animation module introduces initial support for morph target animations, which allow blending mesh geometries between a set of specified target shapes. The blending can be controlled via the key frame animations described above. This is likely to be moved into the Qt 3D Render module once we flesh out the support a little more.

Level of Detail

When building larger 3D scenes it can be useful to only render the full resolution geometry with high-resolution textures when the object is close enough to the camera to justify it. When the objects are farther away, and the projected screen size is small, it makes sense to use lower fidelity models, textures, and perhaps even shaders to reduce the overall rendering cost and therefore increase performance. Qt 5.9 introduces the QLevelOfDetail and QLevelOfDetailSwitch components to enable this.

The QLevelOfDetail component allows you to set an array of camera-to-object distance thresholds or projected screen pixel sizes. As these thresholds are crossed, the component adjusts its currentIndex property to match the corresponding threshold. You can bind to this property (or connect to its notification signal) to do whatever you like in response: for example, use a loader to load a different mesh, set smaller textures, or switch to a simpler material or shader. It’s up to you.

The QLevelOfDetailSwitch component works in a similar way but implements a common use case where the children of the Entity upon which it is aggregated are all disabled except for the one matching the currentIndex property. This allows you to quickly and easily get a level of detail system into your application.
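A minimal QML sketch, assuming a camera with id mainCamera and three sphere entities of decreasing fidelity (the threshold values are arbitrary):

```qml
import Qt3D.Core 2.0
import Qt3D.Render 2.9
import Qt3D.Extras 2.9

Entity {
    components: [
        LevelOfDetailSwitch {
            camera: mainCamera
            thresholdType: LevelOfDetail.DistanceToCameraThreshold
            thresholds: [20, 35, 50]   // distances from the camera, illustrative
        }
    ]

    // One child entity per detail level; all but the one matching
    // currentIndex are disabled automatically.
    Entity { components: [ SphereMesh { rings: 50; slices: 50 }, PhongMaterial {} ] }
    Entity { components: [ SphereMesh { rings: 20; slices: 20 }, PhongMaterial {} ] }
    Entity { components: [ SphereMesh { rings: 8;  slices: 8  }, PhongMaterial {} ] }
}
```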

Text Support

Although we have support for embedding Qt 3D within Qt Quick and vice versa, it is still nice to be able to use textual content directly within your 3D virtual worlds. Qt 5.9 introduces two ways to do this (beyond Scene2D).

First, there is 2D planar text implemented with distance fields, just like the default text rendering in Qt Quick. This is done by way of the QText2DEntity class (Text2DEntity in QML):

Text2DEntity {
    id: text
    text: "Hello World"
    width: 20
    height: 10
}

This can be positioned freely in 3D space just like any other entity, and so this becomes a first class citizen of your 3D world.

For those of you who want solid, extruded, three-dimensional text, there is the QExtrudedTextGeometry class. For even more convenience, there is the analogous QExtrudedTextMesh, which is a subclass of the QGeometryRenderer component and can therefore be directly aggregated by an entity.
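In QML, a sketch of an extruded text entity might look like this (the font, depth, and scale values are illustrative):

```qml
Entity {
    components: [
        ExtrudedTextMesh {
            text: "Qt 3D"
            depth: 0.45            // extrusion depth along the z axis
            font.family: "Arial"
        },
        PhongMaterial { diffuse: "orange" },
        Transform { scale: 0.5 }
    ]
}
```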

If you need any other kinds of text support in Qt 3D please let us know.

Summary

I hope that this has given you a good overview of the new features coming to Qt 3D with Qt 5.9. We think that you will find them useful in building ever more ambitious 3D projects with Qt. We have even more surprises in the pipeline for Qt 5.10 so watch this space.

7 comments

Nice, Qt 3D 2.0 is getting better all the time. Here’s a thought: a mode (disabled by default) where smaller images for use in LOD are automatically generated (and cached) by Qt 3D rather than manually by the coder. There could be some compression ratio property to tune it or something. idk, maybe you guys already have this in the works.

Could you maybe share your proof of concept for VR? I’m working on VR support for qt3d in my spare time too. My current state can be found on https://github.com/dabulla/qt3d-vr/tree/virtual-reality/examples/qt3d/virtualreality and https://github.com/dabulla/qt3d-vr/tree/virtual-reality/examples/qt3d/vr-window.
Headtracking and rendering works in OpenVR and Ovr natively (Vive and Oculus), with OSVR in mind.
Currently there is about 3 frames of lag in my implementation and I guess this is due to frontend/backend synchronization. I would be very interested in how you reduced this lag. Or even how it can be measured in Qt3D.
I wanted to use atomic counters to profile a roundtrip from input (shader Parameter in frontend) to screen. However, this is a separate challenge.
If you find something useful in my example I’d be happy to contribute (I even followed the Qt style guide most of the time).
PS.: Although I cloned Qt3D completely, the examples run with the unchanged Qt3D. Only private headers of Qt3D are needed at the moment.