The latest from the DEVELOP3D Blog:

Finally got around to my subscription update with Inventor 2010 (you need both SP1 and the Subs pack), so I’ve spent most of the day playing with Fusion Tech Preview 2.0, which was announced earlier this week and made live at the same time. What I’ve found has intrigued me, but before we get onto that, let’s look at what we’re dealing with.

Fusion TP2 is about two things, presented here in no particular order. The first is a set of changes and refinements to the user interface, which Autodesk is testing out with this project to see what people like, what they don’t and what sticks. The changes for this update include a rework of the geometry selection tools (the feature recognition for selection is still a bit flaky), but for me the work done on the triad (there’s a small glyph you hit to align it to other geometry) is what makes the whole direct editing thing work as you would expect.

While these are interesting, the big news is the introduction of the Change Management add-in for Inventor. Essentially, this enables round tripping of data between Fusion and Inventor. You have an Inventor part you want to edit in Fusion, so you load it and edit it directly, without recourse to the geometry history. You save it, load it back into Inventor and rationalise the changes to update the history and feature tree. The system tracks what you’ve edited, what you’ve added and what you’ve deleted, and updates the history and feature data accordingly.
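Purely as an illustration of the bookkeeping involved, a round trip like that implies a change log of some kind travelling with the model. The structure and names below are my own invention for the sketch, not Autodesk’s actual format:

```python
# Hypothetical sketch of round-trip change tracking between a direct
# editor (Fusion) and a history-based modeller (Inventor). The record
# structure and names here are illustrative only.

from dataclasses import dataclass

@dataclass
class Change:
    kind: str     # "edited", "added" or "deleted"
    feature: str  # name of the affected feature in the history tree

def summarise(changes):
    """Tally changes by kind, as a reconciliation pass might."""
    tally = {"edited": 0, "added": 0, "deleted": 0}
    for change in changes:
        tally[change.kind] += 1
    return tally

# A log like the edits in this sort of session: one hole deleted,
# a couple of features directly edited.
log = [
    Change("deleted", "CounterboredHole"),
    Change("edited", "Hole2"),
    Change("edited", "BossFaces"),
]
print(summarise(log))  # {'edited': 2, 'added': 0, 'deleted': 1}
```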

This is the background to the Fusion name: Autodesk is attempting to fuse direct modelling with history-based modelling tools. According to the powers that be, the eventual goal is not two separate applications, but to prove out the technology and integrate it into core Inventor rather than providing it as a standalone application. Hence it’s a Technology Preview, not a Product Preview.

So, let’s have a look and see what we can do.

1. Let’s start with a simple part, one of Inventor’s sample parts. As you’ll see, it’s not particularly complex. The feature tree shows a few extrudes, a few holes and that’s about it. Easy meat for Fusion to edit.

2. Loaded up into Inventor Fusion, we started by removing this hole feature. It’s an easy job: you simply select the faces you want to take out, and the system removes them, removes the internal boundaries and closes the surfaces out.

3. Once that counterbored hole was gone, I shifted the remaining hole along the edge and back into the part a little.

4. Next I grabbed this feature (eight faces in all) and shifted it a few millimetres up the face. The new triad orientation tool works very nicely indeed.

5. The final move was to take these two other holes and use the Press/Pull command to make them both 2mm larger by offsetting the internal faces. Not an accurate way to redesign a hole, but for this purpose it’ll do.

There you have it. Four edits have been made: two features have moved, one has been deleted and two holes have been resized. It took a couple of minutes at most to effect these changes. So let’s see what happens when you read the part back into Inventor Professional.

When you read the part into Inventor, the Change Management add-on kicks into gear and inspects the part you’re loading. What’s interesting is that because I was using .ipt files and Fusion saves out .dwg files, I was not actually loading the original file back into Inventor. Evidently, when making edits, Fusion stores the original feature information somewhere alongside the new edits. So what happens?

6. The Change Management add-on presents you with a list of edits made to the model and shows you graphically on screen where those changes occur: blue/yellow shows before/after for geometry changes and red shows deletes. In the image above you can clearly see the four edits we made.

You have the ability to control exactly what happens with each detected change. You can have it executed, so the feature gets rebuilt; you can ignore it; or, if you encounter problems, you can choose to have the faces extracted and let the system try to patch them back into the model with the Sculpt command.
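As a rough sketch of that per-change decision logic (the three option names come from the description above; the function and its return strings are my own, not Autodesk’s implementation):

```python
# Illustrative sketch of the three reconciliation options per detected
# change: execute (rebuild the feature), ignore it, or extract the
# faces and patch them back in via the Sculpt command.
# This function and its messages are hypothetical.

def reconcile(change, action):
    if action == "execute":
        return f"rebuilt feature for {change}"
    if action == "ignore":
        return f"skipped {change}; history tree left untouched"
    if action == "extract":
        return f"extracted faces for {change}, patched back in with Sculpt"
    raise ValueError(f"unknown action: {action!r}")

for action in ("execute", "ignore", "extract"):
    print(reconcile("hole deletion", action))
```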

Thinking this was a relatively simple set of changes, I hit Apply All.

It broke. And while it rebuilt one feature out of those changes, very little else worked.

I was surprised, so I did some digging.

What it boils down to is that, because of the nature of whatever Autodesk is doing with Fusion and the edits it makes, there are at present very explicit limitations on what can and can’t be done when you’re moving data between Fusion and Inventor and hoping to have the history and feature tree reconciled and maintained.

Your features have to be features in their own right. There can be very little interdependency between features: no patterns, no mirrors, nothing. If you try to edit a child of a pattern, you get an error. If you try to edit the parent, you have problems with updates. I looked into the simple part we have here.

If you look at the history tree, you’ll find it’s all driven by patterns, and there are three core features that are repeated throughout the part. I’ve colour-coded them above and below - the grey is the base extrude, the green features are the parents, the red features are the children. Below shows what happened when I just edited the parents. Because patterns are based on linear references, the pattern tries to update too, and that breaks the history tree.

Now, I don’t want you to think I got the hump, threw the toys out of the pram and proclaimed “this Fusion stuff doesn’t work”. I didn’t. I tried out some more parts, built some myself, and found that there are instances where the Change Management tools actually work, and work well.

But these are very simple operations in very rare conditions, where the feature has almost no effect or influence on other parts of your geometry. For anything else, with this first release (and that’s key to note - this is the first public showing of a new technology, NOT a product), you have to understand exactly how your changes will affect your history tree if you want to reconcile even the simplest of edits.

This struck me as odd. Part of the pitch for direct modelling is that it’s easier to use when you either aren’t aware of how a part’s been built, have forgotten, or know that a simple change you need to make is going to royally mess with your history-based part. The irony is that if you do use a technology like Fusion, then unless Autodesk pulls something out of the bag in terms of improving this Change Management technology, you’re going to have to be just as familiar with your part’s build history, its method of construction and the effects a seemingly small edit will have - at which point you might as well just do it in core Inventor.

What should also be noted is that this doesn’t change Fusion’s potential as a direct modelling technology. The user interface, although much the same, feels slicker, with a nice workflow and use model now developing, and its power is there. The ability to open a part, make edits and get on with the job at hand, without worrying about the data source, is a good and valid one for many - and Fusion executed all the operations I wanted to do perfectly. It seems more consistent, robust and repeatable than the first Tech Preview - or maybe that’s just me.

But in the end, I can’t help but wonder where the Change Management technology is going and whether it’s actually worth the effort, or indeed even possible, to get it to work in a reasonably functional manner. Kudos to Autodesk for trying to pull off what’s an incredibly complex thing. The good news is that this is all available for free, so users can see what it can do, try it out and feed back into the process. Autodesk Labs (as with all Labs sites) is all about trying things out. Some things work and some don’t. Some make it to market, some don’t. Portions of some products get shipped while the rest gets scrapped. The good thing is that these days, we all get to try it out and see for ourselves.

Looking at the industry as a whole, there are many different approaches and different ways that vendors are looking at the direct modelling world, and this is just one avenue of experimentation. For those with an interest in how product modelling is moving forward, these are truly interesting times.

I got briefed about this last Friday (as did the rest of the press/blogger community, I’m sure), but I wanted to wait till it was up and live on the Autodesk Labs web-site and I’d had a chance to play with it. When Autodesk publicly and officially announced the Inventor Fusion Technology Preview, much was made of the name and the aims of the technology.

This is Autodesk’s answer to the likes of Synchronous Technology, Instant3D and all the other non-history-based modelling technologies out there. What the company has focused on, with both the messaging and the name, is the ability to rationalise traditional feature- and history-based model edits with the history-less modelling practices found in Inventor Fusion.

Essentially, you could edit a traditional Inventor model in Fusion, without using features of any kind, then pass it back to Inventor to rationalise the changes and update the history tree, integrating those changes back into the history-based model. This isn’t the history-tree-appending method used by the likes of NX and SolidWorks, but something more integrated at the very core of your modelling history. The problem was that that capability wasn’t available in the first Technology Preview - which many missed. That changes today with the introduction of the Change Manager add-on for Inventor alongside Tech Preview 2.0.

Today, Tech Preview 2 has gone live and you can download it and play with it. Again, you have the ability to try Inventor Fusion if you’re from one of the qualifying countries, but you’ll need to be a subscription customer using the Inventor 2010 Subscription release if you want to try the Change Manager. The two pieces of code (Fusion 2.0 and Change Manager) are delivered as separate zips/installs.

It’s also worth remembering that separate applications are not the end goal of this process. This is a Technology Preview, and separating out the Fusion tech from core Inventor allows the team to play with and distribute the code and see how users like the interaction between the two different modelling methodologies - but the end goal is that Fusion technology will be built directly into Inventor, not sold as a separate application.

We’ll be trying it out later on today, once we’ve updated Inventor to the latest release (without which the Change Manager won’t work). There are also some new additions and changes to the core Inventor Fusion tools, so stay tuned for more. In the meantime, here’s some video fun for you.

As ever, it looks like Josh beat us to the punch with this one, but it’s worth covering a little. Autodesk has just pushed out the 1.1 release of SketchBookMobile, and it addresses some of the issues with the initial release: namely, layer preservation (you can now push a .PSD file out to Photoshop), importing landscape images (something I’d asked about when it launched and, for me, the big one) and brush previews when you’re resizing them. It’s available now on the App Store. Josh also has a very handy comparison chart looking at other sketching apps for the iPhone.

Finally, here’s a slick little vid* that shows a workflow with moving data from concept to 3D with SketchBookMobile and Inventor.

* nice video, but honestly. Where the hell are they getting this music from?

Nvidia and mental images are reaching for the Cloud to offer ray-traced rendering over the web using stacks of GPUs (graphics processing units) instead of CPUs. Set for official launch at the end of November, Nvidia’s RealityServer 3.0 platform will enable architects, automotive engineers and product designers to send 3D scenes up into the cloud, with the rendered results streamed back over the web. The major sell is significantly reduced rendering times, but the tech will also be able to stream interactive 3D to any web-connected device, including mobile devices - though of course bandwidth will be an issue.

The platform is highly scalable: more users can be serviced simply by adding more GPUs. Nvidia is already talking to a number of cloud computing providers and expects to announce partnerships with several of them later this year, one of them being Amazon EC2 (Elastic Compute Cloud). The cost of cloud-based deployment is expected to be less than one euro per hour.

While the Cloud computing aspect of the technology is sure to dominate the headlines, of equal interest is the fact that RealityServer 3.0 can be deployed within the confines of a firewall, not only as a GPU-based ‘render farm’ to serve up rendered scenes in double quick time, but also as a means to distribute interactive 3D graphics throughout the enterprise.

The background to this technology is Nvidia’s CUDA programming architecture that enables Nvidia GPUs to carry out computationally intensive tasks usually reserved for CPUs. CUDA was used to devise a new GPU-based rendering mode called iray, which is based on mental images’ mental ray 3.8 rendering engine. This is different to most rendering technologies which rely on CPUs to do the calculations.

On the hardware side, RealityServer consists of multiple Nvidia GPGPU (general-purpose GPU) Tesla cards, which are used to render out the scenes, plus a few CPUs, which are really just used for housekeeping, says Nvidia.

The technology is already primed to be exploited by a number of 3D CAD companies. More than ten major CAD applications already use mental ray, including those from Autodesk (3ds Max, Inventor, Revit), SolidWorks, Dassault Systemes (CATIA) and, most recently, PTC (Pro/Engineer Wildfire).

The critical technology here is mental ray 3.8, which is due for release later this year and will enable GPU-accelerated mental ray rendering for the first time. Once these vendors implement mental ray 3.8 in their core products, they will have all the tools needed to hook up to RealityServer, says mental images, but for some CAD software - particularly the more mature products that carry a lot of ‘architectural baggage’ - the implementation will not be trivial. That said, mental images told DEVELOP3D that development is already underway at many CAD companies and it expects to see applications supporting RealityServer next year.

While mental images was unable to name names, it did confirm that all of the aforementioned CAD developers are already working on systems that would allow them to virtualise their applications, or at least to have a server-based collaborative solution directly connected to their applications. As a result, the company is confident that the technology is well placed to take a lot of work off CAD developers’ plates, as it essentially offers them a whole suite of tools to get started faster instead of doing everything themselves. mental images also disclosed that Autodesk showcased the technology at a conference in Munich, Germany only yesterday.

In terms of the actual rendering technology, RealityServer is a progressive renderer, so users are able to get a good idea of the final render in seconds or minutes, even though the final rendering may take hours. It was hard to draw mental images on exact comparative render times between CPU- and GPU-based solutions. However, the company did provide an example of an architectural scene that took 45 minutes to render on a four-Tesla cluster and 8-10 hours on a more traditional four-core CPU-based system. That said, it was wary of comparing apples and oranges, as the scenes were not identical: the GPU renderer differs slightly from the CPU renderer in terms of shading technology. The company did say that it would be providing benchmark results from customers next month and that the early results are encouraging.
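Taking those example figures at face value (and remembering that the scenes were not identical, so this is indicative only), the implied speedup works out as follows:

```python
# Back-of-envelope speedup from the figures quoted above: 45 minutes on
# a four-Tesla cluster versus 8-10 hours on a four-core CPU system.
# Indicative only - the two scenes were not identical.

gpu_minutes = 45
cpu_minutes_low, cpu_minutes_high = 8 * 60, 10 * 60

speedup_low = cpu_minutes_low / gpu_minutes    # about 10.7x
speedup_high = cpu_minutes_high / gpu_minutes  # about 13.3x

print(f"implied speedup: {speedup_low:.1f}x to {speedup_high:.1f}x")
```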

While for most CAD uses the emphasis is likely to be on using RealityServer as a rendering server, mental images was keen to point out that it also provides a platform on which companies can build applications that utilise the technology in different ways. In the automotive sector, for example, it is already working with a number of manufacturers on projects to develop and enhance their in-house design/review pipelines. A dedicated car paint shader is also in development and will be released early next year.

For those that wish to set up their own facility there are three different packages. In true American style there is no Small - instead just M, L and XL. ‘Medium’ is a 2U rack-mounted system with 8 Tesla GPUs and is suitable for smaller architectural offices and product design teams with tens of concurrent users. Of course, this depends on the intensity of use, and some customers may need to dedicate four GPUs to a single task. The ‘Large’ package features 32 Tesla GPUs for hundreds of concurrent users, while ‘XL’ features 100 Tesla GPUs for serving thousands of users over the web.

Nvidia is still working on overall system costs, but with a single Tesla card costing in excess of 1,000 euros, one may speculate that a Medium system would cost around 15,000-20,000 euros just for the hardware. On the software side, customers should expect a one-time licensing cost of 2,000 euros plus 20% maintenance per Tesla card.
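Putting the quoted software numbers together for a Medium (8-GPU) system (assuming the 20% maintenance is annual; the per-system totals are my arithmetic, not Nvidia’s published pricing):

```python
# Rough software cost for a 'Medium' (8 Tesla) RealityServer system,
# using the quoted 2,000 euro one-time licence plus 20% maintenance
# per card. Assumes maintenance is per year; totals are my own sums.

cards = 8
licence_per_card = 2000   # euros, one-time
maintenance_rate = 0.20   # fraction of licence cost, assumed per year

licence_total = cards * licence_per_card
maintenance_per_year = licence_total * maintenance_rate

print(f"licences: {licence_total} euros, "
      f"maintenance: {maintenance_per_year:.0f} euros/yr")
```

On those assumptions the software comes to 16,000 euros up front plus 3,200 euros a year, on top of the speculated 15,000-20,000 euros of hardware.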

From complex architectural visualisations and 3D city modelling to product design and automotive styling, the CAD-centric target markets for RealityServer are huge. And with mental ray already the rendering engine of choice for most major CAD developers, one may speculate that it’s only a matter of time before RealityServer becomes a widely supported platform for CAD.

What makes this technology particularly interesting is the fact that it is designed to use GPUs in the Cloud and not CPUs, but this is also a current barrier to deployment. None of the large Cloud service providers currently offer GPUs in their facilities, but Nvidia expects this to change early next year. This coupled with the expected release of RealityServer-compatible CAD products should make 2010 a very interesting year for rendering in the Cloud.