Tag: WebGL

History is interesting. There isn’t one history, but rather a collection of narratives that present events from particular viewpoints. This was brought home to me recently when, as part of a WebGL book I’m co-authoring (see WebGL Programming Guide), I looked back at the work I did on VRML around 1995 and how it fed into the VRML2.0, X3D and WebGL story. I realized that in various “re-tellings” our contribution at Sony didn’t play as prominent a role as I thought it deserved. So I went back to a book we wrote at the time on VRML2.0 and dug out the introductory chapter, which explained VRML’s evolution from our perspective. VRML2.0 evolved primarily from Sony’s extensions to VRML1.0, which we called E-VRML, and from our collaborations with Mitra Ardron at WorldMaker and the folks at the San Diego Supercomputer Center (SDSC). Here’s our story…

Excerpt from Chapter 1 of Java for 3D and VRML Worlds, by Lea, Matsuda and Miyashita, New Riders Publishing, 1996, ISBN 1-56205-689-1

The origins of VRML 2.0: Moving Worlds

Our original foray into the VRML community was based upon the experience we had gained with E-VRML. When people began discussing the next phase of VRML’s development on the VRML mailing list, we proposed E-VRML, in Aug’95, as a basis for VRML2.0.

The approach used in our initial proposal was similar to that being suggested by Mitra, then with Worlds Inc. but soon to set up his own company, WorldMaker. Mitra had several years’ experience in the area of multi-user shared environments and was a key member of the VRML community and the VRML Architecture Group (VAG).

Our work was a natural match with Mitra’s and, more importantly, his goals for the long-term development of VRML as a basis for CyberSpace. This allowed us to quickly begin working together to understand if we could merge our respective draft proposals. Mitra not only brought his proposal to the table, but also the work of the San Diego Supercomputer Center (SDSC) groups, who shared the same vision for VRML.

Over the subsequent weeks we refined our ideas, and in Oct’95 we had a first draft of a joint paper that was to form the basis of future work. In early Nov’95, a behaviors workshop was held at the SDSC, where several groups interested in behaviors presented their ideas. At this meeting, Mitra, SDSC and Sony agreed to formally work on a joint proposal for VRML2.0. By the end of Nov’95, this joint paper had evolved into a joint proposal. In parallel with that work, SGI had also proposed a first draft of a behavior model. Although this model had much in common with our work, it based its execution model not on the notion of events but on dataflow. In addition, the SGI proposal didn’t rely on an external scripting language that manipulated the scene through an API, but used an internal language that interacted with the VRML scene directly. This approach had its roots in existing technology that SGI had already developed in the area of 3D graphics.

It was clear that there was a certain degree of synergy between the SGI work and ours; but it was also clear to us that a dataflow model was completely unsuited to shared multi-user worlds. Since it made good sense, for both ourselves and the VRML community, to try to combine our proposals, we began discussions in Nov’95. After much debate, SGI was convinced that an event-driven model was more practical, and in early Dec’95 it was formally agreed to combine our proposals.
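To make the event-driven model concrete, here is a minimal sketch of the ROUTE-based event wiring that the eventual VRML2.0 specification adopted (node and field names follow the published VRML2.0 specification; the scene itself is purely illustrative, not taken from any of the proposals): clicking the box fires a touchTime event that starts a timer, whose fraction_changed events drive an interpolator, whose value_changed events in turn move the box.

```vrml
#VRML V2.0 utf8
# Illustrative sketch: events flow along ROUTEs between nodes.
DEF Box1 Transform {
  children [
    Shape { geometry Box {} }
    DEF Touch TouchSensor {}      # generates touchTime when clicked
  ]
}
DEF Timer TimeSensor { cycleInterval 2 }
DEF Mover PositionInterpolator {
  key      [ 0, 1 ]
  keyValue [ 0 0 0,  0 2 0 ]     # rise two units over the cycle
}
ROUTE Touch.touchTime        TO Timer.set_startTime
ROUTE Timer.fraction_changed TO Mover.set_fraction
ROUTE Mover.value_changed    TO Box1.set_translation
```

The key point of contention was visible even in a fragment this small: each ROUTE carries discrete, timestamped events between named nodes, which can be serialized and replayed on remote browsers for multi-user worlds, whereas a pure dataflow graph has no such naturally distributable unit.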

Shortly afterwards, at the VRML’95 conference in San Diego, this proposal was presented as a joint offering and named Moving Worlds. At the technical session and in the demo session, we used our E-VRML multi-user system to explain the basic principles behind the Moving Worlds proposal and to demonstrate what the proposal would allow scene authors to do.

During the subsequent time-consuming and often fractious six weeks, this basic Moving Worlds proposal evolved into one acceptable to all parties. SGI devoted significant resources to the technical writing, and other companies were invited to comment on and contribute to the final proposal. In early Feb’96 this proposal was presented to the VRML community as Moving Worlds and put forward as a possible contender for VRML2.0.

From Moving Worlds to VRML2.0

The VRML community had requested, via a request for proposals (RFP), technology for the VRML2.0 standard. This RFP was issued in Jan’96, with a submission deadline of February 4th. The goals of the RFP were to ensure that the best technology was evaluated, that the process was open to all members of the VRML community, and that the whole community could evaluate all the technology and vote on the final choice.

Other contenders for VRML2.0

The VRML2.0 RFP deadline produced a clutch of proposals; all were excellent in their own right and all addressed very complex issues. In addition to the Moving Worlds proposal, the following five others were submitted.

Apple Computer – Out of This World

The proposal from Apple Computer was based on their open 3D metafile format (3DMF), which is also used in their QuickDraw 3D technology. Their proposal essentially encapsulated VRML1.0 within a 3DMF file, treating VRML1.0 as just another data format that can be found in 3DMF files.

The Apple proposal also attacked the problem of multi-authored scenes by designating a master script that was responsible for talking to the browser. This script received information, via a public interface, from other scripts telling it what they wanted to do. It then used this information to update the view the user sees by interacting with the browser.

To achieve all this, the scripts had access to a high-level API that provided them with routines for controlling graphics data, including support for graphic data manipulation, drawing, selection, etc.

The German National Research Center for Information Technology (GMD) – The Power of Dynamic Worlds

GMD’s proposal was based on their existing work in the area of Computer Supported Cooperative Work (CSCW). They proposed a set of extensions to the VRML1.0 language that supported an event model, behaviors, a set of new nodes and, interestingly, multi-user worlds. Their proposed approach was to extend VRML1.0 with enough built-in support to allow authors to build dynamic scenes without recourse to an external scripting language. It relied on a rich event model that was extended to work in a multi-user environment. In the same way as E-VRML, events were sent between VRML nodes; however, these events caused the execution of behavior nodes, which manipulated the scene directly.

To support multi-user scenes, the GMD model categorized behaviors as autonomous or shared and allowed events to be propagated between browsers which supported the different categorizations.

IBM Japan – Reactive Virtual Environments

IBM’s Japanese research lab proposed a set of extensions to VRML that used a model of reactive behaviors. Motion engines supporting reactive behaviors could be attached to entities in the scene and were used to describe how the entities changed over time. The motion engines supported their own built-in, simple scripting language, which allowed scene authors to describe how entities changed as a result of incoming events. The motion engines supported a notion of callbacks, so that the system could deliver events, much in the same way as event handlers in E-VRML.

Motion engines could be linked together either serially or in parallel to allow complex interrelated animation to take place.

The IBM proposal had a number of similarities with the GMD proposal, in particular its desire to build behaviors into the scene. However, it had no specific support for multi-user scenes.

Microsoft – ActiveVRML

The proposal from Microsoft was named ActiveVRML and was part of Microsoft’s ActiveX strategy. ActiveVRML was an interesting proposal because the language had its roots in the functional programming world. ActiveVRML was really a functional programming language, designed to support several media types as basic data types in the language.

The strength of ActiveVRML came from the power of the functional programming model. To programmers familiar with procedural or object-oriented languages like C or Java, functional languages often seem weird and incomprehensible. However, one significant advantage of a functional language for the manipulation of rich media formats is that it abstracts away from the notion of time. The passage of time is captured in the functions themselves, allowing programmers to ignore time and so not to have to deal with tricky issues such as synchronization. This benefit is useful for a 3D simulation, which is what a 3D VRML world becomes when you add support for behaviors. However, its strength really lies in supporting multimedia such as audio and video.

It is interesting to note that, unlike most of the other proposals, ActiveVRML lives on. It has been renamed and targeted more at multimedia presentations than 3D interactive environments. This makes sense given the strength of ActiveVRML in the field of time-dependent media.

Sun Microsystems – HoloWeb

The HoloWeb proposal from Sun Microsystems grew out of two internal Sun developments: their long-standing interest in 3D graphics and their recently launched WWW language, Java. HoloWeb was three things – a file format, a programming API and a CyberSpace metaphor. The HoloWeb file format was a departure from VRML1.0. It didn’t offer a selection of high-level 3D primitives, but a simple set of dots, lines, triangles and text. What’s more, it was by default a highly optimized format based on compression, which had obvious benefits for file transfer on the WWW.

The basic graphics types could be used to build more complicated 3D models and would be manipulated and managed, via the HoloWeb API, by the Java language.

The HoloWeb viewing metaphor was based on the city model. A home page would be a building in the HoloWeb universe. Viewing a page would be like entering that building; surfing the web, like walking down a city street. However, the city metaphor allows more structure than is possible in today’s WWW. All computer companies, for example, could be located in the same part of the virtual city, allowing surfers to easily locate information. One interesting aspect of the proposal was that Java programs would also be spatially defined and would have their own 3D coordinates in the HoloWeb universe, allowing them to be manipulated like any other 3D entity.

The vote

The vote on the proposals was taken after in-depth discussions within the VRML community via the electronic mailing list. The results of the vote are shown below and represent a clear victory for Moving Worlds.

Figure 1.2 Results of the VRML2.0 proposal vote

There is no question that all of the proposals had something interesting to offer, and all represented significant and original work. The overwhelming choice of the Moving Worlds proposal was probably less to do with its technical merits – which, although good, were not significantly better than some of the other proposals – and more because of the process used to develop Moving Worlds.

From the outset, it was a proposal born of existing expertise from Sony, SGI and Mitra, and represented a distillation of all three of the original proposals. Thus it was already well-discussed and criticized. Further, during the drafting stage, it was well publicized within the VRML community and received a significant amount of input and comment. By the time it was proposed as a candidate for VRML2.0, the proposal was therefore already well honed and represented the collective wisdom of many of the key players in the VRML community.

Evolving a proposal into a standard

The hard task of developing Moving Worlds into an acceptable basis for VRML2.0 then began. Continuing the approach taken in the formation of the proposal, this process was carried out in full view of, and with significant participation from, the VRML community. However, in contrast to the time before the vote, when there were six proposals to divide people’s attention, now, with only one proposal left, the entire community focused on it. This was an immense effort, coordinated by three SGI members: Rikk Carey, Gavin Bell and Chris Marrin. They did an excellent job of balancing the differing requirements and goals of the VRML community and reaching a fair consensus on contentious issues.

In parallel with this specification effort, we at Sony began the task of building a VRML2.0 browser that would conform to the rapidly evolving specification. During the period from Apr’96 to Aug’96 we publicly released five new versions of the browser, each one tracking the evolving standard, culminating in a version, demonstrated at Siggraph, that supported the final specification. Each of these versions allowed us to check the paper specification to ensure that it was both possible to implement and useful.

In parallel, a group at Sony Pictures Imageworks developed a set of multi-user shared VRML2.0 worlds that showed off the facilities of VRML2.0, including movies, animation and Java scripting.

At Siggraph, in early Aug’96, the VRML2.0 specification was published and made available in its final form. The interest in VRML was now significantly higher than at previous events. A large number of companies, small and large, were showing VRML-related technology. The majority of this was obviously VRML1.0 related, but Sony and SGI displayed VRML2.0 versions of their technology, proving that the VRML2.0 specification could be implemented, and taking the first step towards the dream of CyberSpace.

VRML2.0 current status

VRML is evolving, even while you are reading this. The goal of VRML2.0 was to provide an open, extensible system that supported 3D interactive scenes on the WWW. But our sights, and those of others in the community, still rested on support for shared multi-user spaces. VRML2.0 is sufficiently open and extensible to allow anybody to begin experimenting with the issues of building multi-user spaces. At the time of writing, Sony, along with a handful of other companies and individuals, is experimenting to understand how best to do this. That experimentation will result in new proposals in the area of multi-user standards.

As part of our own experimentation, you will find that the VRML2.0 browser that comes with this book, Community Place, is a full multi-user system. It will allow you not only to experiment with VRML2.0, but also with shared multi-user scenes. Since the goal of this book is to show you how to use Java to build standard VRML2.0 scenes, we have restricted most of our discussion to standard VRML2.0. However, at the end of the book, we will return to the issue of multi-user worlds and show you both how to build a simple multi-user server of your own, and how to use some of the multi-user features of Community Place.

Round up

This chapter has tried to give an overview of the development of VRML2.0 as seen from our perspective. That perspective is obviously biased, and concentrates on events and motivations that we think are important. It is clear that VRML is many things to many people and will be used in a wide variety of ways in the coming years. For us, and for many others, a principal use of VRML will be as a building block for CyberSpace. Building CyberSpace is a technical challenge, and will not come easily. Our goal in the rest of this book is to equip you with enough information to be able to meld the strengths of Java and the flexibility of VRML so that you can begin building interesting, interactive 3D content. In this way, you can become part of the CyberSpace dream.