*Giggle* It's interesting the different ways people think about problems. You, at least in this case, seem to think in a bottom-up manner. I tend to think in a top-down manner.

It would not have occurred to me to start where you suggested. This code you mentioned is at SourceForge somewhere? Could you post a link to it? Also, I don't suppose you have any simple examples that show it being used, so I can avoid going down the wrong path while trying to incorporate it?

EDIT: typos
EDIT2: just found the link
EDIT3: however, there are no release files in the project

SPGL is more of a technology dumping ground than a client library, from what I can see. Cas reserves the right to break it at any time, and I personally doubt there'll ever be a formal release from it. That's not what it's about, at least not at the moment.

But don't let that dissuade you - there's some fascinating stuff in there. Grab a CVS client and pull the code directly. Think of it as a nightly build...

Thanks for the tip. I'm making good progress on the scene graph lib, thanks to the fact that I'm basing it on the Java3D scene graph. I've been doing an awful lot of reading, as I'm sure you can imagine, especially since I really haven't had any previous 3D experience.

If the code Cas has fits the model I'll just rip it out and put it in my packages so it doesn't get broken in the future if he needs to make a change.

Currently, in my scene graph code, for methods that require a point or points be passed into them, I'm using the Vector3f class from lwjgl. I was wondering if this would confuse people and if I should make another class named Point3f for this purpose, then only use the Vector3f class for methods that really need a vector.

While some might consider it unnecessary "syntactic sugar", I personally prefer it. If it's a point, call it a point. If it's a dimension, call it a dimension. You use a bit more memory for another couple of class files, but it makes more sense IMO.

I once tried to do a scenegraph in C++ on top of DirectX. But I failed. I wanted to embed render states directly as nodes (e.g. have a TextureNode) and to do the sorting, LODing, ... and rendering all in a single graph traversal... too much at once.

In the end, the complex copy and reuse patterns of subgraphs killed me. If only I had a GC...

I once tried to do a scenegraph in C++ on top of DirectX. But I failed.

Just curious, did you consider it a failure because it was too slow in the end, or because it was simply too complex to finish because of all the memory allocation/deallocation?

I suspect that when I get the first version of this up for people to look at, it will be WAY too slow for practical use. However, I'm hoping people here can help me with algorithms to speed up the rendering; then we can all use it.

Do you have any intention of making your scenegraph API more-or-less compatible with the Java3D API? It would certainly make it easier for those of us who are currently using Java3D to switch over to a LWJGL-based scenegraph if/when it makes sense.

It would be great to get better performance, smaller footprint, etc., if it didn't require a rewrite of the entire scenegraph and didn't obliterate the other advantages of using Java3D.

Even partial compatibility would be better than incompatibility. I am fine with replacing all j3d.Point3f's with lwjgl.Point3f's if the interface is fundamentally the same.

Do you have any intention of making your scenegraph API more-or-less compatible with the Java3D API? It would certainly make it easier for those of us who are currently using Java3D to switch over to a LWJGL-based scenegraph if/when it makes sense.

Yes, the plan is to have a scene graph that is used in a way almost identical to Java3D. The package names will be different of course but I'm trying to keep the following as close as possible:

First I have, up to this point, left out methods that take primitive arrays for things like Vector coordinates. In the initial version you would use only methods that take arrays of objects.

I haven't come to a firm decision yet, but I don't think I'll have a Universe object. You will simply pass any Locale graphs into the renderer to get them on the screen. Also, Locales use floats instead of the HighResCoords that J3D uses. My rationale for this is that, with the exception of really advanced games like true flight sims, the rounding problem won't be that big of a deal. If it turns out to be, then I/we can add a new Locale type that uses higher-precision coordinates.
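The float-versus-HighResCoord trade-off is easy to demonstrate. A quick sketch (plain Java, no scene graph classes assumed) showing how float precision degrades with distance from the origin, which is exactly the rounding problem a flight sim would run into:

```java
public class FloatPrecision {
    public static void main(String[] args) {
        // Near the origin, a float can easily resolve a millimetre
        // (assuming 1 unit = 1 metre) because the spacing between
        // adjacent floats there is about 1.2e-7.
        System.out.println(1.0f + 0.001f != 1.0f);   // true

        // 100 km from the origin, adjacent floats are ~7.8 mm apart,
        // so adding a whole millimetre is silently rounded away.
        float big = 100000.0f;
        System.out.println(big + 0.001f == big);     // true
    }
}
```

For ordinary game levels the world is small enough that this never shows up, which supports the "float Locale now, high-precision Locale later if needed" plan.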

Quote

It would be great to get better performance, smaller footprint, etc., if it didn't require a rewrite of the entire scenegraph and didn't obliterate the other advantages of using Java3D.

In J3D it appears that the renderer is started inside of the Universe the instant you attach a Locale with a sub-graph. This keeps you from being able to make a renderer that is most efficient for your particular needs. I plan on supplying a few different renderers that handle the scene graph in different ways, to allow better performance based on the type of scene being rendered. Obviously you will be able to create a new renderer if the supplied ones don't fit your needs.

Since this scene graph is ONLY going to work with the LWJGL OpenGL binding it will be much much smaller than J3D. What is J3D now, 7 megs for the SDK and runtime? Yeah, this will be smaller.

It should be easy to port to other graphics libraries though.

EDIT: Keep in mind that before I started this I really didn't have any 3D experience, so some of the choices I make may be completely wrong.

With respect to points and vectors: I found it immensely irritating that the designers of J3D decided that points and vectors are different things and have different methods.

This isn't how it works in maths. A vector is a point is a matrix.

I cannot agree with that. A point stands for a certain location in space, whereas a vector is a direction that has no certain origin and can be located anywhere. That's also how mathematics and physics handle these items. A thing like a position is very different from a thing like speed. A thing like a vertex coordinate is very different from a thing like a normal.

And since they transform differently (applying a translation to a vector makes absolutely no sense), they have to be treated differently.

Libs that ignore the difference sometimes have very ugly code when it comes to transforming normals...
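The transform difference can be made concrete with homogeneous coordinates: a point carries w = 1 and picks up the translation, a direction carries w = 0 and ignores it. A minimal sketch (the row-major matrix layout and the helper name are illustrative, not from any particular library):

```java
public class Transforms {
    // Apply a 4x4 row-major matrix to (x, y, z, w).
    // Use w = 1 for points, w = 0 for directions.
    public static float[] apply(float[] m, float x, float y, float z, float w) {
        return new float[] {
            m[0]*x + m[1]*y + m[2]*z  + m[3]*w,
            m[4]*x + m[5]*y + m[6]*z  + m[7]*w,
            m[8]*x + m[9]*y + m[10]*z + m[11]*w
        };
    }

    public static void main(String[] args) {
        // Pure translation by (5, 0, 0).
        float[] t = {1,0,0,5,  0,1,0,0,  0,0,1,0,  0,0,0,1};
        float[] p = apply(t, 1, 2, 3, 1); // point:     moves to (6, 2, 3)
        float[] d = apply(t, 1, 2, 3, 0); // direction: stays     (1, 2, 3)
        System.out.println(p[0] + " " + d[0]); // 6.0 1.0
    }
}
```

Whether that w lives in the type system (Point3f vs Vector3f) or in a parameter is exactly the design question being debated here.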

There was good info on that topic today on the J3D mailing list. Sounds complicated.

Damn low-level 3D stuff...

Quote

> Date: Tue, 8 Apr 2003 06:12:22 -0600
> From: "N. Vaidya" <scienfix@hotmail.com>
>
> Would the Indexed Geometry Array with the USE_COORD_INDEX_ONLY
> bit set be comparable in performance to the one using the
> Interleaved format?
>
> The OpenGL Red Book (3rd edition and OGL 1.2, pages 76, 77, 81)
> seems to imply that the benefits could be "implementation dependent"
> (I'm assuming that glDrawElements is the one used by the Java 3D
> UCIO format (?)).

We do use glDrawElements when rendering indexed geometry with the USE_COORD_INDEX_ONLY attribute set. It's a significant memory and performance improvement over indexed geometry without UCIO, since we don't copy the geometry if we can use glDrawElements.

The performance of glDrawElements does have quite a bit of variation depending upon the OpenGL implementation. Some cards have vertex buffers where vertices can be cached on-board. If the client arrays can all fit in the vertex buffer then you could get performance comparable to interleaved arrays. Otherwise the implementation has to preprocess and segment the arrays to fit in the vertex buffers, which takes time, or perhaps it won't use the vertex buffer at all.

The glDrawRangeElements command has better semantics for utilizing vertex buffers, including enumerants for getting hints about the native vertex buffer size, but Java 3D doesn't currently use it.

> I have been primarily using the IGA + UCIO + ByRef format, though,
> of course, constrained by memory usage of dynamic high-poly-count
> apps.

You're probably fine. Depending upon the app and the OpenGL implementation, it's quite possible that the various overheads in rendering a Java 3D scene graph could make the performance difference between glDrawElements and glDrawArrays with interleaved data fairly insignificant. Optimize your biggest performance bottlenecks with the aid of a good profiler, and address the geometry array format only if it becomes clear you can't improve performance anywhere else.

Largely about to become obsolete with the advent of ARB_vertex_buffer_object.

In reality, what Mark Hood said there is not strictly true. The biggest vertex buffer you're ever likely to find will only hold 16 vertices. On T&L cards this is held on the server side; on non-T&L drivers this will be on the client side, if it even has one, and there may still be a small vertex cache on the server side.

Then you've got the option of calling glDrawRangeElements which basically works out the best strategy for you and should be as fast or faster than any cleverness on your own part; and if you can't call that you may be able to call glLockArraysEXT which is a more brute force approach to the same problem.

The whole idea is to avoid repeatedly transforming geometry and repeatedly copying geometry from one memory location to another, to minimize the amount of geometry that's got to be copied over a bus, and to use the fastest bus there is to do it. Complicated? Nah

<edit> Talking out of my arse and confusing AGP DMA buffers with post-transformation cache. What has the world come to. Take no notice of me.

I cannot agree with that. A point stands for a certain location in space, whereas a vector is a direction that has no certain origin and can be located anywhere. That's also how mathematics and physics handle these items. A thing like a position is very different from a thing like speed. A thing like a vertex coordinate is very different from a thing like a normal.

And since they transform differently (applying a translation to a vector makes absolutely no sense), they have to be treated differently.

Libs that ignore the difference sometimes have very ugly code when it comes to transforming normals...

When you use different classes for them you have additional code to maintain...

Making a difference between points and vectors by using different classes is actually new to me. I know two solutions, both make no difference at all, both call the thing (which is essentially a 3-tuple) a vector, and none of them produces any 'dirty' code if you know what you are doing.

Solution no. 1 (which I'm using in my engine): There is very, very seldom a case where you do not know whether a vector means a position or a direction. Actually I have not yet seen any such case. There are simply two transformation functions, one for points and one for directions.

Solution no. 2 (which OpenGL uses): Very nice, but requires additional CPU time. All vectors are stored as 4-tuples, where the fourth value is 1 for positions and 0 for directions. A transformation is stored as a 4x4 matrix where the rightmost column contains the translation part of the transform. Advantage: You can store the translation in the matrix as well. Also, computing the signed distance of a point to a plane means simply computing the dot product of two 4-tuples.
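As a concrete illustration of solution no. 2's last claim, here is the signed point-plane distance falling out of a single 4-component dot product. The array layouts follow the usual conventions (plane as (a, b, c, d) with unit normal (a, b, c), point as (x, y, z, 1)); they are not any specific library's API:

```java
public class PlaneDistance {
    // Plane: ax + by + cz + d = 0, with (a, b, c) a unit normal.
    // Signed distance of a homogeneous point (x, y, z, 1) is simply
    // the 4-component dot product of the two tuples.
    public static float distance(float[] plane, float[] point) {
        return plane[0]*point[0] + plane[1]*point[1]
             + plane[2]*point[2] + plane[3]*point[3];
    }

    public static void main(String[] args) {
        float[] plane = {0, 1, 0, -2};  // the plane y = 2
        float[] p     = {5, 7, 1, 1};   // a point 5 units above it
        System.out.println(distance(plane, p)); // 5.0
    }
}
```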

BTW, the ugly code when transforming normals may have another reason: you cannot transform normals the same way as you transform vertices. This only works for either orthogonal or orthonormal transformations (not sure which). Otherwise the normals are tilted towards the surface during transformation.
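To see the tilting happen, take a non-uniform scale: the naively transformed normal is no longer perpendicular to the transformed surface, while the normal transformed by the inverse-transpose of the matrix still is. A small self-contained check (pure Java, illustrative values):

```java
public class NormalTransform {
    public static float dot(float[] a, float[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    public static void main(String[] args) {
        // Non-uniform scale by (2, 1, 1).
        float[] tangent = {1, -1, 0};   // lies in the surface
        float[] normal  = {1,  1, 0};   // perpendicular to the tangent

        // The surface's tangent transforms like an ordinary vector.
        float[] scaledTangent = {2 * tangent[0], tangent[1], tangent[2]};

        // Naive: transform the normal like a vertex.
        float[] naive = {2 * normal[0], normal[1], normal[2]};

        // Correct: use the inverse-transpose, which for diag(2, 1, 1)
        // is diag(1/2, 1, 1).
        float[] correct = {0.5f * normal[0], normal[1], normal[2]};

        System.out.println(dot(naive,   scaledTangent)); // 3.0 -> tilted!
        System.out.println(dot(correct, scaledTangent)); // 0.0 -> still a normal
    }
}
```

For purely rotational (orthonormal) matrices the inverse-transpose equals the matrix itself, which is why the naive approach appears to work right up until someone adds a scale.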

Bounding objects
Sound based objects (I'm leaving these out for now)
Behaviors

This weekend I hope to get each of the geometry objects functioning with all of the different Appearance component objects when rendering.

I realize this is a lot to ask, but since I'm such a newbie to this stuff I was wondering if some folks would be willing to help me write little test programs to make sure everything is working correctly, after I have a testable version of course. The problem is I can infer certain things, but I have no way of knowing if the render results I get are actually what should happen.

Oh well, how about it: is anyone willing to fill in the code for the Transform3D, BoundingSphere, BoundingPolytope and BoundingBox class methods?

For the Bounding classes the intersects and combine type methods.
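For anyone tempted to pick up the Bounds work, combining two bounding spheres is representative of what's involved. This is only a sketch of one possible implementation, with each sphere stored as a (cx, cy, cz, r) array; the storage layout and method name are illustrative, not the project's actual API:

```java
public class SphereCombine {
    // Returns the smallest sphere enclosing two spheres a and b,
    // each stored as {cx, cy, cz, r}.
    public static float[] combine(float[] a, float[] b) {
        float dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
        float d = (float) Math.sqrt(dx*dx + dy*dy + dz*dz);

        if (d + b[3] <= a[3]) return a;   // b is entirely inside a
        if (d + a[3] <= b[3]) return b;   // a is entirely inside b

        float r = (d + a[3] + b[3]) / 2;  // enclosing radius
        float t = (r - a[3]) / d;         // slide the centre toward b
        return new float[] { a[0] + t*dx, a[1] + t*dy, a[2] + t*dz, r };
    }

    public static void main(String[] args) {
        // Two unit spheres 4 apart -> centre (2, 0, 0), radius 3.
        float[] s = combine(new float[]{0,0,0,1}, new float[]{4,0,0,1});
        System.out.println(s[0] + " " + s[3]); // 2.0 3.0
    }
}
```

The intersects methods are mostly the same distance arithmetic with a comparison instead of a construction.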

The Transform3D class is a boatload of matrix calculations.
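For a sense of what those calculations look like, the core operation behind any Transform3D-style class is the 4x4 matrix multiply. A plain-Java sketch (row-major layout assumed; the class name is illustrative):

```java
public class Mat4 {
    // Multiply two 4x4 row-major matrices: r = a * b.
    public static float[] mul(float[] a, float[] b) {
        float[] r = new float[16];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) {
                float s = 0;
                for (int k = 0; k < 4; k++)
                    s += a[i*4 + k] * b[k*4 + j];
                r[i*4 + j] = s;
            }
        return r;
    }

    public static void main(String[] args) {
        float[] id = {1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1};
        float[] t  = {1,0,0,5,  0,1,0,0,  0,0,1,0,  0,0,0,1}; // translate x by 5
        float[] r  = mul(id, t); // identity times t is just t
        System.out.println(r[3]); // 5.0
    }
}
```

Most of the rest of Transform3D (inversion, composing rotations, transforming points) is built on top of this one loop.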

One other question: I usually just make what I write freeware and let people get it from my web site; however, I think this should be a SourceForge project, under the same open source license as the LWJGL. When there are two projects that are related, what is the best way to make sure the package names don't conflict?

The package com.sas.lwjgl.imaging contains classes that can read different image file formats (no dependence on anything coming from Sun except the basic language constructs, of course). These must implement the ImageFile interface, and there are currently two: TargaFile.class and WindowsBitmapFile.class.
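For readers wondering what adding a new format would involve, here is a hypothetical sketch of what such an ImageFile interface might look like. The method names are guesses based on the description above, not the project's real interface:

```java
// Hypothetical sketch only: the real com.sas.lwjgl.imaging.ImageFile
// interface may look quite different.
public interface ImageFile {
    int getWidth();
    int getHeight();
    // Raw pixel data in a form ready to hand to glTexImage2D.
    byte[] getPixels();
}
```

A TargaFile or WindowsBitmapFile would then parse its format's header in the constructor and expose the decoded pixels through these accessors.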

Where it stands right now:

I keep thinking I'm ready to deal with the Appearance attribute rendering and then I find something else I've forgotten to create. I think that is almost over. I have a few things to finish up for the Transform3D and Bounds classes, then I need to create the View and ViewPlatform replacement classes and I can focus on the rendering engine. Just remembered I have to go through all of the classes and put in the cloning methods, but this can wait.

/*------------------------------------------------------------------------
The LWJGL (Light Weight Java Gaming Library) is an open source project to provide a direct binding to OpenGL for the Java(tm) language. LWJGL is only intended to provide that low level binding and as such lacks certain tools that many programmers need to complete their projects.

The LWJGL Scene Graph (LWJGL-SG) is a separate project intended to give programmers some of those tools. The major focus of the LWJGL-SG is to provide a scene graph for 3D programs that use the LWJGL. Other utilities may be included in the library though (such as the imaging package for loading image files).

This scene graph is based heavily, class architecture wise, on the scene graph provided in the Java3D API. In fact the initial version of this scene graph was constructed after reading the book:

The Java 3D API Specification (ISBN 0-201-32576-4)

several times, *sigh*. The book hadn't seen the light of day since I bought it a long, long time ago. By the time I get a first release made it will probably be totally destroyed. (Note: Because the book is old, many features of the current Java3D API may not be (and probably are not) present in this library. That's why we have new releases.)

This is an important point: no source code was copied from the Java3D libraries. In fact the source code was never even looked at (I'm not sure if it is even available). I chose to base my code on the Java3D API specification for two reasons.

First, prior to writing this I had basically no experience with 3D programming. I knew that there was no way I could produce something usable without having a road map laid out for me. The API specification was my road map.

Second, I wanted to give Java3D programmers that were considering moving to the LWJGL the easiest path for the migration I could. Obviously this scene graph is not going to be an exact copy of the Java3D scene graph. That was never the intention, but it is similar enough that porting a Java3D program to it should not be traumatic.

This file is an attempt to describe the areas where this scene graph and the Java3D scene graph differ.

First, I don't like the other method; Java should have constants built into the language semantics. Second, memory consumption. This library is intended to be used for games, where constant object allocation/garbage collection is a major no-no.

The implication of this is the following: the programmer is expected to play nice and not modify the values in the references returned from the methods. If he doesn't follow this rule then he will be spending an enormous amount of time debugging his code. :)

3> No AWT or Swing. It is important to make sure that no AWT or Swing code creeps into this library. This library needs to follow the lines of the LWJGL and avoid using anything that makes it dependent on AWT or Swing. Part of the reason for this is that compilers that can compile Java programs into native executables often don't have the ability to use AWT and Swing. Another reason for this is simply that it is not necessary, and therefore why do it.

**************************************************************************Classes Left Out:

This section lists the classes from the Java3D scene graph that have been left out for one reason or another.
**************************************************************************

This section lists the classes from the Java3D scene graph with the same names in this library that have some fundamental differences.**************************************************************************

**************************************************************************Things To Do:

This section lists the things that are not done but should be at some point.**************************************************************************

NodeComponent - uncomment the duplicateNodeComponent and cloneComponent methods
Node - uncomment the duplicateNode and cloneNode methods
Implement the cloning methods in the rest of the Nodes and NodeComponents

Okay I'm getting geared up to handle the Appearance properties of Shape3D objects in the renderer. I bought an OpenGL reference and have been matching up the gl functions with the properties so I can get an idea of what I have to do while rendering.

Since gl is a big state machine and since the state is going to have to change quite a bit while rendering the scene graph I was wondering what the correct way to save the state of gl is. For instance if I'm traversing the graph and come to a BranchGroup that contains 2 subgraphs I need to push the current state onto a stack and then traverse one of the subgraphs. When that subgraph is finished I need to pop the state off of the stack and then go traverse the other subgraph.

What functions, if any, should I use from OGL to accomplish this? If there are functions for this purpose is there a limit to how deep the stack can get?

What functions, if any, should I use from OGL to accomplish this? If there are functions for this purpose is there a limit to how deep the stack can get?

Use PushMatrix and PopMatrix to save the current matrix - usually the viewing or modelling matrix.

Use PushAttrib and PopAttrib with a specific flag to save information like lighting settings, line drawing modes, viewport settings etc. You can also push/pop ALL_ATTRIB_BITS for the shotgun approach!

These stacks do have a maximum depth though, and you'll have an error thrown if you exceed it. Get the current limits with a GetIntegerv of MAX_MODELVIEW_STACK_DEPTH, MAX_PROJECTION_STACK_DEPTH or MAX_ATTRIB_STACK_DEPTH. Specifically, the modelview stack should have a minimum depth of 32, projection at least 2, and attribute at least 16.

With those limits in mind, you might want to consider restoring state manually, although I expect you'll have a performance hit with that. Check the limits of current OpenGL implementations and see if it's going to be enough for you.
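Restoring state manually amounts to keeping your own stack of whichever attributes the traversal actually touches, which sidesteps the GL attribute stack depth limit entirely. A sketch in plain Java (the RenderState fields are illustrative, not a complete set; a real renderer would reapply the restored values via GL calls when popping):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StateStack {
    // A tiny, illustrative slice of renderer state.
    static class RenderState {
        boolean lighting = true;
        int textureId = 0;
        RenderState copy() {
            RenderState s = new RenderState();
            s.lighting = lighting;
            s.textureId = textureId;
            return s;
        }
    }

    private final Deque<RenderState> stack = new ArrayDeque<>();
    RenderState current = new RenderState();

    void push() { stack.push(current.copy()); }  // entering a subgraph
    void pop()  { current = stack.pop(); }       // leaving it: restore

    public static void main(String[] args) {
        StateStack s = new StateStack();
        s.push();                 // BranchGroup with two subgraphs: save state
        s.current.textureId = 7;  // first subgraph binds a texture
        s.pop();                  // restored: textureId is 0 again
        System.out.println(s.current.textureId); // 0
    }
}
```

The depth of this stack is bounded only by the depth of the scene graph, and because only the touched attributes are copied, it can be cheaper than a full PushAttrib(ALL_ATTRIB_BITS).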
