OK, loud and clear. The binding for JPCT that I will add to JOODE will not be the same as the one we use in this project anyway. How will people make art? Will they use Blender -> 3DS just for designing the meshes?

Tom

Runesketch: an Online CCG built on Google App Engine where players draw their cards and trade. Fight, draw or trade yourself to success.

Well, it's fairly standard practice to have a lower polygon count for the collision mesh, to help prevent the sporadic collision checks becoming a bottleneck. I have no idea whether they will be or not; I suspect the bottleneck will be rendering, not collision detection. If collision detection is an issue, we can prevent AI ships from colliding with anything other than missiles, with springs or something. Then the only true collider is the player, and the missiles can be simplified to a point or something (i.e. the physics can be tuned pretty easily).
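The "missile as a point" simplification can be sketched in a few lines: instead of a full trimesh-vs-trimesh test, check the missile's position against the ship's bounding sphere. This is a minimal library-free illustration; the names are made up and not part of JOODE or JBullet.

```java
// Sketch: treating a missile as a point collider against a ship's bounding
// sphere, instead of a full trimesh-vs-trimesh test. Illustrative only.
public class PointCollision {
    /** Returns true if a point (missile) is inside a sphere (ship bounds). */
    static boolean pointHitsSphere(double px, double py, double pz,
                                   double cx, double cy, double cz, double r) {
        double dx = px - cx, dy = py - cy, dz = pz - cz;
        // Compare squared distances to avoid the sqrt entirely.
        return dx * dx + dy * dy + dz * dz <= r * r;
    }

    public static void main(String[] args) {
        System.out.println(pointHitsSphere(1, 0, 0, 0, 0, 0, 2)); // inside -> true
        System.out.println(pointHitsSphere(5, 0, 0, 0, 0, 0, 2)); // outside -> false
    }
}
```

A test like this per missile is a handful of multiplications, so even hundreds of AI missiles stay cheap compared to mesh-level checks.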

I think our major time saver will be rendering far-away ships as really low-polygon models, eventually simplifying them to a single triangle of roughly the right color when they are miles from the camera.

So now the question is whether the artists have to be responsible for producing different-resolution models, or whether we try to do that automatically. (Option 1 is hard on the artists, option 2 is hard on the coders.)
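Either way, the runtime side of this is just distance-based LOD selection. A minimal sketch, with entirely made-up thresholds and level numbering:

```java
// Sketch of distance-based LOD selection: pick a mesh resolution from the
// distance to the camera. Thresholds and level meanings are illustrative.
public class LodSelector {
    // Distance thresholds (world units) at which we step down in detail.
    static final double[] THRESHOLDS = {50.0, 200.0, 1000.0};

    // 0 = full mesh, 1 = low-poly, 2 = very low-poly,
    // 3 = single flat-colored triangle.
    static int levelFor(double distanceToCamera) {
        for (int i = 0; i < THRESHOLDS.length; i++) {
            if (distanceToCamera < THRESHOLDS[i]) return i;
        }
        return THRESHOLDS.length;
    }

    public static void main(String[] args) {
        System.out.println(levelFor(10.0));   // 0: full detail
        System.out.println(levelFor(500.0));  // 2: very low-poly
        System.out.println(levelFor(5000.0)); // 3: single triangle
    }
}
```

Whether each level's mesh comes from the artists or from automatic decimation, the selection logic stays the same.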

Quote

What format's best for you on the physics side?

Oh, it doesn't really matter. As long as I can get at the mesh vertices to build trimeshes, it doesn't make much odds.

I think making the artists' job the easiest is the best direction. I dunno how people will use procedural texture generation and be able to model in a gfx program at the same time (unless we have a Blender plugin or something).


Quote

Well, it's fairly standard practice to have a lower polygon count for the collision mesh, to help prevent the sporadic collision checks becoming a bottleneck. I have no idea whether they will be or not; I suspect the bottleneck will be rendering, not collision detection. If collision detection is an issue, we can prevent AI ships from colliding with anything other than missiles, with springs or something. Then the only true collider is the player, and the missiles can be simplified to a point or something (i.e. the physics can be tuned pretty easily).

Cool, rendering is definitely the biggest problem!

Quote

I think our major time saver will be rendering far-away ships as really low-polygon models, eventually simplifying them to a single triangle of roughly the right color when they are miles from the camera.

So now the question is whether the artists have to be responsible for producing different-resolution models, or whether we try to do that automatically. (Option 1 is hard on the artists, option 2 is hard on the coders.)

Not such a problem (hopefully); I'd already thought of some form of impostors (i.e. pre-rendered images) for more distant objects, so I guess a single low-poly model will do from the artists. I wonder about ships breaking up though (especially large ones) - we'll need some way of splitting the mesh when a ship explodes... I looked into voxels, but there's no way we've got the CPU beef to do it!
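A crude form of mesh splitting is cheap, though: assign each triangle to one side of a split plane by its centroid, without clipping straddling triangles. A rough, library-free sketch (triangle layout and plane choice are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Very rough sketch of splitting a triangle mesh into two halves along a
// plane (here x = planeX) when a ship explodes. Each triangle goes to one
// side by its centroid; straddling triangles are NOT clipped. Illustrative.
public class MeshSplitter {
    // Each triangle: 9 doubles (x,y,z for each of three vertices).
    static List<double[]>[] splitByPlaneX(List<double[]> tris, double planeX) {
        @SuppressWarnings("unchecked")
        List<double[]>[] halves = new List[] { new ArrayList<>(), new ArrayList<>() };
        for (double[] t : tris) {
            double cx = (t[0] + t[3] + t[6]) / 3.0; // centroid x coordinate
            halves[cx < planeX ? 0 : 1].add(t);
        }
        return halves;
    }

    public static void main(String[] args) {
        List<double[]> tris = new ArrayList<>();
        tris.add(new double[]{-2, 0, 0, -1, 1, 0, -1, 0, 1}); // left of x=0
        tris.add(new double[]{ 2, 0, 0,  1, 1, 0,  1, 0, 1}); // right of x=0
        List<double[]>[] halves = splitByPlaneX(tris, 0.0);
        System.out.println(halves[0].size() + " " + halves[1].size()); // 1 1
    }
}
```

For an explosion effect, the visual gap along the uncut plane is usually hidden by the debris and fire anyway, so skipping proper clipping may be acceptable.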

Quote

I think making the artists job the easiest is the best direction. I dunno how people will use procedural texture generation and be able to model in a gfx program at the same time (unless we have a blender plugin or something)

I've a feeling you're right, but we'll have to heavily stress the low-poly/small-texture requirements!

All the models for the RTS are going to be very low-poly (as high polys are not needed in any way). Also, I will be making collapsed versions of the models (these probably won't be UV-mapped as they're going to be so small, so texturing will be disabled and they will be rendered in the dominant color).

A primary mesh will have a poly count like the model update I posted. I still really feel that sharing spaceships between the projects will be a good idea for productivity.

Sure thing - the more common resources the better! BTW, could you post the 3DS file for the Manta so I can check the JPCT loader? All being well, I'll put it in the feasibility test applet.

Here is a link for the Manta model: manta.zip

The zip file includes:

primary ".obj": mesh
low poly ".obj": extremely low poly (I use this for long-distance objects). It could also be used for collision checking.
manta.jpg: primary texture map
manta_h.jpg: height map used with shaders
manta_n.jpg: normal map used with shaders

Edit: sorry it's not in 3DS format; I use obj in my game. Blender can convert from obj to 3DS, in case you need to.

Whew! It was a struggle, but I finally persuaded JPCT to play along - new feasibility test with the Manta in wobbly orbit.

NB: JPCT can only handle 256x256 textures. I haven't done any texture-alignment tests yet...
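Given that 256x256 limit, oversized texture maps could be downscaled with plain AWT before being handed to the engine. This is a generic stdlib resize sketch, not jPCT API:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Sketch: downscale an arbitrary texture to the 256x256 the engine accepts,
// using only the Java standard library. Nothing engine-specific here.
public class TextureResizer {
    static BufferedImage resizeTo256(BufferedImage src) {
        BufferedImage dst = new BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, 256, 256, null); // scale whole source into 256x256
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage big = new BufferedImage(1024, 512, BufferedImage.TYPE_INT_RGB);
        BufferedImage small = resizeTo256(big);
        System.out.println(small.getWidth() + "x" + small.getHeight()); // 256x256
    }
}
```

Doing this once at load time (or offline) keeps the artists free to work at whatever resolution they like.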

EDIT: someone just pointed out that the thread title is spelled wrong! Hmm... how do you amend a thread title?

Maybe also try using the decimated version of the model (in the zip I sent), rendered with a color value of r=0.5 g=0.5 b=0.5. Or maybe even use the current model, without a texture, just the static color, when it's far from the camera.
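Rather than hard-coding r=0.5 g=0.5 b=0.5, the flat color for a distant model could be derived from its texture. A simple average is one cheap stand-in for "dominant color"; this sketch uses only the standard library:

```java
import java.awt.image.BufferedImage;

// Sketch: compute an average color from a texture so a distant, untextured
// model can be drawn in a single flat color. Averaging is only one cheap
// approximation of "dominant color".
public class DominantColor {
    /** Returns the average {r, g, b}, each in 0..1. */
    static float[] averageRgb(BufferedImage img) {
        long r = 0, g = 0, b = 0;
        int n = img.getWidth() * img.getHeight();
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int p = img.getRGB(x, y);   // packed 0xAARRGGBB
                r += (p >> 16) & 0xFF;
                g += (p >> 8) & 0xFF;
                b += p & 0xFF;
            }
        }
        return new float[]{ r / (255f * n), g / (255f * n), b / (255f * n) };
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < 4; i++) img.setRGB(i % 2, i / 2, 0x808080); // mid-grey
        float[] c = averageRgb(img);
        System.out.println(Math.abs(c[0] - 128f / 255f) < 1e-6); // true
    }
}
```

This could run once per model at load time, so it adds nothing to the per-frame cost.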

I get some slightly odd behaviour - it's like the JIT is taking a really long time to warm up. I start with around 5 fps, this slowly increases and after about 10s it reaches 20 fps. I hit 60 fps after about 15-20s.

The other weird thing is that hardware rendering ought to be working, so the JIT shouldn't be having such a big effect anyway. Regardless, once it's all warmed up it looks right nice.

Quote

I get some slightly odd behaviour - it's like the JIT is taking a really long time to warm up. I start with around 5 fps, this slowly increases and after about 10s it reaches 20 fps. I hit 60 fps after about 15-20s.

The other weird thing is that hardware rendering ought to be working, so the JIT shouldn't be having such a big effect anyway. Regardless, once it's all warmed up it looks right nice.

I could be totally off here but since it doesn't request any privileges it must be doing software rendering.

I noticed the wireframe thing too; what 'bothers' me more are the sharp edges, though - they don't 'blend' well. Perhaps this is easily solved; not really my cup-O-tea.

Right, well I did a bit more today (Sunday). I've committed the work so far into JOODE's SVN in a subdirectory called jbullet. It's a bit of a pain working with JBullet because it requires additional post-compilation mucking about through Ant, so development can't really be done 100% in my IDE. Anyway, I have no idea if anything works until I get the JPCT bindings working. Currently they don't, so probably nothing else works yet, and I don't advise even looking at the source I have produced yet.


And I have been implementing a raytraced renderer... To begin with I started using my existing raytracing engine framework; however, since I had designed it in logical abstractions and a heavily object-oriented paradigm, it unfortunately is not speedy enough.

So I will be creating a more procedural engine, which hopefully will be much faster. I should imagine it will not be a mammoth task, as I can re-use code and algorithms from my existing engine.

You will find a subdirectory called jbullet. This has four subdirectories containing source code.

1. jbullet-src contains the jbullet source as is (do not edit).

2. src contains the main JOODE adapter to JBullet. This probably doesn't work because I can't see the results yet.

3. JPCTbinding contains only three classes. JPCTBinding is a live binding to a JBullet geom. JPCTManager listens to geom-added events and creates (and destroys) bindings accordingly. And JPCTConverter is the hardest part to write: this class creates JPCT graphical representations of JBullet geometry classes. Currently I am only trying to create spheres for JBullet's SphereShape, but it will be where all the action is eventually. The class structure is pretty much identical to the Xith binding. Writing converters is not hard, but it can be time-consuming and fiddly getting everyone's concept of position lining up. We should get spheres working, then off-center spheres, and then probably jump straight to meshes.

4. Finally, there is a test src directory, which is where a TestingWorld resides; this will be a superclass of all tests that involve JOODE-JBullet-JPCT functionality. TestingWorld currently makes the correct connections between JOODE's dynamics and JBullet's collision engine. It also instantiates a JPCT World. At the moment, in the main method I create a JOODE body and connect it to a Sphere, and add these to the dynamics and the collision world, which should cascade via GeomAdded events to the JPCTbinding. Once TestingWorld is rendering properly, a specific subclass involving colliding spheres should be created (and then meshes, etc.).
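As a rough illustration of what a converter like the JPCTConverter described in item 3 does conceptually, here is a library-free sketch that turns a collision shape description (just a sphere's radius) into renderable geometry (a UV-sphere vertex list). Everything here is illustrative; none of it is actual jPCT or JBullet API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the converter idea: expand a physics sphere description into
// mesh vertices a renderer could use. Class and method names are made up.
public class SphereConverter {
    /** Generate the vertices of a UV sphere with the given rings/segments. */
    static List<double[]> sphereVertices(double radius, int rings, int segments) {
        List<double[]> verts = new ArrayList<>();
        for (int i = 0; i <= rings; i++) {
            double phi = Math.PI * i / rings;              // latitude angle
            for (int j = 0; j < segments; j++) {
                double theta = 2 * Math.PI * j / segments; // longitude angle
                verts.add(new double[]{
                    radius * Math.sin(phi) * Math.cos(theta),
                    radius * Math.cos(phi),
                    radius * Math.sin(phi) * Math.sin(theta)});
            }
        }
        return verts;
    }

    public static void main(String[] args) {
        List<double[]> v = sphereVertices(2.0, 8, 16);
        System.out.println(v.size()); // (rings + 1) * segments = 144
    }
}
```

The fiddly part the post mentions - agreeing on whose concept of position and orientation wins - is exactly what sits on top of a generator like this in a real converter.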

Currently the converter does indeed create the graphical object, but somewhere in the graphical setup nothing gets rendered. I have not instantiated any threads or anything. I ran out of development time last week, and I am inexperienced with JPCT, so no doubt I just haven't set things up properly or set the renderer running. I am sure you could help. I was gonna copy how your demo works on Sunday, but if you think you can help, that'd be great.

So getting the TestingWorld to render a sphere in the test src directory is the priority at the moment. All the event-model stuff is probably garbage at the moment, but I can debug that pretty quickly once I can see it; it is working well enough at the moment to add geoms. Oh yes, adding mouse movement and keyboard input would be great too. Look at the Xith TestingWorld to see the kind of functionality I am trying to reproduce. (PS: dynamics is stepped via world.step(x); all geometry should be updated automatically, but that might not work yet.)


JBullet uses this brilliant yet annoying method for object pooling requiring a post-compilation instrumentation step. So run the testing world via the Ant task "run-JBulletTestingWorld" in the build.xml in the root of the jbullet subdirectory.

When I decide to integrate JBullet a bit deeper I will be getting rid of that. At the moment I don't want to alter JBullet src.


Well, I dunno if you ever got round to checking out my JBullet wrapper. I just noticed it was missing loads of the libraries needed to run. Anyway, the whole lot is in there now, I hope.

The initial testing world now manages to move a sphere in response to gravity. I haven't tested collision functionality yet (but this demonstrates that JPCT and JBullet are in sync with JOODE). The jstackalloc dependency is driving me nuts, so getting rid of it is my top priority next week. That's quite a lot of JBullet source code modifications for no noticeable change in functionality, but it will increase future development productivity tenfold.

It probably wasn't four hours' work, but I am a bit sick this weekend. I'll try to be more productive next weekend.


I have looked at it, but I'm not sure how you're doing this! JPCT really hates adding/removing objects; ideally all objects are pre-loaded prior to the first rendering, but your code seems to add/remove objects frequently?

My code will call jcpt.addObject() every time a new collision shape is added to the environment. I call shape.build() once and clone shapes from cached primitives, but you are suggesting I should add them to the jcpt world up front? How do I make sure they are not rendered until I want them to be? setVisible?

Anyway, my code adds objects as they are added to a CollisionWorld. So as long as you do that before starting your rendering thread, it will be done before any rendering occurs. It looks like spaghetti at the moment because it's event-driven (and JBullet is adapted rather than modified), but once I get proper test cases you will see that the binding system JOODE uses means you don't have to explicitly do any rendering code yourself, other than picking your binding and setting up the display. The JCPTManager ensures new objects have a rendered version, and destroyed collision objects have their rendered version removed. Or maybe you just haven't got my code.

New objects are not created every world step; only the transforms are updated. It's a bit of a pain at the moment because JBullet uses one set of math classes, JOODE another and JPCT another, so they all need to be converted between each other every world step. This is all done without garbage being produced, so apart from the exceedingly minor performance and memory hit, it's not a massive issue.
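The garbage-free conversion between the three maths libraries boils down to copying components into preallocated target objects instead of constructing new ones each step. A minimal sketch; the two vector types below are stand-ins, not the real JBullet/JOODE/jPCT classes:

```java
// Sketch of per-step, allocation-free conversion between two different
// vector classes, as when syncing physics positions into the renderer.
// PhysicsVec and RenderVec are illustrative stand-ins.
public class VectorBridge {
    static class PhysicsVec { double x, y, z; }
    static class RenderVec  { float x, y, z; }

    /** Copy a physics-side position into a reusable render-side vector. */
    static void convert(PhysicsVec src, RenderVec dst) {
        dst.x = (float) src.x; // narrowing copy, no allocation
        dst.y = (float) src.y;
        dst.z = (float) src.z;
    }

    public static void main(String[] args) {
        PhysicsVec p = new PhysicsVec();
        p.x = 1.5; p.y = -2.0; p.z = 0.25;
        RenderVec r = new RenderVec(); // allocated once, reused every step
        convert(p, r);
        System.out.println(r.x + " " + r.y + " " + r.z); // 1.5 -2.0 0.25
    }
}
```

Because the target vectors live as long as the binding, the per-step cost is three field copies per object and the garbage collector never gets involved.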

Maybe I should explain some concepts:

A JOODE dynamics World emits world-step events every time step. It also emits body added/destroyed events when a dynamics object is added or destroyed.

Graphical bindings: currently the Xith and JCPT managers, called XithManager and JCPTManager, are instantiated on a collision space and a world. They listen for geom added/destroyed events to create live bindings to collision functionality, and they listen to world-step events to know when to update the local transformations on the current live bindings.

The JBullet adapter functionality looks a lot like a binding in many ways, because it too listens to world-step events and updates JBullet's representation of position every time a body moves. A BodyInterface allows CollisionObjects (a JBullet concept) to be associated with dynamic bodies (JOODE). It's at the point of association that a listener is created to keep those associations in sync (note: only one listener is ever instantiated per world; that one listener updates all associations for that particular world per world step). JOODE will be converted to run off openMali sometime in the medium-term future, so eventually all this listening to step events will not be needed for the JOODE-JBullet interactions. But I need to test JBullet before I can start integrating it deeper, so this superficial syncing is the best initial move.
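The one-listener-per-world scheme described above can be sketched without any of the real libraries: the world steps, then notifies a single registered listener, which would update every binding's transform. Interfaces and names here are illustrative, not the actual JOODE API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the world-step listener pattern: one listener per world is
// notified after each physics step and keeps all bindings in sync.
public class StepSync {
    interface StepListener { void onStep(double dt); }

    static class World {
        private final List<StepListener> listeners = new ArrayList<>();
        double time = 0;

        void addStepListener(StepListener l) { listeners.add(l); }

        void step(double dt) {
            time += dt;                                    // advance the simulation
            for (StepListener l : listeners) l.onStep(dt); // then notify bindings
        }
    }

    public static void main(String[] args) {
        World world = new World();
        final double[] synced = {0};
        // A single listener stands in for "update all graphics bindings";
        // here it just mirrors the accumulated time.
        world.addStepListener(dt -> synced[0] += dt);
        world.step(0.016);
        world.step(0.016);
        System.out.println(synced[0] == world.time); // true: stayed in sync
    }
}
```

Once JOODE and JBullet share math types (the planned openMali move), the listener layer for that pair could disappear, while the graphics bindings would keep using the same pattern.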

