Okay, I'm about mid-port on the new functions, but I can't find anything documented that specifies what these functions are supposed to do. I need to know what the state of the system is supposed to be after:

Java_org_lwjgl_Display_init
Java_org_lwjgl_Display_setDisplayMode

Should the system be ready to render after init? If not, what is the goal of the init method? It would be nice to get something that looks like Blinn's trip-down-the-graphics-pipeline model, so I can make sure that when the system starts issuing commands to render, change display modes, and so on, it is actually ready to do so. I've been taking a look at the Linux code, and it appears that after init the system still can't render. It's not until setDisplayMode is called that anything can actually render to the screen (because otherwise the display doesn't have a particular mode set).

Initialize. This determines, natively, the current display mode and stashes it back in the mode static member.

That's exactly what Display.init() does; it is called on classload and determines the current display settings. On the native side it should construct a DisplayMode and then stash this back in the Display class.
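To make that contract concrete, here is a hedged sketch of the native side - illustrative names only, not the actual LWJGL native source. The query_current_mode() function stands in for whatever the platform call is (XF86VidMode on Linux, EnumDisplaySettings on win32, CGDisplayCurrentMode on OSX), and stashed_mode stands in for the mode static member that gets pushed back into the Display class:

```c
/* Hypothetical sketch of the native Display_init contract: query the
   current display mode and stash it where the Java side can see it. */

typedef struct { int width, height, bpp, freq; } display_mode;

/* mirrors the "mode" static member stashed back into Display */
static display_mode stashed_mode;

static display_mode query_current_mode(void) {
    /* platform-specific query goes here; hardcoded for the sketch */
    display_mode m = { 1024, 768, 32, 60 };
    return m;
}

void Display_init(void) {
    /* note: after this the system is still NOT ready to render -
       it has only recorded what mode the monitor is currently in */
    stashed_mode = query_current_mode();
}
```

The key point is that init only observes and records; it changes nothing, which is why nothing can render until setDisplayMode (or a GL window) comes along later.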

The Display class changed considerably since you last saw it; now it is purely concerned with the physical characteristics of the screen, and has nothing to do with any windows or input any more.

Also, from the Javadoc of setDisplayMode() (which provides a reasonably good specification of what the native code should be doing):

Quote

Set the current display mode. The underlying OS may not use an exact match for the specified display mode. After successfully calling setDisplayMode() you will still need to query the display's characteristics using getDisplayMode().

And that's exactly what Display.setDisplayMode() is supposed to do; it either changes the monitor's display resolution to some other mode (which can subsequently be retrieved by the client with getDisplayMode() - it may not be the same mode as they asked for), or it throws an Exception to say that it could not change the mode. It's therefore your choice whether to throw an Exception or choose a similar but available mode if the user tries a mode which turns out not to be supported.
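To illustrate the "choose a similar but available mode" option, here is a hedged sketch (a hypothetical helper, not LWJGL's actual code) that picks the available mode closest in pixel area to the request:

```c
#include <stdlib.h>

typedef struct { int width, height, bpp; } display_mode;

/* Returns the available mode closest in pixel area to the requested
   one, or NULL if there are no modes at all (the "throw an Exception"
   case). Purely illustrative of the setDisplayMode() contract. */
const display_mode *closest_mode(const display_mode *avail, int n,
                                 display_mode want) {
    const display_mode *best = NULL;
    long best_diff = 0;
    long want_area = (long)want.width * want.height;
    for (int i = 0; i < n; i++) {
        long diff = labs((long)avail[i].width * avail[i].height - want_area);
        if (best == NULL || diff < best_diff) {
            best = &avail[i];
            best_diff = diff;
        }
    }
    return best;
}
```

Either way, the client is expected to call getDisplayMode() afterwards to find out which mode was actually set.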

Of course, getAvailableDisplayModes() should be doing its level best to only return modes that the computer genuinely can display, but we know from experience that this is sometimes a bit unreliable.
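A sketch of that filtering duty - the plausibility predicate here is invented purely for illustration; real native code has to trust (and sanity-check) whatever the OS reports:

```c
typedef struct { int width, height, bpp; } display_mode;

/* Hypothetical filter for getAvailableDisplayModes(): copy into "out"
   only the modes the machine plausibly supports. */
int filter_modes(const display_mode *reported, int n, display_mode *out) {
    int kept = 0;
    for (int i = 0; i < n; i++) {
        int plausible = reported[i].width >= 640 &&
                        reported[i].height >= 480 &&
                        (reported[i].bpp == 16 || reported[i].bpp == 24 ||
                         reported[i].bpp == 32);
        if (plausible)
            out[kept++] = reported[i];
    }
    return kept;
}
```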

AFAIK, Display controls the display, that is the actual monitor. It is not used to draw, but rather control resolution, gamma and such stuff.

Having done some tests: in order to draw, you just have to create the GL instance (which inherits from a window). Display does NOT have to be used at all - at least on win32... Elias will probably drop by with Linux results.

So the system should be ready to render as soon as the GL instance has been created.

Quote

I also cannot tell from the interface how a Display is ever disposed of.

It isn't. It lives for as long as the application is running. Display is used to alter (amongst other things) the display mode - if you don't alter it, you're just running in the current display mode.

Okay I see. The CGL Libs are vastly different from what you guys have to do. There is no resetDisplayMode() on OSX. When your application exits, the OS automatically reverts to the resolution it was in - anything you do is transient.

My Display core doesn't do much other than change resolution, as on OSX I have to both capture the display:

CGDisplayCapture( kCGDirectMainDisplay );

and then release the display:

CGReleaseAllDisplays();

Because of the way this is done, I've put this in the OpenGL code rather than the Display code; doing it in Display would lock the machine on exit, since there would be no opportunity to call CGReleaseAllDisplays().
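The pairing constraint can be sketched like this - a hypothetical mock, not the real OSX port, with capture_display()/release_displays() standing in for CGDisplayCapture(kCGDirectMainDisplay) and CGReleaseAllDisplays():

```c
#include <stdbool.h>

/* The capture/release pair lives inside the GL window lifecycle, not
   in Display, so the release still gets its chance to run on exit. */
static bool display_captured = false;

static void capture_display(void)  { display_captured = true;  }
static void release_displays(void) { display_captured = false; }

void gl_create_window(void) {
    capture_display();      /* must precede any display mode switch */
    /* ... create the fullscreen GL context here ... */
}

void gl_destroy_window(void) {
    /* ... tear down the GL context ... */
    release_displays();     /* skipping this leaves the machine locked */
}
```

If Display owned the capture instead, nothing in its lifecycle would ever call the release, which is exactly the lockup described above.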

As such, when Display_init is done, it's pretty much just a passthrough call, as the OS is going to handle the stashed resolution information. I'll have it mimic what you guys are doing with getting the current display mode and storing it, for shits and grins, but for me it's unnecessary.

Display_resetDisplayMode() for me is an empty body function.

The way the API is oriented makes dealing with switching display modes extremely awkward on OSX using Core Graphics Direct Display. Setting the display mode for me means invoking:

I can't do this until I lock the display, which as stated before has to happen in the GL code, because otherwise I won't be able to release the display. Because of this, you won't ever be able to swap the display mode properly on OSX under the current architecture, given the ambiguity of the state of the box when setDisplayMode is called. The paradigm of separating the Display stuff into one class and the GL stuff into another makes it very difficult for me to make things work properly, because CGL is so straightforward that you don't split things up that way. The flow for OSX is:

And that's pretty much it. Help me understand how this fits cleanly in with the architecture now. It would have before, but now things are happening a lot differently, and that's causing some issues. It may be that the OSX APIs are now 'too' clean, and all the extra steps of the other platforms are causing me trouble.

Unless things have changed, LWJGL is a 'draw your own cursor' environment. I turn the OS cursor off altogether. If you want one, you've got the mouse offsets and can draw any texture, 3D object, or whatever at that x,y position.
