In my spare time I'm an open source game developer. I try to write code that is as portable as possible. My last game was developed under Linux and ported to Windows, OSX, and FreeBSD.

My current project is a little more ambitious than the last one, and I've been struggling with the question of what features of OpenGL I should or should not use.

I would love to use VBOs and ARB vertex programs, but to do so would seriously undermine the portability of my code. It would cut my potential audience to a fraction. As I'm sure everyone is aware, driver quality varies wildly from hardware to hardware and OS to OS. Only by sticking with widely available GL features have I been able to produce code that is both clean and portable.

I'm sure the answer that pops immediately into most heads is multiple code paths. That's fine for a commercial product where it is worth the extra effort in order to squeeze every available dollar out of the market. But this is free code, and I'm just one programmer.

And it's the open source user community I'm targeting. They often have old hardware, and they expect it to work. Anyone who cares enough about 3D to spend money on hardware is probably also willing to spend money on a game, and will simply dismiss open code as amateur.

My current thinking is: if an extension has EXT or ARB in the name, then it's fair game. If a user doesn't have the hardware for it, he's SOL. In the opinion of this forum, where is the line? At what point do you cut potential users off?

zeckensack

12-18-2003, 06:41 AM

You can 'emulate' a lot of the more useful extensions. Short example (Win32 specific):
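A minimal sketch of the idea, with a hypothetical `get_proc()` standing in for `wglGetProcAddress()` (here it always returns NULL, to simulate a driver without the extension): if the driver doesn't export an entry point, install your own fallback behind the same function pointer, and the rest of the code never needs to care.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical loader standing in for wglGetProcAddress(); it returns
 * NULL here to simulate a driver without GL_ARB_multitexture. */
static void *get_proc(const char *name)
{
    (void)name;
    return NULL;
}

typedef void (*ACTIVETEXTUREPROC)(unsigned int unit);
static ACTIVETEXTUREPROC glActiveTextureARB = NULL;

static unsigned int current_unit = 0;

/* Fallback for single-TMU hardware: just record the selection; code
 * that only ever touches unit 0 then works unchanged. */
static void emulated_active_texture(unsigned int unit)
{
    current_unit = unit;
}

static void init_multitexture(void)
{
    glActiveTextureARB = (ACTIVETEXTUREPROC)get_proc("glActiveTextureARB");
    if (glActiveTextureARB == NULL)
        glActiveTextureARB = emulated_active_texture; /* 'emulate' it */
}
```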

Multiple code paths for combiner functionality - ARB_texture_env_combine vs NV_register_combiners vs ATI_fragment_shader vs ARB_fragment_program etc - are justified, IMO. If you don't have the 'correct' hardware to test a path, don't try to implement it blindly, it will end in tears. If your game is open source, find some helpers with the right hardware.

Most important thing here: do your detection at runtime. Don't force a path through a config file. If you find yourself needing to do that, it's a sign your detection is broken; fix the detection instead.
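For what it's worth, the runtime detection itself is just a token search through the string returned by `glGetString(GL_EXTENSIONS)`. The one subtlety is matching whole names, so a query for `GL_ARB_multitexture` doesn't match inside some longer extension name. A sketch:

```c
#include <assert.h>
#include <string.h>

/* Returns 1 if `ext` appears as a complete token in the space-separated
 * `extensions` string (the format of glGetString(GL_EXTENSIONS)). */
static int has_extension(const char *extensions, const char *ext)
{
    size_t len = strlen(ext);
    const char *p = extensions;

    if (len == 0 || strchr(ext, ' '))
        return 0;                          /* malformed query */

    while ((p = strstr(p, ext)) != NULL) {
        /* Guard against substring hits like "GL_ARB_multitexture_foo". */
        int starts_ok = (p == extensions) || (p[-1] == ' ');
        int ends_ok   = (p[len] == ' ' || p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}
```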

Trurl

12-18-2003, 08:57 AM

Certainly. I'm aware of ways of hiding the complexity of supporting a multitude of potentially available extensions.

I guess what I'm most interested in is this: If I were to try to select a subset of GL functionality to use exclusively, what is the largest subset I could select while excluding the smallest number of users?

The only thing worse than having many different APIs for exposing a specific function is having to implement dynamic code to utilize ALL of these APIs depending on the target.

I'd much rather code for one API. But, given the current state of things, it's unsafe to assume anything more efficient than vertex arrays without compatibility problems.

Of course you'll want to exclude certain features of 1.1, like the accumulation buffer, which is hardware-accelerated only on the most advanced cards. Also, you may have to be prepared to support 16-bit modes, which means no stencil buffer or destination alpha.

The best way to efficiently draw geometry on a wide variety of hardware is display lists; I'd use them for all static geometry. For other geometry, use vertex arrays accelerated by whatever is available.

If you want something more advanced you'll have to just use extensions or fallbacks if they don't exist. You may want to take this into consideration already when designing the engine, for example try to design the shader system so that the shaders can be multipassed if necessary.
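That multipass point can be made concrete with a toy model: treat a shader as a list of texture stages, and let the backend fold as many stages into each pass as the card has texture units. A sketch (the numbers and names are illustrative, not from any real engine):

```c
#include <assert.h>

/* With `units` hardware texture units, `stages` texture stages collapse
 * into ceil(stages / units) rendering passes; a single-TMU card simply
 * draws one blended pass per stage. */
static int passes_needed(int stages, int units)
{
    return (stages + units - 1) / units;
}

/* Number of stages bound during pass `i` (0-based); the last pass may
 * be only partially filled. */
static int stages_in_pass(int stages, int units, int i)
{
    int remaining = stages - i * units;
    return remaining > units ? units : remaining;
}
```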

-Ilkka

Trurl

12-18-2003, 12:38 PM

Realistically speaking, what you're saying is that there's no way to write portable code using any feature of the API that was added in the last 10 years.

I know this already. I was merely attempting to solicit opinions on where a reasonable cutoff might be, so that a developer could reach the largest number of users with the minimum effort while alienating as few as possible.

I believe in OpenGL, and I've made a career out of programming for it. But if we can't reliably take advantage of even reasonably modern functionality in a portable fashion without undue effort then a sad state of affairs exists for what purports to be the cross-platform 3D solution.

Korval

12-18-2003, 03:11 PM

I would love to use VBOs and ARB vertex programs, but to do so would seriously undermine the portability of my code. It would cut my potential audience to a fraction.

If you're really considering writing a 3D game, it is important to consider who your potential audience is.

The VBO extension is built, by design, to be implemented in the absence of hardware support. This isn't like fragment programs, where a software implementation is too slow to be useful in a game. If the implementation doesn't support VBO's in hardware, then they will run as fast as regular vertex arrays, because that is how the VBO extension will be implemented. In short, if VBO is implemented, it will be at least as fast as regular vertex arrays.

OK, so the question now becomes who will implement VBO's? Well, VBO exists for every flavor of Radeon and every TNT and above card by nVidia, as long as they have up-to-date drivers. That covers the vast majority of the market. There may well be some segment that is still holding on to old 3Dfx hardware (or hardware from some other manufacturer who no longer supports some hardware), or is using a non-nForce-based integrated solution. Even the latter case may have VBO support, as I think Intel (if you use Intel) is getting a bit better with their OpenGL support. So, the real question you should ask is whether or not this relatively small segment of the market is important enough to forgo VBO's for. I would say "no", because as time passes, more and more OpenGL implementations will have VBO's available, either as an extension or in GL 1.5.
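This argument reduces to a single upload path in code: check once for `GL_ARB_vertex_buffer_object`, and fall back to plain client-side vertex arrays only on drivers that lack the extension entirely. A sketch with the GL calls stubbed out (`have_vbo` would be set by extension detection; the struct and function names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

static int have_vbo = 0;   /* set once by runtime extension detection */

typedef struct {
    unsigned int buffer;   /* GL buffer object name when VBOs are in use */
    const float *client;   /* plain client-memory pointer otherwise      */
} VertexSource;

/* One code path for the caller: a driver without hardware VBO support
 * still implements the extension at vertex-array speed, so the client
 * fallback below only matters when the extension is absent entirely. */
static VertexSource upload_vertices(const float *data, size_t count)
{
    VertexSource src = { 0, NULL };
    if (have_vbo) {
        /* glGenBuffersARB / glBufferDataARB would go here. */
        src.buffer = 1;
    } else {
        src.client = data;   /* keep the data in client memory */
    }
    (void)count;
    return src;
}
```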