You may remember me from my post on brightening a texture. My friend and I have come a long way with our engine since then and are working on dynamic lights and shadows.

We started out with the example given here: http://www.youtube.com/watch?v=s60AljUbpKY which goes into dynamic soft shadows by creating an alpha texture, rendering a black "ambient darkness" sheet with alpha 1.0f over the scene, and then using the alpha map of your lights to "reveal" through the darkness. The downside to this system was that we couldn't do any kind of bloom effect without a shader language, which led us to...

http://www.youtube.com/watch?v=fsbECSpwtig

Having finally implemented GLSL and GL 2.0 compatibility, we explored the possibility of using shader programs for our lights. Currently we have the following simple fragment shader that draws a basic circular light:
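(The snippet got lost in the copy-paste; as a rough sketch, a GL2-era circular light fragment shader of this kind typically looks something like the following. The uniform names are illustrative placeholders, not the video's exact code.)

```glsl
// Hypothetical sketch of a basic circular light -- not the original snippet.
// Uniform names (lightPos, lightColor, radius) are placeholders.
uniform vec2 lightPos;    // light centre in window coordinates
uniform vec3 lightColor;  // light tint
uniform float radius;     // falloff radius in pixels

void main() {
    float d = distance(gl_FragCoord.xy, lightPos);
    // linear falloff: full brightness at the centre, zero at the edge
    float attenuation = clamp(1.0 - d / radius, 0.0, 1.0);
    gl_FragColor = vec4(lightColor * attenuation, attenuation);
}
```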

The shader is an example from the YouTube video. Using it, we draw the lights to the screen as before, create the alpha map, and then blend this alpha map into our finished scene at the very end.

Again we run into the problem of being unable to add any sort of bloom effect (we assume this would be another type of fragment shader that we'd have to pass our lights through), and how to achieve even "hard" shadows remains somewhat of a mystery (save for finding the dot product of the light vector and the normal of the shadow-casting surface).

The context of our project: We are developing a 2D indie title that we hope will be as backwards-compatible as possible (I think GL2 is pretty safe). However, having good looking shadows gives a game a great "pop" factor. We are complete strangers to VBOs and FBOs, having used the fixed function pipeline until now. However, we understand the basic purposes of these objects; we're just lost on implementation and how they might help us arrive at a lighting solution without breaking the rest of our working code.

A picture of our current light... uses a shader program, but uses GL_QUADS to actually draw it... kind of a bastardization of GL2, but we're working on it:

Hopefully you guys see that we have put a decent amount of effort into figuring this out, and aren't just begging for help. That said, you're the experts, so any advice would be most appreciated. In return, we can offer a gift... these two droids. Uh, I mean, the two shaders, once we get them working, so that hopefully others can benefit from our combined efforts. If you need any more code from our project in order to help, just ask and I'll try to dig up something resembling a program.

Awesome, love this kind of stuff! =D

Small tip: Use GL_TRIANGLE_FAN to draw an approximated circle using 16+ vertices around the light instead of a quad. You'll probably get quite a big performance boost, considering the costly things you do in your fragment shader.

Can you explain how triangle fan is better than a quad when you consider the additional amount of vertices needed to complete a 360 triangle fan, vs. the four vertices used in a quad? Is it that much more efficient due to the fact that we'd only be running the shader on fragments that fall in the approximate circle?

The point is that you shouldn't optimize what isn't noticeably slow. So basically you shouldn't care what is fastest. Work on the things that matter.

By this logic, is it a waste of time to invest in learning how to use FBOs and VBOs for rendering when the fixed function pipeline works alright for most applications in our project? You're suggesting running the program later, seeing where the bottlenecks are, and optimizing then. I like that logic better, particularly for game development.

Your best option is probably to combine your initial shadow generation, but output a per-light shadow texture every frame (a bit like shadowmapping for 3d rendering). Then render this in the world with your fragment shader (at which point you can output your bloom pass).

As Riven says, you don't want to create the actual shadow geometry on the GPU; it'd be far too slow and the quality wouldn't be as good.

We thought about including the shadows as part of the lightmap. I'm not entirely convinced that this is the best thing in the world, but maybe. Are there any technical limitations to rendering the light map as a bunch of quads on top of (0,0,0,0) and then storing it in a texture, vs. rendering to a frame buffer? If there are actual technical limitations to not using frame buffers, we'll switch.

One of the biggest challenges that I just can't get my head around is what kind of shader we'd have to write. I know we'd figure out the shadow geometry on the CPU and then just render a bunch of quads/blend in the soft edges, but the problem comes when we want to have ambient light.

Example: let's say we want an ambient scene light of (0.2f, 0.2f, 0.2f, 1f). Where in the process is the correct place to apply this? Initially we just cleared the screen to our desired ambient light, used this as the basis for the lightmap, and then blended it into the scene using GL blend and the pipeline. We'd probably have to use a shader to blend it in, given that the pipeline can't go past colors of one. The problem then comes if we render the bloom and THEN try to draw shadows on top: the shadows would get blended into a color that's technically supposed to be blocked by the shadows in the first place.

Yeah, that's why you need to do your shadow compositing to a texture first, *then* clear the screen, draw the scene at ambient light level, and then add each shadow texture on top, then generate the bloom.

Of course you haven't said how you want to do your bloom - do you want to fake it old-school with a separate render pass, or are you going to do proper HDR rendering with filtering and an exponent? Or multiple render targets? All this depends on what graphics hardware you want to support.

I'll try to hit all your questions.

First though, "Draw the scene at ambient light level". We prefer to think of it as ambient darkness, since everything by default is drawn at (1f,1f,1f,1f). Does this mean blending the scene by our darkness color at the end of the render, or applying some sort of filter to every image that is rendered? I think the end result is the same.

The real challenge with bloom is how to make it ignore the shadows. Theoretically a light shouldn't cast ANY of its light where we have a shadow; instead, only the ambient darkness level should be drawn where the shadow is. i.e. if we have an ambient filter of .2 across the board, shadows should acquire that color, so that everything on the screen except where we have light is drawn at .2.

I don't really know much about some of the advanced things that you said. I had honestly envisioned making a second pass through my lights and using a shader to force a render past one by being in add mode.

Another problem is if we want to render our scene with an ambient of 0,0,0,0. If we do this at any point other than our lightmap, we are effectively changing our scene to black, rendering any future additions useless. Example: set ambient to black, draw lightmap without ambient light, only added light. Render entire scene at 0,0,0,0... blend in light. Surely you see the problem. The color data of the original scene is now gone.

We don't really understand frame buffers well enough to use them, but I understand them well enough to know that they are probably the solution here. I just have no idea how to implement.

I think you're a little confused as to how shaders and lighting passes should be done. For a start you need to stop thinking of light as 'darkening' - light is additive and you need to work with it as such to get decent results. It's also pretty pointless talking about bloom if you don't know which bloom approach you're going to use.

I guess I'm confused about 'darkening' because I keep thinking in terms of a level that is by nature very dark, and then gets brightened by light. Is a better way to achieve this type of result just using textures that are inherently dark?

I'm still trying to read through your process flow; I'll edit this post with my questions/thoughts when I've done so. Thanks for taking the time to do this.

EDIT: I missed the part when you would add your shadows, and I'm a bit confused as to why the shadows and lights are not part of the same texture.

You also say "capture bloom" and I assume this means to a different FBO than the scene. Can you elaborate a bit more on steps 3 and 4, and why you go through the entire process of drawing the surrounding geometry of the lights twice in your steps?

By capture bloom, I mean copy the result from 2a/2b to a texture for later use. This should probably be done via a FBO.

For the blur there's lots of resources on the internet for this, but it basically involves rendering the unblurred texture to another texture via a blur shader.

You don't have to draw the surrounding geometry again, I was assuming you were going for something like a regular forward renderer approach. Alternatively you can do a fullbright pass and use that instead, a bit like a deferred renderer.

Gotcha. I think I get it, kind of. I'm not used to FBOs and the flexibility and the power that they offer, so it's hard for me to think in terms of anything other than what's on the screen (even though we are using the deprecated copytex2d). Other than speed, are there any advantages to using FBOs? Is the way to achieve darkness to simply draw the lightmap on top of (0,0,0,0)?

That's how I did it at least... Sorry, Riven, I know I'm an optimizing bastard...

Basic tips:

- Draw your light using GL_TRIANGLE_FAN. As this kind of lighting is extremely fill-rate limited, having 18 vertices forming an approximated circle instead of 4 vertices forming a quad will save you a huge screen area of pixels.
- Shadows tend to extend far outside the light's area. Enable scissor testing around the light unless it covers the entire screen, to quickly discard distant shadows and save a lot of fill rate.
- Copying the whole light buffer to the accumulation texture is not necessary. Just keep the scissor test enabled when copying, to only copy the relevant part.
- Some (old) drivers are extremely slow at FBO switching. This lighting method requires 2 binds per light, quickly becoming a bottleneck. Instead of having 2 FBOs, keep a single FBO (but still 2 textures). Instead of binding an FBO, bind the needed texture to the current FBO. This is much faster on some computers (several times faster).
- Don't use immediate mode rendering (should be obvious, as you're drawing (lights*objects) shadows for each light).

More advanced stuff:

- If you want to apply a bloom, use HDR rendering (16-bit floating point textures) for the light accumulation. Apply the bloom effect to the final lighted scene.
- Keep multiple (4-16) light buffer textures. Draw a light to each of them and do the accumulation with single-pass multitexturing, eliminating lots of texture binds (2 per light vs. 1 per light + 1 every 4-16 lights).
- Draw multiple non-overlapping lights to each light buffer, reducing texture binds and fill rate even further.

With all of these things implemented you'll be able to have 1000+ lights with shadows all over the screen. The most limiting factor will be the size/radius/distance of your lights. If all your lights cover the whole screen you should definitely ignore the advanced stuff (except the HDR/Bloom stuff).

One thing I'm a bit confused about here: you say that I should do the first five steps for each light. Are you clearing the light buffer FBO each time you start drawing to it again, so that you are essentially handling each light and its shadows by itself, and then handing it to the accumulation buffer to be blended with the rest of the lights/shadows? I can get my head around that.

I'm also assuming that if I wanted my non-lit areas to be black, I'd use black as my clear color for my light buffer FBO and my accumulation FBO. If I wanted a very dim red ambient light before other lights were added, I'd clear with (.2,0,0), etc. etc. This makes a lot of sense.

The problem I have (and, somewhat related: does blendFunc(GL_DST_COLOR, GL_ZERO) give the same result as what you said?) is this:

If you're using that particular blend function to blend your accumulation buffer into your scene, it is impossible to obtain a fragment color higher than one with multiplicative blending, since we are not doing any sort of HDR in these steps. Is this correct? The result in our setup right now is that if we make the light in the screenshot from my first post any brighter, there are several places where the fragment color is too bright to be blended into the scene at its true "light" value. For example, if we use 0.2 as our clear color for the light map and have a light with a color intensity of 1f, a circle in the middle of the light's area will be uniformly 1.0 because of the restrictions of multiplicative blending. Is bloom the only way to get around this and make the centers of lights actually BRIGHTEN the scene past their native texture color?

Yes, I clear it with (0, 0, 0, 0) for each light. Sorry, forgot that. The same goes for the accumulation buffer, but once per FRAME. For ambient light, clear the accumulation buffer with the ambient color.

Bloom will not solve the uniform circle in the middle of the light. Even if you use floating point render targets you will still have it clamped to 1, as your screen can only show 256 different shades of each color channel between 0.0 and 1.0. To get rid of that block in the middle you need to use floating point render targets and tone mapping to map your color from HDR to a screen color. Tone mapping is a hacky way of "displaying" colors brighter than 1.0 by making the color shown on the screen non-linear. Basically you need a shader which takes the complete HDR backbuffer and applies a tone mapping function on each pixel in the fragment shader. The simplest one is this:

toneMappedColor = color / (color + 1);

This will make the color actually displayed on the screen approach 1.0 as the actual HDR color approaches infinity. In other words, it will get closer to 1.0 but never actually reach it, ensuring that there is always a brighter color available.
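As a full-screen pass, that Reinhard-style curve might be sketched like this (the sampler name is an illustrative assumption, and the texture coordinates assume the fixed-function gl_TexCoord path available in GL2):

```glsl
// Tone mapping pass: reads the HDR back buffer, writes an LDR color.
// 'hdrScene' is a placeholder uniform name.
uniform sampler2D hdrScene;

void main() {
    vec3 hdr = texture2D(hdrScene, gl_TexCoord[0].st).rgb;
    // Reinhard curve: approaches 1.0 as the input approaches infinity
    vec3 ldr = hdr / (hdr + vec3(1.0));
    gl_FragColor = vec4(ldr, 1.0);
}
```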

Bloom is a way of presenting the HDR information better. In real life, because your eyes' lenses contain impurities and dust in the air reflects a small amount of light, the brighter an object is, the "larger" it will seem. Bloom simulates this effect in games by adding an image containing the brightest parts of the scene, blurred, on top of the scene. It just makes the scene look brighter by showing it more as we'd expect to see it in real life, without actually increasing its brightness. To get a better blur and better performance you usually downsample this bloom texture. It's also common to use more than one downsampled texture, for example one 1/2 sized, one 1/4, etc. I'd love to explain more on how to actually implement a bloom effect.

Sorry for the slow answer, got caught up in an ass-slow League of Legends game. T_T

You've got me intrigued here. I'm assuming this is why most older 2D computer games, or ones designed for compatibility (e.g. Terraria), don't really implement this type of stuff.

We're developing a side-scrolling shooter like Metal Slug or Contra, but including RPG elements like Mass Effect, with random items like Diablo. So the desire to have some decent lighting stems from the fact that we aren't going for Super Mario Brothers 2 here. The issue is ensuring compatibility with older GPUs so that all of our target gamer audience can enjoy the title.

In the interest of curiosity though, and having advanced bloom for those users whose computers can handle it, I'd love to hear more about what you have to say, if you would actually "love" explaining it Thanks again.

EDIT: A good example of the style of lighting we're going for (possibly with more shadows than they have in some scenes) is COBALT by Oxeye Game Studios. I don't know if you've seen it, but if you YouTube "Cobalt", their action teaser is one of the first few links. Everything just looks so pretty and bright when they want it to, but it doesn't overwhelm the scene. I'd just love to have a name for that "super brightness" they achieve for bright lights, projectiles, and explosions, so that I could go around forums asking questions without sounding like an idiot, because I could actually explain what I want.

I agree that HDR in a 2D game might be a little overkill, but it would look somewhat better, more accurate and believable. You would however limit yourself to slightly newer hardware (about 5 years old). For light effects similar to Cobalt's it would indeed look awesome. However, what Cobalt is doing isn't that close to what you are doing. Your lighting is much better. Their explosions are "just" a light texture, probably with nothing calculated in shaders. No shadows or anything fancy. The bloom effect they achieve has nothing to do with brightness on the screen; it's just how the light texture looks. Of course they also have smoke and debris particles, but that's just to make it look like an actual explosion (nothing to do with the lighting). With your lighting you could make a lot more awesome effects. I would LOVE to see that, so don't take the short road! If you want the game to be playable on low-end hardware, just add different graphics settings:

Low: no lighting at all (Intel shit don't support FBOs)
Medium: basic LDR lighting
High: full HDR lighting with bloom and tone mapping

The only thing that changes is the lighting calculations and the use of an HDR back buffer texture, so this is completely transparent to the rest of the game. It should be easy to implement these graphics settings.

To implement bloom you need 2 floating point textures per bloom level. You'll have to experiment with how many levels give the best result. To extract only the bright parts of the screen you use a small shader that reduces the color slightly and then clamps it to 0 if it becomes negative (floating point textures, remember?). You'll also need 2 blur shaders, one horizontal and one vertical. A separable Gaussian blur is what I used, which gives a nice round blur.

The basic idea is to copy the back buffer texture to the first bloom texture using the brightness reducer. Then, for a number of passes (1-3 or so), you ping-pong between the bloom textures to blur the screen, first horizontally and then vertically. Then copy this blurred image to the next, half as large, bloom level and repeat. Finally you draw all blur levels (preferably using a multitexturing shader) to the back buffer using additive blending.

The difference between a good bloom effect (like in Mass Effect) and a bad bloom effect (like in Call of Duty 4+) is shimmering and aliasing/blockiness, which easily appear because we're reducing the resolution of the scene for the bloom. The CoD one looks like complete crap.

Small note: When I say copy, I just mean a fullscreen pass to copy the texture.
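The "brightness reducer" described above could be sketched like this (the sampler and threshold names are assumptions, and the threshold value is a tunable):

```glsl
// Bright pass: subtract a threshold and clamp negatives to zero.
// Only makes sense with floating point render targets, where colors
// above 1.0 survive. 'scene' and 'threshold' are placeholder names.
uniform sampler2D scene;
uniform float threshold;  // e.g. 1.0 -- only HDR overshoot blooms

void main() {
    vec3 c = texture2D(scene, gl_TexCoord[0].st).rgb;
    gl_FragColor = vec4(max(c - vec3(threshold), 0.0), 1.0);
}
```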

For HDR rendering:

- Keep a single frame buffer object, created during game initialization. You can attach textures to this FBO without having to bind different FBOs.
- You need a single RGB FP16 texture to use as a back buffer. This is what you will render your scene to.
- Keep 2 RGB FP16 textures for lighting (light buffer and accumulation).

9. Disable scissor test.
10. Attach the back buffer texture to the FBO.
11. Enable the (GL_ZERO, GL_SRC_COLOR) blend func and draw the accumulation texture to the back buffer.

You now have a fully lit scene, which is in HDR. If you want some objects to be unaffected by lighting, now is the time to draw them. Otherwise, it's time to apply bloom (though some do it after tone mapping for some stupid reason).

12. For each bloom level:
    1. Attach the bloom level's first texture to the FBO.
    2. If it's the first bloom level: draw the back buffer to the bloom texture using the brightness reducer shader. Else: downsample the previous bloom level.
    3. For each blur pass:
        1. Attach the second texture to the FBO.
        2. Draw the first texture to the second using a horizontal blur shader.
        3. Attach the first texture to the FBO.
        4. Draw the second texture to the first using a vertical blur shader.
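The horizontal half of the separable Gaussian blur might look like the sketch below (the vertical pass is identical with the offset applied to the t coordinate instead; sampler/uniform names are placeholders, and the 5-tap binomial weights 1-4-6-4-1 divided by 16 are just one reasonable choice):

```glsl
// Horizontal pass of a separable 5-tap Gaussian blur.
// Weights are binomial (1,4,6,4,1)/16 = 0.0625, 0.25, 0.375.
uniform sampler2D src;
uniform float texelWidth;  // 1.0 / texture width

void main() {
    vec2 uv = gl_TexCoord[0].st;
    vec3 sum = texture2D(src, uv).rgb * 0.375;
    sum += texture2D(src, uv + vec2(texelWidth, 0.0)).rgb * 0.25;
    sum += texture2D(src, uv - vec2(texelWidth, 0.0)).rgb * 0.25;
    sum += texture2D(src, uv + vec2(2.0 * texelWidth, 0.0)).rgb * 0.0625;
    sum += texture2D(src, uv - vec2(2.0 * texelWidth, 0.0)).rgb * 0.0625;
    gl_FragColor = vec4(sum, 1.0);
}
```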

13. Draw all bloom levels to the back buffer with additive blending, using a single-pass multitexturing shader.
14. Unbind the FBO (bind FBO 0).
15. Draw the HDR back buffer to the LDR screen back buffer using a tone mapping shader.
16. Draw the game UI directly to the screen back buffer.
17. Enjoy another goddamn awesome frame of your game!
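Step 13's single-pass combine could be sketched like this (assuming four bloom levels bound to consecutive texture units; the sampler names and level count are illustrative):

```glsl
// Adds four bloom levels in one full-screen pass, drawn with
// additive blending into the back buffer. Each sampler is a
// progressively downsampled bloom texture; bilinear filtering
// stretches the small levels back to full screen.
uniform sampler2D bloom0;  // full resolution
uniform sampler2D bloom1;  // 1/2 size
uniform sampler2D bloom2;  // 1/4 size
uniform sampler2D bloom3;  // 1/8 size

void main() {
    vec2 uv = gl_TexCoord[0].st;
    vec3 sum = texture2D(bloom0, uv).rgb
             + texture2D(bloom1, uv).rgb
             + texture2D(bloom2, uv).rgb
             + texture2D(bloom3, uv).rgb;
    gl_FragColor = vec4(sum, 1.0);
}
```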

If you need actual code examples (I found floating point texture setup to be insanely cryptic and weird), just ask.

Another insanely long post by me. I need to get some sleep and/or a life... I'll check back in tomorrow...

Well I found one error with your long and insanely f*cking awesome post.

Quote

16. Draw the game UI directly to the screen back buffer.
16. Enjoy another goddamn awesome frame of your game!

Enjoy another goddamn awesome frame of your game! - Should be step 17.

But I jest. Seriously, thank you; this is way more than I could ever ask for. The fact that you're willing to provide example code is extremely generous, but please don't write any from scratch if it takes a while. We've had to figure out a lot of the other cryptic aspects of OpenGL as well, and we don't really know anything about the implementation of FBOs, MUCH less floating point textures.

The concept of "bloom levels" is still a bit funky to me. I guess it just means repeated applications of the blur using mipmap-like smaller copies of the texture, but I could be way off on that. I'm guessing that's what you mean when you refer to downsampling.

I'm also a bit confused on how the lighting+scene process allows for HDR, is that one of the benefits of floating point textures?

Definitely get some sleep, you earned it. I'll need tons of time to process this post in its entirety and begin my trek to understanding FBOs.

Copy-pasta fixed. I shouldn't post while I'm so tired, but you don't seem to be complaining...

FBOs are a little tricky to get working. To be honest I still haven't figured out why you get the result on the texture upside down, but I think it's because when rendering, the bottom-left pixel is (0, 0) (or actually (-1, -1), but whatever), while textures seem to use the top left, so you get the result upside down... I dunno.

Hehe, "bloom levels" was just a word I made up on the fly, and my explanation was... pretty bad I guess. xD I realize I wouldn't understand it either. I'm gonna try this again. xD

The basic concept of bloom is that you copy the parts of the scene that are brighter than a certain threshold to another floating point texture, blur it, and then add it back to the original scene. However, using only a single blur pass will only give you a bloom that sticks out 2-3 pixels from the bright objects, regardless of how bright they are. You could increase the blur kernel size or do more than one blur pass, but this is not a good idea for two reasons: performance and "quality". Performance-wise, doing more texture lookups (for the blur kernel) or more fullscreen passes (2 per pass) is a really bad idea. Secondly, the bloom doesn't look very good even if you increase the blurriness; I just don't think it looks as it should.

Instead of more expensive passes or more of them, we can use multiple resolutions of the blur. If we downsample the scene to half size (width/2, height/2), we get double the blur at 1/4th the performance cost, with only a very small hit to quality. This reduction in quality manifests as shimmering, and sometimes blockiness if the downsampling is implemented badly; however, it is possible to completely avoid this. It therefore also makes sense to use even smaller textures than just full size and half size. The performance hit gets smaller and smaller as the resolution shrinks, so using more of them is basically free and improves the look of extremely bright objects a lot. These are what I called bloom levels. Like you said, the layout is a little bit like mipmaps, but we will use them all later.

So we draw the scene to the first full-sized bloom level texture using a brightness threshold shader and blur it. We have our first level of bloom. We then draw the first bloom level to the second level's half-sized texture and blur it. We have our second level of bloom. Then we draw the second bloom level to the third level's 1/4th-sized texture and blur it. We have our third level of bloom. Well, you see the pattern by now. Of course it doesn't make sense to have too-small textures (they will approach 1x1 xD), so you shouldn't go below maybe 1/64 or 1/32, but you can just experiment later. Also note that you only need a single texture lookup when downsampling, as you get the average of 4 pixels for free thanks to bilinear filtering.

Finally we just add all these blurred bloom textures to the HDR scene again. This will actually increase the average brightness of the scene a lot, so you may want to reduce the brightness a little (perhaps 50% original scene, 50% bloom?). Again, just experiment. We then just do the tone mapping. I hope it's clearer now.

The concept of FBOs is pretty simple. They are just a collection of color attachments (either textures or renderbuffers), a depth attachment (a z-buffer), a stencil attachment, etc. You'll just want to use a single color attachment at a time (a floating point texture), and you don't need a depth or stencil attachment. Getting FBOs to work, however, is quite hard. All attachments have to have the same resolution, and only later DX9-class hardware can render to floating point textures. Some combinations of attachments aren't allowed either. Things get even crazier if you want multisampling, but you're doing a 2D game so we don't need it.

HDR stands for High Dynamic Range, which means that we have a higher color resolution than we can actually display and need to "dynamically" compress it. The concept comes from the fact that our eyes don't perceive something twice as bright as twice as "bright". As we use 16-bit floating point textures, we have better accuracy for low color values while still being able to store extremely bright values. For example, rounding errors can cause banding in normal rendering, which is eliminated with floating point textures. I can show you some examples if you want.

Yeah... that clears it up a bit. Implementation is a whole other matter for me; right now I'm having trouble just converting our scene to use an FBO. Currently we draw a lightmap using copytex2d and save it as an OpenGL texture object; I haven't gotten to converting this yet. I then create the framebuffer as follows:

This is in our scene's render method; later we'll encapsulate all of the references to the framebuffers as static instances in our GameWindow, so that we set them up once when we initialize the window and simply make calls to them from our scene's render, saving a lot of performance. This is just for testing, however. The framebuffer checks out as complete, and then I simply render my scene as normal. We use a viewport and gluLookAt in order to get things to draw at the right place, and render the scene's images this way. At the end we just draw a quad over the scene with the lightmap texture and the blend mode (DST_COLOR, ZERO). Then I execute the following:

This, I think, draws the contents of the frame buffer to the screen. It's obviously doing something right, but something with how we're rendering our scene is preventing it from being displayed along with the lightmap. It's as though we had GL Blend disabled, but we don't. The lightmap is simply all that's being displayed at the end.

Wow, you're using JOGL. I'm using LWJGL, so please don't hate me if a GL11 or so slips by...

The reason I found framebuffers so tricky was that they wreak havoc on your viewport and matrices. I'm currently working on a small bloom implementation to use in my own future games using only OpenGL 3.0. If you can only see your light map, then I guess the problem lies in your matrix settings for the scene rendering. Framebuffers don't reset your viewport or your matrices, but they do interpret them differently if you have a different-sized texture attached to the FBO. The easiest way to get it working is just to call glViewport directly after binding, and also glLoadIdentity and setting up your matrices again. I don't know if this will help though.

Oh, now I see. You forgot to bind the framebuffer. You're checking the completeness status of the default backbuffer, which always returns GL_FRAMEBUFFER_COMPLETE of course. Before you attach your texture you have to call:

gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, screenFBO);

Hehe. Let me just tell you a story about my battle against textures a few weeks ago. I was trying to get 2D texture arrays working, but couldn't get anything to display on the screen no matter what. Turns out I forgot to bind the texture before uploading it, so the texture was never actually uploaded and silently sampled as black. Same thing for you, but with framebuffers.

Yes, we're using JOGL. We started out with LWJGL before we knew anything about OpenGL, decided it was over our heads, and moved to Java2D. Eventually we learned that using OpenGL was inevitable, and the first thing we found when we googled "Java OpenGL" was obviously JOGL. We were ignorant of the fact that LWJGL did it all for us to begin with. Hopefully it won't cause too many problems down the road, but JOGL so far is fine.

I'm wondering where that little code snippet should be placed. Sorry for the weird naming: I called my framebuffer int 'backbuffer' since I'm basically drawing everything in the scene to it, even though the backbuffer is actually the screen, right?

Tried resetting the modelview right after binding, not sure if I'm binding at the wrong time or what. In order to correctly use the game coordinates we usually use this:

We don't really use glViewport anymore, with this code present when we want to "reset to the correct place", it just works. Except now. ^^

EDIT: I feel like the thing we're doing wrong is not correctly drawing our scene textures to the frame buffer. There must be some code between the "beginning" and "end" to correctly bind the texture before drawing it that we didn't previously have to use, which is somehow working for the lightmap. I got the impression, though, that when you bind a framebuffer you can just draw to it as though it were the screen.

You generated your framebuffer, but you didn't bind it! Directly after

gl.glGenFramebuffers(1, ibuf);

you also need to bind it with glBindFramebuffer. Your FBO setup code is doing nothing!

The backbuffer is actually what you render to if you don't use any FBOs. However, I think it's fine calling a substitute for this a "backbuffer" too, as you use it as one, but remember the difference between an FBO and an attachment! An FBO is nothing more than a container. The actual "backbuffer" would be the texture attached to it.

Which should make this the active frame buffer that I'm rendering to. I think something with the coordinate system is messing this up. When we render our light texture, which we create using the old-school method, we do more or less the same steps we do for rendering our entities, except the quad for the lightmap is full screen and the same size as the frame buffer texture.

Assuming it has something to do with the coordinate system, the second half of the code I posted before still confuses me, particularly the part before GL_QUADS:

java-gaming.org is not responsible for the content posted by its members, including references to external websites and other references that may or may not have a relation with our primarily gaming and game production oriented community. Inquiries and complaints can be sent via email to the info-account of the company managing the website of java-gaming.org.