So I've got this big 2D image for a map, and to implement lighting I've got a lightmap image for it and a shader. The lightmap is a simple black-and-white image. I know how to use multitexturing the "normal" way, where you bind the texture and set glMultiTexCoords(), but what about when you want to use that texture as a lightmap for all the objects you are drawing? The lightmap must be used when drawing other objects over the map; for example, when drawing a player over the map at position (x,y), I must sample the lightmap at position (x,y). The problem is I have no idea how to resolve object coordinates to the correct texture coordinates in the shader.

For example: both the map image and the lightmap image are 800x600. I draw the map and use the lightmap normally with multiTexCoords(). Now I want to draw a player located at (100,100), but this time I can't set the lightmap texture with glMultiTexCoords(). How do I calculate the proper lightmap coords in the shader?

Lightmaps only work for static geometry casting shadows on other static geometry. You can't use them for moving shadow casters or for moving shadowed objects. You could use a simple projection to create a shadow on a single flat surface for moving objects, but if you want fully dynamic shadows, you need shadow mapping or volume (stencil) shadows.

Quote:

For example: both the map image and the lightmap image are 800x600. I draw the map and use the lightmap normally with multiTexCoords(). Now I want to draw a player located at (100,100), but this time I can't set the lightmap texture with glMultiTexCoords().

Sure you can. If your lightmap coords, for example, are in the range [0.0;0.0] to [0.8;0.6], you can calculate the player position in the lightmap with posX/lmWidth*lmRangeX and posY/lmHeight*lmRangeY (e.g. 100/800*0.8 resp. 100/600*0.6 => [0.1;0.1]). Do the same with the size of your player's sprite (e.g. 50/800*0.8 and 50/600*0.6 => [0.05;0.05]), add the position offset, and you have the range of texCoords you need in your lightmap (e.g. [0.1;0.1] to [0.15;0.15]).
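That arithmetic can be sketched in plain Java. The names (LM_WIDTH, toLightmapCoords, etc.) are just illustrative, using the example numbers from above:

```java
public class LightmapCoords {
    // Map/lightmap pixel dimensions and the used lightmap range, per the example above
    static final float LM_WIDTH = 800f, LM_HEIGHT = 600f;
    static final float LM_RANGE_X = 0.8f, LM_RANGE_Y = 0.6f;

    // Convert a pixel position (or size) on the map to lightmap texture coordinates
    static float[] toLightmapCoords(float posX, float posY) {
        return new float[] { posX / LM_WIDTH * LM_RANGE_X, posY / LM_HEIGHT * LM_RANGE_Y };
    }

    public static void main(String[] args) {
        float[] pos = toLightmapCoords(100, 100); // player position => [0.1;0.1]
        float[] size = toLightmapCoords(50, 50);  // sprite size, same formula => [0.05;0.05]
        // lower-left and upper-right texcoords of the player's quad in the lightmap
        System.out.printf("from [%.2f;%.2f] to [%.2f;%.2f]%n",
                pos[0], pos[1], pos[0] + size[0], pos[1] + size[1]);
    }
}
```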

You can play around with the scaling factor to get the illusion that the player moves a bit in the foreground (so that he enters the shadows sooner or later depending on the "position" of the lights and occluders).

Depends on what you can live with. This simply darkens the player if he is in the same spot as a shadow on the map. With some offset/scaling of the coordinates respecting the distance to the nearest light, you can create more realistic effects - but that's surely more of an artistic challenge based on trial and error.

Usually with 3D light mapping, the light map is only applied to the static terrain, because it can be computed offline with a nice raytraced algorithm (nowadays), and then the dynamic characters are lit with regular shaders that are tweaked to match the lighting of the lit terrain as closely as possible.

Since they both use the same light locations, the effect will always be pretty close; the rest is just how accurate a lighting equation is used, and when running around most people can't notice the difference.

In a 2D game, it's easier to use the light map with the player, although I think it could possibly give odd results. If the light map is just in/out of shadow information for a flat 2D world things will look fine. If your 2D world is a 3D world projected onto 2D (e.g. isometric or some other style), the light map will have the orientation of the 3D surfaces built into it, which when combined with the player will look odd, since the player's curves will not be lit appropriately. This might be nitpicking for a 2D game though and really just depends on the complexity of the scene, otherwise cylab's approach will work fine.

Yes, it's a fully 2D game and I don't use 3D models at all. So far I only want to do static lighting with a global lightmap as described; later on I was thinking I could add dynamic lights by drawing on the lightmap (texture). And I don't need shadows yet, maybe in another game.

Quote from: cylab

Sure you can. If your lightmap coords, for example, are in the range [0.0;0.0] to [0.8;0.6], you can calculate the player position in the lightmap with posX/lmWidth*lmRangeX and posY/lmHeight*lmRangeY (e.g. 100/800*0.8 resp. 100/600*0.6 => [0.1;0.1]). Do the same with the size of your player's sprite (e.g. 50/800*0.8 and 50/600*0.6 => [0.05;0.05]), add the position offset, and you have the range of texCoords you need in your lightmap (e.g. [0.1;0.1] to [0.15;0.15]).

Yes, exactly - very good example, but I thought I could skip this kind of calculation, since all the numbers I need are available; I just don't know how to calculate it in the shader. I thought that calculating in the shader would be faster, as you don't have to change OpenGL state (which is slow, as I've heard) by making lots of glMultiTexCoords() calls. The second reason is that it would be nice if you could just add lighting like that, with the shader and lightmap, to your game without other modifications. You could just add the lightmap and shader to your game and that's it - other games could use it with no need for extra glMultiTexCoord() calls.

I'm working on 2 approaches now:

1) send the lightmap position as a uniform and multiply it with gl_ModelViewMatrix
2) send the lightmap position in screen pixels and use gl_FragCoord to calculate the proper texture coordinates

Any advice is still welcome. I'll keep you updated here if I make it work.

You basically just want to sample the player location from the light map? Why don't you just do it backwards? Load or generate a lightmap and just draw it on top of the complete scene. Would that give you the effect you want?

Yeah. Render the unlit scene to a framebuffer and use that as the texture of a full-screen quad in a second pass. Applying the lightmap there is a no-brainer, and you can use this approach for other post-processing effects, too.
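A sketch of what that second-pass fragment shader might look like (the uniform names here are made up, not from the thread): the unlit scene is rendered to an FBO texture, then drawn as a full-screen quad while the lightmap is sampled with the same texcoords and multiplied in.

```glsl
// Hypothetical post-process fragment shader: combine the unlit scene
// (first pass, rendered to an FBO texture) with the full-screen lightmap.
uniform sampler2D scene;    // first-pass render target
uniform sampler2D lightmap; // black/white light map

void main() {
    vec4 color = texture2D(scene, gl_TexCoord[0].st);
    vec4 light = texture2D(lightmap, gl_TexCoord[0].st);
    gl_FragColor = color * light; // darken where the lightmap is black
}
```

Since both textures cover the full screen, the same texture coordinates address matching pixels in each.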

Quote:

You basically just want to sample the player location from the light map? Why don't you just do it backwards? Load or generate a lightmap and just draw it on top of the complete scene. Would that give you the effect you want?

Yeah, but that means drawing a fullscreen-resolution texture twice with blending: once for the unlit scene and a second time for the lightmap. Wouldn't it be much faster if I combine it within the shader itself in only one pass?

That depends on how much overdraw you have. By adding the lighting to the scene afterwards you basically separate lighting from the scene rendering (similar to deferred rendering in 3D), giving you a constant cost for rendering it. Having each object have to lookup its light values would make each object slightly more expensive to render. If you have lots of overlapping objects (remember that if you have objects on top of the terrain, you're sampling the lightmap twice there), applying lighting in the end will be faster. My point is that if you have few objects, it doesn't really matter performance wise which one you use, but if you have many objects, doing lighting as a separate pass is faster.

Actually I wasn't so concerned with the objects, as they are mostly small images like 48x48... but at the end you must draw the lightmap, which is fullscreen 1920x1080, and with blending on it affects every pixel - and that's a big slowdown (I assume), as OpenGL will recalculate every pixel and combine it with the existing one. A combined calculation in the shader eliminates that second fullscreen render of the lightmap.

Note I didn't actually test that scenario, but I did render the scene to an FBO and then to the screen (which is 2x fullscreen drawing), and that was slow. Combined with some knowledge about what is fast in OpenGL, I think drawing the lightmap afterwards will be noticeably slower than the mix-in-shader approach. Ah well... it seems the only thing left to do is to actually implement the thing and try it out.

Got it! The solution is so simple. Thanks everyone for chipping in - I hope this will be of some use to you.

So I finally realised that texture2D(texture, texCoords) takes texCoords as normalized [0..1] values... and then everything became much simpler. By using gl_FragCoord and the screen width and height, you can calculate where your pixel is on the screen in the [0..1] range and use those coordinates for the lightmap texture (which is fullscreen).

Ok, this is the final frag shader with the camera in mind. The camera position is passed as normalized [0..1] values relative to the texture size (e.g. a camera position x of 0.25 would mean we start the render one fourth of the way along the x axis).
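The shader itself isn't quoted above, but a reconstruction from the description would look something like the sketch below (the uniform names screenSize and cameraPos are assumptions, not the poster's actual names):

```glsl
// Reconstructed sketch of the described shader -- uniform names are assumptions.
uniform sampler2D texture;   // the sprite/map texture currently being drawn
uniform sampler2D lightmap;  // full-screen lightmap
uniform vec2 screenSize;     // e.g. (1920.0, 1080.0)
uniform vec2 cameraPos;      // normalized [0..1] camera offset into the lightmap

void main() {
    vec4 color = texture2D(texture, gl_TexCoord[0].st);
    // gl_FragCoord is in window pixels; normalize to [0..1] and add the camera offset
    vec2 lmCoord = gl_FragCoord.xy / screenSize + cameraPos;
    vec4 light = texture2D(lightmap, lmCoord);
    gl_FragColor = color * light;
}
```

Because gl_FragCoord already identifies the pixel's screen position, every object drawn with this shader samples the correct spot in the lightmap without any extra glMultiTexCoord() calls.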

Sure, that obviously works, but I just want to advertise a second pass a little more.

You said you have a number of 48x48 sprites. These have a pixel area of 48*48 = 2304 pixels. A 1920x1080 screen has 1920*1080 = 2073600 pixels. In theory, your method is faster as long as the total area of your sprites is lower than the area of the screen (assuming no overdraw), so you can draw 2073600/2304 = exactly 900 sprites. Beyond that number, no matter where your sprites are, you're sampling from your lightmap more than you would with a second pass. There's just one thing we haven't considered yet: your terrain rendering. I think it's safe to assume that your terrain (or background or whatever) covers the entire screen, meaning you're already sampling the whole visible part of the lightmap once, and all the sampling for your sprites is just sampling the same places again.
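The break-even arithmetic above is easy to verify:

```java
public class BreakEven {
    public static void main(String[] args) {
        int screenPixels = 1920 * 1080; // pixels sampled once by a full-screen second pass
        int spritePixels = 48 * 48;     // pixels per sprite (no overdraw assumed)
        // number of sprites whose total area equals one full screen
        int breakEven = screenPixels / spritePixels;
        System.out.println(breakEven);  // prints 900
    }
}
```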

Of course, doing a second full-screen pass is more costly as it requires blending. Performance will also differ between cards. I believe blending is done by the ROP units of your graphics card, while texture sampling is done by the texture sampling units. How many of these units your card has, and how powerful they are, determines where a second pass becomes more effective relative to how many sprites you have. Funnily enough, blended rendering isn't slower at all with my card. I believe the ROP units are balanced for MRT / deferred rendering, which requires about 8x more frame buffer performance than normal rendering. Blending just uses one read and one write, compared to the 8 writes that deferred rendering requires. As this is a completely separate part of the graphics card that is left underused in your game anyway, it is basically free compared to normal rendering. It does still need a second pass, though, so it isn't free - just a lot less expensive than you might think.

By simply sampling the lightmap for each sprite pixel, you increase the rendering cost of your already texture-limited fragment shader by a certain amount. By separating the lighting and sprite rendering, you reduce the cost of rendering each sprite while paying a constant-cost second pass.

A second pass would be (slightly) slower for a low number of sprites, but scales better with more sprites, meaning it has a lower max FPS but a higher min FPS. The trade-off is similar to the one made in deferred shading, where the idea is to separate the cost of rendering objects from the cost of rendering each light.

Of course, rendering each triangle is much more expensive with deferred rendering due to the very high bandwidth required. However, this is easily made up for if you have hundreds or even thousands of lights, as you don't have to render the same triangle more than once. I hope you can see the similarities.

In the end, it's all just performance. Premature optimization is the root of all evil. You seem to have a working solution right now, so don't change anything unless you actually find this to be a bottleneck later, e.g. if you realize you want thousands of sprites.

Nice point! Glad the discussion went a little deeper than just finding the first solution - I always learn something from these kinds of discussions.

I still think the second pass is much slower whatever you do in practice. Let me explain.

Your calculation of 900 sprites is far from complete... you're just counting pixels drawn and ignoring the rest of the pipeline.

This is where it gets interesting... it's all multiplied, but "my additional sampling" is 5 very fast operations, while in the 2-pass scenario the pixel count is multiplied by 2 and is huge (1920x1080). Additionally, the "pipeline calcs for each pixel" must also be huge, with all those tests and transformations (matrix transformations, depth test, alpha test, scissor test, blending, vertex and frag shaders, ...).

So the final numbers for, let's say, 100 objects may be on the order of:

one pass cost: (1920*1080 + 100 * 48*48) * (1000 + 5)
two pass cost: (1920*1080 + 100 * 48*48) * 1000 + 1920*1080 * 1000

All the big cost, as I see it, is in the pixels and the pipeline operations themselves. Of course, this is all made up to a scale I think represents the speed of it; I don't know exactly, or even approximately, whether there are 1000 operations for a pixel to go through the whole pipeline (in fact pixels don't go through it, they are created, but let's call it the operation cost of rendering a single pixel to the screen). I think my point is clear: for a second pass to be worth it, the number of objects would have to be in the millions.
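Plugging those (admittedly made-up) constants into code shows what the model predicts for 100 objects; the constants 1000 and 5 are the poster's guesses, not measured values:

```java
public class CostModel {
    public static void main(String[] args) {
        long screen = 1920L * 1080;   // full-screen pixels
        long sprites = 100L * 48 * 48; // 100 sprites of 48x48 pixels
        long pipeline = 1000;          // guessed per-pixel cost of the whole pipeline
        long lookup = 5;               // guessed cost of the extra lightmap sample

        // one pass: every pixel pays the pipeline cost plus the lightmap lookup
        long onePass = (screen + sprites) * (pipeline + lookup);
        // two passes: same pixels without the lookup, plus a full-screen blended pass
        long twoPass = (screen + sprites) * pipeline + screen * pipeline;

        System.out.println(onePass < twoPass); // prints true: one pass wins in this model
    }
}
```

Note that the conclusion is baked into the chosen constants: if the per-pixel lookup cost were anywhere near the pipeline cost, the comparison would flip much sooner.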

All that said... I could simply test it: just comment out that line in the shader and make a second draw with blending. I'll report the results.

Quote:

I think my point is clear: for a second pass to be worth it, the number of objects would have to be in the millions.

Is that a challenge? xD I love challenges! Especially when I win them! Be sure to test at how many sprites a second pass becomes faster. It will probably be more than you have, but I can assure you it's not millions. 1 million sprites = 2 million triangles, and such a program is either vertex- or CPU-limited anyway, so both approaches would be equally fast. Be sure to post your specs, and hopefully a test program so I can run it too!
