About this:
You can see in some places where it overlaps other parts of the environment (e.g. the tree stumps)

I would not be surprised if these were just additive-blend polygons built on the CPU and drawn directly onto the scene once all the solid surfaces have been drawn. The lack of depth testing suggests they were worried about these polygons clipping into ramps and the like, and that they reject the light pools early to stop them showing through walls.
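A minimal sketch of what that compositing step amounts to, assuming plain additive blending (names here are illustrative, not the game's actual code): the solid pass is drawn first, then the light-pool polygons are blended on top with depth testing off, so each pool fragment simply adds to the framebuffer and saturates.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Additive blend, as fixed-function hardware would apply it with depth
// testing disabled: the pool fragment adds to whatever the solid pass
// already wrote, saturating at 255. dst' = min(255, dst + src).
// With no depth test, the pool just draws over tree stumps etc. instead
// of clipping against them.
inline std::uint8_t additive_blend(std::uint8_t dst, std::uint8_t src) {
    return static_cast<std::uint8_t>(std::min(255, int(dst) + int(src)));
}
```

This also explains the washed-out overlap look: two pools over the same pixel just add twice and clamp.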

@james wild Yes, I also noticed how they sometimes bleed over the corner of a wall or a column. I assumed that was an intentional effect, kind of like cel-shading bloom, but your reasoning sounds more likely! The little tree remains in the bottom right clearly show that it doesn't draw "behind stuff" in this image; however, they do get blocked by the torch stands:

making me believe that there is actually a tech difference between the small and the large ones ...

You picked a great game to break down. It looks uber simple to people, but there is so much going on behind the scenes. Showing this to beginners is going to really help people get a handle on how games are made.

I'd imagine that the smaller ones, being so large in number, do a search for the nearest static polygon, and then draw only if that point has line of sight with the camera. This wouldn't work for the larger sources: they're big enough that you'd notice the light pop into view as you walked into the room. The larger sources, being mostly static, are probably just animated models with some code to change the lighting of any dynamic object that walks into them.
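A sketch of that line-of-sight test, assuming occluders can be approximated as boxes (all names and the slab-test approach are my own illustration, not anything from the game): a small light draws only if the camera-to-light segment misses every occluder.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct AABB { Vec3 lo, hi; };  // axis-aligned occluder box

// Slab test: does the segment from a to b pass through the box?
bool segment_hits_box(const Vec3& a, const Vec3& b, const AABB& box) {
    double tmin = 0.0, tmax = 1.0;
    const double A[3] = {a.x, a.y, a.z}, B[3] = {b.x, b.y, b.z};
    const double L[3] = {box.lo.x, box.lo.y, box.lo.z};
    const double H[3] = {box.hi.x, box.hi.y, box.hi.z};
    for (int i = 0; i < 3; ++i) {
        double d = B[i] - A[i];
        if (std::abs(d) < 1e-12) {           // segment parallel to this slab
            if (A[i] < L[i] || A[i] > H[i]) return false;
        } else {
            double t0 = (L[i] - A[i]) / d, t1 = (H[i] - A[i]) / d;
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmin > tmax) return false;   // slabs don't overlap
        }
    }
    return true;
}

// A small light pool is drawn only if nothing blocks the camera->light segment.
bool light_visible(const Vec3& cam, const Vec3& light,
                   const AABB* occluders, int n) {
    for (int i = 0; i < n; ++i)
        if (segment_hits_box(cam, light, occluders[i])) return false;
    return true;
}
```

One such test per small light per frame is cheap if the occluder set is kept tiny (e.g. just the nearby torch stands).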

Wouldn't that require a raycast per light source? That sounds mighty expensive. Or can this kind of stuff get extracted from the z-buffer (probably with a frame of delay)?

A friend from Mapcore, "redyager", just let me know that it's apparently possible to import Wind Waker models into Mario Galaxy 2 and use that as a model viewer somehow. Thought you might find these different perspectives interesting too:

Consoles have a huge leg up on the number of draw calls per frame. The numbers change every few years, but console games can get away with tens of thousands, while PC games need to keep it around 500 per frame to not choke. It's just a side effect of the architectures. x86 tech is a bitch and has lots of backwards-compatibility quirks.

New x86 processors actually have to design around, and even emulate, bugs that were in earlier x86 chips! The architecture kept getting faster, but not better. Also, the BIOS you see when a PC boots up has always just been a series of hacks over the original old-school BIOSes to support newer stuff.

Consoles are designed fresh, and don't have this baggage.

BSP isn't the only way to sort a scene; it's Doom and Quake (1-3) era tech, and Zelda is not a corridor shooter. There is brute forcing, there are octrees, and all kinds of other methods. When a game is capped at 640x480, with most TVs only ever showing the inner 512x384, you can get away with overdraw.
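For the octree case, the core operation is just deciding which of a node's 8 children a point falls into. A tiny illustrative sketch (my own, not from any particular engine):

```cpp
#include <cassert>

// One building block of an octree: which of a node's 8 children contains
// a point? Octants get a 3-bit index relative to the node's center:
// bit 0 = +x half, bit 1 = +y half, bit 2 = +z half.
int octant_index(double px, double py, double pz,
                 double cx, double cy, double cz) {
    return (px >= cx ? 1 : 0) | (py >= cy ? 2 : 0) | (pz >= cz ? 4 : 0);
}
```

Descending this three-comparisons-per-level structure is how an octree culls most of a scene without the preprocessing a BSP needs.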

There was never an 8-light limit. The old OpenGL spec said that every implementation needed to provide AT LEAST 8 sets of lighting constants, but any OpenGL implementation could expose as many as it wanted. So...
-Not everyone used OpenGL's lighting to do their lighting. Lots of other tricks.
-No one used all 8 lights! Too taxing. 2-3 at most.
-You could render with 8 lights, then set another 8 and render again. It was only 8 PER DRAW CALL, just like you have one set of textures and a shader during a draw call. But almost no one did this.
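The multi-pass idea above can be sketched on the CPU like this (illustrative names, and a scalar "intensity" standing in for the full lighting equation): lights are processed in batches of 8, and every pass after the first is additively blended, so the passes sum to the same result as one big pass.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// How many draw calls (passes) it takes to apply n lights when the
// hardware only holds `per_pass` sets of light constants at a time.
int passes_needed(int n_lights, int per_pass = 8) {
    return (n_lights + per_pass - 1) / per_pass;
}

// Passes after the first are drawn with additive blending, so the
// per-pass contributions simply sum in the framebuffer.
double shade_multipass(const std::vector<double>& intensities,
                       std::size_t per_pass = 8) {
    double framebuffer = 0.0;
    for (std::size_t i = 0; i < intensities.size(); i += per_pass) {
        double pass = 0.0;  // one draw call's worth of lighting
        std::size_t end = std::min(i + per_pass, intensities.size());
        for (std::size_t j = i; j < end; ++j) pass += intensities[j];
        framebuffer += pass;  // additive blend into the framebuffer
    }
    return framebuffer;
}
```

The catch, and why almost no one did it: every extra pass is a full extra draw of the geometry.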

What would happen was, for every dynamic object drawn (anything that wasn't part of the static light-mapped geometry), you would sort all the lights, pick out the closest N, and render with those:
-Sort lights, set light constants
-Set material (textures, draw mode, etc.)
-Draw polygons
This is visible in the picture you have with Link between the 2 torches.
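The light-sorting step might look something like this (a sketch under my own naming, not any engine's actual code): sort the scene's lights by squared distance to the object and keep the closest k, which are the ones whose constants get set before the draw call.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct Light { double x, y, z; };

// Before each dynamic object's draw call: sort lights by distance to
// the object and keep the closest k. Squared distance avoids a sqrt.
std::vector<Light> closest_lights(std::vector<Light> lights,
                                  double ox, double oy, double oz,
                                  std::size_t k) {
    auto d2 = [&](const Light& l) {
        double dx = l.x - ox, dy = l.y - oy, dz = l.z - oz;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(lights.begin(), lights.end(),
              [&](const Light& a, const Light& b) { return d2(a) < d2(b); });
    if (lights.size() > k) lights.resize(k);
    return lights;
}
```

With Link standing between two torches, k = 2 would pick exactly those two, which matches what the screenshot shows.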

Dynamic IK systems are nothing new; Uncharted is lucky if it's in the first 1000 games to implement one. Any game that uses skeletal animation can do IK easily. Dynamic IK is most likely what's used on the staff weapon with the cloth you have pictured near the top.
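To show how cheap this is on a skeleton, here is the classic analytic two-bone IK solve in 2D (a textbook sketch, not taken from any of the games discussed): given two bone lengths and a target, the law of cosines gives the elbow angle directly, and the shoulder angle follows from atan2.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Analytic two-bone IK in 2D, shoulder at the origin: find the shoulder
// angle a1 and elbow bend a2 that put the chain's end on (tx, ty).
// This is the kind of constant-time solve any skeletal-animation engine
// can afford to run per bone chain, per frame.
void two_bone_ik(double l1, double l2, double tx, double ty,
                 double& a1, double& a2) {
    double d = std::sqrt(tx * tx + ty * ty);
    // Clamp the target distance to the chain's reachable range.
    d = std::max(std::abs(l1 - l2), std::min(l1 + l2, d));
    // Law of cosines for the elbow angle.
    double c2 = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    a2 = std::acos(std::max(-1.0, std::min(1.0, c2)));
    // Aim the shoulder at the target, corrected for the elbow bend.
    a1 = std::atan2(ty, tx)
       - std::atan2(l2 * std::sin(a2), l1 + l2 * std::cos(a2));
}

// Forward kinematics, to verify the solve reaches the target.
void fk(double l1, double l2, double a1, double a2,
        double& ex, double& ey) {
    ex = l1 * std::cos(a1) + l2 * std::cos(a1 + a2);
    ey = l1 * std::sin(a1) + l2 * std::sin(a1 + a2);
}
```

A cloth strip like the one on the staff is usually the even simpler cousin of this: a chain of bones dragged around by a verlet or spring sim rather than solved toward a target.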

I took a look at some Super Mario Galaxy textures not so long ago, and I was astonished by the optimization! It's almost unbelievable to think that this game came out 10 years ago and still looks more than fine!