The above code is very simple, but it is enough to demonstrate the issue. Suppose the array in the uniform block has been initialized through a uniform buffer object: lightSources[0].color1 = (0.25, 0.25, 0.25), lightSources[0].color2 = (0.25, 0.25, 0.25), lightSources[1].color1 = (0.25, 0.25, 0.25), lightSources[1].color2 = (0.25, 0.25, 0.25), and the uniform lightCount is set to 2.
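Since the original code is not shown in this thread, here is a minimal sketch of the kind of fragment shader being described. The names lightSources, color1, color2, and lightCount come from the thread; the struct layout, block name, and everything else are my assumptions:

```glsl
#version 150

// Hypothetical reconstruction of the shader described in the thread.
struct LightSource {
    vec3 color1;
    vec3 color2;
};

layout(std140) uniform Lights {
    LightSource lightSources[2];
};

uniform int lightCount;

out vec4 fragColor;

void main()
{
    vec3 color = vec3(0.0);
    // The variable index 'i' is where the AMD driver reportedly
    // returns zero for the second array element.
    for (int i = 0; i < lightCount; ++i)
        color += lightSources[i].color1 + lightSources[i].color2;
    fragColor = vec4(color, 1.0);
}
```

With the values above, two lights contributing (0.25 + 0.25) each should sum to (1, 1, 1).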
Run the program and the expected output color is (1, 1, 1). On an NVIDIA video card the output color is correct, but if I run this code on an AMD video card the result is (0.5, 0.5, 0.5). (I have an AMD Radeon HD 7770 card with the Catalyst 13.1 driver.)
It seems that the uniform values of the second array element in the uniform block are zero when the program evaluates them. I have checked that both lightSources[1].color1 and lightSources[1].color2 are active uniforms after the program is linked.
But if I change the light calculation code as follows, the output color is correct:
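The constant-index variant that works could look like this (again a sketch under the same assumptions as above):

```glsl
vec3 color = vec3(0.0);
// Each array element is addressed with a compile-time constant index,
// which works correctly on the AMD driver.
color += lightSources[0].color1 + lightSources[0].color2;
color += lightSources[1].color1 + lightSources[1].color2;
fragColor = vec4(color, 1.0);
```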

It looks like everything is fine when I access an array element with a constant index, but something strange happens when I use a variable as the index. So is this a feature of the GLSL language that I have not fully understood, or a bug in the AMD GLSL compiler?

I can assure you that my uniform buffer layout is correct, since I have queried the uniform offsets with glGetActiveUniformsiv(), and the results coincide with the layout of my uniform buffer. And of course a vec3 has a base alignment of 16 bytes; this is described in the OpenGL specification, and I follow the spec strictly :-)
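For reference, under std140 rules the offsets I would expect for the struct sketched earlier (two vec3 members, each with 16-byte base alignment) are:

```glsl
layout(std140) uniform Lights {
    LightSource lightSources[2];  // array stride = 32 bytes
    //   [0].color1 -> offset  0  (vec3 uses 12 bytes, 4 bytes padding)
    //   [0].color2 -> offset 16
    //   [1].color1 -> offset 32
    //   [1].color2 -> offset 48
    // Total size of the array: 64 bytes.
};
```

These are the values glGetActiveUniformsiv() with GL_UNIFORM_OFFSET should report if the layout assumption is right.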

On older (ATI) cards, using non-constants as array indices resulted in a compile-time error. Maybe you have to use a
#version xxx
statement before you can use non-const indices (I do not have access to newer ATI hardware right now, so I cannot test this).

In AMD ShaderAnalyzer, this shader doesn't even compile; for some reason it doesn't like you declaring a uniform block of structs. I don't actually have experience with uniform blocks, so I couldn't fix it, but it does compile this way:

On older (ATI) cards, using non-constants as array indices resulted in a compile-time error. Maybe you have to use a
#version xxx
statement before you can use non-const indices (I do not have access to newer ATI hardware right now, so I cannot test this).

In fact, I put a "#version 150" line in the shader, since I use the OpenGL 3.2 core profile (I forgot to mention this in my first post). The shader compiles with no errors, but the result is what I described before. Do you know from which version of GLSL non-constant array indices are allowed? Thanks.

Do you know from which version of GLSL non-constant array indices are allowed? Thanks.

All of them; at least, for arrays of regular values. GLSL 1.10 already allowed this. I'd guess that this is a driver bug. Try using vec4s instead of vec3s; it's best not to confuse AMD drivers...
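If it helps, the vec4 variant of the struct might look like this (a sketch; the .rgb swizzle keeps the shading math unchanged while avoiding vec3 padding issues):

```glsl
struct LightSource {
    vec4 color1;  // only .rgb is used; .a is effectively padding
    vec4 color2;
};

// ...in the loop body, swizzle back down to vec3:
//   color += lightSources[i].color1.rgb + lightSources[i].color2.rgb;
```

Since a vec3 in std140 already occupies a 16-byte slot, switching to vec4 does not change the buffer layout at all; it only changes how the compiler sees the members.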

That's true from reading the spec, yes. I have an old laptop with a Radeon Mobility where I got the compile-time errors. As I said, I do not know about newer cards. It was just a thought that the compiler might treat sources differently depending on which version is requested. I'd try specifying the highest version and testing whether it works, then decreasing the version number. Otherwise I would not really know how to handle this.
On the old laptop I used to unroll loops by hand to enable indexing into the array:
#define LOOP(x) CallFunction(ArrayValue[x], AnOtherArray[x])
LOOP(0); LOOP(1); LOOP(2)...
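Applied to the shader in question, that hand-unrolling trick might look like the following (a sketch; each step is guarded with lightCount so the behaviour matches the original loop):

```glsl
// Hand-unrolled replacement for: for (int i = 0; i < lightCount; ++i) ...
#define ADD_LIGHT(i) \
    if (i < lightCount) \
        color += lightSources[i].color1 + lightSources[i].color2;

vec3 color = vec3(0.0);
ADD_LIGHT(0)
ADD_LIGHT(1)
```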
Of course this is ridiculous, so I would have expected this behaviour to be fixed on newer cards/newer drivers.
If you try it out, please post the results here, as I'm quite interested in them.