
About PhillipClark

Gotcha, so I guess that explains why the light does not appear to be on the z-axis like it should; instead it is a bit to the left.
When you say to send the world position to the pixel shader, do you mean in place of the one I'm already sending, or as an extra?
EDIT: Just updated my code and all of a sudden everything works like a charm! Thanks, dude, and thanks to the guy who first mentioned it, I was just too thick-headed to think about passing another position to the pixel shader. I've definitely learnt my lesson about mixing world/view/proj spaces. That being said, is there a more elegant way to handle this instead of passing two positions?
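For anyone finding this thread later, the fix amounts to emitting the world-space position as an extra interpolant alongside the clip-space one. This is a minimal sketch, not the original code; the struct and variable names (VS_OUTPUT, WPOS, world_matrix, view_proj_matrix) are placeholders:

```hlsl
struct VS_OUTPUT
{
    float4 pos  : SV_POSITION; // clip-space position, consumed by the rasterizer
    float4 wpos : WPOS;        // world-space position, interpolated for lighting
    float4 norm : NORMAL;      // world-space normal
};

VS_OUTPUT vs_main( VS_INPUT input )
{
    VS_OUTPUT output;
    float4 world_pos = mul( float4( input.pos.xyz, 1.0 ), world_matrix );
    output.wpos = world_pos;                          // the extra output
    output.pos  = mul( world_pos, view_proj_matrix ); // on to clip space
    output.norm = normalize( mul( float4( input.norm.xyz, 0.0 ), world_matrix ) );
    return output;
}
```

As for elegance: the two outputs really are different quantities (by the time SV_POSITION reaches the pixel shader it has been through the viewport transform, so it can't be reused for lighting), so passing both is the standard approach rather than a workaround.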

Thanks for the suggestions, especially the tip about using input.norm.xyz, I feel ignorant now. So, letting HLSL align the buffer automatically should be okay? My old code used float3 for the vertex shader input, but I got paranoid and switched to float4.
Is it okay for the input position to be float3, since it can only be passed to the pixel shader as a float4? Should I always set the w-component to zero when I don't need it?
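Regarding the w-component: for positions the conventional value is 1.0, so the translation row of the world matrix applies, while 0.0 is reserved for directions such as normals. A float3 vertex input is fine and can be widened inside the shader. A sketch with placeholder names:

```hlsl
float4 pos  = float4( input.pos,  1.0 ); // w = 1: a point, affected by translation
float4 norm = float4( input.norm, 0.0 ); // w = 0: a direction, translation ignored
```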

I actually just discovered that if I move my light source a ridiculous distance away (z value of -5000.0) then it works properly. I'm cool with this, but I'm really curious as to why it would not work at a distance that should have given the object enough room, and of course I would like to be able to set my light position to normal values without having to multiply by 5000.

I just added the two things you said in your first post, to make sure the normal is truly normalized at all steps. Unfortunately, it did not fix the bug. Could you further explain your last post concerning world space light position vs surface? In the simple lighting examples I've seen on the internet they have made no distinction between the different view spaces when doing lighting in the pixel shader, yet their lighting was displayed correctly.
Updated normal transform in vertex shader:
float4 tmp_norm = float4( input.norm.x, input.norm.y, input.norm.z, 0.0 );
output.norm = mul( tmp_norm, world_matrix );
output.norm = normalize( output.norm );
Updated normal declaration in pixel shader:
float4 norm_norm = normalize( input.norm );
float3 tmp_norm = float3( norm_norm.x, norm_norm.y, norm_norm.z );

I have a simple model loaded and a point light source that I am trying to get to work correctly, but it's just not happening. I have probably spent around 12 hours over the past two days trying to figure out what is wrong, and I'm starting to get really frustrated, so I thought I would ask here as a last resort.
I have checked and double checked my lighting equation many times against at least 3 different sources. At the moment I am using a constant lighting coefficient so that I know for a fact my object will be lit no matter what the distance. My light is EXPLICITLY fixed at the -30.0 position on the Z-axis, yet my object appears as if it is lit from a different direction. My camera is located at ( 0.0, 0.0, -20.0 ) so that I can verify the model is not swallowing the light, and it is not. The light is plenty far away from my model. I've changed the light/camera distance many times and have gotten the same result.
The entire front side of my object should be lit, instead it is lit from a different angle.
I use the DirectXMath library and I was VERY, VERY careful to store the const buffer matrices correctly and with the right alignment. I also made sure my HLSL byte alignment was correct by using float4. I use the row-major compiler options for HLSL so I do not need to use matrix transpose, and my code reflects that.
Please give my code a look and tell me where I've gone wrong, because I obviously cannot figure it out!
Bugged lighting:
[attachment=12680:bug.jpg]
Matrix update function:
[attachment=12677:update.jpg]
Vertex shader:
[attachment=12678:vs.jpg]
Pixel shader:
[attachment=12679:ps.jpg]
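For comparison, a standard world-space Lambert diffuse for a point light in the pixel shader looks roughly like the sketch below; the names (input.wpos, light_pos, light_color) are placeholders, not the attached code:

```hlsl
float4 ps_main( VS_OUTPUT input ) : SV_TARGET
{
    float3 n        = normalize( input.norm.xyz );             // re-normalize after interpolation
    float3 to_light = normalize( light_pos.xyz - input.wpos.xyz ); // world-space light direction
    float  ndotl    = saturate( dot( n, to_light ) );          // Lambert term, clamped to [0,1]
    return float4( light_color.rgb * ndotl, 1.0 );
}
```

The key point is that both vectors in the dot product are in the same space (world space here); mixing a world-space light position with a view- or clip-space surface position produces exactly the "lit from the wrong direction" symptom described above.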

So my light color, which is currently ( 1.0f, 1.0f, 1.0f ), needs to be something like ( 100.0f, 100.0f, 100.0f )? I tried this but it is still the same result. I am apparently missing some of the logic here. I'm not sure what you are talking about dividing by PI. I haven't seen any diffuse/specular calculations that use PI, unless you're talking about the actual physical model, not the real-time Phong model. I guess I should have specified.
Also my light position is at ( 5, 5, -5 ) so it's not a crazy distance away or anything. I have it coded so that I can move the light around so I should be able to move it close enough.
I got the 3d model a while ago so I have no clue where it is online but I can upload it somewhere for you (no texture, just v/vn obj file). What's a good free upload place?
Update: I can only get it to light the object if I make the equation attenuation = 1.0f / ( d*d*0.00000075 ) or make the light color something ridiculous like ( 999999, 9999999, 99999999 ), but that's kind of silly; there has to be a better way, right? Also, using that, I can't get the light to completely "fall off" no matter how far I move it away from the object.
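One common alternative to fudged constants is to keep the physically based 1/d² term but scale it by an explicit intensity, and multiply by a windowing function that forces the contribution to exactly zero at a chosen radius. A sketch, where light_intensity and light_radius are assumed parameters not present in the original code:

```hlsl
float  d      = length( light_pos.xyz - input.wpos.xyz );
float  atten  = light_intensity / max( d * d, 0.0001 );  // inverse-square, clamped to avoid divide-by-zero
float  window = pow( saturate( 1.0 - pow( d / light_radius, 4.0 ) ), 2.0 );
float3 lit    = light_color.rgb * atten * window;        // reaches zero at d = light_radius
```

With this form the brightness is controlled by light_intensity instead of by inflating the color values, and complete fall-off happens at light_radius by construction.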

I have the constructor default the z and alpha values if they are not inputted, so it should be able to support 2, 3, and 4D vectors in that sense. Maybe. O.o I'm probably wrong here.
[EDIT]
In response to your [EDIT]: That was one of the things I was thinking about, but I had no clue how to go about it. Thanks for the clarification. I'm a real noob when it comes to the format stuff.
It is now working correctly! Thank you!

Thanks for the reply. I figured that was the case.
Can you spot any errors in my D3D11CreateDeviceAndSwapChain() call? I was unsure how to handle pFeatureLevels, so that may be the incorrect part.
I chose to only include one feature level (D3D_FEATURE_LEVEL_11_0), so my FeatureLevels count should be 1, right?
EDIT: I fixed it by setting the pFeatureLevels parameter to NULL and the FeatureLevels count to 0 as well. Any tips for how I can get it to work by specifying a feature level like I had tried in my source code?
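For reference, one way the call is commonly written when requesting a single feature level explicitly. Everything outside the two feature-level parameters (the swap-chain description and output pointers) is a placeholder assumed to be declared elsewhere:

```cpp
// Sketch only: swap_chain_desc, swap_chain, device, and context are assumed
// to exist. The key parameters are 'requested' and its element count.
D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0 };
D3D_FEATURE_LEVEL obtained;

HRESULT hr = D3D11CreateDeviceAndSwapChain(
    NULL,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    NULL,                    // no software rasterizer DLL
    0,                       // creation flags
    requested,               // pFeatureLevels: the array itself, not its address
    ARRAYSIZE( requested ),  // FeatureLevels: count of entries, here 1
    D3D11_SDK_VERSION,
    &swap_chain_desc,
    &swap_chain,
    &device,
    &obtained,               // reports which level was actually created
    &context );
```

A common slip is passing the address of the array pointer rather than the array itself, or a count that doesn't match the array length; either tends to make the call fail with an invalid-argument error.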