But the D3D11 debug layer tells me that the size of an XMFLOAT3X3 (36 bytes) is an invalid ByteWidth for the constant buffer:

D3D11: ERROR: ID3D11Device::CreateBuffer: The Dimensions are invalid. For ConstantBuffers, marked with the D3D11_BIND_CONSTANT_BUFFER BindFlag, the ByteWidth (value = 36) must be a multiple of 16 and be less than or equal to 65536. [ STATE_CREATION ERROR #66: CREATEBUFFER_INVALIDDIMENSIONS ]

However, I'm fairly new to HLSL, so I'm not sure whether, if I set the ByteWidth to 48, the float3x3 in the shader's cbuffer will still map correctly. What's the best way to handle this? (When I do try setting the ByteWidth to 48, my shader doesn't load correctly.)

Btw, could anyone also please tell me how specifying these datatypes in HLSL works exactly? By that I mean: I use XMFLOAT3X3 in my C++ code, but I have to use float3x3 in my shader code. Does it just check how many bits the datatype you're using has, and reinterpret those bits as the shader-side datatype, whatever it may be? I'm just taking a wild guess here; the book I read about Direct3D 11 didn't really explain HLSL very well.

Thanks

oisyn
—
2011-09-22T23:23:02Z —
#2

It appears that an XMFLOAT3X3 actually contains 9 consecutive floats. In constant buffers, however, each vector is aligned to a 16-byte boundary, no matter how many components you actually use. A float3x3 on the GPU is therefore stored like a float4x3: three vectors of four floats each, with the w component of each of the 3 vectors unused. You should lay out your matrix accordingly.
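To illustrate the padded layout described above, here is a minimal CPU-side sketch. The struct and function names (`CBRotation`, `PackFloat3x3`) are hypothetical, and plain float arrays stand in for XMFLOAT3X3; this also ignores row-major vs. column-major concerns, which depend on your shader compilation flags.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical CPU-side struct mirroring how a cbuffer stores a float3x3:
// each of the three vectors is padded out to 16 bytes, giving a total of
// 48 bytes -- a valid multiple-of-16 ByteWidth for CreateBuffer.
struct CBRotation
{
    float m[3][4]; // 3 vectors of (x, y, z, unused padding)
};

static_assert(sizeof(CBRotation) == 48, "must match a ByteWidth of 48");

// Copy a tightly packed 3x3 (the 9 consecutive floats of an XMFLOAT3X3,
// represented here as a plain array) into the padded GPU-style layout.
void PackFloat3x3(const float src[9], CBRotation& dst)
{
    for (int r = 0; r < 3; ++r)
    {
        std::memcpy(dst.m[r], src + r * 3, 3 * sizeof(float));
        dst.m[r][3] = 0.0f; // padding; the shader never reads this
    }
}
```

You would then upload a `CBRotation` (ByteWidth 48) instead of the raw 36-byte XMFLOAT3X3, and the shader's float3x3 lines up with the three padded vectors.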