About directx user

Well, that's unfortunate. I don't know why enum is listed on MSDN, then:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb509569(v=vs.85).aspx
Maybe they are only reserving it to implement later. Anyway, thanks for the quick explanation.
Still, I don't see why it would be so difficult to implement something as simple as enums, which is only a matter of replacing identifiers with numbers.

Hi,
I am building an algorithm that is supposed to run on both the CPU and the GPU. It would be really useful if I could put my enum in the shader's header file and include it in both my .cpp file and the HLSL shader itself (the enum determines which function gets executed). Unfortunately, I always get error X3000: unexpected token when I try to use enums in my shader.
On the internet I read that enum is a reserved keyword in HLSL, but since I found a few examples of HLSL code using enums, it should be possible, or am I wrong?
Not even this HLSL file compiles:
enum { test1, test2};
[numthreads(8, 1, 1)]
void CS_NEW( uint ID : SV_DispatchThreadID )
{
}
I am using the built-in HLSL compiler from Visual Studio 2012. Am I the only one having this issue?

Thank you very much. I didn't even validate the device pointer, since I knew my other shaders compiled fine with that device. Unfortunately, I had forgotten that another device was declared inside the class, shadowing the one declared outside the class's scope. The device pointer inside the class had not been assigned yet and was a null pointer...
Thank you very much for that hint; now I feel really bad for having bothered you two with this laughable problem.
Apart from that, does anyone have a clue why I end up getting those redefinition warnings after migrating Visual C++ 2010 projects to Visual Studio 2012?

Unfortunately, even when everything except the empty function body is commented out, I get the same results (the compiled length decreased, so I guess it is updating the file), although the generated array is still pretty large at about 1000 lines, while all that is left is:
[numthreads(128, 1, 1)]
void BACKPROPAGATE( uint ID : SV_DispatchThreadID )
{
}
There are no errors or warnings displayed for the shader file.
However, I do get a great many warnings about macro redefinitions in the DirectX context (about 80), which could possibly affect this case (I have had those warnings ever since I migrated my project from Visual C++ 2010 to Visual Studio 2012). I don't really know why.

I added the shader definition. I thought the array would be fixed size, since it is written into the header file when the project is compiled. Its definition is
const BYTE backpropshader[] =
{
...... tons of code......
}
so I thought I could just use its size to feed the function, relying on MSDN:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb509633%28v=vs.85%29.aspx#compiling_at_build_time_to_header_files
I thought the blob method is only used when creating the shader at runtime, e.g. with CompileShaderFromFile?

Hello, it's me again.
After I successfully initialized my shader code, I ran into performance issues drawing my scene, which contains spherical geometry. The issue seems to be the separate draw calls, which took about 0.3 ms each. First I thought my graphics card was simply too slow to render 1800 spheres, but then I increased the vertex count of my "sphere" and the rendering took the same time.
So now I know the issue has to be the CPU-GPU communication, but I still can't imagine how a single draw call can cost that much time... especially when modern games display hundreds of models at the same time.
A solution would be instancing, but I am not really sure how to use it in DX11. Basically, all that should be done is telling the GPU to loop over my index buffer n times instead of issuing that many draw calls. The problem is I have absolutely no idea how to do that. Creating a whole index buffer containing the same index sequence 1800 times seems very messy and unnecessary.
And if you use different models at the same time, I guess a draw call that takes 200 ms for 1000 different object buffers isn't really acceptable, so how do YOU manage it?
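In case it helps anyone searching later: in DX11 you keep a single index buffer and issue one call to ID3D11DeviceContext::DrawIndexedInstanced(indexCount, 1800, 0, 0, 0) on the C++ side; the shader then tells instances apart via the SV_InstanceID system value. A rough shader-side sketch under that assumption (the buffer and struct names are invented):

```hlsl
// Per-instance data, e.g. one world matrix per sphere (name is illustrative).
StructuredBuffer<float4x4> instanceTransforms : register(t0);

struct VSIn
{
    float3 pos        : POSITION;
    uint   instanceID : SV_InstanceID;  // filled in by the runtime per instance
};

float4 VS_Main(VSIn input) : SV_POSITION
{
    // One set of sphere vertices, transformed differently for each instance.
    return mul(float4(input.pos, 1.0f), instanceTransforms[input.instanceID]);
}
```

An alternative is to bind the per-instance data as a second vertex buffer with D3D11_INPUT_PER_INSTANCE_DATA in the input layout; either way, the geometry itself is uploaded only once.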

Thanks for your reply, but I found the problem myself. It was the matrix assignment; I changed the assignment to the Map/Unmap solution and it worked. Thanks anyway for pointing out that the shader can store matrices as either column-major or row-major; I will remember that when I have my next transformation issues.
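For future readers: HLSL packs constant-buffer matrices column-major by default, which is why a row-major matrix written from the C++ side (e.g. an XMMATRIX copied in via Map/Unmap without transposing) appears transposed in the shader unless you either transpose it on the CPU first or declare the layout explicitly, for example:

```hlsl
cbuffer PerObject : register(b0)
{
    // Matches a row-major C++ matrix copied in as-is, without transposing.
    row_major float4x4 world;
};
```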