I'm studying for an Advanced Diploma of Professional Game Development this year. It's a two-year course, although I am being shoved straight into the second year. The first year is all C++ programming (specifically using MSVC 2008 Express), and the second year is about making games in teams, using artwork provided by second-year screen and media students... yay, I will be making games ALL YEAR with resources on tap! After this, I can earn a bachelor's degree with just another 12-month course. Basically, I felt weird lecturing to students who have higher formal qualifications than myself, and decided to do something about it :)

Anyway, I kind of expect some crossover between classwork and ObjAsm during the year, with my C++ projects using OA32 core libs (for example, fast pixel operations on the CPU), and with some of the paradigms I learn migrating back into the high-end OA32 libs.

Rest assured that I'm not in any way selling out, or moving away from asm!!! However, this year I will definitely be looking to move away from MASM.

First full day at AIE, and my advanced diploma has become a 2nd-year bachelor's degree in gamedev. We already covered 3D vector operations (including dot and cross products), homogeneous coordinates, matrices, and nested coordinate systems (spatial coordinate hierarchies), and implemented it all under DX10. I went further and completed a quaternion exercise. By the end of the day, most of us had a 3D object being rotated and translated by keyboard control, with a camera attached to it. My lecturer is an Asian guy who smiles constantly(!), and although he does know his stuff, he seems uncomfortable with lecturing, and in my opinion fails to adequately convey the full meaning and implications of the subject matter. For example, he says "vectors and points are often represented by the same class, but have very different meanings - vectors can be rotated but not translated" and so on. I strongly disagreed with this, since I think of a point as being defined by a vector from the origin to the point's coordinates... it's a vector with its origin at (0,0,0), in my book. But I bit my lip, as it would be unseemly to lecture my lecturer on the first day of class. He does seem to be a genuinely nice guy, and he already admits that he can't always remember all the details of left- and right-handed coordinate systems and such.

Anyway, it felt great to churn out code all day, and I'm really happy with the pace of the course so far. I'll try to spend some free time tomorrow coaching a few of my classmates who struggled today.

For example, he says "vectors and points are often represented by the same class, but have very different meanings - vectors can be rotated but not translated" and so on. I strongly disagreed with this, since I think of a point as being defined by a vector from the origin to the point's coordinates... it's a vector with its origin at (0,0,0), in my book.

He is right, though. A vector is a direction and a magnitude. Rotation changes the direction, scaling changes the magnitude... but translation doesn't 'make sense', so to speak, in terms of 3D transformations. A point in 3D is just X, Y, Z, like a vector, and as such, it is usually stored in the same datatype. But the semantics are different. When you translate a point, you add a vector to it - but the result is still a point.

In other words, you can use the same rotation and scaling matrices for both points and vectors, but a transform matrix with translation in it will give strange results if you apply it to vectors as well as points.

Yes, quite true, I agree - but rotating AND translating a point also violates my definition of a vector from the origin to the point's coordinates - if you rotate AND translate a point, you must have changed the magnitude (or, if you prefer, radius) of the vector which previously defined it ;) It is actually possible to translate a point without violating this definition, if there is no rotation involved, and if the new position rests upon the boundary of a circle at the same radius as the original vector - which just achieves the same result as a rotation with no translation would, yes? :)

I don't know why you boast about studying game development using the "latest high end ... super compilers". Aren't you able to achieve your goal with the necessary basic mathematical knowledge, and aren't you able to transform this into a working algorithm even with the simplest assembler? In my opinion you are pursuing the wrong way.

I meant that I will be back-porting some of the C++ objects from the classroom into our very own ObjAsm32 (which is basically MASM on steroids).

Of course it's possible to implement high-level code using low-level asm - but typically that would be crazy: it would take way too long, and wouldn't be easily ported to other platforms, other projects, or other assemblers... EVEN if we used ObjAsm32, due to the limitations of the underlying assembler...

I also stated that this year I will be looking to move away from MASM (but not ASM)... this implies that my true goal is to port ObjAsm to another assembler (possibly NASMX)... Believe me when I say that I am not a fan of C or C++; however, I am certainly a fan of ASM and OOP :)

I have no intention of quitting ASM - but I am seeing MASM as a ball and chain.

Anyway, today is only my fourth day of classes, and we're writing pixel and vertex shaders for D3D10!!! So today was the first day that I've actually learned something new - I had little experience in writing shaders, and certainly no HLSL experience. Really impressed with the pace of this course!

The number one reason that C and its newer variants are so widely accepted is simply that code written in these languages can be built to run on platforms other than the one it was written on - and indeed, on platforms that don't yet exist.

I would like to see ObjAsm ported to a cross-platform-capable assembler (such as NASMX), because this would place assembly language squarely in the same class as the C-variant languages.

Looking through rose-coloured glasses:

Both programmers and end-users would finally be able to compare these languages on a level playing field; ASM would gain new respect and wider exposure to its potential users; C-variant language programmers would gain a better understanding of their own language of choice; and the human race as a whole would benefit from the flow-on effects of improved software stability, speed, and most importantly, choice.

I'm not a total philanthropist or whatever, but I don't really see a downside.

Day 5 was devoted to more shader stuff, with an accent on multiple render targets and post-processing shader effects, combined with UV animation (something the art students use a *lot* in their classes, so it had to be looked at in more depth). It was loads of fun, with half the class trying to implement typical shader effects such as motion blur and sepia tone, and the other half (who clearly had no idea what they were trying to achieve) writing more or less random shader code, with some interesting and funny results :) I think we're already at the end of the road for D3D stuff; we'll start looking at some game engine middlewarez during the coming week. That's a shame - there is so much stuff we didn't cover in depth, or at all... but I guess most professional game programmers don't write their own engines from scratch.

Today we were split up into random groups of three programmers and three artists... interesting days ahead ;) The artists are working in Maya 2011, and the programmers are working with the cross-platform engine Gamebryo. Furthermore, the artists have been supplied with a Gamebryo export plugin, so that eliminates the question of file format. I guess it's an industry norm to be working under some arbitrary engine, with peers that you don't personally know. All good - this is one of the main reasons that I chose to physically sit in the classroom rather than take the same course online... that level of interaction with other programmers and artists can't be achieved online... at least not generally.

Today we had to present our plans for our first collaborative project, with the programmers and artists working together in groups... I ended up getting stuck with three or four different jobs, sigh. I have to develop an L-system and a CPU-based procedural lightning/electricity effect with its own custom pixel shader; I am also responsible for implementing camera/world collision detection and response, and at least one particle effect, probably also with its own shader. I have about 3 weeks, but I won't have access to any of the art assets until later - I have demanded a whitebox construction of the worldspace by Wednesday (it's only three rooms and some doors).

I am one of three programmers working with three artists for this exercise, as mentioned in a previous post... the other two programmers will also have their hands full, but to a lesser extent...

Since JWASM is MASM-compatible, I would have loved to go there - it would have required no porting at all. Unfortunately, JWASM has some problems with its ELF backend which effectively hobble its utility on Linux platforms. The author was notified some time ago and seemed uninterested in fixing it. On the bright side, NASMX has been pimped recently, with additional functionality that appears to be exactly what we needed. It's now looking like the strongest option.

Yesterday, we had a lecture on lighting models and implementing them on the GPU. We learned three techniques, and implemented one or more of them. Today was weird - I had to analyze PacMan and determine why it was so much fun, and so popular, for so long. We also had a lecture on shadowing techniques on the GPU, and implemented one or more. I have to say, I finally appreciate exactly how cool GPU code is, and how much room there is for new ideas.

I have to say, I finally appreciate exactly how cool GPU code is, and how much room there is for new ideas.

How's that? You've been playing with graphics for years right? Should have known that GPU is where it's at when it comes to graphics. Has been that way since the original GeForce. The CPU code is mostly trivial.

It's not just graphics, though - I finally understand how to flex the GPU muscle for arbitrary processing. The only disappointment is the lack of recursion on the modern GPU - although this can partially be emulated. And I think that's more a 'feature' of current shader compilers (inline mentality) than a missing GPU feature... for example, you can call functions from the main HLSL function, but not from any others. You can leverage this to simulate recursion, but you have to implement your own virtual stack. I'm actually curious now whether this limitation applies to asm shaders.

Shaders - as in OpenGL/Direct3D shaders - can't do recursion because they cannot do actual function calls. The instruction set does not support function pointers (so no call/ret and similar features; the call/ret they do have can only store one return address in a special register, afaik (a 'return address stack'), and that register/stack cannot be accessed through instructions, so you can't simulate a proper recursion stack with it). Aside from that, you cannot program shaders in assembly in D3D10+ anyway. Only disassembly is supported - there is no assembler, only a compiler.

nVidia goes above and beyond the OpenGL/Direct3D spec (and OpenCL), and their 400/500 series DO support full function calling through CUDA. This also allows full OOP, and therefore CUDA has been extended from just a C dialect to full C++. I'm not sure if nVidia supplies an assembler for CUDA, though - not that I'd recommend using one even if they do. The assembly would most probably be tied to a specific architecture (e.g. you'd have to write different code for a GeForce 200-series card than for a 400-series card), whereas the CUDA compiler is capable of generating optimal code for any CUDA architecture (much like how HLSL can be compiled for any profile you specify, from vs/ps 1.1 to 5.0, which in some cases can yield massively different results).

Other vendors stick to the Direct3D/OpenGL/OpenCL specs in their hardware implementations, and as such, in their case it IS a missing GPU feature. The compiler has to inline function calls, as the notion of functions is just 'syntactic sugar'.