I went and got myself a blog set up on an externally hosted site: http://www.popoloski.com. Why, might you ask? Read on.

I've found that my usage of GameDev.net has slowly but steadily decreased over the past few months. Certainly the new forum software has a small and indirect part to play in this, but the real reasons run much deeper than superficial UI annoyances. To put it succinctly, the quality of content I used to expect from GDNet has fallen sharply since the precipitous leap to the new software; I place the blame for this almost entirely on the decline of activity by other iconic members who have since moved on to greener pastures.

GDNet will always hold a special place in my heart as the forum where I cut my teeth on programming. I would not be the developer that I am today without it. It's important to make a distinction here though; the forum became the resource that it was almost entirely through its amazing user-base, one that was both deeply knowledgeable and willing to answer questions, as well as willing and able to ask good and insightful questions. Even with an overabundance of brilliant gurus, without those of the latter group asking good questions, the information doesn't get out there for others to absorb. It's my experience that the best learning comes from answers to questions that you might never have even known or thought to ask yourself. Indeed, one need only look at one of the myriad of examples of this principle in action.

You need both halves of this whole in order to wring the most wisdom you can from your collective user base. While the gurus have been slowly moving away, I think the really troubling loss is that of the "intelligent questioner". While I doubt that we have any fewer of them today than we did years ago, the new site design seems to invite an overwhelming amount of noise and inane questions that trample and drown the good questions before they even get off the ground. That's not to say that the loss of the gurus is any less devastating; I've made my way into the games industry now, and haven't asked direct questions in years, but even so, many of the users I looked to for compelling technical content, both in the forums and in the personal journals, have gone silent and missing.

Examples? ToohrVyk, Ysaneya, dgreen, and Drew Benton, some of the top rated users on the old site, all have only a handful of posts since the change over. What's even more troubling is the disappearance of moderators. Promit, Ravuya, and Oluseyi are hardly around anymore, mittens and jollyjeffers have dropped off the face of the Earth, and I know that jpetrie, moderator of For Beginners, hasn't even logged in in several months, and is currently close to achieving moderator status on the GameDev StackExchange site.

So yes, the site is losing users and the general quality level of posts has fallen. That, coupled with the decline of the journals and the bizarre and unwelcome direction the staff are taking with the site, is enough to turn me off for good. It's their site and they're welcome to do with it what they will, but it's become clear to me, and from the evidence several others as well, that it's not a direction that's good for the site long term. The insistence on political correctness and the "play nice" attitude being forced down our throats is particularly puzzling, especially for a community that once prided itself on having a no-nonsense, blunt, and straightforward response to any and all questions.

As I mentioned, my forum usage has already been gradually decreasing over the past months, changing from "several times a day" to "once or twice a week" to "meh, whenever I'm bored." I unsubscribed from the mailing list after they started using it to spam advertisements, and my GDNet+ subscription runs out some time in August. That leaves only the journals as a resource I use regularly here on GameDev.net, and now I've got my external site to take care of that as well. So what's going to change? Not much. I'll still be around from time to time to look at any SlimDX questions, and I'll probably cross-link any blog entries here, but in my mind this marks the end of GDNet as my "home" on the internet; I'm leaving and headed for a new promised land.

I've been doing a lot of work with SSE-related instructions lately, and finally got fed up with the myriad of move instructions available to load and store data to and from the XMM registers. The differences between some are so subtle and poorly documented that it can be hard to tell that there is even any difference at all, which makes choosing the right one for the job almost impossible. So I sat down and pored over the Intel instruction references and optimization manuals, as well as several supplemental sources on the internet, in order to build up some notes on the differences. I figured I might as well document them all here for everyone to use.

The name of the game with picking any instruction is performance, and you always want to choose the one that will get the job done in the least time using the least amount of space. Thus the recommendations here are geared towards these two goals. Each instruction has several bits of information associated with it that we must take into account:

The type of the data it works with, be it integers, single precision floating point, or double precision floating point.
The size of the data it moves. This can range from 32-bits to 128-bits.
Whether it deals with unaligned memory or can be used with aligned memory only.
If the move only affects a portion of a register, what happens to the remaining bits in that register after the instruction finishes.
Any other special side-effects that the instruction may have.

[size="4"]128-bit Moves

Let's start off with the 128-bit moves. These move an entire XMM register's worth of data at a time, making them conceptually simpler. There are seven instructions in this category:

movapd movaps movdqa
movupd movups movdqu
lddqu

All of these instructions move 128-bits worth of data. Breaking it down further, the first three instructions work with aligned data, whereas the next three are the unaligned versions of the first (we'll talk about the last one in a minute, since it's a bit special). The aligned versions offer better performance, but if you haven't ensured that your data is allocated on a 16-byte boundary, you'll have to use one of the unaligned instructions in order to load. When doing register-to-register (reg-reg) moves, it's best to use the aligned versions.

Each of the three instructions in each category (aligned and unaligned) operate on a different data type. Those with a 'd' suffix work on doubles; those with an 's' work on singles, and movdqa works on double quadwords (integers). This is usually a source of confusion for people, myself included, since regardless of the data type, 128-bits are still being moved, and a move shouldn't care about the raw memory it's moving. The differences here are subtle and easily overlooked, and have to do with the way the superscalar execution engine is structured internally in the microarchitecture. There are several "stacks" internally that can execute various instructions on one of several execution units. In order to better split up instructions to increase parallelism, each move instruction annotates the XMM register with an invisible flag to indicate the type of the data it holds. If you use it for something other than its intended type it will still operate as expected; however, many architectures will experience an extra cycle or two of latency due to the bypass delay of forwarding the value to the proper port.

So for the most part, you should try to use the move instruction that corresponds with the operations you are going to use on those registers. However, there is an additional complication. Loads and stores to and from memory execute on a separate port from the integer and floating point units; thus instructions that load from memory into a register or store from a register into memory will experience the same delay regardless of the data type you attach to the move. In this case, movaps, movapd, and movdqa will have the same delay no matter what data you use. Since movaps (and movups) is encoded with one byte less than the other two, it makes sense to use it for all reg-mem moves, regardless of the data type.

Finally, there is the lddqu instruction which we have neglected to consider. This is a specialty instruction that handles unaligned loads for any data type, specifically designed to avoid cache-line splits. It operates by finding the closest aligned address before the one we want to load, and then loading the entire 32-byte block and indexing to get the 128-bits we addressed. This can be faster than normal unaligned loads, but doing the load in this way makes stores back to the same address much slower, so if store-to-load forwarding is expected, use one of the standard unaligned loads.

[size="4"]Non-Temporal Moves

In addition to these instructions, there are four extra 128-bit moves that require mentioning:

movntdqa
movntdq movntpd movntps

These are the non-temporal loads and stores, so named since they hint to the processor that they are one-off in the current block of code and should not require bringing the associated memory into the cache. Thus, you should only use these when you're sure that you won't be doing more than one read or write into the given cache line. The first instruction, movntdqa, is the only non-temporal load, so it's what you have to use even when loading floating point data. The other three are data-specific stores from an XMM register into memory, one each for integers, doubles, and singles. All of these instructions only operate on aligned addresses; there are no unaligned non-temporal moves.

[size="4"]Smaller Moves

Next we come to the moves that operate on 32 and 64-bits of data, which is less than the full size of the XMM registers. Thus this introduces a new wrinkle; namely, what happens to the remaining bits in the register during the move.

movd / movq
movss / movsd
movlps / movlpd
movhps / movhpd

The first instruction in each pair listed above operates on singles (i.e. 32 bits of data) and the second works on doubles, which is 64 bits of data. The first set, comprising the first four instructions, generally fills the extra bits in the XMM register with zero. The second set does not; it leaves them as they are. I'll discuss in a moment why this is not necessarily a good thing. movd moves 32 bits between memory and a register. It cannot, however, move between two XMM registers, which is an oddity that the rest of the instructions listed here do not share. movq will always zero extend during any move, whether between memory and a register or between two registers. movd and movq are meant for integer data.

movss and movsd are meant for floating point data, and only perform zero extension when moving between memory and a register. When used to move between two XMM registers, they do NOT fill the remaining space with zeroes, which is confusing. movlps and movlpd generally perform the same operation, moving 32 and 64 bits of data respectively. They do not, however, perform a zero extension in any case. movhps and movhpd are slightly different from the others in that they move their data to and from the high qword of the XMM register instead of the low qword like the others. They don't do zero extension either.

Since the second set of instructions don't do zero extension, you might think that they would be slightly faster than ones that have to do the extra filling of zeroes. However, these instructions can introduce a false dependence on previous instructions, since the processor doesn't know whether you intended to use the extra data you didn't end up erasing. During out-of-order execution, this can cause stalls in the pipeline while the move instruction waits for any previous instructions that have to write to that register. If you didn't actually need this dependence, you've unnecessarily introduced a slowdown into your application.

[size="4"]Specialty Instructions

There are several other instructions that have special side-effects during the move. Their usage is generally easier to work out, since there is only one instruction for a given operation.

movddup - Moves 64 bits, and then duplicates it into the upper half of the register.

movdq2q - Moves an XMM register into an old legacy MMX register, which requires a transition of the x87 FP stack.

movq2dq - Same as above, except in the opposite direction.

movhlps / movlhps - Moves 64 bits (two 32-bit floats) from the high qword of one XMM register to the low qword of another, or vice versa. The remaining qword of the destination is unaffected.

movsldup - Loads 128 bits and duplicates the first and third 32-bit floats over the second and fourth (i.e. the result is {a0, a0, a2, a2}). Kind of confusing to describe, but the diagram in the documentation makes it easy to visualize if you want to use it.

movmskps / movmskpd - Moves the sign bits from the given floats or doubles into a standard integer register.

maskmovdqu - Selectively moves bytes from a register into a memory location using a specified byte mask. This is a non-temporal instruction and can be quite slow, so avoid using it when another instruction will suffice.

[size="4"]Conclusion

There are a lot of SSE move instructions, as you can see from the above. It annoys me when I don't understand something, and whenever I needed a move I would get bogged down trying to decide which was best. Hopefully these notes will help others make a more informed decision, and shed light on some of the more subtle differences that are hard to find in the documentation.

A few months back I detailed my work on an HLSL plugin for Visual Studio that would add syntax highlighting and IntelliSense support. I had to take a break from that for a while, but I started up again last week, and I've taken things in a new, more generalized direction.

Rather than work on the HLSL parser itself, I've focused my efforts on a language builder IDE, that provides tools to easily build complete front-ends for languages, including support for all of the features people have come to expect from a language's tools. I'm calling this project SlimLang for now because I'm unimaginative and the "Slim" moniker has a good reputation associated with it.

Work on the incremental parser has been difficult, due both to its complexity and to the lack of information on incremental parsers out on the net, so one of the tools I really want to add to the IDE is a built-in debugger that allows stepping through the parser as it runs and seeing the output as a visual graph as it's being constructed. It should really help my implementation of the parser, which I can then provide as a separate component to use in conjunction with the parse tables generated by the IDE.

For the visual graph part, I found the libraries OpenGraph and Graph#, and for the text editor for specifying the language grammar I found AvalonEdit, all of which are WPF libraries, so I decided to take my project in that direction. I haven't been fond of WPF, but recent changes in WPF4 have definitely changed things for the better. Compare the text rendering quality between the old and new versions below:

You can also see from those images my work on a tab control style that mimics the property page tab found in Visual Studio. I didn't know much about styling when I started, so it was basically a crash course for me. The end results are pretty good though:

I integrated the text editor to allow for specifying the grammar and wrote a quick manual parser to read it. Also got the error list hooked up nicely. All in all, it was a good few days of work.

Also some work using a DataGrid for editing terminal options:

Hopefully this should make further work on the HLSL plugin much easier, as well as provide a platform for anyone with a personal scripting language to create easy plugins for language support, which can really aid usage and adoption of a new language.

I was thinking about making these two things open source (the HLSL plugin and the SlimLang IDE), but I'm also leaning towards providing them for sale to try to get some actual income. Any thoughts on whether these two tools would be useful enough to anyone to be worth spending the money?

Reading the community journals these days is a disheartening task of shoveling through the mud and the muck to find rare gems. As the layers of filth get deeper and deeper, the payoff becomes less and less worth it; hence, The Downward Spiral.

Today I ran across the start of a so-called "tutorial series" on software engineering: The Begining [sic]. I tried to post my thoughts there as a comment, but in the infinite wisdom of the new forum software, users are allowed to moderate comments to their own blogs, so I had to make a separate entry. Besides being a prime example of the previously mentioned "mud and muck", it's also the poster child for Josh Petrie's seminal blog post: Don't Write Tutorials. I'll highlight some relevant sections for you in case you don't want to read it:

[quote name="Josh Petrie"]The larger problem with tutorials is that there are just so many of them, and most of them are written by people who at best have only a passing understanding of the subject. Frequently the author lacks even that basic foundation, and just thinks he or she understands the material (see "Unskilled and Unaware of It")..[/quote] Stonemetal's tutorial is not software engineering. Encouraging people to do error checking, which is a basic programming practice, is a good idea of course, but he's picked one of the lousiest possible examples to explain it with. But disregarding the "moral" of the post, which is ridiculously narrow and yet manages to be so overly broad as to be useless, let's look at the far worse transgression:

[quote name="Josh Petrie"]In short, tutorials should only be written by people who are very experienced -- indeed, experts -- in the problem domain. Even then there are dangers; just because somebody knows the subject well does not mean they have the requisite communications skill to pass that information on in a clear, understandable format.[/quote] Now, it's obvious that stonemetal isn't an expert here. But more than that, the layout, structure, formatting, mechanics, and grammar are absolutely hideous. He's misspelled several words, including the very name of the entry. Furthermore his explanations are scatterbrained, anecdotal, and filled with vague speculation and nonsense like this:

[quote name="stonemetal"]So now some definitions of technical terms: right: this method has worked for me in this situation in the past, wrong: doing this has blown up in my face before or typically it makes things harder than necessary.[/quote] Since when are those technical terms, and why didn't anyone else get the memo?

[quote name="stonemetal"]so how would we do this with exceptions why the function std::cout.exceptions gives you access to what will throw an exception and allow you to set what will throw an exception.[/quote] I can't even begin to comprehend what this... "sentence"... is trying to explain, but it's hardly even English.

Please journal writers, for the love of all that is good and decent, do not write more of this. Perpetuating more of this filth on the unsuspecting internet is not only pointless from your end but damaging to any beginners who read it and think you might know what you're talking about. You should spend your time studying real software engineering and programming practices, and work on your communication skills as well while you're at it.

So I recently went for job interviews at both ArenaNet and Microsoft. I'm really excited about both positions, and I think the interviews went well, so we'll see what happens if/when the offers come through.

Going out to Washington was cool. I got to meet up with Josh Petrie for dinner a few nights, and I liked the area from what I was able to see of it. ArenaNet is working on some cool stuff, and it'd be great to finally be able to say that I worked in the games industry.

On the other hand, Microsoft has a pretty impressive campus, and the list of perks and benefits is pretty big. It's hard not to get caught up when you're interviewing for a team of several thousand people, all of whom work at a campus that sees 30,000 people working there every day. I interviewed for the Office Exchange team, which isn't exactly the area I'm most interested in, although it was nice to see that most of Exchange is now written in C#.

If I get an offer from both places, I'm still not sure which way I'm going to go. This is only for a 12-week summer position, as I still have a year of school to finish afterward, so there's a lot to take into consideration when making the final decision.

I was playing around with prettyprint and extending it to colorize HLSL source for my tutorial web pages, and I got to thinking that Visual Studio should really have a fully-featured extension for HLSL. Unfortunately, there is no such thing. There are two extensions that I know of which attempt to do this: NShader and IntelliShade. IntelliShade was a good start a while back, but it's old, no longer being worked on, and isn't even open source, so nobody else can pick up the mantle on the project. NShader is an ok colorizer, but that's all it does, and there are spots where it messes up. I want more.

I started looking into writing my own language service for Visual Studio 2010. The new extensibility SDK has a ton of stuff on adding features like that, so I decided to dive in and start "SlimShade", an HLSL extension for VS 2010.

Parsing
I knew that if I wanted to add all the features I wanted, I was going to need a full-fledged parser. I knew almost nothing about parsers, so I started looking around and doing some research. I stumbled across Irony, which is a fully managed generic parser that takes a grammar (also written in C#) and parses a given file. The simplicity of the project appealed to me over the larger and more obtuse parser generators like Yacc and ANTLR, so I decided to start messing around with it.

Irony is pretty cool, and the source code is fairly easy to follow, with a lot of sample grammars to look at. I used one as a base and started constructing the HLSL grammar. Unfortunately, HLSL doesn't have an official grammar published anywhere, so I had to "guess and check" a lot, looking at the (sometimes blatantly wrong) documentation and trying to figure out all the rules. In the end, the best method I stumbled on was writing a quick utility to run through all of the shader and effect files in my DXSDK and NVIDIA SDK folders and try to parse them. When I got an error, I knew I had something else to fix in the grammar.

HLSL has blown up in complexity in the last few versions, and now has everything from namespaces to classes and interfaces. There are also a lot of little undocumented quirks in the language that I only discovered by running into them in official example source. For example, did you know that the following code is valid in HLSL?

float2 a, b;
float4(a, b) = mul(input, matrix);

Normally you can't use a constructor as an l-value like that (and the compiler throws errors in almost every case I tried), but in this case you can, presumably because it's acting tuple-like to update a and b with the appropriate values.

Eventually I managed to piece together a fairly complete grammar. An interesting thing to note here is that Irony is a LALR(1) parser, which means that it only has 1 look-ahead to determine the purpose of a given token. This makes it a fast and compact parser, but also makes it difficult to construct a grammar that has no ambiguities. Luckily, Irony comes with a Grammar Explorer app which will analyze your grammar and show you all the conflicts, and then give you a trace of the parser state to help you walk through and determine how to resolve things.

Optimization
Irony is great as a stand-alone parser, but I need to be able to run it very quickly, in response to user input inside of Visual Studio. I'm not sure if Irony is even still being maintained anymore, but regardless I started to work on fixing a few issues I had run into and in improving the general performance.

Profiling showed the biggest hot-spots in the scanner portion of the parser, so I started focusing my efforts there. Most of the things I changed weren't huge issues by themselves, but were being called so many times during the process that they were slowing things down. For example, a certain section of code was calling HashSet.Contains a few million times during the parsing process to determine applicable terminals. I changed this to cache all possible sets beforehand, and just use them straight from there. More memory, but faster during parse time, which is what I needed.

When I started, it took around 300 ms to fully parse a 15,000-token file. Taking out a bunch of Irony features I wasn't using, I was able to knock that down to 250 ms. Switching to .NET 4 got me another 50 ms for free, which was a nice surprise. Since I do so many dictionary and hashset lookups, I went around and cached all the hash codes for my objects, which resulted in another couple dozen ms shaved off. Caching possible sets beforehand, as I mentioned earlier, reduced it by another 40 ms.

At around 140 ms to parse now, I started to scrape the bottom of the barrel on things I could do. I inlined many frequently used properties and converted automatic properties into fields instead, which saved me around 10 - 20 ms. At that point garbage collections were showing up in the profiler as the bottleneck, so I went around and reduced a bunch of allocations and changed a few objects into value types, which knocked another 30 ms off the time. Eventually I got the parse time for the same file down to 75 ms, which I decided was good enough, since most shaders won't be nearly that long (I think), and I have some plans to help split up and reduce the load later down the road.

Colorizing
At this point I started looking at example code and putting together my colorizer. The code is pretty straightforward, but getting it all to work the way you want is far from it. Since your extension is running inside Visual Studio, it's extremely hard to debug when something goes wrong, and a lot of the extensibility interfaces aren't documented that well or are completely unused anywhere else on the web, making figuring a lot of it out a painstaking process of trial and error. Eventually though, I was rewarded for my efforts:

I'm still thinking about and tweaking the colors and tagging, but the basics are all there.

Intellisense
Colorizing is nice, but I really want Intellisense too, so I started work on that. Visual Studio actually recognizes four different types of Intellisense:

QuickInfo - For tooltips when you hover over a variable, function, or type.
Statement Completion - When you start typing, shows a list of possible symbols or keywords for the given scope.
Parameter Info - Shows function signatures and overloads when you open a function's parenthesis.
Method Info - Shows a list of members when you press the period key to access a member.

I've only worked on the first one so far, but getting QuickInfo to work was relatively painless, at least from the extension point-of-view:

Of course, getting the symbol information to display is another matter entirely. I still need to go through and collect user symbols and push them onto the correct scope, and display them when necessary. For now, I've created an XML file that describes intrinsic symbols and gives them the nice info you can see in the image above. A snippet of that file:

<?xml version="1.0" encoding="utf-8"?>
<intrinsics>
  <intrinsic name="abort" description="Submits an error message to the information queue and terminates the current draw or dispatch call being executed." profile="4"/>
  <intrinsic name="abs" description="Returns the absolute value of the specified value." profile="1" subset="vs_1_1 and ps_1_4"/>
  <intrinsic name="AllMemoryBarrier" description="Blocks execution of all threads in a group until all memory accesses have been completed." profile="5" subset="Compute shader only"/>
  <intrinsic name="AllMemoryBarrierWithGroupSync" description="Blocks execution of all threads in a group until all memory accesses have been completed and all threads in the group have reached this call." profile="5" subset="Compute shader only"/>
  <intrinsic name="any" description="Determines if any components of the specified value are non-zero." profile="1" subset="vs_1_1 and ps_1_4"/>
</intrinsics>

Next I'll be working on statement completion. The parser already has a list of expected terms at any given point in the parsing process, so I'll just have to find a way to plug into that list and filter out anything unnecessary (such as delimiters).

Other Cool Stuff
There are a bunch of other things that go into building a language service, some of which I've played around with. The first is bracket matching, which ended up fairly easy to do since the parser is tracking this information already. Another feature was real-time error detection, which displays as squigglies in the editor view and in the Error List pane. Instead of relying on my parser to provide syntax errors, I opted to use the D3DCompiler to simply compile the source on a separate thread and parse the error messages returned. This lets me get warnings and semantic errors along with syntax and grammar errors, with less work for me, so I count that as a win.

Other ideas I had but haven't even started looking into yet: formatting, auto-indenting, supporting the navigation bar, symbol browser, showing real-time disassembly of the code, and even perhaps integrating a visual shader designer, although that would be a long way down the road at this point.

The End
I still have a long way to go before I have something I can release for people to try out, but I think I've gotten a good start on this project. I'm hoping this will be useful for a lot of people, not just me. Anyone out there think they would use something like this? Someone already inquired about GLSL and Cg support, which should be possible in the future by simply plugging in a new grammar and tweaking a few things, so we'll see how that plays out.

PIX is a great tool for profiling and debugging Direct3D applications. A lot of developers swear by it. There is a slight problem though: the 64-bit version of PIX doesn't handle managed applications properly. When you go to launch a 64-bit managed app, it complains that it cannot start the process. We've reported the bug before, but there hasn't been much enthusiasm in seeing it fixed (understandable since MS doesn't have a 64-bit managed offering of DirectX at the moment). This can be quite annoying for users of SlimDX though (and possibly the Windows API Code Pack as well). So I set out to build a workaround.

My idea was thus: to build a native x64 application that would in turn launch the x64 managed target and ensure that PIX hooked up with it properly. It turned out to be simpler than I thought. Microsoft provides the CLR Hosting API, which lets you hook into and run managed code inside your process. To start with, I tried simply initializing the CLR, cleaning it up again, and exiting, expecting PIX to fail. Surprisingly, it did not. It makes me wonder what exactly the default CLR loader is doing differently.

Unfortunately, executing a managed program isn't so easy. The hosting API only provides one main function to launch a managed executable in-process, and the signature requirements for the target method are rather odd. Instead of accepting one of the signatures of Main() as the target, you have to provide an entry point that takes a single string parameter and returns an integer value. This of course matches none of the possible C# entry points, and I didn't want users to have to modify their programs to make them compatible with my fix.

The solution I used was to insert an additional C# DLL in between my native loader and the target application. This assembly provides the required entry point function, taking the path to the target app as the single parameter. It then runs the executable within the same appdomain, ensuring that we keep this all in a single process so as not to confuse PIX:

This post contains the answers to my C# quiz. If you haven't taken it yet, go do so now.

First, I need to say that the point of these types of quizzes isn't to explore commonly used code or exhibit best practices. It's just interesting, at least to me, to see the murky corners of a commonly used language or API. With that said, here are the answers to the C# quiz I posted a few days ago.

Question 1
I am particularly fond of this question, as it has several layers of trickiness to it. Many questions in quizzes like these can be deciphered based upon the very nature of the fact that they're in the quiz to begin with. Naturally, people first thought "well, normally I would say that wouldn't compile, but since this is an esoteric C# quiz, it must compile." The answers were split down the middle, but I didn't find anyone who got the answer right for the right reasons.

The snippet as posted doesn't compile. The error, however, does not deal with identifier names, the @ symbol, or the Unicode escape sequence. The error stems from a small line in the C# standard:

I'm not sure exactly why this is, as it's not true in C++, but once that innocuous looking comment is taken out the snippet will compile and run.

Next, we look at the preprocessor directive there. Using C# keywords as preprocessor tokens is valid, so the class is compiled into the program. The @ symbol allows keywords to be used as identifiers, but it doesn't contribute to the identifier's actual name. Several people commented on this as being a rather contrived example, and while I agree that I got a little carried away sticking it on everything, I have run into this symbol in the wild when interoperating with a C++/CLI project that used "object" as a parameter name, so knowing this isn't completely useless.

Next, there is the function containing the Unicode escape sequence in its name. Unicode escapes are indeed allowed in C# identifiers, but keywords may not be spelled using escape sequences, so the resulting token must be an identifier for a function called int. So we now have two functions called "int". The real heart of this question boils down to function overloading rules. Many people missed this part completely. Both methods will contribute to the set of candidate function members for overload resolution, as the integer literal '5' can be implicitly converted to both short and byte. The question is, which one wins?

Considering two conversion targets T1 and T2, T1 is a better conversion target than T2 if an implicit conversion from T1 to T2 exists, and no implicit conversion from T2 to T1 exists. Since one does exist from byte to short but not vice-versa, the method taking a byte is selected as the overload, giving an output of '4'.
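As a quick illustration of that rule (a self-contained sketch, not the quiz snippet itself):

```csharp
using System;

static class OverloadDemo
{
    // Two candidates; the literal 5 converts implicitly to both parameter types.
    static string M(byte b)  => "byte";
    static string M(short s) => "short";

    // byte -> short exists implicitly, short -> byte does not,
    // so byte is the better conversion target and its overload wins.
    public static string Pick() => M(5);
}
```

Calling Pick() selects the byte overload for exactly the reason described above.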

Question 2
The answers to this question were rather intuitive once you realized that there was both a type and a variable named A, but it exposes an interesting part of the language. From the standard:

For that reason, a rule was introduced that requires that for the expression to be considered a cast, the token immediately following the closing parenthesis must be an opening parenthesis, an identifier, a literal, or a keyword (other than as and is). Since in the case of (x)-y the immediate token is a '-', it resolves to an expression instead of a cast.

The output for this question is therefore -1 and then -5.
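A stripped-down version of the same parse (using a plain variable rather than the quiz's type/variable name collision) shows the rule in action:

```csharp
static class CastDemo
{
    public static int Evaluate()
    {
        int x = 4, y = 5;
        // The token after ')' is '-', so this parses as the parenthesized
        // expression (x) minus y, not a cast of -y to some type named x.
        return (x)-y;   // 4 - 5
    }
}
```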

Question 3
A lot of people got tripped up by the assignment to "this" in this question. In a value type constructor, "this" is treated identically to an out parameter of the function, and is therefore legally assignable. In fact, you are required to assign it in one form or another in order for a value type constructor to be legal. Usually though, code just assigns the individual members rather than the whole instance at once.

Moving on, we come to the behavior of static initialization. For all value types, the static field is first default-initialized to all zeroes. The initializer is run at some implementation-defined time prior to the first use of that field. Therefore, we enter the Foo() constructor and assign 2 to the I field, and then proceed to assign the static field F to this instance. Whatever F holds will wipe out the assignment we just made, so the output should never be '2'. Once we access F, we are guaranteed that the initializer has run, so we will take on whatever value F currently has for I.

When the initializer for F runs, it calls into the Foo() constructor and assigns 5 to the I field. But then it goes and takes the value of F, which has been default initialized to all zeroes, and overwrites it. When the initializer completes, F has been initialized to all zeroes. Therefore, the output of the program is neither '2' nor '5', but '0'.
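Here is a small reconstruction of the behavior described above. It isn't the original quiz snippet, but it exercises the same rules:

```csharp
// A sketch of the quiz's structure, reconstructed from the explanation above.
struct Foo
{
    public static Foo F = new Foo(5);
    public int I;

    public Foo(int i)
    {
        I = i;      // assign the field...
        this = F;   // ...then overwrite the whole instance; legal in a struct constructor
    }
}
```

Constructing new Foo(2) triggers the static initializer, which runs Foo(5) and then overwrites that instance with the still-zeroed F; the outer constructor then copies the zeroed F over its own I = 2, so the observed I is 0.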

Question 4
This question is similar to Question 2 in that the output becomes easy to guess when you see the code in a simplified form like this, and also because it addresses another case of ambiguity in the C# specification that warranted explicit rules to resolve.

To resolve this, the parser looks at the token immediately following the closing angle bracket '>'. If it's one of "( ) ] } : ; , . ? == !=" then it is evaluated as a type argument list (aka a generic function), otherwise it is not, even if there is no other possible parse of the sequence of tokens. In the first case of our example, there is a '(' token following the closing angle bracket, so it is parsed as a generic function call. In the second, there is a '-' operator, which is not in the above list and therefore results in an expression instead. The resulting output is therefore -7, False True.
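A self-contained sketch of both parses (with my own names and values, not the quiz's exact ones) looks like this:

```csharp
static class AngleDemo
{
    class A { }
    class B { }

    static int G<T, U>(int x) => -x;            // a generic method
    static string F(int x) => x.ToString();
    static string F(bool a, bool b) => a + " " + b;

    public static string Generic()
    {
        // '(' follows '>', so this parses as a call to the generic method G<A, B>.
        return F(G<A, B>(7));
    }

    public static string Comparisons()
    {
        int G = 1, A = 2, B = 3;
        // No disambiguating token after '>', so this parses as two
        // relational expressions: (G < A) and (B > 7).
        return F(G < A, B > 7);
    }
}
```

With these values, Generic() returns "-7" and Comparisons() returns "True False".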

Question 5
This is another question that relies on the rules of static initialization. Besides the rules mentioned previously for question 3, this question also relies on knowing that the initialization order for static fields is the order in which they are defined textually in the source file. Some people thought that the fields would be initialized in order of use, but this is not the case. Once again we rely on having the fields zero-initialized, and then a is initialized first to b (which is 0) + 1, giving it a value of one. Final output for this question is therefore a = 1, b = 2.
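The rule reduces to a two-line example:

```csharp
static class InitOrder
{
    public static int a = b + 1;  // runs first (textual order): b is still 0, so a = 1
    public static int b = a + 1;  // runs second: a is already 1, so b = 2
}
```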

In the spirit of Washu's C++ quizzes, I've decided to try a few based on C#. While not as heinous as C++, C# still has plenty of little quirks to make things interesting. Resist the temptation to break out your compiler to cheat, and don't rely on the answer of someone else as there is no guarantee that they're correct. For reference, I'm working off the C# 4.0 Specification.

Question 1: Does the following program compile? If not, why? If so, or if compilation errors are corrected, what is its output?

Currently in SlimDX our Matrix class exposes a large set of methods to create projection matrices. This closely mirrors the methods provided by D3DX, which was the original choice behind the design. While moving everything to C# and away from D3DX, I have a chance now to do some refactoring, so I thought I'd try it out and see what came of it.

Note that I haven't listed the ref overloads here. With ref overloads, the new method would have 8 total functions, whereas the old method has 20. Now, ignoring the actual creation of the matrix in there, since I haven't quite tested it yet, which design do you think is better? Here is my thought process so far:

First, the pros. The new method has drastically fewer functions to maintain, and everything boils down to the one actual method for implementation. Additionally, it better follows the design guidelines of .NET by not encoding functional information into the name and instead allowing users to specify it via enum. I find it to be a slightly cleaner method overall.

Now the cons. The new method is slightly less optimal due to the extra branching. This is less bad than it seems, however, since creating projection matrices is never done more than several times over the course of a frame, and usually much less than that, so it's unlikely to make any sort of impact on performance. If this were determined to be a huge issue, I have a slightly more hackish version of Projection that avoids the branches by making tricky use of the enum values. [grin] Also, the new version breaks all existing code using SlimDX, since we'd be relying on these slightly new names. As said before, we have a new "2.0" version coming that will encapsulate all of these breaking changes, but it could still be considered an issue.
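For the curious, here is roughly what the enum-based shape looks like. This is a hypothetical sketch built on System.Numerics rather than the actual SlimDX Matrix type, and the names are mine:

```csharp
using System.Numerics;

// Hypothetical enum name; SlimDX's actual API may differ.
enum ProjectionStyle { Orthographic, Perspective }

static class ProjectionDemo
{
    // One method plus an enum parameter replaces a whole family of
    // CreateOrthographic*/CreatePerspective* named variants.
    public static Matrix4x4 Projection(ProjectionStyle style, float width, float height,
                                       float znear, float zfar)
    {
        return style == ProjectionStyle.Orthographic
            ? Matrix4x4.CreateOrthographic(width, height, znear, zfar)
            : Matrix4x4.CreatePerspective(width, height, znear, zfar);
    }
}
```

The branch is the cost being weighed in the cons above; the collapsed method count is the payoff in the pros.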

Ultimately the pros and cons nearly balance each other, and I think it comes down to a style issue, so I'm trying to take a poll here. Which do you prefer? I've talked to several people already, and the responses have been all over the board. Some people like the new methods and say the style is cleaner, some people want the old ones back, and a few people have said that the new method is better but they still prefer the old one due to stylistic reasons. How about you? What's your opinion?

There's no doubt about it, SlimDX has finally caught up to DirectX in terms of features. Sure, there are still little things here and there, but for the most part we cover every major part of the DirectX SDK, as well as several secondary libraries deemed beneficial for multimedia and game development. Once our next release hits (probably around February, if our covert intel is correct), things should be pretty stable in D3D11 land as well. Of course there is still plenty of bug fixing, documenting, and sample writing to do, but you don't plan future library development based upon maintenance tasks like that.

So we started postulating as to future plans for our little brainchild which is now all growed up. We've gathered up a list of changes we'd like to make, big ones that mean breaking changes for just about all our users, which we had held off on before in favor of library stability. These changes will be part of what we are internally referring to as our "2.0" release. The SlimDX version number is already higher than that, but this will be the first major shift in the library since we came out of beta several years ago.

What might this "2.0" release entail? Well for starters, Josh has been chomping at the bit to switch us over to using interfaces instead of concrete classes as much as possible. This facilitates unit testing not only for ourselves but for our clients as well. While this requires a huge set of changes across the library, users will face only mild breaking changes where we end up returning an interface where previously we had a concrete class, which will require an additional cast to fix.

Ever since things slowed down on SlimDX, the team has been branching out to other side projects. Promit started SlimTune, a free profiler for managed applications. Josh started an overhaul of the sample framework, and has only recently started work on SlimBuffer, which will contain a refactored version of the DataStream functionality currently in SlimDX. "2.0" will depend on this new library for its native memory management needs.

Washu and I started work on SlimGen, which is a tool that injects ASM into a managed assembly at runtime, allowing us to use hand-optimized SIMD instructions directly from managed code with no interop overhead. This will play hand-in-hand with SlimMath, which is an all-managed implementation of the math functionality currently living within SlimDX. SlimDX "2.0" will rely on this library, and external projects can make use of the math functionality without needing the rest of SlimDX, or indeed even needing to be on Windows at all.

Finally, mesh and model handling is a common complaint I see across many different APIs, so I started work on SlimMesh, which will handle the loading of 3D models from several different formats in an API agnostic manner. This will include D3D9, D3D10, and D3D11 renderers via SlimDX to make it usable out of the box, but I suspect users of libraries like OpenTK might find it useful as well.

So yes, we're all still hard at work. The SlimDX group is rapidly becoming a multimedia middle-ware group [grin] SlimDX might be all growed up, but the rest of us are just getting started.

PS. Promit was a genius picking out the "Slim" name for SlimDX. It's a great branding tool and easy to slap on to almost any project.

Most anyone who's written a Windows application in .NET has seen the funny [STAThread] attribute adorning their Main function. The more curious of these folks have done a little digging and found that it has to do with "Advanced COM Magic" and left it at that. Others, not satisfied with magical explanations, have dug still deeper and found that it has to do with COM, and the two different, mutually exclusive ways it handles threading.

For the uninformed, COM objects can be accessed in two different ways: single threaded apartment and multithreaded apartment. Many components are designed to work in only one mode or the other, such as several Winforms GUI components. Because of this, the default program generated by Visual Studio attaches the [STAThread] attribute to tell the runtime to initialize the main thread as using single threaded apartment mode.

So far, so good. I'm writing a bit of managed glue code in C++/CLI that uses a COM component that requires single threaded apartment mode to run correctly. I check to make sure that my test application has the appropriate thread set, and then run my first test, with good results. Awash with the euphoria of victory, I walk away to go eat a brownie.

When I return, I make a few adjustments to my DLL, and run the test app again. Lo and behold, I fail with a cryptic HRESULT return code from CoCreateInstance, which is in charge of creating the necessary COM component. Frowning, I rollback my changes and try again. Still no good. This makes no sense, right?

After a lot of digging, I do a little test. I insert the following code right before CoCreateInstance:

ApartmentState state = Thread::CurrentThread->GetApartmentState();

I put a breakpoint there and run the debugger. Sure enough, the apartment state for the thread is set to MTA (multithreaded apartment). That's odd. I'm sure it's set to STA. I check back to my Main method to be sure. Yep, it's still got the attribute on there. I insert my debugging code as the first line of Main. You can see the hilariousness for yourself here:

So apparently, the CLR is ignoring my STAThread attribute and initializing the main thread as multithreaded. Not very considerate if you ask me. On an off chance, I try running my application without the debugger. Success! We now have an STA thread. I try a few other configurations, each with mixed results.

I did a little Googling, and while there is some confirmation of this peculiar behavior, there was little on how to actually fix it. I fought with it for a bit before giving in and adding the following lines before the creation of the COM component:

Compile and run without issues! It's a little heavy handed I'll admit, but I'm not really sure what else to do. Has anyone else ever experienced this behavior? It's really quite bizarre. Leave a comment below to let me know what you think!
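I can't vouch for the exact lines used in the fix above, but for anyone hitting the same issue from C#, one general-purpose workaround is to do the COM work on a dedicated thread whose apartment you configure yourself before it starts:

```csharp
using System;
using System.Threading;

// Not the fix from the post itself -- just a general workaround:
// run COM-touching code on a thread whose apartment is set explicitly.
static class StaRunner
{
    public static bool RunSta(Action work)
    {
        bool ran = false;
        var thread = new Thread(() => { ran = true; work(); });

        // Apartment state must be set before the thread starts;
        // TrySetApartmentState is a no-op (returns false) on non-Windows platforms.
        thread.TrySetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();
        return ran;
    }
}
```

This sidesteps whatever the debugger is doing to the main thread, since the apartment of a thread you create yourself is under your control.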

Ever since things settled down in SlimDX, we've been turning our attention outward towards new projects that we've been naming by humorously tacking on the "Slim" moniker. Josh has been working on both an SVN client and a home-made CLR, which right now is targeting the PSP. Promit has his SlimTune profiler project that you no doubt have seen, and I have my mini SlimLine line counter project.

Washu and I originally started a separate SlimMath project back in March, when the new XNAMath library was released, making heavy use of SIMD for blazing fast performance. We wanted to bring these performance gains to SlimDX, but it was obvious that the normal method of wrapping wouldn't suffice for these small and quick bits of code; the managed/native barrier was too great to cross. I came up with the idea to batch math calls and send them all across the marshaling barrier at once, thereby paying the cost only once in each direction. We started work on that, but even in this case, the costs paid were so large that you had to do a monstrous amount of work for it to be worthwhile.

We abandoned this idea, and SlimMath was left to die. Just a few weeks ago though, Washu came up with the idea of modifying the .NET assembly after it had been compiled with NGEN and injecting our own implementations of certain functions that used hand-written ASM. The idea immediately appealed to me, and we set off on such a project, dubbing it SlimGen. While Washu is no doubt preparing a series of blog posts on the subject (he's quite excited about this, something that should amaze and scare you at the same time), I just wanted to talk a little bit about how this could be used.

The way we see it, our SlimGen client program runs on the end user's computer, at install time. After the .NET target assembly (for example, SlimDX.dll) has been NGEN-ed and installed into the GAC, we run our program that takes compiled object files and inserts the raw code into the native assembly image. This has a few major benefits for us. First, you can avoid all interop costs by injecting the code directly. Second, you can make use of any instructions you wish in the raw code, which means that math methods can now take advantage of SIMD extensions. The third benefit, which is mostly derived from running the program at install time, is that it can customize the assembly based upon the extensions supported by the target processor. This means that if a chip is detected as having SSE4 support, we can inject our SSE4 version of a function directly, without any runtime checks.

This scales well too, since for functions where we don't care to provide a particular SSE4 version, we can simply downgrade to the next available version, or if we don't provide any optimized ASM for a method, simply go on using the one provided in the assembly. This new tool, once finished and properly tested, should allow .NET library and application developers to squeeze every possible performance benefit out of using native code, while still retaining many of the niceties and usability benefits of managed development.

Promit announced the SlimTune Profiler today on his blog and it's made me antsy to formally announce my own offering to the fledgling line of software from the SlimDX Group. It's a line counter / software metrics tool that I've dubbed SlimLine, in keeping with the "Slim" moniker. It's not as flashy as a profiler, but still useful nonetheless.

I'm sure many of you have run across the startling lack of basic line counting and source statistics in Visual Studio. There is also a surprising lack of free tools out there to fill the gap. Project Line Counter seems to be thrown around every once in a while, but a quick glance at it shows it to be almost dead at this point, without even support for Visual Studio 2008. Seeing this, I quickly threw together a command line program that would parse a Visual Studio solution and do some counting. It was quick and dirty, but it worked, and after posting it here in my journal and on IRC, I found it was spreading amongst people who liked the simplicity and usefulness it provided.

For the official SlimLine, things are going to be taken to the next level. The entire counting engine will be pushed into its own library that can be consumed by any other .NET program. Utilizing this library will be three official front ends: a command line program, a graphical Windows app, and a Visual Studio add-in. Initially, it will only do some simple line and file counting by project and filter, but I hope to eventually expand into other software metrics, such as cyclomatic complexity.
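To give a flavor of what the counting core boils down to, here's a toy sketch; the real SlimLine API will certainly differ:

```csharp
using System.IO;
using System.Linq;

// Toy counting core -- hypothetical API, not SlimLine's actual one.
static class LineCounter
{
    // Returns the number of files processed and the total line count across them.
    public static (int Files, int Lines) Count(params string[] paths)
    {
        int files = 0, lines = 0;
        foreach (var path in paths)
        {
            files++;
            lines += File.ReadLines(path).Count();  // streams the file; no full load
        }
        return (files, lines);
    }
}
```

A real tool would layer language-aware rules on top (blank lines, comment-only lines, per-project grouping), but the library/front-end split described above would sit exactly at this seam.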

I like the idea of taking our "Slim" brand and applying it to other areas of development, and the line counter stuff I already had seemed like a natural fit. Like I said earlier, it's not as cool as Promit's profiler project, but it's still useful and some of you may want to keep an eye out for it in the next few months.

Some of you may remember my entry on the custom allocator we wrote to try to take advantage of stack allocation for small temporary arrays. Yes, it was Premature Optimization, and as no good deed goes unpunished, we botched the job and caused all sorts of problems. We ended up scrapping the whole thing and just using std::vector for our temporary array needs.

A few weeks ago, we got an issue filed on our tracker that claimed that a particular shader method in D3D10, which was called many times per frame, was spending 80% of its time diddling with std::vector. I was skeptical to be sure, but a few quick tests proved that at least a good portion of the method was in fact being eaten up. I figured it was high time I resurrected our old stack allocation code, but this time I had the benefit of hindsight to help guide me along.

Our old attempt had basically involved using a custom allocator for std::vector, which seemed good on the surface until we realized that memory allocated from the stack inside an allocator wasn't going to be of much use outside of it. To that end, I began thinking of ways that we could reliably allocate memory on the stack of the calling method, but still wrap it all up nicely so that the user didn't have to worry about cleanup or any other nonsense. I hit upon the idea of using a macro that would discreetly allocate a chunk from the stack and then forward the call on to the actual stack_array constructor. Since the macro is a simple text replacement, the actual stack allocation call happens in the calling function, which is exactly where it needs to happen.

It's a very lightweight template class that really only exists to hold temporary values while we marshal between .NET and DirectX. Notice the stackalloc macro, which is where the magic happens. If the user fails to use this macro to set up the array, it will go ahead and use a standard new/delete, which means we don't get unspeakable errors from a simple typo. Here's an example of using it:

I'm pretty happy with the way it turned out. Benchmarks place stack_array at around 3x faster than std::vector, and even slightly faster than raw memory allocation, so we've definitely done some good work there. I'm not sure why std::vector is so slow in this case; I've turned off every security and debugging feature I can think of; maybe there's some quirk when it comes to using it in C++/CLI.
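Interestingly, C# grew a first-class version of this same trick: stackalloc with Span&lt;T&gt;. The following sketch mirrors stack_array's behavior of small buffers on the stack with a heap fallback for large ones, though it's an analogy on my part rather than a port of the C++ code:

```csharp
using System;

static class TempBuffer
{
    public static int Sum(ReadOnlySpan<int> data)
    {
        // Small temporaries come from the stack; anything bigger falls back
        // to a heap array, much like stack_array's new/delete fallback.
        Span<int> tmp = data.Length <= 128
            ? stackalloc int[data.Length]
            : new int[data.Length];

        data.CopyTo(tmp);
        int total = 0;
        foreach (var v in tmp) total += v;
        return total;
    }
}
```

The conditional stackalloc requires C# 8 or later; either way, the caller never sees which allocation path was taken.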

I was fooling around with SlimDX the other day and decided to try to make the nicest looking minimal initialization program that I could. We've been adding little utility classes to help make things smoother, so the end result is, in my opinion, quite pretty to look at.

Direct3D9:

using System;
using System.Drawing;
using SlimDX;
using SlimDX.Direct3D9;
using SlimDX.Windows;

MDX has been dead for quite some time now, and these days most people recognize the futility of trying to use it for any new project. As a developer of SlimDX, I find myself still glancing at the internals of MDX from time to time, especially when contemplating how to wrap a particularly cumbersome portion of the DirectX API. That's what I was doing today when I ran across a real gem that I couldn't resist sharing with GDNet journal land.

DirectInput exposes an effect interface for dealing with force feedback effects. Each effect has a set of additional type-specific parameters, which turn out to be rather difficult to encapsulate correctly for consumption by managed code. Just as a quick primer, in native DirectInput IDirectInputEffect exposes a GetParameters method to get the effect parameters. The returned parameters structure contains a pointer to the extra type-specific parameters, along with a byte size. It is assumed that the C++ programmer will know the correct type for this data and then cast the pointer appropriately. In the managed world, this isn't so easy.

Ideally, the managed wrapper would construct the appropriate type and then return that to the user, as playing with pointers isn't quite kosher in the managed world. Unfortunately, the returned parameters contain no identifier to specify which type of data is being represented. I scratched my head at this one for a bit, and then took a look at how MDX handled the situation. I was quite surprised to find this little snippet of code hiding in the bowels of MDX:

Yes, it's ugly, but a lot of that is due to the compiler having stripped out constant and structure accesses. That's not really the point here though. What's really crazy is how this snippet of code is determining the correct type for the extra type-specific parameters. Let's look more closely at the last portion of this method, cleaned up to remove compiler funkiness.

In case you don't quite understand the hilariousness that is this code, let me give you a little guidance. First, the function checks the number of bytes returned, and tries to match it up against the size of possible type-specific structures. As you can see, there are four possible force types. If the size doesn't match any of those, it checks to see if it's aligned on a Condition array boundary, in which case it interprets the data as an array of Condition structures. That's fairly hackish on its own, but as you can see by that first if block, it gets better.

Unfortunately for the original MDX developers, DIPERIODIC and DICUSTOMFORCE both have a size of 16 bytes, so how would you know which structure was being returned? Simple! Since DIPERIODIC contains four integers, and DICUSTOMFORCE contains three integers and a pointer, simply take a guess to see if the data at the correct offset could possibly be considered a valid pointer. If so, we'll pretend that we have a DICUSTOMFORCE and everything will be hunky dory!

The only question is, how do you know if a given set of bytes represents a pointer? Simple! Use the handy-dandy IsBadReadPtr and IsBadWritePtr functions to figure it out for you. For those of you in the know, IsBadXxxPtr should really be called CrashProgramRandomly or perhaps CorruptMemoryIfPossible. I haven't done much research on these two little functions, but before I'd even read the aforementioned article, I had already guessed that they weren't quite standard C++, if you catch my drift.
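In managed pseudocode, the guessing logic amounts to something like this. The byte sizes come from the 32-bit native headers as best I can tell, and the code shape is my reconstruction, not MDX's actual source:

```csharp
// A managed paraphrase of MDX's size-based guessing. Assumed 32-bit struct sizes:
// DICONSTANTFORCE = 4, DIRAMPFORCE = 8, DIPERIODIC = DICUSTOMFORCE = 16,
// DICONDITION = 24 (six 4-byte fields).
static class ForceTypeGuesser
{
    public static string Guess(int byteSize, bool offsetLooksLikeValidPointer)
    {
        if (byteSize == 16)
        {
            // The infamous tie: Periodic and CustomForce are both 16 bytes, so
            // MDX "sniffed" whether the last field could be a readable pointer.
            return offsetLooksLikeValidPointer ? "CustomForce" : "Periodic";
        }
        if (byteSize == 4) return "ConstantForce";
        if (byteSize == 8) return "RampForce";
        if (byteSize > 0 && byteSize % 24 == 0) return "ConditionArray";
        return "Unknown";
    }
}
```

The pointer "sniff" is where IsBadReadPtr/IsBadWritePtr came in, which is precisely the part that can crash or corrupt at random.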

Now, this little... what did I call it... gem? that I've found here is quite out of scope for most normal applications, but I can't help but wonder if anyone had ever attempted to use this bit of functionality and found it horribly broken.

Such a cool word for a math concept. Anyways, I've been writing a Sudoku game lately, and I've been delving a little into set theory, with a little dash of combinatorics thrown in for good measure. I needed a method that would return all possible combinations of a given sequence, and the Enumerable class has finally failed me.

So here's my implementation of a combine function. Note that while this is similar to permutations, it's slightly different in that I only take a certain subset in each pass (controlled by the count parameter). I'm hoping someone, perhaps one of the functional guys, can show me a more elegant way of doing this, but for now it works. I think the use of the HashSet in the Filter function provides a performance boost, but I don't know for certain. HashSet.Contains is an O(1) operation, but the cost of creating a new HashSet every iteration may cancel out that performance gain.
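For comparison, here's one way I'd sketch a combinations generator; this is my own compact recursive formulation, not the code from the post:

```csharp
using System.Collections.Generic;
using System.Linq;

static class Combinatorics
{
    // Yields every count-element combination of items, preserving order.
    public static IEnumerable<List<T>> Combinations<T>(IReadOnlyList<T> items, int count)
    {
        if (count == 0)
        {
            yield return new List<T>();  // one empty combination
            yield break;
        }
        // Pick each possible first element, then recurse on what follows it.
        for (int i = 0; i <= items.Count - count; i++)
        {
            foreach (var tail in Combinations(items.Skip(i + 1).ToList(), count - 1))
            {
                tail.Insert(0, items[i]);
                yield return tail;
            }
        }
    }
}
```

For [1, 2, 3, 4] choose 2, this yields the expected C(4,2) = 6 subsets; the repeated Skip/ToList makes it O(n) extra allocation per level, so it trades efficiency for brevity.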

In lieu of the tutorial that I've been stalling on for a while now (Evil Steve's magnificent D3D9 tutorial has me depressed), I'll be discussing friend assemblies, and how something that works quite well with C# got absolutely and totally screwed up by C++/CLI.

OK, so, friend assemblies. What are they? Well, suppose you have a library that has a few public types that consumers of your library use, and a few internal helpers that assist the others in performing the library's tasks. This is great, and how encapsulation is meant to work. Hide things from the end user that he doesn't need to see, both so that things are simpler and so that he won't inadvertently break something. What happens, though, if you want to write another library that extends the functionality of the first library, but in a completely separate DLL? You run into problems if you need to use the internal utilities provided by library A. This is where friend assemblies come into play.

The CLI allows library designers to mark assemblies with an attribute called InternalsVisibleTo, which allows them to specify a list of assemblies that can see and use the internals of the marked assembly. This is straightforward, and is also tightly controlled, since the only person who can specify other friend assemblies is the original library developer. It's probably not ideal Object Oriented Programming, but that doesn't really bother me. Now, in C#, this functionality works great and as expected. Library A marks itself with this attribute, and lets the compiler know that library B can touch its private parts. All is right with the world.
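For reference, the C# side really is this simple; "LibraryB" here is a hypothetical consumer name:

```csharp
using System.Runtime.CompilerServices;

// Grant a (hypothetical) companion assembly access to this assembly's internals.
[assembly: InternalsVisibleTo("LibraryB")]

internal static class InternalHelper
{
    // Visible inside this assembly and, thanks to the attribute, inside LibraryB too.
    internal static int Helper() => 42;
}
```

For strong-named assemblies the attribute string must also carry the friend's full public key, but the shape is the same: one attribute on library A, nothing special on library B.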

Enter C++/CLI, stage left. When it came time to implement friend assemblies for C++/CLI, the language design team had a collective brain malfunction, because things get very crazy, very fast. Marking library A with InternalsVisibleTo still follows the same procedure as outlined for C#. The problem comes with the consumer assembly, which is library B in our case. It isn't enough for the C++/CLI compiler to see that A has declared B a friend. It wants more proof that the two assemblies really want to be friends. OK, I can live with that; maybe there's some strange goings-on under the hood in the C++/CLI compiler that require this. However, somebody on the design team thought it would be hilarious to require the reference to assembly A to be made using a #using statement, with a magical as_friend modifier stuck on the end. Apparently, the syntax team didn't get the memo, as this magical keyword isn't identified as such by Visual Studio.

OK I say to myself. It's weird, but I can deal with this. Just stick "#using "AssemblyA" as_friend" at the top of a source file and everything should work. Right? Wrong. When the compiler hits the #using statement, it says to itself (and me as well, how nice of it) "hmm, you've already added AssemblyA as a reference using the project settings, and it isn't marked as_friend there, so I'm just going to ignore the fact that you said as_friend here. I'm so helpful!" Of course, there's no way to mark an assembly as_friend from the Add Reference dialog, so you have to remove the assembly from the list and stick with the #using statement.

But wait, there's more. The #using statement, confusingly, only applies to the source file in which you've declared it. That's right. Even though your assembly B has told the compiler that you're using a reference to assembly A, it will still only let you use types from assembly A if you've made the special #using statement in the current source file. And since we've removed assembly A from the project-wide list of references, you now need to make sure that every required source file can see the #using statement.

But wait, there's more! The #using statement can only refer to ONE specific DLL, with a hard coded path. That's right. You need to hard code the path to the DLL. That means that if you want to refer to an assembly contained in the same solution, you need to refer to its actual physical location on disk. Also, you end up needing to use preprocessor magic to make sure you load the right assembly based upon the current platform and configuration of the project. In SlimDX, this preprocessor garbage ends up looking like this:

So now you need to add this to a common header and make sure you include it from every source file, making compile times that much worse and leading to weird and obscure compile errors if you forget to add it to one. I cannot fathom what they thought they were doing when they designed this system, but I do know that it's sick and twisted. Wherever you are, C++/CLI designer, know that I shake my fist at your evil shenanigans.

SlimDX has a Twitter now, that I'm sure you'll all want to follow. The first juicy piece of news on there is that we have confirmed that "Spiderman: Web of Shadows" DID in fact ship with SlimDX, and uses it in its launcher for D3D display settings and DirectInput button configuration. Unfortunately, they didn't see fit to credit us. :(

I've started the new Direct3D10 tutorial, after Drew called me out on it. I should have it up sometime in the next few days. [grin]

Classes start on Wednesday. I'll be happy to go back to school after spending a long month at home. I always love the first few days of classes, although they shortly thereafter become boring. Oh well, such is life.