I’ve been away from programming my projects for a while. I’m not going to disclose why, ’cause I like to keep some mystery surrounding my microscopic internet presence. Or maybe I’m just paranoid; how are you to know? Anyway…

Since I’ve been away for so long, some cobwebs have cropped up. CANINE is still just a basic little rendering engine (still entertains me, though!) and Pizazz has some bugs due to incompatibilities with a slight (SLIGHT!) change in the Zazzle RSS feed. Also, I should give Secret Maryo Chronicles some love…that game helped me get through a week of nothing-to-do (or a month? I can’t remember…) and I want to ensure its continued development. So, here’s my game plan:

1. Fix the bug in Pizazz where image resizing stopped working. Make a hotfix release.

2. See if I can get my secret TinyBASIC compiler to do a little more than just tokenize source code.

3. Add a way to embed Zazzle product feeds into WordPress pages with Pizazz via shortcodes: [pizazz-listing query=asparagus store=HolidayBug]

4. Hack on the secret platformer I’ve been working on a bit with a new friend of mine.

5. Get some basic scripting going in CANINE so we can get some interactivity out of the billboards. (Exploding barrels, here I come! Later.)

6. Add a script editor to Secret Maryo Chronicles to support Quintus’s new scripting subsystem.

7. Hack a bit more on CANINE to get some bullets and enemies going.

8. Get that TinyBASIC compiler I mentioned to output something I can assemble with the Netwide Assembler (hence marking my first successful attempt at a compiler).

9. Work a bit on that beat-em-up I made some mock-up graphics for.

I’m going to try and stick to this as much as possible, only diluting it with Team Fortress 2. (Why must I have friends to play Mann vs. Machine? I don’t have enough! I don’t need Valve’s social pressure. >:[) If anybody gives me compelling reason, though, this order may be subject to change. It has to be really compelling, though.

Before I get into what progress I’ve made with CANINE, I’d like to start by mentioning some stuff that I forgot about in the last post. I had learned something very important at the time, and I feel that I should spread the word so that this mistake is made less often: the C math library does not handle degrees. I’ve had so many problems because of this, and I keep running into it. The C standard math library takes and outputs radians. It’s not that it’s hard to convert between the two (degrees to radians is d*M_PI/180, while radians to degrees is d*180/M_PI); it’s just that the math books I’ve grown up on always handled degrees and rarely even mentioned radians. Turns out that’s pretty lame, because I’m constantly encountering radians in the world of applied mathematics. And yet OpenGL runs with degrees. What’s up with that?
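For the record, the two conversions fit in a pair of one-line helpers. A sketch; M_PI comes from <math.h>, with a fallback for compilers that hide it under strict standards modes:

```c
#include <math.h>

/* Some compilers only expose M_PI with an extension macro, so provide a
   fallback just in case. */
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* The C math library works in radians; these convert both ways. */
#define DEG_TO_RAD(d) ((d) * M_PI / 180.0)
#define RAD_TO_DEG(r) ((r) * 180.0 / M_PI)
```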

The only other thing I think is worth mentioning is how to convert from Vandevenne’s direction vectors into an angle. atan2(player_direction_y, player_direction_x) gives it to you in radians; take special note that y comes first! Also, it’s useful to execute gluPerspective(field_of_view_angle, display_width/display_height, 0.2, longest_dimension_in_level) on the projection matrix when rendering the 3-D objects. Vandevenne’s raycaster runs at a field of view angle of 66 degrees — and OpenGL, as I’ve said, does take degrees — and the longest level dimension is used as the far clipping plane to keep parts of the game from being clipped away by OpenGL.

Now, back to the present.

It’s taken me a week to figure out how to, but I’ve finally got the sprites to render. In OpenGL terminology they’d be called “billboards,” polygons that always face you, like in Wolfenstein. All of the tutorials I found through Google failed and confused me. At one point I just grabbed Wolfenstein iOS’s source code to see how they did it, and I tried to implement the same technique. It didn’t work…at all. My engine’s implementation seems to be largely incompatible with theirs; not even our coordinate systems are compatible. (Mine’s better, by the way. =P)

Today, I deleted all of the billboard code that I had and started from scratch, clear-headed and focused. Guess what, I figured it out in half an hour. *facepalm*

The first thing I did was just try to get the sprites to render as northern walls so that I could see them. They rendered all right, but at the wrong coordinates. *sigh* I forgot that my coordinate system is not 100% compatible with OpenGL’s. To convert from mine over I have to do -map_width+y. Goody, now I have sprites in the right locations, sort of. As opposed to being in the middle of a tile like they should, they’re aligned to the northern-most Y axis and they don’t even rotate.

This turned out to be a simple problem to solve. First off, centering them was as simple as adding 0.5 to the X coordinates. Next, Vandevenne’s use of vectors for the player location made rotating them towards the player easy, and it’s even cleaner than Wolfenstein’s — probably faster, too! As opposed to dealing with sines and cosines, the polygon coordinates are as simple as:
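A sketch of the idea, with illustrative names rather than the engine’s actual code: the vector perpendicular to the player’s unit direction (dx, dy) is just (-dy, dx), so the billboard’s two endpoints fall half a width to either side of the sprite along it.

```c
/* Endpoints of a billboard quad that stays square-on to the player.
   (player_dir_x, player_dir_y) is the player's unit direction vector from
   Vandevenne's raycaster; (sprite_x, sprite_y) is the sprite's centered
   map position. No sines or cosines needed. */
typedef struct { double x0, y0, x1, y1; } billboard_t;

static billboard_t billboard_endpoints(double sprite_x, double sprite_y,
                                       double player_dir_x,
                                       double player_dir_y,
                                       double half_width)
{
    /* Perpendicular of (dx, dy) is (-dy, dx). */
    billboard_t b;
    b.x0 = sprite_x - half_width * -player_dir_y;
    b.y0 = sprite_y - half_width *  player_dir_x;
    b.x1 = sprite_x + half_width * -player_dir_y;
    b.y1 = sprite_y + half_width *  player_dir_x;
    return b;
}
```

With the player looking along (1, 0), a sprite centered at (5, 5) gets endpoints (5, 4.5) and (5, 5.5): a quad spanning the tile, face-on to the view.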

Ka-bam! I’ve now implemented a completely functional billboard sprite system. Ah…it feels good. My modification of Vandevenne’s raycaster also let me cull out the invisible sprites by checking if they were standing on visible tiles. Satisfying.

Now for the YouTube-hosted video example. You might notice the walls have a brown edge at the top. I added those to make sure that they were rendering right-side up, and for some reason felt the need to keep them for this video. Without further ado…

Argh, wait! Just as I started preparing the video I found a problem: in a ring of sprites, at certain angles some of them will only render part of themselves. It seems to be a problem with the transparent border around the other sprites or some such. A quick Google says that I might have to sort them and draw them in order, but I’ll save that for next time. I’ll still show you the video, though. Enjoy!

Back in the first post where I announced CANINE, all I had was a 2-D grid representing the level that highlighted the parts of the screen that were visible to the player through a theoretical frustum. I have since implemented a three-dimensional world using the same data model. It wasn’t really that hard using OpenGL: the walls can be positioned at real 3-D coordinates, and the graphics processing unit (GPU) will take care of the rest. The main difficulties are figuring out how to store the polygon data and avoiding polygons that won’t even be visible at the player’s current angle. However, this was mostly solved in the 2-D raycaster from my previous post. The only additional polygon-culling I’ve done is glEnable(GL_CULL_FACE), which tells the GPU to ignore polygons drawn in counter-clockwise order (thus only showing polygons that are facing you) and this pseudocode:
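A sketch of that per-face test, with hypothetical names (assuming x grows east and y grows north): a face is only worth drawing when the player is on its side of the cube.

```c
/* Decide whether one face of a wall cube could possibly face the player.
   A face pointing away from the player is hidden by the cube itself, so
   it can be skipped before it ever reaches the GPU. (cell_x, cell_y) is
   the cube's cell; each cell spans one map unit. */
static int face_visible(char face, double player_x, double player_y,
                        int cell_x, int cell_y)
{
    switch (face) {
    case 'N': return player_y > cell_y + 1; /* player is north of the cube */
    case 'S': return player_y < cell_y;     /* player is south of it */
    case 'E': return player_x > cell_x + 1;
    case 'W': return player_x < cell_x;
    default:  return 0;
    }
}
```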

This ignores any polygons that would be facing away from you and, thus, hidden by the rest of the wall. I think this actually kind of eliminates the need for glEnable(GL_CULL_FACE), but I don’t think it’s doing any harm to double up here. I’m thinking of having 3-D sprites in the future, in which case glEnable(GL_CULL_FACE) should really help.

Just yesterday I added two new cool features to this, both of which reassured me of the superiority of hardware- over software-rendering systems. The first is the addition of textured walls. Under Vandevenne’s software renderer, using only 64×64 textures on a 512×384 display, it runs at about 50 frames-per-second on my Dell Optiplex 745 with Arch Linux. My hardware-accelerated engine, using 256×256 textures at 1024×768, runs at about 70 frames-per-second. Porting those dimensions back to Vandevenne’s engine, it only runs at 10-20 frames-per-second! Pretty impressive, eh?

Of course, Vandevenne’s limitations only matter for developers who actually want those bigger dimensions, but who wants to be limited like that? Also, OpenGL automatically stretches textures as needed, while that would have to be implemented by hand in Vandevenne’s renderer. This means that under my engine, you could have 64×64, 256×256 or even 100×42 textures all in the same level without having to worry about it.

The second thing is that you can look up and down in my engine now. Admittedly, that’s kind of pointless when everything in the game has such short walls and all the items sit at the same level as you, but it can be pretty cool (especially if skyboxes come into play — (^-^)).

Disclaimer: I don’t mean to insult Lode Vandevenne or his renderer. Without his tutorial, I wouldn’t have understood raycasting so quickly. His renderer shows all the nitty-gritty details of how raycasting works. Raycasting is an amazing way to speed up hardware rendering, and it’s required in the equivalent software renderer. It also has some interesting implications for enemy logic (can the enemy see me?).

Do you remember that game from the 90s, where you’d massacre a-hole Nazis in an attempt to escape a castle, and for some reason all the walls were the same height, and everyone was always looking directly at you? Yeah, I’m talking about Wolfenstein 3-D, one of the earliest “3-D” games for MS-DOS, ported to a variety of other platforms. Why do I say “3-D” in quotes, you ask? Technically, it wasn’t 100% 3-D. Although everything was drawn in a way that looked 3-D, it was all represented as a 2-D grid. That’s why you never found anything above anything else and you couldn’t look up or down. This is generally referred to as pseudo-3-D or “2.5-D” technology. It wasn’t until Quake that true 3-D came to the first-person-shooter genre.

Have you ever wondered how they got Wolfenstein to render so quickly on such old computers? It was pretty simple, actually: they used a technique called “raycasting.”

First, think of the game as a 2-D grid, where each cell represents an optional cube, where each side is a wall. Now place a player in one of the empty cells, looking in some random direction. The question is, which walls are on screen and at what angle and distance should they be rendered? Wolfenstein’s process went a little something like this:

For each column on the screen…

Generate an angle to project a ray towards. At the center of the screen, the ray should project straight ahead, in the direction the player is facing. It should lean further left the further left the column is, and further right the further right the column is.

Project the ray until it hits the side of a cell. Repeat this until the cell has a cube in it.

Calculate which column of the cube’s wall was hit, calculate its distance and render it to the screen at the current screen column with the appropriate stretching.

Remember the distance of the wall rendered at that column of the screen in a so-called “z-buffer.”
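The wall-casting steps above can be sketched in C, following the DDA stepping from Vandevenne’s tutorial. Everything here (map size, screen width, names) is illustrative rather than Wolfenstein’s actual code; the function returns the perpendicular distance that would go into the z-buffer for that column:

```c
#include <math.h>

#define MAP_W    8
#define MAP_H    8
#define SCREEN_W 320

/* Hypothetical test map: 1 = solid cube, 0 = empty, walls all around. */
static const int world_map[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

/* Cast one ray for screen column `col` and return the perpendicular
   distance to the wall it hits. The centre column's ray points along
   (dir_x, dir_y); columns to either side lean along the camera plane
   (plane_x, plane_y). */
static double cast_column(int col,
                          double pos_x, double pos_y,
                          double dir_x, double dir_y,
                          double plane_x, double plane_y)
{
    double camera_x = 2.0 * col / SCREEN_W - 1.0;   /* -1 .. +1 across screen */
    double ray_x = dir_x + plane_x * camera_x;
    double ray_y = dir_y + plane_y * camera_x;

    int map_x = (int)pos_x, map_y = (int)pos_y;
    double delta_x = (ray_x == 0.0) ? 1e30 : fabs(1.0 / ray_x);
    double delta_y = (ray_y == 0.0) ? 1e30 : fabs(1.0 / ray_y);
    int step_x = ray_x < 0 ? -1 : 1;
    int step_y = ray_y < 0 ? -1 : 1;
    double side_x = (ray_x < 0 ? pos_x - map_x : map_x + 1.0 - pos_x) * delta_x;
    double side_y = (ray_y < 0 ? pos_y - map_y : map_y + 1.0 - pos_y) * delta_y;
    int side = 0;   /* 0 = hit an x-side of a cell, 1 = a y-side */

    /* Step cell to cell until the ray lands in a cell with a cube in it. */
    while (!world_map[map_y][map_x]) {
        if (side_x < side_y) { side_x += delta_x; map_x += step_x; side = 0; }
        else                 { side_y += delta_y; map_y += step_y; side = 1; }
    }
    /* Perpendicular distance avoids the fish-eye a Euclidean distance gives. */
    return side == 0 ? side_x - delta_x : side_y - delta_y;
}
```

From (4.5, 4.5) looking straight east, the centre column’s ray travels 2.5 units before hitting the eastern border wall, which is exactly what comes back.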

Next, the sprites are rendered:

Sort all of the sprites in order of distance.

For each column of each sprite, in furthest-to-closest order…

Determine which point on the screen the sprite’s column would be drawn to.

Ensure that the column actually lands on the screen (i.e. that it isn’t too far to the left or right).

Ensure that the sprite column is not behind a wall via the z-buffer.

If both checks pass, draw the column with the appropriate stretching.
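The sprite pass’s bookkeeping can be sketched like this, with made-up types and names; note that Vandevenne’s tutorial sorts far-to-near, so closer sprites paint over farther ones:

```c
#include <stdlib.h>

/* Hypothetical sprite record: map position plus distance to the player. */
typedef struct { double x, y, dist; } sprite_t;

/* qsort comparator: descending distance, i.e. furthest sprite first. */
static int by_distance(const void *a, const void *b)
{
    double da = ((const sprite_t *)a)->dist;
    double db = ((const sprite_t *)b)->dist;
    return (db > da) - (db < da);
}

/* Gate one sprite column on the screen bounds and on the wall distance
   the z-buffer recorded for that column. */
static int sprite_column_visible(int screen_col, int screen_w,
                                 double sprite_dist, const double *z_buffer)
{
    return screen_col >= 0 && screen_col < screen_w
        && sprite_dist < z_buffer[screen_col];
}
```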

Special thanks to Lode Vandevenne for his excellent tutorial on the subject. If you want to see exactly how to implement this technique yourself, his tutorial is a powerful resource.

So, this is all well and good, and I made my own Wolfenstein-style game engine this way. I had one problem with it though: it was a software renderer. Software renderers are inefficient and, by today’s video game standards, almost deprecated. So, I wanted to figure out how to bring this to OpenGL.

Thanks to id Software, who was nice enough to release the source code for the original Wolf3D along with their iPhone port, I discovered a cool way to do this. After a few minutes examining the iPhone port, I discovered that they continued to use a raycaster, but instead of rendering each wall as it was hit and drawing the sprites later, they kept a secondary grid that stated which cells were visible.

At the beginning of the process, every cell was marked as invisible. Then, whenever the raycaster passed over or collided with a cell, that cell was marked as visible. Afterwards, they would just go through each wall and sprite and render them if they were in one of these visible cells. They didn’t have to worry about what was in front of what because OpenGL automatically takes care of z-buffering.

Not only does this use hardware-accelerated graphics, but it also means that you don’t have to go through the rendering process for every single sprite in the game! If the sprite is not in a visible cell, it can be ignored completely. In the old algorithm, even a sprite completely hidden behind a wall still had to be checked column by column.
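The visibility grid itself needs very little code. A sketch with illustrative names, not id’s actual implementation:

```c
#include <string.h>

#define MAP_W 8
#define MAP_H 8

/* Visibility grid, rebuilt every frame: start with everything invisible,
   then let the raycaster flag each cell its rays pass over or hit. */
static unsigned char visible[MAP_H][MAP_W];

static void clear_visibility(void)
{
    memset(visible, 0, sizeof visible);
}

/* Called from the ray-stepping loop for every cell a ray enters. */
static void mark_visible(int map_x, int map_y)
{
    if (map_x >= 0 && map_x < MAP_W && map_y >= 0 && map_y < MAP_H)
        visible[map_y][map_x] = 1;
}

/* A sprite standing in an invisible cell never reaches the GPU. */
static int sprite_can_skip(int cell_x, int cell_y)
{
    return !visible[cell_y][cell_x];
}
```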

I’ve begun the process of implementing this technique. For the software-rendered version I had used the Simple DirectMedia Layer, but for this I’m using Allegro 5.x. I’m using a modified version of the raycaster I built from Vandevenne’s tutorial, and it’s going pretty well. So far, I’ve managed to get a 2-D map to represent the visibility grid by highlighting cells that the raycaster hit. From what I can tell, it’s showing everything that can be seen and nothing else, exactly as it should. Here’s a video of me playing with it:

This game engine will become the basis of a retro, Wolfenstein-style video game I’m working on, code name CANINE (all-caps for awesomeness). Keep checking the blog or subscribe if you want to see how it goes. I’ll be posting about my progress and all of the cool little programming tricks I’m using on the way.


Copyright

Copyright (C) 2010-2012 Entertaining Software, Inc.

All website content, unless otherwise noted on a case-by-case basis, is protected under United States copyright law with all rights reserved. Infringement of such copyright will be penalized to the maximum extent possible.