I am happy to bring another update to the Virtuoso project. With each release, it is becoming a more user-friendly application. Here is the changelog for this update.

Beta 0.4.6

Added play/stop buttons

Added time slider

Sound playback enhanced and made more realistic

Lots of optimizations and bug fixes

Actually counted the number of changes and categorized them…

You may have noticed the vague title for this post. It actually refers to the nice fade that all notes have, which gives a much more lifelike feel to the song playback that this program provides. This is definitely my favorite part of this update.

I’ve got a major update for you guys. Virtuoso has come a long way and can now do several things to make it easier to view your MIDIs.

Currently, Virtuoso can play most songs without trouble. It can also visualize the playback of songs with a keyboard, as well as a heads-up display for several things: the time signature, tempo, key signature, and upcoming notes.

Even better, it is now possible to use samples other than a square wave, and the application currently uses a sound of my own design which I have appropriately dubbed ‘e-piano.’ It is not yet possible to select a sample for playback, but that will be coming in the next update.

Here is a video showing off the latest version at the time of writing.

I have been doing a lot of work on my web application Virtuoso, and if there is one thing I know, it is that MIDI is a pain in the butt! However, it has its pluses, such as being an extremely compact file format, which means quicker parsing/load times. The next update is going to be a major overhaul of the current engine, and I wanted to share something interesting with you before I release it.

Now, I am sure that most of you are familiar with basic sound concepts. Just as a refresher, sound is best described as a vibration or wave with some frequency that travels through a medium (e.g. air or water). We hear these waves when they make contact with our eardrums.

Of course, there are lots and lots of different kinds of sounds, and that is due to the fact that sounds are not “pure.” A sound can be considered pure when there is only one wave that makes it up. A sine wave is an example of a pure sound, and so is a square wave. If you were to combine multiple waves, you would get a sound that is different, but no longer pure. Even if the waves are the same type but have different frequencies, you are making an entirely different sound.

So without further ado, let’s start looking at the formulae for these simple waveforms!

Now, when writing code to generate usable raw sound data, there are two things you need. One is the frequency. In the equations that I will be showing, frequency will be represented by the variable f.

Two is the sample rate. This defines how many samples are in one second. It will be represented by the variable s.

Also, when generating the waveform you need something to represent time. For that I will be using x, the sample index.

Sine Wave

By far the simplest waveform is the sine wave. Even if you are not a sound engineer, the concept of a sine wave should be fairly familiar.

So what does it sound like? It is very soft, and seems as though it is almost muffled.

In order to generate a sine wave with the correct frequency, however, we need to multiply by 2π. Advancing the phase by 2πf/s radians every sample produces a wave of frequency f at sample rate s, as in the following formula:

y(x) = sin(2πfx / s)

This will generate a sine wave with the specified frequency and sample rate.

Square Wave

This is one of the most recognizable of all waveforms. Those of you who have played NES games will definitely know what I mean. Every time I think of this sound the word “Doot” pops into my head. A square wave is comparatively louder than all the other waveforms, even when they are at the same amplitude.

There is not much to be said about this waveform in terms of its formula. All you really need to do is take a sine wave and run it through the sgn function to get +1, 0, or -1.

I will use brackets “[]” to represent sgn in this equation.

So yeah, it’s the exact same formula, but now sgn locks it into having only three unique values.

Sawtooth Wave

Another simple waveform. The sawtooth wave sounds kind of like a bug buzzing around your ear! That doesn’t make it any less interesting, though. All you really need in order to generate it is the modulus (remainder) operator and some subtraction, and you’ve got yourself a sawtooth wave.

Triangle Wave

This was by far the most difficult for me to figure out. Eventually, I gave up and decided to just look up how to create a generic triangle wave. Now that I think about it, this one is not much more complicated than the sawtooth wave; just a couple of extra operations.

Triangle waves sound kind of like a mix between a sine and a square wave, so the sound is not too harsh, but it still has some interesting qualities.

Well, I hope you enjoyed learning about these basic waveforms. With these, you can easily make some very interesting sounds, as you will see in future updates of Virtuoso.

We use technology every day. Whether we are conscious of it or not, just about everything has a processor (something that handles inputs and outputs) and an electric supply. This includes appliances, speakers (most of the time), phones, computers, and cars.

However, how we use these things can vary quite a lot. There are buttons, joysticks, knobs, switches, and other more complicated methods of input. In fact, the amount of information that an electronic device receives is tiny in comparison to the amount and complexity of what we process every day.

Now what if I were to ask you this: in terms of controls, which phone would you choose, a phone with a screen and keypad, or a phone with a touchscreen and a single on/off button? Most of you probably prefer the latter, but why? Well, it is a much more natural way to control whatever the phone is doing. There is no need to fiddle with tiny buttons in order to get to the one option all the way at the bottom of the menu.

There was actually an interesting video by The Game Theorists where they discussed with Nintendo of America’s Reggie Fils-Aimé whether motion controls are good or bad compared to older control methods.

Overall, both sides had interesting arguments, but I think the future of electronic interfaces will move to a more natural/ergonomic method of providing input to machines.

Whether it is wanted or not, our methods of controlling technology will, in most scenarios, move to some form of motion or touchscreen control. Additionally, advances in technology will always drive attempts at more efficient and more natural methods of interfacing with machines, rendering more traditional controls obsolete in the process.

One example of this advancement is the Oculus Rift. It took quite a long development time to become commercially marketable to public audiences, but it provides insight into how technology is evolving. Besides the VR headset itself, users can also get a peripheral that lets them use their hands in a natural way to interact with virtual environments. Of course it is not without its flaws, but eventually advances in technology will provide an accurate and reliable method of interfacing with machines.

Of course that’s just my prediction on the matter! I would love to hear what you guys think about how the future of interfaces will evolve with time.

It looks like I have started another project! I call this one Virtuoso, and basically it is going to be a sort of all-in-one MIDI player to help those who want to record songs being displayed on a piano, or learn to play songs.

I have made quite a lot of progress behind the scenes with this one. The second update literally jumped from just loading a MIDI file all the way to playing and visualizing said playback. There are still a few kinks that I want to work out before I add more features, but it will not take too long since it is simple stuff.

I have even recorded a video of this being used, so for all of you mobile users who are itching to try, here’s a nice little preview before I optimize this for mobile devices.

Everybody, no matter what job they eventually settle on, has a story behind their decision. I figured it is about time I finally put this story to bytes: why I chose to be a programmer.

The one thing I really enjoy when I am programming is that feeling of the unknown: I have an almost infinitely complex puzzle to solve with numerous solutions. Ever since I was a kid I have been interested in puzzles, especially 500+ piece jigsaw puzzles, and I am still capable of finishing most puzzles in under 20 minutes. I am definitely not a genius in the strictest terms, though, and there have been many times when I got stuck figuring out the math that needs to go into my programs. Instead, it is usually pure dedication (or is it OCD?) that pulls me through in the end.

Now let’s go way back to a time when I had no idea what I wanted to do. I was a gamer (well, I still am, but that’s beside the point). Anyway, the thought of making my own game had crossed my mind a few times, but I always regarded it as some kind of crazy magic that I would not be able to understand.

Then, what do you know, Nintendo released WarioWare: D.I.Y. for the DS and I was just dying to see what I could do with it. After going through the tutorial, game design didn’t seem like such a farfetched idea. Besides the more artistic side of game development, all I really needed in order to express how a game should work was a set of strictly logical statements.

After creating a few dozen games of increasing complexity, I decided it was time to move on to something less restricting. I started looking for something to make games on computers, and I came across TheGameCreators’ DarkBASIC and FPS Creator. I did not get very far with either; it was a big jump in complexity from what I had known. I tried my hand at a few programming languages, namely Java, Python, Lua (for a Minecraft PC mod), and HTML (not a programming language). I finally had a breakthrough when I started using C and made my first major demo programs beyond “Hello World.”

With everything that I know now, everything that I did before seems trivial, but it did not come easy. I think everyone should give programming a try when they have the opportunity. A lot of the skills I have learned can be applied to any number of professions, most notably problem abstraction and problem solving.

I would love to hear about how you got introduced to your dream job. Even if you do something else for a living, it would still be interesting to hear about the experiences you had when making that decision.

After a long wait, the final part of the skeletal animation tutorial is here! It actually took a while because I found a bug in my code, but I will talk more about that later. Make sure you are comfortable, and maybe have a calculator on hand, because this is going to be the most math-intensive part of the series.

So what exactly is being done in skeletal animation? Well, unlike something such as the MD2 file format, a skeletally animated model only stores the initial position of its vertices, and it is then up to the application to figure out how the vertices are affected during an animation.

The above image gives a nice summary of how the two methods of animation differ. With vertex animation, every single unique position for the vertices is stored. This can make the files particularly bulky if the animation is complex, since a position is made of 3 floats (or whatever data type you use) to represent the x, y, and z axes. On the other hand, it is particularly easy to implement, since all of the positions are already calculated and all you really need to do is interpolate between the key positions of the vertices.
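To put a rough number on "bulky," the storage for vertex animation grows with every keyframe, since each keyframe carries a fresh copy of every position. A back-of-the-envelope sketch (the function name is mine, purely for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes needed to store raw vertex-animation positions: every keyframe
 * holds 3 floats (4 bytes each) for every vertex. */
size_t vertexAnimBytes(size_t vertices, size_t keyframes)
{
    return vertices * keyframes * 3 * sizeof(float);
}
```

A 1,000-vertex model with 60 keyframes already needs about 703 KB of raw positions, while a skeletal model stores those 1,000 positions only once.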

Skeletal animation, on the other hand, offers a lot of flexibility in how it can be implemented. It requires a little more work to handle, but has the advantage of a much more compact size. For example, the game Super Smash Bros. makes use of skeletal animation in conjunction with predefined bone movement data, as well as more character-specific animation data, to save space. So in the above image, we are given the initial position of four vertices and the initial position and rotation of the bone (or joint). Then, in subsequent frames, anything that happens to the bone (sigh… or joint) will be applied to the vertices that are parented under it.

Anyways, let’s end this lengthy introduction and get into this code.

New Data Types

Before we can load the new information that we will be handling, we have to create the data types. If you have been extra diligent, you may have noticed that we never loaded the materials, and this structure comes before the animation and joint data in the file. So we are going to have to write this up as well.

Pretty simple overall; you are probably familiar with most of these parameters, which tell how the model should react to lighting. There are also two character arrays which will point to the location of the texture or alpha map, if there are any.

We actually need to create two data types: one for the joints themselves, and another for the keyframes which we will be using to animate them. There is one important detail that I want you to notice, and that is the parentName/parentIndex variables. In order to properly animate our model we are going to need to parse through the joints and set the parentIndex value; otherwise it is going to be a hassle and a waste of processing power to find the parent every time we animate the joints.

There are also two matrices which are not stored in the file. This took me the longest to figure out when I was first experimenting with loading the file type. I will explain this more later, but basically the entire skeleton has to have all of its positions and rotations reoriented.

Finally, we need to add the new structures to our class, as well as any new functions we will be using.

You probably noticed that there are several new variables between the material and joint variables. These are specifically for animation purposes, such as the number of frames in the animation or the frames per second. Only one of these variables is not actually in the file, and that is animStartFrame. It will make our lives a lot easier by completely avoiding the case of the current frame being less than the earliest keyframe.

Before we start writing the code that will load the animation and joint data, it is important that you know something beforehand. All of the keyframe timing data (except animTotalFrames) is stored on a per-second basis. This can become a real hassle when we have to check whether the animation has played all the way through, and it also makes the numbers a lot messier (at least for me), so it is usually better to convert all the values to frame time as you load them. However, and I cannot stress this enough, how it is implemented is entirely up to you. So with that, let’s continue with the tutorial.
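The conversion itself is a one-liner; for example, a keyframe stamped at 2.0 seconds in a 24 fps animation lands on frame 48. A minimal sketch (the function name is mine, not from the loader):

```c
#include <assert.h>

/* Convert a keyframe time in seconds to frame time, using the fps
 * value loaded from the model's animation parameters. */
float toFrameTime(float seconds, float fps)
{
    return seconds * fps;
}
```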

Boneless (or joint-less) No More!

You are probably thinking to yourself, “Finally! I get to see my model animating.” Although if you are using the file that I gave you in the previous tutorials, it does not have animation or joints for that matter. So I will be providing you with a new model that has animation and a texture if you want to try and implement that part. See if you can tell me what the model is : )

It is not going to be too different writing the loop to load the joints and materials, so there will not be much of an explanation for it. Just remember that before loading the joints, you have to load the animation parameters of the model (fps, nframes, etc.).

The mathy part that no one likes

It’s okay, there is not going to be anything too complicated here. You don’t even have to understand it, but it will be good for you to walk through it on a piece of paper and write down verbal phrases for each step to ease into it. Another thing that sets skeletal animation apart from vertex animation is that a bone/joint will influence any child joints with its transformations.

So, if we have joint A and B, and B is a child of A, the transformations will look something like this.

A = Atransformations
B = Btransformations * Atransformations
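To make that parent-child rule concrete, here is a minimal sketch in plain C. The mat4 type and helpers are illustrative stand-ins for GLM's, not the tutorial's actual code:

```c
#include <assert.h>

/* Column-major 4x4 matrix, matching OpenGL/GLM conventions. */
typedef struct { float m[16]; } mat4;

/* r = a * b, i.e. apply b's transform first, then a's. */
mat4 mat4_mul(mat4 a, mat4 b)
{
    mat4 r;
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.m[k * 4 + row] * b.m[c * 4 + k];
            r.m[c * 4 + row] = sum;
        }
    return r;
}

/* A translation matrix, the simplest transform to compose. */
mat4 mat4_translate(float x, float y, float z)
{
    mat4 t = {{1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1}};
    return t;
}
```

So if joint A translates by (1, 0, 0) and its child B by (0, 2, 0), B's final transform is mat4_mul(A, B), which carries the combined translation (1, 2, 0).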

I also want you to learn a concept that really simplifies the calculations involved with skeletal animation. I like to call this concept “State Zero”. Basically, “State Zero” is the initial set of transformations of an object, which is only calculated and applied at initialization. However, the initial position of one bone does not get applied to its children. In other words, “State Zero” should always be maintained and should only be broken by the progressing animation of a model. Lucky for us, all animation data is based on the offset from the original position of the model. Now, enough theory.

In the new joint initialization function we have to do a number of things. First, we have to find the index of each joint’s parent (if it has one). This will make our lives a lot easier when applying parent transformations to a child joint.

After that we will calculate all of the initial transformation matrices and apply them to the keyframes and vertices. We do this because, first of all, the keyframes must be put in the correct orientation or you won’t get the correct result. Second, the vertices are in model space, which means that their origin is (0, 0, 0); jointed vertices, however, should have their origin set to the position of the joint, hence, in joint space.

Finally, we call the jointsRecalc function which we are about to write.

...
if(numJoints > 0)
    jointsRecalc();
}

SLERP? I’m a bit thirsty myself

Nope, not that kind of slerp. For those of you who may not be familiar with the term, it is actually an acronym that stands for Spherical Linear intERPolation. It is basically just like linear interpolation, except that the values wrap around at some point. Well, it looks like you are in luck, because we get to implement both in our little animation function!

The bulk of this function is held within one for-loop which contains two sub for-loops. Let’s take a look at our function when it does not have the two inner loops filled in.

As you can see, in order to animate our model we create two vec3 variables for position and rotation. Then two matrices are constructed from these vectors, and finally we multiply the matrices and store the result in each joint. Notice that unless the joint has a parent, we start with a simple identity matrix; this is what I mean by “State Zero.” You should also be aware that if a joint has a parent, we subtract that parent’s starting position in order to maintain “State Zero.”

Before we start writing the code for position and rotation we have to write some helper functions that we will be using.

The first function is a special modulus that takes into account the sign of the dividend. Instead of just tacking the sign back on, we truly consider it in the calculation, which gives the desirable result. The next function, smallest_rad, is what will be used to interpolate between the rotations of a joint. As you might guess, the inputs and returned value are in radians.

Finally, interpolateL is just the standard formula for linear interpolation: we start at point A and want to move to point B in a certain amount of time, which determines the factor. Just remember that the function accepts floats, NOT vectors. Now then, we’ll take a look at how we calculate the position of joints.

Pretty simple overall. The loop searches for a keyframe time that is greater than the current frame. It also makes sure that j is greater than zero so that we don’t run into memory access violations, which could end up being really hard to track down.

We get the two key positions and store the interpolated values in the pos vector. Finally, we break from the for-loop in order to save CPU cycles. This might not matter much in our small example, but could make more of an impact if an animation is sufficiently long.

The only real difference is that we have to find the smallest angular distance between two angles. On a circle there are always two arcs of the circumference that connect two points, called the major and minor arcs. You might be able to guess that we want the minor arc, since this is the arc with the smallest angle.

Before we can look at the result, a few adjustments have to be made to the genBuffers, clearBuffers, and draw functions, since we now have to account for the bones in the vertices as well as send the matrices of the bones to our shader. We’ll start with the first two.

Not much different from what was already in there. Each vertex has exactly one bone, so the calculations are not weighted (i.e. each weight is effectively 1). Notice that even though all of the bone indices are integers, I send them as floats. This is due to some weird conversion that takes place when sending the data to our shaders, which causes undesirable results.

Also in the clearBuffers function, we will be freeing all of the memory associated with our joint data along with everything else.

We also have to update our vertex shader to react to all of this new information. Instead of just multiplying by the modelview matrix and then the projection matrix, we now have to multiply the joint and modelview matrices together, which will correctly position all jointed and unjointed vertices. This is what the new shader looks like.

Hooray, I have a new project to share with you! This one is already looking to be really cool. Basically, the finished project will allow you to smoothly slow down audio in real time, as well as apply a number of interesting effects. What makes it really cool is that I am designing it to be used in web browsers using the relatively new Web Audio API.

So far, I have it loading local audio files (that the user picks), which are then rendered (i.e. played) using the API. It is working pretty well, and the only major speed bump I have hit is figuring out a way to slow down the audio playback without changing pitch.

This project is licensed under the GNU GPL v3.0, which means it is completely open source and you can use the code in your own projects (as long as they remain GPL-compatible)!

I’ll let you be the judge of whether or not this project is pretty sweet. You can find the link to its page here, and it can also be found under “projects.”

This is going to be by far the easiest part of this tutorial. In order to display our model, we just need to parse through the structures so that we get every vertex in the order that it is referenced. We will be using modern OpenGL to render the model with color based on vertex position.

In order to follow along, you will have to set up some code to open a window. If you are using SDL, you can download the starter files that I have included, which will open a window, load the GLSL shaders, and rotate the model, but do not contain the code to actually compile the OpenGL buffers.

Even if you have avoided GLSL up to this day, what better way to pick it up than by loading a model? Even without bones, an MS3D file can still store static geometry that you can use in your applications. For the most part, GLSL is really similar in syntax to the C language. The most important part of our shaders is this chunk of code, which can be found in “vertex.glsl”.

This tells OpenGL the version that we are coding for, as well as what data we are going to send to our shaders: the positions of the vertices, the UV coordinates, and the normals.

If you have set up your own code to open a window, but do not know how to load the shaders you can look through my code to see how to do it.

Equally important are the two variable declarations.

uniform mat4 projection;
uniform mat4 modelview;

These are what will position the model and project it onto the screen. If you find that your model is not being drawn, make sure that you are uploading the matrices with glUniformMatrix4fv. Now let’s see how to actually get our model on the screen.

The most obvious one is draw, which will contain the code to draw our model in the correct position and orientation. The class is not responsible for fetching the shader location of the modelview matrix, so we have to make sure we give it to the function.

genBuffers will compile all of the OpenGL buffers so that we can quickly draw our model as many times as we need. Variables for the position and rotation have also been added, as well as functions so we can actually modify them.

We will store the precomputed value for the TRUE number of vertices in our model in totalVertices. Remember from the previous tutorial that a flat square plane with four countable vertices actually had two triangles with 3 vertices each, for a total of six.

Finally, we have variables for the OpenGL buffers that we will need in order to draw our model.

Onto the functions!

Let’s start off with the new genBuffers function. For the most part, the bulk of it is a for-loop in which we collect all of the data that we need. However, we need to generate some information before the loop begins.

The first thing that is done is figuring out the total number of vertices that make up the model. All we need to do is take the number of triangles and multiply by three. After that, we need to allocate the memory for some temporary buffers so we can send the data to OpenGL.

I want you to pay close attention to the amounts being allocated. There are three position axes for every vertex (x, y, z), so we multiply the total number of vertices by three. Similarly, there are two texture coordinates for every vertex, so in that case we multiply by two.
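Those allocations might be sketched like this. The variable names follow the tutorial, but the helper function and its signature are mine, purely for illustration:

```c
#include <assert.h>
#include <stdlib.h>

/* A sketch of the temporary-buffer setup described above; numTriangles
 * stands in for the triangle count read from the file. */
void allocTempBuffers(int numTriangles, float **vertData,
                      float **uvData, float **normData, int *totalVertices)
{
    *totalVertices = numTriangles * 3;                      /* 3 vertices per triangle */
    *vertData = malloc(sizeof(float) * *totalVertices * 3); /* x, y, z per vertex      */
    *uvData   = malloc(sizeof(float) * *totalVertices * 2); /* u, v per vertex         */
    *normData = malloc(sizeof(float) * *totalVertices * 3); /* one normal per vertex   */
}
```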

If you want, you can replace these with vectors so that you just push the data to the array. I personally like to only handle the exact amount of memory that I will be using, but the choice is yours.

I also want you to take special notice of the processed variable. As the models that we load become increasingly complex, there may be several meshes that make up the entire thing. The catch is that there can be a variable number of triangles within any particular mesh, so we need to make sure that we add the amount that was in the mesh that was just “processed.”

Now we can look at the for-loop that we use to copy the data to our temporary buffers.

The code is fairly straightforward. We start at the top with the meshes, and we make our way down until we get to vertices. This is when we can start copying the data to our buffers.

Even though the UV coordinates are stored in the triangles, we have to make sure that they are stored in the proper order, so that it looks like “U1, V1, U2, V2…”; we cannot just copy the two arrays one on top of the other. The normals, on the other hand, are already in an OpenGL-friendly format, and we can just copy the whole chunk into our buffer.

Finally, we add the number of vertices that were in the mesh to the processed variable. Just like with totalVertices, we multiply the number of triangles by three.

Uploading the OpenGL buffers

Now that we have organized all of the data into our temporary buffers, it is time to give it to OpenGL so we can draw our model.

Before you start uploading any data, you have to make sure you bind the vertex array, and then bind the buffer that you will be giving the data to; in this case we bind posBuffer. After that, we can send the data using glBufferData, passing the same size that we used to allocate posData. Then we call glEnableVertexAttribArray, and if you go back to where we looked at the beginning of “vertex.glsl”, you will see that the index corresponds to the location of vertex_Position. Finally, we tell OpenGL the number of elements that make up a single vertex, which is three (x, y, z), as well as their type.

Once again, notice that for all of the calls for the UV data, we use two instead of three. Since the data for these buffers will never change, we use GL_STATIC_DRAW when sending the data.

The last thing we need to do is free the temporary buffers that we allocated at the beginning of the function.

free(vertData);
free(uvData);
free(normData);
}

And that brings us to the end of the genBuffers function.

I can haz draw?

We are now at the final section of this tutorial. Drawing the model once the buffers are uploaded is extremely simple. All we have to do is calculate the transformation matrix and send it to our shaders, bind our class’s vertex array, and then call the draw command.

Most of this is self-explanatory. The function is passed the location of the modelview matrix, which can be obtained with a quick call to glGetUniformLocation; it is our responsibility to provide this so that we avoid hard-coding it directly into the class. Next, it checks that the total number of vertices is greater than zero. After the transformation matrix has been calculated and uploaded, we bind our model’s vertex array, which was generated by genBuffers. Then we call glDrawArrays and supply the total number of vertices.

Of course, if you try to run the code right now as it is, you won’t be able to see anything, since we have not yet coded the functions to position and rotate the model. So let’s get those out of the way.

If you are using SDL to open your window and have downloaded the starter files, then you can compile and see your results. Even if you are not, you may still want to use the GLSL shader files that I have provided.

Finally, in your main code you can set the position and rotation with the new functions that we created. Once you are ready to draw it, you can call our brand new draw function. This is what my code looks like in order to draw that handsome little set of triangles.

I have highlighted the important lines of code that will be used no matter what you use to open and handle your window. The first thing I do is send the projection matrix to the shaders as well as get the location for the modelview matrix so we can send it to our drawing function. I set the position of the model, and in the main loop I set the rotation and finally call our draw function that we made.

If you are using the same sample file that I provided in the previous tutorial, you should get something that looks like this.

This brings us to the end of the second tutorial for loading and animating a MilkShape 3D file! In the next tutorial, we will get to actually have a true skeletal animation system.

Here are a few things you can try on your own.

Add a function to apply scaling to the model (Be careful with your normals!)

Try drawing the normals of the triangles

Load and display a more complex model. You may have to adjust the position and/or scale in order to see it

Modify line 10 in “fragment.glsl” to see the model with double sided lighting (remove “1.0;”)

I am going to divide this tutorial into a series of parts in order to avoid overwhelming those who are new to 3D models and animation. I will make sure that I clearly explain every part of the process, especially anything involving math, so that you can truly learn how everything works. So, before we start, here is what you’ll need.

An understanding of programming

A compiler of your choice

GLM or some other matrix/vector implementation

An OpenGL function fetcher (GLEW, GL3W)

The following link will take you to the GLM download page if you do not already have it. It is super easy to install, because it is a header-only library and does not need to be compiled.

I will also give you the link for gl3w, which will allow you to easily take advantage of OpenGL 3/4 features. I find it very user-friendly, and it is not outdated like GLEW. You just need to include the header and make sure that you compile “gl3w.c” with your project.

Getting Started: Structure of a MS3D File

Now that you have that set up, it is time to start coding. For this part we are going to print the file's contents so that we know we have loaded it correctly. The structure of an MS3D (binary) file is fairly simple to read. It consists of 7 major structures that define different parts of any particular model.

Header

Vertices

Triangles

Meshes a.k.a. Groups

Materials

Bones

Extra Data (version dependent)

Finally, Some Code

So, we'll start by coding each structure that we are going to need. You should create a new header file and give it a good name like "ms3dloader.h". Let's make sure we're on the same page by starting out with a good base.

At the time this tutorial was written, the latest version of MS3D is 4. All we really need to do with the header is make sure that the ID string matches what belongs in an MS3D file, and perhaps refuse to load old versions of the format. Other than that, there is not much use for it.
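As a minimal sketch, the header can be represented like this. The field layout follows the MS3D binary format (a 10-byte ID string followed by a 4-byte version), but the struct and function names here are my own; note the packing pragma, since the file stores everything with 1-byte alignment.

```cpp
#include <cstring>

// MS3D files are written with 1-byte alignment, so disable padding.
#pragma pack(push, 1)
struct MS3DHeader
{
    char id[10];  // should contain "MS3D000000"
    int  version; // we only accept version 4 here
};
#pragma pack(pop)

// Returns true if this header belongs to a file we know how to load.
bool isValidHeader(const MS3DHeader& h)
{
    return std::strncmp(h.id, "MS3D000000", 10) == 0 && h.version == 4;
}
```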

Next we have the two most important structures in the entire file: vertices and triangles. Triangles are heavily connected to the vertices, so we will look at them as two pieces forming a larger whole.

We start off by defining two types that we are going to use a lot: the Byte and the Word. Most of the names are pretty self-explanatory. Vertices have a position and an index to the bone that controls them (if any). Triangles then point to each vertex and supply a little more information about them. Take this image for example.

This plane is made up of two triangles sharing four points in total. Four is the number of vertices that will be saved in the file, but the triangles only reference the vertices they need and then supply unique information such as normals or UV coordinates. So, if we were to draw one of the triangles, perform a transformation, and then draw the next one, we would not end up with a stretched-out plane, but with two individual triangles.

A couple of things that will probably puzzle you are flags, referenceCount, smoothingGroups, and groupIndex. Except for materials, every structure has a "flags" variable. These exist mostly for editor purposes, and for these tutorials we will ignore them.

referenceCount is most likely how many triangles actually use any particular vertex. If someone can confirm this that would be helpful, but we don’t need to use this information.

smoothingGroups organizes the triangles into groups. Triangles in the same group will have smooth edges while triangles that are in different groups have sharp edges.

Finally, I am pretty sure groupIndex refers to the mesh/group that the triangle belongs to, but there is no use for it here so we can safely ignore it.
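Putting those fields together, the vertex and triangle structures might look like the following. The layout (field order and sizes) follows the MS3D binary format; the exact type and member names are my own convention for this sketch.

```cpp
#include <cstdint>

// The file format uses two small unsigned types throughout.
typedef std::uint8_t  Byte;
typedef std::uint16_t Word;

#pragma pack(push, 1)
// One entry per unique point in the model.
struct MS3DVertex
{
    Byte  flags;          // editor state; ignored in these tutorials
    float position[3];    // x, y, z
    char  boneID;         // index of the controlling bone, or -1
    Byte  referenceCount; // likely how many triangles use this vertex
};

// Triangles reference vertices by index and add per-corner data.
struct MS3DTriangle
{
    Word  flags;               // editor state; ignored in these tutorials
    Word  vertexIndices[3];    // indices into the vertex array
    float vertexNormals[3][3]; // one normal per corner
    float s[3];                // U texture coordinate per corner
    float t[3];                // V texture coordinate per corner
    Byte  smoothingGroup;      // same group => smooth shared edges
    Byte  groupIndex;          // the mesh this triangle belongs to
};
#pragma pack(pop)
```

Because of the packing pragma, each struct matches the on-disk layout exactly, so a whole array of them can be read with a single fread.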

We are now onto the final structure that we will be looking at for this part: groups/meshes. For the remainder of the tutorial I will call them meshes, since they are distinct, independent parts that make up an entire model. Meshes provide a convenient way to divide our model based on material usage. In a model without any bones, this is the topmost structure and requires very little processing in order to get something on the screen. This is how we will define it in our code.
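A sketch of the mesh structure might look like this. Unlike the vertex and triangle structures, the triangle index list is variable length, so the struct holds a pointer that gets allocated after numTriangles is read; the member names here are my own.

```cpp
#include <cstdint>

typedef std::uint8_t  Byte;
typedef std::uint16_t Word;

// A mesh (group) owns a variable-length list of triangle indices,
// so unlike the other structures it cannot be read with one fread().
struct MS3DMesh
{
    Byte  flags;           // editor state; ignored in these tutorials
    char  name[32];
    Word  numTriangles;
    Word* triangleIndices; // allocated once numTriangles is known
    char  materialIndex;   // -1 means no material
};
```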

Let’s Get Classy!

Now it is time to start creating the class that we will use for any models that we load. I want you to try and think about what we will be doing with our model: loading and printing the contents to a console. We also need variables to actually store the information for later usage. So, with that in mind, we can figure out what functions and variables we will need. This is what your class will probably look like at this point.
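One possible shape for the class is sketched below. The members are made public here purely to keep the sketch short (real code might keep them private), the stand-in structs are cut down so the block compiles on its own, and clearBuffers is left as a stub in this sketch.

```cpp
#include <cstdint>

typedef std::uint16_t Word;

// Cut-down stand-ins for the structures defined earlier,
// so this sketch compiles on its own.
struct MS3DVertex   { };
struct MS3DTriangle { };
struct MS3DMesh     { Word* triangleIndices = nullptr; };

class MS3DModel
{
public:
    MS3DModel() = default;
    ~MS3DModel() { clearBuffers(); }

    int  load(const char* filename); // 0 on success, error code otherwise
    void printContents() const;      // dump what we loaded to the console
    void clearBuffers() { /* stub in this sketch: frees all arrays */ }

    // Counts read from the file, and the arrays they describe.
    Word          numVertices  = 0;
    Word          numTriangles = 0;
    Word          numMeshes    = 0;
    MS3DVertex*   vertices     = nullptr;
    MS3DTriangle* triangles    = nullptr;
    MS3DMesh*     meshes       = nullptr;
};
```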

If you thought ahead and knew that we would need to delete everything that we have loaded, that’s great. Good job on thinking ahead and planning for some memory management, in fact you deserve a badge for your thoughtfulness!

Now you are going to have to create a new file that will be used for all of the functions in our class. We have to make sure to include the files that we need for this code.

Pretty simple stuff, so I won't get too involved explaining it. Just remember that you have to check that there is actually data in the pointers. Also, since the mesh structure contains a pointer, we have to make sure that we free it before freeing the whole array.
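The cleanup might be sketched like this (using cut-down structs so the block stands alone). The key detail from the paragraph above is the inner loop: each mesh owns its own index array, which must be freed before the array of meshes itself.

```cpp
#include <cstdint>

typedef std::uint16_t Word;

// Cut-down stand-ins so this sketch compiles on its own.
struct MS3DVertex   { };
struct MS3DTriangle { };
struct MS3DMesh
{
    Word  numTriangles    = 0;
    Word* triangleIndices = nullptr;
};

struct MS3DModel
{
    Word          numVertices  = 0;
    Word          numTriangles = 0;
    Word          numMeshes    = 0;
    MS3DVertex*   vertices     = nullptr;
    MS3DTriangle* triangles    = nullptr;
    MS3DMesh*     meshes       = nullptr;

    void clearBuffers()
    {
        delete[] vertices;   vertices  = nullptr;
        delete[] triangles;  triangles = nullptr;
        if (meshes)
        {
            // Each mesh owns its own index array; free those first.
            for (Word i = 0; i < numMeshes; ++i)
                delete[] meshes[i].triangleIndices;
            delete[] meshes;
            meshes = nullptr;
        }
        numVertices = numTriangles = numMeshes = 0;
    }
};
```

Nulling every pointer after freeing it means calling clearBuffers twice is harmless.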

Now let’s write some code to load a model. The first step to loading (almost) any file is to check that the header matches the file that we are loading.

After doing this we can start loading all of the real information in the file. You can use the list at the top as a reference of the order that everything needs to be loaded. All of the members of each structure are also organized in this way.

You will notice that there are a bunch of else statements. These just set the return variable to an error code if our function failed at some point. For example, if the file could not be found, the return value will be one. If the return value is not zero, we call clearBuffers to make sure everything is cleaned up. Also, before the end of the first if statement, make sure that you put "fclose(f)" there. The reason we put it there is that if the file failed to open or could not be found, there would be nothing to close.
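The error-handling pattern described above might be sketched like this, shown here for just the header step to keep it short. The return codes (0 success, 1 file not found, 2 bad header) follow the convention described in the paragraph; note that fclose sits inside the first if block, so it only runs when the file actually opened.

```cpp
#include <cstdio>
#include <cstring>

// MS3D files are packed with 1-byte alignment.
#pragma pack(push, 1)
struct MS3DHeader
{
    char id[10]; // should contain "MS3D000000"
    int  version;
};
#pragma pack(pop)

// Returns 0 on success, 1 if the file could not be opened,
// 2 if the header is not a valid MS3D version 4 header.
int loadHeader(const char* filename, MS3DHeader& out)
{
    int result = 0;
    FILE* f = std::fopen(filename, "rb");
    if (f)
    {
        if (std::fread(&out, sizeof(MS3DHeader), 1, f) != 1 ||
            std::strncmp(out.id, "MS3D000000", 10) != 0 ||
            out.version != 4)
        {
            result = 2; // wrong or truncated header
        }
        // Only close the file if it was actually opened.
        std::fclose(f);
    }
    else
    {
        result = 1; // file could not be found or opened
    }
    return result;
}
```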

Time for the first test: printing the file's contents

All we are going to do is print out important data like the positions of vertices and which triangles use them. We will also print the names of all the meshes in our model and which triangles they use.

If you loaded everything correctly, your output should look something like the following. If you want to make sure you loaded the model correctly, you can download the sample file at the top of the post.