~ Technical Artist in development

For the small game project in the second year of the Technical Artist program at BTH, I tried working with Bullet Physics to implement simple collision tests and gravity in our game world. Each platform has its own rigid body that collides with the player's rigid body, from which we can grab a transform matrix every frame to use as the world matrix for the player object.
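The core of that per-frame sync is turning the rigid body's position and orientation quaternion into a 4x4 world matrix. Here is a minimal Python sketch of that conversion; the function name and row-major layout are my own, and the actual game does this through Bullet's C++ API:

```python
import math

def rigid_body_world_matrix(position, quaternion):
    """Build a row-major 4x4 world matrix from a rigid body's position
    and orientation quaternion, the same data a Bullet transform carries.
    Hypothetical helper, not the project's actual code."""
    px, py, pz = position
    qx, qy, qz, qw = quaternion
    # Standard quaternion-to-rotation-matrix expansion
    xx, yy, zz = qx * qx, qy * qy, qz * qz
    xy, xz, yz = qx * qy, qx * qz, qy * qz
    wx, wy, wz = qw * qx, qw * qy, qw * qz
    return [
        [1 - 2 * (yy + zz), 2 * (xy - wz),     2 * (xz + wy),     px],
        [2 * (xy + wz),     1 - 2 * (xx + zz), 2 * (yz - wx),     py],
        [2 * (xz - wy),     2 * (yz + wx),     1 - 2 * (xx + yy), pz],
        [0.0,               0.0,               0.0,               1.0],
    ]
```

With the identity quaternion this is just a translation matrix, which is exactly the case where my bug showed up: the matrix moved the render mesh while the rigid body itself stayed put.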

Even though I had to add some force to the player movement, it will later help us with knockback from enemy units. My biggest mistake in the beginning was not updating the player rigid body's position as well, so it stayed on the center platform and kept colliding, and the player object never fell down when walking over the edges. Many thanks to my group members and Henrik Vik for helping me out with linking the libraries, it can be a real hassle sometimes 🙂

For this assignment, we were instructed to create a keyframe transfer script in Python between two joint hierarchies with different orientations. While gathering the keys for the two skeletons, there is some challenging math behind transferring the rotation values, as we step out of the SOURCE joint’s coordinate system into world space in order to step into the TARGET joint’s coordinate system. To make the transfer easier and more manageable, I created a user interface in PySide with two lists for managing each hierarchy. The user can reorganize the joints by moving them up or down in the list, delete them, or reset the TARGET skeleton back to its original pose.

It is important to note that not all of the components share the same space, which would otherwise result in wrong calculations. In hierarchical character animation, it’s common to use a “parent-child” relationship to replicate the behavior of a moving body in the real world. On top of the “parent-child” relationship, each joint has its own local transform that is relative to its parent.

“A” represents a “to-parent” matrix. To take the last joint in the hierarchy into the world space coordinate system, for example, the global transform matrix must be built by multiplying together A0 * A1 * A2 * A3 from the right.

It’s therefore required to move between the different coordinate systems to make all transformations in the skeleton relative to the world. A joint can be transferred into its parent’s coordinate system by using a “to-parent” matrix, and further on into the world. To accomplish this transfer between the different coordinate systems, I wrote a recursive function that gathers all the transformations from the current node up through its parents until it reaches the root node. From there, the global transformation can be calculated.
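The recursive walk up the hierarchy can be sketched in plain Python like this; the `local_of` and `parent_of` mappings are hypothetical stand-ins for the Maya scene graph, not the actual script:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def global_transform(joint, local_of, parent_of):
    """Recursively combine "to-parent" matrices from the current joint up
    through its parents to the root, giving the joint's world transform.
    `local_of` maps joint -> local 4x4 matrix, `parent_of` maps
    joint -> parent name (None at the root); both are assumptions here."""
    parent = parent_of[joint]
    if parent is None:
        return local_of[joint]
    # World transform of the parent first, this joint's local on the right.
    return mat_mul(global_transform(parent, local_of, parent_of),
                   local_of[joint])
```

With translation-only locals you can see the offsets accumulate down the chain, which is a quick sanity check that the multiplication order matches the A0 * A1 * A2 * A3 convention above.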

To isolate the keyframe rotation, we also apply the inverse bind pose, which in this case represents the rotation values at the very first frame of the SOURCE joint’s animation (resembling a T-pose). A change of basis takes place, and when this process is over, we can extract the Euler rotations from the final transformation matrix and key them on the corresponding TARGET joint at the same time as the SOURCE joint’s keyframe.
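For pure rotations the change of basis boils down to a few matrix multiplications, and the inverse of a rotation matrix is just its transpose. Here is a rough sketch with 3x3 rotation matrices and my own function names, not the exact script (the multiplication order depends on the chosen convention):

```python
def mat3_mul(a, b):
    """Multiply two 3x3 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(m):
    """For a pure rotation matrix, the transpose is the inverse."""
    return [list(row) for row in zip(*m)]

def retarget_rotation(source_world, source_inv_bind, target_parent_world):
    """Isolate the SOURCE keyframe rotation with the inverse bind pose,
    then change basis into the TARGET parent's coordinate system.
    A sketch of the idea, not the actual keyframe transfer code."""
    isolated = mat3_mul(source_world, source_inv_bind)  # strip the bind pose
    return mat3_mul(transpose(target_parent_world), isolated)
```

If the TARGET parent happens to share the SOURCE joint's world orientation, the result collapses to the identity, which matches the intuition that nothing needs to be re-rotated in that case.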

I’m glad that I had already implemented skeletal animation for DirectX as preparation for this assignment, since it let me further practice my understanding of the relationship between different coordinate systems. Still, I think of animation as one of the more challenging tasks in creative mediums, both technically and aesthetically. I’m not a math genius, but I’m not incapable of taking on bigger tasks as I continue to grow. I’ve gained so much valuable knowledge over the last few weeks, so there is no stopping now. I’m just going to keep moving forward, despite the mistakes I make on a daily basis. Until next time, have a good one 🙂

Progress on my skeletal animation implementation in DirectX11: I’m finally starting to get somewhere. Sorry for the clipping in some places; I’m using motion capture data with HumanIK in Maya and it doesn’t totally sync up with my model. I’m currently working on the delta timing and interpolation between keyframes. Not quite there yet, but it’s not breaking on me anymore. In the near future I will write a long post on my webpage about my first skeletal animation system, discussing the whole process, what I got wrong and what I did well.

My next task is to replace the section where I read keyframe data so that it accesses the animation curves in the FBX format instead of creating keyframes from the FBX root node. I still find it gruesomely hard, so it will probably take me a couple more weeks to finish this system. I can only thank the older Technical Artist students, my project group and my teachers for guiding me in the right direction and continuing to have patience with me. Until next time, have a good one 🙂

After a long week of struggling, asking a lot of stupid questions and nearly giving up at times, I finally managed to set up a basic skeleton hierarchy and animate it with all vertex weights intact in my FBX parser. The next goal is to support more advanced skeleton hierarchies and hopefully be able to import our own rigged 3D characters.

Until then, I’ll keep experimenting and reading through all the material I can find on the subject. This is just the beginning; I’m going to continue improving this technique and learn as much as possible in the few months we have left of the project.

The goals for the coming update are to load more than one mesh, and also combined meshes, each with its own distinct UV map and textures. It’s going to be a bit challenging, I think, but I’m having a lot of fun right now, so I’ll approach all my problems with a positive mindset.

Phong shading is coming along nicely as well. I’m currently in the middle of writing a recursive depth-first search algorithm for loading the bone hierarchy of my character, which I’m writing a report about; it will be available on my webpage in the coming months. Until next time, happy holidays!

It seemed that if you had enough reference pictures with matching color tones, you could create amazing skin textures with just a little editing here and there. I couldn’t believe it was that easy, so I looked into it myself and picked out an image of my favorite actress, Olivia Wilde, to use as a reference picture for the projection.

ZAppLink lets users capture a still image of the mesh from any angle in ZBrush and transfer it to Photoshop, where the actual editing takes place. There you can use the silhouette of the mesh, fit your reference picture to it, and then send it back to ZBrush. Once you return to ZBrush, it projects the new layer onto your mesh from the angle you selected in the beginning and creates a texture based on the UV coordinates.

A couple of minutes later, I ended up with this as my first result. I’ve only applied a standard Blinn with the texture on it, nothing else. It was quite remarkable, so I carried on and did this multiple times to try various settings. Since I used the base head mesh from the Samus model, I had to change the shapes of the face a bit, but surprisingly not that much.

Play a little with the scene lighting and voilà, there you have it. Not that this should be an excuse not to practice drawing your own textures from scratch, but it could hopefully be quite useful in the future. I believe Valve did something similar when creating Half-Life 2 to give the characters in the game a lot of detail. Anyway, I’m just thrilled by this and I thought it would be fun to share. Until next time, happy holidays!

Recently we’ve been working hard with DirectX11, learning how to create our own small rendering engines from scratch. This week I tried making a small FBX importer, which is far from complete, but this little test gave me a bit of inspiration for all the challenges to come.

The next task on the list is to apply textures and sample them for final rendering. The camera features simple walk and strafe movement, as well as mouse control for looking around the scene. I decided it would be fun to document my progress in DirectX and share it 🙂

A showreel of my most recent animation project, in which I was assigned to create four different animations for my orc character. All animations were later exported to Unreal Engine 4 and connected to an animation blueprint. The blueprint is then used in a character controller for moving around the scene. I still consider the in-game jumping animation the hardest part of it all: the anticipation, and the trade-off with player control. I just have to keep trying, and one day I will get it right.

I plugged my orc character into MotionBuilder and used a reference skeleton from Brekel Pro Body, a program used to create motion capture animations. It was really fun to goof around in front of the camera, but not so fun to clean up the animation data afterwards in the Graph Editor back in Maya. However, it was worth it, and now I have somewhat decent animations.

I’m fairly satisfied with the final result, but I’m always looking to improve and I still have a long way to go. I don’t even know if I want to become an animator; it doesn’t feel like my kind of thing yet. Never say never, I guess.

I decided to make this post to demonstrate my work process when rigging a character, using a hybrid pipeline with techniques I’ve learnt from a lot of different people. This is just my approach to the entire thing, but if there is anything you find valuable here, feel free to try it out. My general approach is to finish the entire rig and all its features before I even attempt to start weight painting the mesh. I’ll follow up this post with another rigging session, hopefully next week if I have the time.

1. Placing joints, naming convention and creating control objects

Without any joints, it’s hard to even start doing anything, so that’s where I begin. I create separate hierarchies for each major body part before connecting them together. One thing I started doing recently is to throw everything I need into the main controller and rig inside it: it’s easy to forget a main controller for moving the character with the rig attached, and it can be troublesome to add one later.

I use a simple naming convention, with a capital letter at the beginning to identify the side of the character and underscores to separate terms such as “middle”, “lower” and so on.
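As a sketch of the idea, a name under this convention could be composed like this (the exact terms and the “JNT” suffix are my own assumptions, not the rig’s actual suffixes):

```python
def joint_name(side, *terms, suffix="JNT"):
    """Compose a joint name following the convention above: a capital
    side letter, underscore-separated terms, and a type suffix.
    The "JNT" default suffix is an assumption for illustration."""
    return "_".join([side[0].upper(), *terms, suffix])
```

So `joint_name("left", "lower", "arm")` yields a name like `L_lower_arm_JNT`, and the same helper can stamp out control names by swapping the suffix.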

2. Joint Orientation (and why it’s a no man’s land)

Joint orientation can be really confusing, and I still have a hard time fully grasping the concept, but as long as you’re consistent with your joint behavior throughout the rig, you’re fine. The general idea is to have X as the primary axis, and then there are different debates about whether the Z-axis or the Y-axis should point forward. I decided to go against the stream and have the Y-axis point backwards, and so it is throughout the entire rig. The main thing is that X is my primary axis.

Essentially we want to avoid negative values where we wouldn’t expect them, but I try not to get stuck for too long here. The same rule applies for negative and positive values: as long as you’re consistent, there is nothing to worry about. It’s a no man’s land, as my graphics teachers used to say, due to the many widespread ideas about the ideal orientation. I guess it differs depending on what we’re trying to create, and I think I know what I’m doing. Or rather, I hope I know what I’m doing.

This is pretty much the general expression I’m met by when asking about this topic:

3. Arm FK rigging

Here I start by creating transform groups, gathering the joint positions from a parent constraint that is deleted immediately after it’s used to place the group. Then I grab the corresponding control object and the joint and apply an orient constraint. This process is repeated all the way down to the wrist.

4. Leg IK rigging

This step can be a little tricky, but I’ll try to be as clear as possible. Generally it’s not that hard; it’s just a matter of structure and hierarchy.

For each leg, I’m using a total of three IK handles. The first and major one, stretching from the thigh to the ankle, uses a rotate plane solver. Then I connect one IK handle between the ankle and the “ballfoot”, and another from the “ballfoot” to the end of the foot. These two are single-chain solvers.

The two transform groups inside the main foot control take care of each separate part of the foot, the upper area and the lower area. By grouping the rotate plane solver IK together with the single-chain IK from ankle to “ballfoot”, we get a good foot roll without having to use a reverse foot control, which has its own set of benefits. The single-chain IK from “ballfoot” to foot end is later placed in its own group, ensuring full control of the toes’ rotation.

I also put in a pole vector for each leg, so I can twist and change the direction of the knee.

5. Back IK spline rigging

For the back I’m using a spline IK handle with clusters controlled by their own control objects. In the advanced twist controls, we see one of the first instances where the names of our controllers truly matter, since we need to specify a start and an end for the twist. These settings vary between different joint orientations and can easily break if you’re not careful.

I finish this process by creating another control object that includes both my start/end controllers, so I can take full control of the upper body and keep the IK controls for the legs a bit more separate. I still consider this part the hardest, but it’s worth the effort to get the spline IK working properly.

If you were ever doubtful of my decision to have the Y-axis point backwards, I can assure you that the bend from the clusters is working.

6. Arm IK rigging with switch

The main problem with the switch, since it’s only the second time I’ve created such a system, is that we suddenly have three joint chains with separate control setups to connect together. While their individual setups aren’t that complicated, getting them to work together is a whole other thing.

The very first thing I do is duplicate the arm joint hierarchy and organize the chains in the outliner, naming them by their system:

R_FK_Arm_Group

R_IK_Arm_Group

R_Arm_Group <—– (Export skeleton)

After setting up both systems, I create a locator, which I place at the same position as the shoulder joint using a transform group. We have to point constrain both the FK and IK joint chains to this locator, but also parent constrain the FK system to it; otherwise, none of the joints would follow properly. The locator is simply used to keep everything in the right place.

Both systems go into separate layers, where I can give them a color, making it easier to identify which one I’m currently working with. To switch between the systems, I create a driven key on a control object shaped like the letter “S”. On this control object, I add a new attribute that I call “SwitchBetweenIkFk”. Next, we have to move into the outliner, since the order in which we make the following connections matters.

Ctrl-select in the following order…

Select the R_FK_Arm joint

Select the R_IK_Arm joint

Select the R_Arm joint (Export skeleton)

Go ahead and create an orient constraint with the “Maintain offset” box unchecked. Repeat the same process for all joints in the different hierarchies.

When all the constraints are in place, we start to set the driven key:

The switch controller is loaded as the driver, while the previously created constraints on the export skeleton are driven. The FK system is active at a value of 0, while IK is activated by setting the switch value to 1.

Note that I’m saving the hand group and its controls for later, since you want to work your way down the arm when rigging, not the opposite. We invert this approach when weight painting, where we want to make sure that any removed weights are pushed toward the center of the character.

I also set it up so that the inactive control is hidden by toggling the visibility attribute. Still, there is a slight problem with the transition between them. We need to make it smoother, so we jump into the Graph Editor and fix that right away:

X Wrong transition

✓ Correct transition
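The behavior of the driven keys can be summarized in a few lines of Python: the linear case corresponds to the raw keys, and a smoothstep easing stands in for the smoother curve set in the Graph Editor (this is a sketch of the weighting, not Maya code):

```python
def switch_weights(s, smooth=True):
    """Constraint weights driven by the "SwitchBetweenIkFk" attribute:
    s = 0 gives full FK, s = 1 gives full IK. The smoothstep easing is a
    stand-in for the smoother Graph Editor curve; the linear case matches
    the raw driven keys."""
    s = min(max(s, 0.0), 1.0)
    if smooth:
        s = s * s * (3.0 - 2.0 * s)  # ease in/out between the two systems
    return {"fk": 1.0 - s, "ik": s}
```

The two weights always sum to 1, so the export skeleton never ends up fighting over two fully weighted constraints mid-transition.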

And here is the final result of the FK/IK switch:

7. Hand controls using driven keys

The same thinking applies to the fingers: I use the Z-axis rotation on the control object and rotate the joints, then set keyframes at different intervals.

Especially with this control, I want to make sure there is a limit, so we can’t break the fingers in the opposite direction.
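The limit itself is just a clamp on the driven value. A sketch with illustrative limit values (not the actual rig’s numbers):

```python
def finger_rotation(control_z, straight=0.0, curled=-90.0):
    """Map the control object's Z rotation onto a finger joint, clamped
    so the finger can't bend past straight in the opposite direction.
    The straight/curled limits here are made-up illustration values."""
    lo, hi = sorted((straight, curled))
    return min(max(control_z, lo), hi)
```

Anything past the fully curled pose sticks at the limit, and rotating the control the wrong way simply holds the finger straight instead of snapping it backwards.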

The transform group for the hand now also follows the different systems, since I left an extra joint in the hand to act as the parent for the group. And there we go, that settles everything with the RIGHT arm. The same procedure remains for the left arm, which will be the entertainment for the rest of my evening.

In the next session, I’ll finish up the control rig and start weight painting. I’m already looking forward to start animating 🙂