Just a little tutorial about how to start with a MakeHuman figure, use it as a silhouette model in Blender, model your own mesh on top of it, and finally export an IQM character model. The model used in this tutorial will be released soon as Creative Commons ShareAlike licensed material, but NOT here. If you want to use the MakeHuman model directly as an IQM, you can follow this guide that explains how to use MakeHuman, BVH animation files and Blender, excluding the last part about exporting to UE4. This tutorial uses the MakeHuman model only as a blueprint to model your own mesh, and to create your own skeleton and animations.

Once you're satisfied with the gender/proportions/tweaking settings, go to Pose/Animate and, in the right pane under Rig Preset, choose game.json. This rig is perfect for our purposes because it doesn't have too many bones and uses just 2 bones for the spine. In this tutorial, however, we'll use a custom skeleton.

Optional step, only needed if you want to use the MakeHuman model directly in game. I usually skip this step because I use the MakeHuman model only as a modeling guideline. In the header menu press Geometries -> Topologies and choose male1591 in the right pane. The model will get a low poly topology.

Export as mhx; the default settings are fine. Note: you could even check "Export for Rigify" and MakeHuman would export an entire Rigify skeleton ready to animate (I hate skinning Rigify skeletons, though)!

Import MakeHuman model in Blender and project setup

Open Blender and set up the grid settings (press N in the 3D view to show the properties panel) like this. Press Ctrl + U to save the default start file. Now, to get a cube of 128x128x128 Quake units, we'll make a 12.8x12.8x12.8 cube in Blender and then scale it 10 times bigger in the IQM exporter, right? I find this approach very easy because you can model at a really comfortable size, so we'll use it!
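The unit bookkeeping above is simple but easy to get backwards, so here's a tiny sketch of the conversion (the factor of 10 is the exporter scale used throughout this tutorial; the function names are mine):

```python
EXPORT_SCALE = 10  # scale factor applied by the IQM exporter on export

def blender_to_quake(units):
    """Blender units -> Quake units after export."""
    return units * EXPORT_SCALE

def quake_to_blender(units):
    """Quake units -> the size to model at in Blender."""
    return units / EXPORT_SCALE

# A 128-unit Quake cube is modeled as 12.8 Blender units per side
assert quake_to_blender(128) == 12.8
assert blender_to_quake(12.8) == 128.0
```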

Go into the downloaded MakeHuman folder, and then into the blendertools subfolder.

Copy all 3 folders (makeclothes, maketarget and makewalk) plus mhx_importer/import_scene_mhx.py (without the mhx_importer folder itself) into C:\Users\youruser\AppData\Roaming\Blender Foundation\Blender\2.XX\scripts\addons NOTE: if you press N in the 3D view, you should see all the MakeHuman tools, divided into categories (Make Target, Make Walk, etc.). However, on my Blender 2.73a the MakeWalk tab IS NOT PRESENT! So I cannot help you with creating animations with the MakeHuman skeleton. In this mini-guide we'll use a simple skeleton created from scratch.

Press "N" in the 3D view to display the view properties and, with the MakeHuman mesh selected, deselect the lock icons on position/rotation/scale. Do the same for the MakeHuman skeleton. Now select both and rotate them 90 degrees on the Z axis. Why? Because Quake uses a different coordinate system, and rotating the model AFTER it is animated can be tricky if you accidentally add keys to the entire skeleton/mesh in Object Mode (a bad, bad approach). Anyway, once you learn this coordinate system, you won't need to rotate model+skeleton before exporting. So, X is Y and Y is X: Numpad 3 is the front view and Numpad 1 is the side view.
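That 90-degree Z rotation is just the two horizontal axes trading places (with a sign flip). A quick sketch of what happens to a point, assuming the standard counter-clockwise convention (whether your setup needs +90 or -90 depends on which way your model faces):

```python
def rotate_z_90(x, y, z):
    """Rotate a point 90 degrees counter-clockwise around the Z axis:
    (x, y) -> (-y, x). X and Y swap roles, which is why after the
    rotation Numpad 3 shows the front and Numpad 1 the side."""
    return (-y, x, z)

# A point along +Y ends up along -X; a point along +X ends up along +Y
assert rotate_z_90(0, 1, 0) == (-1, 0, 0)
assert rotate_z_90(1, 0, 0) == (0, 1, 0)
```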

Just an introduction to character modeling in Blender. This is a little overview, NOT A TUTORIAL, about creating character models in Blender. There are tons of videos on YouTube showing the principles of modeling in Blender.

Using the MakeHuman model imported earlier, start with a plane, add a Shrinkwrap modifier and enable face snapping. You can now start using the MakeHuman model as a shape contour.

Following MakeHuman's silhouette you'll be able to define a perfect low poly figure. Don't add too many details here; just follow the contours and you'll get a nice model.

When the complete figure is shaped up, you can delete the Mirror modifier (you'll use it again later) and apply the Shrinkwrap one. You can turn off face snapping too.

The next step is the enjoyable and creative one: your model stops being a "MakeHuman silhouette clone" and starts being YOUR model. Add all the details a low poly model needs and stay under 5000 tris for a Quake based engine. For Unity and UE4 you can reach 10000 tris for LOD0 with no problems, but in a Quake engine I wouldn't risk flooding the rendering process too much.

High poly model creation: in Blender this is a tough, tough process, because Blender is terribly slow when it has to manage more than 500,000 tris, which is a problem for those who want to add micro-details to the model. If you can use ZBrush, this could be the right time to use it! I've always tried to learn it but the UI really annoys me. Anyway, during this process, add as many details as you can; the more the better. For mechanical/tech models, I suggest adding bolts, panel divisions, gears and so on. For humanoid/organic models, study the subject a lot and try to reproduce its shapes and details.

Normal map: you need the half (unmirrored) high poly model and the half low poly model, UV-mapped. Make sure you unwrap the low poly model well and that no 3D faces have overlapping UVs. Note: the current "literature" says you should not overlap UV islands, but I found a tricky method very useful: imagine you have a group of quads that repeats over and over on your model (in mine it could be the domes on front and back). You can resize the other copies to a tiny micro UV size in the UV editor and leave just one mesh instance at normal resolution. Then, after the normal baking process, you can resize the other copies back and match their UV coords exactly to the normal-sized one. This saves UV space and makes the texturing process smoother and quicker.

Create a new texture in the image editor with the low poly model selected. I suggest a resolution of 2048x2048 or 4096x4096. Blender's Bake panel is in the Render tab (photo camera icon). Select first the high poly, then the low poly, and check "Selected to Active". Adjust the margin according to how much space there is between one UV island and the next; I set 2px. If you leave the bias at 0, your map will probably be quite good but bumped elements will be washed out. On the other hand, increasing the bias too much can cause the normal map rays to intercept geometry that is "in the way", like the torso and arms. At bias 2.0 my normal map is quite good. So it can be useful to save both normal maps and patch weird results in one with the other. Export it as textures/characters/player_norm.tga

There's another approach: usually, if you're not a normal map master (I'm not) and your low poly model doesn't perfectly surround the high poly one, normal maps aren't as good as they should be. So you can try importing the UV maps into your 2D vector software (Illustrator, Inkscape or whatever) and creating from scratch a 2D greyscale image where black is the deepest point and white is the highest point. Now, with a filter for Photoshop or GIMP like the xNormal one for Photoshop, convert the greyscale texture to a normal map. Results can be surprising.
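What those greyscale-to-normal filters do is essentially a finite-difference pass over the height field. A minimal sketch in plain Python (no image libraries; `height` is a 2D list of 0.0 to 1.0 values, and `strength` is a hypothetical bump-intensity knob, not a parameter from any real filter):

```python
import math

def height_to_normal(height, strength=1.0):
    """Convert a 2D height field (values 0..1) to per-pixel normals.

    Uses central differences, clamped at the edges. Returns (x, y, z)
    unit vectors, the same data a normal map encodes as RGB.
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope along x and y via central differences
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

# A flat height field yields straight-up normals, i.e. the uniform
# light-blue of a blank normal map.
flat = [[0.5] * 4 for _ in range(4)]
assert height_to_normal(flat)[0][0] == (0.0, 0.0, 1.0)
```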

Diffuse map. My suggestion is to use the 4096x4096 normal map as a base, export the low poly model's UVs as a transparent PNG and start texturing with these 2 images as the base. For a metal surface, start with a mid-gray background color and, with multiple brushes, add a pattern of dark and bright strokes. Now, with the normal map as a blueprint, add details with black strokes. Add color to differentiate parts and try to add dark lines to separate the metal parts. Export it as textures/characters/player.tga

Specular map. Create a blank layer in your 2D app and start adding very subtle white strokes where the light should be more intense. When you've finished the first pass, add white details to obtain a stronger effect on some parts. Export it as textures/characters/player_spec.tga

Luma map. For the luma texture, start with a black layer and color only the parts which should be visible in the dark too. Export it as textures/characters/player_luma.tga NOTE: I've seen that luma textures don't work in FTEQW. Spike, can you enlighten me about this?

Model with textures

Adding a skeleton and animations. Time to create a simple skeleton to use as an armature for our model. Note: I noticed, after finishing my skeleton, that turning the legs in place was impossible without a Hips bone, so I created one. In the next screenshots you'll see skeleton poses WITHOUT the hips bone, because the poses were created before the hips bone was added and I didn't want to retake all the screenshots.

Before starting to add animations to the model, we need to understand how the skeleton will react in CSQC. I created a simple skeleton where root is the main hub; the upper spine [spine2 bone] (and all its connected bones for the upper part) and hips (and all its connected bones for the lower part) are connected to the main spine bone [spine bone]. The hips bone controls rotation on the Z axis for turning the legs and on the X axis for rolling. The weapon bone's diagonal position was trial and error; thanks to OneManClan and Spike I finally found a solution. Spine is connected to root, and only root translates the entire skeleton; the other bones use ONLY rotation keys. I use only Forward Kinematics for character movements, not Inverse Kinematics, in my animations. I know, it sucks, but I was never capable of producing decent animations with IK handlers, and FK never lied.

For the animation part, create a new animation by selecting the skeleton and pressing the + button in Blender's Action Editor on the right (you did set up the windows as I said earlier, right?). Name the animation walk and press the F button. If you miss this step, once you have more than one animation, any animation that is not currently assigned to a skeleton will be deleted the next time the Blender file is opened, no matter whether you save the file or not (so many good animations lost in the past because of this!). Resize your timeline (in the bottom pane) from 0 to 45.

I suggest starting the frame 0 pose with one leg (left or right, it's the same) already forward and up, then adding rotation keyframes for the thigh and leg at frames 26 and 45 for the left and right foot respectively. Keyframe just the foot-down poses for now. Then keyframe the foot-up poses at frames 14 and 38 for the left and right foot respectively (or the reverse, if you inverted the order in the foot-down sequence).

Now add a location key on the Z axis for root (remember? I gave the root bone location keyframes; the other bones have only rotation keys) at frames 16, 26 and 37, respectively up, down, up, and at frames 0 and 45 both down. This adds a realistic uneven step. Don't exaggerate the root movement on the Z axis.

Start at frame 0 with spine2 (the upper spine bone) already rotated toward the opposite side of the forward leg (if the left leg is forward, rotate the spine to the right, and vice versa). Place a rotation keyframe for spine2 at frame 21 (rotating the opposite way) and at frame 45 (rotating back almost to frame 0's rotation).

Now the arms. At frame 0 rotate the left arm opposite to the right leg, and the right arm the same way but in the opposite direction, of course. Add a "nodding" head effect if you want but, for a male character, I wouldn't exaggerate it too much.
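The walk-cycle timetable above can be written down as a checklist before you start keyframing (this is just data, not Blender code; the bone names follow this tutorial's custom skeleton):

```python
# Keyframe timetable for the 45-frame walk cycle described above.
WALK_KEYS = {
    "thigh_l": {"rotation": [0, 14, 26]},            # forward at 0, up at 14, down at 26
    "thigh_r": {"rotation": [0, 38, 45]},            # mirrored: up at 38, down at 45
    "root":    {"location_z": [0, 16, 26, 37, 45]},  # down, up, down, up, down
    "spine2":  {"rotation": [0, 21, 45]},            # counter-rotates against the legs
}

def check_range(keys, last_frame=45):
    """Verify every keyframe falls inside the clip before exporting."""
    return all(0 <= f <= last_frame
               for channels in keys.values()
               for frames in channels.values()
               for f in frames)

assert check_range(WALK_KEYS)
```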

The run animation is trickier, because you should complete the animation in far fewer frames to make the movement quicker and fluid, but you should also check the feet positions carefully to avoid cartoony effects. Create a new animation, 22 frames long, named run and press the F button. Start at frame 0 with both legs up, one more lifted than the other. At frame 4 spread both legs in the maximum jump pose. At frame 9 one foot should land. At frame 15 the legs should be spread again, and at frame 20 land again. Add root Z-axis position keys at frames 0, 4, 9, 15 and 20, respectively (down, up, down, up, down). The arms work the same way as in the walk animation: start with the opposite rotation of the respective leg and rotate at mid-animation.

The idle animation seems easy, but it can be annoying to see a model that just repeats an action over and over. The secret of a good idle animation is duration: the longer it lasts, the more realistic it is. Create a new animation, 100 frames long, named idle and press the F button. Add, at frame 50, subtle rotation keyframes to the upper arms, lower arms, spine2, neck and head. Then scatter the keyframes so that not every bone peaks exactly at frame 50 (for example head at frame 58, upper arm at frame 45 and so on).

Fire animation: in this tutorial there is only one weapon, a pistol/rifle (huh?), so our fire animation is just 8 frames long. At frame 0 rotate spine2 and the upper arms to hold the pistol in the right hand; the left hand should help keep the weapon steady. At frame 5 rotate spine2 a little on the Z axis and a little on the Y axis to emphasize the shot, and maybe the head (just a little) to show the recoil. At frame 8 restore the normal aiming position created at frame 0.

Fire idle animation: just copy the entire fire animation (select the fire animation, press the + button and rename the copy, remembering to press the F button), delete all keyframes except frame 0 and copy those keyframes to frame 100. At frame 50 add a keyframe for the arms and fingers and make subtle rotations. Scatter the keyframes as you did for the idle animation.

Holster down animation. This is a very short animation that blends the movements of lowering the weapon and placing it in the holster. Create a new animation, 12 frames long, named holster_down and press the F button. Start with the right hand in a position similar to the fire animation and rotate spine2 a little. At frame 8 rotate the spine almost back to normal and rotate the right hand toward the holster, but not too close. At frame 12 rotate the right hand to reach the holster.

Holster up animation. Just copy holster_down and name it holster_up. Swap all the keyframes at frame 0 with those at frame 12 and vice versa, but make it last less, scaling it to 10 frames instead of 12, because a character drawing his weapon is in a hurry!

Skinning: now that your animations are ready, put the skeleton in Pose Mode and press the Rest Position button in the armature properties (skeleton figure icon). Add an Armature modifier to the low poly object, select the armature, and go into Edit Mode to select the faces to weight. I suggest enabling X-ray and forcing wireframe in the skeleton's display properties (cube icon). Press Ctrl + Tab to switch to Weight Paint mode and enable the face selection masking icon (the cube with the red/white checker) to display only the selected faces (this speeds up the workflow a lot). Just pay attention to tricky parts like the junctions between thighs and torso, shoulders and arms, and the fingers.

Exporting to IQM. The Inter-Quake Model exporter exports meshes, skeletons, materials, animations and bone order. You now have to write down, in the text area at the top of the export dialog, all the animations created earlier in this format (taken from the IQM GitHub home page): "name:X:Y:Z:L", where X is the start frame number, Y is the end frame number, Z is the frames per second (floating-point), and L is 0 or 1 to indicate looping.

The cool thing about this method is that you can change the animation speed directly in Blender without having to retouch frames in the Action Editor! Let's say, for example, that you create an animation 60 frames long but, once displayed in game, it's too long and boring. You'd normally be forced to scale the frames down, which would be a mess. Instead, replace myanimation:0:50:10:0 in the export frames field with myanimation:0:50:25:0. Changing from 10 frames per second to 25 will speed the animation up! Cool, huh?

For bone order, unfortunately, the original IQM exporter doesn't work (it should open a user-created file with one bone per line, but in practice it does nothing), so I was forced to make my own modification to enable specifying the bone order directly inline. This patch works but (here's the proof that I'm not a programmer!) it makes it impossible to export more than 1 material, so if you have 2 or more materials per character, it won't work. (If you want to contribute and solve this issue, you're of course welcome!)

IMPORTANT!! This modification is VITAL to use advanced CSQC skeletal animation features, like splitting animations based on bone order (legs running while the torso fires, for example) or ragdoll, so you're forced to copy my modified exporter over yours (make a backup, just in case). Why? Because IQM, unlike DPM, exports the bone order scattered, so once exported to IQM you'd have a skeleton with a bone order like root, spine, head, leftfoot, rightshoulder and so on. So, when the code calls skel_build (we'll see it later), the engine cannot understand where to "split" the upper animations from the lower ones.
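The `name:X:Y:Z:L` entries are easy to validate before pasting them into the exporter. A small sketch (the field names are mine, taken from the format description above):

```python
from typing import NamedTuple

class AnimSpec(NamedTuple):
    name: str
    start: int   # X: start frame
    end: int     # Y: end frame
    fps: float   # Z: frames per second
    loop: bool   # L: 1 = looping, 0 = play once

def parse_anims(spec: str) -> list[AnimSpec]:
    """Parse a comma-separated list of name:X:Y:Z:L animation entries."""
    anims = []
    for entry in spec.split(","):
        name, start, end, fps, loop = entry.strip().split(":")
        anims.append(AnimSpec(name, int(start), int(end), float(fps), loop == "1"))
    return anims

def duration_seconds(anim: AnimSpec) -> float:
    """Playback length: frame count divided by playback rate."""
    return (anim.end - anim.start) / anim.fps

walk = parse_anims("walk:0:45:25:1")[0]
assert walk.loop and duration_seconds(walk) == 1.8
```

This also makes the speed trick above concrete: bumping `fps` from 10 to 25 shortens the playback time without touching a single keyframe.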

First of all, with the low poly model selected, create a new blank material (leave every setting at its default; no need to add textures to the material) and change its name to textures/characters/player. This will be our SHADER name path. IQM needs only this: the textures will be specified in the shader file.

Write in the top text area a list of all the animations (remember to paste into the exporter field everything except "ANIMS:"): ANIMS: idle:0:100:25:1,run:0:22:25:1,walk:0:45:25:1,fire:0:8:25:0,fire_idle:0:100:25:1,holster_down:0:12:25:0,holster_up:0:10:25:0 Be sure to check the last parameter (looping: 0 = no, 1 = yes) for every animation!

Write, below the anims, the bone order (again, when exporting, remove "BONES:" from the bone list) to match a linear skeleton hierarchy: BONES: root,spine,hips,thigh_r,leg_r,foot_r,thigh_l,leg_l,foot_l,spine2,neck,head,clavicle_r,armupper_r,armlower_r,hand_r,weapon_r,weapon_r.001,thumb1_r,thumb2_r,finger1_r,finger2_r,holster_r,clavicle_l,armupper_l,armlower_l,hand_l,weapon_l,weapon_l.001,thumb1_l,thumb2_l,finger1_l,finger2_l,holster_l

Update: I changed the bone order because I noticed that the left and right parts must each be a contiguous hierarchy of their own. For example, thigh right must be followed by all its descendants, then thigh left and all its descendants, then arm right and so on. This way, if you want to rotate with skel_mul_bones only arm_left and all its descendants, you can do it. With the earlier bone export list that would have been impossible, because rotating the right arm would also have rotated the left arm, since the left arm came immediately after the right arm in the list, understood?
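The rule that update describes, namely that each bone plus all its descendants must occupy one contiguous run in the exported list, can be checked mechanically. A sketch assuming a simple child-to-parent map (the hierarchy below is abridged from this tutorial's skeleton; the function is mine, not part of the exporter):

```python
def subtree_contiguous(order, parents, bone):
    """Check that `bone` and all its descendants form one contiguous
    run in `order`, so ranged ops like skel_mul_bones can target them."""
    def descendants(b):
        out = set()
        for child, parent in parents.items():
            if parent == b:
                out.add(child)
                out |= descendants(child)
        return out

    group = {bone} | descendants(bone)
    positions = sorted(order.index(b) for b in group)
    return positions == list(range(positions[0], positions[0] + len(group)))

# Abridged child -> parent map from the tutorial's skeleton
PARENTS = {
    "spine": "root", "hips": "spine",
    "thigh_r": "hips", "leg_r": "thigh_r", "foot_r": "leg_r",
    "thigh_l": "hips", "leg_l": "thigh_l", "foot_l": "leg_l",
}

good = ["root", "spine", "hips",
        "thigh_r", "leg_r", "foot_r",
        "thigh_l", "leg_l", "foot_l"]
bad = ["root", "spine", "hips",
       "thigh_r", "thigh_l", "leg_r", "foot_r", "leg_l", "foot_l"]

assert subtree_contiguous(good, PARENTS, "thigh_l")
assert not subtree_contiguous(bad, PARENTS, "thigh_r")  # legs interleaved
```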

Copy both the anims and the bones list into a separate temporary text file, because we'll use them again soon.

For the scale factor: the MakeHuman model is about 17.0 units tall (170 units in the Quake engine with a scale factor of 10), definitely too tall. So you can either A) export it with a scale factor of 6.5/7, or B) scale both skeleton and model in OBJECT mode to a height of about 11.0.
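The arithmetic behind the two options is the same product, attacked from different sides:

```python
def quake_height(blender_height, export_scale):
    """Final in-engine height: Blender units times the IQM export scale."""
    return blender_height * export_scale

# Default: a 17.0-unit MakeHuman model at export scale 10 is 170 Quake units
assert quake_height(17.0, 10) == 170.0

# Option A: keep the model at 17.0 and lower the export scale to 6.5
assert quake_height(17.0, 6.5) == 110.5

# Option B: scale the model down to ~11.0 and keep the export scale of 10
assert quake_height(11.0, 10) == 110.0
```

Both land near 110 Quake units, a reasonable humanoid player height.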

note that by specifying 'program' in there, you override most of the fixed-function logic of the shader in one fell swoop. This will enable hardware skinning where available, while the rest of the shader provides fallback logic for where GLSL is not available. If you wish to have the shader use a different set of images, change the $diffuse to something else and it'll swap the textures the defaultskin GLSL uses to match that base path. Alternatively, if you wish to be explicit about the images used for each stage, you can add these to your shader at the same level as the 'program' command:
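The keyword list that originally followed here is missing from this copy. As a hedged reconstruction (treat the exact directive names and the material path as assumptions to verify against FTE's shader documentation), explicit per-stage maps at shader level would look something like:

```
textures/characters/player
{
	program defaultskin
	diffusemap textures/characters/player.tga
	normalmap textures/characters/player_norm.tga
	specularmap textures/characters/player_spec.tga
}
```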

the default skin/rtlight/etc. GLSL will then be able to use the specified textures correctly. That's the theory, anyway.

this really ought to be mentioned elsewhere, but the problem with shaders in fte right now:

q3 shaders are not converted to glsl in gles2 contexts and thus vanish completely in such situations. they are not normally drawn using any glsl and thus do not do any hardware skinning, so the cpu does lots of blending etc. work, and you do NOT want that on a high-poly model.

dp shaders lack any and all overrides; syntax-wise, they're very much a subset of q3. people using dp expect the engine to ignore most of them anyway, which makes them problematic for other engines, especially as there's no obvious way to decide if the engine should just go and make stuff up instead (like adding specular).

fte shaders, on the other hand, have an extra 'program' term, which basically overrides the q3-shader syntax completely, except for the blend modes... and possibly the texture maps... which makes them weird, and results in lots of { and } and not much else (and then, because you omitted the blend modes, it doesn't work when glsl isn't available, or if someone stripped the program term like in your example).

rtlights and other multiple passes are an additional complication: fte uses an entirely different shader for those (inheriting the $diffuse from the original shader, hence why $diffuse was first supported in fte). you can specify your own multi-pass overrides with "bemode rtlight my_rtlight_shader", but expect to be limited to only glsl in 'my_rtlight_shader'. on the plus side, the existence of $diffuse means that .forceshader can almost be used for any entity without needing to edit the map lines, but it won't necessarily get blend modes right if you have some transparency on part of the model, unless you assume ALL of it is transparent (which can result in some z-fighting, has its own issues, and can be slower, at least on oldish hardware).

tl;dr: if you're just using custom textures without any special logic, then just omit the shader entirely. this is the best way to support the engine's favoured fast-path + lighting system, but does mean that your diffusemap will (should, in dp) be opaque.

In the shader you posted, it doesn't work anyway. Am I doing something wrong?

Regarding rtlights

rtlights and other multiple passes are an additional complication: fte uses an entirely different shader for those (inheriting the $diffuse from the original shader, hence why $diffuse was first supported in fte). you can specify your own multi-pass overrides with "bemode rtlight my_rtlight_shader", but expect to be limited to only glsl in 'my_rtlight_shader'. on the plus side, the existence of $diffuse means that .forceshader can almost be used for any entity without needing to edit the map lines, but it won't necessarily get blend modes right if you have some transparency on part of the model, unless you assume ALL of it is transparent (which can result in some z-fighting, has its own issues, and can be slower, at least on oldish hardware).

So, the only way to react to rtlights is to use a custom rtlight shader? And what should be the content of this file (the simplest method)?

Regarding omitting shader

if you're just using custom textures without any special logic, then just omit the shader entirely. this is the best way to support the engine's favoured fast-path + lighting system, but does mean that your diffusemap will (should, in dp) be opaque.

No, I tried omitting the shader and the model is always fullbright. And I NEED the specular factor. Isn't there really a simple shader for FTE somewhere that shows diffuse + normal + specular + rtlight?

Do you think there could be a chance to have a talk on IRC to discuss all the obscure points I can't work out? As you've seen, I'm spending a lot of energy studying FTE and, every time I understand something, I share it with the community. So it would be my pleasure to write some documentation for FTE shaders too!