Tuesday, 13 August 2013

Blender is a fairly nifty, free 3D modelling application. The learning curve is a little steep, but that's the price you pay for the vast number of features. This is what I've been using to make some models for Offender.

It's also got a built-in Python interpreter, and that, gentle reader, is what I want to talk about today. Python scripts can automate various tasks, and can directly access Blender's data through the plug-in API. This is just what I need to export meshes in a form that I can hook up to my C++ code, so it's time to learn Python.

Great, another language to add to the confusion. It's not just for Blender though, Python is also the language of choice for the Raspberry Pi - it's what the official GPIO API is written for. Fortunately I've already got 15 years on-and-off experience with Perl and 3 or so years of dabbling with Ruby, so it should be a seamless transition, right? Well, at least I'm used to interpreted languages and stuff like duck-typing.

Third time lucky

Let me start, as is the custom, with "Hello, world!". The picture shows the Windows Python shell and my first attempts at getting one line of code working. I've found my first real annoyance here - Python 2 and Python 3 have different syntax for print statements among other things. My version of Blender uses Python 3.3.0, a lot of the tutorials are for Python 2. Great.

Next up, a simple sequence generator - let's go with Fibonacci, as it's pretty simple and there are usually multiple ways of doing it. Putting the proverbial sledgehammer to it:
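(The original listing was posted as a screenshot, so here's a reconstruction of the sort of sledgehammer while-loop version being described - assume the details differ from what was actually on screen.)

```python
# Brute-force Fibonacci: collect the first n terms with a while loop.
def fib(n):
    terms = []
    a, b = 0, 1
    i = 0
    while i < n:
        terms.append(a)
        a, b = b, a + b
        i += 1  # no "++" operator in Python
    return terms

print(fib(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```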

Indentation instead of braces? Ugh.... I'm all for code readability, but I'm not a fan of enforcing it through syntax. And there's another discovery - no "++" operator, you have to use "+= 1". Initially it seems annoying, and it means you can't do things like "array[i++]", but after reading around a little it makes sense. Python rejects the Perl paradigm "there's more than one way to do it", and instead says "there should be one - and preferably only one - obvious way to do it". But more importantly it's just not necessary most of the time. The code above is a fair illustration of this, instead I could have done away with the while loop and used range.

I experimented with lots of different ways of implementing this, seems there is more than one way... Eventually I settled with this as a final attempt:
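(The actual final version lived on codepad; this is a guess at its spirit rather than the real listing - a generator, which is about as Pythonic as Fibonacci gets.)

```python
from itertools import islice

# An infinite Fibonacci generator: lazily yields terms on demand.
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Take the first ten terms from the infinite stream.
print(list(islice(fib(), 10)))
```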

I've put it on codepad - a great online interpreter - so you can see it in action if you feel so inclined. But it's not about the result really, it's about learning the syntax and buying into the paradigm/philosophy.

Next, write an export plugin for Blender. Seems like a big step, but how much Python do you really need for that? It's basically just array manipulation and file I/O - bread and butter for scripting languages - the most complicated bit is learning how to use the API. And there I hit the same problem I had with Python itself - the API has changed dramatically over recent versions, so a lot of the tutorials and existing plugins don't work. In fact some of the API tutorials use a 2.x version of Python. In general code isn't backwards compatible at all, though with a bit of intelligence you can convert most of it without too much difficulty.

By default, recent versions of Blender store meshes as polygons of arbitrary size. Unfortunately OpenGL's streamlined API can only draw triangles, so those polygons need converting. Triangulation is a well-known problem, and it can get quite complex if you can't assume convex polygons, but thankfully Blender can do the hard work for you and convert everything to what it calls tessfaces, which are triangles or quads. Quads are fairly trivial to triangulate, assuming they're all convex for now. There's also a separate Blender API called BMesh, but that's mostly used for importing and creating meshes rather than exporting.
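The quad handling can be sketched in a few lines of pure Python (the tessface access itself goes through Blender's bpy module, which isn't shown here - this just takes a face as a tuple of vertex indices, 3 or 4 of them, as tessfaces provide):

```python
# Turn a tessface into triangles. Triangles pass straight through;
# a convex quad is split along its v0-v2 diagonal into two triangles.
def triangulate(face):
    if len(face) == 3:
        return [tuple(face)]
    v0, v1, v2, v3 = face
    return [(v0, v1, v2), (v0, v2, v3)]

print(triangulate((4, 5, 6, 7)))  # [(4, 5, 6), (4, 6, 7)]
```

Note this only works for convex quads - a concave quad can need the other diagonal, which is where proper triangulation gets harder.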

Perfectly shaded object in Blender becomes...

I've set up my export plugin as a panel on the thingy that comes up when you select an object, with a textbox for a class name and buttons to export a .h and a .cpp file. I tried combining them into a single button which would export both, but I couldn't get that to work. My output is a huge array of triangles, which works OK. The thing is that graphics cards prefer triangle strips or triangle fans to discrete triangles, but I can't see any way of extracting strips from Blender. Someone will have come up with an algorithm for building strips, but I've not found a suitable one yet. I have discovered that optimally dividing a mesh into triangle strips is an NP-complete problem, but I'd settle for any sub-optimal solution that's better than discrete triangles.

... dodgily shaded object in game

My exporter isn't ideal and could do with a lot of improvement, but it works so I've been able to put something into the game. I've also added specular highlights, but they don't quite look right. I think it may be because I'm still doing Gouraud shading, and switching to Phong shading might make it look better. Phong shading everywhere would be nice, but it's a lot more demanding on the GPU as you have to do calculations for every fragment instead of every vertex. Probably not an issue on my monster desktop, but more of a concern on the Raspberry Pi.

Next I'd like to move on to getting some kind of physics model set up. I've continued to read up about aerodynamics, and one thing I discovered was that a real-world flying saucer prototype did exist - the Avrocar, which utilised the Coandă effect to generate lift. Unfortunately it proved to be too unstable, could only get a few feet off the ground, and wasn't very fast either, so it was discarded as dead-end technology.

The problem with a flying saucer is that the shape is inherently unstable. With no tail it's essentially just a wing, albeit a funny-shaped one. The aerodynamic centre of a wing is about a quarter of the way back from the leading edge - as the leading edge of a flying saucer is curved it's likely to be a bit further back, but certainly still forward of the centre of gravity. That leads to a negative longitudinal static stability, i.e. the aerodynamics tend to make a small deviation in pitch worse, rather than correcting it. I suppose I could rationalise some alien technology which stabilises it? Making it a half-disk might work. I kinda like the idea of making something which looks like a classic flying saucer, but cut in half with massive engines slapped on the back. Or maybe I'll just fudge the physics, as accurate aerodynamics are going to be nigh-impossible anyway.

My biggest problem at the moment is focus. My chum Thom suggested clouds for better context, and that set me off on a journey of discovery. I've been reading about Perlin noise, which can be used to make nice fluffy clouds and is often used for terrain too. Volumetric clouds seem to be the way to go, which led me on to volume rendering, a completely different 3D paradigm to the usual polygon rendering methods. I could use the same method for explosions too. Then I started wondering about water that ripples, is reflective and refractive, crepuscular rays, lens flare, and so on and so on. What I'm coming to realise is quite how bleedin' awesome shaders are. I've barely scratched the surface of what vertex and fragment shaders can do, let alone more recent innovations like geometry, tessellation and compute shaders. Effects which were previously only possible in pre-rendered movie sequences are now possible in realtime. The possibilities are nigh endless, and any self-respecting geek loves possibilities.

I've got to keep the ambition in check now, so I'm thinking of splitting this into two separate projects. One will be the original anti-Defender, much simpler than it currently is with really basic wrapping terrain, maybe resort to a fixed height map, cut back the draw distance so I don't have to worry about that, etc. Then I can focus on that until it's done. I've got the basic infrastructure, I've got a lot of the geometry and OpenGL stuff sorted, I've got terrain and a path to get objects on screen. Now I need physics, AI and explosions and I'm pretty much there - the rest is just bells and whistles. I'll think about a Raspberry Pi port, then go to town on pretty effects for the desktop version.

Then once that's done, I'll pick up the complicated stuff in a new project. I was thinking of doing something a bit like Mercenary. But given how much time I'm spending on the first project, it may be a little while before I get around to doing that.

Thursday, 4 April 2013

I've not really done much on Offender over the last couple of months, instead I've been playing a lot - Deus Ex: Human Revolution, Crysis, Mass Effect 3 and a bit of Fallout: New Vegas. Kind of interesting that the more time I spend working with OpenGL the more I look at these games and think "how would I do that?"

I've started putting lighting into my terrain. For the uninitiated, 3D APIs generally use three types of lighting. Ambient lighting is applied evenly to all surfaces, modelling light that scatters and bounces all over the scene. Diffuse lighting takes a direct path from the source, but scatters when it hits a surface, so surfaces facing the source appear brighter no matter where the viewer is. Finally specular lighting reflects off shiny surfaces, creating highlights at points which reflect light back to the viewer. Fixed-function pipelines typically provided all of these, these days they all need to be implemented with shaders.
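As a sketch of how those three terms combine per vertex - the coefficients here are illustrative made-up numbers, not anything from my shaders - the classic model is ambient plus Lambertian diffuse plus a Blinn-Phong specular term:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Classic lighting: ambient + diffuse + specular. All vectors are unit
# length; 'to_light' and 'to_eye' point away from the surface.
def light(normal, to_light, to_eye, ka=0.2, kd=0.7, ks=0.5, shininess=32):
    # Diffuse: proportional to the angle between surface and light.
    diffuse = kd * max(dot(normal, to_light), 0.0)
    # Specular (Blinn-Phong): bright when the half-vector between the
    # light and eye directions lines up with the normal.
    half = normalize(tuple(l + e for l, e in zip(to_light, to_eye)))
    specular = ks * max(dot(normal, half), 0.0) ** shininess
    return ka + diffuse + specular

# Surface facing straight up, light and viewer directly overhead:
# full diffuse and full specular on top of the ambient floor.
print(light((0, 1, 0), (0, 1, 0), (0, 1, 0)))
```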

For my terrain I'm just using ambient and diffuse lighting. Specular highlights are more complex than the other two, and grass and rocks generally aren't all that shiny so it's not really necessary. Water is shiny, but I was thinking of doing that as a separate entity to the terrain with its own shader. With the default per-vertex Gouraud shading specular highlights can look a bit iffy - you really need Phong shading, but that's per-pixel so it inflicts a lot of computation on the fragment shader. Having said this, bump-mapping might be worth doing and that's also per-pixel - but that's one for later methinks.

Also most of the sample shaders I've seen convert everything into eye coordinates (i.e. relative to the viewer), whereas I've just left everything in world coordinates (i.e. relative to the world origin). This saves multiplying everything by the view matrix, and as ambient and diffuse lighting are viewer-independent it shouldn't make any difference. With the only light source being the sun at a fixed infinity, I've been able to get away with really simple shaders.

However the lighting has emphasised a problem with my terrain generation, as there are prominent "ripples" across the surfaces. That'd probably look great on sand dunes, not so good on grass and rocks. This is almost certainly an effect of using an LCG for random number generation, though I've not done the maths to try and explain it properly - something to do with serial correlation I guess?

The reason I'm using an LCG is that it's fast, and I'm reluctant to move to a better algorithm if it's going to be prohibitively slow. I experimented with a CRC32 to get rid of those ripples, it looked a bit better but symmetrical - again there's probably a good mathematical reason for this. However combining an LCG and CRC produced decent results.
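The flavour of the hybrid can be sketched like this - the exact way the two get combined here is my guess for illustration, not the terrain code's actual mixing, and the coordinate packing is arbitrary:

```python
import zlib

# Knuth's MMIX LCG constants (64-bit).
A = 6364136223846793005
C = 1442695040888963407
M = 1 << 64

def lcg_step(x):
    return (A * x + C) % M

# Decorrelate the coordinate-derived seed with a CRC32 pass before
# running it through the LCG, so neighbouring tiles don't produce
# the serially-correlated "ripples".
def coord_random(x, z):
    packed = (x & 0xFFFFFFFF).to_bytes(4, "little") + \
             (z & 0xFFFFFFFF).to_bytes(4, "little")
    seed = zlib.crc32(packed)
    return lcg_step(seed) / M  # repeatable float in [0, 1)

print(coord_random(3, 7))
```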

LCG-based terrain. See those ripples.

CRC-based terrain. Strangely symmetrical (and a bit ripply)

LCG/CRC combo. Much better.

To see if moving away from an LCG made terrain generation noticeably slower I added some crude timing info to my debug build. The total time for calculating the vertices alone went from about 5ms per tile (pictured terrain is 3x3 tiles) with just an LCG to around 50ms with the LCG/CRC hybrid, a 10x increase! That kinda vindicates my decision to go with an LCG in the first place. Switching to the release build it took about 5ms for LCG/CRC vertices. Timing the rest of the initialisation for comparison, the only other significant block was the bit which copies data to OpenGL buffers at around 8ms. So very approximately the vertex generation with LCG/CRC is a third of the time taken to generate the tile, and the total time for the tile is around 16ms - a frame at 60Hz. I can live with that.

Incidentally, I made a few discoveries while doing this. Extracting textures from files takes a long time - ~220ms for 3 textures, about the same in release and debug. I was re-loading the same textures every time I generated a tile when I should be sharing the textures, so fairly obvious room for improvement there. I also found that if I invoked the release build from outside Visual Studio the textures didn't load. Some path issue I guess? I really must learn to put in helpful error messages rather than flippant remarks or swear words.

My next step was going to be expanding the area by generating terrain on-the-fly. Given that it's taking around a frame to generate a tile and the CPU has plenty of other things to be doing, I'd need to split the generation across multiple frames in any spare time remaining before the end of frame. Unfortunately OpenGL's syncing options are fairly limited, so it'd be best done in a separate thread... and I really don't want to go multi-threaded yet, because threads are evil. Honestly, I'm still not great at C++ and having enough trouble debugging strange behaviour without threads introducing a bunch of concurrency issues and non-deterministic behaviour. So I'm going to park this idea until version 2.0. For now I'll either stick with a load of terrain generated at initialisation, or create some height maps.

Speaking of strange behaviour, I noticed that at certain points the player object would start to vibrate violently on screen. I realised it was actually the camera vibrating, it's just that the player is the only thing close to the camera, and the cause was numerical instability in my matrix inverse. That was solved by reordering the rows in the matrix, such that the element the algorithm pivots on is the one with the largest absolute value. Ultimately it was a simple fix, but it was an interesting problem as it illustrated the practical implications of a mathematical phenomenon. I'd procrastinated on putting this pivoting in, which just goes to show that taking shortcuts comes back to bite you in the long run - you're better off doing things properly in the first place.
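A Python sketch of that partial-pivoting fix (a toy Gauss-Jordan inverse, not the actual C++ matrix class): before eliminating each column, swap in the row with the largest absolute value in that column so you never divide by a tiny pivot.

```python
# Gauss-Jordan inversion with partial pivoting.
def invert(m):
    n = len(m)
    # Augment the matrix with the identity on the right.
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        # Partial pivot: bring up the row with the largest |value|.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in a]

# Without pivoting this matrix would divide by zero immediately.
print(invert([[0.0, 1.0], [1.0, 0.0]]))
```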

Next up, time to divert my attention back to the player object as it seems faintly ridiculous to have vast swathes of detailed, accurately lit terrain with nothing but a matte purple wedge cruising over it.

Friday, 21 December 2012

Most of the mini-milestones I set myself in my last entry have gone pretty well:

Collision detection is one thing, collision handling is another

I decided against switching matrix libraries. The Boost one doesn't have a constructor, and that's the bit that's most iffy about my own matrix class. Once I'd implemented Gauss-Jordan elimination, pretty bloody well if I may say so myself, I had everything I needed. I also discovered that the uniform matrix functions in OpenGL have a built-in transpose flag, which is useful for overcoming the row-major/column-major issue.

Quaternions - not very intuitive, but dead easy once you get the hang of them. A single "orientation" quaternion can be directly translated into the model matrix, or can be multiplied by an arbitrary vector on the model to find where that vector lies in the current orientation. For example, if vertical thrust is in the Y direction in the model, multiply that by the orientation quaternion and you've got the new direction of thrust.
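The thrust-direction trick can be sketched in a few lines (a toy Python version for illustration, not the game's C++): rotating a model-space vector v by a unit orientation quaternion q is q * v * conj(q).

```python
import math

# Hamilton product of two (w, x, y, z) quaternions.
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

# Rotate vector v by unit quaternion q: embed v as (0, vx, vy, vz),
# compute q * v * conj(q), and read the vector part back out.
def rotate(q, v):
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), qconj(q))
    return (x, y, z)

# Orientation = 90 degrees about the Z axis. The model's "up" thrust
# direction (0, 1, 0) comes out pointing along negative X.
half = math.radians(90.0) / 2.0
q = (math.cos(half), 0.0, 0.0, math.sin(half))
print(rotate(q, (0.0, 1.0, 0.0)))  # approximately (-1, 0, 0)
```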

Mouse input seemed simple at first as I used WM_MOUSEMOVE, but the problem with that is it's bounded by the window or screen. It took me a while to find the right solution, many people seem to advocate moving the cursor back to the centre of the window, but I reckon the best way is to use raw input. Once you know how it's pretty simple and works beautifully.

A chase camera, as expected, was very easy once I had the stuff above in place. However it caused a lot of grief as I forgot to give it an object to chase, and I started getting some very weird errors - unhandled exceptions in crtexe.c. Turns out that's Visual Studio's special way of saying "segmentation fault" or "uninitialised pointer". Still, I got to the bottom of it fairly quickly and learned a lot about VS's heap debug features in the process.

Vertex buffers were again much easier than I thought. You just have to be careful to unbind the buffer when you're done or it'll confuse any subsequent OpenGL code which doesn't use buffer objects, and careful not to do out-of-bounds memory accesses or it can crash the video driver. I'm also using index buffers, they make my code a lot simpler and take up less memory. All in all I'm now able to have many more triangles on-screen without any creaking at the seams.

Collision detection is really quite hard. I'm just doing the most basic test - player collisions with terrain based on the player's "bounding sphere" intersecting with the terrain tile. Once again the coding isn't the problem - it's remembering all of the maths. How do you find the distance between a plane and a point again? Oh yeah... find the plane normal, scalar projection, Bob's your uncle. There's a lot more work to do here - I'll eventually have to do BSP trees I guess - but it's usable for now.
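That plane-normal-plus-scalar-projection recipe looks like this in toy Python form (the game does it in C++, and a real terrain test would also check the point lies within the triangle):

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Signed distance from 'point' to the plane through p0, p1, p2:
# plane normal = cross product of two edges, then take the scalar
# projection of (point - p0) onto that normal.
def plane_distance(point, p0, p1, p2):
    n = cross(sub(p1, p0), sub(p2, p0))
    length = math.sqrt(sum(c * c for c in n))
    return sum(a * b for a, b in zip(sub(point, p0), n)) / length

# Plane z = 0; the point sits 3 units above it.
print(plane_distance((5.0, 5.0, 3.0),
                     (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
```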

I still don't have much, but all the pieces are gradually coming together now, and it means I can go into more depth on specific things...

At the moment, the thing I'm getting most excited about is the terrain. Initially I thought I'd have a terrain map, wrapping at the edges as Lander did. But say I have a terrain map made up of 1024x1024 tiles, and only have one byte of terrain data per tile - that's a megabyte straight off the bat. For height and colour it's going to be at least 5 bytes per tile, and if I have multiple maps it could build up to quite a lot of data. I'd also like the possibility of large, open spaces where you can really build up some speed and not wrap too quickly, which probably means much bigger maps than that.

Wireframe terrain maps: an 80s sci-fi staple

Big terrain maps mean lots of storage, potentially a large memory footprint to cache it, and a lot of design too, so I'm drawn to the idea of procedural generation. Here terrain is generated algorithmically from a pseudo-random sequence. Rescue on Fractalus! used this idea, but that was a bit too craggy and random. I could have a mix of designed levels dotted over the world, with generated terrain covering the gaps - much like Frontier, where core systems were scientifically accurate but the rest of the galaxy was procedurally generated. This is gradually turning into a homage to David Braben...

But back in the real world, the terrain doesn't warp around existing sites - structures are located in suitable sites in the existing terrain. So I think that's probably the way to go - generate large amounts of terrain randomly with procedural generation, then scout for suitable sites to put the levels and apply some "terraforming". I'm not sure how easy that would be in practice, and if I changed the generation algorithm then everything would have to be re-done. So for now I want to concentrate on the algorithm itself and get that nailed down.

A commonly-used method for generating terrain is the diamond-square algorithm. It's a pretty simple iterative method which is described very well on this page, so I won't repeat the explanation here. To generate pseudorandom numbers I'm using a Linear Congruential Generator, with the same parameters Donald Knuth himself uses for MMIX and an "Xn" formed by combining the x and z co-ordinates.
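The coordinate-seeded LCG idea can be sketched like so - the MMIX constants are Knuth's, but the way x and z get combined here is an illustrative guess (those mixing primes are arbitrary), not the actual terrain code:

```python
# Knuth's MMIX LCG: X(n+1) = (a * Xn + c) mod 2^64.
A = 6364136223846793005
C = 1442695040888963407
MASK = (1 << 64) - 1

def lcg(x):
    return (A * x + C) & MASK

# Derive a repeatable pseudo-random height offset from the (x, z)
# tile coordinates: same coordinates in, same number out, which is
# what lets terrain be regenerated on demand instead of stored.
def height_noise(x, z):
    seed = ((x * 73856093) ^ (z * 19349663)) & MASK
    return lcg(lcg(seed)) / 2**64  # float in [0, 1)

print(height_noise(10, 20), height_noise(10, 20))  # same both times
```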

A mountain floating in the air kinda ruins the illusion of realism

The results are vaguely realistic-looking. I've applied some stock textures with transitions based on bands of height, and some very crude blending between them - it doesn't look brilliant but it's good enough for now, and it showed up a bug in my depth buffering which I hadn't noticed with wireframes or flat colouring.

The next thing to look at is what to do at the edges. The weedy solution would be to wrap, but because I can use procedural generation to map out an essentially infinite area it'd be better if I generated more terrain. The problem is that I don't really want to have to draw an infinite area every frame, so I need to find some intelligent way of only storing the terrain for the local area and generating more terrain on-the-fly as the camera moves. Easier said than done, and it's going to get worse when I add diffuse lighting and need to calculate vertex normals for every triangle. But an advantage of the diamond-square algorithm is that because it's iterative you can easily generate some terrain in the distance at a low level of detail and apply more iterations to increase the detail as it gets closer.

Ideally I'd map out an entire planet. That'd be fantastic, but it's going to be tricky. The tiles that make up the terrain will no longer be relative to a horizontal plane, but the curved surface of the planet. The horizon will naturally limit the required draw distance at low altitude, but it'll need to increase at higher altitudes to the point where I can fit the entire planet on screen. This'll probably mean I'll have issues with depth buffer precision, which can lead to z-fighting, so at the very least I'll have to change the clipping planes as I zoom out, but I'll probably have to do multiple passes.

Still no physics, still no lighting, still using a placeholder for the UFO, still no sound whatsoever. Then I'm getting crazy ideas for little touches, like using GLSL shaders to model atmospheric refraction. And one day I'll port it back to the Raspberry Pi again. Plenty of stuff to do, so little time.

Tuesday, 27 November 2012

The aforementioned freezes are back and getting a bit ridiculous now. The problem's not limited to OpenGL, it sometimes happens shortly after boot before I've run anything. Fortunately I now occasionally get useful error messages, so I've been able to do better Google/forum searches and apparently this is quite a widespread issue. Setting the USB speed to 1.0 seems to help quite a bit, performance still seems acceptable, but it's making the whole Raspberry Pi experience a bit frustrating at the moment.

I don't see any point in working with the Raspberry Pi in this state, definitely not any hardware project where there are likely to be power issues obscured by the USB problem. So it's with a heavy heart that I'm moving my OpenGL coding over to Windows, which is a crying shame. I'll come back to the Raspberry Pi one day, hopefully soon, but for now I'm left feeling that I got mine a bit too early, more so now there's a rev 2 board and more recently it's being shipped with 512MB as standard. Maybe I'll blow mine up with a hardware project and have an excuse to buy a new one?

OpenGL is intended to be cross-platform, and in past projects I've had it up and running on Windows and Linux very quickly. The first problem with OpenGL in Windows is that the maximum version supported is OpenGL 1.1, which was released way back in January 1997 when the likes of the 3dfx Voodoo, Matrox Mystique and PowerVR series 1 were all the rage, as indeed was the Rage. v1.1 has been fine for me in the past, but if I want to use the same features that are mandatory for OpenGL ES 2.0 (primarily shaders, introduced to desktop OpenGL in v2.0) then I need something more up to date.

You can't upgrade Windows to a newer version of OpenGL as far as I can tell, to get more up-to-date feature support you have to add individual features as extensions. Thankfully this can be handled by the GL Extension Wrangler Library (GLEW). It's a bit of a pain to set up, and when I thought I'd managed it both the static and the dynamic library refused to link no matter what I did, so I ended up importing the GLEW source into my project.

And then I think I found a bug in Visual C++. I've got a square matrix class template which takes a value type and a dimension. Its only member variable is an array which contains the elements, and there are member functions to assign values, do multiplication of two matrices, etc. The default constructor does nothing and, as I'm not ready for the brave new world of C++11 yet (given that VC++ has enough trouble getting C++98 right), I assign values with a redefined operator= which copies data out of an array, or another constructor which takes an array. When I created some arrays to do this, and then declared the matrices, I found some really weird stuff going on. If I just did the matrix declarations, no copying, all of the matrices had the same pointer. If I passed the arrays to the matrix constructors, or assigned them with operator=, then each matrix would have the same pointer as one of the arrays, but not the array that was assigned to it. If I made the arrays static (which is perhaps the right thing to do anyway) then everything was fine. What on earth could cause this? Just my own incompetence? The same code worked OK in g++. As soon as I've found a minimal example of this going wrong I'll submit it to MS.

After I'd worked around that, and remembered to actually call the function which initialised my OpenGL shaders (took me two days to work that one out), worked out how to use a class method as a custom message handler, tried GDI+, failed to get it working and reverted to OLE (about a month on that, admittedly much of it spent being too frustrated to progress and playing Skyrim instead) I was back to where I'd got to on the Raspberry Pi. I was doing a simple rotation about the X-axis, but when I set up the perspective projection matrix properly I got oscillation in the Y direction in time with the rotation. This didn't happen with orthographic projection, so surely I'd done something wrong with the projection matrix? Turns out it was fine, but GLSL stores matrices in column-major format whereas C arrays are effectively row-major. Transpose the final Modelview-Projection matrix and hey presto... everything working beautifully.
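The row-major/column-major mismatch is easy to demonstrate in a few lines (toy Python here rather than C, but the memory-layout point is the same):

```python
# A 4x4 matrix stored the C way (row-major): m[row][col]. GLSL's
# mat4 expects the floats column-major, i.e. the first four values
# are the first *column*. Flattening the transpose gives exactly that.
def transpose(m):
    return [list(col) for col in zip(*m)]

def flatten(m):
    return [x for row in m for x in row]

m = [[ 1,  2,  3,  4],
     [ 5,  6,  7,  8],
     [ 9, 10, 11, 12],
     [13, 14, 15, 16]]

print(flatten(m))             # row-major order: 1, 2, 3, 4, ...
print(flatten(transpose(m)))  # column-major order: 1, 5, 9, 13, ...
```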

Sorry Chloe, Little Teddy turned out to be an intergalactic criminal mastermind so we had to send him to the Phantom Zone

I've now moved away from "yayy, it works!" and started structuring things a little better for Offender (still need a better name). Rather than continuously drawing a load of triangles, I've got object classes with drawing and moving methods, and separate drawing routines for terrain. Now I can build up a list of objects, it's actually starting to look like the beginnings of a game. However, now I'm putting in more stuff I've found that it goes belly-up and dies at around 17,000 triangles. At 60Hz that's about a million a second, which seems a bit low. Admittedly there's still a lot of room for improvement - I'm not using vertex buffers for example - but sorting that out is secondary as I don't need huge numbers of triangles on-screen (yet). All I really need is a single object and some terrain for context, hence the rather psychedelic effort shown here.

In spite of being a lot more complex under the hood, on the surface it's still a bit "Hello Triangle!". Next steps:

Maybe use someone else's matrix library, for all the usual reasons people use standard libraries. Why go to the trouble of implementing a matrix inverse when someone's got a tried-and-tested implementation already? The ever-dependable Boost has a matrix library, but I don't think it's quite what I want.

Do object positions by coordinate and rotations by quaternion, rather than matrix, so it's easier to move things around. I've already got much of the code for this in my OpenGL screensaver.

Add mouse input and player control. Easy for Windows, I'll leave Linux to another day.

Add a chase camera to follow the player object. Should be dead easy once I've done all of the above.

Add collision detection. Though it's not hard to knock together a crude algorithm, it's difficult to do collision detection accurately and not slaughter your CPU in the process. I've had loads of ideas about this, found a guide on the subject and looks like I was definitely thinking along the right lines. I'll start with something pretty crude though - if I could just make the terrain solid so I can't fly through it, that'd be a start.

Tuesday, 25 September 2012

It's been a bit of a disjointed week for my geekery, lots of little bits and pieces.

Screenshots - Having had no luck with existing apps, I asked on the Raspberry Pi forums and the only suggestion I got was to use glReadPixels. This requires screenshot dumping code to be written into the app generating the framebuffer, which is perfectly doable and the code should just be boilerplate. With libjpeg to compress the raw pixels, it works a treat. I'm wondering if it's worth writing a standalone capture app, assuming that would actually work, or if a portable function is adequate, perhaps even better.

Freezes - Since I started doing more and more complex stuff with OpenGL ES 2.0 I've been getting regular freezes. These were so bad that everything locked up, the network died, no debug was dumped anywhere that I could see, even the keyboard died so the Magic SysRq keys were of no use. I was just about to start the old comment-things-out-one-at-a-time trick, when a Raspbian update was released and sort of fixed it. It now seems to run indefinitely without freezing, but sometimes the USB hub spontaneously dies even though the graphics still keep going. I've plugged my keyboard directly into the RPi now, and it seems to be OK.

3D modelling - While manually-constructed vertices are fine for hello triangle, they're not really feasible for bigger things. So I've downloaded Blender and started learning how to use it. It's not hard, there's just a lot to learn. Thankfully there are some excellent tutorials to get started with. The biggest problems I'm having are my lack of artistic ability, and trying to avoid making my alien craft look like anything from any movie or game I've seen. At the moment it looks like it came right out of Elite. I'll get better, hopefully.

Flight physics - For my anti-Defender (working title: "Offender", better suggestions welcome) the centrepiece is going to be the alien craft. When I think of the archetypal UFO, I think flying saucer - something which doesn't look terribly aerodynamic and just hovers in the air, better suited to interstellar travel than air-to-air combat. The kind of craft I'm picturing is based on that, but has been adapted to fly at speed in the earth's atmosphere. I want something which flies like nothing on earth, but obeys the same laws of physics that earthly craft are bound to and depend upon. I'm going to have to work out the physics with little-to-no knowledge of aeronautics. Here goes then...

Whereas a fixed-wing aircraft uses its wings to generate lift, the craft I picture will have some kind of anti-grav thing propelling it upwards. How would that behave differently to wings? It'd make lift more or less constant, not dependent on velocity or angle of attack, and there'd be no ceiling. The thrust would have to be manually varied with the angle of climb or descent or there'd be a kind of lift-induced drag - in a vertical climb it'd fall backwards. Hinged ailerons or a rudder wouldn't be practical so the anti-grav would need to vary to generate pitch and roll. If there were multiple upwards anti-grav thrusters, then increasing thrust on one side while decreasing on the other should accomplish this and maintain stability. Yaw would require horizontal thrust, and maintaining the ability to roll in a vertical climb would require downward thrust.
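The thrust-varies-with-climb-angle point can be sketched numerically (all the numbers here are made up for illustration): if the "up" thruster is fixed to the craft, pitching by angle theta leaves only thrust * cos(theta) opposing gravity, so maintaining altitude needs the thrust scaled up accordingly.

```python
import math

# Thrust needed from a body-fixed "up" thruster to hover while the
# craft is pitched by pitch_deg. At 0 degrees this is just weight;
# approaching 90 degrees it blows up, which is the "falls backwards
# in a vertical climb" problem.
def hover_thrust(mass, pitch_deg, g=9.81):
    pitch = math.radians(pitch_deg)
    if math.cos(pitch) <= 0.0:
        raise ValueError("vertical or inverted: body-fixed lift can't hover")
    return mass * g / math.cos(pitch)

print(hover_thrust(1000.0, 0.0))   # level: just the craft's weight
print(hover_thrust(1000.0, 60.0))  # 60-degree climb: double the thrust
```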

With a half-decent physics model I think I can get that to work, and also have human aeroplanes, helicopters and missiles behaving with a moderate degree of realism. The trick is going to be getting the level of complexity right so it's accurate enough but isn't computationally intractable, especially if I want this to run on a Raspberry Pi. I'm hoping that by modelling a few simple laws of physics, higher level effects will just drop out - for example, modelling angle of attack correctly should result in stalls. After thinking through the basics it's clear that the most difficult and important bit is going to be aerodynamics and drag.

When an aircraft stalls and tailspins, why does it turn into a nosedive? It can't be its weight distribution as net weight acts through the centre of gravity and imparts no torque. It's got to be drag. When an aircraft rolls, why does the roll not accelerate? It's got to be some kind of angular drag. Also, if my craft is going to be capable of entering the atmosphere from space, drag would determine how much heat gets generated. As I understand it, for an accurate drag model you need to be able to assess the drag coefficient for every possible angle of attack. I basically need a virtual wind tunnel. That's going to be fun... there's got to be a way to simplify it.
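One way to dodge the virtual wind tunnel, at least to start with: assume quadratic drag and fake the coefficient's dependence on angle of attack by interpolating between a nose-first value and a broadside value. Every constant in this sketch is invented - it's just the shape of the idea:

```cpp
#include <cmath>

// Crude drag model: F = 0.5 * rho * v^2 * Cd * A, with Cd blended
// between a streamlined nose-first figure and a bluff broadside figure
// by sin^2 of the angle of attack. Both coefficients are made up.
double dragForce(double rho, double v, double area, double angleOfAttack) {
    const double cdMin = 0.05;  // flow along the craft's axis
    const double cdMax = 1.2;   // flow side-on
    double s = std::sin(angleOfAttack);
    double cd = cdMin + (cdMax - cdMin) * s * s;
    return 0.5 * rho * v * v * cd * area;
}
```

It won't tell you anything about re-entry heating, but it does make a stalled, side-on craft decelerate much harder than one flying nose-first, which is the behaviour that matters for the tailspin question.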

I'm kind of looking forward to trying this out, I don't think it would take a huge amount of effort to get to the stage where I have a craft (even if the model is just a placeholder), some control and some basic physics to try out. I'd need to provide some terrain for context if nothing else. No proper collision detection for a while - that's a whole new can of worms - but I should be able to add something which causes a splat or bounce at zero altitude. So far I'm still plucking bits of boilerplate from the stuff I've done so far, making it as generic and reusable as possible, and building up some library functions. Might as well do it properly from the start, eh?

And finally some linkage, as I thought this was pretty cool: Kindleberry Pi

Sunday, 16 September 2012

I'm just a dabbler in OpenGL really. Come to think of it, all I've really done is the most basic geometry and transforms, I've not even done texture mapping (though I've done that in DirectX). As I've had an interest in 3D graphics since college, it's something I want to get more practice at.

The Raspberry Pi supports OpenGL ES 2.0, which is also used in some of the newer mobile phones and tablets. The main difference between ES 2.0 and the OpenGL I'm used to is that it's built around a programmable pipeline, which in practice means that a lot of the core functionality I've taken for granted has been removed.

OpenGL uses little programs called shaders, which are used to configure the graphics hardware or software to apply various transforms or effects. They're written in the imaginatively named GL Shader Language (GLSL) - put the code into a string and pass it to OpenGL, it'll get compiled at runtime and applied to the appropriate data.

I've never done anything complicated enough with OpenGL to warrant writing a shader - simple stuff is taken care of by the core functions - but in ES 2.0 even the simplest of tasks requires a shader. For example, to project the 3D image onto a screen - something anyone doing 3D work is going to want to do - I'd normally set up the projection matrix. But that's gone in OpenGL ES 2.0, you have to put together the matrix yourself and manually apply it to each vertex with a vertex shader. Apparently this isn't unique to OpenGL ES - "desktop" OpenGL 3.1 has got rid of the projection matrix too - so this is something I'm going to have to get used to.

There are good reasons for this - it makes the API simpler and more flexible for advanced users - but it does make it harder for the beginner who has to do a lot of work to get the simplest thing up and running. It also means that the code isn't backwards compatible with OpenGL ES 1.1 and OpenGL 3.0, which is a pain as I like to move my code around onto different platforms.

I've not found a really good guide or tutorial for OpenGL ES 2.0 in C++ (perhaps I should write one?), but by taking snippets of code from various webpages I've managed to cobble together a "Hello Triangle!" It's quite epic at over 300 lines, but there's so much to set up I'd struggle to make it shorter. In the middle of doing this my HDMI->DVI adaptor finally turned up, so I've been able to plug my Raspberry Pi into a monitor and get my red triangle in glorious 1280x1024, instead of the crappy interlaced PAL which as we all know is 720x576.

After that initial hump, getting something a little more complicated working was relatively easy. The tutorial code was all in C, so I made it a bit more C++-like with some juicy classes, call-by-reference, iostream instead of stdio, and getting rid of explicit mallocs where possible. "Hello Triangle!" was using the native display (i.e. no windows), so I added the option to use XWindows instead where it's available. Turns out there's a problem with the Raspberry Pi implementation of X which prevents this from working, so I've abandoned that for now. Then I learned how to do rotations with the vertex shader - which was fairly easy once you have the right matrix and can remember how to do matrix multiplication - and texture mapping with the fragment shader - which is far more complicated than I expected.
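The rotation itself really is just a matrix. Here's a sketch of a Y-axis rotation built in the column-major order OpenGL expects - it would be uploaded with glUniformMatrix4fv and applied in the vertex shader as matrix times vertex (function and parameter names are my own):

```cpp
#include <cmath>

// Fill m with a 4x4 rotation about the Y axis, column-major as OpenGL
// expects. Each group of four values below is one column of the matrix.
void rotationY(float angle, float m[16]) {
    float c = std::cos(angle);
    float s = std::sin(angle);
    float r[16] = {
           c, 0.0f,   -s, 0.0f,  // column 0
        0.0f, 1.0f, 0.0f, 0.0f,  // column 1
           s, 0.0f,    c, 0.0f,  // column 2
        0.0f, 0.0f, 0.0f, 1.0f   // column 3
    };
    for (int i = 0; i < 16; ++i) m[i] = r[i];
}
```

The column-major layout is the bit that catches people out coming from row-major C arrays - get it backwards and you've silently applied the inverse rotation.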

The end result was something which mapped a picture to a diamond shape and flipped it over and over, which I'm calling Phantom Zone. I've not worked out how to do screenshots without XWindows, so no pretty pictures this time. It's not much, but it's been useful for picking up the basics. Unfortunately there's a bug where it crashes the Raspberry Pi so badly that all remote connections are killed. I've no idea how I'd even begin to debug that one.

Now I've got the basics working, I'm coming up with ideas for something a bit bigger. When the Acorn Archimedes first came out it was bundled with a demo called Lander. Written by the legend David Braben, it later became the full game Zarch and was ported to other platforms as Virus. I think something along those lines would be fairly simple to do with the benefit of hardware accelerated graphics, at the very least I could get the terrain and craft working.

If that goes well I thought about turning it into a kind of reverse Defender, where you pilot a lone flying saucer and have to abduct people while avoiding increasingly aggravated human defences. That's the kind of idea I can pick up and run with all day, indeed I've already lost a few hours of sleep thinking through the physics alone... but I'm not going to reveal any of the ideas here yet, I'll see how many of them I can put into practice.

Six entries in and I've written a lot about what I'm going to do, and time spent learning the basics, but I haven't actually achieved much. I'm kind of enjoying playing with software for the time being, though a part of me is itching to do the robot and knows that once I've drawn out some schematics I can start buying parts.

Monday, 27 August 2012

It seems so trivial now, but way back in the proverbial day the shortest BASIC program felt like a triumph.

Nearly 30 years on, the language may have changed but it's still satisfying proof that you can get something to work. In this instance it took seconds to write some C++, and a couple of hours spread over half a week to work out how to cross-compile from Ubuntu to Raspbian. The relief at finally seeing those immortal words was immeasurable. Could be worse I suppose - it took researchers two years to write the first non-trivial program in Malbolge.

To cut a long story short, the ARM cross-compile toolchain in the Ubuntu repos is set up for ARMv7 onwards whereas the Raspberry Pi has an ARMv6. With just a few parameters in the makefile you can tell it to compile for ARMv6 with hard float, but you also need to get hold of some ARMv6 static libraries - I just copied them over from the RPi. There's a toolchain on the Raspberry Pi repository on github which already has the correct parameters and libraries, but it's 32-bit so I had to install some 32-bit libraries to get it to run on my 64-bit Ubuntu virtual machine (libc6-i386 and lib32z1 if you're asking). It worked on 32-bit Ubuntu with no fiddling at all, but it took me a week's worth of evenings to figure that out.
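For the record, the makefile parameters I mean are along these lines - illustrative only, since the exact compiler prefix depends on which cross-toolchain you've installed (the flags themselves are standard GCC ARM options):

```make
# Target ARMv6 with VFP hard float for the Raspberry Pi
CXX      = arm-linux-gnueabihf-g++
CXXFLAGS = -march=armv6 -mfpu=vfp -mfloat-abi=hard
LDFLAGS  = -L$(HOME)/rpi-libs   # the static libraries copied over from the RPi
```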

This is all well and good when I'm only using the most basic libraries which come with the toolchain, but to do more interesting stuff I'll be needing more libraries. I still don't think I've found the ideal solution to this: at the moment I'm rsyncing the Raspbian libraries over from the Raspberry Pi. Compiling libraries from source seems a bit of a wasted effort. I wonder if I can set up APT to get the Raspbian libraries directly from the repo?
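The rsync step is nothing clever - something along these lines, with the local staging path being my own invention:

```shell
# Mirror the Pi's libraries into a local sysroot for the cross-linker
rsync -avz pi@raspberrypi:/lib/ ~/rpi-root/lib/
rsync -avz pi@raspberrypi:/usr/lib/ ~/rpi-root/usr/lib/
```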

I've managed to get the core part of my OpenGL screensaver working on the Raspberry Pi now. It's incredibly slow, and I think that's because I'm using the OpenGL libraries which aren't supported by the GPU, so it's using the Mesa software drivers instead. To use the GPU I'm going to need to switch to OpenGL ES and EGL, and that's a whole new can of worms but it's ultimately what I wanted to do anyway.

So it's been a bit of a frustrating fortnight, I've broken my update-a-week guideline, and spent too much time floundering around trying to understand the infrastructure with too little tangible progress. Having said that, floundering around for a while is fine so long as something is learnt in the process, and I think I've learnt a lot more about the GNU C++ toolchain, in particular the linker. Hopefully I'll start getting my head round OpenGL ES soon, I think it's mostly the same as regular OpenGL and it's just a matter of appreciating the differences.