Blog

One of our assignments for the MEng course this year was a 2D ball physics simulation written in C++. We were allowed to use any graphics API we wanted, so I opted for OpenGL 4 as I fancied getting some experience with the latest version.

This coursework posed different challenges from the golf ball physics simulation I developed last year, as the focus was on realistic collision detection and response between multiple objects simultaneously.

The simulation also had to be networked using the Berkeley sockets library (C sockets). The server would perform all physics calculations and send the latest object states over a network to each of its clients. The clients would render the scene and send user input data back to the server to be used in its calculations. This meant you could have multiple people interacting with the simulation across many computers simultaneously.

The fun part of the project came near the end, when I added a rope into the simulation. It looks quite nice in the video, but as you can see, an energy gain in one of the springs will sometimes send the rope spiraling out of control as the extra force is passed on to the adjacent springs.

You’ll notice some weird behavior towards the end of the video: balls falling through the enclosure, the rope self-destructing, terrible collision detection and response, and so on. This is because I intentionally lowered the number of physics steps per second. Fewer steps mean better performance but less realism, so the right balance must be struck when choosing the physics step rate for a real-time application.

Last semester’s big coursework was a DirectX 11 graphics project written in C++. The project was to visualize a desert island within a globe. The software had to cycle through four seasons and show off a bunch of advanced graphical techniques, which we learned how to implement as we went along.

The trickiest parts of the project (or at least, the parts I spent the most time on) were the particle systems and the shadow mapping technique. Both required a lot of fiddling to produce a result I was happy with. I’m also pleased with how the water turned out; initially I went for a Wind Waker-style look with reflection, refraction and a distortion effect on the surf, and I also threw in wave movement at the last minute for good measure.

The weakest effect is probably the sea mist, as it isn’t really visible when the camera is inside the globe. The solution would be to add a screen-space effect that adds a certain amount of grey to each fragment depending on its distance from the camera, but only when the camera is inside the globe.

Update: A couple of months later I went back and made the visuals look more appealing, and also added some ambient sound to the video. You can see the original video on my YouTube channel.

Bit late posting this! Back in October, three others and I spent a week developing a video game for the Three Thing Game competition, which culminates in a 24-hour crunch session at the end of the week. Because we wanted to gain experience with a new technology so that we had something to take away from the competition, the team opted to build a game that made use of the Xbox Kinect.

The three things we were given to base the game around were Grunting, Spring and Light Cycles. After a long discussion about how best to approach the words, we came up with an idea that the whole team agreed on: manipulating the environment around a moving pig to help it reach its destination.

Above is a short gameplay clip. You may notice some small glitches such as the fruit not being collected early on. This is a collision detection glitch that didn’t happen on the lab computers at university, so we missed it.

The idea of the game is to use the Kinect to manipulate objects in the environment. The player could punch to shatter boulders, jump to loosen springs the pig was standing on, and swipe rain clouds around to scare away the house cats (the cat was supposed to be redrawn to look like a bear, but we ran out of time and had to stick with my rushed programmer art).

The time of day changes as the game goes on. At night, the pig gets frightened and speeds up, increasing the difficulty of the game for the player. Because the number of obstacles increases as the level progresses, the game has a rising but varying difficulty curve.

The competition was fierce this time around. Out of 40 teams you needed to make the top 8 to progress to the second round, but we only made the top 12. Despite this, we are very proud of what we made, as it turned out to be very fun to play.

Although the Kinect briefly let us down during the demonstration, it was very well behaved for the most part. Even though we didn’t have access to a Kinect until the weekend, the SDK was friendly enough that I could write the majority of the code we needed the night before, and all we had to do at the weekend was tweak numbers until it recognised our actions in the way we wanted.

All of my fellow team members were absolutely fantastic, giving it their all and focusing smartly on the things that needed doing the most, so that we ended up with a playable game that had a start and a finish. Even being the victim of a midnight home invasion during the 24-hour session wasn’t enough to stop one of our team members from coming in to work on the game!

For the past six weeks I’ve been at Seed Software, working one day a week in a team of four to develop an asset coverage map. Following the Scrum development methodology, we’ve just finished our second sprint and started our third. Here’s where we’re at:

What the image above shows is the parts of the Middlesbrough area that can and can’t be reached by fire station assets within 15 minutes. Green areas are accessible within the government-specified time, and red areas are not. Don’t panic though: the above image is generated using limited test data. More areas should become green when we get our hands on a real snapshot of fire brigade data.

The process of generating this map image is rather complex; we’re building on two years’ worth of previous work. The base project we started with was a routing system that selects the fire brigade assets closest to an incident and suggests to a command and control centre officer that they be routed there. It does this by running an A* search for each asset to find its quickest route to the incident, then comparing the travel times to see which asset is closest.

The most useful part of the routing system for us was its ability to load an entire database over a network into RAM. This allowed us to write a form of Dijkstra’s algorithm that, given an asset location, searches through a linked list of route nodes until the travel time of the asset has exceeded 15 minutes. This gives us a list of every reachable route node, the locations of which we can then render onto a map.

There’s a lot more to it. Rendering the map is a complicated process that involves a grid system, colour weighting, and a bit of Minesweeper tech; and I haven’t even mentioned that we’ll be sending the data over a network and rendering the map on a website with the use of OpenLayers.

Technical details aside, the project requires a great deal of oversight and management. As I mentioned previously, we’ve been following the Scrum development methodology, and so far it’s working brilliantly. Using Microsoft Team Foundation Server we can keep track of our progress from wherever we are, but we’ve also dedicated our whiteboard to the task:

It always surprises me how much I enjoy the project management aspect when working in teams. Getting an overview from each team member and being able to see the big picture is very satisfying. It also aids the decision making process, especially when the customer changes the project requirements, something that happens very frequently in our line of work.

Yesterday we presented the results of our second sprint to our customers in the form of a demo. The customers were able to play with the rendered map in the OpenLayers interface on a simple webpage, and they were extremely pleased with what they saw. So pleased, in fact, that they felt it would be ready to demo to the Cleveland Fire Brigade after Christmas, once we’ve spruced it up for the purpose.

None of this would have been possible without the fantastic team I’ve been working with. Knowing that we all put our best effort in each and every day is immensely gratifying, and that knowledge keeps me going despite the challenges that lie ahead of us. So cheers guys. 😉

I added a projects page where people can see my handiwork. It just has one thing on it at the moment though.

The Three Thing Game competition is rapidly approaching. It’s a 24-hour game development session where the game you create is centred around three things chosen by the organisers. In March our team was called For One Night Only, as we (jokingly) expected the night to be so horrendous that we’d fall to infighting and never do it again. As it turned out, we had great fun.

Our three things were Juggling, Birds and Mayhem. Because a team member and I had been playing too much Advance Wars/Fire Emblem recently, we decided to go for a turn-based strategy RPG game that incorporated our three words. We weren’t completely crazy; we knew that strategy RPGs do not get developed overnight, we just wanted to get a basic grid fighting system in place.

Unfortunately our idea was still too ambitious. Around 1 AM we encountered a gameplay-breaking bug in the character movement system which, tired as we were, we couldn’t fix. We started implementing as many other features as we could, but as 6 AM approached we realised we were too far behind to catch up before the 9 AM deadline, and threw in the towel.

Despite the failure, I’m happy to report that every team member was still smiling at the end, and no arguments had taken place. Part of the reason, I think, was that we went in with two intentions:

1) Have fun.
2) It isn’t about winning; it’s about learning how to use the PlayStation Suite SDK (now known as PlayStation Mobile).

As we gear up for the next TTG, it’s important to keep these intentions in mind. Our new team name, The Runners Up, reflects this. This time around we would like to finish our game, and so we’ve chosen an idea that is scalable while still giving us plenty of experience with our chosen technology, the Kinect.

As fourth years on the MEng course, we get to gain some industrial experience by working at the university-owned Seed Software. On Friday I was selected to work on a route visualisation system for the fire service’s command and control system. It was my first choice, so I’m very happy I got it.

The current command and control system can estimate the time it takes for a fire engine to reach a specified address. What our team has to do is work out which areas are reachable within 15 minutes and visualise them on a map, so that the fire service knows which parts of their county lack proper fire engine coverage. Such a calculation is (I imagine) very computationally expensive, so the team will need to find a fast way to perform it without sacrificing too much accuracy.

I can’t get much more specific than that as I don’t know the details yet. That’s what Monday is for. We’ll be working at Seed one day a week until the Christmas holidays, which, when you think about it, isn’t much time at all. We’ve therefore been advised to follow a software development methodology called Scrum, which is actually really interesting. You can read the essentials of it here: http://www.scrum.org/Portals/0/Documents/Scrum%20Guides/Scrum_Guide.pdf

Whether we can finish this project remains to be seen; we were signed onto it with the full expectation that it wouldn’t be finished by next year. I’m confident nonetheless that the team will give it their best shot.