
As the creator of the Unity asset Gravity Engine I was curious about how well Unity modeled orbits without all my extra work. In investigating the way Unity does physics I came across a dramatic difference in the accuracy of the result based on the order of two lines of code – and a numerical method I should have known about!

The “natural approach” is to have Unity do the physics, with a rigidbody AddForce() providing the gravitational interaction. This requires a script with a FixedUpdate() that determines the force of gravity due to some other object based on that object’s mass and distance. For simplicity we’ll consider just one “heavy” object in the scene at the origin. The result can be seen below. The trail of the object does not form a closed orbit (as it should for Newtonian gravity) and the orbit axis shifts over time. Does this provide a clue about the way Unity is doing physics?

In order to investigate how Unity is evolving the force (i.e. what numerical approach it is using) we can now create some mini-physics engines of our own and see how they compare to the result from AddForce().

Creating a simple physics integrator for a single body is straightforward. In a FixedUpdate() method we can update the position by a time step, calculate the acceleration due to gravity and use it to advance the velocity by a time step.

The simplest algorithm (Euler) updates position and velocity each time step (r, v and a can be Vector3s, or this can be taken as pseudo-code for array operations):
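Here is a minimal sketch of the two update orders in Python (the post’s actual scripts are Unity C#; the unit G·M, the circular-orbit initial conditions and the step counts are assumptions for illustration):

```python
import math

GM = 1.0  # gravitational parameter of the central "heavy" body (assumed units)

def accel(r):
    """Acceleration at position r due to a point mass at the origin."""
    d = math.hypot(r[0], r[1])
    return (-GM * r[0] / d**3, -GM * r[1] / d**3)

def euler_step(r, v, dt):
    # Explicit Euler: position AND velocity are advanced using
    # values from the start of the step.
    a = accel(r)
    r = (r[0] + v[0] * dt, r[1] + v[1] * dt)
    v = (v[0] + a[0] * dt, v[1] + a[1] * dt)
    return r, v

def semi_implicit_step(r, v, dt):
    # "Not Euler" (Semi-Implicit Euler): advance the velocity FIRST,
    # then use the NEW velocity to advance the position.
    # Same two lines as above, opposite order.
    a = accel(r)
    v = (v[0] + a[0] * dt, v[1] + a[1] * dt)
    r = (r[0] + v[0] * dt, r[1] + v[1] * dt)
    return r, v

def energy(r, v):
    """Total energy per unit mass: kinetic plus potential."""
    d = math.hypot(r[0], r[1])
    return 0.5 * (v[0]**2 + v[1]**2) - GM / d

def run(step, steps=5000, dt=0.01):
    r, v = (1.0, 0.0), (0.0, 1.0)  # a circular orbit for GM = 1
    for _ in range(steps):
        r, v = step(r, v, dt)
    return energy(r, v)
```

Running both and comparing the final energy to the exact value (-0.5 for these initial conditions) shows explicit Euler spiralling outward while the semi-implicit version stays close.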

Wow. Euler is terrible; the orbit is not even closed. The “Not Euler” method is pretty good and is quite close to what Unity does. Leapfrog is also pretty good and looks to be about the same as “Not Euler”.

The “Not Euler” method is more correctly called the Semi-Implicit Euler method. The key is that the velocity is advanced to the next time step before it is used. This seems like a weird mix – using the velocity from a time step ahead to update the position in this time step. Given how well this works, I was surprised that it does not come up in the scientific introductions to N-body integration (e.g. Moving Stars Around). I assume they go directly to Leapfrog because they are headed to more complex algorithms and Leapfrog is a useful segue. The Semi-Implicit Euler method shares an attribute with the Leapfrog integrator: they are both good for systems in which energy is conserved, like gravity. Technically they are called symplectic integrators, due to the way they preserve area in a phase space of position and velocity. Semi-Implicit Euler is the simplest symplectic integrator; Leapfrog/Verlet is a second-order symplectic integrator.
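For completeness, a kick-drift-kick Leapfrog step in the same sketch style (Python, with the same assumed point-mass acceleration as the other sketches; not the post’s actual C#):

```python
import math

GM = 1.0  # assumed gravitational parameter

def accel(r):
    d = math.hypot(r[0], r[1])
    return (-GM * r[0] / d**3, -GM * r[1] / d**3)

def leapfrog_step(r, v, dt):
    # Kick-drift-kick Leapfrog/Verlet: half kick to the velocity,
    # full drift of the position, then a second half kick.
    a = accel(r)
    v = (v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1])
    r = (r[0] + dt * v[0], r[1] + dt * v[1])
    a = accel(r)
    v = (v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1])
    return r, v
```

The second half kick uses the acceleration at the drifted position, which is what makes this second-order accurate.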

What is AddForce() doing?

It appears that Unity AddForce() is keeping up well with Semi-Implicit Euler and Leapfrog.

Can we figure out which one it is using?

Let’s take a look at how well the integrators are conserving energy. It’s straightforward to determine the kinetic and potential energy at each time step, and with a second camera in the Unity scene we can do a very basic plot of energy vs. time by using energy and time to position three new objects in the view of the second camera. Here’s the result with the default time step for FixedUpdate:

The energy errors for AddForce() and Semi-Implicit Euler are overlaid on each other, while the Leapfrog method has a more stable energy. All of them maintain the initial energy except during the close approach to the central mass. Unity appears to be using Semi-Implicit Euler.
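The bookkeeping for the plot is simple; a Python sketch of the per-step energies (per unit mass, for a single central mass at the origin with an assumed G·M):

```python
import math

def energies(r, v, gm=1.0):
    """Kinetic and potential energy per unit mass for a body at position r
    with velocity v, orbiting a central mass at the origin (gm = G*M, assumed)."""
    d = math.hypot(r[0], r[1])
    kinetic = 0.5 * (v[0]**2 + v[1]**2)
    potential = -gm / d
    return kinetic, potential
```

The sum of the two should stay fixed; plotting its drift over time is what separates the integrators.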

Getting Better Orbits

What steps do we need to take to make the orbits show a single ellipse that does not precess? [Pun intended: changing the number of steps is exactly what we’ll need to do.] The Unity sample scene allows the numerical integrators to choose a smaller dt and then iterate more times per fixed update to stay on the same time scale as the AddForce() approach.
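The substepping can be sketched as a wrapper that takes any of the single-step integrators (Python; `step` here is a hypothetical single-step function of the kind shown in the other sketches):

```python
def fixed_update(r, v, dt, substeps, step):
    """Advance one FixedUpdate interval dt by taking `substeps` smaller steps
    of size dt/substeps, so the integrator covers the same total time as a
    single AddForce() step but with a finer time resolution."""
    h = dt / substeps
    for _ in range(substeps):
        r, v = step(r, v, h)
    return r, v
```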

Let’s try a factor of 10 reduction:

Those orbits look good. Notice that the energy error for SI Euler is much reduced and the error for Leapfrog is barely detectable.

The specific time step choice will depend on the maximum velocity of the orbit.

There is one detail about symplectic integrators worth knowing: the time step must stay constant, otherwise they do not conserve energy. There are other, higher-order integrators that allow the time step to change based on the maximum velocity in the system. Gravity Engine provides one such integrator. More sophisticated scientific code maintains a hierarchy of time steps and allows e.g. fast-moving inner planets to use a smaller time step than much slower outer planets. This is on the roadmap for Gravity Engine.

Other Considerations

The numerical code in the sample project is useful for some simple investigations but does not really scale. A scene with 10 bodies would require each of them to know about the other 9 to compute the total gravitational force. There would also be wasted work, since the force of 1 on 2 is equal and opposite to the force of 2 on 1, and both would be calculated. If you plan to do many-body gravity then having a central engine to do it is more effective.
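A central engine can compute each pair once and apply the result with opposite signs; a 2D Python sketch (illustrative units, G = 1 assumed):

```python
import math

def pairwise_accels(positions, masses, g=1.0):
    """Accelerations for all bodies, visiting each pair (i, j) once and
    using Newton's third law to update both -- N*(N-1)/2 force
    evaluations instead of N*(N-1)."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            d3 = math.hypot(dx, dy) ** 3
            # i is pulled toward j; j gets the equal and opposite pull
            acc[i][0] += g * masses[j] * dx / d3
            acc[i][1] += g * masses[j] * dy / d3
            acc[j][0] -= g * masses[i] * dx / d3
            acc[j][1] -= g * masses[i] * dy / d3
    return acc
```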

I recently spent a week as the dumbest guy in the room at the 2017 Atlantic General Relativity conference in St. John’s, Newfoundland. The last relativity meeting I attended was in 1996 when I was a graduate student finishing my PhD in general relativity at Queen’s University. Since then I have collected and browsed GR books – but only in the last six months have I made a more serious effort to re-learn the subject. Upon seeing an announcement for the conference I decided I would take a week off from software development and go see what the GR crowd was up to.

I did have an “in”. As a graduate student I co-developed GRTensorII, a software package for Maple that GR researchers found useful. Recently, I updated this package so that I could use it as I re-learned the subject and because I had heard others were struggling with the stale software on more modern versions of the Maple computer algebra platform that it runs on. Hopefully, this would give me some credibility and something to talk about at the conference.

Then I found myself in a lecture room with equations galore. The first day had introductory lectures by post-docs to provide background knowledge for the graduate students attending the conference. These were a great refresher for me. I was able to follow the ideas in each of them – so it seemed things might be okay. At one point a post-doc pointed out that GRTensorIII had been updated. At the coffee break several of the post-docs figured out who I was and thanked me for updating the software.

The personal high point was during the invited lectures when Eric Poisson was giving his talk on new characteristics of tidally distorted neutron stars. He remarked “At this point I have to thank Peter and GRTensorIII for making this calculation something I could do in an afternoon – since otherwise it might not have been completed”. That one sentence made the whole trip worthwhile. I continued to get positive feedback over the five days of the conference and my talk about GRTensorIII was well received.

It was interesting to see where the research frontier is and to get some perspective on some of the recent controversies such as firewalls and inflation (short answer, neither had much support). I asked Eric what had been “big” in the last 20 years. He pointed to the proofs of black hole stability. This conference included three lectures on the foundations of this work presented by Stefanos Aretakis. He outlined the 500 page proof of the global non-linear stability of Minkowski space. His enthusiasm and deep grasp of the material were captivating as he conveyed the essential ideas behind the proof. He made the stability of non-linear PDEs almost interesting!

Gravitational radiation was also a big topic. LIGO has made this very topical and the classical GR researchers are building analytic methods to supplement the numerical results. Talks on energy at infinity, measures of mass and horizon deformation were the ones I was most able to follow – since they rely on the kind of classical calculations I was familiar with.

There was also a lot of quantum gravity content here – which I simply did not follow. I don’t even remember how to quantize a hydrogen atom, never mind a spacetime. This did pique my interest (I foresee a book purchase in my future) but I also got a sense of people exploring the space of ideas because they can and it is publishable. However, this is “part of the game” and a great deal of doing research is going down blind alleys, until someone finds a way through.

I did hear some interesting talks about the process of physics, typically over pints at the end of the day. Many PhDs do multiple, very poorly paid post-docs and then often find a non-academic job at the end. I asked about whether limiting the supply of post-docs would make sense. The sentiment was that research was getting done, at bargain prices. There are clearly people who love doing the work and, like e.g. independent musicians, are willing to trade off income for the opportunity to do something they love.

As for me, I chose the other path. I elected for the more stable and better compensated life of a software developer. Over the past 20 years it has generally been satisfying work. Clearly part of me misses physics – otherwise I would not have gone somewhere where I was the dumbest person in the room.

My list of engrossing side projects gets longer year after year. This year, as I ticked over another birthday, I decided it was time to take some “me time”, choose a project from the list and make some serious progress. What was needed was a “sabbatical week” – a week off from work dedicated to the goal of making progress on a specific side project.

I have no shortage of side projects littering various Trello boards and entered on Mac “Stickies” on my desktop. I have tinkered with mobile apps and Unity assets of late. This time a ghost from my past became the focus. Long ago, I co-created a package, GRTensor, for the symbolic math package Maple to do calculations for general relativity (GR). It has continued to be used by a small group of relativists who find it useful for off-loading the horribly tedious calculations that come up in curved spacetimes. In the intervening years Maple has changed significantly and the package was becoming difficult to install and was basically unusable on Macs. I got back to GRTensor because I was going to use it to do some simple calculations for geodesics on two-surfaces for a Unity asset – but I discovered as a Mac user that this was not going to work out.

As sabbatical week approached I decided it would be fun to wake up the part of my brain that once knew something about GR, so I decided that rehabilitating GRTensorII into GRTensorIII would be the goal of the week off.

After exchanging a few emails with Maple, they agreed to extend me a multi-platform license so I could look into fixing up this mess. I contacted my collaborators and we pulled together the source code. Most of it was from 1996, with some updates “as recent as” 1999.

Oh, dear.

Initially I planned to allow a bit of diversity in my week and mix in some time on a second project, learning some new drum riffs and reading. Not to be. With a specific project and goal in mind, and only a week, I ended up coding 10 hours a day. I was able to zone in to the point where I would look up from my morning coffee to discover it was lunch time. This is the “magic” of a sabbatical week!

Twenty years ago, we built GRTensor around the idea that tensor objects in GR could be constructed algorithmically. If you decided you wanted to define a tensor Foo, then the package created variables for Foo and auto-generated code to calculate Foo based on the formula used to define it. LOTS of global variables.

In order to make GRTensor fit into the modern Maple package paradigm – with GRTensor as a Maple module and a good citizen in the ecosystem – the globals in the module needed to be defined and scoped, not created on the fly. This led to a LOT of fairly mundane changes to push the objects into a wrapper, which in turn is scoped. A bit of Python scripting made this not too terrible. Other quirks we exploited in the 90s no longer worked in modern Maple, and hunting these down took additional time.

Another chunk of work required changing all the user input prompts to adapt to the new dialog-based input used in Maple. Lots of interactive testing.

After blasting on this for a solid week – I now have a decent beta offering that will get tested by a few researchers. Hopefully GRTensorIII plus source code will be open for use by the end of the year.

I had a great time and got a big feeling of accomplishment from the week off. The daily sense of “flow” was intoxicating. Sabbatical week is an excellent idea – which I now plan to make an annual event.

I have developed an enthusiasm (aka weird obsession) for celestial mechanics and developed several games and the Unity asset Gravity Engine. In developing Gravity Engine I learned a great deal about high-pedigree N-body simulations, but in some cases (e.g. a model of the solar system) it is not really necessary to simulate the system, just to evolve it in the correct way. In this case Gravity Engine offers the option to use Kepler’s equation and move bodies in their elliptical orbits as a function of time. This uses far less CPU than doing the 9*8 mutual gravitational interactions (10*9, if you add in the “dwarf planet” Pluto).

[If you don’t see an animation, click the post title to see ONLY this post. There is WordPress JS bug when this animation and a YouTube link are present]

Creating code to move a planet in an elliptical orbit with the correct velocity is surprisingly tricky. This is one of those cases where you might expect you could just grab a formula from Wikipedia and bang out some code. This bias comes from all the examples in physics class where the goal is to “find/derive the formula” and get a tidy equation.

If you dig around on physics pages for an equation of an elliptical orbit you will generally encounter the equation for the shape of the orbit with eccentricity e and semi-major axis a:

$$r = \frac{a(1 - e^2)}{1 + e \cos{\theta_F}}$$

I have attached the subscript F to the angle to indicate this is the angle from the focus of the ellipse between the position of the body and the long axis of the ellipse. Historically, this angle is named the True Anomaly.

Where is time in this equation?

Nowhere. This equation doesn’t tell us anything about time.

To get an equation for how the object moves as a function of time, we’ll need Kepler’s equation. Kepler constructed his equation without calculus (which came along about 60 years after Kepler did this work) using geometric arguments and the assumption that an object’s speed in an elliptical orbit was inversely proportional to its distance from the focus. Kepler’s equation is:

$$M = E - e \sin{E}$$

Here M is the angle of a fictitious body moving in a uniform circle at a constant rate (which we will relate to time) and E, called the eccentric anomaly, is the angle measured from the center of the ellipse to the body’s position projected onto the circumscribing circle. Here a picture will help:

The eccentric anomaly E is NOT the same angle as the true anomaly (f in the picture), however they can be related with a bit of geometry:

$$\tan{\frac{\theta_F}{2}} = \sqrt{\frac{1+e}{1-e}} \, \tan{\frac{E}{2}}$$

If we have a specific time we want a position for, we need to convert this into a value of M. This is done by dividing the time by the time per orbit, T. Kepler can also help us here, with his third law relating the size of the orbit and the mass of the bodies to the period:

$$T = 2\pi \sqrt{\frac{a^3}{G m}}$$

where m is the combined mass of the central object and orbiting body. (Kepler did not know the proportionality constant was the mass, that came later).

Given M, Solve for E

Ok, we’ll just isolate E…hmmm. E appears by itself and inside the sine function. That sinks our chance of getting a tidy mathematical formula. This equation is legendary in mathematics, since it is an early example of a transcendental equation with an important application. It has been studied extensively, and the approaches are well summarized in the book “Solving Kepler’s Equation Over Three Centuries” by Peter Colwell.

There are some series approximations, but they are not valid for all eccentricities. The most common approach is to iterate the equation until we converge on a value that is “good enough”.

The “recipe” for tying this all together is:

Determine the orbital period T

For the time t we’re interested in, divide by the orbital period and use the remainder to find M, the angle of the body if it were moving in a uniform circle.

Iterate Kepler’s equation to solve for the eccentric anomaly E.

Convert E to the true anomaly and use the shape equation to find the distance r from the focus.

Use r and the true anomaly to place the body on the ellipse.
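The recipe can be sketched in a few lines of Python (Newton’s method on Kepler’s equation; the starting guess and tolerance are common conventions, not from the post):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration
    (elliptical orbits, 0 <= e < 1)."""
    E = M if e < 0.8 else math.pi  # a common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position_on_orbit(t, T, a, e):
    """(x, y) at time t on an ellipse with period T, semi-major axis a and
    eccentricity e, with the focus at the origin."""
    M = 2.0 * math.pi * (t % T) / T                # mean anomaly from time
    E = eccentric_anomaly(M, e)                    # eccentric anomaly
    # true anomaly via the half-angle relation between E and theta_F
    f = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                         math.sqrt(1 - e) * math.cos(E / 2))
    r = a * (1 - e * e) / (1 + e * math.cos(f))    # shape equation
    return r * math.cos(f), r * math.sin(f)
```

The atan2 form of the half-angle relation avoids quadrant headaches as E sweeps through a full orbit.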

Infinite Force? Forty-year-old FORTRAN to the rescue!

I continue to be fascinated by the complexity that comes from the simple problem of three masses interacting through gravity. Last year I released the ThreeBody app to demonstrate some of this complexity – challenging users to place three bodies so they would stay together. For bodies at rest this is probably impossible (although I am not aware of a proof). An early commenter asked exactly the right question: “Is the ejection of a body physical or an artifact of the simulation?”.

In the case of my app, in most cases it was an artifact of the simulation. I have been on a journey to remove this artifact and better demonstrate that it is STILL very hard to find solutions that stay together and this is now purely because of the physics and not the implementation.

The result is a significant reboot of ThreeBody, one that allows velocities to be added to the bodies and as a bonus has a gallery of very cool known stable three body solutions.

Close Encounters Have Near-Infinite Force

The force of gravity scales as 1/r^2. Start with two bodies at rest a fixed distance apart, attracted by gravity. As they get close, r (the distance between them) gets small and 1/r^2 becomes HUGE. In a game simulation, applying a huge force for a short time step can result in an object moving a large distance, often far beyond the other object. In reality the pair would feel the same very big force restraining them as they move past the closest approach. If you think about energy conservation and ignore the collision, it is impossible for the two bodies to fly apart. They can only get as far apart as they started (assuming they started at rest). If two interacting bodies do fly apart, it is an artifact of the simulation not coping well with the very large forces at close approach.
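The energy argument can be written down directly. Two bodies released from rest a distance r_0 apart have total energy

$$E = \frac{1}{2} m_1 v_1^2 + \frac{1}{2} m_2 v_2^2 - \frac{G m_1 m_2}{r} = -\frac{G m_1 m_2}{r_0} < 0$$

Since the kinetic terms are never negative, the separation r can never exceed r_0, and escape to infinity would require E ≥ 0. Any simulated pair that flies apart after a close encounter has gained energy from the integrator, not from the physics.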

Simulation artifacts have been a well known issue in gravitational simulations since the beginning of computer astronomy experiments in the 1960s. There are ways to transform (“regularize”) the co-ordinates and the forces so that the infinities do not arise during close encounters. This is commonly done in scientific-grade simulations but in game physics is not typically demanded (since the collision usually results in some form of destruction).

Since my app was trying to model these close encounters, it needed a higher pedigree solution.

As usual, I started by buying more physics books and downloading papers. This convinced me that I did need a regularizing algorithm, and also showed me that writing one from scratch would take some time. Since there is no substitute for running code, my next step was to look at what researchers were using and see if I could adapt it. There are some fantastic programs available (see references below) although many are instrumented for research, scaled for many masses and do not need to be concerned about real-time performance. These programs are BIG and generally written in FORTRAN or C, and I was hoping to continue to use C# within Unity.

I finally found the code TRIPLE from Aarseth and Zare, developed in FORTRAN in 1974. It was about one thousand lines. After several attempts using tools to convert FORTRAN to C to C#, and experiments building the code as a C library called from Unity, I decided that the simplest approach was just to transcribe the code by hand. As an added bonus I would gain a much deeper insight into how the algorithm worked. The code then needed a bit of re-arranging to meet the real-time needs of evolution during a graphical application, and changes in reference frame (since the algorithm operates in the center of mass frame).

ThreeBody now uses the TRIPLE engine and the encounters continue to be very fascinating – even more than before. For masses with no initial velocity it is still difficult to find long lived solutions. The full version of ThreeBody allows the bodies to be given initial velocities allowing even more solutions to be explored. The full version also allows a choice of integration engine; you can go back to Leapfrog to see just how different the results are and monitor the change in total energy – which indicates the error in integration. There is also a higher pedigree non-regularized Hermite integrator for comparison to Leapfrog.

A large gallery of very cool three-body solutions is now part of the app, ranging from solutions found by Euler and Lagrange in the 1770s to those found as recently as 2013. These are hypnotically beautiful – even though not all of them are stable.

For those who want to delve further an annotated reference section is provided.

One of the great things about meetups is their ability to generate weird idea exchanges. Late last year I demonstrated ThreeBody – another of my oddball, limited appeal physics apps. I was mentioning that it might be fun to see what happens to the three body problem in a spherical space – since this would solve the “ejection to infinity” issue by removing infinity. This led to a discussion about the topology of games like asteroids where the top/bottom and left/right are wrapped. This mapping creates a space that is a topological torus. This was the spark I needed to think more carefully about what motion on a real torus would be like.

I don’t need much of an excuse to think about an odd physics scenario. My past includes time spent taking a break from the working world and doing a PhD involving curved spacetimes. One of the problems that comes up in such studies as a way of sharpening skills is the study of motion on curved 2D surfaces. It is a great place to learn to use the mathematical machinery and intuitive enough that the answers can be visualized.

This made me wonder how different a space shooter would be if the physics of a torus was made “technically correct” (the best kind of correct). The result is a soon-to-be-released game “Geodesic Asteroids”, in which there is a dual view of the 3D motion and the 2D motion, with the paths of objects moving in a “technically correct” way.

The Torus

Figure 1: Torus coordinates

First we need a mathematical description of the torus. It is a 2D surface, so only two co-ordinates are needed to describe a point. If we think of the typical donut representation sitting on the x, y plane, then θ is the angle around the z-axis (say from the line x=0) and χ is the angle around the cross section of the torus at a given θ. The radius of the center of the torus is a and the radius of the cross section is b.

In the mathematics for a torus, everything can be described in terms of these co-ordinates, including the equations of motion that define how an object moves when there is no force on it. We are accustomed to thinking of the torus as a three dimensional object, but exactly which three dimensional shape results is partly due to the choice made when we embed the surface in 3D. There is really no reason we have to do this, but it can help our intuition. [Digression: It is an interesting mathematical question exactly how many dimensions are required to embed an N-dimensional surface; we are accustomed to N+1 for e.g. spheres and tori, but this is not the general rule!]

In our case we pick a customary embedding:

$$x = (a + b \cos{\chi}) \cos{\theta}$$

$$y = (a + b \cos{\chi}) \sin{\theta} $$

$$z = b \sin{\chi} $$

This allows us to map into the x, y, z world co-ordinates the game will use and retain our intuitive picture of a torus.
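The embedding is a one-liner per co-ordinate in code; a Python sketch (the radii a = 2, b = 1 are example values, not from the game):

```python
import math

def torus_to_world(theta, chi, a=2.0, b=1.0):
    """Map torus surface co-ordinates (theta, chi) to 3D world co-ordinates
    using the embedding above; a is the center radius, b the cross-section radius."""
    x = (a + b * math.cos(chi)) * math.cos(theta)
    y = (a + b * math.cos(chi)) * math.sin(theta)
    z = b * math.sin(chi)
    return x, y, z
```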

What about a 2D map of this torus? Here I elected to just use (θ, χ) as pure 2D co-ordinates, i.e. as if they were the display’s “x and y”. I impose a slight twist that I’ll explain in a minute.

Motion on the torus

The embedding equation provides the world space position to show events on the torus. What equations should be used to calculate motion on the torus? The motion takes place on the torus, so the description is in terms of (θ, χ). Here there is a departure from the usual equations that govern games. In the flat 3D game world x, y and z are independent and there is no need to worry about where in the space we are: a velocity in the x direction is the same at the origin as it is at any other point. On curved surfaces things are different. To make this clear let’s consider a simpler 2D surface, the sphere, with longitude φ and latitude θ.

As you know from looking at maps of airplane flights, the lines between points take what seem to be curves on the map. This is because the force-free (or geodesic) path that connects two points is a great circle. This means that if you are in New York and head directly west, the geodesic path must be a great circle and will carry you into the southern hemisphere until you are at the antipodal point directly opposite New York through the center of the earth. [Fun fact: very few places on land have an antipodal point on land]. You started out with only a velocity in the φ direction but ended up on a trajectory for which θ also changed. This means that the equations of motion are coupled: the equation for the evolution of θ depends on the velocity in the φ direction. In the case of a sphere the equations for force-free motion are:

$$\ddot{\theta} = -\sin{\theta} \cos{\theta} \, \dot{\phi}^2$$

$$\ddot{\phi} = 2 \tan{\theta} \, \dot{\theta} \, \dot{\phi}$$

(with θ measured as latitude from the equator).

The good news is that on the torus the co-ordinates (θ, χ) are well behaved. The expressions are finite. (If you look at the equations for the sphere you’ll see it is possible for some values to be dividing by zero. Never a great idea. This is not a fundamental part of motion on a sphere; there is no special place where acceleration goes to infinity. It’s just a bad choice of co-ordinates, and we need to choose a different origin in some cases. It is quite usual to have to cover a surface with several co-ordinate patches. For the torus we are lucky that this is not needed.)
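To make the comparison concrete, here is a Python sketch of force-free motion on the torus. The geodesic equations in the comments follow from the embedding above by the standard differential-geometry machinery; the semi-implicit update and the radii are illustrative choices, not the game’s actual code:

```python
import math

def torus_geodesic_step(theta, chi, dtheta, dchi, dt, a=2.0, b=1.0):
    """One semi-implicit Euler step of the force-free (geodesic) equations
    on a torus with radii a and b:
        d2(chi)/dt2   = -((a + b*cos(chi)) * sin(chi) / b) * (dtheta/dt)^2
        d2(theta)/dt2 =  (2 * b * sin(chi) / (a + b*cos(chi))) * (dchi/dt) * (dtheta/dt)
    Note (a + b*cos(chi)) never vanishes for a > b, so unlike the sphere
    there is no co-ordinate singularity to dodge."""
    acc_chi = -((a + b * math.cos(chi)) * math.sin(chi) / b) * dtheta**2
    acc_theta = (2.0 * b * math.sin(chi) / (a + b * math.cos(chi))) * dchi * dtheta
    dchi += acc_chi * dt
    dtheta += acc_theta * dt
    chi += dchi * dt
    theta += dtheta * dt
    return theta, chi, dtheta, dchi
```

A quick sanity check: a body started on the outer equator (χ = 0) with purely θ velocity stays on the outer equator, which is itself a geodesic.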

Where did those equations come from?

The equations for geodesics on a curved surface come from a branch of mathematics known as differential geometry. “Differential” since it studies the behaviour of surfaces that are smooth enough that you can do calculus, i.e. no sudden steps or cusps. One of the new ideas that enters with differential geometry is that while you can do calculus at each point, you cannot directly compare the results of these calculations at different points. This is because each point has its own tangent plane (more generally, tangent space). As an object slides along the 2D surface the local co-ordinates of the object change relative to the surface co-ordinates. Returning to our sphere example: if you were initially heading west from New York, then when your great circle crosses the equator you are no longer facing west, BUT you did not turn – you were just following your geodesic path. Exactly what the geodesic path is and how it turns as you go is something that can be calculated for a surface.

It’s not my goal to turn this post into a treatise on differential geometry. It is a very interesting subject and is the stage on which Einstein’s General Theory of Relativity takes place. All the introductory GR texts start with some background in differential geometry, and I find their approach more geared to my interests than the way math books approach the subject. For those interested in a good introduction to the geodesics of the torus and the basics of differential geometry, I recommend Jantzen’s paper. My favourite intro GR book is Schutz’s “A First Course in General Relativity”.

Differences between 2D and 3D representations

A space game on a torus can be shown in 3D but it is also interesting to play the game on the 2D representation of the surface. Here the odd paths of the objects make the impact of the curved surface apparent. There are some compromises in mathematical purity that I have made in the interests of actually finishing the game. The primary one is that the size of objects in the 2D and 3D view cannot be fully reconciled (more below).

On the true mathematical surface the distance travelled as you walk around the outside diameter is different from the inside diameter. On the 2D map this means that the top edge of the map should not be as wide as the middle. Our choice of a 2D map is therefore vulnerable to an effect similar to the one occurring on a Mercator projection of the Earth (Greenland is shown as far too big). I have chosen a simple scaling depending on the “latitude” χ. To carry this accuracy through, it is also necessary to change the horizontal size of the objects as they change latitude. I did put this in the game as an option – but the result is a bit odd looking. In normal play this mode is disabled – with the consequence that the objects on the screen are not exactly the correct size. The correct size is used for detecting bullet hits, and in some cases an apparent hit on the map is not a “real” hit.

True scaling leads to some weird paths in the map view. If an object is moving quickly in χ, the horizontal shrinking due to the changing scale factor can exceed its motion towards the edge, making it look as if the object backs up a bit before continuing on.

Geodesic Asteroids is available on Blackberry, iOS and Android. It has been great fun to think about and use the geodesic math I learned long ago. It is also the first time I have written a more “video game” style game. Hope you’ll find it interesting.

When I look at new physics books (which I do far too often) I get a “Flowers for Algernon” feeling. There was a time when I knew about this stuff. That knowledge has seeped away over the past fifteen years leaving me with a sense of loss and nostalgia. However, there is a trick I used in undergrad when I decided that I wanted to have the option of going to graduate school, that applies here. I am using it to get my physics kung-fu back.

First, some background.

I “speak” math with some fluency. In high school it was easy to pick up. I do not speak much else. Efforts to learn French, Spanish and Mandarin at various points in my life have all been tedious and I have never managed to get very far.

Although I speak math, I was a lousy student. In first year I was so close to the bottom of the class, that I doubt anyone below me returned for the next semester. I did manage to squeak up to being 30th percentile – and I would have told you I was working hard. What I told myself (after getting in to a good school) was that I was on a different curve now and that this was the new “level”. In reality I was looking at the material, “grazing the text”, nodding to myself that it all made sense and doing only the assigned questions as best I could.

In the middle of third year I decided it was time to open the door to grad school and, using my “trick”, I achieved an 85% average and made it onto the Dean’s list.

The “trick” is: Do EVERY question until you can do it PERFECTLY.

I didn’t say it was an easy trick.

It almost killed me.

For the entire term I was either in class or at my desk. I did every question multiple times. I filled in every gap in the derivation in the text books. I went back and repeated start to finish questions I had not gotten right the first time, until I could take a blank sheet of paper and get it right.

I learned by doing, using the material in the book to solve problems.

I got into grad school.

Back to the present…

There is a new physics textbook, “Gravity” by Eric Poisson and Clifford Will. This book is a tour-de-force, beginning with Newtonian gravity, detouring into shapes due to self-gravity and tidal forces, and three-body orbits, then heading into techniques to model general relativity and gravitational waves in situations where full-blown general relativity cannot give direct answers.

I have decided that I refuse to feel nostalgia and loss about this topic. I need to learn it.

Back to the trick. It is NOT any easier. After a fifteen-year break, I have spent more than a few hours reminding myself about div, grad, curl and lots of other forgotten math. The progress is slow, but not painful. Chapter 1, question 1 took me two weeks. I had to refresh my knowledge of vector calculus, derive some results in the text and figure out Stokes’ theorem. When I go back to the text with a specific problem in mind, I read far more carefully. As I feel knowledge seeping back I am excited to discover the connections and see where physics leads. There is a sheer joy in simply using my brain, much like the feeling I get from a long bike ride or XC ski. My justification is precisely the same – it’s challenging and it provides exercise for a part of my body.

The questions in chapter one have taken me a little over two months. Chapter two is going a bit better, since I now have some of my math skills back. I do not have as much time to focus on this as I would like, but the time I can put in is rewarding – and I can go off on detours to remind myself about Green’s theorem, or whatever it is I need to re-learn. Right now I am “stuck” on a line in chapter three that starts “expanding in terms of r/R we get”, and I need to go find out about Taylor series of vector functions.
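The prototype of that kind of expansion (my own refresher here, not the book’s exact line) is the Legendre expansion of the inverse separation, valid for r < R with θ the angle between r and R:

```latex
\frac{1}{|\mathbf{R}-\mathbf{r}|}
  = \frac{1}{R}\sum_{\ell=0}^{\infty}\left(\frac{r}{R}\right)^{\ell} P_{\ell}(\cos\theta)
  \approx \frac{1}{R} + \frac{r\cos\theta}{R^{2}}
  + \frac{r^{2}\left(3\cos^{2}\theta - 1\right)}{2R^{3}} + \cdots
```

Each term is a piece of the Taylor expansion of the vector function about r = 0, which is exactly the machinery the text takes for granted.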

I have returned to N-body, my first mobile effort, with the goal of making it cross-platform and adding new ideas. Given all my recent positive experiences with the Unity game engine, the idea is to port as much as possible and avoid “the rewrite”. (This has a very tempestuous history in software and is one of the things Joel Spolsky thinks you should never do.)

The good news for the small community who have enjoyed N-body is that moving it to Unity is going quite well, and I am FULL of cool ideas I want to add: gravitational fields for non-spherical objects, galactic potentials, dust accretion – it’s almost endless. The bad news is that I read a great sci-fi novel, “The Three-Body Problem”, by one of China’s top sci-fi writers, and it has caused me to set aside N-body for a while. “The Three-Body Problem” reminded me of the non-fiction book with the same title (you can see how that would happen). One of the things I made a note of when I read the more academic of these books was the static three-body problem. The question is: can three bodies be placed in initial conditions with zero velocity such that they will stay in a bound configuration? You might expect that, since gravity is attractive, they would “keep together”, perhaps with some cool triple orbits.

As I was reading the sci-fi 3BP I was feeling like I needed a break from N-body and figured I’d try a quick mock-up of the static three-body problem in Unity. As I gain more kung-fu with Unity (and using the leapfrog integrator I had ported for N-body), this kind of side project becomes very doable. I spent an afternoon doing the basics and found that there was a nice little physics “time waster” game here.

Shortly after, during my usual lunchtime perusal of arXiv, I found a paper about using GPUs for N-body simulations which mentioned replacing the three bodies with three binaries. Another addition for the ThreeBody app.

This was all about a month ago and since then I have added the minimum of “gamification” so others can take a swing at this problem on their mobile devices. It is weirdly addictive and a good way to see how sensitive the problem is to initial conditions. Hope someone out there also finds it fun.

You can find ThreeBody for Android and Blackberry (coming soon to Amazon and iOS). There is some further technical detail.

Unity 4.6 provides a fantastic new UI framework. I will never use the old OnGUI() again! I have been using the new framework for the past several months while Unity 4.6 was in beta. During this time I have collected a few tidbits on working with the framework.

Getting Started

The tutorials are very good and the information in the manual is getting better all the time. I found the tutorials on Canvas, RectTransform, Button and Event System to be the bare minimum needed to get the idea of the whole thing.

Adapting for Mobile Screen Sizes

This was one of my first “care-abouts”. There are two things that are important to know. The first is the anchor system for the layout of elements (see the RectTransform tutorial), but that is only part of the solution. The other crucial element is adding a CanvasScaler to your root canvas.

Once attached to a Canvas, the scaler can be given a reference resolution and you’re good to go. As the device screen size changes, your UI elements will scale appropriately.
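These settings are normally made in the Inspector, but the same setup can be sketched from script (the reference resolution here is just an illustrative value, not one from my projects):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ScalerSetup : MonoBehaviour
{
    void Awake()
    {
        // Assumes a CanvasScaler component on the same root-canvas GameObject.
        CanvasScaler scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(800f, 480f); // example value
        // 0 = match width, 1 = match height, 0.5 = blend of both.
        scaler.matchWidthOrHeight = 0.5f;
    }
}
```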

UI interactions with Scripts

In the C# world the Unity elements can be created and modified as you would expect. You will want one or both of the following includes:

using UnityEngine.UI;
using UnityEngine.EventSystems;

ColorBlock: Changing Button Colors

The ThreeBody game I am creating makes use of collections of buttons as radio boxes. I set the disabled and highlighted colors to reflect the color I want for unselected and selected. When it is time to enable a button, then the normal color of the button is set to the value in the highlighted color. It would be reasonable to expect that:

button.colors.normalColor = Color.white; // DOES NOT WORK!

would work. It does not – colors returns a copy of the ColorBlock struct, so the compiler will not even allow the assignment. Instead you need to make a temporary copy of the ColorBlock, modify its elements, and then copy the whole ColorBlock back:
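A minimal sketch of that copy-modify-assign pattern (assuming a Button reference named button):

```csharp
// ColorBlock is a struct, so button.colors returns a copy.
// Modify the copy, then assign the whole block back to apply the change.
ColorBlock cb = button.colors;
cb.normalColor = Color.white;
button.colors = cb;
```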

Touch Events and Buttons

Button touch events will still fall through into code that handles touches, so it becomes important to screen out those touches that are over top of buttons. A technique that works for both mouse and touch events is:
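One such technique is asking the EventSystem whether the pointer is over a UI element before acting on the input. A sketch (HandleWorldTouch is a hypothetical game-side handler, not from my code):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class TouchFilter : MonoBehaviour
{
    void Update()
    {
        // Mouse (editor/desktop): the parameterless overload checks the mouse pointer.
        if (Input.GetMouseButtonDown(0) &&
            !EventSystem.current.IsPointerOverGameObject())
        {
            HandleWorldTouch(Input.mousePosition);
        }

        // Touch (mobile): pass the fingerId so each touch is checked individually.
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began &&
                !EventSystem.current.IsPointerOverGameObject(touch.fingerId))
            {
                HandleWorldTouch(touch.position);
            }
        }
    }

    void HandleWorldTouch(Vector2 screenPos)
    {
        // Hypothetical: game-specific handling of input the UI did not consume.
    }
}
```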

Panel Fading and CanvasGroup

There is not a lot said about panels in the tutorials or online docs. They appear to be containers for holding a subset of the UI being developed. I found them useful for settings menus and high-score panes. In order to run a fade effect on these panels, you can add a CanvasGroup (which has an “alpha” field allowing direct control over fading). An example of such a fade coroutine is:
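A sketch of such a coroutine (the names here are my own, not from ThreeBody), with the active-status toggle placed so the panel stays visible during the fade:

```csharp
using System.Collections;
using UnityEngine;

public class PanelFader : MonoBehaviour
{
    // Fade a CanvasGroup's alpha from 'from' to 'to' over 'duration' seconds.
    public IEnumerator Fade(CanvasGroup group, float from, float to, float duration)
    {
        if (to > from)
            group.gameObject.SetActive(true); // fade-in: activate at the start

        float elapsed = 0f;
        while (elapsed < duration)
        {
            elapsed += Time.deltaTime;
            group.alpha = Mathf.Lerp(from, to, elapsed / duration);
            yield return null; // wait one frame
        }
        group.alpha = to;

        if (to <= 0f)
            group.gameObject.SetActive(false); // fade-out: deactivate at the end
    }
}
```

It would be kicked off with something like StartCoroutine(Fade(panelGroup, 1f, 0f, 0.5f)).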

Note that in my case I also change the panel’s active status. This needs to be done at the end of the fade-out or the start of the fade-in for the object to stay visible during the fade.

Off-Topic: Trail Renderers

This is not part of the new Unity UI theme, but as part of getting ThreeBody ready I had to learn a bit about TrailRenderers. I use these to leave paths behind the “stars” in ThreeBody so the orbital paths can be seen. I always had weird issues with setting materials and colors until I finally found some clear advice on the forums about the material choice. I need to choose a particle material, and the trails look best with a mobile/particle/vertex-lit shader on that material. To get solid colors I created a simple 128×128 solid-color PNG, imported as a texture.

Another tidbit is how to recycle objects once you have used trail generation. To get rid of the trail I ended up setting the trail time to -1, then counting five update cycles before setting the object inactive and putting it back into my object pool. I played around with coroutines for this but they ended up creating more complications – and a simple synchronous design won out.
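That synchronous recycle can be sketched like this (names and the pool hand-off are mine, assuming a TrailRenderer on the same object):

```csharp
using UnityEngine;

public class TrailRecycler : MonoBehaviour
{
    private TrailRenderer trail;
    private int recycleCountdown = -1;

    void Awake()
    {
        trail = GetComponent<TrailRenderer>();
    }

    // Call to begin recycling: a negative time flushes the existing trail.
    public void BeginRecycle()
    {
        trail.time = -1f;
        recycleCountdown = 5; // wait five update cycles before deactivating
    }

    void Update()
    {
        if (recycleCountdown > 0 && --recycleCountdown == 0)
        {
            gameObject.SetActive(false);
            // The object can now go back to the pool; restore trail.time
            // to its normal value when it is reused.
        }
    }
}
```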

One way I cling to the notion that I still know something about physics is to create oddball mobile physics apps. Since this is a side project, my goal is to spend most of the time on physics and less on time-intensive UI tuning. I have moved to Unity, and while learning it I decided to try to make a mobile game inspired by a spinning-magnet office toy I had had for years.

When I look at something like this I wonder what might happen if I could change the number of arms or the number of “propellers”, and a game/simulation seems like a good way to explore these ideas.

In my initial enthusiasm for Spinor I decided that adding leaderboards would be a good idea. It would encourage people to share their love of my awesome game and set me on the road to self-sufficiency as an indie developer. This meant that I needed to add code to track login state and provide options on the level-select screen and game-over screen. I found a way to shoehorn it in. What I had was “workable”, and I went ahead and submitted Spinor for Android, BB10 and iOS.

Google will take pretty much anything. Within hours my app was up on Google Play.

Amazon approved it.

Blackberry approved it.

Apple rejected it.

They rejected it on the basis that the level select and game over screens were “too ugly”.

They were right.
The level-select screen was one of those things that is not the fun part of developing an app, and I had just plopped in some touchable tiles and some hacky strings using the Unity OnGUI() approach. It *was* ugly. I then decided this was a great time to search the Unity Asset Store. I quickly discovered the Mad Level Manager and decided to grab it when it went on sale. The end result is a cleaner-looking game that is more presentable.

The chaos of magnetically interacting systems *is* cool – and I enjoyed watching some of the odd interactions that result.

Next time I’ll be submitting to Apple FIRST. They give the best feedback.