Luke Wilde | Freelance Web and Games Programmer (http://lukewilde.co.uk)

Multiplayer gaming at its core fosters replayability through competition. However, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model and it will return an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity; in other words, the fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.
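The regimes described above are easy to try out in a few lines of JavaScript. The function names and the sample values of R are mine, chosen to illustrate each behaviour, not taken from the post:

```javascript
// One step of the logistic map: x' = R * x * (1 - x)
function step(r, x) {
  return r * x * (1 - x)
}

// Iterate from a seed and return where the population ends up.
function settle(r, seed, iterations) {
  var x = seed
  for (var i = 0; i < iterations; i++) x = step(r, x)
  return x
}

console.log(settle(0.9, 0.5, 200)) // ~0: the population dies out
console.log(settle(2.5, 0.5, 200)) // ~0.6: stabilises at the fixed point 1 - 1/R
console.log(settle(3.2, 0.5, 200)) // lands on a 2-cycle, oscillating between ~0.513 and ~0.799
```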

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
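A minimal sketch of such a sequencer (createSequencer is my name for the helper, not something from the post):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Feeds each output back in as the next input, yielding one value per call.
function createSequencer(seed) {
  var x = seed
  return function next() {
    x = logisticMap(x)
    return x
  }
}

var next = createSequencer(0.1)
console.log(next()) // ≈ 0.36
console.log(next()) // ≈ 0.9216
```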

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from the others, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a near-identical pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
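Priming can be as simple as discarding the first N iterates before any values are used, so that two nearby seeds have already diverged. primedSeed is an illustrative helper, not the post's actual code:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Burn through the first `iterations` values and return where the orbit ends up.
function primedSeed(seed, iterations) {
  var x = seed
  for (var i = 0; i < iterations; i++) {
    x = logisticMap(x)
  }
  return x
}

var a = primedSeed(0.1, 100)
var b = primedSeed(0.1000001, 100)
// After 100 iterations, a and b bear little resemblance to one another.
```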

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).
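The linked write-up isn't reproduced here, but the general idea can be sketched: threshold each iterate to get a black or white pixel and lay the sequence out row by row. This ASCII rendering is for illustration only, not the author's actual image code:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Render the sequence as rows of "pixels": '#' for values >= 0.5, '.' otherwise.
function renderBitmap(seed, width, height) {
  var x = seed
  var rows = []
  for (var row = 0; row < height; row++) {
    var line = ''
    for (var col = 0; col < width; col++) {
      x = logisticMap(x)
      line += x < 0.5 ? '.' : '#'
    }
    rows.push(line)
  }
  return rows.join('\n')
}

console.log(renderBitmap(0.1, 32, 8))
```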

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels is. These dark pixel chains often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher, something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is actually rather apparent that both favour their relative extremities.
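This bias is easy to reproduce yourself by counting how many iterates land in the top or bottom decile. extremityRatio is an illustrative name; for comparison, a uniform generator would score around 0.2 here:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Fraction of iterates falling in the bottom decile (< 0.1) or top decile (> 0.9).
function extremityRatio(seed, iterations) {
  var x = seed
  var extreme = 0
  for (var i = 0; i < iterations; i++) {
    x = logisticMap(x)
    if (x < 0.1 || x > 0.9) extreme++
  }
  return extreme / iterations
}

console.log(extremityRatio(0.1, 20000)) // well above the ~0.2 a uniform generator would give
```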

Conclusion

It goes without saying that, with the above flaws, this arrangement of the logistic map in its current state isn't fit for any vaguely cryptographic application. Even its use in games is itself fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of each seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-map (Wed, 27 Aug 2014 23:00:00 GMT)

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a Common.JS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and adds vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string that cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr. Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are going to be required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

http://lukewilde.co.uk/blog/phaser-io-boilerplate (Thu, 18 Sep 2014 12:15:36 GMT)

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to let us dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best scoring (fittest) among them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The total Euclidean length of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
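A sketch of how such a score might be accumulated follows. The shapes of the node/edge data and the omitted penalty terms are illustrative; the post doesn't include its actual scoring code:

```javascript
// Euclidean length of a single edge between two positioned nodes.
function edgeLength(edge) {
  var dx = edge.to.x - edge.from.x
  var dy = edge.to.y - edge.from.y
  return Math.sqrt(dx * dx + dy * dy)
}

// Lower totals mean simpler networks.
function scoreFitness(network) {
  var score = 0
  network.edges.forEach(function (edge) {
    score += edgeLength(edge) // total connecting-line length
  })
  // ...plus penalties for overlapping nodes, off-canvas area,
  // line/line intersections, and line/node intersections, omitted here.
  return score
}

var net = {
  edges: [
    { from: { x: 0, y: 0 }, to: { x: 3, y: 4 } },
    { from: { x: 3, y: 4 }, to: { x: 3, y: 0 } }
  ]
}
console.log(scoreFitness(net)) // 9: a 3-4-5 edge (length 5) plus a length-4 edge
```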

Mutations

I couldn't think of a way to effectively breed between different networks, so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several island species which mutate in isolation. The fittest networks from each island would then be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.
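The island scheme can be sketched as follows, with evolveIsland standing in for the real GA loop and a toy one-dimensional "network" (a single number, fitness being its distance from zero) so the example runs on its own:

```javascript
// Evolve one population in isolation: each generation, keep the fittest
// candidate (elitism) and fill the rest of the population with its mutants.
function evolveIsland(population, generations, mutate, fitness) {
  for (var g = 0; g < generations; g++) {
    population.sort(function (a, b) { return fitness(a) - fitness(b) })
    var fittest = population[0]
    population = population.map(function () { return mutate(fittest) })
    population[0] = fittest // elitism: never lose the best candidate
  }
  return population[0]
}

// Toy stand-ins for the real network operators.
var fitness = function (x) { return Math.abs(x) }
var mutate = function (x) { return x + (Math.random() - 0.5) }

// Each island evolves from its own seed in isolation...
var islands = [10, -8, 6].map(function (seed) {
  return evolveIsland([seed, seed, seed, seed], 50, mutate, fitness)
})

// ...then the mainland generation starts from the islands' champions.
var mainland = evolveIsland(islands, 50, mutate, fitness)
console.log(mainland)
```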

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super-readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either gently curved or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm (Fri, 19 Aug 2016 23:00:00 GMT)

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output an oscillating voltage at various frequencies, something that's super useful for controlling motors and servos, and also for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF, so I had to use two 220nF capacitors in parallel)
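The 555 datasheet gives the astable-mode frequency as f = 1.44 / ((R1 + 2*R2) * C), and a tiny calculator along those lines shows why the substitutions above land on almost the same frequency. Treating the potentiometer at full resistance as R2 is my assumption here; the original program isn't shown in the post:

```javascript
// Astable 555 frequency from the datasheet formula: f = 1.44 / ((R1 + 2*R2) * C)
function astableFrequency(r1Ohms, r2Ohms, capFarads) {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * capFarads)
}

// Original values vs the substitutions listed above:
console.log(astableFrequency(1000, 470000, 10e-9)) // ≈153 Hz (1kΩ, 470kΩ pot, 10nF)
console.log(astableFrequency(270, 100000, 47e-9))  // ≈153 Hz (270Ω, 100kΩ pot, 47nF)
```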

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out the potentiometer's pins weren't making good contact with the pins of the breadboard, and the poor connection had managed to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident in my abilities, I wanted to move the circuit from my breadboard onto some perfboard, which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to make my second soldering attempt with some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, make it more resilient to short circuits, and look fucking legit.

http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-console (Sat, 28 Jan 2017 00:00:00 GMT)

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm (Fri, 19 Aug 2016 23:00:00 GMT)

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit (IC) called the 555 timer. These cheap and tiny beauties can be used to output an oscillating voltage at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity both to practise circuit building and to annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100 kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. Those potentiometers, however, were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this with a calculator, but there were too many moving parts, so I bashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:
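That tiny program isn't shown in the post, but the datasheet formula for a 555 in astable mode is simple enough to sketch. The component pairings below are illustrative rather than the actual Atari Punk Console wiring; the point is that swapping a pot and capacitor together can keep their R·C product, and therefore the frequency, roughly the same:

```javascript
// 555 astable mode, straight from the datasheet:
//   f = 1.44 / ((R1 + 2 * R2) * C)
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// Illustrative pairing: original parts vs substitutions on hand.
const original = astableFrequency(1000, 470e3, 10e-9)  // 1 kΩ, 470 kΩ pot, 10 nF
const substitute = astableFrequency(270, 100e3, 47e-9) // 270 Ω, 100 kΩ pot, 47 nF

// Both come out around 153 Hz, because 470 kΩ × 10 nF ≈ 100 kΩ × 47 nF.
console.log(original, substitute)
```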

470 kΩ potentiometer -> 100 kΩ potentiometer

1 kΩ resistor -> 270 Ω resistor

10 nF capacitor -> 47 nF capacitor

100 nF capacitor -> 470 nF capacitor (I didn't have a 470 nF, so had to use two 220 nF capacitors in parallel)

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out the potentiometer's pins weren't making good contact with the breadboard, and the resistance of the poor connection was generating enough heat to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it and proceeded with my second soldering attempt using some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, make it more resilient to short circuits, and make it look fucking legit.

http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-console (Sat, 28 Jan 2017 00:00:00 GMT)

Multiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize: a number between 0 and 1, representing the size of the population as a fraction of the environment's maximum carrying capacity.

birthAndDeathRate: a positive number, the combined rate of reproduction and starvation for the population.

The following demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.
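These three regimes are easy to reproduce yourself. Here's a quick sketch using the generalised map with R left as a parameter (function names are my own):

```javascript
// Generalised logistic map: x' = R * x * (1 - x)
function step(r, x) {
  return r * x * (1 - x)
}

// Run the model for n steps from a starting population size.
function iterate(r, x, n) {
  for (let i = 0; i < n; i++) x = step(r, x)
  return x
}

console.log(iterate(0.5, 0.1, 200)) // R too low: the population dies out (≈ 0)
console.log(iterate(2.8, 0.1, 200)) // stable: settles at 1 - 1/R ≈ 0.643
console.log(iterate(3.2, 0.1, 200),
            iterate(3.2, 0.1, 201)) // R > 3: flips between two fixed values
```

For 1 < R < 3 the stable value is the map's fixed point, 1 - 1/R, which is why the middle case settles where it does.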

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
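A minimal sketch of that feedback loop (the closure-based shape is my own, not necessarily how the original was written):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Wrap a seed in (0, 1) so that each call to next() feeds the
// previous result back into the map.
function createSequence(seed) {
  let x = seed
  return {
    next() {
      x = logisticMap(x)
      return x
    }
  }
}

const rng = createSequence(0.1)
console.log(rng.next()) // ≈ 0.36
console.log(rng.next()) // ≈ 0.9216
```

One caveat: seeds of exactly 0, 0.5, 0.75 or 1 land on fixed points of the map (0.5 maps to 1, then to 0 forever; 0.75 maps to itself), so they're best avoided.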

The Onset of Chaos

An important factor to consider is how much sequences generated from different seeds diverge from one another, irrespective of how numerically close those seeds are. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two sequences diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.

Results

For a quick and dirty implementation, this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-runner game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuances of failure in random number generation are generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are generally followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher, loosely resembling a knee in the line graph.
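You can trace this behaviour numerically. A "very light pixel", i.e. a value close to 1, maps to something very close to 0 and then climbs back up over the next few steps (0.98 here is just an arbitrary starting point):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Follow a few steps from a value close to 1.
let x = 0.98
const trace = [x]
for (let i = 0; i < 4; i++) {
  x = logisticMap(x)
  trace.push(x)
}

// 0.980 -> 0.078 -> 0.289 -> 0.822 -> 0.585: the plummet and slow climb.
console.log(trace.map(v => v.toFixed(3)).join(' -> '))
```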

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is actually rather apparent that both favour their extremities.
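The same analysis is easy to reproduce without Wolfram Alpha by bucketing 20,000 iterates into deciles:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Count how many of 20,000 iterates from x = 0.1 land in each decile.
const deciles = new Array(10).fill(0)
let x = 0.1
for (let i = 0; i < 20000; i++) {
  x = logisticMap(x)
  deciles[Math.min(9, Math.floor(x * 10))]++
}
console.log(deciles)
```

The two outer deciles take roughly 40% of the values between them, which fits the map's known invariant density, 1/(π√(x(1 - x))), a curve that blows up at 0 and 1.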

Conclusion

It goes without saying that, with the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic application. Even its use in games is itself fairly questionable, given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-map (Wed, 27 Aug 2014 23:00:00 GMT)

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and applies vendor prefixes automatically. Additionally, it lets us modularise our style sheets and break them out into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache-busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included, for convenience, a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src into build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it would arguably be easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them, so the primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best-scoring (fittest) networks from amongst them to form the next generation.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
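The original scoring code isn't shown in the post, but a stripped-down fitness function in the same spirit might look like this. The weights are arbitrary, node overlap and off-canvas checks are reduced to flat per-offence penalties rather than areas, and the fifth criterion (lines crossing nodes) is omitted for brevity:

```javascript
// Orientation-based test for whether segments p1-p2 and p3-p4 cross.
function segmentsIntersect(p1, p2, p3, p4) {
  const cross = (a, b, c) => (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x)
  const d1 = cross(p3, p4, p1)
  const d2 = cross(p3, p4, p2)
  const d3 = cross(p1, p2, p3)
  const d4 = cross(p1, p2, p4)
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0))
}

// Lower is fitter. nodes: [{x, y}], edges: [[i, j]] index pairs.
function fitness(nodes, edges, canvas = { w: 800, h: 600 }, radius = 20) {
  let score = 0

  // 1. Total Euclidean length of all connecting lines.
  for (const [i, j] of edges) {
    score += Math.hypot(nodes[i].x - nodes[j].x, nodes[i].y - nodes[j].y)
  }

  // 2. Overlapping nodes (flat penalty per overlapping pair).
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      if (Math.hypot(nodes[i].x - nodes[j].x, nodes[i].y - nodes[j].y) < 2 * radius) score += 500
    }
  }

  // 3. Nodes outside of the canvas.
  for (const n of nodes) {
    if (n.x < 0 || n.y < 0 || n.x > canvas.w || n.y > canvas.h) score += 500
  }

  // 4. Line intersections (flat penalty per crossing pair of edges).
  for (let a = 0; a < edges.length; a++) {
    for (let b = a + 1; b < edges.length; b++) {
      const [i, j] = edges[a], [k, l] = edges[b]
      if (i === k || i === l || j === k || j === l) continue // shared endpoint
      if (segmentsIntersect(nodes[i], nodes[j], nodes[k], nodes[l])) score += 1000
    }
  }

  return score
}
```

With penalties weighted like this, a layout whose edges cross scores far worse than an untangled layout of the same nodes, which is exactly the gradient the mutations need.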

Mutations

I couldn't think of a way to effectively breed different networks together, so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that survived past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: by shrinking the size of the network (scored via the Euclidean distance) and drawing all the nodes closer together, they left less and less room for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure, which could require, for example, that 4 nodes mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology was simplified, to boil down the dimensions of the network.
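The island scheme itself is independent of what's being evolved. As a sketch, with a toy real-valued genome and fitness standing in for the network and its scoring (both purely for illustration):

```javascript
// Toy individual: a number; toy fitness: distance from a target (lower is fitter).
// The network-specific parts are swapped out so the island structure stands alone.
const target = 42
const toyFitness = x => Math.abs(x - target)
const mutate = (x, rate) => x + (Math.random() - 0.5) * rate

function evolve(population, fitnessFn, mutationRate, generations) {
  for (let g = 0; g < generations; g++) {
    // Keep the fittest individual and fill the rest of the population
    // with mutated copies of it (asexual reproduction with elitism).
    const best = population.reduce((a, b) => (fitnessFn(a) <= fitnessFn(b) ? a : b))
    population = [best, ...population.slice(1).map(() => mutate(best, mutationRate))]
  }
  return population
}

// Phase 1: several islands evolve in isolation with a high mutation rate.
const islands = Array.from({ length: 4 }, () =>
  evolve(Array.from({ length: 20 }, () => Math.random() * 100), toyFitness, 10, 50)
)

// Phase 2: the fittest individual from each island seeds a single
// mainland population, evolved with a gentler mutation rate.
const seeds = islands.map(pop =>
  pop.reduce((a, b) => (toyFitness(a) <= toyFitness(b) ? a : b))
)
const mainland = evolve(seeds, toyFitness, 1, 50)

const champion = mainland.reduce((a, b) => (toyFitness(a) <= toyFitness(b) ? a : b))
console.log(toyFitness(champion))
```

Because each island explores independently, one bad starting configuration can't doom the whole run; the mainland only ever sees each island's best attempt.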

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super-readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance penalty was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, handles minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and applies vendor prefixes automatically. Additionally, it lets us modularise our style sheets into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes are covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src into build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with the boilerplate, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMT

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and another in some graphing software that allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the result. I'd do this a whole bunch of times, at a certain point taking the best scoring (fittest) among them and modifying them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score each network's fitness were as follows:

The total Euclidean length of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line-line intersections

Lines intersecting other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
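As a sketch, the scoring above might be combined like this. The node and edge shapes, the overlap maths, and the equal weighting are all my assumptions, and only the first three penalty terms are shown:

```javascript
// Hypothetical fitness: lower is better. Nodes are axis-aligned boxes
// ({x, y, w, h}); edges are straight lines between node origins.
function rectOverlapArea(a, b) {
  const w = Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x);
  const h = Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y);
  return Math.max(0, w) * Math.max(0, h);
}

function fitness(nodes, edges, canvas) {
  let score = 0;
  // Total Euclidean length of all connecting lines.
  for (const [a, b] of edges) {
    score += Math.hypot(nodes[a].x - nodes[b].x, nodes[a].y - nodes[b].y);
  }
  // Area of overlapping nodes.
  for (let i = 0; i < nodes.length; i++)
    for (let j = i + 1; j < nodes.length; j++)
      score += rectOverlapArea(nodes[i], nodes[j]);
  // Area of nodes outside of the canvas.
  for (const n of nodes)
    score += n.w * n.h - rectOverlapArea(n, { x: 0, y: 0, w: canvas.w, h: canvas.h });
  return score;
}
```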

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:
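As a generic illustration of such a mutation (my own sketch, not necessarily one of the actual categories), a random positional nudge might look like:

```javascript
// Hypothetical mutation: clone a layout and nudge one random node.
// The amplitude and which properties mutate are assumptions.
function nudgeMutation(nodes, amplitude = 30) {
  const copy = nodes.map((n) => ({ ...n }));
  const target = copy[Math.floor(Math.random() * copy.length)];
  target.x += (Math.random() * 2 - 1) * amplitude;
  target.y += (Math.random() * 2 - 1) * amplitude;
  return copy; // the parent layout is left untouched
}
```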

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that survived past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy: they were being hindered by cheap hits of fitness. By shrinking the size of the network (scored via the Euclidean distance) and drawing all the nodes closer together, there was less and less room for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure, which could require, for example, that 4 nodes mutate in unison to the other side of the network before it could untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if the networks could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.
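In outline, the island scheme looks something like this. The population sizes, selection rule, and the `evolve` interface are placeholders of my own:

```javascript
// Hypothetical island-model outline: evolve each island in isolation,
// then seed a mainland population with the fittest from every island.
// `evolve` runs one population for some generations and returns it
// sorted by ascending fitness; its internals are elided here.
function islandModel(spawn, evolve, islandCount = 5, islandSize = 50) {
  const champions = [];
  for (let i = 0; i < islandCount; i++) {
    const island = evolve(Array.from({ length: islandSize }, spawn));
    champions.push(island[0]); // lowest (best) fitness first
  }
  // Mainland: start from the champions and evolve them together.
  return evolve(champions)[0];
}
```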

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMT

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practise circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this with a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
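The tiny program amounted to evaluating the datasheet's astable-mode formulas. A minimal version (which resistor plays R1 versus R2 in the actual APC circuit is my assumption, so treat the example values as illustrative):

```javascript
// Standard 555 astable-mode timing, from the datasheet:
//   t_high = 0.693 * (R1 + R2) * C
//   t_low  = 0.693 * R2 * C
//   f      = 1.44 / ((R1 + 2*R2) * C)
function astable555(r1Ohms, r2Ohms, cFarads) {
  const tHigh = 0.693 * (r1Ohms + r2Ohms) * cFarads;
  const tLow = 0.693 * r2Ohms * cFarads;
  const frequencyHz = 1.44 / ((r1Ohms + 2 * r2Ohms) * cFarads);
  return { frequencyHz, tHigh, tLow };
}

// Comparing a candidate substitution against original component values
// (assuming the pot is R2 and the fixed resistor is R1):
astable555(1000, 470000, 10e-9).frequencyHz; // 1kΩ, 470kΩ pot, 10nF
astable555(270, 100000, 47e-9).frequencyHz;  // 270Ω, 100kΩ pot, 47nF
```

Plugging in the substitutions from the list above, the two frequencies come out within a fraction of a percent of one another, which is the whole point of the exercise.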

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out the potentiometer's pins weren't making good contact with the pins of the breadboard, which managed to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to begin my second soldering attempt with some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits and looking fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in Dynamical Systems Theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model, which returns an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 looks like a pretty good candidate for our randomisation. Simplifying the more verbose definition above, here is how the logistic map looks when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
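Concretely, the iteration looks like this (a minimal sketch; the seed 0.1 is just an example value):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

let x = 0.1 // our seed
x = logisticMap(x) // ≈ 0.36
x = logisticMap(x) // ≈ 0.9216
x = logisticMap(x) // and so on, ad infinitum
```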

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations they diverge and produce wildly differing results.
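A quick numerical check of this divergence (the seeds here are arbitrary picks of my own for illustration):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Iterate two nearly identical seeds side by side and track
// the largest gap that opens up between them.
let a = 0.1
let b = 0.1000001
let maxDiff = 0
for (let i = 0; i < 60; i++) {
  a = logisticMap(a)
  b = logisticMap(b)
  maxDiff = Math.max(maxDiff, Math.abs(a - b))
}
// Despite starting a millionth apart, the sequences end up
// differing by a large fraction of the whole [0, 1] range.
```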

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
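Priming is then just a matter of discarding the first 100 iterates before handing any values out (primedSequence is my own name for this sketch):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Burn through `priming` iterates so that sequences from
// nearby seeds have already diverged before first use.
function primedSequence(seed, priming) {
  let x = seed
  for (let i = 0; i < priming; i++) {
    x = logisticMap(x)
  }
  return function next() {
    x = logisticMap(x)
    return x
  }
}

const next = primedSequence(0.1, 100)
```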

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel of the image in greyscale to give an extra dimension to the visualisation.
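The greyscale mapping itself is just a linear scale from the generator's [0, 1] output to an 8-bit shade (my own minimal version, not necessarily the exact code behind the images):

```javascript
// Map a value in [0, 1] to an 8-bit grey level (0 = black, 255 = white).
function toGrey(value) {
  return Math.round(value * 255)
}
```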

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where, after almost reaching 1, the values plummet and then slowly climb, forming something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of the numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscale logistic map image, it is rather apparent that both favour their extremities.
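This skew is easy to reproduce yourself; counting how many of 20,000 iterates land in the outer deciles gives a rough check (my own quick version, mirroring the analysis above):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Count the fraction of iterates falling in the bottom or top decile.
let x = 0.1
let extremes = 0
const total = 20000
for (let i = 0; i < total; i++) {
  x = logisticMap(x)
  if (x < 0.1 || x > 0.9) extremes++
}
const fraction = extremes / total
// Roughly 40% of values land in the outer two deciles, double the
// 20% a uniform generator would produce.
```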

Conclusion

It goes without saying that, given the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic application. Even its use in games is itself fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of each seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented toolset for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, handles minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and applies vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets, breaking them out into separate documents to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append a token to the asset URLs:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src into build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network, so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at certain points taking and modifying the best scoring (fittest) among them to form the next generation of networks.
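The loop described above can be sketched as follows (all names here are mine; score is assumed to return lower-is-better fitness, and mutate to return a modified copy of a network):

```javascript
// Generational loop: rank candidates, keep the fittest,
// refill the population with their mutated offspring.
function evolve(initialPopulation, score, mutate, generations, survivors) {
  let population = initialPopulation
  for (let g = 0; g < generations; g++) {
    const ranked = population.slice().sort((p, q) => score(p) - score(q))
    const fittest = ranked.slice(0, survivors)
    population = fittest.concat(
      fittest.flatMap(n => [mutate(n), mutate(n)])
    )
  }
  return population.slice().sort((p, q) => score(p) - score(q))[0]
}
```

Because the fittest survive unchanged into the next generation, the best score can only improve over time.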

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The total Euclidean length of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
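As an illustration of the first factor (my own minimal version; the other factors would add further penalties onto the same score):

```javascript
// Sum the Euclidean length of every edge in the network.
function edgeLengthScore(nodes, edges) {
  return edges.reduce((total, [a, b]) => {
    const dx = nodes[a].x - nodes[b].x
    const dy = nodes[a].y - nodes[b].y
    return total + Math.hypot(dx, dy)
  }, 0)
}
```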

Mutations

I couldn't think of a way to effectively breed different networks together so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: by shrinking the size of the network (scored via the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure which could require, for example, 4 nodes to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several island species which mutate in isolation. The fittest networks from each island would then be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology was simplified, to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practise circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this with a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF, so had to use two 220nFs in parallel)
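The comparisons behind these substitutions used the standard astable-mode formula from the 555's datasheet, f = 1.44 / ((R1 + 2·R2) · C). My tiny program boiled down to something like this (a reconstruction, not the original code):

```javascript
// Approximate output frequency (Hz) of a 555 in astable mode,
// per the standard datasheet formula f = 1.44 / ((R1 + 2*R2) * C).
function astableFrequency(r1Ohms, r2Ohms, cFarads) {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * cFarads)
}
```

Plugging candidate resistor and capacitor values in makes it quick to find combinations that land near the original circuit's frequencies.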

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.
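The island scheme can be sketched with a toy problem, where each "network" is just a number and fitness is its distance from zero. The population sizes, generation counts, and champion-keeping strategy here are illustrative guesses, not the original parameters:

```javascript
// Evolve one population: keep the current champion, refill with its mutants.
function evolveIsland(population, mutate, fitness, generations) {
  let pop = population.slice()
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(a) - fitness(b))
    pop = [pop[0], ...pop.slice(1).map(() => mutate(pop[0]))]
  }
  return pop.sort((a, b) => fitness(a) - fitness(b))[0]
}

// Phase 1: several islands mutate in isolation. Phase 2: their champions
// seed a single mainland species, which is then evolved further.
function islandModel(islands, populationSize, mutate, fitness) {
  const champions = []
  for (let i = 0; i < islands; i++) {
    const seed = Array.from({ length: populationSize }, () => Math.random() * 100)
    champions.push(evolveIsland(seed, mutate, fitness, 200))
  }
  return evolveIsland(champions, mutate, fitness, 200)
}
```

In the real thing, phase 1 would also drop the size constraint and raise the mutation rate, reintroducing the constraint only for the mainland phase.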

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. It seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I bashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nFs in parallel)
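The calculation itself comes straight off the 555 datasheet: an astable 555 oscillates at f = 1.44 / ((R1 + 2·R2)·C), and a monostable pulse lasts t = 1.1·R·C. Here's a sketch of such a calculator, not the original program, and it assumes the pot and the fixed resistor sit in the astable's timing network as R2 and R1 respectively:

```javascript
// Datasheet formulas: astable frequency and monostable pulse width.
// Resistances in ohms, capacitances in farads.
const astableFrequency = (r1, r2, c) => 1.44 / ((r1 + 2 * r2) * c)
const monostablePulseWidth = (r, c) => 1.1 * r * c

// Original values vs. the substitutions, with the pot turned all the way up:
const original = astableFrequency(1000, 470e3, 10e-9)  // 1kΩ, 470kΩ pot, 10nF
const substitute = astableFrequency(270, 100e3, 47e-9) // 270Ω, 100kΩ pot, 47nF
console.log(Math.round(original), Math.round(substitute)) // both ≈ 153 Hz
```

The substitutions work because they roughly preserve the R·C products: 2 × 100kΩ × 47nF equals 2 × 470kΩ × 10nF exactly.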

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out the potentiometer's pins weren't making good contact with the pins of the breadboard, and the poor connection was burning it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidation on my soldering tip) the lead-free solder's melting point was too far out of reach.

Using a rotary tool equipped with a brass brush, I removed the layer of oxides from my soldering tip, tinned it, then proceeded to begin my second soldering attempt with some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits and looking fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate over the model, which returns an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity, i.e. the fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. Simplifying the more verbose definition above, here is how the logistic map looks when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
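Wrapped up as a generator (the wrapper name is mine), that feedback loop looks like this:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Returns a function which, on each call, feeds the previous result
// back into the logistic map and returns the new value.
function createSequence(seed) {
  let x = seed
  return () => (x = logisticMap(x))
}

const next = createSequence(0.1)
next() // ≈ 0.36
next() // ≈ 0.9216
```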

The Onset of Chaos

An important factor to consider is the level of divergence seeds have from one another, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the butterfly effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
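Priming is just a matter of discarding the first hundred values before handing the sequence out (again, the names here are my own):

```javascript
// Build a sequence generator that has already been iterated `primeCount`
// times, so two nearby seeds have diverged before any values are used.
function primedSequence(seed, primeCount = 100) {
  let x = seed
  for (let i = 0; i < primeCount; i++) x = 4 * x * (1 - x)
  return () => (x = 4 * x * (1 - x))
}

const a = primedSequence(0.1)
const b = primedSequence(0.1000001)
// a() and b() now produce visibly unrelated streams from the first call.
```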

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape could be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuances of failure in random number generation are generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. This follows from the map itself: a value of x close to 1 is sent by 4x(1 - x) to a value close to 0. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where, after almost reaching 1, the values plummet and then slowly climb higher, loosely resembling a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the grayscaled logistic map image, it is actually rather apparent that both instances favour their relative extremities.

Conclusion

It goes without saying that, due to the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Even its use in games is itself fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of each seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and adds vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network, so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, handles minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more from the final payload size.

Stylus also has a very minimal syntax, enables variables, and applies vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append a token to the asset URLs:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string that cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that let us visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
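The actual sample isn't reproduced here, but a config of the shape described, states with named routes leading to other states, might look something like this (all state and route names are hypothetical):

```javascript
// Hypothetical state-machine config: each state lists routes to other states.
const config = {
  initial: 'idle',
  states: {
    idle:    { routes: { start: 'loading' } },
    loading: { routes: { done: 'running', fail: 'error' } },
    running: { routes: { pause: 'idle' } },
    error:   { routes: { retry: 'loading' } }
  }
};

// For the layout problem, each state is a node and each route is an edge.
const edges = [];
for (const [from, state] of Object.entries(config.states)) {
  for (const to of Object.values(state.routes)) {
    edges.push([from, to]);
  }
}
```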

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the result. I'd do this a whole bunch of times, at a certain point taking and modifying the best scoring (fittest) amongst them to form the next generation of networks.

Scoring Fitness

To encourage nice, small, and simple networks, the factors I used to score a network's fitness were as follows:

The total Euclidean length of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
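The factors above can be sketched as a single scoring function (the penalty weights and node-radius parameter are illustrative, and the two intersection penalties are omitted for brevity):

```javascript
// Lower score = fitter network. Penalty weights here are placeholders.
function fitness(nodes, edges, canvas) {
  let score = 0;

  // 1. Total Euclidean length of all connecting lines.
  for (const [a, b] of edges) {
    score += Math.hypot(nodes[a].x - nodes[b].x, nodes[a].y - nodes[b].y);
  }

  const list = Object.values(nodes);

  // 2. Penalise nodes that fall outside the canvas.
  for (const n of list) {
    if (n.x < 0 || n.y < 0 || n.x > canvas.w || n.y > canvas.h) score += 100;
  }

  // 3. Penalise overlapping nodes (treated as circles of radius canvas.r).
  for (let i = 0; i < list.length; i++) {
    for (let j = i + 1; j < list.length; j++) {
      const d = Math.hypot(list[i].x - list[j].x, list[i].y - list[j].y);
      if (d < 2 * canvas.r) score += (2 * canvas.r - d) * 10;
    }
  }
  return score;
}
```

With two well-separated on-canvas nodes joined by one edge, the score reduces to just that edge's length.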

Mutations

I couldn't think of a way to effectively breed between different networks, so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:
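The three categories aren't reproduced above, but one plausible position mutation, nudging a randomly chosen node while keeping it on the canvas, might look like this (a sketch; the field names are assumptions):

```javascript
// Derive a mutated copy of a parent network by jittering one node's position.
function mutatePosition(network, canvas, magnitude) {
  const child = JSON.parse(JSON.stringify(network)); // asexual: clone the parent
  const names = Object.keys(child.nodes);
  const pick = names[Math.floor(Math.random() * names.length)];
  const node = child.nodes[pick];
  // Offset by up to ±magnitude/2, clamped to the canvas bounds.
  node.x = Math.min(canvas.w, Math.max(0, node.x + (Math.random() - 0.5) * magnitude));
  node.y = Math.min(canvas.h, Math.max(0, node.y + (Math.random() - 0.5) * magnitude));
  return child;
}
```

Cloning before mutating matters: the parent must survive unchanged so it can seed other children in the same generation.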

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy: they were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure, which could require, for example, that four nodes mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several island species which mutate in isolation. The fittest networks from each island would then be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology was simplified, to boil down the dimensions of the network.
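The island arrangement can be sketched generically. Here the network score is swapped for a toy fitness (minimise |x|) so the skeleton stays short, and elitism, always carrying the best candidate forward, guarantees no island ever gets worse:

```javascript
// Generic island-model skeleton with elitism: islands evolve in isolation,
// then the mainland is seeded with each island's champion.
function evolve(population, generations, mutate, fitness) {
  let pop = population.slice();
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(a) - fitness(b));
    const parent = pop[0]; // elitism: the best always survives
    pop = [parent].concat(
      Array.from({ length: pop.length - 1 }, () => mutate(parent))
    );
  }
  return pop.sort((a, b) => fitness(a) - fitness(b))[0];
}

const fitness = x => Math.abs(x);             // toy stand-in for the network score
const mutate = x => x + (Math.random() - 0.5);

const islands = Array.from({ length: 4 }, () =>
  Array.from({ length: 10 }, () => Math.random() * 20 - 10)
);
const champions = islands.map(isl => evolve(isl, 50, mutate, fitness));
const mainlandBest = evolve(champions, 50, mutate, fitness);
```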

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either gently curved or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I first wanted an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10: a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and also for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practise circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF capacitors in parallel)
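The tiny program itself isn't shown above, but the 555's standard astable formula from the datasheet makes the comparison easy to reproduce. Which resistor pairs with which capacitor in the Kaustic Machines circuit is my assumption here; taking the fixed resistor, the pot (at full travel), and one capacitor as a single astable stage, the substitutions land remarkably close to the original frequency:

```javascript
// Standard 555 astable frequency from the datasheet: f = 1.44 / ((R1 + 2 * R2) * C)
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c);
}

// Assumed pairing: 1 kΩ resistor + 470 kΩ pot + 10 nF cap...
const original = astableFrequency(1000, 470e3, 10e-9);   // ≈ 153 Hz
// ...versus the substitutes: 270 Ω + 100 kΩ pot + 47 nF cap.
const substitute = astableFrequency(270, 100e3, 47e-9);  // ≈ 153 Hz
```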

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out the potentiometer's pins weren't making good contact with the breadboard's contacts, which was managing to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident in my abilities, I wanted to move the circuit from my breadboard onto some perfboard, which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded with my second soldering attempt using some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits and looking fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in Dynamical Systems Theory. One of its uses is to predict population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model, which returns an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of its maximum capacity, i.e. the fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation within the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
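That feedback loop can be sketched in a few lines (the function names other than logisticMap are my own):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Feed each output back in as the next input to build a sequence.
function generateSequence(seed, length) {
  var sequence = []
  var x = seed
  for (var i = 0; i < length; i++) {
    x = logisticMap(x)
    sequence.push(x)
  }
  return sequence
}

// Every value produced stays within the interval (0, 1).
var values = generateSequence(0.1, 5)
```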

The Onset of Chaos

An important factor to consider is how much the sequences produced by different seeds diverge from one another, irrespective of the seeds' numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
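A sequencer primed this way might look like the following sketch (the createSequencer API and the second seed value are my own inventions for illustration):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Prime a sequence: run the map repeatedly and throw the results away,
// so that nearby seeds have already gone their separate ways before we
// start sampling.
function createSequencer(seed, primingIterations) {
  var x = seed
  for (var i = 0; i < primingIterations; i++) {
    x = logisticMap(x)
  }
  return function next() {
    x = logisticMap(x)
    return x
  }
}

// Two almost-identical seeds, primed by 100 iterations, should bear
// little resemblance to one another.
var nextA = createSequencer(0.1, 100)
var nextB = createSequencer(0.1000001, 100)

var maxDifference = 0
for (var i = 0; i < 10; i++) {
  maxDifference = Math.max(maxDifference, Math.abs(nextA() - nextB()))
}
```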

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.
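The gist of that visualisation can be sketched by mapping each generated value to a shade and emitting a plain-text PGM image (the format choice and function name here are mine, a minimal stand-in for the linked method):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Map a run of logistic-map values in (0, 1) onto grayscale pixels and
// emit a plain-text PGM image, one shade per generated number.
function toGrayscalePgm(seed, width, height) {
  var x = seed
  var pixels = []
  for (var i = 0; i < width * height; i++) {
    x = logisticMap(x)
    pixels.push(Math.floor(x * 255)) // 0 = black, 255 = white
  }
  return 'P2\n' + width + ' ' + height + '\n255\n' + pixels.join(' ')
}

var image = toGrayscalePgm(0.1, 64, 64)
```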

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos: after almost reaching 1, the values plummet and then slowly climb higher, loosely resembling a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the grayscaled logistic map image, it is actually rather apparent that both favour their extremities.
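That skew is easy to reproduce yourself by bucketing a run of iterates into deciles (a sketch; the 20,000 count mirrors the Wolfram Alpha sample above):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Count how many of 20,000 iterates land in each decile of (0, 1).
var deciles = new Array(10).fill(0)
var x = 0.1
for (var i = 0; i < 20000; i++) {
  x = logisticMap(x)
  deciles[Math.min(Math.floor(x * 10), 9)]++
}
// The first and last buckets dominate: the values pile up against
// 0 and 1 rather than spreading evenly across the interval.
```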

Conclusion

It goes without saying that, given the above flaws, this arrangement of the logistic map in its current state isn't fit for any vaguely cryptographic application. Even its use in games is fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. If multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented toolset for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and adds vendor prefixes automatically. Additionally, it lets us modularise our style sheets into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string that cache-busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src into build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that let us visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to let us dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best-scoring (fittest) networks from amongst them to form the next generation.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The total Euclidean length of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
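As a rough sketch of the idea, here's a scorer implementing two of those factors, total line length plus a penalty per crossing. The helper names and penalty weight are mine, not the original code:

```javascript
// Distance between two node positions.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y)
}

// Do two line segments properly cross? (Collinear overlaps and shared
// endpoints are ignored for brevity.)
function segmentsCross(p1, p2, p3, p4) {
  function orient(a, b, c) {
    return Math.sign((b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x))
  }
  return orient(p1, p2, p3) !== orient(p1, p2, p4) &&
         orient(p3, p4, p1) !== orient(p3, p4, p2)
}

// Lower is fitter: total edge length, plus a flat penalty per crossing.
function scoreFitness(nodes, edges, crossingPenalty) {
  var score = 0
  edges.forEach(function (e) {
    score += dist(nodes[e[0]], nodes[e[1]])
  })
  for (var i = 0; i < edges.length; i++) {
    for (var j = i + 1; j < edges.length; j++) {
      var a = edges[i], b = edges[j]
      if (segmentsCross(nodes[a[0]], nodes[a[1]], nodes[b[0]], nodes[b[1]])) {
        score += crossingPenalty
      }
    }
  }
  return score
}

// Four nodes on a unit square: one layout whose edges form an X, and
// one whose edges are parallel.
var nodes = [{ x: 0, y: 0 }, { x: 1, y: 1 }, { x: 0, y: 1 }, { x: 1, y: 0 }]
var tangled = scoreFitness(nodes, [[0, 1], [2, 3]], 100)
var untangled = scoreFitness(nodes, [[0, 3], [2, 1]], 100)
```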

Mutations

I couldn't think of a way to effectively breed different networks together so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that survived past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: shrinking the network (scored by the Euclidean distance) drew all the nodes closer together, leaving less and less room for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure which could require, for example, four nodes to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several island species which mutate in isolation. The fittest networks from each island would then be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology, and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I first wanted an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. Those potentiometers, however, were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
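The calculation itself boils down to the astable frequency formula from the 555 datasheet, f = 1.44 / ((R1 + 2 * R2) * C). Here's a sketch of such a program; which component plays which role in the APC circuit is my assumption here, not taken from the original schematic:

```javascript
// Astable 555 frequency from the datasheet formula:
// f = 1.44 / ((R1 + 2 * R2) * C), resistances in ohms, C in farads.
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// Compare the original component values (1k resistor, 470k pot, 10nF)
// against the substitutions (270R resistor, 100k pot, 47nF).
var original = astableFrequency(1000, 470000, 10e-9)
var substituted = astableFrequency(270, 100000, 47e-9)
```

Plugging the two sets of values in shows the substituted components land within a fraction of a percent of the original frequency, which is why the swap works.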

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.
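The island scheme can be sketched as a toy driver. Plain numbers stand in for real networks so it runs standalone; every name here (`evolve`, `islandModel`, the population sizes and generation counts) is illustrative, not the project's actual code.

```javascript
function evolve(population, generations, mutate, fitness) {
  for (let g = 0; g < generations; g++) {
    population.sort((a, b) => fitness(a) - fitness(b)) // lowest score is fittest
    const parents = population.slice(0, Math.max(1, population.length >> 2))
    // Asexual reproduction: keep each parent plus three mutated children.
    population = parents.flatMap(p => [p, mutate(p), mutate(p), mutate(p)])
  }
  return population.sort((a, b) => fitness(a) - fitness(b))[0]
}

function islandModel(randomIndividual, mutate, fitness, islands = 4, size = 40) {
  // Each island evolves in isolation...
  const champions = []
  for (let i = 0; i < islands; i++) {
    const pop = Array.from({ length: size }, randomIndividual)
    champions.push(evolve(pop, 100, mutate, fitness))
  }
  // ...then the fittest from each island seed a single mainland population.
  const mainland = champions.flatMap(c =>
    Array.from({ length: Math.ceil(size / islands) }, () => mutate(c)))
  return evolve(mainland, 100, mutate, fitness)
}

// Demo: "networks" are numbers, fitness is distance from zero.
const best = islandModel(
  () => Math.random() * 100,      // random starting individual
  x => x + (Math.random() - 0.5), // small mutation
  x => Math.abs(x)                // lower is better
)
console.log(best)
```

Even this toy version shows the shape of the idea: diversity comes from the isolated islands, and the mainland refines their champions.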

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm (Fri, 19 Aug 2016 23:00:00 GMT)

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal given the quality of the course!

After covering some basic components it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nFs in parallel)
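The tiny program I mention isn't shown here, but the datasheet formulas it would be built around are standard: f = 1.44 / ((R1 + 2·R2) · C) for astable mode, and t = 1.1 · R · C for a monostable pulse. A sketch along those lines, with the pots at full resistance; which resistor pairs with which pot is my assumption, not a full recreation of the APC circuit. Values are in ohms and farads.

```javascript
// Astable 555: output frequency in Hz.
const astableFreq = (r1, r2, c) => 1.44 / ((r1 + 2 * r2) * c)

// Monostable 555: pulse width in seconds.
const monostablePulse = (r, c) => 1.1 * r * c

// Original components vs the substitutions listed above:
const original = astableFreq(1000, 470e3, 10e-9) // 1kΩ + 470kΩ pot + 10nF
const swapped  = astableFreq(270, 100e3, 47e-9)  // 270Ω + 100kΩ pot + 47nF

console.log(original.toFixed(1) + ' Hz vs ' + swapped.toFixed(1) + ' Hz')
```

Note how the substitution works: scaling the pot down from 470kΩ to 100kΩ while scaling the capacitor up from 10nF to 47nF keeps the R·C product (and so the frequency) almost unchanged.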

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard, which managed to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to begin my second soldering attempt with some decent, flux-cored lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, make it more resilient to short circuits, and look fucking legit.

http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-console (Sat, 28 Jan 2017 00:00:00 GMT)

Multiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity; in other words, the fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation in the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from the others, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.
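The priming step is simple to sketch: burn through the first 100 iterates before handing out numbers, then confirm that two nearly identical seeds no longer resemble one another. The helper name primedSequencer is mine.

```javascript
function logisticMap(x) { return 4 * x * (1 - x) }

// Discard the first `warmup` iterates so the seed's influence has
// already dissolved into chaos by the time we start drawing numbers.
function primedSequencer(seed, warmup = 100) {
  let x = seed
  for (let i = 0; i < warmup; i++) x = logisticMap(x)
  return () => (x = logisticMap(x))
}

const a = primedSequencer(0.1)
const b = primedSequencer(0.1000001) // differs only in the 7th decimal place
let maxGap = 0
for (let i = 0; i < 20; i++) maxGap = Math.max(maxGap, Math.abs(a() - b()))
console.log('largest gap over 20 draws:', maxGap)
```

By the time the warmup is done, the two streams behave like unrelated sequences despite starting a millionth apart.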

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape could be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels is. These dark pixel chains often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher, forming something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that a disproportionate share of the numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is actually rather apparent that both seem to favour their relative extremities.
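This skew is easy to reproduce by tallying how often the iterates land in the outer deciles. At R = 4 the map's stationary density is known to be 1/(π√(x(1−x))), which piles probability mass at both ends, so the count comes out well above the 20% a uniform generator would give.

```javascript
function logisticMap(x) { return 4 * x * (1 - x) }

let x = 0.1, outer = 0
const N = 20000
for (let i = 0; i < N; i++) {
  x = logisticMap(x)
  if (x < 0.1 || x > 0.9) outer++ // landed in the bottom or top decile
}
console.log((100 * outer / N).toFixed(1) + '% of iterates fall in the outer deciles')
```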

Conclusion

It goes without saying that, given the above flaws, this arrangement of the logistic map in its current state isn't fit for any vaguely cryptographic application. Even its use in games is itself fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder: if multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-map (Wed, 27 Aug 2014 23:00:00 GMT)

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and applies vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network, so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, handles minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and applies vendor prefixes automatically. Additionally, it lets us modularise our style sheets and break them out into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr.doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src into build after the various processes have taken place. This makes build a skinny and completely independent version of your game, allowing effortless exporting of the application. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that let us visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it would arguably be easier all round to have the graph generate our JSON, but being smart now wasn't going to let us dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the result. I'd do this a whole bunch of times, at a certain point taking and modifying the best scoring (fittest) among them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, I scored each network's fitness on the following factors:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
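The scoring above can be sketched as follows. The data shapes are assumptions (the post's input sample isn't reproduced here), and only the first factor is computed in full, with the remaining penalties indicated as stubs:

```javascript
// A node has a position; an edge connects two node indices.
// Every factor adds to the score, so lower total fitness is better.
function euclideanLength(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function scoreFitness(nodes, edges) {
  let score = 0;
  // Factor 1: the Euclidean distance of all connecting lines.
  for (const [from, to] of edges) {
    score += euclideanLength(nodes[from], nodes[to]);
  }
  // The remaining factors (node overlap area, out-of-canvas area,
  // line/line intersections, line/node intersections) would each
  // contribute further penalty terms here.
  return score;
}

const nodes = [{ x: 0, y: 0 }, { x: 3, y: 4 }, { x: 3, y: 0 }];
const edges = [[0, 1], [1, 2]];
console.log(scoreFitness(nodes, edges)); // 5 + 4 = 9
```

Since the lowest score wins, every factor can simply be added on; weighting the penalties against one another is where the tuning happens.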

Mutations

I couldn't think of a way to effectively breed different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that survived past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.
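That island-then-mainland schedule can be sketched with a toy stand-in for the real network fitness; everything here (the |x| fitness, the rates, the population sizes) is illustrative rather than the actual code:

```javascript
// A toy stand-in for the real network fitness: minimise |x|.
function fitness(candidate) {
  return Math.abs(candidate);
}

// One run of asexual evolution: mutate the current best several
// times per generation, keeping any improvement.
function evolve(parent, mutationRate, generations) {
  let best = parent;
  for (let g = 0; g < generations; g++) {
    for (let i = 0; i < 10; i++) {
      const mutant = best + (Math.random() - 0.5) * mutationRate;
      if (fitness(mutant) < fitness(best)) best = mutant;
    }
  }
  return best;
}

// Islands evolve in isolation with a high mutation rate, then the
// fittest of them seeds a mainland run at a gentler rate.
const islandBests = [1, 2, 3, 4].map(seed => evolve(seed, 2.0, 50));
const fittestIsland = islandBests.reduce((a, b) =>
  fitness(a) < fitness(b) ? a : b
);
const mainland = evolve(fittestIsland, 0.2, 50);

console.log(fitness(fittestIsland), fitness(mainland));
```

Because each island explores independently, a bad starting structure on one island doesn't doom the whole run; the mainland only ever inherits the winners.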

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser cluster with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. The potentiometers, however, were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
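My tiny program isn't reproduced here, but the flavour of the calculation can be sketched from the standard 555 astable frequency formula, f ≈ 1.44 / ((R1 + 2·R2) · C). How the listed resistors, pots, and capacitors actually pair up inside the circuit is my assumption for the example:

```javascript
// Approximate output frequency of a 555 timer in astable mode:
// f = 1.44 / ((R1 + 2 * R2) * C)
function astableFrequency(r1Ohms, r2Ohms, cFarads) {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * cFarads);
}

// Original values: 1k resistor, 470k pot, 10nF timing capacitor.
const original = astableFrequency(1e3, 470e3, 10e-9);

// Substitutes: 270R resistor, 100k pot, 47nF capacitor.
const substitute = astableFrequency(270, 100e3, 47e-9);

console.log(original, substitute);
```

With this pairing both come out around 153Hz, within a fraction of a percent of each other, which is why substitutions like these can replicate the original output.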

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out the potentiometer's pins weren't making good contact with the breadboard's, which managed to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to make my second soldering attempt with some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This should ensure a long and healthy life for the device, making it more resilient to short circuits, and it looks fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model and it will return an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.
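In code, the verbose definition amounts to a one-liner; this is a sketch using the parameter names above:

```javascript
// Logistic map with the verbose parameter names defined above.
function logisticMap(birthAndDeathRate, populationSize) {
  return birthAndDeathRate * populationSize * (1 - populationSize)
}

// One step at R = 4, starting at half of maximum capacity.
const nextSize = logisticMap(4, 0.5) // 1
```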

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels, you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
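A minimal sketch of that feedback loop (the makeSequence wrapper is my own name, not from the post):

```javascript
// The logistic map tailored to R = 4, as defined above.
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Close over the current value and feed each result back in as the next x.
// (makeSequence is an illustrative name.)
function makeSequence(seed) {
  let x = seed
  return function next() {
    x = logisticMap(x)
    return x
  }
}

const next = makeSequence(0.1)
const first = next()  // 4 * 0.1 * 0.9  = 0.36
const second = next() // 4 * 0.36 * 0.64 = 0.9216
```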

The Onset of Chaos

An important factor to consider is how much the sequences produced by different seeds diverge from one another, irrespective of the seeds' numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a near-identical pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
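Priming just means discarding the first values before handing any out; a sketch (the wrapper name is mine):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Burn through `primeCount` iterations up front so that sequences started
// from nearby seeds have already diverged before the first value is used.
// (makePrimedSequence is an illustrative name.)
function makePrimedSequence(seed, primeCount) {
  let x = seed
  for (let i = 0; i < primeCount; i++) {
    x = logisticMap(x)
  }
  return function next() {
    x = logisticMap(x)
    return x
  }
}

// Two very close seeds produce unrelated values once primed.
const a = makePrimedSequence(0.1, 100)()
const b = makePrimedSequence(0.1000001, 100)()
```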

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-runner game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos: after almost reaching 1, the values plummet and then slowly climb, loosely resembling a knee in the line graph.
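The greyscale images can be produced with something along these lines (my reconstruction, not the original code):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// One byte per sequence value: scale [0, 1) onto 0..255 grey levels.
// Feeding the buffer into a canvas ImageData row by row gives the image.
function greyPixels(seed, count) {
  const pixels = new Uint8ClampedArray(count)
  let x = seed
  for (let i = 0; i < count; i++) {
    x = logisticMap(x)
    pixels[i] = Math.floor(x * 255)
  }
  return pixels
}

const pixels = greyPixels(0.1, 256 * 256) // one 256x256 image worth of values
```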

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is actually rather apparent that both seem to favour their relative extremities.

Conclusion

It goes without saying that, due to the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is itself fairly questionable given the algorithm's flavour, tending to prefer extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers; one of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and provides automatic vendor prefixing. Additionally, it lets us modularise our style sheets and break them out into separate documents, avoiding ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are going to be required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with the boilerplate, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
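As a hypothetical stand-in (the actual config was more involved, and all state names here are invented), the input has this general shape:

```javascript
// Hypothetical state-machine config: each state lists where it can route to.
const config = {
  idle:     { transitions: ['loading'] },
  loading:  { transitions: ['menu', 'error'] },
  menu:     { transitions: ['game'] },
  game:     { transitions: ['menu', 'gameOver'] },
  gameOver: { transitions: ['menu', 'game'] },
  error:    { transitions: ['idle'] }
}

// The network to untangle: one node per state, one edge per transition.
const edges = Object.entries(config).flatMap(([from, state]) =>
  state.transitions.map(to => [from, to])
)
```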

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best scoring (fittest) amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
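Two of those terms are enough to sketch how the scoring works; the penalty weight and helper names here are my own guesses, not the original values:

```javascript
// Minimal, runnable sketch of two fitness terms: total edge length plus a
// flat penalty per pair of crossing edges. Lower scores are fitter.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y)
}

// Proper segment-intersection test (shared endpoints don't count).
function segmentsCross(p1, p2, p3, p4) {
  const side = (x, y, z) => (z.x - x.x) * (y.y - x.y) - (y.x - x.x) * (z.y - x.y)
  const d1 = side(p3, p4, p1), d2 = side(p3, p4, p2)
  const d3 = side(p1, p2, p3), d4 = side(p1, p2, p4)
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0))
}

function fitness(nodes, edges) {
  let score = 0
  for (const [a, b] of edges) {
    score += dist(nodes[a], nodes[b])        // total Euclidean edge length
  }
  for (let i = 0; i < edges.length; i++) {
    for (let j = i + 1; j < edges.length; j++) {
      const [a, b] = edges[i], [c, d] = edges[j]
      if (segmentsCross(nodes[a], nodes[b], nodes[c], nodes[d])) {
        score += 100                         // penalty per crossing (my value)
      }
    }
  }
  return score
}

const layout = { A: {x: 0, y: 0}, B: {x: 3, y: 4}, C: {x: 0, y: 4}, D: {x: 3, y: 0} }
const score = fitness(layout, [['A', 'B'], ['C', 'D']]) // two crossing edges
```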

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: by shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure which could require, for example, four nodes to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.
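In GA terms this is the classic island model; a toy sketch of the scheme, with numbers standing in for networks and all names, rates, and population sizes invented for illustration:

```javascript
// Toy island-model GA: evolve isolated populations, then let the champions
// from each island compete and keep mutating together on the "mainland".
function evolve(population, fitness, mutate, generations) {
  let pop = population.slice()
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(a) - fitness(b))  // lower score is fitter
    const parents = pop.slice(0, Math.max(1, Math.floor(pop.length / 4)))
    // Keep each parent unchanged and add three mutants of it.
    pop = parents.flatMap(p => [p, mutate(p), mutate(p), mutate(p)])
  }
  return pop.sort((a, b) => fitness(a) - fitness(b))[0]
}

function islandModel(islands, fitness, mutate, generations) {
  const champions = islands.map(isle => evolve(isle, fitness, mutate, generations))
  return evolve(champions, fitness, mutate, generations)
}

// Demo: "individuals" are numbers, fitness is distance from a target of 42.
const fitness = x => Math.abs(x - 42)
const mutate = x => x + (Math.random() - 0.5)
const best = islandModel([[0], [10], [50]], fitness, mutate, 200)
```

Because each parent survives into the next generation, the best fitness can never get worse, so the mainland champion is at least as close to the target as the best starting individual.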

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super-readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, which is super useful for controlling motors and servos, and also for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. The potentiometers, however, were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF capacitors in parallel)
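The calculation itself comes straight from the datasheet's astable formula, f = 1.44 / ((R1 + 2*R2) * C); here's a sketch of the sort of program I mean (which resistor plays R1 versus R2 in this circuit is my assumption):

```javascript
// 555 astable output frequency per the datasheet: f = 1.44 / ((R1 + 2*R2) * C)
function astableFrequency(r1Ohms, r2Ohms, cFarads) {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * cFarads)
}

// Original values vs. my substitutions: 1kΩ + 470kΩ pot with 10nF, against
// 270Ω + 100kΩ pot with 47nF. Both land at roughly the same frequency.
const original   = astableFrequency(1000, 470e3, 10e-9)
const substitute = astableFrequency(270, 100e3, 47e-9)
```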

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out the potentiometer's pins weren't making good contact with the pins of the breadboard, and the poor connection was burning it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron. That, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to begin my second soldering attempt with some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.
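The island scheme is easier to see on a toy problem. Everything in this sketch (the ±1 jitter, the population sizes, minimising |x - 42| instead of scoring network layouts) is my own stand-in to show the shape of the idea, not the post's actual code:

```javascript
// Toy island-model GA: islands evolve in isolation, then their champions
// seed a single mainland population.
function mutate(x) {
  return x + (Math.random() - 0.5) * 2; // jitter by up to ±1
}

// Asexual hill-climb: keep the current champion and refill the population
// with mutated copies of it each generation.
function evolve(population, fitness, generations) {
  let pop = population.slice();
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(a) - fitness(b));
    const champion = pop[0];
    pop = [champion, ...pop.slice(1).map(() => mutate(champion))];
  }
  return pop[0];
}

const fitness = (x) => Math.abs(x - 42);

// Several islands evolve in isolation from independent random populations...
const champions = [];
for (let i = 0; i < 4; i++) {
  const island = Array.from({ length: 20 }, () => Math.random() * 100);
  champions.push(evolve(island, fitness, 200));
}

// ...then the champions are evolved together as the mainland population.
const winner = evolve(champions, fitness, 200);
console.log(winner); // converges towards 42
```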

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean-distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm (Fri, 19 Aug 2016 23:00:00 GMT)

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output an oscillating voltage at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console. It seemed like a great opportunity to both practise circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hacked out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
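The little program I mention above isn't shown, but its core is just the astable-mode frequency formula from the 555 datasheet. Here's a minimal sketch along those lines; the function name and the pairing of values are my own, with the component values taken from the substitution list above:

```javascript
// Astable-mode output frequency from the 555 datasheet:
// f = 1.44 / ((R1 + 2 * R2) * C)
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c);
}

// Original values: 1kΩ resistor, 470kΩ pot (at maximum), 10nF capacitor.
const original = astableFrequency(1000, 470e3, 10e-9);

// Substituted values: 270Ω resistor, 100kΩ pot, 47nF capacitor.
const substitute = astableFrequency(270, 100e3, 47e-9);

console.log(original.toFixed(1), substitute.toFixed(1)); // both ≈ 153.0 Hz
```

With the pots at full resistance, the two configurations land within a fraction of a hertz of each other, which is presumably why the substitution sounded right.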

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard, and the poor connection was enough to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to make my second soldering attempt with some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, make it more resilient to short circuits, and look fucking legit.

http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-console (Sat, 28 Jan 2017 00:00:00 GMT)

Multiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of the carrying capacity of the environment it is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. Simplifying the more verbose definition above, here is how the logistic map looks when tailored for my purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
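As a sketch of that feedback loop (createSequence is a name of my own choosing, not from the original post):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x);
}

// Wrap the map in a closure that feeds each result back in as the next x.
function createSequence(seed) {
  let x = seed;
  return function next() {
    x = logisticMap(x);
    return x;
  };
}

const next = createSequence(0.1);
console.log(next()); // 4 * 0.1 * (1 - 0.1) ≈ 0.36
console.log(next()); // 4 * 0.36 * (1 - 0.36) ≈ 0.9216
```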

The Onset of Chaos

An important factor to consider is how much the sequences produced by different seeds diverge from one another, irrespective of the seeds' numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
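Priming is just a warm-up loop that discards the first iterates. A quick sketch, using two nearby seeds of my own choosing to show the divergence as well:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x);
}

// Two nearby seeds, primed by discarding the first 100 iterates.
let a = 0.1;
let b = 0.1000001;
for (let i = 0; i < 100; i++) {
  a = logisticMap(a);
  b = logisticMap(b);
}

// By now the two streams bear no resemblance to one another, despite the
// seeds differing by only one part in a million.
console.log(a, b);
```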

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuances of failure in random number generation are generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels is. These dark pixel chains often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb, producing something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the grayscaled logistic map image, it is actually rather apparent that both instances favour their relative extremities.
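That clustering is easy to reproduce locally. A quick sketch that buckets 20,000 iterates into deciles (the counts will differ slightly from Wolfram Alpha's arbitrary-precision run, since this uses plain floating point):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x);
}

// Count n iterates into ten decile buckets: [0, 0.1), [0.1, 0.2), ...
function decileHistogram(seed, n) {
  const buckets = new Array(10).fill(0);
  let x = seed;
  for (let i = 0; i < n; i++) {
    x = logisticMap(x);
    buckets[Math.min(9, Math.floor(x * 10))] += 1;
  }
  return buckets;
}

console.log(decileHistogram(0.1, 20000));
// The first and last buckets dominate: values pile up near 0 and 1.
```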

Conclusion

It goes without saying that, due to the above flaws, this arrangement of the logistic map in its current state isn't fit for any vaguely cryptographic applications. Even its use in games is itself fairly questionable, given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-map (Wed, 27 Aug 2014 23:00:00 GMT)

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers; one of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and provides automatic vendor prefixing. Additionally, it lets us modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are going to be required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with the boilerplate, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, handles minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and applies vendor prefixes automatically. Additionally, it lets us modularise our style sheets and break them out into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes are covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

http://lukewilde.co.uk/blog/phaser-io-boilerplate (Thu, 18 Sep 2014 12:15:36 GMT)

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at certain points taking and modifying the best-scoring (fittest) networks from amongst them to form the next generation.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
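A sketch of how such penalties might combine into a single score (the weights and the simplified circle-node geometry here are illustrative assumptions, not the original scoring code):

```javascript
// Hypothetical fitness scorer: sums weighted penalties so that a lower
// score means a fitter (simpler) network layout.
function lineLength(line) {
  return Math.hypot(line.x2 - line.x1, line.y2 - line.y1);
}

function nodeOverlap(a, b) {
  // Approximate overlap of two circular nodes by how far their radii
  // intrude into one another (0 when they don't touch).
  const dist = Math.hypot(a.x - b.x, a.y - b.y);
  return Math.max(0, a.r + b.r - dist);
}

function scoreFitness(nodes, lines) {
  let score = 0;
  for (const line of lines) score += lineLength(line); // total line length
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      score += 10 * nodeOverlap(nodes[i], nodes[j]); // overlapping nodes
    }
  }
  // A full scorer would also penalise nodes outside the canvas,
  // line/line intersections, and lines crossing nodes.
  return score;
}
```

The generation loop then just sorts candidate layouts by this score and keeps the lowest.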

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure which could require, for example, four nodes to mutate in unison to the other side of the network before it would untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.
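The island-to-mainland flow can be sketched with a toy mutate-and-keep-the-elite loop. Here each individual is just a number and fitness is its distance from zero; the real individuals were whole network layouts, and the population sizes and mutation scheme below are illustrative assumptions:

```javascript
// Toy island-model GA: evolve several isolated populations, then seed a
// single mainland population with each island's champion.
const fitness = (x) => Math.abs(x); // lower is fitter

function mutate(x) {
  return x + (Math.random() - 0.5); // small random nudge
}

function evolve(population, generations) {
  let pop = population.slice();
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(a) - fitness(b));
    const elite = pop[0]; // elitism: the fittest always survives unmutated
    pop = [elite, ...pop.slice(1).map(mutate)];
  }
  return pop.sort((a, b) => fitness(a) - fitness(b));
}

function islandModel(islandCount, islandSize, generations) {
  const champions = [];
  for (let i = 0; i < islandCount; i++) {
    const island = Array.from({length: islandSize}, () => Math.random() * 20 - 10);
    champions.push(evolve(island, generations)[0]); // fittest per island
  }
  // The island champions grow a single mainland species.
  return evolve(champions, generations)[0];
}

const best = islandModel(4, 10, 50);
```

Because each island explores independently, a structural flaw that traps one island rarely traps them all, which is the property the post is after.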

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance penalty was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm (Fri, 19 Aug 2016 23:00:00 GMT)

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console. It seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
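The tiny program amounts to little more than the standard 555 astable-mode frequency formula from the datasheet, f = 1.44 / ((R1 + 2·R2) · C). A minimal sketch; note that treating the potentiometer as R2 alongside the fixed resistor R1 is my assumption about the circuit, not a detail from the post:

```javascript
// Standard 555 astable frequency: f = 1.44 / ((R1 + 2*R2) * C),
// with resistances in ohms, capacitance in farads, frequency in hertz.
function astableFrequency(r1Ohms, r2Ohms, cFarads) {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * cFarads);
}

// Compare one of the substitutions above, assuming the pot acts as R2:
// original: 1kΩ resistor, 470kΩ pot, 10nF capacitor
// substitute: 270Ω resistor, 100kΩ pot, 47nF capacitor
const original = astableFrequency(1000, 470e3, 10e-9);
const substitute = astableFrequency(270, 100e3, 47e-9);
```

Under that assumption the two combinations land within a fraction of a percent of each other, which is why the swap works.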

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out the potentiometer's pins weren't making good contact with the pins of the breadboard, and the resulting resistance was managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to begin my second soldering attempt with some decent, flux-cored lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This should ensure a long and healthy life for the device, making it more resilient to short circuits, and it looks fucking legit.

http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-console (Sat, 28 Jan 2017 00:00:00 GMT)

Multiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model to get an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.
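Putting the two together, one step of the logistic map in its general form is nextPopulation = R · x · (1 − x), where x is populationSize and R is birthAndDeathRate. A direct translation:

```javascript
// One step of the logistic map in its general (parameterised) form.
function logisticStep(populationSize, birthAndDeathRate) {
  return birthAndDeathRate * populationSize * (1 - populationSize);
}
```

The (1 − x) term is what models overcrowding: growth is fastest when the population is mid-sized and collapses toward zero as x approaches full capacity.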

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
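That feedback loop can be wrapped in a closure to give a seeded sequence generator. The generator shape here is my own sketch; the post only specifies feeding each result back in as the next x:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Returns a function which yields the next value in the sequence each
// time it is called, feeding every result back in as the next x.
function createSequence(seed) {
  let x = seed;
  return function next() {
    x = logisticMap(x);
    return x;
  };
}

const next = createSequence(0.1);
const first = next(); // 4 * 0.1 * 0.9 = 0.36
```

Two calls to createSequence with the same seed produce identical sequences, which is exactly the Daily Challenge property the post opens with.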

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from the others, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the butterfly effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
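The divergence that priming relies on is easy to demonstrate numerically (the seed values and iteration count below are arbitrary choices of mine):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Run two sequences from nearby seeds and track how far apart they drift.
function divergence(seedA, seedB, iterations) {
  let a = seedA;
  let b = seedB;
  let maxGap = 0;
  for (let i = 0; i < iterations; i++) {
    a = logisticMap(a);
    b = logisticMap(b);
    maxGap = Math.max(maxGap, Math.abs(a - b));
  }
  return maxGap;
}

// Seeds differing by one part in a hundred thousand end up wildly apart.
const gap = divergence(0.1, 0.100001, 100);
```

After priming, the initial correlation between nearby seeds has been erased, so handing out consecutive sequence values is safe even for similar seeds.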

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb, forming something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is rather apparent that both favour their relative extremities.

Conclusion

It goes without saying that, due to the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic application. Its use even in games is itself fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of each seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of the environment's carrying capacity.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.
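Expressed in code, the model described above looks something like this (my reconstruction, using the parameter names from the definitions — the original verbose snippet didn't survive):

```javascript
// The logistic map in its verbose form, using the parameters described
// above (a reconstruction; variable names follow the definitions given).
function logisticMap(birthAndDeathRate, populationSize) {
  return birthAndDeathRate * populationSize * (1 - populationSize)
}

// One step of the model: R = 4, starting population at 10% of capacity.
console.log(logisticMap(4, 0.1))
```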

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels, you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. Simplifying the more verbose definition above, here is how the logistic map looks when tailored for my purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
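As a minimal sketch (redefining logisticMap so the snippet stands alone), the iteration looks like this:

```javascript
// Iterate the map, feeding each result back in as the next x.
function logisticMap(x) {
  return 4 * x * (1 - x)
}

function sequence(seed, length) {
  const values = []
  let x = seed
  for (let i = 0; i < length; i++) {
    x = logisticMap(x)
    values.push(x)
  }
  return values
}

console.log(sequence(0.1, 3))
```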

The Onset of Chaos

An important factor to consider is how much the sequence from each seed diverges from the others, irrespective of the seeds' numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
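A sketch of that priming step, assuming a 100-iteration burn-in before any values are consumed:

```javascript
// Burn through the first iterations so that near-identical seeds have
// already diverged before we hand out any "random" values.
function logisticMap(x) {
  return 4 * x * (1 - x)
}

function primeSeed(seed, iterations) {
  let x = seed
  for (let i = 0; i < iterations; i++) {
    x = logisticMap(x)
  }
  return x
}

// Two seeds differing by one ten-millionth land in unrelated places:
console.log(primeSeed(0.1, 100), primeSeed(0.1000001, 100))
```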

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite runner game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb again, loosely resembling a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is rather apparent that both favour their extremities.

Conclusion

It goes without saying that, given the flaws above, this arrangement of the logistic map isn't fit for any vaguely cryptographic application in its current state. Even its use in games is fairly questionable given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, whether the flavour of each seed would give an organic feel to the end product. Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.

Credits

http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-map
Wed, 27 Aug 2014 23:00:00 GMT

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and handles vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, so you never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src into build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

http://lukewilde.co.uk/blog/phaser-io-boilerplate
Thu, 18 Sep 2014 12:15:36 GMT

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them, so the primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
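The original sample didn't survive the move to this page, so here's a hypothetical stand-in with the same shape (state names invented for illustration):

```javascript
// Each key is a node in the network; each transition is a connecting line.
const stateMachine = {
  idle:     { transitions: ['loading'] },
  loading:  { transitions: ['menu', 'error'] },
  menu:     { transitions: ['playing'] },
  playing:  { transitions: ['paused', 'gameOver'] },
  paused:   { transitions: ['playing', 'menu'] },
  gameOver: { transitions: ['menu'] },
  error:    { transitions: ['idle'] }
}

console.log(Object.keys(stateMachine).length + ' states')
```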

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best scoring (fittest) amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
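A simplified, self-contained sketch of such a scoring function, covering two of the five factors (the weights are assumptions, not the values I actually used):

```javascript
// Nodes are circles ({x, y, r}); edges are pairs of node indices.
function fitness(nodes, edges) {
  let score = 0
  // Total Euclidean length of all connecting lines.
  for (const [a, b] of edges) {
    score += Math.hypot(nodes[a].x - nodes[b].x, nodes[a].y - nodes[b].y)
  }
  // Flat penalty for each pair of overlapping nodes.
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const d = Math.hypot(nodes[i].x - nodes[j].x, nodes[i].y - nodes[j].y)
      if (d < nodes[i].r + nodes[j].r) score += 100
    }
  }
  return score // lower is fitter
}

const nodes = [{ x: 0, y: 0, r: 10 }, { x: 3, y: 4, r: 10 }]
console.log(fitness(nodes, [[0, 1]])) // prints 105: a short edge, but the nodes overlap
```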

Mutations

I couldn't think of a way to effectively breed different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that survived past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure, which could require, for example, that 4 nodes mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology was simplified, to boil down the dimensions of the network.
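The strategy, at a high level, looks something like this (an assumed structure rather than my actual code; selection here is a simple hill climb per island):

```javascript
// Evolve several isolated populations, then collect each island's
// fittest network to seed a single mainland species.
function islandSearch(randomNetwork, mutate, fitness, islands, generations) {
  const champions = []
  for (let i = 0; i < islands; i++) {
    let best = randomNetwork()
    for (let g = 0; g < generations; g++) {
      const candidate = mutate(best)
      if (fitness(candidate) < fitness(best)) best = candidate // lower wins
    }
    champions.push(best)
  }
  return champions // used to grow the mainland species
}

// Toy example: "networks" are numbers, and fitness is just their value.
console.log(islandSearch(() => 10, x => x / 2, x => x, 2, 3))
```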

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm
Fri, 19 Aug 2016 23:00:00 GMT

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, which is super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known as the Atari Punk Console, and it seemed like a great opportunity to both practise circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. The potentiometers, however, were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts, so I hacked together a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF capacitors in parallel)
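Those substitutions can be sanity-checked with the standard astable frequency formula from the 555 datasheet, f = 1.44 / ((R1 + 2 * R2) * C). My actual program was more involved; this sketch just shows the idea, and exactly which resistor and capacitor play which role in the APC circuit is my assumption here:

```javascript
// f = 1.44 / ((R1 + 2 * R2) * C), from the 555 datasheet's astable mode.
function astableFrequency(r1Ohms, r2Ohms, capFarads) {
  return 1.44 / ((r1Ohms + 2 * r2Ohms) * capFarads)
}

const original = astableFrequency(1000, 470000, 10e-9)  // 1kΩ, 470kΩ pot, 10nF
const substitute = astableFrequency(270, 100000, 47e-9) // 270Ω, 100kΩ pot, 47nF
console.log(original.toFixed(1) + ' Hz vs ' + substitute.toFixed(1) + ' Hz')
```

Both come out at roughly 153 Hz, which is why the substituted parts behave so similarly.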

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard, which had managed to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded with my second soldering attempt using some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits and looking fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, boiling down the dimensions of the network.
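A sketch of that island scheme is below: several populations evolve in isolation, then the fittest network from each island is collected to seed a single mainland species. The function names and parameters are my own stand-ins, not the post's actual code.

```javascript
// Hypothetical island-model GA skeleton. `spawn`, `evolve`, and `fitness`
// stand in for the real network creation, mutation, and scoring steps.
function evolveIslands({ islandCount, generations, spawn, evolve, fitness }) {
  const champions = []
  for (let i = 0; i < islandCount; i++) {
    let population = spawn()
    for (let g = 0; g < generations; g++) {
      population = evolve(population)
    }
    // Lowest total fitness wins, matching the scoring section above.
    champions.push(
      population.reduce((best, candidate) =>
        fitness(candidate) < fitness(best) ? candidate : best
      )
    )
  }
  return champions // these seed the mainland population
}
```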

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMT

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output an oscillating voltage at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console, and it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this, I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF capacitors in parallel)
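The tiny program itself isn't reproduced in the post; a sketch using the 555 datasheet's astable-mode frequency formula is below. Which physical component plays R1 and R2 in this circuit is my assumption, but it does show the listed substitutions landing on roughly the same frequency:

```javascript
// Approximate output frequency of a 555 in astable mode, per the datasheet:
//   f ≈ 1.44 / ((R1 + 2 * R2) * C)
// with resistance in ohms and capacitance in farads. Which part of the
// Atari Punk Console maps to R1/R2/C here is an assumption for illustration.
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

const original = astableFrequency(1000, 470e3, 10e-9) // 1kΩ, 470kΩ pot, 10nF
const swapped = astableFrequency(270, 100e3, 47e-9)   // 270Ω, 100kΩ pot, 47nF
// Both come out at roughly 153Hz, so the substitution holds up.
```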

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and were managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded with my second soldering attempt using some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMT

Multiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model, which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity, i.e. the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
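That feedback loop can be sketched as a tiny generator (the `createSequence` wrapper is my own framing of the description above):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Each call feeds the previous result back in as x, yielding the next
// number in the sequence.
function createSequence(seed) {
  let x = seed
  return () => (x = logisticMap(x))
}

const next = createSequence(0.1)
next() // ≈ 0.36
next() // ≈ 0.9216
```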

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from the others, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations, the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
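That priming step might be sketched as below, comparing two nearby seeds after discarding the early, still-correlated part of the sequence (the 100-iteration figure comes from the paragraph above; the helper names are my own):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Run the map `iterations` times before using any values, so the early,
// still-correlated part of the sequence is thrown away.
function primedValue(seed, iterations) {
  let x = seed
  for (let i = 0; i < iterations; i++) x = logisticMap(x)
  return x
}

const a = primedValue(0.1, 100)
const b = primedValue(0.1000001, 100)
// Despite near-identical seeds, a and b bear little resemblance by now.
```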

Results

For a quick and dirty implementation, this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability, I drew each pixel in the image in greyscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are generally followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels is. These dark pixel chains often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher, something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that, due to the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is itself fairly questionable given the algorithm's flavour, tending to prefer extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMT

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers; one of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and applies vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.
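The sample preloader didn't survive into this feed, so the token text (`@@hash`) and hash value below are hypothetical; the sketch just shows the shape of the build-time substitution the paragraph describes:

```javascript
// Hypothetical sketch of the build-time substitution: a token at the end of
// an asset URL is swapped for a query string derived from the file's content
// hash. The token text and hash here are inventions, not the boilerplate's.
function cacheBust(url, hashFor) {
  const TOKEN = '@@hash' // placeholder marking the asset for cache busting
  if (!url.endsWith(TOKEN)) return url
  const path = url.slice(0, -TOKEN.length)
  return path + '?v=' + hashFor(path)
}

const hashFor = () => '3f2a9c1' // stand-in for hashing the file contents
cacheBust('images/player.png@@hash', hashFor) // → 'images/player.png?v=3f2a9c1'
```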

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are going to be required to run the production build, and I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMT

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, with varying routes between them, so the primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
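The original sample didn't survive into this feed; a hypothetical configuration of the kind described, with each state listing the states it routes to, might look like (all state names and routes are invented for illustration):

```javascript
// Hypothetical stand-in for the JSON input: each state maps to the list of
// states it can route to. Names and routes are inventions, not the real data.
const stateMachine = {
  idle:    ['loading'],
  loading: ['menu', 'error'],
  menu:    ['playing'],
  playing: ['paused', 'menu'],
  paused:  ['playing'],
  error:   ['idle']
}

// Every route should point at a known state.
const states = Object.keys(stateMachine)
const valid = Object.values(stateMachine).every(
  routes => routes.every(target => states.includes(target))
)
```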

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.
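A minimal version of this wiring can be expressed as npm scripts. To be clear, the entry point src/game.js, the output paths, and the version numbers below are illustrative stand-ins, not the boilerplate's actual configuration:

```json
{
  "scripts": {
    "watch": "watchify src/game.js --debug -o build/game.js",
    "build": "browserify src/game.js | uglifyjs --compress --mangle -o build/game.min.js"
  },
  "devDependencies": {
    "browserify": "^17.0.0",
    "watchify": "^4.0.0",
    "uglify-js": "^3.17.0"
  }
}
```

The `--debug` flag gives you source maps during development, while the production script pipes the bundle through a minifier.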

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables, and provides automatic vendor prefixing. Additionally, it lets us modularise our style sheets and break them out into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with the boilerplate, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

http://lukewilde.co.uk/blog/phaser-io-boilerplate (Thu, 18 Sep 2014 12:15:36 GMT)

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection: the very beast which delivered me into existence, engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that let us visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states by varying routes, so the primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at certain points taking and modifying the best-scoring (fittest) networks from amongst them to form the next generation.
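That loop can be sketched as follows. This is a simplified single-survivor version, with fitness and mutate as stand-ins for the pieces described in the sections below:

```javascript
// One generation of the asexual scheme: score every candidate, keep
// the fittest (lowest score), and refill the population with mutated
// copies of it. fitness() and mutate() are stand-ins.
function nextGeneration(population, fitness, mutate, size) {
  const fittest = population.reduce(
    (best, candidate) => fitness(candidate) < fitness(best) ? candidate : best
  )
  const next = [fittest] // keep the parent itself unchanged (elitism)
  while (next.length < size) {
    next.push(mutate(fittest))
  }
  return next
}
```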

Scoring Fitness

Intending to create nice, small, simple networks, I scored each network's fitness on the following factors:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
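Two of those terms are concrete enough to sketch here: total Euclidean edge length, plus a per-crossing penalty. The weight of 100 is my guess, and the remaining terms (overlap area, off-canvas area, line-through-node hits) are omitted:

```javascript
// Partial fitness sketch: lower is better. Nodes are {x, y} points,
// edges are [fromIndex, toIndex] pairs.
function length(a, b) {
  return Math.hypot(b.x - a.x, b.y - a.y)
}

// Standard orientation test used for segment intersection.
function cross(o, a, b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x)
}

function segmentsIntersect(p1, p2, p3, p4) {
  const d1 = cross(p3, p4, p1)
  const d2 = cross(p3, p4, p2)
  const d3 = cross(p1, p2, p3)
  const d4 = cross(p1, p2, p4)
  // Proper crossings only: edges that merely share an endpoint don't count.
  return d1 * d2 < 0 && d3 * d4 < 0
}

function fitness(nodes, edges) {
  let score = 0
  for (const [a, b] of edges) {
    score += length(nodes[a], nodes[b]) // total Euclidean edge length
  }
  for (let i = 0; i < edges.length; i++) {
    for (let j = i + 1; j < edges.length; j++) {
      const [a, b] = edges[i], [c, d] = edges[j]
      if (segmentsIntersect(nodes[a], nodes[b], nodes[c], nodes[d])) {
        score += 100 // heavy penalty per line crossing (weight is a guess)
      }
    }
  }
  return score
}
```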

Mutations

I couldn't think of a way to effectively breed between different networks, so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint later, once the topology had been simplified, to boil down the dimensions of the network.
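The island scheme can be sketched like this, with spawn, evolve, and fitness standing in for the machinery described above:

```javascript
// Island-model sketch: several populations evolve in isolation, then
// each island's fittest member is collected to seed a single mainland
// population. spawn(), evolve(), and fitness() are stand-ins.
function runIslands(islandCount, generations, spawn, evolve, fitness) {
  const champions = []
  for (let i = 0; i < islandCount; i++) {
    let population = spawn()
    for (let g = 0; g < generations; g++) {
      population = evolve(population)
    }
    // Keep each island's fittest (lowest-scoring) individual.
    champions.push(population.reduce((a, b) => fitness(a) < fitness(b) ? a : b))
  }
  return champions // seeds for the mainland species
}
```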

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance term was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm (Fri, 19 Aug 2016 23:00:00 GMT)

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hacked together a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
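The calculations come from the standard 555 datasheet formulas for astable frequency and monostable pulse width. A minimal calculator along those lines; the example component values are illustrative, not the exact APC circuit:

```javascript
// Standard 555 datasheet formulas. Resistances in ohms, capacitance in farads.
// Astable frequency: f = 1.44 / ((R1 + 2 * R2) * C)
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// Monostable pulse width: t = 1.1 * R * C
function monostablePulseSeconds(r, c) {
  return 1.1 * r * c
}

// e.g. a 100k pot at mid-travel (50k) with the 270R resistor and a 47nF cap:
console.log(astableFrequency(270, 50e3, 47e-9)) // ≈ 305.6 Hz
```

Swapping component values in and out of a function like this is much faster than re-punching the formula into a calculator for every candidate substitution.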

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out the potentiometer's pins weren't making good contact with the breadboard, and the resulting resistance was generating enough heat to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded to begin my second soldering attempt with some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model, which returns an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of its environment's carrying capacity.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels, you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short, regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap (x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
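A minimal sketch of that loop (the generateSequence helper is my own naming, not from the original post):

```javascript
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// Produce `length` iterates of the map, starting from a seed
// strictly between 0 and 1. Each output becomes the next input.
function generateSequence (seed, length) {
  var sequence = []
  var x = seed
  for (var i = 0; i < length; i++) {
    x = logisticMap(x)
    sequence.push(x)
  }
  return sequence
}

generateSequence(0.1, 5) // starts roughly 0.36, 0.9216, 0.289…
```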

The Onset of Chaos

An important factor to consider is the level of divergence between sequences from different seeds, irrespective of the seeds' numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
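Here is how that priming step might look as a sketch (createGenerator is my own naming):

```javascript
// A seeded generator based on the logistic map. The first 100
// iterates are discarded so that two nearby seeds have already
// diverged before any values are handed out.
function createGenerator (seed) {
  var x = seed
  function next () {
    x = 4 * x * (1 - x)
    return x
  }
  for (var i = 0; i < 100; i++) next() // priming
  return next
}

var random = createGenerator(0.1)
random() // deterministic: the same seed always replays the same sequence
```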

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.
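I'm not reproducing the exact drawing code behind these images, but mapping each iterate to an 8-bit grey level is a one-liner:

```javascript
// Map values in [0, 1] to grey levels in [0, 255].
// An illustrative sketch, not the exact code behind the images.
function toGreyscale (values) {
  return values.map(function (v) {
    return Math.min(255, Math.floor(v * 256))
  })
}

toGreyscale([0, 0.5, 0.999]) // → [0, 128, 255]
```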

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where, after almost reaching 1, the values plummet and then slowly climb higher, loosely resembling a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the grayscaled logistic map image, it is actually rather apparent that both favour their relative extremities.
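That skew is easy to reproduce yourself; a quick sketch tallying which tenth of the range each iterate lands in:

```javascript
// Count how many of n logistic-map iterates land in each decile.
function decileCounts (seed, n) {
  var counts = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
  var x = seed
  for (var i = 0; i < n; i++) {
    x = 4 * x * (1 - x)
    counts[Math.min(9, Math.floor(x * 10))]++
  }
  return counts
}

decileCounts(0.1, 20000) // the first and last bins dwarf the middle ones
```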

Conclusion

It goes without saying that, given the flaws above, this arrangement of the logistic map isn't fit for any vaguely cryptographic application in its current state. Even its use in games is itself fairly questionable given the algorithm's flavour, tending to prefer extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder: if multiple seeds were used for differing attributes, would the flavour of each seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers; one of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and applies vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, avoiding ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience I included a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software which allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them, so the primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
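The original sample didn't survive the trip into this page, so as a stand-in, here is a hypothetical config of the same shape (state names invented for illustration), where each state lists the states it can transition to:

```javascript
// Hypothetical state-machine config: keys are states, values are
// the states reachable from them.
var states = {
  idle: ['loading'],
  loading: ['menu', 'error'],
  menu: ['playing', 'settings'],
  settings: ['menu'],
  playing: ['paused', 'gameOver'],
  paused: ['playing', 'menu'],
  gameOver: ['menu'],
  error: ['idle']
}
```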

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best-scoring (fittest) networks from amongst them to form the next generation.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
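As a sketch of that scoring, here's just the first factor, the total length of the connecting lines (names and sample data are mine; the real function also added the overlap and intersection penalties):

```javascript
// Sum the Euclidean lengths of every edge in a candidate layout.
// Lower totals mean a tighter, and hopefully simpler, network.
function edgeLengthScore (nodes, edges) {
  return edges.reduce(function (total, edge) {
    var a = nodes[edge[0]]
    var b = nodes[edge[1]]
    var dx = a.x - b.x
    var dy = a.y - b.y
    return total + Math.sqrt(dx * dx + dy * dy)
  }, 0)
}

var nodes = { a: { x: 0, y: 0 }, b: { x: 3, y: 4 }, c: { x: 3, y: 0 } }
edgeLengthScore(nodes, [['a', 'b'], ['a', 'c']]) // → 5 + 3 = 8
```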

Mutations

I couldn't think of a way to effectively breed between different networks, so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: by shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure which could require, for example, 4 nodes to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if the networks could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super-readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either gently curved or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this with a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
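My tiny program isn't reproduced here, but the key formula from the datasheet for a 555 in astable mode is f ≈ 1.44 / ((R1 + 2·R2) · C), which is trivial to wrap in a helper:

```javascript
// Approximate astable-mode output frequency of a 555 timer (Hz),
// using the standard datasheet formula. r1, r2 in ohms, c in farads.
function astableFrequency (r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// e.g. 1kΩ R1, a 100kΩ potentiometer as R2, and a 47nF timing capacitor:
astableFrequency(1000, 100e3, 47e-9) // ≈ 152 Hz
```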

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out the potentiometer's pins weren't making good contact with the breadboard's contacts, and the poor connection was managing to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded with my second soldering attempt using some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
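The substitutions above can be sanity-checked with the astable-mode frequency formula from the 555's datasheet. Here is a sketch of that calculation; the placement of the resistors and capacitor in the R1/R2/C roles is my own assumption for illustration, not the exact circuit:

```javascript
// Astable-mode frequency of a 555 timer, from the datasheet:
// f = 1.44 / ((R1 + 2 * R2) * C), with resistances in ohms and C in farads.
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// Hypothetical component roles, chosen for illustration only:
// original values: 1kΩ resistor, 470kΩ pot (at max), 10nF capacitor.
const original = astableFrequency(1000, 470e3, 10e-9)
// substituted values: 270Ω resistor, 100kΩ pot (at max), 47nF capacitor.
const substituted = astableFrequency(270, 100e3, 47e-9)

// Both land around 153 Hz, so the swap preserves the output frequency.
console.log(original.toFixed(0) + " Hz vs " + substituted.toFixed(0) + " Hz")
```

With these role assignments the two component sets produce near-identical frequencies, which is the property the substitutions were chosen for.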

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard, which was enough to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush, I removed the layer of oxides from my soldering tip, tinned it, then proceeded to make my second soldering attempt with some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, make it more resilient to short circuits, and look fucking legit.

http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-console
Sat, 28 Jan 2017 00:00:00 GMT

Multiplayer gaming at its core fosters replayability through competition; however, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and its inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in Dynamical Systems Theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate over the model, which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate for reproduction and starvation of the population.
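Put together, the general form of the map can be sketched as follows; the parameter names are taken from the definitions above, and the snippet mirrors the simplified version that appears later:

```javascript
// General form of the logistic map: the next population size is
// proportional to the current size and to the remaining carrying capacity.
function logisticMap(populationSize, birthAndDeathRate) {
  return birthAndDeathRate * populationSize * (1 - populationSize)
}

// At R = 2 a population of 0.5 is stable: 2 * 0.5 * 0.5 = 0.5.
console.log(logisticMap(0.5, 2))
```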

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap(x) {
  return 4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
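That feedback loop can be wrapped in a closure to make a simple sequence generator (the map is repeated here so the snippet is self-contained):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Each call feeds the previous result back in as the next input.
function createSequence(seed) {
  let x = seed
  return function next() {
    x = logisticMap(x)
    return x
  }
}

const next = createSequence(0.1)
console.log(next(), next(), next()) // 0.36, 0.9216, ...
```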

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another, irrespective of their numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
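The divergence and the 100-iteration priming can be sketched together; the two seed values here are illustrative, not the ones used for the graph:

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Two nearby seeds, primed with 100 iterations of the map.
let a = 0.1
let b = 0.1000001
for (let i = 0; i < 100; i++) {
  a = logisticMap(a)
  b = logisticMap(b)
}

// After priming, the two sequences bear little resemblance to one another.
console.log(a, b)
```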

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-runner game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These dark pixel chains often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where, after almost reaching 1, the values plummet and then slowly climb higher; something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.
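A quick count reproduces this bias directly (a sketch; the seed and iteration count match the figures quoted above, and the ~40% result is consistent with the map's known preference for the extremes):

```javascript
function logisticMap(x) {
  return 4 * x * (1 - x)
}

// Count how many of the first 20,000 iterates from x = 0.1 land
// in the bottom or top decile.
let x = 0.1
let extremes = 0
const iterations = 20000
for (let i = 0; i < iterations; i++) {
  x = logisticMap(x)
  if (x < 0.1 || x > 0.9) extremes++
}

console.log((100 * extremes / iterations).toFixed(1) +
  "% of iterates fall in the outer deciles")
```

A uniform generator would put only 20% of its output in those two deciles; this one roughly doubles that.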

Conclusion

It goes without saying that, due to the above flaws, in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is itself fairly questionable given the algorithm's flavour, tending to prefer extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to affect many aspects of its generation. I wonder: if multiple seeds were used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-map
Wed, 27 Aug 2014 23:00:00 GMT

Phaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and adds vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included, for convenience, a number of potentially useful libraries in the project: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are going to be required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great. (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI.)

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, with varying routes between them, so the primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
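As a sketch of the shape such a config might take (the state names and routes are invented for illustration, not the original sample):

```javascript
// Hypothetical state machine config: each state lists the states
// it can transition to.
const stateMachineConfig = {
  "idle":    ["loading"],
  "loading": ["menu", "error"],
  "menu":    ["playing"],
  "playing": ["paused", "menu"],
  "paused":  ["playing", "menu"],
  "error":   ["idle"]
}

// Sanity check: every route must point at a defined state.
for (const [state, targets] of Object.entries(stateMachineConfig)) {
  for (const target of targets) {
    if (!(target in stateMachineConfig)) {
      throw new Error(state + " routes to undefined state " + target)
    }
  }
}
```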

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times and, at a certain point, take and modify the best-scoring (fittest) from amongst them to form the next generation of networks.
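That loop can be sketched as follows; the population sizes, elitism scheme, and all names are my own invention for illustration, with fitness and mutation left abstract:

```javascript
// Skeleton of a generational loop: fitness is a penalty (lower is
// better) and mutate returns a modified copy of a candidate.
function evolve(randomNetwork, fitness, mutate, generations) {
  const population = 50
  const survivors = 5

  // Lay out a whole batch of random candidates.
  let networks = Array.from({ length: population }, randomNetwork)

  for (let g = 0; g < generations; g++) {
    networks.sort((a, b) => fitness(a) - fitness(b))
    const fittest = networks.slice(0, survivors)
    // Keep the fittest and refill the rest with their mutated offspring.
    networks = fittest.slice()
    while (networks.length < population) {
      networks.push(mutate(fittest[networks.length % survivors]))
    }
  }

  networks.sort((a, b) => fitness(a) - fitness(b))
  return networks[0]
}

// Toy usage: evolve a number towards 3 by random nudges.
const best = evolve(
  () => Math.random() * 10,       // random "network"
  x => (x - 3) ** 2,              // fitness: distance from 3
  x => x + (Math.random() - 0.5), // mutation: a small random nudge
  200
)
console.log(best)
```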

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
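As a sketch, two of those factors might be scored like this; the data shape and the overlap penalty weight are my own invention, not the original scoring code:

```javascript
// Toy fitness covering two factors: total edge length plus a flat
// penalty per overlapping pair of circular nodes. Lower is better.
function fitness(net) {
  let score = 0

  // Euclidean length of every connecting line.
  for (const [i, j] of net.edges) {
    const a = net.nodes[i]
    const b = net.nodes[j]
    score += Math.hypot(a.x - b.x, a.y - b.y)
  }

  // Penalise every pair of nodes whose circles overlap.
  for (let i = 0; i < net.nodes.length; i++) {
    for (let j = i + 1; j < net.nodes.length; j++) {
      const a = net.nodes[i]
      const b = net.nodes[j]
      if (Math.hypot(a.x - b.x, a.y - b.y) < a.r + b.r) score += 100
    }
  }

  return score
}

const net = {
  nodes: [{ x: 0, y: 0, r: 1 }, { x: 3, y: 4, r: 1 }],
  edges: [[0, 1]]
}
console.log(fitness(net)) // 5: one edge of length 5, no overlaps
```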

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time, I hoped this would produce nice-looking, coherent networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: by shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology was simplified, to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super-readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, handles minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, supports variables, and applies vendor prefixes automatically. Additionally, it lets us modularise our style sheets into separate documents, so we never have to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a Grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, cache busting your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes are covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr.doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation Grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src into build after the various processes have taken place. This makes build a skinny, completely independent version of your game that can be exported effortlessly. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with the boilerplate, be sure to read through the project's readme, and raise issues for any bugs you find or feature suggestions you might have.

http://lukewilde.co.uk/blog/phaser-io-boilerplate
Thu, 18 Sep 2014 12:15:36 GMT

Biomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software that allowed us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network so that it could be seen in its simplest form for all to admire. Here's a sample of the input:
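The original sample hasn't survived in this extract. Purely as a hypothetical illustration of the shape described, a config in this spirit might map each state to the states it can route to:

```javascript
// Hypothetical input shape (the real config isn't shown in this extract):
// each state names the states it can transition to.
const stateMachine = {
  idle:    { transitions: ['loading'] },
  loading: { transitions: ['ready', 'error'] },
  ready:   { transitions: ['idle'] },
  error:   { transitions: ['idle'] }
}
```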

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the result. I'd do this a whole bunch of times, at a certain point taking the best scoring (fittest) networks among them and modifying them to form the next generation.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The summed Euclidean length of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersecting other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
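The scoring above can be sketched roughly like this. The node/edge shapes and the penalty weight are my assumptions, not the article's actual values, and only two of the five penalties are shown; the others would be summed in the same fashion:

```javascript
// Sums two of the penalties described above: total line length and
// off-canvas nodes. Node overlap and line-intersection penalties would
// be added in the same way. The 1000 weight is an arbitrary assumption.
function fitness(nodes, edges, canvas) {
  let score = 0
  for (const [a, b] of edges) {
    // Euclidean length of each connecting line.
    score += Math.hypot(nodes[a].x - nodes[b].x, nodes[a].y - nodes[b].y)
  }
  for (const n of nodes) {
    // Flat penalty for any node outside the canvas.
    if (n.x < 0 || n.y < 0 || n.x > canvas.width || n.y > canvas.height) {
      score += 1000
    }
  }
  return score
}
```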

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:
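The list of categories is missing from this extract, so here is just one plausible mutation as an illustration (entirely my assumption, not necessarily one of the article's three): nudging a randomly chosen node.

```javascript
// One hypothetical mutation: clone the layout, then nudge a random
// node by up to ±maxNudge on each axis. The parent stays untouched
// so it can seed further offspring.
function nudgeMutation(nodes, maxNudge) {
  const child = nodes.map(n => ({ ...n }))
  const i = Math.floor(Math.random() * child.length)
  child[i].x += (Math.random() * 2 - 1) * maxNudge
  child[i].y += (Math.random() * 2 - 1) * maxNudge
  return child
}
```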

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that survived past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness: by shrinking the network (rewarded through the Euclidean distance score) and drawing all the nodes closer together, they left less and less room for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them had quite fundamental flaws in their structure which could require, for example, four nodes to mutate in unison to the other side of the network before it would untangle completely. Either I needed to mutate my nodes in groups, or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology had been simplified, to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contains knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. This gives the networks a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythm
Fri, 19 Aug 2016 23:00:00 GMT

Like a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can output an oscillating voltage at various frequencies, something that's super useful for controlling motors and servos, and for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100kΩ, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator, but there were too many moving parts, so I bashed out a tiny program to do it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF (I didn't have a 470nF so had to use two 220nF capacitors in parallel)
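I don't have the original program, but a minimal calculator built on the standard astable-mode formula from the 555 datasheet, f = 1.44 / ((R1 + 2·R2) · C), might look like this. Which physical part maps to R1, R2, and C in the Atari Punk Console circuit is an assumption best checked against the actual schematic:

```javascript
// Frequency of a 555 in astable mode, per the datasheet approximation:
//   f = 1.44 / ((R1 + 2 * R2) * C)
// Values are in ohms and farads.
function astableFrequency(r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// e.g. a 1kΩ resistor, a 100kΩ pot at full resistance, and a 47nF cap:
const f = astableFrequency(1000, 100000, 47e-9)  // ≈ 152 Hz
```

Sweeping the pot values through a function like this makes it easy to compare the frequency ranges of the original components against candidate substitutes.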

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I was meddling with technology I'm not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! It turns out the potentiometer's pins weren't making good contact with the pins of the breadboard, and the poor connection was managing to burn it a little.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important lessons. First and foremost, you need to properly tin the soldering iron; that, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After removing the layer of oxides from my soldering tip using a rotary tool equipped with a brass brush, I tinned it, then proceeded to my second soldering attempt with some decent, flux-cored, leaded solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, make it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well known model in Dynamical Systems Theory. One of its uses is to make predictions on population levels over time. Given a starting population and taking into account birth and death rates, you can iterate over the model which will return estimated population sizes at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a ratio of maximum capacity. AKA: the fraction of carrying capacity for the environment the population is growing in.

birthAndDeathRate A positive number. The combined rate for reproduction and starvation of the population.

The below demonstrates how modifying the birthAndDeathRate (R) effects the populationSize (X) over time (T).

With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.

As R increases to more fruitful levels you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

functionlogisticMap(x){
return4 * x * (1 - x)
}

To start generating some sequences, all that's necessary is that we pass an initial value (our seed) to the logisticMap as x (previously the population size), that result is then passed to the next invocation ad infinitum.

The Onset of Chaos

An important factor to consider is the level of divergence each seed has from one another irrespective of their numerical distance. I.e, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate to one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer by 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above I generated a landscape for a theoretical infinite running game. All the attributes of the city and it's buildings are derived using the sequence generator so each city scape could be regenerated exactly using the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers, does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (Here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? well, I was actually hoping for it to fail a little more dramatically. Unfortunately the nuance of failure in random number generation is generally much more subtle than the example provided by PHP. In the hopes of seeing some visual evidence of predictability I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.

Logistic map greyscaled x = 0.1

A zoomed region

The first Image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear is that very light pixels are generally always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels are. These dark pixels chains often taper off into brighter shades, creating shadow like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb higher something which loosely resembles a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.

Conclusion

It goes without saying that due to the above flaws in its current state this arrangement of the logistic map isn't fit for any vaguely cryptographic applications. Its use even in games is it's self fairly questionable given the algorithms flavour, tending to prefer extremities over middle ground values. This, however, makes me curious, given the patterns evident in the sequences could the flavour of this particular implementation be taken advantage of?

My earlier city scape used a single seed to effect many aspects of its generation. I wonder if multiple seeds we're used for differing attributes, would the flavour of the seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1), would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers, one of it's advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower it's users does create some overhead as developers must decide on a system for code reuse, orchestrate a build process to support it, as well as getting their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the applications source, and enables a Common.JS flavoured module system. (If you're unfamiliar with Browserify I thoroughly recommend reading Substack's Browserify Handbook). You might think this covers everything, however Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development I chose to build my pages on top of Jade. Primarily Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving off even more from the final payload size.

Stylus also has a very minimal syntax, enables variables and has automatic vendor prefixes, additionally it provides the ability to modularise our style sheets and break them out into separate documents to avoid ever having to deal with a thousand line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string which cache busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes will be covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, for convenience, I included a number of potentially useful libraries in the project, Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code not all of the resources and source files in the src directories are going to be required to run the production build, I also wanted to avoid manual exports at all costs. As such, I opted to completely duplicate the necessary files from src and to place them in build after the various processes have taken place. This allows effortless exporting of the application by making the build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals, robotics, and in it's purest form is used to model parts of the world to better understand the minutia of natural systems. Probably my favourite aspect of nature, from a somewhat bias standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. we maintained two versions of this configuration, one in the application that did the heavy lifting, and the other on some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states that were connected to other states, they would have varying routes between them so the primary goal of our solution would be to 'untangle' the network, so that it could be seen in it's simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all nodes of the network down randomly, connect them all up, then quantify the simplicity of the network. I'd do this a whole bunch of times. At a certain point, taking and modifying the best scoring (fittest) from amongst them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.

Mutations

I couldn't think of a way to effectively breed between different networks so, like many computer enthusiasts before me, I resorted to asexual reproduction. Opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on diagrams we produce. We align the nodes horizontally and vertically with one another as best we can. The lines we use to connect them are usually either bendy or are restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting network still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible, as it turns out however, that favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or Amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPU's like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, it's operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components it introduces you to an Integrated Circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage, oscillating at various frequencies. Something that's super useful for controlling motors, servos, and also creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance at building this I needed to adapt the circuit to work using the components I had on hand.

The 556 is essentially just two 555 IC's merged into a single chip so I could easily check the pins on the 556, and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently the details on how to make such calculations can be found on the 555's datasheet.

I tried to do this using a calculator but there were too many moving parts so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)

Breadboard

It works! ...But smelt like smouldering electronics. In an effort to save my chips I quickly yanked the power supply and starting looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard and managing to burn it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt of soldering onto this board, I learned several important soldering lessons. First and foremost you need to properly tin the soldering iron, that and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip) the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush I removed the layer of oxides from my soldering tip, tinned it, then preceded to begin my second soldering attempt with some decent, flux cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMTMultiplayer gaming at it's core fosters replayability through competition, however, one of my favourite examples of competitive gaming exists in the single player roguelike Spelunky. In it's feature the Daily Challenge players get a single attempt to set a high score in a randomly generated level which is available for one day. Whilst being randomised, the level and it's inhabitants are identical for all players.

The Logistic Map

The logistic map is a well-known model in Dynamical Systems Theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate over the model, which returns an estimated population size at each step.

populationSize is a number between 0 and 1 which represents the size of the population as a fraction of maximum capacity: in other words, the fraction of the carrying capacity of the environment the population is growing in.

birthAndDeathRate is a positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the birthAndDeathRate (R) affects the populationSize (X) over time (T).

With low values of R, the birth rate is not sufficient to sustain the population, which quickly falls to 0.

As R increases to more fruitful levels, you will see the population stabilise.

As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

function logisticMap (x) {
  return 4 * x * (1 - x)
}

To start generating sequences, all that's necessary is to pass an initial value (our seed) to logisticMap as x (previously the population size); that result is then passed to the next invocation, ad infinitum.
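As a concrete sketch of that feedback loop (`createSequence` is an illustrative helper name, not from the original post):

```javascript
// Hedged sketch of the seeded sequence generator described above.
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// Returns a function that yields the next value each time it's called,
// feeding every result back into the map.
function createSequence (seed) {
  let x = seed
  return function next () {
    x = logisticMap(x)
    return x
  }
}

const next = createSequence(0.1)
console.log(next()) // ≈ 0.36
console.log(next()) // ≈ 0.9216
```

The key property is determinism: calling createSequence with the same seed always reproduces the exact same sequence, which is what lets every player see the same "random" level.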

The Onset of Chaos

An important factor to consider is the level of divergence between sequences, irrespective of the numerical distance between their seeds. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This sensitive dependence on initial conditions is a hallmark of Chaotic Systems and is often referred to as the Butterfly Effect.

As shown below, while two slightly different seeds start by following a distinctly similar pattern, after a number of iterations these diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two seeds diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.

Results

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-running game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

Whole number seeds only

Are the numbers really random?

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's rand() on Windows

Logistic map x = 0.1

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in the image in grayscale to give an extra dimension to the visualisation.
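For reference, here's a minimal sketch (not the original rendering code) of how values in [0, 1] can be mapped to grayscale pixels, using the plain-text PGM image format:

```javascript
// Hedged sketch: map each value in [0, 1] to a grayscale level (0-255)
// and emit a plain-text PGM image. `toPGM` is an assumed helper name.
function toPGM (values, width) {
  const height = Math.ceil(values.length / width)
  const pixels = values
    .map(v => Math.min(255, Math.floor(v * 256)))
    .join(' ')
  return `P2\n${width} ${height}\n255\n${pixels}\n`
}

// e.g. toPGM([0, 0.5, 1, 0.25], 2) yields a 2x2 image with the
// shades 0, 128, 255 and 64.
```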

Logistic map greyscaled x = 0.1

A zoomed region

The first image is a little hard to read at the magnification used to show errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where, after almost reaching 1, the values plummet and then slowly climb higher, loosely resembling a knee in the line graph.

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when x = 0.1

This shows that the vast majority of numbers generated are in the top and bottom decile. Looking back at both the Graph demonstrating the onset of chaos, and the grayscaled logistic map image, it is actually rather apparent that both instances seem to favour their relative extremities.
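That bias is easy to reproduce with a quick sketch (the helper name and decile bucketing here are my own, but the U-shaped result matches the analysis above):

```javascript
// Sketch: bucket n iterates of the logistic map into deciles. At R = 4
// the counts are heavily weighted towards the first and last buckets.
function decileHistogram (seed, n) {
  const buckets = new Array(10).fill(0)
  let x = seed
  for (let i = 0; i < n; i++) {
    x = 4 * x * (1 - x)
    buckets[Math.min(9, Math.floor(x * 10))]++
  }
  return buckets
}

// decileHistogram(0.1, 20000) puts roughly 40% of all values in the
// top and bottom deciles combined.
```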

Conclusion

It goes without saying that, due to the above flaws, this arrangement of the logistic map in its current state isn't fit for any vaguely cryptographic applications. Even its use in games is fairly questionable, given the algorithm's flavour of preferring extremities over middle-ground values. This, however, makes me curious: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder, if multiple seeds were used for differing attributes, would the flavour of the seeds give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (birthAndDeathRate = 4 and populationSize = 0.1); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST which can be used to identify specific attributes of non-randomness.

Credits

]]>http://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-maphttp://lukewilde.co.uk/blog/generating-sequences-of-random-numbers-using-the-logistic-mapWed, 27 Aug 2014 23:00:00 GMTPhaser.io provides a fantastic and very well documented tool set for HTML5 games developers. One of its advantages over rival technologies is that it doesn't impose a strict module pattern on those seeking to use it. This decision to empower its users does create some overhead, as developers must decide on a system for code reuse, orchestrate a build process to support it, and get their application ready for production environments.

NPM & Browserify

Browserify is the backbone of my implementation. It enables access to NPM, does minification, provides source maps, sets up a watcher on the application's source, and enables a CommonJS-flavoured module system. (If you're unfamiliar with Browserify, I thoroughly recommend reading Substack's Browserify Handbook.) You might think this covers everything; however, Substack has left us with a little work to do.

Jade & Stylus

While there generally isn't a huge amount of raw markup involved in HTML5 game development, I chose to build my pages on top of Jade. Primarily, Jade allows me to import and use the properties file (src/properties.js) in my page's markup. It also happens to use a very minimal syntax and enables minification of the markup it produces, shaving even more off the final payload size.

Stylus also has a very minimal syntax, enables variables, and adds vendor prefixes automatically. Additionally, it provides the ability to modularise our style sheets and break them out into separate documents, to avoid ever having to deal with a thousand-line CSS file.

Cache busted assets

You'll notice in the sample preloader that I append the asset URLs with a token:

This marks the file for a grunt module which (on production builds) hashes the contents of the file and replaces the aforementioned token with a query string, which cache-busts your assets. I've also added tags in index.jade where necessary, so JavaScript and CSS file changes are covered too.

The kitchen sink

Because Browserify only compiles the libraries you require into your application, I included a number of potentially useful libraries in the project for convenience: Lodash for object manipulation, Google Analytics for usage analysis, Mr Doob's frame counter Stats.js, and jQuery, because you'll probably need it eventually. I also added a PNG optimisation grunt task (courtesy of Pngquant) to speed up asset creation.

File Structure

When it comes to exporting the minified code, not all of the resources and source files in the src directories are going to be required to run the production build, and I wanted to avoid manual exports at all costs. As such, I opted to duplicate the necessary files from src and place them in build after the various processes have taken place. This allows effortless exporting of the application by making build a skinny and completely independent version of your game. build also serves as the web root for Connect.

Further Reading

For info on how to get started developing with it, be sure to read through the project's readme and raise issues for any bugs you find, or feature suggestions you might have.

]]>http://lukewilde.co.uk/blog/phaser-io-boilerplatehttp://lukewilde.co.uk/blog/phaser-io-boilerplateThu, 18 Sep 2014 12:15:36 GMTBiomimicry is always a pretty entertaining concept. It lends us the likes of fractals and robotics, and in its purest form is used to model parts of the world to better understand the minutiae of natural systems. Probably my favourite aspect of nature, from a somewhat biased standpoint that I share with most other living beings, is natural selection. The very beast which delivered me into existence. Engineered with sublime fitness to be master of the environment I inhabit.

The "Problem"

At work, we had an application that was driven by a state machine, which in turn was controlled by JSON. We maintained two versions of this configuration: one in the application that did the heavy lifting, and the other in some graphing software to allow us to visualise and discuss the system.

Someone Suggested:

Wouldn't it be great if we could just generate the graph from our config?

HELL YES. That would be great (I mean, it's kind of easier all round to have the graph generate our JSON, but being smart now isn't going to allow us to dick around with some AI).

The Network

Our state machine effectively consisted of a bunch of states connected to other states, with varying routes between them. The primary goal of our solution would be to 'untangle' the network, so that it could be seen in its simplest form for all to admire. Here's a sample of the input:

Mutating the way to success

I wanted to solve the "problem" by placing all the nodes of the network down randomly, connecting them all up, then quantifying the simplicity of the network. I'd do this a whole bunch of times, at a certain point taking and modifying the best scoring (fittest) among them to form the next generation of networks.

Scoring Fitness

Intending to create nice, small, and simple networks, the factors I used to score my network's fitness were as follows:

The Euclidean distance of all connecting lines

Area of overlapping nodes

Area of nodes outside of the canvas

Line intersections

Lines intersect with other nodes

Each of these would increase the score, so the best network would have the lowest total fitness.
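As an illustration of the first penalty (the names and data shapes here are assumptions, not the actual code), summing the Euclidean length of every connecting line might look like:

```javascript
// Sketch of one fitness penalty: total Euclidean length of all connections.
// Shorter overall wiring contributes to a lower (fitter) score.
function totalEdgeLength (nodes, edges) {
  return edges.reduce((sum, [a, b]) => {
    const dx = nodes[a].x - nodes[b].x
    const dy = nodes[a].y - nodes[b].y
    return sum + Math.hypot(dx, dy)
  }, 0)
}

// e.g. with A at the origin, B at (3, 4) and C at (3, 0):
// totalEdgeLength({ A: {x: 0, y: 0}, B: {x: 3, y: 4}, C: {x: 3, y: 0} },
//                 [['A', 'B'], ['A', 'C']]) // → 8 (i.e. 5 + 3)
```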

Mutations

I couldn't think of a way to effectively breed between different networks, so, like many computer enthusiasts before me, I resorted to asexual reproduction, opting to derive mutated forms of the parents in subsequent generations. These mutations fell into three categories:

Results

Given enough time I hoped this would produce nice looking, coherent, networks. And, for smaller networks, it did!

However, when trying to render larger, more complex networks, there came a point where the fitness would hit a wall. Tangles that made it past a certain point became less and less likely to be removed:

After sitting and watching my networks fail to unfold for some time, I slowly realised that I'd designed my networks to be greedy. They were being hindered by cheap hits of fitness. By shrinking the size of the network (scored by the Euclidean distance) and drawing all the nodes closer together, there was less and less room available for nodes to mutate and untangle themselves.

In addition, the networks needed to get off to a good start. Some of them would have quite fundamental flaws in their structure which could require, for example, that 4 nodes would need to mutate in unison to the other side of the network to untangle completely. Either I needed to mutate my nodes in groups or I needed to iterate through more candidate networks.

Barbaric Island Species

To this end, I modified the simulation to spawn several Island species which mutate in isolation. The fittest networks from each island would be used to grow a single mainland species.

To ensure these island species had untangled as well as possible, I removed the size constraint and increased the mutation rates. I figured that if I could mutate without being limited by size, I could introduce the size constraint once the topology is simplified to boil down the dimensions of the network.

Nice!

This improved the success rate dramatically. Here are some examples of the networks the GA designed.

In The End

While I'm claiming success, these wouldn't produce super readable diagrams. There are some readability traits which most of us somewhat instinctively bestow on the diagrams we produce: we align the nodes horizontally and vertically with one another as best we can, and the lines we use to connect them are usually either bendy or restricted to right angles. The GA isn't totally reliable either; about 15% of the time the resulting networks still contain knots and tangles.

But failings aside, I'm actually kind of excited that my networks developed quirks. The Euclidean distance was intended to shrink the network as much as possible; as it turns out, however, it favours a somewhat circular arrangement of nodes over a denser collection with some potentially longer connections between them. The impact this has on the networks gives them a vaguely organic quality, and they end up resembling weird geological continents or amoebas:

In short: I had fun solving the wrong problem with a powerful technology and created a wonky shape creator that eats CPUs like Weetos.

]]>http://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmhttp://lukewilde.co.uk/blog/network-simplifying-genetic-algorythmFri, 19 Aug 2016 23:00:00 GMTLike a good little megalomaniac, I've recently become obsessed with extending my command of abstract software into the physical realm. To this end, I've put together some basic Arduino projects, and while this was really fun, their operation remained mostly magic to me.

Enter Analogue Electronics

To feel comfortable messing with digital electronics, I wanted to first have an understanding of the primitive units involved in their construction. So far I've progressed through most of an Analogue Electronics course available on Udemy. I managed to pick it up on sale for £10, a complete steal based on the quality of the course so far!

After covering some basic components, it introduces you to an integrated circuit called the 555 timer. These cheap and tiny beauties can be used to output a voltage oscillating at various frequencies, something that's super useful for controlling motors and servos, and also for creating audio signals.

I'd also caught wind of a tiny synth known by the name of the Atari Punk Console. And it seemed like a great opportunity to both practice circuit building and annoy the shit out of people with the harsh square waves it can generate.

As it turns out, the only potentiometers I had were 100k, and I'd only bought 555 timers (not the 556 that the original circuit at Kaustic Machines calls for). To stand a chance of building this, I needed to adapt the circuit to work with the components I had on hand.

The 556 is essentially just two 555 ICs merged into a single chip, so I could easily check the pins on the 556 and find where they would be on my 555s. However, those potentiometers were a little tougher to substitute. To find suitable alternatives, I needed to calculate and recreate the frequencies they're intended to drive through the 556 timer. Conveniently, the details on how to make such calculations can be found in the 555's datasheet.

I tried to do this with a calculator but there were too many moving parts, so I hashed out a tiny program to calculate it for me. After tinkering with the values, I managed to replicate the output pretty well by making the following substitutions:

470kΩ potentiometer -> 100kΩ

1kΩ resistor -> 270Ω

10nF capacitor -> 47nF

100nF capacitor -> 470nF capacitor (I didn't have a 470nF so had to use two 220nF's in parallel)
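The core of such a calculation is tiny. The standard astable-mode approximation from the 555 datasheet is f ≈ 1.44 / ((R1 + 2·R2) · C); a sketch (the component values below are illustrative examples, not the exact circuit):

```javascript
// Standard astable 555 frequency approximation from the datasheet:
//   f ≈ 1.44 / ((R1 + 2·R2) · C)
// Resistances in ohms, capacitance in farads; result in hertz.
function astableFrequency (r1, r2, c) {
  return 1.44 / ((r1 + 2 * r2) * c)
}

// e.g. a 1kΩ resistor, a 100kΩ pot, and a 47nF capacitor:
// astableFrequency(1e3, 100e3, 47e-9) // ≈ 152 Hz
```

Sweeping candidate resistor and capacitor values through a function like this is an easy way to hunt for substitute parts that land near the original circuit's frequencies.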

Breadboard

It works! ...But it smelt like smouldering electronics. In an effort to save my chips, I quickly yanked the power supply and started looking for the culprit.

After checking and re-checking the circuit on and off for a week, questioning the fabric of reality, and being filled with dread that I'm meddling with technology that I am not qualified to play with, I decided that the only way to proceed was to leave the circuit on so I could watch the fault catch fire and manifest as a burnt lump of carbon on my breadboard. For science!

After a minute or two, I saw smoke coming from... the potentiometer?! Turns out that the potentiometer's pins weren't making good contact with the pins of the breadboard, which was burning it a little bit.

:facepalm:

Perfboarding

Now that I was slightly more confident about my abilities, I wanted to move the circuit from my breadboard and onto some perfboard which would be its permanent home.

During my first attempt at soldering onto this board, I learned several important soldering lessons. First and foremost: you need to properly tin the soldering iron. That, and lead-free solder sucks massive balls. Without proper heat transfer (due to the oxidisation on my soldering tip), the lead-free solder's melting point was too far out of reach.

After using a rotary tool equipped with a brass brush to remove the layer of oxides from my soldering tip, I tinned it, then proceeded with my second soldering attempt using some decent, flux-cored, lead solder:

TADA

Finally, I imprisoned my creation in a cheeky little project box with help from a rotary tool and liberal amounts of hot glue. This would ensure a long and healthy life for the device, making it more resilient to short circuits, and look fucking legit.

]]>http://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consolehttp://lukewilde.co.uk/blog/trying-to-build-an-atari-punk-consoleSat, 28 Jan 2017 00:00:00 GMT