https://garystanton.co.uk/gulp-sass-file-to-import-not-found-or-unreadable/
Tue, 07 Feb 2017 21:57:03 GMT

I recently ran into a problem with gulp-sass, whereby some of the partials that were being included in my main .scss file were intermittently not being found. Gulp would throw up the error: file to import not found or unreadable: somefilename
What's odd about this is that not only did the files in question exist, but if I simply went in and saved any file at all that was being 'watched' by Gulp - triggering a refresh of the gulp-sass task - the files would miraculously be found and everything would work fine!

Doing a bit of Googling, I found that lots of people were having similar issues, and the consensus seems to be that the cause is a race condition when saving files - specifically, that libsass is just too damned fast.

I've seen it asserted that anyone who has this error must be using Windows with Sublime Text, and thus the workaround is to enable an option available in Sublime Text called atomic_save. This option saves to a temporary file and then overwrites the old with the new - and indeed it does seem to solve the issue for a lot of people.

I'm using an older version of Sublime Text however, which doesn't have the atomic_save option available - so I had to look elsewhere.

What I found is that simply getting Gulp to pause for a few milliseconds before attempting to compile the SCSS solved the issue perfectly.

Here's how we do this:
Navigate to the project folder and, from the command prompt, install gulp-wait:

npm install gulp-wait --save-dev
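With gulp-wait installed, the compile task just needs a short pause piped in before gulp-sass runs. Here's a sketch in the gulp 3 style of the time - the paths, task names and delay value are my own illustrations:

```javascript
// gulpfile.js - sketch; paths, task names and the 200ms delay are illustrative
var gulp = require('gulp');
var sass = require('gulp-sass');
var wait = require('gulp-wait');

gulp.task('sass', function () {
    return gulp.src('src/scss/main.scss')
        .pipe(wait(200))                          // pause so the editor finishes writing the file
        .pipe(sass().on('error', sass.logError))  // compile, logging rather than crashing on errors
        .pipe(gulp.dest('dist/css'));
});

gulp.task('watch', function () {
    gulp.watch('src/scss/**/*.scss', ['sass']);
});
```

A couple of hundred milliseconds is usually plenty; tweak the value if the error still crops up on slower disks.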

https://garystanton.co.uk/scaffolding-crud-cfml-select/
Sat, 28 Jan 2017 14:25:00 GMT

Following on from my recent post where I detailed my approach to CFC organisation, I wanted to share some more details of my current CFML coding methodology.
Today I’m going to talk a little about database interaction.

Unless you’re taking advantage of ORM in your CF application, you will most likely have to spend a bit of time scaffolding your components with some standard CRUD functionality. Some developers like to use a single component to manage all database interactivity, but I tend towards creating functions for each individual component. The goal is to have a standardised set of code for CRUD functionality that serve as core private functions on which others can be layered for the needs of the app.

Below, I’m going to show you how I handle READ functionality and we’ll look at how to efficiently use a single function to select data, dealing with filtering, pagination and sorting.

Some data

First off, we need some data to play with. For this exercise, I've created a very simple table with some test data in it.
Feel free to download the SQL if you'd like to play along.
n.b. I’m using Microsoft SQL Server Express 2014 and Lucee 4.5 for my examples. Everything should work in SQL Server 2008, CF9 and above.

I’m going to start by building up a very simple function that we can use to SELECT data from the table in our database.
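The original snippet hasn't survived here, but a minimal version might look like the following. The Games column names are my assumption, and the datasource is expected to be set at Application level:

```cfml
<!--- Sketch: the simplest possible read function. --->
<!--- Column names are assumed; datasource set at Application level. --->
<cffunction name="getGames" access="private" returntype="query" output="false">
    <cfquery name="Local.qGames">
        SELECT  ID, Title, Genre, ReleaseDate
        FROM    Games
    </cfquery>
    <cfreturn Local.qGames />
</cffunction>
```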

As you’d expect, this will simply return all the data in our Games table.

Filtering

Let’s take a look at how to build record filtering into our functions. We’ll start by filtering by the most obvious criteria, the unique ID for the record.
First, we’ll add an argument to the function to accept an ID:

<cfargument name="ID" required="false" type="string" />

You may notice that I’ve chosen string as the argument data type, even though the ID will always be a numeric value. This is so that we can pass a list of IDs as an argument, and have our query return multiple matching rows.
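Inside the query, that list can then be bound safely with a single cfqueryparam - the list attribute handles splitting it into individual values. A sketch:

```cfml
<!--- Sketch: only filter when an ID (or list of IDs) was actually passed --->
<cfif StructKeyExists(Arguments, 'ID') AND Len(Arguments.ID)>
    WHERE ID IN (<cfqueryparam cfsqltype="cf_sql_integer" value="#Arguments.ID#" list="true" />)
</cfif>
```

Passing "3" returns one row; passing "3,7,12" returns up to three.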

Pagination

There are plenty of different methods out there for paginating results of a query, but most of them require that we SELECT an entire recordset before filtering using CFML or sometimes JavaScript. I prefer to handle pagination directly in the database as it's much more efficient.

We can do this using a 'common table expression', which might be easier to think of as an RDBMS equivalent to 'query of queries'.
We can add a rownumber to the query and then query the results to select rows that are between our pagination parameters. Here's a basic example:
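In T-SQL that looks something like this - column names match my assumed Games table, and the hard-coded row range is just for illustration:

```sql
-- Sketch: number the rows, then select a single page from the numbered set
WITH NumberedGames AS (
    SELECT  ID, Title, Genre, ReleaseDate,
            ROW_NUMBER() OVER (ORDER BY Title) AS RowNum
    FROM    Games
)
SELECT  ID, Title, Genre, ReleaseDate
FROM    NumberedGames
WHERE   RowNum BETWEEN 11 AND 20;  -- page 2, at 10 rows per page
```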

By adding this to our existing filterable function, we can filter the data and paginate our results, all before we ever get the CF query object. It's important to note that filtering should occur before pagination, otherwise you'll only be filtering on the subset of data.
Here's how our function looks now:
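The post's original code isn't reproduced here, but a sketch of the combined function - filtering inside the CTE, pagination outside it - might look like this (column names, argument defaults and datasource are my assumptions):

```cfml
<!--- Sketch: filterable, paginated read in one function --->
<cffunction name="getGames" access="private" returntype="query" output="false">
    <cfargument name="ID" required="false" type="string" />
    <cfargument name="Page" required="false" type="numeric" default="1" />
    <cfargument name="PageSize" required="false" type="numeric" default="10" />

    <cfquery name="Local.qGames">
        WITH NumberedGames AS (
            SELECT  ID, Title, Genre, ReleaseDate,
                    ROW_NUMBER() OVER (ORDER BY Title) AS RowNum
            FROM    Games
            <!--- Filter first, so row numbers apply to the filtered set --->
            <cfif StructKeyExists(Arguments, 'ID') AND Len(Arguments.ID)>
                WHERE ID IN (<cfqueryparam cfsqltype="cf_sql_integer" value="#Arguments.ID#" list="true" />)
            </cfif>
        )
        SELECT  ID, Title, Genre, ReleaseDate
        FROM    NumberedGames
        WHERE   RowNum BETWEEN <cfqueryparam cfsqltype="cf_sql_integer" value="#((Arguments.Page - 1) * Arguments.PageSize) + 1#" />
                       AND <cfqueryparam cfsqltype="cf_sql_integer" value="#Arguments.Page * Arguments.PageSize#" />
    </cfquery>
    <cfreturn Local.qGames />
</cffunction>
```

With Page 1 and PageSize 10 the BETWEEN clause resolves to rows 1 to 10; Page 2 gives rows 11 to 20, and so on.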

This is a great way to get a paginated dataset, but it's worth noting that we'll still have to do some work with the results to display pagination controls in our views. In another post I'll show you how I generate pagination data with a separate function.

Sorting

As with filtering, it's important that we sort our dataset before we paginate.
It's easy enough to simply change the order directly in the SQL, but to expose different sorting options to our function we need to accept a fieldname as an argument and dynamically build our query. This is of course a dangerous thing to do, so we'll explicitly specify which fields can be used for sorting and validate against the list before building our query.
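A sketch of that validation - the field names in the whitelist are my assumption, and the validated value is the only thing ever interpolated into the SQL:

```cfml
<!--- Sketch: whitelist sortable fields before building dynamic SQL --->
<cfargument name="OrderBy" required="false" type="string" default="Title" />
<cfargument name="OrderDirection" required="false" type="string" default="ASC" />

<!--- Fall back to safe defaults if the caller passes anything unexpected --->
<cfif NOT ListFindNoCase('ID,Title,Genre,ReleaseDate', Arguments.OrderBy)>
    <cfset Arguments.OrderBy = 'Title' />
</cfif>
<cfif NOT ListFindNoCase('ASC,DESC', Arguments.OrderDirection)>
    <cfset Arguments.OrderDirection = 'ASC' />
</cfif>

<!--- ...then, inside the CTE: --->
<!--- ROW_NUMBER() OVER (ORDER BY #Arguments.OrderBy# #Arguments.OrderDirection#) AS RowNum --->
```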

This forms the basis of our read functionality. With this function in place, I tend to use a wrapper that passes parameters to this function and manipulates the results to handle pagination controls, logging, or anything else that might need to be done on each call.
I also add freetext searching using a separate function - hopefully I'll go into more detail on this in another post.

In the meantime, I hope what I've gone over here will be of help to someone out there!

https://garystanton.co.uk/my-cfc-methodology/
Wed, 11 Jan 2017 23:58:00 GMT

I’ve been thinking recently about my coding style. I’m always interested in doing things in the most elegant and efficient way possible and the methodology I use these days is a result of many years of experimentation and refinement.

I think a coding style, from formatting and file structure to app conceptualisation, is a delicate balance between personal preference and the conventional wisdom of our peers. Well structured and thought out code is easy to maintain and a joy to return to, but inconsistent code can easily become unwieldy.

I wanted to share some examples of the way I’ve come to structure things – Perhaps someone will find it useful, or perhaps others might share their own methodology and I may learn something!

Today I’m looking at my ColdFusion components

I like to create CFCs that can be reused across projects, and so I’ve found that I often add more and more functions and over time these components can become difficult to manage.
My methodology now is to split each function into separate files, including them in the CFC code itself. It’s a simple concept, but the result is that it's easy to see at a glance in the filesystem which functions are in a given component, and all are easily editable.

The problem with splitting your functions out into separate files is finding an efficient way to include them in the CFC. To this end, I now use CFML file operations to automatically include .cfm pages from all sub directories.
This means we can also split functions up into separate sub-folders to group them together. For a large CFC, it's really rather zen-like. ;)
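The include code itself was lost in extraction, but a sketch of the approach might look like this (the variable names are mine; the path handling assumes forward slashes work on your platform, as they do on Lucee):

```cfml
<!--- Sketch: pull in every .cfm file beneath this CFC's folder --->
<cfset Variables.BasePath = GetDirectoryFromPath(GetCurrentTemplatePath()) />
<cfset Variables.qFiles = DirectoryList(Variables.BasePath, true, 'query', '*.cfm') />
<cfloop query="Variables.qFiles">
    <!--- Strip the base path so cfinclude gets a path relative to the CFC --->
    <cfinclude template="#Replace(Variables.qFiles.Directory & '/' & Variables.qFiles.Name, Variables.BasePath, '')#" />
</cfloop>
```

Filtering on *.cfm is what keeps backup and merge files out of the component.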

What we're doing here is using DirectoryList to return a query object containing all the files in the current folder and sub-folders. If these files have .cfm extensions, they're included in the CFC itself. It's important to specify .cfm files only, as if you're using version control or comparison tools, backup files can easily sneak into the folder and be included in the CFC automatically.

I'm using a mixture of tags and CFScript here as I'm still more comfortable with tag based CFCs even though most of my logic is written in CFScript these days. Each to their own, ay?

So, how does this compare with your own methodology? I'd love to hear some opinions.

https://garystanton.co.uk/still-writing-cfml-in-2017/
Tue, 10 Jan 2017 23:30:35 GMT

I came across a post by Adam Cameron today regarding migrating away from CFML, and the possibility of helping others to do so with some kind of project similar to the excellent ColdFusion UI the Right Way - showing how common CF functionality could be replicated in other languages.

I still love CFML, but I can't deny that the community has gotten smaller over recent years and some of the work I've had of late has involved helping to maintain legacy CF based systems while new ones are being built in some other language.

I'm certainly not against moving to other languages, but I have to say I'm still in the process of building complex and, if I do say so myself, awesome systems in CFML. There's life in the old girl yet if you ask me.
Still, it's a worthwhile discussion and I would certainly be interested in a project that showed the right way to do things in languages I'm unfamiliar with.

I'd suggest anyone with some time to spare go over and fill out Adam's survey. In the meantime, I thought I'd share my answers below for posterity.

Provide a brief comment about yourself

I'm Gary Stanton, a freelance webdev based in Brighton. Primarily contracting for a few clients and have been building bespoke systems for well over a decade.

How did you come to be a developer?

I have no formal training beyond some very limited exposure to Turbo Pascal and Cobol in college.
Back in the late 90's I was primarily designing and coding websites for anyone who'd pay me to do so. Static HTML, tables for layout, all that lovely stuff. My focus was mainly on the design at that stage.
In the early 2000's I was working for a small startup e-commerce company using an off-the-shelf offline package. As we grew, we began to need something more bespoke and dynamic. We were approached by a CF developer who ended up building our new system and so I was forced to familiarise myself with CF... It was so easy to get involved in the server side, and the possibilities excited me far more than design... I never looked back really.
Dev is my core function, but with most of my clients I have to act as sysop as well. I do a bit of design and UX too if I can’t avoid it.

Summarise your CFML usage timeline

I’ve been using CF from about 2003 I think. I started on MX so have only ever experienced JVM versions.
I've worked with every version up until CF11.
Around 2012 I think, I began to use Railo for some clients and now split about 60/40 between ACF and Lucee.
I’m not moving on yet, but front end dev has become a bigger part of my workload in the last 3 years or so.

During that time was it your primary or sole dev language? Or was it always an adjunct to some other language?

CFML has always been my primary server side language, but work has often required significant client-side stuff and I've sometimes found months have gone by where I've not used CFML much at all.
I've had to work with Wordpress a fair bit, which I consider to be different to building in PHP as the API is so extensive. It's amazing how much you can do in Wordpress whilst still remaining utterly ignorant of the intricacies of PHP... alas.
These days my primary focus is still CFML, but I'm doing more with JS... I've had some work with HTML5 game development and dabbled in Node a little.

Client side stuff has become more and more involved over the last few years, with responsive dev, package managers, build automation, etc. - I'm finding this is taking up higher percentages of dev time in any given project... ‘s fun though.

Did you work on just in-house code bases for your employer, or did you also work on third party code bases too?

I've inherited a few in-house systems, and built a few of my own.
I've not really had much experience with third party CF stuff - be it a framework, CMS or application. Everything is usually built from scratch. I’ll sometimes make use of a library or some Java class though.

I've been itching to open source some of my better stuff but have never quite got anything to a stage I'd be happy to release.

If you're still primarily a CFML developer... why?

There's a whole bunch of reasons really... I've got a lot of CFML experience and I still find it a pleasure to build with.
There are fewer CF devs out there, so I find myself on the radar of companies that need it.

There's also the fact that if I'm busy being hired to write CFML, I have less time to look at other languages. I'm confident in my ability to build robust and secure apps in CFML, but without the years of experience I've had, I'd feel less confident building complex systems in another language.

As a contractor, no-one is going to pay for my time to learn anything new, so my entire workday is filled with CFML or front end dev.

If you've moved on from CFML: why?

I'm aware of the way the winds are blowing - while I've not moved away from CFML, I know I need to expand my skillset. Which way to turn is a bigger question and I've struggled to think of another serverside language that I could feel as empowered with, as CFML.

Do you use primarily or solely ColdFusion; or Lucee or Railo; or some variation of BlueDragon?

I moved from ACF to Railo for my own systems primarily because of cost - my clients have limited budgets.
I found however that Railo was faster, and that the team were far better at offering help and fixing issues. Most of my systems have migrated to Lucee now of course, though one or two are still on a legacy Railo version.

Some of my clients are still running ACF and that's great.... but if there's a bug, I have to find a workaround really, because it'll take Adobe well over a year to look at it and then decide they can't be arsed to fix it.

Having said that, the more corporate environments sometimes have requirements that ACF can handle and I'm not sure Lucee could. Bizarre legacy DLLs and the like... probably why they went for ACF in the first place.

Do you participate in any CFML-based open source projects?

Not really... I'll engage with a community if I'm using a project, but that rarely goes beyond submitting bugs or discussing features.

And what about in other languages?

I’ve made a few commits to the Magento API documentation... that count?

What is or was - for you - the best feature of CFML which has you going "yeah, that's pretty cool actually".

Flash forms? ;)

I dunno that there’s any one killer feature... and certainly none of the RIA bollocks... but there’s a few things I prefer in CF to php.
I like that I can switch between CFScript and CFML and that I can embed a tag directly in HTML. I find php cumbersome in this regard.
I like the struct and query objects, both of which seem more useful than what’s available to me in php. I also find the formatting of CFDump to be a real timesaver compared to the php options. I actually found a php UDF to mimic CFDump not so long ago.

Are there any CFML features that would have fallen into that category for you when you were doing CFML, but ended up not being as cool as you thought when you looked at other languages?

At a push maybe... query of queries? They can be useful, but more and more now I’m tending to write similar functionality directly in the DB, which feels a lot cleaner.

What is it about CFML (or the underlying ColdFusion / Lucee / etc platform) you like the least?

The JVM is still something of a mystery to me. I’ve had problems with the JVM over the years that I simply lack the understanding to diagnose... CFML is so easy to build great functionality and with a bit of experience I’m even confident I’m building things the right way; but still I’m at the mercy of the goddamned JVM, and I’m not sure as a CFML developer I should have to delve into the underlying JVM as much I have done.

I also find Adobe’s attitude to CF frustrating. Bugs take ages to get fixed, and way too much emphasis has been placed on poorly implemented RIA functionality to the detriment of the core language.

If there was a project similar to "ColdFusion UI the Right Way", but aimed at any part of CFML (like how to make a DB query in PHP instead of CFML for example), would you be keen to help on it? Would it be of interest to you to be a "user" of it?

I’d be interested in something like this if there was an emphasis on the ‘Right Way’ aspect. It took me a couple of months to learn to code in CF and probably about another five years to learn how to code well. Perhaps that would have been different if I had a computer science background, but I think it’s way too easy to learn bad coding habits. When I’ve dabbled in other languages, I have a niggling worry that I’m writing terrible code that I’ll come to loathe in a short while.
I’m not sure how much help I could be, but I’ll keep an eye on the project.

https://garystanton.co.uk/mining-zcash-on-windows/
Tue, 01 Nov 2016 00:16:38 GMT

ZCash launched on October 28th among some of the craziest price volatility imaginable.
Here’s how I got on, mining to a ZCash pool on a Windows GPU mining rig.

I’ve been playing around with Bitcoin and alt currency mining since 2013, initially building a couple of GPU rigs to mine Litecoin and then diversifying across all manner of coins and algorithms. I’ve been mining Ethereum for some time and until recently no new coin launch has caught my attention enough to move my hashing power.

Recently however, I couldn’t help but notice the hype surrounding ZCash.
ZCash is supposed to be truly anonymous in a way that Bitcoin never was. Anonymity is something that was assumed by early users of Bitcoin, but it was soon discovered that analysis of the blockchain could link transactions to people, fairly reliably.

Several coins have stepped in and tried to fill the anonymity gap, including the likes of DASH, StealthCoin and more recently, Monero.
I won’t go into a technical comparison of competing coins, but suffice to say that no other anonymous coin has enjoyed quite the marketing push that ZCash has... I get a lot of my crypto news from CoinDesk who are invested in ZCash, so that might have something to do with it – but either way I was interested enough to give mining a go at launch.

ZCash mining software

The ZCash wallet is only available on Linux at the time of writing. My woes with Ubuntu and AMD drivers are a whole other topic, so I decided to try mining to a pool on Windows.

At the time of writing there are three main Windows miners for AMD cards that I’ve been able to play around with. My rigs are a little old, housing mostly 7950 and R9 280X cards. Most of the testing for the OpenCL miners seems to be around the RX 480 cards, so I’ve found the miners to be a little slower and less stable on my older cards.

NiceHash:

https://github.com/nicehash/nheqminer/releases
The team at NiceHash had a Windows miner out quickly. Primarily the software is aimed at allowing rig owners to rent out their hashing power on the NiceHash platform, however NiceHash also have a pool at https://zcash.nicehash.com and their software can be used to connect directly to this, or any Stratum enabled pool.
The NiceHash software will mine using CPU as well as AMD and NVIDIA GPUs... all at the same time.
With an AMD 7950 using the latest drivers, I managed to get around 10 Sols/s per card with this software.

eXtremal:

https://bitcointalk.org/index.php?topic=1660023.0
eXtremal’s miner has a version using the SilentArmy solver and another forked from NiceHash. (eXtremal is listed as helping with the NiceHash Open CL implementation).
The standard version was the first that I was able to get working on launch day – the others at the time were all crashing too much to use. However the software is tied into eXtremal's own mining pool, which has a high fee of 4% and had payout issues on launch day.
Speeds varied across rigs, but I was able to get around 12 – 14 Sols/s with a mixture of 7950 and R9 280x cards.
I was unable to get the SilentArmy version working with my cards, but I understand this might be a lot quicker.

Genoil:

https://github.com/Genoil/ZECMiner
Genoil is well known in the Ethereum mining community. He had a few ZCash releases available on launch day, but it was very unstable and I couldn’t get it to run at all with my cards.
The latest version at the time of writing is 0.4.2. It implements the SilentArmy solver and this is the first Genoil release that I’ve managed to get working. It’s running much quicker at around 20-25 Sols/s per card – however it’s not the most stable, it can take several attempts to run and crashes every now and again. It still seems to be the best of the bunch for now, so we’ll address the instability with a bit of scripting further down.

How to get up and running mining ZCash on Windows

AMD’s drivers can be pretty fickle and I’ve not updated in a while, so I decided to run with a fresh install of Windows 7. Here are the steps to get a rig up and running mining ZCash:

Windows update
No, really. The latest AMD Crimson drivers require an up to date version of .NET (4.6, I think) and Service Pack 1 if you don’t have it. Installing the drivers without a fully updated version of Windows is a fool’s errand.

TightVNC
Much as it might look like Remote Desktop is working, in truth it plays havoc with a multi-GPU mining system. The AMD card sensors don’t seem to function, drivers may not install correctly and I’ve been unable to adjust clock settings. Thus, I use TightVNC to access my Windows miners over the network.
Download the server here: http://www.tightvnc.com/download.php

Scripting

The ability to leave your miner unattended is the key to a happy and stress free mining career. With the instability of the current mining software, there are a few steps we can take to keep our rigs running.

First, we’ll write a PowerShell script to check that our software is running, and start it up if not.
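The script from the post isn't reproduced here, but a sketch of the idea follows - the miner path and pool arguments are placeholders for your own setup:

```powershell
# watchdog.ps1 - sketch: relaunch the miner if it isn't running.
# $minerPath and $minerArgs are illustrative; substitute your own.
$minerPath = 'C:\miners\genoil\genoil-zec.exe'
$minerArgs = '-c your.pool.example:3333 -u yourwallet.worker'
$procName  = [IO.Path]::GetFileNameWithoutExtension($minerPath)

while ($true) {
    # If no process with the miner's name exists, start a fresh instance
    if (-not (Get-Process -Name $procName -ErrorAction SilentlyContinue)) {
        Start-Process -FilePath $minerPath -ArgumentList $minerArgs
    }
    Start-Sleep -Seconds 5
}
```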

This script loops every five seconds and checks that the mining software (in this case, the Genoil miner) is running. If not, it fires up a new instance.

Second, we need to allow the software to exit when it crashes.

Out of the box, Windows will attempt to ‘Find a solution’ to the problem of software crashing (has anyone ever seen that work?) and when it fails, will require user input to close the window. We need to stop that from happening so that the process dies completely when the ZCash mining software crashes – and our script can fire off a new instance.
There are two steps to this:

Disable problem reports via the ‘Action Center’ in the Control Panel

Disable the UI dialog box when crashing out by creating a DWORD value named DontShowUI, set to 1, under the registry key: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\Windows Error Reporting
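If you prefer the command line, the same value can be created from an elevated command prompt in one go:

```batch
reg add "HKLM\Software\Microsoft\Windows\Windows Error Reporting" /v DontShowUI /t REG_DWORD /d 1 /f
```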

Third, we need to get the script to run on boot.

There are ways of running PowerShell scripts through group policy on boot, but I’ve not had any luck getting that to work – instead, we’ll create a batch file to run the PowerShell script.

Create a batch file in the same folder and with the same filename as your PowerShell script.
Add the following:
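The batch file contents were lost in extraction, but something along these lines fits the description - %~dpn0 expands to the batch file's own drive, path and name without the extension, so it picks up the matching .ps1:

```batch
@echo off
rem Sketch: run the PowerShell script that shares this batch file's name,
rem allowing unsigned scripts for this session only.
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "%~dpn0.ps1"
```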

What we’re doing here is telling PowerShell to allow unsigned scripts for this session, and running the script we created to boot up the miner.

Create a shortcut to the batch file and place it in the ‘Startup’ folder in your start menu, so that it’ll run on boot.

Finally, we need to make sure that our batch file runs unattended when we boot Windows

Usually the system will wait for us to login before running items in the startup folder, so we need to automatically login.

Click on start and run: netplwiz.
Choose the account you want to log in automatically and uncheck the box marked: ‘Users Must Enter A User Name And Password To Use This Computer’

And that’s it, you’re done!

The mining rig should now automatically run our PowerShell script, which will check for the ZCash mining software and fire it up again whenever it has crashed.

Don’t forget to keep an eye on the various ZCash miners out there – these are very early days and new releases are coming out all the time... these new releases can often result in huge hash rate increases, so don’t get left behind!

Good luck!

https://garystanton.co.uk/ludum-dare-35/
Sat, 23 Apr 2016 00:51:00 GMT

I’ve been dabbling in HTML5 game development with Phaser.io for the last few months and along the way I got wind of the Ludum Dare game jam.

For those who’ve not heard of LD, this is a 72 hour (or 48 if you’re brave) game jam held three times a year. The community decide on a theme, announced at the beginning of the jam, and developers of all ages and experience levels spend a weekend creating a game to fit.
Along the way, people post updates of how their game development is going, and offer inspiration and encouragement.

The LD community spirit

The LD jam has been going on for some years now and a significant community has built up around it.
Reading through the rules and documentation on the website it really comes across that while there is a competitive element, the main point of the event is simply to encourage people to create something. There’s no prize as such, just the satisfaction of having taken part and having something to show for it.

As a complete newbie to game development, I was surprised at just how welcoming everyone was. I spent most of the weekend alone in my office in front of my computer, but very much felt that I was in a group and that we were all in it together.
When I mustered up the courage to post what I considered to be a very mediocre game submission, I was absolutely floored at the positive response it received. Not to say that it was anything special, but it seems people only have positive words to say to anyone who took the time to get involved.

I will definitely be returning!

The theme

The theme for Ludum Dare 35 was ‘Shapeshift’.
Of course, the obvious way to go with a theme like that is a game whereby the main character can change into another form or creature, such as the main character in the game Altered Beast.

My abilities as an artist are very limited, so I couldn’t see myself creating anything along those lines, and instead took the theme literally, using ‘shapes’ and the ‘shifting’ mechanic.
It seems I wasn’t alone in this, with over 2,000 entries, some of the submissions found really novel ways of interpreting the theme. Indeed, the rules state that even adhering to the theme at all is optional – the point is to spend the time creating something. Anything... Just get it out there!!

I set out to do just that, eventually deciding on a simple shape restricted movement game.
Initially I was thinking of a kind of racing game where the player can take different paths around a track, but can only move through shape related obstacles when the correct shape is selected.

As I’ve only been playing with Phaser a short while, there’s plenty that I’ve not learned or experimented with; and after a while of thinking through how my racing game would work it became apparent that too many of the elements involved would be new to me. I would need to spend much of the little time I had, learning how to do new things such as create a level with Tiled and work on camera movement mechanics.

As I’ve been learning Phaser I’ve been applying new techniques to a basic shmup game and using it as a testing ground – so I had a rethink about how I could make a game that is closer to the mechanics and functions that I’m familiar with.

Eventually I settled on an endless runner style game, where the character would have to move through obstacles without getting stuck. There would be a path through the obstacles, but only the currently active shape would allow passage. The player would need to switch the active shape in order to proceed.

Developing my game

Sticking to relatively safe mechanics and functionality, I was fairly confident of being able to code up something playable in the time. I already had a prefab I’d been working on that handled sprite movement, one that dealt with keeping timers and hi scores, and some experimentation I’d done with animating text and such. As long as I kept the gameplay simple, I should be fine.

What I was less confident about is how the game might look. I spent the best part of a day messing about with graphics and mocking up a visualisation of my game. Initially I thought I’d embrace my complete lack of artistic abilities by hand drawing elements and scanning them in – but I wasn’t happy with the results and ended up just using very basic vector shapes.
The minimalist style appealed to me once I’d added some very basic effects. I quickly stumbled onto an understated colour scheme I liked and the style of the game started to take shape.

Initial versions of the game began with very slow moving obstacles, and standard 8-way movement. It worked well enough, but just wasn’t very challenging until the game sped up significantly. To make play a bit more interesting I changed the movement to be rotation based, with left and right spinning the player and up and down moving them forward and backward in whichever direction they were facing. I also found that starting at a much higher speed just felt more exciting. Previously it would have taken about 90 seconds to ramp up to the speed that the game now begins at.

By the second day I had a working game with graphics I liked well enough, but I spent quite a long time trying to fix problems with the collision detection.
Eventually I decided that ‘Arcade Physics’, as it’s known (AABB collision), wasn’t right for the game, and I made an ill-fated attempt at using Phaser's ‘Ninja Physics’ – which allows for differently shaped bodies. Unfortunately, after spending a couple of hours trying to convert the game over to Ninja Physics I found that it wasn’t really complete and didn’t have feature parity with the more basic Arcade Physics. I didn’t have time to work around all the problems that came up, so reluctantly switched back.

By the last day, I had to abandon fixing the collision issues in favour of finishing off the game. I still needed to add instructions, a game over screen, a logo, music and sound, scoring and a few other bits and bobs – not to mention getting the game live.

Despite my reservations, this all went fairly smoothly. I’d been playing around with Mod tracker software recently, following some tutorials. I had a few patterns I quite liked so it didn’t take too long to flesh one of them out a bit into a looping track for the game.

My game data prefab already handled keeping scores and local storage, so all I needed to do was hook it up and create some events to fire at it.

I knocked up some quick graphics for the logo and, when I found myself with an hour or so left, I took a risk and started playing with tweening my font into a nice transition for the intro screen. It ended up working really well and I’m glad I took the time to add that little touch; I think something so subtle adds a great deal to the very first impression of the game.

The finished article

After about 30 hours of work spread across the weekend, I ended up with Shiftah - a hastily named, endless-runner-style shape game.

My first ever game submission was quite nerve-wracking. It's one thing to play with different ideas, but quite another to submit a full game, made in a rush, for judgement. I wasn't happy with my game, but felt I could be proud of what I'd managed to produce in such a short time, and with my level of experience. You can view my LD submission page here.

What I liked

Mostly I was pleased with the clean look and feel of the game. I took the time to add some little bits of polish and I think it elevates the very basic gameplay to something that looks like it's had some effort put into it.
I was very pleased with the feedback I got on the music too. I've spent such a short time learning to use a tracker - literally a matter of hours - so I feel that I can do so much better once I've learned some new tricks.

What I'd change

In truth, if I could go back and try again I'd change the entire game mechanic. I settled on it because something is better than nothing, but playing some of the other submissions has been really inspiring and I'd love to have another crack at it!
Also, there are some glaring problems with the gameplay, specifically with regard to collisions and such.

And finally...

Developing games is awesome. Frameworks these days do so much of the heavy lifting that it's really easy to get some instant gratification. Getting a sprite to move around the screen is so trivial, and having it shoot bullets and kill enemies, equally so.
My advice to anyone who's ever wanted to make a game is to check out some tutorials online, and find out how quickly that itch can be scratched.
Then, enter the next Ludum Dare and I'll see you there.

https://garystanton.co.uk/wordpress-iis-file-permissions/
Fri, 05 Feb 2016 00:25:00 GMT

Anyone brave/stupid/stubborn (delete as applicable) enough to dare to run Wordpress on Windows and IIS is likely to have run headlong into the brick wall that is NTFS file permissions.

The language I'm using here is deliberately ironic. Even in 2016 the level of help you can expect when searching online for Wordpress / IIS configurations is abysmal. The attitude of a great many contributors amounts to the following:

"Why are you using Windows (you idiot)? Don't use that, use LAMP (DUH!) and don't come around here no more, asking your Windozy questions like a n00b... it's called CHMOD, now get back in your damned cage."

The Wordpress docs aren't a great deal better.

So periodically I find myself running into this issue on a Wordpress IIS installation, whereby I'm unable to update plugins or do other things that involve the system putting files into the wp-content folder. Thus ensues the usual frustration of spending hours Googling, clicking on results that look like they might be helpful, only to find that they're for a LAMP stack.

No. More. Damnit!

So I thought I'd document the various things I've found myself trying. Some of these have worked on their own in certain environments, others required a combination of factors.

Check your file permissions.

It's the first place you look when you see a filesystem related error. Check who owns the wp-content folder in Windows. Check who has access to write to this folder and its subfolders.

Many times, this is the only advice you'll be able to find that is Windows centric, and indeed it is very important.
Messages are conflicting, however.

There are various users that could require access to this folder, depending on your setup.
It used to be the case that people would grant access to the NETWORK SERVICE user. This is outdated advice and while it may help, it's not the best option security-wise.
Generally these days I would advocate using the App pool user for the site in question.
If the Application pool for the site is set to the default ApplicationPoolIdentity, then you would want to grant write access on the wp-content folder to the user: IIS APPPOOL\YourAppPoolName - where YourAppPoolName is the name of the application pool in IIS.
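As a sketch, granting those rights from an elevated command prompt might look like this - the site path and app pool name here are placeholders for your own:

```shell
:: Grant the app pool identity Modify rights on wp-content and everything beneath it
:: (OI) = object inherit, (CI) = container inherit, M = modify
icacls "C:\inetpub\wwwroot\mysite\wp-content" /grant "IIS APPPOOL\YourAppPoolName":(OI)(CI)M
```

The inheritance flags matter: without (OI)(CI), only the folder itself gets the permission and writes into subfolders will still fail.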

Other advice states that granting write access to the IIS_IUSRS group will help you - indeed, it might. This group is supposed to house the application pool identities, but it's overkill to grant access to all your IIS users if you have multiple sites.

Another account to try is the IUSR account, which is the anonymous IIS user. Indeed, in many cases you'll find that PHP is running as this user and thus it would make sense that it should have access to folders it wants to write.
In my case, only the Application pool user is necessary, but your mileage may vary.

While this is always the first place to look, it's not always the solution for Wordpress plugin update issues. A good test is to temporarily enable full control over the wp-content folder for 'Everyone'. Please note that I am not in any way advising you do this permanently!
The idea is that if you enable full control for everyone and still have a problem, it's likely not permissions causing your issue.

Procmon is your friend...

As a Wordpress user on Windows, friends are few and far between! (It's ok, I'll be your friend too). Procmon is a lovely piece of software that gives an incredible amount of realtime information and will allow you to get an idea of what's happening and to which files, when Wordpress is attempting an auto update or plugin update. Procmon can be downloaded directly from Microsoft.

It can be a little overwhelming, but using filters you can drill down to see which files are being accessed, and by whom.
A good starting point is to add a filter for the .maintenance file in the root of your Wordpress installation. This file is added temporarily by Wordpress when it's attempting an auto update, and catching it in Procmon will help you to see which user Wordpress is using to write the file.

The temporary upload folder

When PHP is uploading files from forms and such, and indeed when Wordpress is downloading update packages, it tends to store them temporarily in a folder while they're being transferred. By default in many Windows PHP installations, this folder is C:\Windows\Temp.
Obviously this is a standard temporary folder and is shared with countless other processes.

In the first case, this folder might not be fully writable by PHP - but it makes sense to change the folder completely, just to segregate these operations from the rest of your OS.

In the php.ini file (if you've used the Windows Platform Installer you can access this via the PHP Manager in IIS; otherwise you'll usually find it in C:\Program Files (x86)\php or similar) there's a setting that you can change to whatever folder you like: upload_tmp_dir = "C:\YourNewPHPTMPFolder"
Be sure to give writable permissions to PHP on this folder. Again, check what user PHP is running as!

Disabled functions

Another thing that can really trip you up is spending all your time looking at file permissions without realising that PHP may not even have the ability to copy files from one place to another!

Once again in the php.ini file you may find a setting for disable_functions. Newer versions of the Web Platform Installer are able to lock down PHP pretty tightly and disable a whole bunch of functions out of the box, including some file operations that are used by Wordpress when updating plugins.
Remove the following functions from your configuration to re-enable this functionality: move_uploaded_file, chdir, mkdir, rmdir, rename
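As an illustration - the exact list the Web Platform Installer writes varies between versions, so treat this as a hypothetical before/after - the change in php.ini might look like:

```ini
; Before: a locked-down configuration (hypothetical example)
;disable_functions = exec, system, move_uploaded_file, chdir, mkdir, rmdir, rename

; After: the file operations Wordpress needs are no longer disabled
disable_functions = exec, system
```

Remember to restart the app pool (or recycle PHP via FastCGI) after editing php.ini, or the change won't take effect.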

WinCache

WinCache is a popular caching module for PHP. Some versions of the module have been known to cause problems with Wordpress automatic plugin updates on IIS. There's an article about this here: http://ruslany.net/2011/04/wincache-and-wordpress-plugin-upgrade-problem/
Essentially the advice is to update the version of WinCache, but you could simply try disabling it to see if it is indeed the cause of your issue.

YMMV!

These are the main issues I've found when attempting to solve the file permissions problems that plague many Windows IIS Wordpress installations. There may well be more, and I'd be very interested to hear them!


https://garystanton.co.uk/better-explosions-with-phasers-particle-emitter/
Sun, 31 Jan 2016 17:12:00 GMT

So I've been dabbling for a while in HTML5 game development, specifically using the Phaser framework. I've been documenting my progress offline for a few weeks and I'd like to post this online at some point...

For now though, I wanted to write a post about using Phaser’s particle emitter to create an explosion effect that I think’s worked out really well.

The effect combines a single explosion animation, with a particle emitter to make multiple semi-random explosions which really beefs up the effect.

The graphics I’ve been using for B-Type came from GameDevMarket and along with the space shooter pack I purchased came a simple explosion animation that looks like this.

It’s nice enough, but doesn’t really have the impact I was hoping for.

My idea was to fire off several of these explosions, one after the other. I could probably do this by creating a group, adding the explosion animation and using the timer to spawn new instances of the explosion sprite in quick succession with a bit of randomising logic to place them around the collision area...

but wait, isn’t that basically what particles are?

There’s a comprehensive particle system shipping with Phaser, and an even more complex one available as a plugin, so I thought this should be an easy challenge.
Unfortunately, I found it difficult to get any information about using animations as the individual particles. I’d have thought that I could simply add an animation to the particle emitter and have it play as particles are spawned – however, Phaser wasn’t having any of that!
Looking around the forums I found that I could loop through the particles, adding an animation to each, but that didn’t seem to work very well either.
What I landed on was creating a custom particle class, adding and playing the animation in its onEmit function.

The emitter creates six explosion animations using a random scale and position, within very tight parameters. The emitter.start function is set to output the explosions every 50ms.
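A sketch of this approach against the Phaser 2 API - the 'explosion' key, frame rate and emitter values here are illustrative rather than the exact ones from B-Type:

```javascript
// A custom particle that plays its explosion animation each time it's emitted.
// Assumes an 'explosion' sprite sheet has already been loaded in preload().
function ExplosionParticle(game, x, y, key, frame) {
    Phaser.Particle.call(this, game, x, y, key, frame);
    this.animations.add('explode');
}
ExplosionParticle.prototype = Object.create(Phaser.Particle.prototype);
ExplosionParticle.prototype.constructor = ExplosionParticle;

ExplosionParticle.prototype.onEmit = function () {
    // Restart the animation on every emit; kill the particle when it finishes
    this.animations.play('explode', 30, false, true);
};

// Tell the emitter to spawn our custom particles
var emitter = game.add.emitter(0, 0, 6);  // move to the collision point before starting
emitter.particleClass = ExplosionParticle;
emitter.makeParticles('explosion');
emitter.setScale(0.8, 1.2, 0.8, 1.2);  // slight random scale per explosion
emitter.start(false, 400, 50, 6);      // six particles, one every 50ms
```

Setting particleClass before makeParticles is the key step - the emitter then constructs our subclass for each particle, and onEmit fires as each one spawns.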
I’m sure you’ll agree, the result looks much better!

So only one day after setting up my blog on a cloud hosted Ghost instance, I’ve found myself trawling through blogs and documentation on a voyage of discovery – with the intent of hosting Ghost on my own NodeJS server.

Why?!

When I made the decision to go with Ghost, the thinking was that I didn’t want to get bogged down in the creation of a website or management of blog software... instead reasoning that the content I wanted to post should take the highest priority.

That thinking still stands, however I really wasn’t comfortable with the lack of overall control I had without hosting the system myself.
In no particular order, here’s what was concerning me:

URL management

Ghost Pro allowed me to use my own domain... that’s great, but I also want to host other things.

As I’m getting into game development, I’d need a place for the games I make. Many of my posts will contain code. Ghost handles this well, but I also need to post demos of said code. Those demos may be HTML5 and Javascript, they may be CFML, PHP, any number of other technologies as I progress...
The only solution seemed to be to separate these concerns across various sub-domains. I’m not happy with this solution as conceptually, this is all one site.

Hosting Ghost myself, allows me to place the blog content under a subfolder, and allows other content to exist wherever I like.

No SSL

Wait, what? Seriously?
Apparently I can use SSL if I host through CloudFlare, which I have no intention of doing. Otherwise I’m stuck with standard non-secure hosting.

Theme development

I found I had no way of previewing changes I made to my theme as I have no local version of the Ghost software.
Additionally, the workflow seems to be:

Make a change

Save the file

Create a zip file of the entire theme

Upload it to Ghost

Refresh the browser

Repeat

That’s insane! My usual workflow of Alt-Tab, F5 is infuriating enough already... I really need to get some live reload functionality working, but I’ve run into problems with that on Windows. Apparently Grunt is the way forward there, but I've yet to delve into that beyond build minification... I’ll get there eventually.

So if I host the software myself, I can at least save and refresh.

Node JS, baby!

Over the last few years I’ve found myself using Javascript more and more, out of sheer necessity. I’ve been planning to embrace this a little more and start dabbling with Node.js for server side development for some time.
I’ve run Node locally for some basic Grunt build tasks, but I’ve not really advanced beyond that. Hosting Ghost necessitates running a Node.js instance on a live server, and getting my hands dirty a little.

My hope is that as I want to add bits and pieces to the site outside of Ghost, I will attempt to do that with Node first.

But I still love CFML...

I do. It’s true.
CFML made me a developer.
Years of experience made me a pretty good one.
Contracting and inheriting code made me confident enough to make that last statement... There’s nothing quite like inheriting god-awful code, to combat impostor syndrome!

So while I want to get out of my comfort zone and start developing for Node, I’d like to know that if I just need to get something done I can switch back to old faithful, by hosting Node alongside a Lucee instance.

I can’t see the database!

Yeah. That just doesn’t sit well with me. Ghost apparently has a nice API in beta that will allow me to pull content out and do whatever I want with it, sure... but there’s nothing quite like delving into the database and bending the data to my will.

No. If there’s an option that allows me to access the DB, I’m taking it.

And so, to work!

With the decision made, it’s time to get down to it.
Installing Node is straightforward. The homepage of https://nodejs.org points directly to the download. We want the LTS version, since that’s what Ghost supports.

IISNode

I want to run Node through IIS. Partially because I want to be able to use other technologies on the same site and partially because it’s what I’m comfortable with. I also don’t fancy opening up more ports on my server.

IISNode seems to be what I’m looking for here: https://github.com/tjanczuk/iisnode
Setup is a straightforward installer, and running a batch file adds a bunch of samples to the default website.
Initially, these samples don’t work and I receive the following error:

The iisnode module is unable to start the node.exe process. Make sure the node.exe executable is available at the location specified in the system.webServer/iisnode/@nodeProcessCommandLine element of web.config. By default node.exe is expected in one of the directories listed in the PATH environment variable.

A quick Google shows I need to restart the WAS service, thus: net stop was /y & net start w3svc

I’m now able to see the hello world sample. That was fairly painless... hurrah!

Installing Ghost

Again, this is fairly straightforward. Download the latest version of Ghost from https://ghost.org/download/ and unzip the contents to a folder under the webroot.
Open a command prompt, navigate to the folder containing the Ghost installation and run the following command: npm install --production

Updating the config.

I now have a version of Ghost installed, but before I start it I need to mess with some config settings.
In the root of the Ghost setup is a file, config.example.js. If I run Ghost now, it’ll copy that over to a new file, config.js. I may as well do this myself.

In the config.js file there are a few settings we need to change.

url: Set the URL of your blog.
mail: In order to send password reset emails, we need to tell Ghost about our email setup. Detailed instructions are here: http://support.ghost.org/mail
server: Since we’re using IIS, we need to tell Ghost to let IIS decide the port: port: process.env.PORT
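Pieced together from the settings above, the production block of config.js might look something like this - the URL and mail details are placeholders:

```javascript
// Relevant parts of config.js - the values shown are placeholders
config = {
    production: {
        url: 'https://example.com',
        mail: {
            transport: 'SMTP',
            options: {
                service: 'Mailgun',                    // or your own SMTP provider
                auth: { user: 'you', pass: 'secret' }  // placeholder credentials
            }
        },
        server: {
            host: '127.0.0.1',
            port: process.env.PORT  // let IIS/IISNode decide the port
        }
    }
};
```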

Update web.config

We need to tell IIS to forward requests to Node. In the root of the Ghost installation, add this to the web.config, under <configuration><system.webServer>:
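As a sketch of what goes there - the handler registration plus a catch-all rewrite rule pointing at Ghost's index.js (the rule and handler names are assumptions):

```xml
<!-- Register the iisnode handler and rewrite all requests to Ghost's entry point -->
<handlers>
  <add name="iisnode" path="index.js" verb="*" modules="iisnode" />
</handlers>
<rewrite>
  <rules>
    <rule name="ghost">
      <match url="/*" />
      <action type="Rewrite" url="index.js" />
    </rule>
  </rules>
</rewrite>
```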

Running Ghost

I read a lot of articles during the course of setting this up. Most of the ‘Ghost on Windows’ tutorials include a manual starting of the app, using npm start --production.

What’s not been made clear, but actually makes sense now that I think about it, is that this isn’t required when using IISNode. The module will automatically start the Node app if a request is made to the URL.
Thus, you can now visit your blog URL and you should see the default Ghost blog homepage!

Hurrah! You now have a working Ghost installation on IIS.
Hopefully.

A note on environments

Node has an understanding of different environments out of the box, using the NODE_ENV setting. You may have noticed in your config.js, there were settings for both development and production environments.

Ghost runs in development by default, but obviously on a production server you’ll want to change this.
Since IISNode is starting the app for us, we can’t simply set the environment in the command line, but we can change the default in the /core/index.js file, thus: process.env.NODE_ENV = process.env.NODE_ENV || 'production';

I’m not overly happy with this, for two reasons. Firstly I don’t feel right editing anything in the core. Presumably updates will overwrite this setting, so it’s something to remember whenever you update Ghost.
Secondly, this messes with my version control. I don’t want to have to have a different setting on dev than in production, and I usually create some form of conditional server interrogation to avoid this.

For now however, I can’t seem to find another way of doing this – short of having the .js file auto updated by something else server-side.

Update: Using HTTPS

When deploying on my production server, I found that the site was returning an infinite redirect.
It turns out that if you try to use an https url, you need to add a header to the request. I found a github fork of Ghost specifically for Azure, and this had the answer:

Create a file in your Ghost root called iisnode.yml, and add the following:

The sad truth is, of course, that all our good intentions, and indeed our enthusiasm for them, generally peter out by about mid-February.

Not this time though!
Let us live in this delusion a little while longer, as I explain that in 2016 I've decided to create my own little space on the web, in the guise of the delightful blog you currently find yourself reading.
(Really, how did you get here exactly?!)

All change please.

For years I've had a blog over at Simian Enterprises that housed my various code shenanigans, and occasional rants.
As with many blogs, the frequency of updates became less and less, as I became busier and busier. It was always my intention to post little updates detailing what I'd been working on with samples of any code I thought worth sharing... but who has the time?

Over the last couple of months, I've been dabbling a bit in HTML5 game development. I've been documenting my progress offline, with the intention of posting online at some point. Given that this has little to do with my application development work, it seemed that the Simian Enterprises blog wasn't the right place for this - and so I decided I would retire the old blog and transfer the content over to my own personal site at garystanton.co.uk.

Posting on a specifically personal site rather than on my company website, should allow me to write more diverse content. I code, lots. But I do other things too, and I also have strong opinions no-one really wants to hear. What better place for them than this little corner of the interwebz?
Plus I can swear here too. Fiddlesticks!
See?

Stand clear of the platform.

I've had ideas about this blog floating around in my head for a couple of years now. As with a lot of developers, I spend so much time and effort working on projects for clients that I very rarely find the time to work on my own site, and so nothing ever really happened.
I realised that I was never going to sit down and spend the time to hand craft the perfect website and so decided to just get it done - and to that end I'm using a cloud hosted blogging solution, Ghost.

This feels odd to me. I like to build things from scratch. It's what I do. I like to build them in CFML primarily. I like to have full control over the minutiae of every byte... I'm actually writing some basic blogging software for a client at the moment!
Handing over the reins to another system feels... weird.

I also realised that I probably wasn't going to find the time to push pixels around in Photoshop and come up with a lovely, expressive design that wowed viewers whilst alluding to the nuances of my character... etc.
So for now I've simply downloaded a theme that appealed to me (it no longer appeals to me - do you like the new theme? ;))
I'm not sure how I feel about Ghost yet. Again, I wasted lots of time wading through the various blogging platform options and there's too many to really know if I've made the right choice. I'm fully prepared for the possibility that I will migrate to another platform, or eventually roll my own - and you, dear reader, should be prepared for this eventuality also. If you're reading this in 2016, the blog may well look completely different the next time you visit.

I'm also not sure whether Ghost can handle importing my old posts from the Wordpress powered Simian Enterprises blog. I don't feel a blog is right for that site any longer, but I'd still like the content to be available here.