In the last six months I’ve launched two export tools for Adobe Illustrator.

Smart Layer Export

This tool followed on from a free export panel I released some time ago. It provides a quick way to export artboards and layers into a variety of formats. I’ve added a ton of features and usability improvements to the tool, and it’s been getting a great response.

Promises

Sometimes when using @inject to inject multiple properties, it’s helpful to know when all of the properties have been injected. This can be done by adding the @promise metadata to a method, with a list of the properties that should be watched. The method will be called immediately after all of the properties have been set.

Notice that it is possible to use non-injected variables in your promises (‘active’ in the example below). At compile-time these variables will be wrapped in getters/setters to facilitate the watching process (or the setter will be amended if one already exists).

```haxe
class MainPlayer extends AbstractTrait {

    public var active:Bool;

    @inject({asc: true}) private var gamepad:IInputDevice;

    @inject private var controller:CharacterMotionControl;

    public function new() {
        super();
    }

    @promise("active", "gamepad", "controller")
    public function onPromiseMet(met:Bool):Void {
        if (met) {
            trace("Both traits have been added and are ready to bind together");
            controller.setInput(gamepad);
        } else {
            trace("One of the traits is about to be removed, unbind them now");
            controller.setInput(null);
        }
    }
}
```

How promises behave

Your bound method will first be called as soon as all applicable variables have been injected; the met parameter will always be true on this first call.

If any of the properties is later set to another value, the method will be called twice: once before the property is committed, with met=false, then again after the value has been set, with met=true. In practice this will only happen when promises point to non-injected properties (which are set outside of the composure framework).

If any of the applicable traits are removed (i.e. the property is set to null), the method will be called again with met=false.

Future features

There are a few other ideas floating around for more features and improvements, which I’ll have a look at when I get a chance:

Compile-time binding for certain pre-established objects. This will make things much faster and lighter when generating lots of the same type of game objects (a bunch of enemies, for example).

A light and simple event system for communicating events between the traits of a ComposeItem. This would also come with metadata support.

A simpler, lighter version of the @inject metadata that trades some flexibility for performance.

SVG is a great graphic format; it scales perfectly for different screen pixel densities, so it looks crystal clear on all devices.
Being a vector format, it’s also typically smaller than raster-based formats (e.g. JPEG, GIF).
Its only drawback is that it consumes resources to be rendered, but this has been almost entirely mitigated in recent years by browser vendors optimising their rendering software.
It also has the power to include animations within a single file, through the underutilised SMIL standard. These animations are also very clean and light.
Unfortunately there is little tooling for generating these animations, which is why I have been working on this export panel for Flash for some time now.

HaxeBridges is a library which allows a single project to be compiled into separate parts, and for separate platforms.
This is useful in many situations, including multi-threading, client/server splits, etc.

The idea is that your main application should be able to naturally use objects which will be published via a different platform/compile.

The example included in the repository now is a simple example of creating a worker thread (flash only for the moment).

I recently used a small video camera on an overseas journey and realised on my return that the video files it generates are formatted in a way that prevents them from being read by most professional video editing/converting software. Getting around this took a little research, so for those having similar issues I thought I’d post how I fixed the files. These instructions are for Windows; if anyone converts them for use on OSX, please post in the comments.

First, download ffmpeg, look for the link with “Static” in the title under the correct OS type heading (32-bit or 64-bit).

Open this compressed file (you might need to install 7-Zip if you can’t immediately open the archive).

Extract the bin/ffmpeg.exe file from the archive and copy it into the folder with your video files.

It will create new video files (in H264 format), after which you can delete the original AVI files.
Now you should be able to move these two files (ffmpeg.exe and the batch file) into any folder with broken video files, run the “Convert.bat” file, and it will fix your videos.

If, like me, you had your camera mounted upside-down for practical purpose, then use this script instead, which will also rotate the video 180 degrees.
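The batch scripts themselves are short. Here’s a sketch of what they might look like (the libx264 codec and .mp4 container are assumptions; adjust them to whatever your editing software expects):

```bat
:: Convert.bat (a sketch): re-encode every AVI in the folder to H264.
for %%f in (*.avi) do ffmpeg -i "%%f" -vcodec libx264 "%%~nf.mp4"

:: For the upside-down camera case, add a flip filter to rotate 180 degrees:
:: for %%f in (*.avi) do ffmpeg -i "%%f" -vf "hflip,vflip" -vcodec libx264 "%%~nf.mp4"
```

The `%%~nf` expansion gives the filename without its extension, so each broken AVI gets a matching .mp4 alongside it.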

While working on a project which required rich vector animations in the browser, I came across Dave Belias’ library for exporting still SVG frames from Flash. I wondered if I could re-purpose it to export Animated SVGs, a relatively unknown standard for containing fully animated imagery within a single SVG file.

The original tool was built for raster outputs, and whilst it did support EPS, each EPS file that was generated actually had all layers included (all invisible except one).
So after some reworking I got it to generate small vector SVG/EPS files, each containing only what was needed.

I also made it easier/cleaner to add new formats and added a few usability tweaks (options which don’t apply to the current output format are disabled).

Installation & Usage

In Illustrator, use the File > Scripts > MultiExporter option to bring up the dialogue box; it’s all fairly self-explanatory.

Notes

I made minimal changes to Matt’s functionality, here are his original notes with my modifications.

Supported formats are: PNG8, PNG24, PDF, EPS & SVG

You can choose whether to export all the artboards in the document with the currently visible layers showing, to export a file for each layer in the document on the currently active artboard, or to export a combination of all the artboards multiplied by all the layers.

Files are named based on their layer name. Originally, it only exported layers whose names had been changed from the defaults (“Layer 1”, “Layer 2”, “Artboard 1”, “Artboard 2”, etc.). I removed this feature, but might add it back as a configurable option.

If you put a minus sign (-) in front of a layer name or artboard name, it will skip that layer or artboard. (Useful for when you no longer decide you like a particular mockup, but don’t want to delete it from the master Illustrator document.)

For layers only: If you put a plus sign (+) in front of a layer name, that layer will always be visible. Useful for if you want to have a layer that always appears in the background of each exported image.

It stores its settings in a nonvisible, nonprinting layer named “nyt_exporter_info”.

This will make the panel resizable.
(Thanks to Alexey Tcherniak for looking into this)

Edits 11/04/2013 – 8/05/2013

There were some issues with the alignment of objects when multiple artboards existed, which I’ve fixed.
It also now avoids outputting imagery when nothing would be included; this makes the “Artboard + Layers” output much more useful (as you’d rarely have one layer spanning multiple artboards).
I have added the functionality to trim the exported files to their visible size (as opposed to the artboard’s size); this allows mouse interactions outside the visible area to pass through to regions behind the SVG.
Fixed an issue where layers containing only invisible items (e.g. Guides) were causing an exception.
Fixed an alignment issue which appeared if the artboard had been resized after creation.
Fixed a cropping issue with large layers (and the Trim option).

Edit 17/05/2013

Separated the artboard and layer selection, which now allows for more fine-grained setups.
Now allows for ‘Trimmed Edges’ functionality for all output formats.

Edit 17/07/2013 – 28/08/2013

Edit 6/12/2013

Added a mode to avoid visual clipping of round backgrounds.
Thanks to John Ford for the input.

Edit 7/8/2014

I’ve changed the file naming system to be much more flexible, using a token pattern instead of the prefix/suffix fields. Try it out here.

NEW VERSION

There’s a brand new version of the tool, rewritten from the ground up; it’s way more powerful and allows multiple image formats to be generated in a single execution.
It also avoids the group issue that was the cause of that annoying message box.
I’ll still be offering this version, but if you get a lot of use out of it, buying the power version would be much appreciated.

Over the last few years XML has become a very unfashionable standard, and in the Haxe world especially it has drawn a lot of fire for being too verbose and containing too much redundancy.
I still use XML when I feel it’s appropriate, and I wanted to explain why.

When is XML the wrong choice?

In the past, XML has been used as a way of serialising data as input for a known web application (either from the back-end or from a file). I see this as a misuse of the technology. If it is not a public service, it has no reason to be easily human-readable, and deserialising XML at runtime is wasteful. Ironically, this is what happens every time your browser loads a page, and no-one seems too concerned about that.

Syntax problems with XML

I will also happily admit that the closing tag in XML is absolutely useless. Some might argue that it adds checkable redundancy to the file and avoids misspelling an opening tag, but that raises the question: why not make all XML structures redundant? (Does anyone think closing an attribute by retyping its name sounds good?)

The same could be argued of the CDATA tag, which seems to be avoided simply because of its utter ugliness.

YAML?

YAML is like Brittany Murphy (was): it’s pretty, but it has some problems. Firstly, it doesn’t scale well. Because it uses indentation as syntax, the deeper your hierarchies become, the heavier each individual item is (so a simple piece of information can use 10x the necessary characters, if deep enough). Sometimes removing indentation can make code more readable, and XML allows this (as does JSON), or it can be removed altogether when needed. In XML and JSON, you may choose to put several bits of information on the same line for brevity (normally as attributes), but YAML does not allow this.

This is not to say YAML does not have its uses. I believe it will find its place representing small pieces of structured data in broadly editable systems (like Wikipedia or something).

JSON?

In my opinion JSON is a strong contender to topple XML, and should be used in many public web-services instead of XML/SOAP. It’s human-readable and light on its feet. I have had many more issues resolving JSON serialisation incompatibilities than XML ones, but this is probably a result of its more organic evolution into a standard.

Where I see XML rise above JSON is when it comes to namespaces. Lots of developers will go their whole careers without needing to understand XML namespaces, but once you do, you’ll realise that XML is a little more than a serialisation format. It allows different nodes to be scoped differently, avoiding any naming collisions. This might sound trivial, but it means that XML data-sources can be annotated with tags from multiple different systems, each of which can process the file without disturbing the other nodes, all without any need for delimiters (as in some template languages).

I’ll give you an example: you could use XSLT (an XML-based language) to transform Android layout files (another XML-based language) into XHTML. Neither of these standards was built to be aware of the other. And the output XHTML tags within your XSLT files can live as first-class citizens alongside your transformation tags only because of namespaces. In this scenario, the resulting format does not even have to use namespaces to benefit from having them. So by using XML as an input for your application, you will have made it many times more flexible than otherwise.
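To make that concrete, here’s a minimal sketch of such a stylesheet (the LinearLayout match and the class name are purely illustrative):

```xml
<!-- XHTML output tags sit alongside XSLT instruction tags, disambiguated
     purely by their namespaces. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">
    <xsl:template match="LinearLayout">
        <div class="linear">
            <xsl:apply-templates/>
        </div>
    </xsl:template>
</xsl:stylesheet>
```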

When is XML appropriate?

When interpreted at compile-time, XML performance issues all but disappear, and the opportunities in pre-processing the XML add a huge degree of flexibility to a system.
I would argue that this is where XML belongs, and where it will hopefully thrive as its misuse recedes.

Guise is not Haxe 3 ready yet, only 2.10 (although there is a branch being worked on)

Current usage types

Before getting into the details, I’ll show a few examples of Guise using different platforms/styles.
You’ll notice that I’ve only implemented a handful of controls so far; this has been done to keep the codebase flexible while the core architecture is still being finalised.

Graphics API Style

This style uses a flash-type graphics API to draw UI elements to screen.
Skins are written in XML which is interpreted at compile-time (no loading/parsing XML needed).
Transitions between skin states are generated automatically based on the skin (although would probably be customisable in future).

Currently it is using NME, although it can support any number of drawing APIs (and did support CreateJS at one point).

Bitmap API Style

This style uses a texture-based bitmap API to draw styles on the GPU.
Currently it’s using Starling as its underlying display platform, but we anticipate dropping Starling support in favour of Nicolas’ H2D library (when it is a little more mature and supports a few more platforms).

There is also no reason that NME couldn’t support this type of skin, so expect to see that in the future (this would allow use alongside the graphics-type API above).

HTML5 Wrapper

We see this wrapper coming in handy when a native app needs to be pushed to the web.
Obviously you’d want to get some CSS in there as well.

I’ve just included it here as an image for those on old browsers (sliders are very weird in IE9, btw).

Waxe Wrapper

Waxe is a Haxe wrapper for wxWidgets, a native UI binding library for Windows, OSX and Linux. Unfortunately the project seems to have stalled (Hugh?), but I still believe it’s a good starting point for native UIs (especially for OSX).

The wrapper classes made for Waxe would hopefully provide the base for wrapping a similar solution for mobile, wrapping a library like Basis or MoSync NativeUI (with a set of externs), which would then open up support for iOS, Android and WP7 native controls.

Disclaimer

Whilst I haven’t pushed anything to haxelib yet, I’ll include the information below for anyone who wants to poke around the repository and test it out.

Setup

To use Guise with a native UI wrapper, you’d do something like this (file paths will have to be relative to calling class):
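As a rough illustration only (the class and method names below are hypothetical, not Guise’s real API; see the repository’s examples for the actual calls):

```haxe
// Hypothetical sketch: none of these identifiers are Guise's real API.
class Main {
    static function main() {
        // Install the platform wrapper; only this line changes per target
        // (file paths are relative to the calling class).
        GuiseWaxe.install("skins/native.xml");

        // From here on the code is identical across all platforms.
        var button = new Button();
        button.label = "Click me";
    }
}
```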

The code is exactly the same regardless of the intended platform, just the initial install call changes.

Skinning

The skinning system is very flexible and doesn’t push an anticipated structure on the developer. For example, a TextInput control can be skinned to have no text field, or ten of them, or an icon, or whatever your designer desires.
Each control has states, such as MouseOver, Inactive or Focused; multiple states can be active at once. Each control also has layers, and these layers can take on a different appearance based on which states are active. Each layer can also change its size and position based on which states are active (and transitions between these positions will be generated).

Future?

As mentioned above, we’re waiting for a few other Haxe libraries to stabilise before integrating them, but once they have been integrated we’ll have all popular platforms covered in one way or another.

Native Layouts

Currently Guise has a few Haxe-based layouts and I intend to expand on this, but sometimes there is no substitute for native layouts. Supporting these would be a matter of providing some sort of NativeLayout class which would read a platform-specific layout file (XIB for iOS/OSX, Layout XML for Android, XAML for windows). The generation of all of these layout files from a common source file is the subject of another research project I’m working on (slowly).

Skin editor

As skinning is done in XML, I will at some point look at the viability of building a visual editing tool. This would use vector based drawing tools which could then be exported as either Graphics skins or Bitmap skins (with scale-9, bitmap fonts, etc).

When working with XML files, it’s often convenient to break the data structure down into smaller parts, each saved within a separate file.
There is a standard called XInclude which allows XML sources to reference other XML sources which can help reassemble your separate files into one structure.

As part of my XML Tools library, I’ve implemented an XInclude system which takes in a root XML file, loads in any referenced XML files and returns the complete XML structure.

Note: I’m well aware that XML has become the whipping boy of the web-dev world, but despite its verbosity we still have to deal with it.

Structuring your XML

To reference a file from within your root XML file (or any subsequent file), use the ‘include’ element like this:

```xml
<!-- root.xml -->
<root>
    <include href="child.xml"/>
</root>

<!-- child.xml -->
<child>
    <grandchild/>
</child>

<!-- results in -->
<root>
    <child>
        <grandchild/>
    </child>
</root>
```

Or if you’re already using elements with the name ‘include’, you can use the XInclude namespace:
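A sketch using the standard XInclude namespace URI (how strictly the library checks the URI is worth verifying against its source):

```xml
<root xmlns:xi="http://www.w3.org/2001/XInclude">
    <xi:include href="child.xml"/>
    <include>left untouched, since it is not in the XInclude namespace</include>
</root>
```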

As per the spec, if you want the referenced file to be added as a text node, you can specify using the ‘parse’ attribute:

```xml
<include href="child.xml" parse="text"/>
```

I’ve also added a feature which is not in the spec, but which I have found useful in the past. Using the ‘inParent’ attribute, you can have the root element of the referenced XML file ignored, with all of its attributes and child nodes being added directly to the parent node of the ‘include’ element. Here’s an example of the results:

```xml
<!-- root.xml -->
<root>
    <include href="child.xml" inParent="true"/>
</root>

<!-- child.xml -->
<child attribute="test">
    <grandchild/>
</child>

<!-- results in -->
<root attribute="test">
    <grandchild/>
</root>
```

Installation

As per usual, you have to install the xmlTools library from haxelib like this:

```
haxelib install xmlTools
```

And then include this library in your project settings.

Usage

Scroll to the bottom if you’re interested in using the tool from the command line.
By default it uses the ‘mloader’ haxelib for all of its file-system access (you can use your own I/O system by implementing the org.tbyrne.io.IInputProvider interface).
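Here’s a sketch of the call (the class name and signature are assumptions; check the repository for the real API):

```haxe
// Hypothetical sketch: the actual xmlTools entry point may differ.
var includer = new XIncluder();
includer.process("root.xml", "assets/xml", function(result:Xml):Void {
    // 'result' is the fully assembled XML structure.
    trace(result.toString());
});
```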

The first parameter here is the name of the root XML file, the second is the folder where all of the XML files are located.
The XML file-paths can include sub-directories of the root directory passed through.

The first argument is always the root XML file to operate on.
The second argument (-d) is the root directory that contains all of the XML files.
The third argument (-o) is the output file to write the result to.

Multiple executions can be done with one command using the ‘--’ separator:
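Something along these lines (the executable name and exact flags here are assumptions, based on the argument descriptions above):

```shell
# Hypothetical invocation; flag names per the descriptions above.
xmlTools root.xml -d xml/ -o out/root.xml -- other.xml -d xml/ -o out/other.xml
```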

I recently needed to navigate some XML in Haxe and noticed that there were few options for doing this quickly and easily in Haxe.

I did notice Oleg’s Walker class which brings some of the E4X functionality of AS3 to Haxe.
While the resulting code was more elegant than hand-writing loops and tests, it still felt too verbose, and I decided to add some macro sugar to it to cut down the syntax (and bring it closer in line with the E4X spec).

The result is the E4X class, which reduces the amount of code by 2-3 times (in comparison to a fully runtime, function-based solution). Due to haxe language restrictions, the resulting syntax is not quite as compact as the AS3 equivalent, but it’s close.

Usage

E4X expressions must be wrapped in the macro call, and they return an iterator of values (the type of which is based on the last part of the expression).

To get all children:

```haxe
var nodes:Iterator<Xml> = E4X.x(xml.child());
```

Here are some different ways to get a list of all the child nodes with the name “node”:
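For illustration only (the exact method names in the Haxe port may differ; these sketches follow AS3 E4X semantics):

```haxe
// Method names here are assumptions following AS3 E4X semantics.
xml.child("node");          // direct children named "node"
xml.descendants("node");    // "node" elements at any depth
xml.child().name("node");   // filter children by name after the fact
```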

Note that all of these examples should be wrapped in the E4X.x() call, as in the code snippets above.

Performance

I also ran some performance tests for several targets (and the equivalent tests in AS3 E4X), the results of which are below.
This helped me make some performance improvements to Oleg’s original code, and I managed to squeeze an extra 25-30% increase in performance out of it.

Surprisingly, the JS target seems to perform best overall (although this is probably more a result of Chrome’s JS engine).
Even after my improvements, the AS3 target was woefully slow in comparison to its native counterpart, although all of the other targets seemed to hold their own, with more complex expressions becoming faster than the AS3 E4X equivalent (if anyone knows why the AS3 target performs so poorly, let me know).

On a recent job I was tasked with creating a visually elegant replacement for an image out of some text (when an image was unavailable).
I decided to adjust the font-size of each line of text in a block to fill a box.

The result is the TextBoxTest class.
To use it, you create a text field, set its properties (text/size/multiline, etc.), then send it to the TextFitter class like this:
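A sketch of the call (the static ‘fit’ method name is an assumption; check the class for the real signature, though the TextField setup is the standard flash API):

```haxe
// Hypothetical usage: the 'fit' method name is assumed.
var field = new flash.text.TextField();
field.text = "SOME REPLACEMENT\nTEXT";
field.multiline = true;
field.wordWrap = false;
field.width = 300;
field.height = 150;
TextFitter.fit(field, 40); // second parameter is the 'defaultSize'
```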

The second parameter is ‘defaultSize’; this gives the TextFitter a starting point when resizing the text and does affect the end result. If omitted, the font-size from the ‘defaultTextFormat’ will be used.

If wordWrap is set to false, you’ll have to manually add line-breaks. In this mode lines of text will simply be resized until they’re the same width as the field itself (the height of the field will be ignored).

If wordWrap is set to true, the height of the field will be taken into account, and the text will be reduced in size until it all fits within the field. If ‘defaultSize’ is omitted and ‘wordWrap’ is true, TextFitter will first attempt to maximise defaultSize so that it always fills the field (even if the defaultTextFormat.size property wouldn’t ordinarily fill the field).

It’s worth noting that the code uses multiple while loops, and while it has internal limits on iterations, I’d recommend running it once and then generating images from the output.

Following some pretty good feedback on Composure, the composition library for Haxe, I decided to get some code documentation published.

The result was a batch file which would generate documentation in Markdown format, which can then be manually committed and pushed to the github wiki. You can check out some examples of the results here, here & here.

The Documentation system

I wanted to have something generated directly from the code, and I had a preference for having it hosted within the Github Wiki system (for the simplicity of having code and docs accessible from the same place).

When a repository is created in Github, the system automatically creates a second repository to store the wiki files. These can be edited via the Github web interface, or by cloning the wiki repository onto your local hard-drive and editing the files manually. The files are in Markdown format, a simplified formatting language which gets converted to HTML by the Github back-end.

I’d need to generate Markdown from my code and have it placed into the wiki repository. I only found a few documentation systems which processed Haxe code, and only one of them allowed for custom templates: ChxDoc.

There were a few limitations to ChxDoc; specifically, you have no control over what files get generated, or what file type it spits out, so I’d have to reorganise and rename the output files as part of my batch file. ChxDoc works by processing an XML representation of the code, which is created by running the haxe compiler with the -xml flag; I’d include this step in my batch file as well.

The Templates

I copied the default ChxDoc template, stripped all of the HTML tags out, then added in the Markdown syntax. I didn’t need several of the output HTML files, so some of the template files remained untouched (deleting them caused ChxDoc to fail).

Setting up the Repositories

To streamline the documentation process, I added the wiki repository as a submodule to the main repository; this means that the wiki files always sit in the same position relative to the main source code (i.e. in a ‘github-wiki’ folder).

In the ‘build’ directory, I created a batch file which does the following:

Generates the XML representation of the code using the haxe compiler.

Deletes the old documentation folder.

Regenerates the documentation using the chxdoc program, my templates and the xml code graph.

Deletes irrelevant generated files.

Renames the ‘All Classes’ file (changing the file type from html to md).

Changes the remaining html files to md files.
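Put together, the batch file looks something like this (a sketch only: the ChxDoc flags, file names and paths here are assumptions; the real file lives in the repository):

```bat
:: Sketch of the documentation batch file; flags and paths are assumptions.
:: 1. Generate the XML representation of the code via the haxe compiler.
haxe build.hxml -xml doc.xml
:: 2. Delete the old documentation folder.
rmdir /s /q ..\github-wiki\docs
:: 3. Regenerate the documentation using chxdoc and the Markdown templates.
chxdoc -o ..\github-wiki\docs --templatesDir=templates doc.xml
:: 4. Delete irrelevant generated files.
del ..\github-wiki\docs\index.html
:: 5. Rename the 'All Classes' file, changing the type from html to md.
ren ..\github-wiki\docs\all_classes.html All-Classes.md
:: 6. Change the remaining html files to md files.
for %%f in (..\github-wiki\docs\*.html) do ren "%%f" "%%~nf.md"
```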

The batch file, template and ChxDoc are all included in the Composure repository (don’t tell me that binary files don’t belong in a repository, you nazis), or you can just check out the batch file here (which will obviously only function on Windows).

Edit: 06/02/2013

Over the years I have realised that inheritance is massively overused by developers, and that by replacing it with a solid composition design, a lot of code becomes a lot more reusable.

Some languages and platforms have native support for composition (e.g. Unity3D), but for the languages I use there was nothing, so about two years ago I built a lightweight composition framework for AS3 called Composure. I’ve recently completely rebuilt it for Haxe, utilising Haxe’s awesome typing and macro systems to make this small library really powerful.

I’ve just released a new Android App.
It allows you to watch out-of-copyright videos from the Internet Archive Database on your phone or tablet.
It’s currently early days and it doesn’t really have any browse functionality yet, just search fields.