Developer Postmortem

With the submission phase of the 2013 App Innovation Contest now concluded, I have found a moment to catch my breath and reflect on the breakneck development process of the past six weeks. In this lengthy postmortem wall-of-text, I will touch on the successes and challenges I met while working toward a fun and entertaining desktop application compatible with the Lenovo Horizon All-in-One and the Aura Interface. This article is not a code dump, but rather a detailed and hopefully insightful explanation of the approaches I took throughout development and how I overcame many obstacles along the way.

Foreword

When I first reviewed the specs of the Lenovo All-in-One, I was immediately attracted to its gigantic 27" screen and full 1080p resolution, coupled with its 10-point multitouch capabilities. A sketching app seemed like a natural fit, and I instantly envisioned a table-top drawing pad that all members of a family could gather around and enjoy. As it turns out, I wasn't the only one with such a vision. At least two other entries in the Entertainment category are also drawing apps, although each has its own unique character. Other similar concepts may exist outside of CodeProject, but I have a difficult time navigating the foreign AIC 2013 sites.

Three years ago, I developed a drawing app known as Scribblify for iPhones (and subsequently, iPads). It was developed using Objective-C and was strictly a mobile iOS application. The interface and all features were designed for low resolutions (480x320) and limited hardware capabilities. The most distinct features of the app included its original and highly varied brushes, symmetry drawing capabilities and special color effect modes. The goal of the app was not to compete with serious illustration apps, but rather to allow anyone, regardless of age or skill level, to instantly create colorful and unique artwork. Scribblify gained something of a cult following, especially in schools; with over a quarter million downloads, it continues to be enjoyed around the globe.

ABOVE: The original mobile version of Scribblify, circa 2010. Although updates have been made to the app since then, including retina and iPad support, the overall interface remains mobile-centric and is designed to fit in a 480x320 area; using such an interface on a desktop version would feel constrictive, at best.

Armed with general familiarity with developing creative drawing applications, as well as prior experience developing for touch devices (both mobile and desktop), I proposed as part of this competition to redevelop Scribblify from the ground up for desktop platforms. It may have been theoretically possible to do a more direct port of the original, but the substantial differences between mobile and desktop behavior, and the relatively primitive GUI I designed originally, would have made a direct port undesirable and unintuitive on the large All-in-One.

Approximately six solid weeks of development time and three major changes to the core programming methodology later, my completed product was uploaded to CodeProject. It inevitably contains some hiccups and is missing some features that I had hoped to include, but I still consider it to be a great success given the time constraints and development hurdles I encountered.

ABOVE: The final product. After 1.5 months of intense development and a transition between three development platforms, this is how the new All-in-One version of Scribblify turned out.

Prototype Development - AGK (App Game Kit)

Before starting work on the production-grade application, I wanted to first create a functional prototype to examine the overall feasibility of my plans and to get a sense of how the application would work on the actual touch device. For that, I turned to a small development kit known as AGK (App Game Kit), released by TheGameCreators a couple years back. AGK supports BASIC (Tier 1) and C++ (Tier 2) development branches and provides easy deployment to a variety of platforms including iOS, Android, Blackberry, Windows and Mac.

In fact, AGK is fully capable of producing well-polished and fast desktop applications; my final award-winning 2012 AIC entry, Ballastic, was developed using AGK. At least three other competitors this year also used AGK. For this app, however, I knew that AGK lacked some essential functionality that I desired, and the next release wouldn't be ready until December. Even so, I used AGK to create a very basic working prototype for testing touch responsiveness and a few app-specific features with the Lenovo AIO.

ABOVE: The humble beginnings, October 5, 2013. This is the original prototype for Scribblify AIO Desktop conceived in AGK to test the touch responsiveness and a couple paint-related effects I wished to include in the final submission including mirror mode and color choices.

AGK is powered by OpenGL and can therefore handle texture and image manipulation very efficiently. However, the current version does not natively support dynamic drawing to textures or render buffers (the next update is expected to support this). You can disable clearing of the main buffer [EnableClearColor(0)], which will then retain whatever gets drawn to it, but this behavior is somewhat unpredictable and causes further issues when trying to overlay interface elements, since they essentially get baked into the screen as well. There are numerous other feasible approaches, including memblock manipulation (i.e., iterating through pixels and modifying them manually) and standard sprite cloning, but creating a simple drawing canvas separate from other elements is still more trouble than ideal--again, the next beta release should solve this problem.

Base Development - HTML5 Canvas 2D

I have a lot of experience in Web-related development. One platform I often considered for bringing Scribblify to the desktop was HTML5 and its native canvas element, which is now well supported by all browsers. However, I was never very motivated to do so (only one person ever made a formal request for it) and never acted upon the idea until two weeks into this competition. Although I had worked with canvas rendering to some extent in past projects, I had never gotten nearly as acquainted with it as I became over the next couple of weeks.

Initial benchmarks of 2D rendering via HTML5 canvas on the Lenovo Horizon were very promising. In fact, when using the native drawing features available in HTML5 (e.g., drawing lines, rectangles and fills), a steady 60 FPS can be achieved in Chrome even when throwing complex operations at it. Every existing HTML5 app I tested on the Lenovo used shape-based drawing operations and ran smoothly. However, most of these apps only supported single touch, and none of them included the rich image-based textures and components that Scribblify required. So, the only way to find out the true performance feasibility of Scribblify via 2D canvas was to actually build out the app!

During this phase of development, I opted against using any framework beyond the native HTML5 canvas capabilities. As a consequence, all of the grunt work relating to context transformation and rendering had to be developed manually. HTML5's default canvas API is quite low level--or perhaps I've simply become spoiled by all of the open source frameworks and products (including AGK) that support simple sprite drawing and easy object transformation out of the box. With HTML5 canvas manipulation, applying a simple rotation to an image can be a fairly involved matter: saving the context's current state, transforming the entire canvas, drawing the image to the newly transformed context (with an offset to compensate for the top-left anchor point), then restoring the previous state of the canvas.
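As a rough sketch of that save/transform/draw/restore dance (function and parameter names are my own, not from the actual Scribblify source):

```javascript
// Rotate an image around the point (x, y) on a 2D canvas context.
// The drawImage() offsets compensate for the top-left anchor point.
function drawRotated(ctx, img, x, y, angleRadians) {
  ctx.save();               // remember the current context state
  ctx.translate(x, y);      // move the origin to the rotation point
  ctx.rotate(angleRadians); // rotate the entire context
  ctx.drawImage(img, -img.width / 2, -img.height / 2);
  ctx.restore();            // put the context back the way it was
}
```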

ABOVE: The final iteration of the HTML5 2D Canvas version of Scribblify, October 27, 2013. All of the user interface elements are standard HTML components (styled with CSS and controlled via JavaScript) whereas the canvas rendering itself is done using native HTML5 canvas methods (no external libraries).

One of the biggest obstacles I encountered with the canvas-only HTML5 approach related to dynamically changing the color of the brushes. When using the primitive drawing shapes and commands supported by canvas, you can easily set the stroke and fill colors with no hassle (e.g., context.fillStyle = '#FF0000';). In fact, you can pass in any RGB/RGBA/HSL/HEX format and it will work just fine, same as in CSS. Once the stroke or fill colors have been assigned, all subsequent calls to the context will use the specified fill color until you update it again or restore the context to its previous state. This is how virtually all canvas-based drawing apps alter brush colors, and it is fast and efficient. You can get more creative by creating custom icon fonts with unique shapes (e.g., different brushes)--the canvas fill commands affect text equally, and text objects can be transformed like any other canvas element, giving you much more flexibility than the primitive shape commands alone.
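The flat-color approach can be illustrated in a few lines (a minimal sketch; the icon-font name is hypothetical):

```javascript
// One fillStyle assignment colors both primitive shapes and text glyphs.
function drawFlatColor(ctx, color) {
  ctx.fillStyle = color;        // accepts '#FF0000', 'rgb(...)', 'hsl(...)', etc.
  ctx.fillRect(10, 10, 40, 40); // the rectangle picks up the fill color
  ctx.font = '32px BrushIcons'; // hypothetical icon font of brush shapes
  ctx.fillText('A', 60, 40);    // the glyph is filled with the same color
}
```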

Unfortunately, the above method of colorization will not work for Scribblify. The signature brushes developed for Scribblify depend heavily on shaded tones to create organic and abstract texturing effects that are not possible with flat colors. Using text or primitive shapes and the canvas fill functions, you will always get a solid color back. Sure, you can adjust overall opacity or apply additional post-processing effects such as soft edges, but the results are still not adequate for Scribblify and carry excess overhead. Another clever approach is to read the pixels of the source image, store a separate instance for each channel (R, G, B, alpha), then combine them into a new image on the fly by multiplying the global transparency by the destination color values using the "lighter" composite mode available in HTML5. The downside of this approach is that it does not work as expected in a few special cases, such as creating the resultant image on a transparent canvas. Still another way is to iterate through all the pixels of the brush and manually update the color tint of each pixel. This is acceptable when calling the command periodically, but the color effects in Scribblify require many color change operations per frame as the user draws, so this approach severely hinders the framerate.
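The per-pixel tinting variant can be sketched roughly like this (an illustration, not the production code; the helper operates on the raw RGBA buffer that getImageData() returns):

```javascript
// Multiply each pixel of an RGBA buffer by a tint color (0-255 per channel).
// `data` would be ctx.getImageData(...).data in a real canvas app.
function tintPixels(data, r, g, b) {
  for (let i = 0; i < data.length; i += 4) {
    data[i]     = (data[i]     * r) / 255; // red
    data[i + 1] = (data[i + 1] * g) / 255; // green
    data[i + 2] = (data[i + 2] * b) / 255; // blue
    // alpha (data[i + 3]) is left untouched
  }
  return data;
}
```

Looping over every pixel on every brush stamp is exactly the per-frame cost that makes this approach too slow for Scribblify's color effects.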

For the HTML5 2D canvas version of Scribblify, I ultimately took a relatively simple two-step approach that sets the brush color efficiently, even many times per frame, while preserving the texture detail of the original brush. The first phase is to draw a primitive rectangle to a temporary canvas using a fill color that matches the desired brush color. Then, using the "destination-atop" composite mode, draw the source image on top of the rectangle. The result preserves the transparency of the source image while using the tint color of the rectangle (to work effectively, the source image should be grayscale). Then, using the "multiply" composite mode, which has only recently been added to Chrome, again draw the original source image on top of the previous output to restore the grayscale tones while keeping the final color. The drawImage() method of a canvas' context can accept another canvas as its parameter instead of an image, so you can pass the temporary canvas directly to the main canvas to draw the color-manipulated brush.
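Sketched in code, the two-pass tinting looks roughly like this (browser-only; the function name is illustrative and a preloaded grayscale brush image is assumed):

```javascript
// Two-pass brush tinting: flat color via 'destination-atop', then the
// grayscale shading restored via 'multiply' (requires a recent Chrome).
function tintBrush(brushImg, color) {
  const temp = document.createElement('canvas');
  temp.width = brushImg.width;
  temp.height = brushImg.height;
  const ctx = temp.getContext('2d');

  // Pass 1: fill with the target color, then keep that color only where
  // the brush image has opaque pixels.
  ctx.fillStyle = color;
  ctx.fillRect(0, 0, temp.width, temp.height);
  ctx.globalCompositeOperation = 'destination-atop';
  ctx.drawImage(brushImg, 0, 0);

  // Pass 2: multiply the grayscale tones back over the flat color.
  ctx.globalCompositeOperation = 'multiply';
  ctx.drawImage(brushImg, 0, 0);

  return temp; // a canvas is a valid drawImage() source for the main canvas
}
```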

In just three short weeks, I had made excellent progress with the HTML5 canvas approach. Along the way, I completed most key interface components, including the brush picker, basic color picker, size and transparency sliders, and basic color effect options. As I mentioned earlier, I made the development decision to use strictly HTML, CSS and JavaScript components for the GUI instead of unnecessarily coding everything out in canvas. It makes sense, in my opinion, to reserve canvas operations for features that can't be accomplished in a simpler, more logical manner, such as with common HTML components. (I had originally coded the top color picker bar as a dynamically generated canvas element, but reverted it to a simple block of DIV containers styled with CSS for simplicity's sake.) With the help of jQuery and numerous extensions, some interface design tasks were greatly simplified.

Alas, even with all of my successes in this 2D canvas version of Scribblify, I found the performance less than optimal once multitouch and color effects were involved. The framerate would dip to 10-20 FPS, especially when drawing very rapidly with many fingers, which was unacceptable given my vision of creating a truly immersive drawing experience. The image drawing operations via drawImage() proved to be quite a bottleneck for 2D canvas rendering when compared to more traditional vector and shape-based path drawing.

Final Development - WebGL to the Rescue

With only a couple of weeks to spare, I began exploring WebGL alternatives to my native 2D canvas approach. I am not well versed in OpenGL, and the short deadline left no time to become so. However, I had a hunch that Chromium's fast WebGL rendering capabilities would solve the issue of drawing many textured sprites to the render buffer or canvas in rapid succession. I explored a variety of open source and MIT-licensed WebGL frameworks in my quest to find one that would allow my existing canvas work to carry over with minimal effort while gaining the hardware-accelerated performance benefits of WebGL.

One interesting solution I found was called WebGL-2d. This small JavaScript library instantly converts any 2D canvas into a WebGL canvas with a single line of JavaScript after initialization. The native 2D canvas operations are transparently converted to WebGL operations behind the scenes, and the resultant canvas is then rendered to the screen using WebGL. To my dismay, this library hadn't been updated in a couple of years and lacked many of the canvas functions I had already used in the 2D canvas version, including context transformation for image rotation, among other essentials. I did not have enough experience with WebGL to make the necessary changes myself with the deadline fast approaching and much more still to be done on the app. Another framework I explored was pixi.js, which is very slick and effortlessly supports both 2D and WebGL canvas modes depending on the user's browser. But it is a very new engine, and migrating my canvas code to it would have taken substantial modification, as it did not include prebuilt functionality for some key elements, including the ability to dynamically adjust image colors.

My ultimate decision was to adopt the relatively mature, MIT-licensed Cocos2d HTML5 framework for the canvas itself while retaining all of my previously developed components for the core user interface. Cocos2d HTML5 originally supported only canvas operations, but it has more recently integrated WebGL as well. Since I developed the entire GUI using standard HTML components and libraries, there was little issue porting them over to a C2D-powered HTML5 application. All of my original HTML5 canvas drawing operations, however, had to be revised quite substantially and ported to C2D using a new syntax. It took me approximately one week to get all of my drawing code ported over. WebGL has the added benefit of more straightforward and accurate color conversion (especially when encapsulated by the C2D library) and handles texture drawing at lightning speed.

ABOVE: The nearly complete Scribblify app, now using the WebGL-powered Cocos2d HTML5 engine at a blazing 60 FPS, even with multitouch and constant color swapping.

The moment of truth. I tested the new WebGL-capable version of Scribblify and monitored the framerate. Even with all ten fingers drawing at once while using the most operation-intensive brushes, I was generally able to sustain approximately 60 FPS! It would occasionally dip lower especially in mirror mode, but never to a point of being consciously noticeable.

Wrapping It Up - Aura Interface Requirements

With a week to go, I still had some serious GUI components to tackle. The Aura Interface guidelines mandate several strict requirements for apps intended for that platform. Two items in particular caught my attention: 1) apps must be fullscreen with no chrome or taskbar, and 2) apps cannot use any native dialog windows. Since my app was Chromium-based, I knew my best bet at fullscreen would be to compile it down into a native executable shell. (There were also local security considerations involved, so using 'kiosk' mode exclusively would not have been sufficient.) A couple of open source initiatives supported exactly that, including app.js and node-webkit. Node-webkit is more actively maintained with wider support options, so I ultimately took this approach to package up my Web app and launch it fullscreen through a single EXE.
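For reference, a minimal node-webkit manifest along these lines handles the fullscreen, chrome-free launch (field values here are illustrative, not the actual Scribblify manifest):

```json
{
  "name": "scribblify",
  "main": "index.html",
  "window": {
    "title": "Scribblify",
    "fullscreen": true,
    "frame": false,
    "toolbar": false
  }
}
```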

With Web-based apps, using the default dialog windows is natural and easy. Scribblify needed to support, at a minimum, saving drawings locally. This would have been achievable with simple JavaScript functionality: convert the canvas data to a data URL/PNG, then call a 'Save As' dialog window. Loading an image could likewise have been a matter of using a File Input element to let the user select a file from their computer. But to comply with the Aura requirements, I instead developed a custom local gallery system with save and load capabilities. Using Node.js, the file system restrictions found in local Web apps were resolved, and general file system operations could be executed without much trouble. This gallery functionality took a fair amount of effort but ultimately made the app feel substantially more polished and complete.

ABOVE: The completed Gallery view of Scribblify. From the Gallery, users can preview their previously saved art, view full-screen versions, delete a particular work, or import any item back into the canvas for additional compositing.

I cannot stress just how thankful I am for the multitude of open source libraries and components out there. I was able to use many existing libraries as a basis for some important pieces of Scribblify--a color picker widget for the advanced color selector (jQuery MiniColors), jQuery UI for slider controls, iDangerous Swiper for the brush picker touch functionality, Lightbox for the base fullscreen gallery preview, colors.js for various color adjustment routines, node.js for system file manipulation, node-webkit for EXE compilation, Cocos2d HTML5 for WebGL functionality, jquery.ui.touch-punch to bring improved touch support to jQuery UI elements...

A Touching Moment

One last footnote about a problem encountered during the development process. It seems that the Lenovo Horizon has some quirks with touches not always being captured or released successfully when met with frantic touch movements, or when the screen is contacted by more than just fingertips. Other developers reported this issue on the CodeProject forums and elsewhere, across a variety of development platforms. The same issue was prominent in Chrome/Chromium, most notably when pressing one's full palm down on the screen and moving it a little (this is true on any website or Web app run through Chrome on the Horizon). Chrome stores each touch in a TouchList object upon touchstart, but when many touches come in contact at once, some are never released. Eventually the 12-limit TouchList array fills up, and all subsequent touches are halted until the browser is restarted. As such, manual intervention can only go so far in curing this problem (the TouchList object is immutable and cannot be altered by the client). I detailed this bug to the Chromium team and they analyzed it as best they could, but concluded it is a device-specific, Windows-only problem that they couldn't replicate on their own 10-point touch devices using the same demo application I built for testing.
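The bookkeeping side of such a workaround can be illustrated with a small tracker (a simplified sketch, not the actual Scribblify code; the saturation threshold is an assumption):

```javascript
// Track touch identifiers from touchstart/touchend events so the app can
// detect when more touches appear "held" than the device can report,
// i.e., when some identifiers were never released.
function createTouchTracker(maxTouches) {
  const active = new Set();
  return {
    start(touchIds) { touchIds.forEach((id) => active.add(id)); },
    end(touchIds) { touchIds.forEach((id) => active.delete(id)); },
    count() { return active.size; },
    isSaturated() { return active.size > maxTouches; },
  };
}
```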

I spent considerable time working to compensate for such touch issues to the extent allowable by Chromium. If one or several touches are erroneously recorded, the app will silently handle alternate operations when interacting with the HUD, and this generally goes unnoticed (one exception being option toggles in the preferences, which may behave erratically). If the user immediately triggers an impossible number of touches, an alert appears explaining the situation, with some guidance on how to prevent it in the future (by using only fingertips). At this point, Scribblify automatically saves the current work to the gallery and encourages the user to restart the app to restore full touch control. They can ignore the warning and may still be able to draw to an extent, but some fingers will not be available. I trust that this issue will eventually be resolved with firmware/driver updates for the Horizon, as it doesn't seem to be reproducible on other devices.

Designed for All Ages and Skill Levels

As has been my motto all along, it was my utmost goal to create an app powerful enough and with enough functionality that it could be enjoyed by professional artists, while also giving children and non-artists equal pleasure in creating great art. During the development phase, I showcased the app to a very diverse crowd ranging from a two-year-old child all the way to a 97-year-old woman, all of whom had no problems creating dynamic and abstract art while enjoying it immensely. I believe, with that, my mission was a success!

Original Article Below

Introduction

Let your creativity run wild with Scribblify, a one-of-a-kind drawing and painting tool for children and adults of all ages and skill levels. From natural to abstract and everything in between, Scribblify allows anyone to create spectacular artwork with ease--limited only by one's imagination. This app entry will be for a desktop port of the existing mobile application, and is being developed specifically for the All-in-One PC, as promoted through the App Innovation contest. The desktop version will include a more diverse user interface and other unique features to accommodate the features available on Lenovo's All-in-One, including up to 10 simultaneous touches and widescreen HD display.

This desktop app, for the All-in-One, will include dozens of hand crafted brushes, each with its own unique appearance and behavior. Most of the brushes are unlike anything seen elsewhere and range from organic to surreal. In addition, many exciting color effects will be provided to support creative blending, plasma colors and more. The wide variety of exclusive brushes, preset backgrounds, advanced color effects and mirror drawing capabilities ensures endless entertainment to all.

Background

Scribblify has existed as a mobile application since 2011. As a mobile product, great care had to be taken to design an interface and feature set that low-resolution phones and related devices could support. Until the 2013 App Innovation Contest, no desktop version of Scribblify was imagined, let alone conceived; it has remained a mobile-only app, thus isolating a large percentage of potential users.

It is not difficult for one to recognize the benefits of developing such an app for much larger and more powerful form factors, including the 27" full HD All-in-One PC promoted through this challenge. With a surface size large enough to fill a coffee table and 10-point multitouch technology, I immediately envisioned a desktop version of Scribblify that would enable entire families to gather around the All-in-One and create brilliant works of art as if finger-painting on a large canvas.

Eye Candy

As a visual teaser, screenshots of the core foundation and artistic possibilities of Scribblify are presented below. Note that these screenshots are derived from the original mobile version of the application. However, the majority of brushes, effects and features found in the original version are also being implemented in the new desktop app. The overall interface, on the other hand, will be redesigned to reflect the much larger screen real estate available on the All-in-One machine.

Under the Hood

The original mobile version of Scribblify was developed using Objective-C and OpenGL ES 1.0 for iOS. Subsequently, an Android port was developed from scratch using compatible C++ OpenGL ES libraries. The All-in-One version is again being developed from the ground up using PC-compatible solutions and libraries. A portion of the media assets (notably brushes and backgrounds) will be inherited from the mobile counterpart. The core drawing and coloring methods will also be ported over as needed to effectively mimic the basic functionality found in the mobile versions.

The specific framework and underlying programming language(s) that will be used for the final desktop version of Scribblify is still being evaluated. During the preliminary tests for the All-in-One, the C++ Tier 2 version of AGK (App Game Kit) will be used to rapidly develop functional prototypes with full sensor support. AGK, developed by The Game Creators, features a suite of sensor-specific methods for effortlessly tapping into many sensors available on Intel's latest line of products.

Since last fall, AGK has supported the full line of sensors available in Intel's Ultrabook line of laptops and all major mobile devices; these commands should remain generally compatible with the Lenovo line of computers. Since the AGK framework supports C++ development, additional sensor support can be worked in as-needed during the prototyping phase of this project.

Specific to Scribblify, multitouch is the most significant sensor required by the app. AGK provides integrated support for more than a dozen simple touch commands, making it easy to quickly capture touch data and process it as needed. It includes built-in support to distinguish between taps, holds and drags, as well as the previous and current coordinates of each touch. A very simple example of some such commands is as follows:
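A rough AGK Tier 1 (BASIC) sketch of such a multitouch polling loop (command names from AGK's raw touch API; treat this as illustrative rather than the exact prototype code):

```
rem Iterate over all active raw touches each frame and read
rem their previous and current positions.
do
    touchID = GetRawFirstTouchEvent(1)
    while touchID > 0
        x# = GetRawTouchCurrentX(touchID)
        y# = GetRawTouchCurrentY(touchID)
        lastX# = GetRawTouchLastX(touchID)
        lastY# = GetRawTouchLastY(touchID)
        rem ...draw a stroke segment from (lastX#, lastY#) to (x#, y#)...
        touchID = GetRawNextTouchEvent()
    endwhile
    Sync()
loop
```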

Once the general prototype has proven successful, the development of Scribblify for desktops may veer into other development territory. Unfortunately, AGK may not yet support a variety of desired features needed to ensure smooth and versatile performance of this app. Therefore, other OpenGL-based solutions will be evaluated as needed to convert the prototype into a polished application. Details of alternate platforms for final deployment are still being reviewed, but may include an HTML5-based Web application (using Canvas and other non-WebGL techniques) or a C++/C# derived app using readily available OpenGL-based frameworks. The key is to quickly develop a working app in time for the competition deadline, with minor updates to follow thereafter.

All-in-One PC

As mentioned throughout this article, this project represents the first PC port of an existing mobile application. The application is being developed anew, with some existing media and core functionality ported over from the mobile versions. Many features not found in the mobile version will be included specifically to emphasize the capabilities of the All-in-One PC. Some of the main upgrades include:

Up to 10 simultaneous touches (versus a maximum of two on the mobile version)

More prominent and intuitive interface to reflect the larger screen space


About the Author

Although I am a Web developer by profession, I have long been enthused by game and app development. There is something extraordinarily rewarding about the prospect of your product entertaining millions around the world. After spending years developing an assortment of games and applications as a hobby, to varying degrees of popularity, I took it one step further a couple of years ago by establishing a formal side business and launching several commercial applications on iOS. I have also dabbled in the Smart TV market by releasing one of the few third-party games for a certain line of plasma televisions, and I am always looking for the next big thing to develop on.

Comments and Discussions

Hi,
Awesome app, and congrats on winning the Intel App Innovation Contest 2013 and keeping the CPian community proud. Can you give some information on the final app, how you managed to compile such a wonderful thing and the technologies used? It would be very useful.

Thanks! I did update my article a few weeks back with more details of some technologies used. I evolved it through several different prototypes and technologies, but the final version is driven by HTML5/JavaScript/WebGL and is packaged through Node.js to create the fullscreen standalone app that supports local file storage, as required by Lenovo.

Just wanted to check in and see how development of Scribblify is going so far? Would be great if you could take a few minutes to fill out this short Round 2 Dev Survey to give us some feedback on your progress.

Also wanted to make sure you have seen the bundling requirements for the AIO submissions to integrate with the Lenovo Aura interface.

Info below:

"Contestants submitting an Entertainment or Games app for the Lenovo Horizon AIO must bundle their application binaries + supporting dependencies (if any) as a single EXE installer or an MSI installer, so that once it is installed the application will integrate into the Aura Interface (a.k.a. Horizon Shell) and can be launched and run from the Aura interface. There are no programming changes necessary; just a specific way you must bundle your application.

This article and video will walk you through the necessary steps to bundle your AIO app. "