apps for artists

creating tools for creative people

creations of:

Professionally, I'm a freelance UX prototyper. In my spare time, I've also created the experiments you see below.

Use the links at the bottom, or the left and right arrows, to move between experiments. Scroll up and down to learn more about a particular one.

Feel free to contact me for more information about any of my creations, to hire me, or just to say hello.

Enjoy!

-Brenton

ambidex

the same code on the client and the server

react

node.js

webpack

javascript

Summary

Ambidex abstracts away the differences between the client and the server, so the same React app can be rendered by both. Rendering on the server is beneficial for SEO, time-to-glass, and supporting clients that don't speak JavaScript (like robots). Rendering on the client saves bandwidth, reduces server load, and makes the site feel more responsive because changes happen instantly.

Download

Overview

While working in the eBay Mobile Innovations lab, I was asked to create a JavaScript toolkit that would provide merchants with the best modern technologies to power their stores. Of the frameworks available in 2014, React was an obvious winner, in part because it was the most compatible with essential techniques for online stores, like SEO. However, the gulf between "compatible" and "integrated" is a big one, so I created Ambidex.

It was crucial that our application returned server-rendered HTML that search engines knew how to understand, but many of the advantages of a "modern" approach were predicated on client-side rendering. We needed to be able to render the same codebase in both places.

Differences Between Client and Server Rendering

There are two primary obstacles that must be overcome to render the same app in both environments: titles and data.

Server-side and client-side apps use different APIs to set the document's title. In keeping with React tradition, Ambidex solves this with a declarative approach: each component can declare a sectionTitle, either statically or dynamically. When the page renders, Ambidex walks the component tree, combining the sectionTitles and calling the environment's title API with the resulting value.
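A minimal sketch of that pattern (the tree shape, the " | " joiner, and the function names here are illustrative, not Ambidex's actual implementation):

```javascript
// Walk a component tree, gather each declared sectionTitle (a static
// string or a function of props), and hand the combined result to the
// current environment's title API.
function collectSectionTitles(node, titles = []) {
  if (node.sectionTitle) {
    const title = typeof node.sectionTitle === "function"
      ? node.sectionTitle(node.props)   // dynamic declaration
      : node.sectionTitle;              // static declaration
    titles.push(title);
  }
  (node.children || []).forEach((child) => collectSectionTitles(child, titles));
  return titles;
}

function applyTitle(tree, setTitle) {
  // setTitle abstracts the environment: assigning document.title on the
  // client, or emitting a <title> tag in the server-rendered HTML.
  setTitle(collectSectionTitles(tree).join(" | "));
}
```

On the client, this might be invoked as `applyTitle(tree, (t) => { document.title = t; })`, while the server would capture the string for its HTML template.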

The data-loading problem is a bit trickier: The client should render immediately on every change to make sure the app feels responsive; however, the server only gets to respond once, so it must delay rendering until all the requisite data has loaded. Therefore, Ambidex needs to know which data each page depends on and to detect when that data is available.

To solve this problem, I started from a key insight: a search engine indexes its catalog by URL. Therefore, any data that is needed to render a public page and that changes between pages must be captured in the URL. By mapping each URL parameter to its corresponding Flux store and detecting when those stores have acceptable values, Ambidex can tell when the page is ready to render.
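That mapping can be sketched as follows (the store names, the hasValueFor check, and the parameter names are all hypothetical stand-ins, not Ambidex's API):

```javascript
// Minimal stand-in for a Flux store: it only tracks which keys have
// finished loading data.
function makeStore() {
  const loaded = new Set();
  return {
    load(key) { loaded.add(key); },              // e.g. after a fetch resolves
    hasValueFor(key) { return loaded.has(key); },
  };
}

const itemStore = makeStore();
const sellerStore = makeStore();

// Map each URL parameter to the store that holds its data:
const paramToStore = { itemId: itemStore, sellerId: sellerStore };

// The server delays rendering until every store the URL depends on has
// an acceptable value; the client can render immediately and re-render
// as each store fills in.
function pageIsReady(urlParams) {
  return Object.entries(urlParams).every(
    ([param, value]) => paramToStore[param] && paramToStore[param].hasValueFor(value)
  );
}
```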

Developer Experience

Since Ambidex was the tool I worked with every day, it was crucial for it to have a stellar developer experience.

One of the best features of Webpack is hot module replacement (HMR), which shows your changes immediately in every connected browser on each save. Unfortunately, when I created Ambidex, HMR was convoluted to set up and therefore not commonly used. To remedy this, the first thing I did when building Ambidex was make sure that enabling hot module replacement was as simple as setting a flag to true.
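For context, a 2014-era webpack HMR setup assembled by hand looked roughly like this illustrative fragment (the entry paths are the conventional ones from webpack's documentation of that era; the app path is a placeholder):

```javascript
// Manual HMR wiring circa 2014: extra dev-server entries plus a plugin.
const devConfig = {
  entry: [
    "webpack-dev-server/client?http://localhost:8080", // dev-server client
    "webpack/hot/dev-server",                          // HMR runtime
    "./src/app.js",                                    // the actual app (placeholder path)
  ],
  // plus: plugins: [new webpack.HotModuleReplacementPlugin()]
  // and launching webpack-dev-server with its --hot flag.
};
```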

The store that I built Ambidex to power was designed to be viewed on a mobile phone, and agnostic to which data provider a merchant chose. To those ends, I created a test harness that ran three phone-sized instances of the webstore side-by-side in a single tab. Each was backed by a different data source. As I worked, I kept this test harness open in a tablet that sat at my desk. Every time I hit save, each of the test instances would instantly update with my changes. Because they were on a tablet, testing the touch interactions was painless. By backing each test with a different data provider, I ensured that we always maintained compatibility with our supported partners without any extra work on my part.

Ambidex provided the best workflow I've ever used. I shared my experience at Facebook's inaugural React.js Conference. You can watch my talk in the video player at the top of this page.

mapperino

everywhere you've ever been, in 3D

every ride, every hike, every run, every walk

html5 canvas

javascript

python

appengine

Summary

Mapperino is your own personal map of the city, rendered in an immersive 3D environment: the parts of town you frequent, the parts you haven't explored yet, and the epic 50-mile bike ride you did that one weekend.

Demo

Overview

Mapperino was born out of the desire to visualize everywhere I've ever wandered on my bike. Explore the map in 3D by scrolling with your trackpad.

Workout Backup

The data that feeds Mapperino has been collected from a variety of sources, including Strava, Endomondo, miCoach, runstar, Motion-X, and EveryTrail.

When I created Mapperino, these providers acted like they owned a user's data and made it very difficult to access outside their respective apps. Therefore, while creating Mapperino, I also built a handful of tools to enable users to rescue their data. The initial prototype for Workout Backup was an Android app that queried the SQLite databases of Strava and Endomondo, generating both GPX files and a JSON feed for the Mapperino prototype. I also prototyped a browser extension that listened in the background for new rides to be uploaded, backed them up, and posted them to Mapperino (like Time Machine, but for your ride history).

Now that providers are both opening their APIs and utilizing aggregators like Google Fit, these tools are no longer necessary.

cognicube hd

rubik's cube-style 2D puzzle

It's like a Rubik's Cube, but flat...

android

actionscript 3

air

Summary

Cognicube is a puzzle game for tablets and mobile phones. It uses the same play mechanics as a Rubik's Cube, but presents the faces in two dimensions.

Download

Accolades

Cognicube HD was featured by Barnes and Noble at the launch of the Nook Tablet, and by Sony at the launches of the Tablet S and Tablet P.

Overview

Cognicube was my experiment in the mobile application marketplace. Frequently lauded for both its aesthetic polish and mental challenge, Cognicube has been played by nearly 9000 people.

Responsive Design to Fit Many Devices

I originally developed Cognicube as a phone game in the summer of 2010. The following year, I released Cognicube HD, which added support for tablets as well. The tablet edition utilized a split-screen design, with the original puzzle in the bottom pane and an M.C. Escher-style representation in the top. This design worked well on both traditional tablets (like Barnes and Noble's Nook Tablet) and on clamshells (like Sony's Tablet P). Both vendors featured Cognicube HD at the launch of their respective devices. In fact, Sony highlighted my game at CES.

Orientationless

When Cognicube was first released, Android was still finding its design footing. I wanted the game to feel at home on the platform, but also to go beyond the weak conventions that had been established so far. To that end, I made sure that the game feels natural no matter how the player decides to hold the device. The only time words are encountered is in a modal dialog that appears the first time you play the game or after you complete a puzzle. The dialog is always oriented to be easily read from the player's point of view. If the device is rotated while the dialog is open, the dialog will swing around to match the player's new perspective.

story stash

collaborative storyboarding system

helps artists create enchanting stories

Summary

All artists care about is telling entertaining stories. To create a cartoon, however, they are currently stuck with clumsy paper-based storyboards, which are not only difficult to organize, share, and store, but are also vulnerable to accidental destruction. Story Stash frees them from paper clutter, enabling them to do what they do best: create enchanting stories.

Demo

I created a prototype of the service during my senior year at USC. To demonstrate its potential, I've prepared the following samples using artwork other artists have shared online. Neither artist has used or endorsed Story Stash. (sources: Heroes, Venture Bros)

Accolades

Story Stash was awarded Top Undergraduate Business Plan, 2008 by the University of Southern California's Greif Center for Entrepreneurial Studies.

Design Overview

Story Stash was designed from the beginning to be used on a pen tablet. Multitouch was also explored (see eyePoke), but was not mature enough in 2007 to be included in the initial prototype.

I worked very closely with Dan Povenmire and Swampy Marsh, creators of the Disney Channel's Phineas and Ferb, to ensure that Story Stash's interface and feature set enabled professional artists to be more productive.

Sketchpad Layout and Tools

The sketch panel in Story Stash is the same size as one on a traditional storyboard, while the note area is as wide as a Post-It note. The tools in between them are digital representations of the most popular tools in the field: a pencil, marker, eraser, and sticky note. Paper storyboards are littered with sticky notes, which, like layers in the digital world, enable artists to quickly try something new without having to erase the underlying artwork.

Color Well

With one seamless motion, the artist can select any desired color without fiddling with sliders. Simply tap the well, slide over the desired tone, and release. It's like having the entire color wheel built into your pen.

Sequences

To group a set of cards into a sequence, click the first card in the group, hold shift, and click the last. In a multitouch environment, this would be accomplished with the pinch gesture. Though not implemented in this prototype, sequences should be able to collapse into a single card in the card drawer. Not only would this make it easier to skim a storyboard, but it would also enable multiple authors to work simultaneously on different acts without one's changes distracting the other.

Cloud Marking Menu

To avoid cluttering the workspace, file management tasks are consolidated into a marking menu, represented by the cloud icon. The menu is disabled in this demo.

Backwards Compatible with the Paper Workflow

These text fields represent dialogue, locale, action, and miscellaneous, the four standard text areas on a paper storyboard. Story Stash is designed to adapt to the user's context: it can be shared across the network, presented with a projector, or hung on the wall as sheets of paper.

marking menu

open source mouse gestures extension

javascript

chrome

safari

Summary

Marking Menu makes browsing the web more fluid by enabling you to navigate with just a flick of your wrist.

Download

Overview

As a Wacom tablet user, I was a fan of the easyGestures extension for Firefox, but had long since grown tired of Firefox's tendency to leak memory. When Chrome added its extensions API, I decided to develop a gestures extension as a weekend project, and Marking Menu was born.

Marking Menu was exclusively available for Chrome for the first couple of years of its existence. In the summer of 2011, I decided to give Safari another try as my default browser, and I brought Marking Menu along for the ride. At the same time, I open-sourced the extension. The same codebase supports both Chrome and Safari through a small abstraction layer that provides Safari implementations of the Chrome APIs I relied on. You can explore for yourself at the Marking Menu Google Code project.
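The shape of such a layer can be sketched as follows (a self-contained illustration: the message name, the in-memory safari stub, and the single shimmed call are mine, not the extension's actual code, which covered every Chrome API it used):

```javascript
// Record of messages "dispatched" by the stubbed Safari API, so this
// sketch can run outside a browser:
const dispatched = [];

// Stand-in for Safari's legacy extension messaging surface:
const safari = {
  self: { tab: { dispatchMessage(name, data) { dispatched.push({ name, data }); } } },
};

// A tiny Chrome-flavored facade implemented on top of the Safari API:
const chrome = {
  extension: {
    sendRequest(message) {
      safari.self.tab.dispatchMessage("request", message);
    },
  },
};

// Extension code written against the Chrome API now runs unchanged:
chrome.extension.sendRequest({ action: "openTab", url: "https://example.com" });
```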

360video

live panoramic video remapping with shaders

actionscript 3

pixel bender

air

Summary

360video uses a pixel shader to remap any video recorded with the 0-360 Panoramic Optic into an equirectangular viewport in real time. It comprises two pieces. The 360video Unwrapper Studio allows anyone to easily create a 360° video that can be embedded into a web page with the 360video Player. Because the desktop application uses AIR and the embeddable player uses Flash, both can share the Pixel Bender shader.

Demo

Overview

The 360video Unwrapper Studio is designed to create 360° videos as quickly as possible. Creating a 360video is as simple as dragging a video into the Unwrapper Studio. The tool automatically creates a Street-View-style video virtual tour ready to be embedded online. Videographers can crop, color correct, and sharpen the video by applying additional real-time filters without leaving the Unwrapper.

Automatic Calibration

The Unwrapper Studio algorithmically detects the edges of a wrapped 360video donut and crops the video accordingly. With no user intervention, a source file is unwrapped the instant it is added to a 360video scene.
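Per pixel, the unwrap is a polar-to-rectangular remap. A sketch of the inverse mapping such a shader performs for each output pixel, with the center and radii supplied by the calibration step (all names here are illustrative, and the real work happens on the GPU):

```javascript
// For each pixel (x, y) of the equirectangular output, find the source
// pixel inside the wrapped donut. width/height describe the output
// image; (cx, cy) is the donut's center, innerR/outerR its detected radii.
function donutSource(x, y, width, height, cx, cy, innerR, outerR) {
  const angle = (x / width) * 2 * Math.PI;               // horizontal position -> angle
  const r = outerR - (y / height) * (outerR - innerR);   // top row -> outer edge
  return {
    srcX: cx + r * Math.cos(angle),
    srcY: cy + r * Math.sin(angle),
  };
}
```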

Drag-In, Drag-Out

Well-integrated with the surrounding system, the 360video Unwrapper Studio plays nicely with other applications. To add a new shot to a scene, the user simply drags the source media from the camera into the Studio. When the scene is complete, it's dragged from the Unwrapper Studio into an HTML file, and an embed tag is generated and inserted into the page's markup.

Easy Uploading

The files making up a 360scene are bundled into a package, which appears as a single file on the user's computer but as its constituent parts on a web server. To post a 360video online, all the user has to do is drag the 360scene package into an FTP client.

eyepoke

multitouch computer vision for Flash

enables applications to support mice, pens, and multitouch with a single API

actionscript 3

air

Summary

eyePoke is an open-source multitouch event dispatcher for the Flash platform. It uses a webcam to watch an FTIR table for lit fingers, which are mapped to TouchEvents in Flash. Because TouchEvents extend MouseEvents, taps, clicks, and drags can all be handled by the same code regardless of which input method the user prefers.
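The benefit of that class hierarchy can be sketched in plain JavaScript (eyePoke itself is ActionScript 3, and these classes are stand-ins for Flash's event types):

```javascript
// A touch event class that extends the mouse event class, so any handler
// written for mouse input also accepts touch input.
class MouseEventLike {
  constructor(type, x, y) { this.type = type; this.x = x; this.y = y; }
}
class TouchEventLike extends MouseEventLike {
  constructor(type, x, y, touchId) { super(type, x, y); this.touchId = touchId; }
}

// One handler serves mouse, pen, and multitouch alike:
function handlePress(event) {
  return `pressed at (${event.x}, ${event.y})`;
}

handlePress(new MouseEventLike("press", 10, 20));     // a mouse click
handlePress(new TouchEventLike("press", 10, 20, 1));  // a lit finger on the table
```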

Overview

In the summer of 2007, I was interested in adding multitouch interaction to Story Stash, but was unable to find a straightforward way to do so. There was not yet a standard way to implement multitouch in Mac OS, Windows, or Flash; a few open-source libraries existed, but they all required running a separate application in the background and were wholly incompatible with the MouseEvent API already present in ActionScript.

Inspired by Jeff Han's TED Talk and the experimentation of the homebrew multitouch community, I read patents voraciously, learned about different methods that had been previously used to enable multitouch, and studied the physics behind them. I built a multitouch table using the FTIR phenomenon and set to work taking advantage of the device in Flash.

During my research, I discovered NUI Group, a community of other enthusiasts investigating natural user interfaces. Like many of my peers, at the time I had a limited background in computer programming but was fascinated by the potential of this new paradigm. I spent many hours in forums and chat rooms learning object-oriented programming from more experienced community members. Ultimately, I became a NUI Group administrator, guiding others on their journeys into touchscreen tables, FTIR, and ActionScript 3 programming.

eyePoke was praised for both its speed and its simplicity. Before eyePoke, other NUI Group developers doubted that a performant computer vision library could be written in a scripting language; they were excited and impressed to be shown otherwise.

I was drawn to the multitouch community not only because I was fond of the concept of tangible information, but because the researchers were able to create such amazing things through clever applications of otherwise simple technology. They gave people superpowers using a camera, a sheet of acrylic, and a handful of LEDs. I continue to be inspired by their work, but quickly realized that I simply didn't have the resources to create both a digital storyboarding system and a multitouch framework in my spare time. Furthermore, I expected technology vendors like Apple and Adobe to create more generic frameworks to enable multitouch that would make eyePoke obsolete. eyePoke was put on hiatus just a few months after it started, but its source code is still available for others to study: