Thing Translator

Point your camera at things to hear how to say them in a different language.

Thing Translator is an open-source app that demonstrates
the simplicity and charm of harnessing modern machine learning techniques –
namely, computer vision and natural language translation. You can watch a video
explaining how it works here.

It was built with some friends at Google Creative Lab as part of the
A.I. Experiments series.

ExifExodus

Remove GPS data from your photos before you share them online.

What is EXIF?

EXIF
is a type of metadata that is embedded in photo files from most types
of cameras and phones.

This metadata includes information about the device used to capture the photo,
but also often includes the GPS coordinates of where the photo was taken.

Many users unknowingly share this information with the general public
and site/app owners when uploading photos online.

This has been a common vector of privacy lapses, including cases where
journalists have unintentionally published photos with geotagging data intact.

Recent press has also revealed the NSA’s collection of EXIF data in
its XKeyscore program.

What is ExifExodus?

ExifExodus is a small piece of
open-source code that
runs directly in your browser and strips EXIF data out of your photos before
you upload them.

How does it work?

You can run ExifExodus whenever you’re uploading photos by using its
bookmarklet (available on the site).

When ExifExodus encounters a JPG file, it will remove the EXIF data by
copying the pixels to a new image file, similar to taking a screenshot of
something.
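The pixel-copy step can be sketched in plain browser JavaScript. This is an illustration of the technique, not the actual ExifExodus source; `stripExif` and its callback shape are made-up names:

```javascript
// Draw the JPG onto a canvas and re-encode it: the new blob contains
// only pixel data, so the original EXIF block is left behind.
function stripExif(file, callback) {
  var img = new Image();
  img.onload = function() {
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    URL.revokeObjectURL(img.src);
    // Re-encode as JPG; metadata from the original is not carried over.
    canvas.toBlob(callback, 'image/jpeg', 0.92);
  };
  img.src = URL.createObjectURL(file);
}
```

Because the image is re-encoded, the output is not byte-identical to the original, much like a screenshot of the photo.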

Alternatively, you can drop your files in the dropzone at the top of the
site and receive versions free of EXIF data.
You can then save these new files and upload them wherever you’d like.

Is EXIF without merit?

That’s certainly not the implication of this project. Metadata adds
another dimension to photos and is valuable for preserving context.
This project aims to educate and give users a choice in the matter of sharing
it with specific services (and the web at large).

Doesn’t Facebook (etc.) remove EXIF data before displaying photos?

Yes. Although this prevents the general public from accessing your EXIF data,
you should be aware that the end recipient is free to use or store the metadata
before removing it.

Any caveats?

The ExifExodus bookmarklet won’t work with any site that uses Flash
(or any other proprietary plugins like Silverlight) to upload files.
For such sites, use the dropzone converter, save the output files, and upload
those instead.

ExifExodus only works with JPG files (the most common image format to
carry EXIF metadata).

HexaFlip

Visualize arrays as cubes.

Transform arrays of any length into cubes that can be rotated infinitely.
Originally developed as the time-picking interface for ChainCal, it was later
expanded to visualize arbitrary arrays; I wrote an article detailing the
process on Codrops.
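Usage looks roughly like the following sketch. The constructor signature (container element, named value sets, options) and the `size` option are assumptions from memory, so treat them as hypothetical and check the Codrops article for the real API:

```javascript
// Hypothetical instantiation: each named set becomes one rotatable cube.
var picker = new HexaFlip(
  document.getElementById('picker'),
  {
    month: ['January', 'February', 'March', 'April'],
    day: [1, 2, 3, 4, 5, 6, 7]
  },
  { size: 100 } // assumed option: cube size in pixels
);
```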

Natal

Usage

To bootstrap a new app, run natal init with your app’s name as an argument:

$ natal init FutureApp

If your app’s name is more than a single word, be sure to type it in CamelCase.
A corresponding hyphenated Clojure namespace will be created.

By default Natal will create a simple skeleton based on the current stable
version of Om (aka Om Now). If you’d like to base your app
upon Om Next, you can specify a React interface template during init:

$ natal init FutureApp --interface om-next

Keep in mind your app isn’t limited to the React interfaces Natal provides
templates for; these are just for convenience.

If all goes well your app should compile and boot in the simulator.

From there you can begin an interactive workflow by starting the REPL.

$ cd future-app
$ rlwrap natal repl

(If you don’t have rlwrap installed, you can simply run natal repl, but
using rlwrap allows the use of arrow keys).

If there are no issues, the REPL should connect to the simulator automatically.
To manually choose which device it connects to, you can run rlwrap natal repl --choose.

At the prompt, try loading your app’s namespace:

(in-ns 'future-app.core)

Changes you make via the REPL or by changing your .cljs files should appear live
in the simulator.

Try this command as an example:

(swap! app-state assoc :text "Hello Native World")

When the REPL connects to the simulator it will begin to automatically log
success messages, warnings, and errors whenever you update your .cljs files.

Tips

Having rlwrap installed is optional but highly recommended since it makes
the REPL a much nicer experience with arrow keys.

Don’t press ⌘-R in the simulator; code changes should be reflected automatically.
See this issue in Ambly for details.

Running multiple React Native apps at once can cause problems with the React
Packager so try to avoid doing so.

You can launch your app on the simulator without opening Xcode by running
natal launch in your app’s root directory.

By default new Natal projects will launch on the iPhone 6 simulator. To change
which device natal launch uses, you can run natal listdevices to see a list
of available simulators, then select one by running natal setdevice with the
index of the device on the list.

To change advanced settings run natal xcode to quickly open the Xcode project.

The Xcode-free workflow is for convenience. If you’re encountering app crashes,
you should open the Xcode project and run it from there to view errors.

You can run any command with --verbose or -v to see output that may be
helpful in diagnosing errors.

Dependencies

As Natal is the orchestration of many individual tools, there are quite a few dependencies.
If you’ve previously done React Native or Clojure development, you likely have
most of them installed already. Platform dependencies are listed under their
respective tools.

Taxa

Taxa is a small metaprogramming experiment that introduces a minimal grammar for
type annotations to JavaScript (and by extension, CoffeeScript).

Unlike other projects of this nature, Taxa is purely a runtime type checker
rather than a static analyzer. When a Taxa-wrapped function receives arguments
or returns a value of the wrong type, an exception is thrown.

Further unlike other type declaration projects for JavaScript, Taxa’s DSL lives
purely within the syntax of the language. There is no intermediary layer and no
preprocessing is required.

Grammar

Taxa type signatures are intended to be quick to type and to occupy few additional
columns in your code.

Following this spirit of brevity, examples are also shown in CoffeeScript as it’s
a natural fit to Taxa’s style.

In the following, Taxa is aliased as t (though $ or taxa feel natural as well):

t = require 'taxa'
# or in a browser without a module loader:
t = window.taxa

Shorthand

Taxa provides a shorthand for built-in types, indicated by their first letter:

exclaim = t 's s', (word) -> word + '!'

var exclaim = t('s s', function(word) {
return word + '!';
});

Capital letter shorthand works as well:

exclaim = t 'S S', (word) -> word + '!'

var exclaim = t('S S', function(word) {
return word + '!';
});

The shorthand mapping is natural, with the exception of null:

0 => null

a => array

b => boolean

f => function

n => number

o => object

s => string

u => undefined

Multiple arguments are separated by commas:

add = t 'n,n n', (a, b) -> a + b

var add = t('n,n n', function(a, b) {
return a + b;
});

The above function is expected to take two numbers as arguments and return a third.

Ignores

Occasionally you may want to ignore type checking on a particular argument.
Use the _ character to mark it as ignored in the signature. For example, you may
have a method that produces effects without returning a value:
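A sketch of such a signature, using the underscore described above (the `log` function here is a made-up example; the `u` slot marks an undefined return as listed in the shorthand table):

```javascript
var t = require('taxa');

// '_' skips type checking for the argument; 'u' asserts the function
// returns undefined (i.e. it is called only for its side effects).
var log = t('_ u', function(message) {
  console.log(message);
});

log('anything'); // any type is accepted here
```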

Since all non-primitive types are objects, specifying o in your signatures will
of course match complex types as well. However, passing a plain object or an
object of another type to a function that expects a specific type (e.g. WeakMap)
will correctly throw an error.

Keep in mind that Taxa is strict with these signatures and will not walk up an
object’s inheritance chain to match ancestral types.

Partial Application

Like any other function, those annotated with Taxa carry a bind method, which
works as expected with the additional promise of modifying the output function’s
Taxa signature.
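A sketch of that promise, reusing the add example from the shorthand section (the behavior shown follows the description above rather than tested output):

```javascript
var t = require('taxa');

var add = t('n,n n', function(a, b) { return a + b; });

// Binding one argument should leave a function whose Taxa
// signature is effectively 'n n' (one number in, one number out).
var addFive = add.bind(null, 5);

addFive(2);   // fine: both remaining slots are numbers
addFive('x'); // should throw: 'x' is not a number
```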

Aliases

You can add your own custom shorthand aliases like this:

t.addAlias 'i8', 'Int8Array'

t.addAlias('i8', 'Int8Array');

And remove them as well:

t.removeAlias 'i8'

t.removeAlias('i8');

Disabling

You can disable Taxa’s type enforcement behavior globally by calling t.disable()
(where t is whatever you’ve aliased Taxa as). This will cause calls to t() to
perform a no-op wherein the original function is returned unmodified.

This is convenient for switching between environments without modifying code.

Its counterpart is naturally t.enable().

Further Examples

Take a look at the test cases in ./test/main.coffee for more examples of
Taxa signatures.

Caveats

When a function is modified by Taxa, its arity is not preserved as most JS
environments don’t allow modifying a function’s length property. Workarounds to
this problem would involve using the Function constructor which would introduce
its own problems. This only has implications if you’re working with higher order
functions that work by inspecting arity.

It should go without saying, but this library is experimental and has obvious
performance implications.

Taxa is young and open to suggestions / contributors.

Name

From the Ancient Greek τάξις (arrangement, order).

CoffeeScript / JavaScript

stream-snitch

Event emitter for watching text streams with regex patterns.

stream-snitch is a tiny Node module that allows you to match streaming data
patterns with regular expressions. It’s much like ... | grep, but for Node
streams using native events and regular expression objects. It’s also a good
introduction to the benefits of streams if you’re unconvinced or unintroduced.

Use Cases

The most obvious use case is scraping or crawling documents from an external
source.

Typically you might accumulate the incoming chunks from a response into a
string buffer and then inspect the full response in the response’s end callback.

For instance, if you had a function intended to download all image URLs embedded
in a document:
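A sketch of that function using stream-snitch instead of whole-response buffering. The img regex and the `downloadImages` name are illustrative; the `match` event and capture-group access follow the description below:

```javascript
var http = require('http');
var StreamSnitch = require('stream-snitch');

// Illustrative sketch: fire `fn` for each image URL as soon as the
// chunk containing it arrives, instead of waiting for the whole body.
function downloadImages(url, fn) {
  var snitch = new StreamSnitch(/<img[^>]+src="([^"]+)"/g);
  snitch.on('match', function(match) {
    fn(match[1]); // capture group 1 holds the src attribute
  });
  http.get(url, function(res) {
    res.pipe(snitch);
  });
}
```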

Of course, the response could be enormous and bloat your data buffer. What’s
worse is the response chunks could come slowly and you’d like to perform
hundreds of these download tasks concurrently and get the job done as quickly as
possible. Waiting for the entire response to finish negates part of the
asynchronous benefits Node’s model offers and mainly ignores the fact that the
response is a stream object that represents the data in steps as they occur.

The image download tasks (represented by fn) can occur as sources are found
without having to wait for a potentially huge or slow request to finish first.
Since you specify native regular expressions, the objects sent to match
listeners will contain capture group matches as the above demonstrates
(match[1]).

For crawling, you could match href properties and recursively pipe their
responses through more stream-snitch instances.

Another example can be found in soundscrape, which uses stream-snitch (from
CoffeeScript) to match data embedded in inline JSON.

Caveats

stream-snitch is simple internally and uses regular expressions for flexibility,
rather than more efficient procedural parsing. The first consequence of this is
that it only supports streams of text and will decode binary buffers
automatically.

Since it offers support for any arbitrary regular expressions including capture
groups and start / end operators, chunks are internally buffered and examined
and discarded only when matches are found. When given a regular expression in
multiline mode (/m), the buffer is cleared at the start of every newline.

stream-snitch will periodically clear its internal buffer if it grows too large,
which could occur if no matches are found over a large amount of data or you use
an overly broad capture. There is the chance that legitimate match fragments
could be discarded when the water mark is reached unless you specify a large
enough buffer size for your needs.

The default buffer size is one megabyte, but you can pass a custom size like
this if you anticipate a very large capture size:

new StreamSnitch(/.../g, { bufferCap: 1024 * 1024 * 20 });

If you want to reuse a stream-snitch instance after one stream ends, you can
manually call the clearBuffer() method.

It should be obvious, but remember to use the m (multiline) flag in your
patterns if you’re using the $ operator to match endings on a line-by-line
basis. If you’re legitimately looking for a pattern at the end of a document,
stream-snitch offers only a modest advantage over buffering the entire
response: it periodically discards chunks from memory.

Node.js

ear-pipe

Pipe audio streams to your ears.

Concept

ear-pipe is a duplex stream that allows you to pipe any streaming audio data to
your ears (by default), handling any decoding automatically for most formats.
You can also leverage this built-in decoding by specifying an output encoding
and pipe the output stream somewhere else.

Usage

When arguments are omitted (e.g. ep = new EarPipe;), the type defaults to
'mp3', the bitrate defaults to 16, and the third argument is null
indicating that the pipe destination is your ears rather than a transcoded
stream.
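A sketch of both modes, assuming the constructor order described above (type, bitrate, output encoding); the file names are made up:

```javascript
var fs = require('fs');
var EarPipe = require('ear-pipe');

// Playback: defaults to 'mp3' input, bitrate 16, destination = your ears.
fs.createReadStream('song.mp3').pipe(new EarPipe());

// Transcoding sketch (assumed third argument): get a 'wav' stream out
// instead of playback, then pipe it anywhere.
fs.createReadStream('song.mp3')
  .pipe(new EarPipe('mp3', 16, 'wav'))
  .pipe(fs.createWriteStream('song.wav'));
```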

If your input encoding isn’t mp3, make sure you set it to one of the formats
supported by SoX.

The JSON will contain a recursive representation of the directory and all
children. Each key is a file or directory name with a corresponding value
containing a stats object and a children object if it is a directory.
Directories are also given a sum property which reflects the size of all
children recursively, unlike the typical size property of a directory’s stats
object.

Commune

Commune.js makes it easy to run computationally heavy functions in a separate
thread and retrieve the results asynchronously. By delegating these functions
to a separate thread, you can avoid blocking the main thread that drives
the UI. Think of it as a way to leverage the web workers API without ever having
to think about the web workers API.

Using straightforward syntax, you can add web worker support to your app’s
functions without the need to create separate files (as web workers typically
require) and without the need to change the syntax of your functions. Best of
all, everything will work without problems on browsers that do not support web
workers.

Usage

Here’s an example where the first argument is the function to thread, the second
argument is an array of arguments to pass to it, and the third is a callback to
handle the result once it comes through:
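A sketch following that argument order. The entry-point name `commune` and the heavy-loop function are illustrative:

```javascript
// A deliberately heavy function to run off the main thread.
function sumTo(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) total += i;
  return total;
}

// Assumed entry point `commune(fn, args, callback)`:
// runs sumTo(1e8) in a worker and hands back the result.
commune(sumTo, [1e8], function(result) {
  console.log('done:', result);
});
```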

Running the above in a browser with worker support, you’ll see the results of
each function call appear simultaneously, meaning that none of these large loops
had to wait for the others to finish before starting. Using Commune.js with
care, you can bring asynchronicity and parallelism to previously inapplicable
areas.

To simplify things more, you can DRY up your syntax with the help of
communify() which transforms your vanilla function into a Commune-wrapped
version:
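A sketch of the wrapped form (the call shape of the communified function, an args array plus a callback, is an assumption):

```javascript
function sumTo(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) total += i;
  return total;
}

// Wrap once, then call with an args array and a callback each time.
var threadedSum = communify(sumTo);

threadedSum([1e8], function(result) {
  console.log('done:', result);
});
```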

How It Works

When you pass a new function to Commune.js, it creates a modified version of the
function using web worker syntax. Commune.js memoizes the result so additional
calls using the same function don’t have to be rewritten.

Just write your functions as you normally would using return statements.

Commune.js automatically creates binary blobs from your functions that can be
used as worker scripts.
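The blob-to-worker step is a standard browser technique and can be sketched independently of Commune.js (browser-only; not the library’s actual source):

```javascript
// Build a worker from source text: no separate script file needed.
var src = 'onmessage = function(e) { postMessage(e.data * 2); };';
var blob = new Blob([src], { type: 'application/javascript' });
var worker = new Worker(URL.createObjectURL(blob));

worker.onmessage = function(e) {
  console.log(e.data); // the doubled value comes back asynchronously
};
worker.postMessage(21);
```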

Caveats

Since web workers operate in a different context, you can’t reference any
variables outside of the function’s scope (including the DOM) and you can’t
use references to this since it will refer to the worker itself. For functions
you want to use Commune.js with, use a functional style where they return a
modified version of their input.

Also, since this is an abstraction designed for ease-of-use and flexibility,
it does not work exactly as web workers do – namely you can’t have multiple
return events from a single worker.

CoffeeScript / JavaScript, web workers

Express SPA Router

Concept

Let’s say you have a modern single page web application with client-side URL
routing (e.g. Backbone).

Since views are rendered on the client, you’ll likely use RESTful Express routes
that handle a single concern and return only JSON back to the client. The app’s
only non-JSON endpoint is likely the index route (/).

So while /users might return a JSON array when hit via the client app’s AJAX
call, you’ll want to handle that request differently if the user clicks a link
from an external site or manually types it in the address bar. When hit in this
context, this middleware internally redirects the request to the index route
handler, so the same client-side app is loaded for every valid route. The URL
for the end user remains the same and the client-side app uses its own router to
show the user what’s been requested based on the route. This eliminates the
tedium of performing this kind of conditional logic within individual route
callbacks.

Installation

$ npm install --save express-spa-router

Usage

In your Express app’s configuration, place this middleware high up the stack
(before router and static) and pass it your app instance:

app.use(require('express-spa-router')(app));

AJAX requests will be untouched, but valid routes requested without AJAX will
result in the index route’s response being returned. Non-matching routes will
be passed down the stack by default and will end up being handled by whatever
your app does with 404s. This can be overridden by passing a noRoute function
in the options object:
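A sketch of that override; the handler shape (req, res, next) is assumed to match a standard Express middleware signature:

```javascript
var express = require('express');
var app = express();

app.use(require('express-spa-router')(app, {
  // Called for routes that match nothing on the server (assumed shape).
  noRoute: function(req, res, next) {
    res.status(404).send('Nothing here');
  }
}));
```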

Express’s default static paths are passed along correctly (as are
/js and /css), but if you use different paths or have additional static
files in your public directory, make sure to specify them in the options
either via a regular expression or an array of directory names:
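A sketch of the two forms; the option name `staticPaths` is an assumption from memory, so verify it against the module’s README:

```javascript
app.use(require('express-spa-router')(app, {
  // Hypothetical option name; either an array of directory names...
  staticPaths: ['fonts', 'img']
  // ...or a regular expression: staticPaths: /^\/(fonts|img)\//
}));
```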

You may also have valid client-side routes that don’t exist on the server-side.
Rather than having them reach the 404 handler, you can specify them in the
configuration options using extraRoutes and passing either a regular
expression or an array:
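A sketch using the extraRoutes option named above, with both accepted forms (the route values are illustrative):

```javascript
app.use(require('express-spa-router')(app, {
  // Client-side-only routes that should still serve the index page:
  extraRoutes: ['/about', '/settings']
  // or: extraRoutes: /^\/(about|settings)$/
}));
```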

Monocat

Automated asset inlining.

Monocat is ideal for deploying small, static, single-page sites where you want
to minimize the number of HTTP requests. Monocat compresses and inlines the
contents of external assets into the HTML source for an easy speed optimization.