TL;DR: One bitcoin is to all bitcoins what 364 troy ounces of gold are to all gold. At the time of writing, one bitcoin is worth $1,181, whereas 364 troy ounces of gold are worth $458,276. What does this mean? I don’t know, but it is better than comparing the price of one bitcoin to the price of one troy ounce of gold.

Bitcoin and gold are similar in certain ways. They are both distributed, decentralized, and practically impossible to counterfeit. The ongoing supply of both is limited. And in both cases it’s hard to pin down whether each is a currency, a commodity, or something else entirely.

There are many fascinating differences between Bitcoin and gold. While Bitcoin is at the absolute bleeding edge of financial technology, gold is one of the most ancient ways to represent value. Bitcoin is easy to move; gold is difficult. All Bitcoin transactions are openly tracked for all eternity; not so with gold.

So it’s easy to see why there’s so much discussion about Bitcoin and gold. They are so alike, yet so different, and it’s endlessly interesting to explore this contradiction.

From what I can tell, there seems to be a general understanding in the financial media that there is no meaning in comparing stock prices. I have never seen an article suggest that Apple’s a little bit behind because AAPL’s at a measly $136 and GOOG’s at a big $829. But somehow it seems that this understanding goes right out the door when Bitcoin and gold come up!

I understand that a financial comparison between the two is too tantalizing to avoid. So what kind of comparison could we do that might have at least a little tiny bit of real meaning? Well, I’m not an economist, so I’ll just have to make some shit up. First, we’ll need some facts:

Price of one bitcoin: $1,181
Price of one troy ounce of gold: $1,259
Bitcoins in circulation: roughly 16 million (out of an eventual 21 million)
Troy ounces of gold above ground: roughly 6 billion

So what does this data tell us? Well, first of all, we can see that there are a lot more troy ounces of gold out there than there are bitcoins. Looking at what’s currently in circulation, we can see that for every bitcoin there are about 364 troy ounces of gold. A long time from now, when both Bitcoin and gold are totally mined out, the ratio will be closer to 417 troy ounces of gold per bitcoin. (Well, that’s assuming that Planetary Resources is not successful within the same timeframe.)

For brevity, in the rest of this post I will focus on what’s currently in circulation. There is interesting analysis to be done on the mining rates of each currency and how they vary over time, etc, but that’s something for a separate post.

When one owns a single bitcoin, one controls about one sixteen-millionth of all bitcoins in existence. When one owns a single troy ounce of gold, one controls about one six-billionth of all troy ounces of gold above ground. Obviously as a fraction of the whole, a troy ounce is tiny compared to a bitcoin!

So how much would it cost to own one sixteen-millionth of all gold (i.e. the same fraction that one bitcoin is of all bitcoins)? Since there are 364 troy ounces of gold per one bitcoin, it would cost 364 * $1,259 = $458,276.
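That arithmetic is small enough to check in a few lines of Python (the figures are this post’s early-2013 numbers, and both drift over time):

```python
# Figures quoted in this post (early 2013).
gold_price_usd = 1259    # price of one troy ounce of gold
oz_gold_per_btc = 364    # troy ounces of gold per bitcoin in circulation

# Cost of owning the same fraction of all gold that one bitcoin
# represents of all bitcoins:
cost = oz_gold_per_btc * gold_price_usd
print(cost)  # 458276
```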

Contrasted with the meaningless comparison of the price of one bitcoin to the price of one troy ounce of gold, I think this number might have at least a little bit of meaning. I can’t really speculate on what that meaning might be, though.

I recently launched a little side-project web app, School Seating Charts, which makes it easier (and faster!) for teachers to build seating charts for their classrooms. The site is built entirely in Clojure and ClojureScript, which have been a pleasure to work with.

While writing this post, I realized that, for better or worse, I have a lot to say about Clojure and ClojureScript development. So, to make my life easier I’ll be splitting my thoughts into several posts. In this first post, I will give a general overview of my development experience, and in future posts I will dive more deeply into the details.

Tooling

To build the app, manage dependencies, and generally keep myself from setting my hair on fire, I use Leiningen. There’s not much that needs to be said here — if you’re using Clojure without Leiningen, you are doing it wrong!

I develop Clojure with Vim, which puts me in the (slight) minority versus Emacs users. Overall, while the LISP-editing tools in Emacs are probably more refined, editing Clojure in Vim is not bad at all. Between VimClojure and Paredit.vim, I’m not left wanting. If you go down the VimClojure route, be sure to use the lein-tarsier Leiningen plugin, which makes it much easier to get Vim talking to an instance of your app.

Finally, of course, I use the lein-cljsbuild Leiningen plugin for compiling and testing my ClojureScript code (as the author of the plugin, it would be a bit weird if I didn’t). School Seating Charts is the reason that lein-cljsbuild exists in the first place — it’s a classic example of an open source project scratching its author’s itch.

Production Environment

The app runs on Google App Engine (GAE). Now, I work for Google, so I’m probably biased (although I began working for Google well after I chose to target GAE), but I have been really happy with it so far as a Clojure deployment target.

GAE supports Clojure by virtue of the fact that it supports Java. There’s a lot of interop required to work with the GAE APIs, but luckily the appengine-magic library has already taken care of almost all of this by wrapping the Java APIs in an idiomatic Clojure API. With this in place, it really feels like there’s native support for Clojure.

Like many other cloud providers, GAE takes care of a ton of the fundamentals for a web app: the database, memcache, server infrastructure, load balancing, logging, metrics, and a lot more. This convenience comes with many restrictions, though, such as limited access to local files and outgoing network connections. However, my app’s requirements happened to mesh nicely with the features and limitations of GAE, which I think is a large part of the reason I’ve been so happy with it.

Deployment to GAE is simple. The lein appengine-prepare command gets things ready, and the GAE SDK takes it from there, uploading and starting the new version of the app.

ClojureScript: The Good

I’m ambivalent about the JavaScript language. I don’t hate it, but neither do I like it. So, after I first read about ClojureScript, I jumped at the opportunity to write my app’s client-side code in a LISP.

Overall, ClojureScript has been an absolute blast. It has numerous advantages, such as a solid namespace system, compile-time macros, and much of the other goodness that one would expect from Clojure. The single biggest win, though, is being able to freely share code between the client and the server. Of course, this can be done in JavaScript with node.js, but JavaScript is really just not that good of a server-side language. Performance aside, I’d prefer Clojure simply due to its access to the massive ecosystem of Java libraries.

What kinds of code does School Seating Charts share between Clojure and ClojureScript? Well, my favorite thing is all of the HTML IDs and CSS selectors. The HTML generated by the server and the DOM lookups made on the client always agree because the IDs and selectors are defined in one spot. This helps to prevent a whole class of errors from those things being mismatched.

The app’s config API is shared between the client and server. So, things like “is debug mode on” or “what’s the price of the app” come from a central place and are handled by the same code, so the client and server always agree on their values. Other shared code includes various geometry utilities that are required in both places.

The best example of why I love using the same language on the client and server probably has to do with the code I wrote to shuffle students around the classroom. The teacher lays out desks and inputs a student roster, and then can press a button to randomize the student placement, all the while respecting other criteria (e.g. keep talkers away from one another). I originally executed this algorithm on the client, which worked fine in modern browsers, but didn’t perform very well in IE8. After considering my options (rewrite the code in JavaScript, simplify the algorithm), I decided to move the calculations to the server and have the client retrieve them via XHR. This change took me around 10 minutes to implement. To me, this is mind-blowing. The algorithm in question is pretty tricky, with lots of edge-cases, and even a straight port between languages would have taken hours and introduced bugs.

ClojureScript: The Bad

So, what are ClojureScript’s rough edges? Well, there are a few things. For one, with so few people using the language, you are more likely to run into edge-cases that haven’t had the bugs beaten out of them. For instance, I used ClojureScript’s pr-str function to serialize data structures to send to the server. Apparently, not many other people had tried using pr-str with a large data structure on IE8. Performance was unusably bad, and I ended up having to patch the compiler to get acceptable performance.

Debugging is another rough edge. At the time of writing, ClojureScript does not have source map support, which means that when your code throws an error at runtime, you’re stuck looking at a JavaScript stack trace with little to no ClojureScript-specific information. Personally, I found the generated JavaScript pretty easy to read, as it retains most of the symbols from the code it was compiled from. Regardless, this clearly needs to improve. Thankfully, people are working on it.

ClojureScript, like Clojure, is a hosted language. This means that interop with the JavaScript platform is a first-class feature. Overall, interop with JavaScript code is impressively easy. School Seating Charts makes extensive use of jQuery and several jQuery plugins.

The compiler is implemented on top of the Google Closure Compiler (yes, the terminology is extremely confusing), which means that ClojureScript can take advantage of its excellent optimizer for things like dead code elimination and compression. This is absolutely necessary for production deployment: for School Seating Charts, the JavaScript output is 1.8MB before optimization (188K after optimization, and 46K after gzip).

However, the advanced optimizations come at a cost: if your ClojureScript code calls into any external JavaScript libraries, you must provide an externs file to tell the compiler which symbols need to be passed through uncompressed (Luke VanderHart wrote an excellent post about this). The documentation on how to create these externs files is very poor, and for me, at least, required a lot of frustrating trial and error. While this lack of documentation is ultimately a Google Closure Compiler problem, it very much affects ClojureScript development as well.
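For illustration, here’s what a minimal externs file might look like for an app that calls jQuery from ClojureScript. The specific symbols below are assumptions; a real externs file has to declare every external symbol your code actually touches:

```javascript
// Minimal externs sketch (illustrative). The function bodies are
// never executed; the Closure Compiler only reads the names, so it
// knows not to rename or strip these symbols during advanced
// optimizations.
var jQuery = function(selector) {};
jQuery.prototype.addClass = function(className) {};
jQuery.prototype.val = function() {};
var $ = jQuery;
```

The file is then handed to the compiler (with lein-cljsbuild, via the :externs entry of the :compiler options map).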

With all of that said, please don’t take my criticisms of ClojureScript too seriously. In reality, it’s inspiring that the language is scarcely 14 months old and yet is totally usable for production systems. The community around the language is aware of all rough edges that I highlighted, and there’s work being done to address them all.

Appendix: Libraries

This is just a survey of all of the app’s direct dependencies, taken from its project.clj config file. Some of these libraries are pretty specific to the way the app is built (stripe-java), and others are likely to be found in every Clojure web app out there (compojure).

appengine-magic
Makes it much easier to write a Google App Engine app in Clojure.

I’d like to introduce Digbuild, an open-source game engine inspired by the excellent game Minecraft (and Infiniminer before it — that’s right, Minecraft is itself a clone). I’ve been working on it on and off in my spare time for a few months now, and today I decided that it’s ready to show to the world. For the last couple of months I was debating when it would be time to publish it. I didn’t want to release it in such an early stage that it was unusable, and in particular I didn’t want to release it in a state where it was nearly impossible to build. This weekend, though, my good friend Blake Miller took it upon himself to build Digbuild (say that 5 times fast), and as it turns out, the build system is relatively workable. So, have at it!

What Digbuild Is

Right now Digbuild provides a randomized, voxel-based world for the player to explore. In this regard, it’s very similar to Minecraft. You can create and destroy blocks, and thus you can build castles and any other structures that spring to mind. Digbuild has several improvements over Minecraft:

Infinite world height. You can build structures as tall as you like.

Colored lighting. Different blocks emit different colors of light, and colored glass blocks filter the light that flows through them.

Translucent materials. Want to build a castle out of six different colors of stained glass? Go for it.

Bump- and specular-mapped textures. Glass is shiny and rocks are rough.

Open source. Want to improve something that’s not changeable through an existing API? Hack the source to your heart’s content.

What Digbuild Isn’t

Although Digbuild is heavily inspired by Minecraft, it does not strive to be just like it. If you want to play Minecraft, go play Minecraft! The ultimate goal is for Digbuild to go in several directions. We’re planning a Python-based scripting engine to make building plugins easy, and it can always be forked. There are a lot of things that Digbuild lacks at the moment:

It’s unfinished. If you want to play a game, don’t choose Digbuild. It’s still early in development, and right now is targeted towards hackers.

There’s no multiplayer support. It’s planned, but is still a ways off.

There’s no crafting. The crafting system will eventually be fully Python-based, but there’s no support for this yet.

How to Contribute

We’d be thrilled if you wanted to help make Digbuild better. It’s got a long way to go before it’s really a video game, but building it is (at least) half the fun, right? If you’re interested in working on it, just fork it on GitHub and go crazy. Add something cool? Issue a pull request and see it get merged into the main game.

There’s plenty of work to do aside from coding, as well. We need to create textures for new materials, come up with ideas for gameplay, and eventually add sound effects.

Finally, we’re under no pretense that Digbuild is perfect. It’s still a work in progress, and any kind of feedback at this stage could be helpful. So don’t hold back your criticisms or ideas!

Learning More

I plan to write a series of articles on what I consider a few of the more interesting bits of the Digbuild implementation. Right now the topics I expect to write about include the random terrain generation, graphics optimizations, and efficient collision detection algorithms. If there’s anything else interesting about how Digbuild works, let me know and I’ll consider writing about that too!

I’m doing a little bit of work that involves frequently rebuilding the Linux kernel and installing it on a headless ARM board. The particular ARM board I’m working with has some vendor support for flashing kernels, but it’s slow and clunky, and I have to run it inside a Windows XP VM. The ARM board uses the U-Boot bootloader, though, so it’s possible to boot the kernel in a couple of different ways. One way would be to load the kernel via TFTP, but I haven’t gotten that working yet on my board. The other option is to load it via serial, which isn’t very fast but requires very little setup.

U-Boot’s loadb command allows a kernel to be loaded, via serial, into a memory location. The bootm command may then be used to boot the kernel directly, which saves time compared to writing the kernel to the flash memory and loading it from there. The trouble is that loadb expects the kernel to be sent via the Kermit protocol. I found a few examples of how to deal with Kermit, but none of them directly applied to loading a kernel with U-Boot.

I came up with the following Kermit script to solve my problem. This script automatically waits for the board to reset, sends the loadm command, pushes down the kernel, and runs it via the bootm command. After it boots the kernel, it turns into an interactive console. This script relies on C-Kermit, which I installed under Ubuntu as follows:

bash$ sudo aptitude install ckermit

The script I’m using is as follows. There are a lot of settings hard-coded into the script, so read the comments carefully to determine what parts you might need to change to suit your setup. To use this script, simply copy it into a file named, for example, boot-kernel, give it executable permissions, and run it.

#!/usr/bin/kermit

# Serial port setup. These settings will likely need to be
# changed to match the configuration of your workstation
# and the ARM board you're working with.
set line /dev/ttyUSB0
set speed 115200
set serial 8n1

# This is the string that my board outputs to allow the user to
# gain access to the U-Boot console. Change this to suit your
# setup.
input 60 "Hit SPACE to stop autoboot"

# If your board wants you to press a different key to get to
# U-Boot, edit this line.
output " "

input 5 "u-boot>"

# Here, 0x800000 is the memory address into which the kernel
# should be loaded.
lineout "loadb 0x800000"

# This should be the absolute path to your kernel uImage file.
send /path/to/uImage

input 5 "u-boot>"
lineout "bootm 0x800000"

# This command drops you into a console where you can interact
# with the kernel.
connect

Once the script has given you console control, you need to use the Kermit escape key to exit. By default, this is set to Ctrl+\ (that’s a backslash). To see a list of commands, type Ctrl+\ and then ?. The command to immediately exit the console is q.

One last thing to note: this script doesn’t do any error checking. Each of the input commands can fail if it does not see the text it’s looking for within the specified time. The script could be extended to check for errors using Kermit’s IF command.
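As a sketch, each input command could be followed by a failure check along these lines (the exit code and message here are arbitrary):

```
input 5 "u-boot>"
if failure exit 1 "Timed out waiting for the U-Boot prompt."
```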

Recently I’ve run across a few articles (on Hacker News and elsewhere) about the drawbacks of telecommuting. I agree that there are drawbacks, but I believe that they can be counterbalanced by the benefits under the right circumstances.

The Right Circumstances

Not every person is cut out to telecommute, and not every job is suitable to be performed remotely. Furthermore, there are many tools available to make telecommuting much more effective.

The single most important traits for a telecommuter to have are strong writing and comprehension skills. There are no two ways about it: a telecommuter is going to engage in a lot of written communication. You can’t yell over the cubicle wall to ask them for a quick clarification; since they are not physically present, any communication with them carries a small amount of overhead. Thus it’s important that each bit of communication with the telecommuter be clear and concise.

The ever-present communication overhead implies that jobs which require more frequent communication are less suitable for telecommuters. The best jobs are those in which a lot of “heads down” work needs to be done. These are the kinds of jobs where even if the employee were physically present, they’d want an office with a door that shuts tight. Many nuts-and-bolts, back-end software engineering jobs fall into this category. For instance, writing a device driver requires large chunks of up-front communication, but after that it requires deep concentration and few interruptions — perfect for a telecommuter. Other jobs, such as project management, require constant communication and incur a much greater telecommuting overhead.

Finally, tools are instrumental in making telecommuting work. In a software shop, a good Wiki system allows for collaborative documentation. A bug/feature tracking system helps keep everyone in sync on priorities. File sharing, phone conferencing, source control, desktop sharing, VPN systems — all of these are absolutely critical to enable a telecommuter to do their job.

The benefits of telecommuting only apply fully when the above circumstances are met. It’s easy to see how telecommuting could leave a bad taste in someone’s mouth if it was attempted with the wrong person, job, or tools.

The Benefits

Better documentation. One of the major drawbacks of working with someone far away is that you can’t walk up to their desk and pick their brain. Sure, you can call them, but once you’ve resigned yourself to the overhead of a phone call, more likely than not you’ll just send an email or instant message. But there’s a hidden benefit to this: more knowledge ends up written down. Informally, you end up with more knowledge in your email or IM history. More formally, you have more opportunities to write documentation. A good telecommuter knows when an email thread has become overgrown and needs to be dumped into a Wiki article.

Higher throughput. For software jobs that require extended periods of deep concentration, telecommuting can often provide the best work environment. This can require some effort on the remote employee’s part (e.g. establishing a no-interruption rule with the kids), but when it’s pulled off successfully it can be orders of magnitude better than being cramped up in a cubicle next to a salesperson who’s constantly on the phone.

More hours. The lack of a commute and the ease of making a quick lunch at home save a lot of time for a telecommuter. When a doctor’s appointment comes up in the middle of the day, it’s easier to justify working late to make up for it, instead of taking personal time off.

More flexible pay. The cost of living and market demand vary drastically between geographical areas, so the market value of a talented engineer differs between, say, the Bay Area and Wisconsin. A business in an expensive metropolis can save tons of money by hiring a telecommuter from an area where it’s cheaper to live. This can benefit the telecommuter as well if the business, for instance, splits the difference between the local and remote market salaries with the employee.

Conclusions

In no way am I trying to prescribe telecommuting as a panacea or some kind of magical efficiency booster. But, as a telecommuter myself, I have seen it work out really well firsthand, and I feel the need to point out the fact that it does have a few tangible benefits. Like any other business decision, though, it shouldn’t be chosen without careful thought and planning.

It’s happened to everyone. You kick off a software installer, answer a few questions about how you’d like things set up, click next and you’re presented with a long progress bar. “No problem,” you think to yourself, “this is a good excuse to grab a cuppa joe.” You leave the computer to its business and hit the kitchen, maybe catching a glance at the paper. After some time has passed, it occurs to you that the installer’s probably been finished for a while, so you head back to your computer to start using your fresh new software. And then BAM! You get slapped in the face with just one last question that the installer needs you to answer. It turns out that it’s only partially complete, and when you click next again, you’re presented with another long progress bar. Now you’re faced with a decision: do you switch tasks again, or do you babysit the installer, in case it has another question?

This behavior drives me absolutely mad. I’m impatient with installers to begin with; they’re a hiccup (albeit a necessary one) between me and the software I want to use. Of course, it’s not just installers that suffer from this problem. Any piece of software that has to perform some kind of long-running task can be subject to this annoyance, simply by requiring user input anywhere except at the very beginning or end of a lengthy task.

The amount of frustration this bug causes is directly proportional to how long the task will take. I recall a recent mishap where I was installing an older Debian distro on an extremely slow ARM machine. I thought I had answered all of its questions, and left my office to do errands for several hours. I was confident that when I returned, the machine would be ready to go. Of course, you know how this story ends: upon my return, I found the installer waiting for input, and it took several more hours for the installation to complete. My work for the day was set back, and my schedule was thrown off.

Thankfully, the solution to this problem is extremely obvious: batch up and prompt for all of the necessary user input before starting a long-running task. Never, ever interrupt the task to prompt for more input unless it is 100% unavoidable. If truly unforeseen circumstances do require user action, try to continue any work that can still be performed. If all of the work depends on the user’s feedback, consider continuing the work in the background by guessing the most likely user response. If the user shows up and enters a different response than the one guessed, back out the guessed work and do the right thing. If the user is not present to see the prompt, at least there’s a chance that the long-running task will continue down the right path uninterrupted.
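The principle can be sketched in a few lines of Python (the names and step structure here are invented purely for illustration):

```python
def run_installer(steps):
    # Phase 1: gather ALL user input up front, before any
    # long-running work begins.
    answers = {name: ask() for name, ask, _ in steps}

    # Phase 2: run every long task back to back; no step is
    # allowed to stop and prompt the user.
    return [work(answers[name]) for name, _, work in steps]

# Each step is (name, prompt function, long-running work function).
# Real prompts would read from the user; these stand-ins just
# return canned answers.
steps = [
    ("install_dir", lambda: "/opt/app", lambda d: "installed to " + d),
    ("locale",      lambda: "en_US",    lambda l: "locale set to " + l),
]
print(run_installer(steps))
```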

It’s been a long time in the making, but I am proud to announce the first beta release of cppsh, the bash-like shell specifically designed for those engineers who find themselves most comfortable at the reins of a C++ compiler. The best features from both bash and the C++ language come together in cppsh to make you a more productive shell user. Some of the most important features of cppsh include:

File Iterators

File iterators allow you to traverse the files in your working directory using the convenient C++ STL iterator syntax:

Configurability

Like many UNIX programs, cppsh can be configured by editing the .cppshrc file in your home directory. Unlike most UNIX programs, however, the .cppshrc file is a full-fledged C++ header file. The .cppshrc file is responsible for defining the cppsh_shell type. This is done by creating a user-specific traits class and passing it as a parameter to the basic_cppsh_shell template:

#ifndef DOT_CPPSHRC
#define DOT_CPPSHRC

#include <cppsh/basic_cppsh_shell.hpp>

namespace cppsh {

struct user_cppsh_traits
{
    typedef vi_editing_mode editing_mode_t;

    static const int command_history = 1000;

    static std::string prompt()
    {
        return "cppsh>";
    }
};

typedef basic_cppsh_shell<user_cppsh_traits> cppsh_shell;

} // namespace cppsh

#endif // DOT_CPPSHRC

Extensibility

If you’re a demanding user, you might find that the .cppshrc file does not offer the power you need to customize cppsh to fit your needs. You’re still in luck! All of the features described above (and more) are packed into only 412,011 lines of C++ code, so you can easily hack cppsh to fit your own needs. Internally, cppsh makes extensive use of template metaprogramming, so the code is terse and easy to understand.

What are you waiting for?

Get started with cppsh today — visit the project page for downloads and documentation. You’ll be happy you did.

Here’s another song. I haven’t bothered naming it yet, so I’ll just release it under the codename I’ve been using. The song again features Ableton’s Collision instrument for the bells in the beginning. It was partly inspired by the upbeat and airy sound of Aphex Twin’s Flim, which is one of my favorite tracks of all time. The audio cutting techniques that I used were probably a result of my deep love for Machine Drum’s music, which has some of the sweetest audio slicing that I’ve ever heard. Machine Drum is to audio as a teppanyaki chef is to an onion tower.