As a technical co-founder, the second most-asked question I receive (behind “where do I find a technical co-founder?”) is from engineers asking me, “What language/framework/new-hip-thing should I learn?”

This question has always puzzled me, especially when it comes from seasoned veterans who have well-respected jobs. Although my alma mater taught me this way as well, this is not the way I think — and neither should veteran engineers. Why would you want to learn yet another language, when that language is nothing more than a set of instructions that tell a machine how to behave? Any engineer worth his salt can pick up a new language in a matter of weeks. The same goes for a new framework. If you’ve been writing software for 10 years and you’re worried about the new hot framework out in the world, you’ve lost sight of the forest for the trees.

The bigger question is, how do you take your knowledge of language and software systems to hack real-world systems? I look at this as a form of social engineering; instead of the goal being to teach a machine how to behave, you are attempting to create a machine that mimics the way humans behave. Or better yet, teach the machine to behave the way the user expects it to behave.

This is the difference between a good product and a great one, and it has nothing to do with software. Companies like Twitter had to change the way humans behaved. It was a long and arduous road, and people still don’t ‘get Twitter,’ whereas companies like Apple and Facebook have painstakingly studied human beings and purposely built products around their needs.

I can’t tell you the number of times I’ve walked into an engineering meeting where one engineer is bragging to another that in five lines of beautifully succinct code they completed their task in record time with flawless execution. But I can count on one hand the number of times an engineer has bragged that they changed five lines of HTML and upped the engagement rate 10% on a feature everyone perceived as useless. A hack is a hack, and that’s what engineers do. I don’t care if it took zero lines of code or a hundred and a roll of duct tape to complete the task, and guess what? Neither do your customers. The product is all that matters.

Both Facebook and Apple have taken this to extremes by hacking human perception. Facebook recently discovered that by changing their loading animation icon to the one iOS and Android use natively, users couldn’t tell if the slowness was coming from their phone, their carrier, or Facebook itself. This simple, genius hack saved so many man-hours and millions of dollars of server time that it’s hard to even quantify the impact. Apple has done something similar by creating a striped progress bar that scrolls at just the right speed so you feel as though it is moving faster than it actually is.

These hackers were so perceptive and audacious that rather than engineer a faster application, they decided it was easier to change the users’ perception of time. All of this by changing very little code and no infrastructure.

But let’s get back to the engineer’s original question. I use my current team of three engineers to prove my point. When I began hiring back in May of 2013, only one of them knew our native language of Python, and the other two had barely written any JavaScript (which we have a lot of). Less than a year later, I now have three full-stack engineers who are capable of writing complex interactions on a JavaScript-heavy front-end stack all the way down to organizing complex schemaless datasets. If I had attempted to hire three full-stack engineers out of the gate, I would have been lucky to end up with one. Instead, I found three humans who are willing and able to think about the bigger system and realize that often, the best code is the code not written.

Next time you’re hiring an engineer, don’t just ask them the usual whiteboard questions and what they learned in college 10 years ago. Your job is to find out if they are willing and able to program the way users behave — or just program computers.

Last May my friend Kyle Stewart, Executive Director of ReAllocate, came to me and asked, “What would you do with a 14,000-square-foot warehouse in the heart of San Francisco?” Assuming this was a loaded question, I first asked him how much it would cost. He assured me that it wasn’t my concern and told me to grab my tools because we had work to do.

It turns out that our friend Mike Zuckerman had secured a warehouse at 1131 Mission Street for just $1 for the month of June to conduct a civic experiment. A ‘culture hack,’ as he refers to it: repurposing otherwise unused or private properties for uses other than those they were intended for. For me it was about creating a modern-day community center run by hackers, based completely on merit, not money. If you could swing a hammer, paint a portrait, program a computer, or organize a coup, you were given a canvas on which to experiment.

Much like how children rally together to build a treehouse, we dove in with no reservations. It all happened so fast that there was no time to fuss over governance, planning, and structure. The traditional democratic process was thrown out the window, and in its place we were left with an adhocracy, where immediacy, merit, and vision collide.

And so, starting in June the great experiment began. For the entire month the place was hopping with excitement from ten in the morning to ten at night. There were free yoga classes, free drawing classes, and free dance classes. There were artists painting murals on every flat surface you could find and hackers soldering away at LED light projects. Artists were working with entrepreneurs and the homeless were interacting with computer programmers. Everywhere you looked you would find people mixing ideas and cultures.

After only a few weeks of being open, we realized that one month just wasn’t going to cut it. In true Freespace fashion, we gathered a headless team of volunteers to start a crowdfunding campaign and media blitz to raise $24k to pay the full rent the following month. Sure enough, we had enough press coverage and touched enough people that we were able to raise the money and continue our experiment.

Fast forward six months to today: Freespace in its original incarnation is gone, but the idea is spreading. One just opened in the heart of Paris, and the city of San Francisco gave us a grant to do another on Market Street. In only six months the conversation has changed from us begging and pleading for space to building owners contacting us to activate their vacant spaces. Requests are coming in from across the globe as the idea and the vision spread farther and wider than any of us ever imagined.

Mike has now made it his mission to help foster its growth and he is currently on airplanes every week to do just that. So if Freespace or Mike comes to your city or town, please stop by and say ‘hi’.

Right before Burning Man this year the Huffington Post reached out to me to do a live interview about the event, specifically about its shared economy. Neither the producer nor the host had been there so naturally they had a lot of misconceptions about what actually goes on. From what they had heard, the shared economy is the emphasis of the event.

The other two guests and I were able to shift the focus onto the event itself and away from how its “shared economy is changing the world.” At the end of the day, Burning Man is many different things to many different people. The shared economy is only one of the 10 principles that make this event so special. Everyone takes something different home from the event, and it’s just something you have to experience.

A few months back my friends Kyle and Nick joined me in an endeavor to build a chicken coop for our friends at Avalon Hot Springs. Avalon is an amazing place high above Middletown, CA in the mountains of Lake County. It dates back to the 1850s, when it was called Howard Hot Springs and catered to the rich and famous Victorians.

Today it still retains a lot of its history, with original springs still on the property, rustic cabins, and a beautiful main hall. We were happy to help out and add to the charm of Avalon. Also, fresh eggs when we visit are a plus.

Below are some of the photos and plans. We precut and built most of it in my basement and then transported it up there. Even with that pre-planning it still took us a day and a half to assemble and paint it.

(We added a door and a thick plastic floor to make cleaning easy)

(We used organic recycled materials for the insulation)

(The plans I drew up. Click the photo for a larger image)

All in all it was a very fun and rewarding project. It’s also quite simple, and the average DIYer should have no problem tackling it. It was the first build for all of us, too! For more photos, check out the chicken coop build photo set on Flickr.

A few months back my friend @nick showed up at my door with an idea and a cardboard box full of wood. He and his coworkers from Twitter were building a kiosk that would take photos and tweet them from @twisitor. Naturally, I thought the project was hilarious so we headed down to the garage.

The wood he brought was a nice veneered bamboo with a bull-nosed edge on one side which gave it a nice finished look. The shingles were cut by @nick and his cohorts from cheap shims they got from the hardware store. The nailgun made quick work of the project and it turned out great! Everyone liked it so much that it ended up in the lobby. Thanks @nick for a fun and random project!

It’s been a dream of mine since I was a child to have a projector screen. Turns out it’s not that hard or expensive; it just takes time and careful planning. The project has been complete for almost a year now, but I hadn’t gotten around to creating a write-up until recently.

Every business has workflows or pipelines it must maintain to keep operations running. Manufacturing companies consume raw materials to create finished products, and in turn the sales department pipelines the products out the door to customers. Internet tech startups aren’t any different: from hiring, lead generation, data onboarding, and sales pipelines all the way down to software release management, they run just like a brick-and-mortar manufacturing company. Every company produces some sort of good or service and can be broken down into a series of workflows.

People play an incredibly large part in every company workflow; however, we humans can be costly and hard to manage, which creates a challenge when scaling horizontally. A human can only remember so much, and without machines it becomes impossible to grow effectively. With the advent of commodity software in our workflows, we can get multiples of scale with less human intervention. Another way to look at it: make the humans you have on staff more efficient.

Software is definitely the answer to hit multiples of scale, but on the flip side, outsourced human labor has also become a commodity thanks to services ranging from Amazon’s mTurk to oDesk and eLance. We’ve even successfully hired dozens of folks locally from Craigslist for over a year when we really needed to maintain a high degree of data integrity and skill. One could easily achieve another level of scale by just hiring more people.

The big question really is: where is the line between human intervention and software automation? The engineer in me immediately screams, “software!” but in reality this is the wrong way to begin looking at the problem. If you’re automating something of a known quantity, like a grocery store, then go get yourself a point-of-sale system and call it a day. Software is clearly your answer and you can stop reading now. But when you’re a fast-paced startup that needs to pivot quickly, you wouldn’t architect an entire software stack around an ever-changing business model. You will spend more time and money rewriting code as your business adapts than you would by having a human do the job from the beginning.

This is exactly how UPS scaled their business. They used humans extensively and slowly automated different pieces of their jobs. One script after another was written until finally they strung them all together into a pipeline of software. They did this for the same reason we do it: they were in uncharted waters. There’s no reason to double down on engineering that is 10x more expensive than labor which can adapt far more quickly than machines.

Discovering Inefficiencies

Unfortunately, every business is different, and identifying scaling problems takes proper vision and knowledge of the business. If you’re reading this article then you probably understand all this, so I will keep this section brief. To me, there’s one rule you have to follow that all others build upon: “that which is measured is improved.” You cannot even begin to optimize until you find the highest return or biggest problem area to focus on. Sure, you can blindly go into different departments to help them optimize, but how do you decide where to start?

The analogy I like to think of is diagnosing car troubles. If your water temperature gauge doesn’t work but your voltage gauge does, and you solve your electrical issues, your car may still overheat down the road. In reality, you can fix the battery issues later as long as the car starts; the highest return comes from fixing the overheating immediately.

Solving Inefficiencies

Taking a scientific approach to solving these problems is the most financially prudent and effective way to find scaling solutions. There are a few ways we have gone about doing this. The easiest (and least likely to succeed) way is to have the humans tell the engineers what they spend their day doing. This lets the engineers keep their heads down on other projects without getting directly involved in the nitty-gritty.

Unfortunately, not many non-engineers are able to explain what they do all day long in enough detail to replicate. We’ve had mixed success, but our staff is learning how to distill their use cases down more and more. To help them discover inefficiencies, we have one golden rule: if you find yourself doing the same task for more than a few hours a day, tell your boss or the engineers immediately!

If we step back even further and look at the bigger picture what I am really saying is question your job! Don’t just do something because you are told to do so. Many folks have a hard time with this concept and want to please their boss but in reality what we are after is scale. The only way to get to the next level of scale is to be introspective.

The other way we find inefficiencies is to send in the engineers. We have successfully achieved multiples of scale by having engineers shadow employees that have highly repeatable workflows. In some cases, we have even removed the human from the task for a few weeks while the engineer fills in to really dig deep into the problem. This can be a painful process for the engineer but it achieves extremely good results.

Knowing When to Stop

Always remember that just because you can do something doesn’t mean you should. Much like writing web software, scaling pipelines is an iterative process and it’s much easier to roll forward than to roll back. Removing all the human touch points too fast will mask problems and likely create more. Automation itself needs to be monitored otherwise you will lose sight of what’s really happening.

One of the tricks we employ is to automate things to the point of complete automation minus the final step. We do this for quality-control purposes in many cases. For example, let’s say you need to gate part of your system behind an approval process. Rather than automatically approving an event, we send an email to the administrators with the information they need to make the decision, along with several URL links we dub ‘one-click approval links’. This is our way of gating sensitive or otherwise high-risk events that require human attention. It buys us many levels of scale without sacrificing quality. If we hit the next level of scale and still require humans for quality reasons, we can easily outsource this for very little cost.
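To make the pattern concrete, here is a minimal sketch of how such a link could be generated and verified with an HMAC signature, so the server can trust the click without storing per-link state. This is an illustrative sketch, not our production code; the secret, domain, and function names are all placeholders:

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-real-secret"  # placeholder; load from config in practice


def approval_link(event_id: str, action: str) -> str:
    """Build a signed 'one-click approval' URL for a pending event.

    The HMAC signature proves the click came from the email we sent,
    so the endpoint doesn't need to store any per-link state.
    """
    payload = f"{event_id}:{action}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/approve?event={event_id}&action={action}&sig={sig}"


def verify_click(event_id: str, action: str, sig: str) -> bool:
    """Check a clicked link's signature before performing the gated action."""
    expected = hmac.new(SECRET_KEY, f"{event_id}:{action}".encode(),
                        hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the signature via timing differences
    return hmac.compare_digest(expected, sig)
```

The nice property is that a tampered link (say, changing ‘reject’ to ‘approve’) fails verification, so the email itself carries all the authority needed.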

Some pipelines, like the one I just described, will always require human interaction, but other times we do fully automate. When that happens we have to write even more software to monitor these events. Whether it be audit trails or reports emailed to us, we need to keep an eye on the gauges. As we scale up it becomes more and more important to have a complete picture of the status of your pipelines.
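As a sketch of what that monitoring can look like, a small decorator can record every run of an automated step so humans can review what the pipeline did after the fact. The in-memory list and the `publish_listing` step here are hypothetical stand-ins; in practice the trail would go to a database table or log file:

```python
import functools
from datetime import datetime, timezone

audit_log = []  # stand-in for a real audit table or log file


def audited(fn):
    """Wrap an automated pipeline step so every run leaves an audit entry."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        audit_log.append({
            "step": fn.__name__,
            "args": args,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return result
    return wrapper


@audited
def publish_listing(listing_id):
    """Hypothetical fully automated step: push a listing live."""
    return f"published {listing_id}"
```

A nightly report that summarizes this trail is then just a query over the log, which is exactly the kind of gauge you want once no human sees the individual events anymore.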

Conclusion

There really is no silver bullet for any of these problems; solutions come from careful study and measurement, just as a scientist would study behavior in a lab. Software alone cannot solve all of your logistics problems at scale, just as humans will never achieve the same scale without software. Since humans are smarter than machines, and will be for the foreseeable future, you will need to weld the two together. How far to push the automation really depends on your business needs and how closely you study your logistical systems.

My recommendation for anyone attempting to scale up their logistical pipelines, from shipping product to onboarding data, is to use humans and study their behavior closely. It will cost you less up front and in the long term, and deliver a better, more accurate end result. Measure twice and cut once.

A few weeks ago, right before my 30th birthday, I filed my first patent with the founder of my company, Adam Sah, and our other engineer and my housemate, David Merritt. I dreamt up the idea while brainstorming lead generation at trade shows that we attend. A large part of my job involves bridging the physical world with the online world and sharing data between the two. We are always looking for ways to make this happen not only faster and easier, which equates to cheaper, but also more accurately.

The patent involves software that runs on commodity hardware, such as a cell phone, to collect data and use it in a novel way. Unfortunately, I cannot share much more due to the simplicity of the design and how easy it is to replicate. We are currently using this technology internally, and soon we will be sharing it directly with our customers. Once the provisional is approved I will provide the final document and a demo.

Recently Jane Dagmi, a writer for Bob Vila, found me on Twitter and began to follow the restoration of my Victorian house. She approached me, along with three other DIY renovation bloggers, to write about our workshops and our restoration process. It was an honor and the first interview of this kind I’ve ever done. I hope you enjoy it.

This is a cache I created that allows me to always serve the user cached data, i.e. never serve a miss. Sometimes data is just too complicated to compute while the user’s request thread waits, and other times you simply want high availability with no user-facing recompute time at all. This is a great technique for template caches and API wrappers, and I have employed it more than once in my career.

I achieve the goal of no cache misses by first building what I call a persistent cache: memcache backed by disk or a database. This way, if memcache ever takes a miss, I look in the database and pull the result from there. I then re-cache that result in memcache for a short time and return the response to the user. At the same time, I put a task on my task queue (MQ) to recalculate the data and update the persistent cache. If all caches are empty, I hit the API and, upon success, cache the results and return the data to the user. If the API throws an error, I put a task on the queue, and hopefully by the next time the user requests that data the cache will have been warmed by a successful API query.
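The flow can be sketched roughly like this, with plain dicts standing in for memcache, the database, and the task queue, and a dummy function standing in for the real API call (all names here are illustrative):

```python
import time

# In-memory stand-ins for the real stores; in production these would be
# memcache, a database table, and a message queue respectively.
memcache = {}          # {key: (value, expires_at)}
persistent_store = {}  # {key: value}
task_queue = []        # keys waiting for a background recompute


SHORT_TTL = 60  # seconds to re-cache a value pulled from the persistent store


def fetch_from_api(key):
    """Stand-in for the slow or flaky upstream API call."""
    return f"fresh:{key}"


def get(key):
    """Serve a value without ever making the user wait on a recompute."""
    # 1. Fast path: unexpired memcache hit.
    entry = memcache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]
    # 2. Memcache miss: fall back to the persistent copy, re-cache it
    #    briefly, and queue a background recompute of the real data.
    if key in persistent_store:
        value = persistent_store[key]
        memcache[key] = (value, time.time() + SHORT_TTL)
        task_queue.append(key)  # a worker will refresh the persistent cache
        return value
    # 3. Cold start: hit the API directly; on failure, queue a retry so
    #    the cache warms up for a later request.
    try:
        value = fetch_from_api(key)
    except Exception:
        task_queue.append(key)
        return None
    persistent_store[key] = value
    memcache[key] = (value, time.time() + SHORT_TTL)
    return value
```

The only time a user ever waits on the API is the very first request for a key; from then on, every request is served from one of the two cache layers while the queue does the slow work off the request thread.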

Essentially this creates a type of MRU cache, only using memcache for frequently used data. I could use cron or something like it but I like that my users and bots are the ones who flush my caches based on demand rather than time or some other indicator.

(click diagram for a larger version)

The diagram above describes how I built the caching for Klout Widget, a hack I created just to demonstrate this strategy. Their API was a little finicky so I decided to create a persistent cache that would be resilient to bad or missing data. Go ahead, give it a try.

Anyway, I hope this strategy is useful. If anyone knows the name for this type of cache, I’m all ears. As far as I know, it’s something I dreamed up.