
This post is mostly a reminder to myself about how I deal with environment variables when working on web applications. For context, I develop in Ruby on OS X.

It wasn’t until I switched from ASP.NET and C# to Ruby that I started using Heroku. Heroku is awesome, but it has its opinions about how apps should be set up. It was a bit of a mental shift coming from Microsoft technologies, but I love it now.

Importing & Exporting Config Variables – The Heroku CLI

Heroku wants you to store all credentials and other settings in environment variables. This lets you change credentials and other configuration without touching the code, and it keeps sensitive information out of the code that gets checked into git.

Heroku’s command line utility lets you easily list and update environment variables on Heroku. However, it’s not very easy to get those environment variables loaded when developing on your local machine. There’s a plugin for Heroku’s command line utility that lets you export environment variables to a “.env” file. I’ve created a modified version of the plugin that lets you specify separate files for each environment (e.g. development.env, staging.env, production.env). These files can be modified and pushed back to Heroku easily.
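For reference, the original heroku-config plugin works along these lines (the commands below are from ddollar’s plugin; my modified version additionally takes a filename so each environment gets its own file):

    # Install the plugin (ddollar's original version)
    heroku plugins:install git://github.com/ddollar/heroku-config.git

    # Download the app's config vars into a local .env file
    heroku config:pull --overwrite

    # Push local changes in .env back up to Heroku
    heroku config:push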

Loading Environment Variables – Using Foreman

Now that we can manage the environment variables on Heroku, how do we use them locally? Well, that’s what Foreman is for. It was created by David Dollar, along with the original Heroku CLI plugin that I tweaked. The beautiful thing is that Foreman uses the same “.env” files and lets you specify the filename to load.

A quick step back: Foreman lets you easily start up multiple processes required to run your web app. You can specify what processes to start in a Procfile, which is the basis for Heroku web apps.

Now, you can just run a command such as “foreman start” and it will load up the Procfile in the current directory. You can also specify an environment file to load, such as “foreman start -e development.env”. Your Procfile might simply have “web: rails s” in it, so Foreman just runs that command with the environment variables from your file automatically loaded.
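Putting that together, a minimal setup looks like this (assuming the development.env file exported earlier):

    $ cat Procfile
    web: rails s

    $ foreman start -e development.env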

The latest piece of the puzzle was getting the environment variables loaded when I want to use the Rails console. Foreman has a nice little hidden command: “foreman run”. As of a day or two ago, the latest version lets you use your “.env” file with it. All you have to do is run “foreman run -e development.env rails c” and you’re good to go!

Simplifying Your Development Environment – Use Shortcuts!

Last but not least, I created some shell functions that make it really easy to run any command with my environment variables loaded. Additionally, I wrap the command in “bundle exec” where appropriate to make sure the proper gems are loaded. Here they are:
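(A sketch, assuming bash; the “dev”, “stage” and “prod” names match the prefixes described below, and the “bundle exec” handling is simplified to checking for a Gemfile.)

    # Run a command with the variables from the given .env file,
    # wrapping it in "bundle exec" when a Gemfile is present.
    run_with_env() {
      local env_file=$1; shift
      if [ -f Gemfile ]; then
        foreman run -e "$env_file" bundle exec "$@"
      else
        foreman run -e "$env_file" "$@"
      fi
    }

    dev()   { run_with_env development.env "$@"; }
    stage() { run_with_env staging.env "$@"; }
    prod()  { run_with_env production.env "$@"; }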

All you have to do now is prefix any command with “dev”, “prod”, or “stage” to run the command with those environment variables. For example, I’m constantly running “dev rails c” and everything works flawlessly!

Update: Using Environment Variables in Your Gemfile on Heroku

I am hosting some private gems via git, and to access them Bundler needs a username and password. By now, you know where the credentials are stored. However, Heroku does not load environment variables when building the slug. To enable this, you need to turn on a Heroku “labs” feature by running this command:
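The feature in question is user-env-compile (substitute your own app name):

    heroku labs:enable user-env-compile -a myapp

With that enabled, the Gemfile can pull the credentials from the environment; a sketch with hypothetical variable and gem names:

    # Gemfile -- credentials come from config vars, not source control
    gem 'my_private_gem',
        :git => "https://#{ENV['GEM_USER']}:#{ENV['GEM_PASSWORD']}@github.com/mycompany/my_private_gem.git"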

We’ve been using Shopify at ThankLab and chose them because they’re a hosted solution and have a simple API. However, we quickly uncovered many limitations that required us to come up with workarounds. Some of the issues we had to resolve were a result of integrating Shopify into a third-party website with existing user accounts.

As we dug into documentation on the official Shopify wiki and blog, it became clear that many of the solutions they provide are their own hacks around limitations many customers have run into. This realization was double-edged: Shopify clearly encourages hacking on their platform, but many capabilities or “features” are not really baked into their core product. In the end (frustrations aside), the challenge of making Shopify work for us became an exercise in creativity.

Shopify limitations: the checkout page has a unique URL (not known ahead of time), does not accept query string parameters to pre-fill form fields, and only allows you to customize CSS.

Get the Shopify checkout URL ahead of time

I kinda lied there a little bit. The checkout page URL CAN be determined ahead of time, but because the URL has a random-looking token in it, that isn’t immediately evident. Here’s an example of my cart on the Penny Arcade store:
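The URL looked something like this (reconstructed from the two IDs discussed next; the exact host and path may have differed):

https://checkout.shopify.com/carts/429942/c813f407436dc8cdda0c50d58e6d5e96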

As it turns out, the ID in the URL is your shop ID (429942) and is always the same for all customers. The 32-character ID (c813f407436dc8cdda0c50d58e6d5e96) is your cart token, which is stored in a cookie called “cart”. To get the checkout URL for each customer, you’ll have to use JavaScript:
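A sketch of that JavaScript (the cookie parsing is standard; the shop ID and URL format come from the example above):

    // Read the cart token out of the "cart" cookie
    function getCartToken() {
      var match = document.cookie.match(/(?:^|;\s*)cart=([^;]+)/);
      return match ? match[1] : null;
    }

    // Build this customer's checkout URL from the shop ID and cart token
    var checkoutUrl = 'https://checkout.shopify.com/carts/429942/' + getCartToken();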

Pre-filling the Shopify checkout form

We are now able to determine the Shopify checkout URL ahead of time, but need to pre-fill the form. If you’re familiar with Ruby on Rails or web development frameworks in general, you know that you can typically take the names of form fields and add them to the URL as query string values. Those values are then pre-populated in forms that have fields matching those parameters.

In the case of the checkout form, if you view the HTML source of the page in your browser and look for the “input” tags, you’ll find the field for the email address on guest checkout. Its name is “order[email]”. Now, when you add that name as a query string value, the checkout URL looks like this:
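With a placeholder email filled in, something like:

https://checkout.shopify.com/carts/429942/c813f407436dc8cdda0c50d58e6d5e96?order[email]=user@example.com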

Unfortunately this didn’t work. The email address is still blank. However, if you submit the form with invalid information (e.g. blank email address), you’ll notice that the cart URL changes. The URL now ends in “create_order”:
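That is, the same URL with create_order on the end (placeholder email again):

https://checkout.shopify.com/carts/429942/c813f407436dc8cdda0c50d58e6d5e96/create_order?order[email]=user@example.com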

Now you can see that my email address is pre-populated in the form. You can pre-populate all of the form fields if you wish. In our case, we are only pre-populating the email field and using CSS to hide that table row. This effectively lets our users check out without re-entering the email address we already know.

Shopify limitation: Shopify customer logins can only be created by inviting a customer (manually) via email, at which point customers can create a password.

We originally wanted to create a Shopify customer account for each user in our third-party database, but Shopify doesn’t let you create customer accounts through the API. For some reason, the only way to create a Shopify customer account is to manually “invite” the user via email, which sends them a link to create a password. We ended up automating this process using Mechanize (a rough sketch follows the steps):

1. Create a Shopify administrator account that only has access to Customers
2. Mechanize logs in to the shop admin with the administrator account
3. Mechanize retrieves the invite link for a customer
4. Mechanize visits the invite link and fills out the form to create a password
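A rough sketch of the Mechanize code (every URL, form field name and page structure here is hypothetical; the real admin pages would need to be inspected):

    require 'mechanize'
    require 'securerandom'

    customer_id  = 123456789            # hypothetical customer ID
    new_password = SecureRandom.hex(8)  # generate a password for the customer

    agent = Mechanize.new

    # Step 2: log in to the shop admin with the restricted account
    login_page = agent.get('https://myshop.myshopify.com/admin/auth/login')
    login_form = login_page.forms.first
    login_form['login']    = ENV['SHOPIFY_ADMIN_EMAIL']
    login_form['password'] = ENV['SHOPIFY_ADMIN_PASSWORD']
    agent.submit(login_form)

    # Step 3: find the invite link on the customer's admin page
    customer_page = agent.get("https://myshop.myshopify.com/admin/customers/#{customer_id}")
    invite_link   = customer_page.link_with(text: /invite/i)

    # Step 4: follow the invite and fill out the activation form
    invite_page = invite_link.click
    activation  = invite_page.forms.first
    activation['customer[password]']              = new_password
    activation['customer[password_confirmation]'] = new_password
    agent.submit(activation)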

Pretty simple. However, we felt this was less reliable than the guest checkout method. The process could also take up to 7 seconds to complete, and dealing with that latency would have added complexity to our application, as would reliably logging our users in to their Shopify accounts behind the scenes (read: posting their Shopify username and password to Shopify’s login form in a hidden iframe).

Some other convenient undocumented Shopify functionality we discovered

We actually have our own custom cart on a website separate from Shopify. So, when users click the “checkout” button, we need to “copy” the items in our cart over to the Shopify cart. To do this, we first clear the user’s existing Shopify cart and then post all of the items to Shopify.
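As a sketch, posting an item from our own site can be as simple as a form post to Shopify’s /cart/add (the store URL and variant ID are placeholders):

    <form action="http://store.example-shop.com/cart/add" method="post">
      <input type="hidden" name="id" value="123456789" />  <!-- product variant ID -->
      <input type="hidden" name="quantity" value="2" />
      <input type="submit" value="Checkout" />
    </form>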

Redirect after clearing Shopify cart

To clear a Shopify cart, you visit the “/cart/clear” page. After the cart is cleared, it automatically redirects you back to the empty cart page:

http://store.penny-arcade.com/cart/clear

However, if you want to send your user to a page of your choosing, all you have to do is add that page to the URL as a “return_to” parameter:
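For example, to send users to a page of our choosing after the cart is cleared (the path is a placeholder):

http://store.penny-arcade.com/cart/clear?return_to=/pages/some-page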

I have been working on an auto-complete web service that searches Amazon’s Product Advertising API. I built it in Node.js, and the APAC package made it really easy to query the API. The only thing that was extremely impractical was the JSON data returned by APAC.

Since Amazon’s API only returns data as XML, APAC uses xml2json to convert the XML to JSON. Unfortunately the resulting JSON is quite ugly. I wanted to be able to choose the data I needed and copy it to a new, clean JSON format. My solution was to create json2json.
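To illustrate the kind of cleanup I mean, with simplified stand-in structures (not actual APAC output):

    // roughly what an XML-to-JSON conversion produces...
    { "ItemSearchResponse": { "Items": { "Item": [
      { "ItemAttributes": { "Title": { "$t": "Some Book" } } }
    ] } } }

    // ...versus the structure you actually want
    { "items": [ { "title": "Some Book" } ] }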

json2json lets you create a template that describes how to transform the original JSON into a new structure. I wrote the Node.js package and example template in CoffeeScript because it has a much cleaner, simpler syntax than JavaScript. However, it is extremely simple to convert to JavaScript (click “Try CoffeeScript” on the CoffeeScript site) and can easily be modified for use in a browser. Check out the (crude) documentation and example files and let me know what you think.

I created FBAPI.js to handle the setup that Facebook’s JavaScript SDK requires, such as adding a “root” tag to the page before loading the SDK. FBAPI.js takes care of all the SDK requirements and lets you use the Graph API without worrying about the overhead. It adds helper methods for event binding and retrieving user data. The best part, though, is that you don’t have to wait for the page or JavaScript dependencies to load before you can start using it! All methods use promises and callbacks, which lets you run your scripts in any order you want.

I’ll start by saying that the easiest way to handle optional parameters in JavaScript is to use an “options” object, which allows a function to be called with as many or as few parameters (arguments) as you wish.
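For example, a minimal sketch (the function and option names are made up):

    // Callers pass only the options they care about; the rest get defaults.
    function connect(options) {
      options = options || {};
      return {
        host:    options.host    || 'localhost',
        port:    options.port    || 8080,
        timeout: options.timeout || 5000
      };
    }

    connect({ port: 3000 });           // host and timeout fall back to defaults
    connect({ host: 'example.com' });  // only the host is overridden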

I’ve been working on Rembly, which uses Spine.js as the core piece that ties all the functionality together. I decided to use Mustache.js for my HTML templates and chose ICanHaz.js as a simple, lightweight way of managing them.

Although ICanHaz.js is a great start, managing my HTML templates became unwieldy because I started having little templates everywhere. Each part of a page that is dynamically updated needs to be broken out into its own template. When you’ve broken a web page into small parts, it’s hard to keep track of what it looks like when put back together. It also becomes hard to create the correct CSS styles when you lose track of the HTML hierarchy.
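For context, plain ICanHaz.js usage looks roughly like this, with every dynamic fragment needing its own script-tag template (the template name and markup here are made up; assumes jQuery is loaded):

    <script id="user_row" type="text/html">
      <li>{{ name }} ({{ email }})</li>
    </script>

    <script>
      // ICanHaz exposes each template as a function named after its id
      var row = ich.user_row({ name: 'Joel', email: 'joel@example.com' });
      $('#users').append(row);
    </script>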

This led me to enhance ICanHaz.js with a ton of new features, the primary one being nested templates, which let me keep my full HTML page template intact while designating specific HTML tags as “sub-templates” or partials. You can also specify additional templates to load, replacing script “include” tags with the loaded HTML.

I’ve been working with Stratum Security for the past couple of months on ThreatSim (@ThreatSim), which we are happy to announce to the world today! ThreatSim is a web-based phishing attack simulator to help companies assess how vulnerable their network and internal assets may be to phishing attacks. Not only does ThreatSim track who is clicking on phishing emails, but we’re also making an exfiltration agent available, which simulates transmitting sensitive data from the local network out to the internet.

I think I’m a fan of Google’s Instant Search. Rather than having to hit “enter” or click the “search” button, the search results refresh automatically as I type. It’s quirky sometimes: I’ll see search results that look accurate when I’ve only typed a couple of letters, and if I’m typing too fast, they pass me by, and it’s not always easy to back up to those previous results. So, for the most part, I like it.

Google just introduced Instant Preview, which displays a screenshot of each search result while highlighting the relevant part of the page that qualified it as a result. It’s convenient for skimming, and it could lead to some interesting hacks where websites generate catchy screenshots that may or may not look the same when you actually click the link.

Is it silly that Microsoft spends so much time and money chasing the goal of preventing their hardware (and/or software) from being hacked? If you build it, someone will hack it; that’s pretty much a fact in this day and age. I think it’s time to embrace it. As a matter of fact, wouldn’t this HELP push the product? More people would buy it because they could do much more with it than the limited functionality it ships with!

With all the controversy over voting fraud, it’s great to see people consistently trying to come up with ways to make voting systems more reliable and trustworthy. David Bismark gave a great (short) talk about a new kind of ballot that seems very logical and useful. You can walk away with a “receipt” that lets you verify your vote was counted.