I sometimes hate software frameworks. More precisely, I hate their rigidity and their cookie-cutter systems of mass production.

Personally, when someone tells me about a cool new feature of a framework, what I really hear is “look at the shine on those handcuffs.”

I’ve defended this position on many occasions. However, I really want to put it down on record so that I can simply point people at this blog post the next time they ask me to explain myself.

Why Use a Framework?

Frameworks are awesome things.

I do not dispute that.

Just off the top of my head, I can list the following reasons why you should use a framework:

A framework abstracts low level details.

The community support is invaluable.

The out-of-the-box features keep you from reinventing the wheel.

A framework usually employs design patterns and “best practices” that enable team development.

I’m sure that you can add even better reasons to this list.

So why would I not be such a big fan?

What Goes Wrong with Frameworks?

VENDOR LOCK-IN.

Just to reinforce the point, let me say that again: VENDOR LOCK-IN.

There is a natural power asymmetry between the designer/creators of a framework and the users of that framework.

Typically, the users of the framework are the slaves to the framework, and the framework designers are the masters. It doesn’t matter whether the framework is designed by an open source community or a large corporation.

In fact, this power asymmetry is why vendors and consultants can make tons of money on “free” software: you are really paying them for their specialized knowledge.

Once you enter into a vendor lock-in, the vendor, consulting company, or consultant can charge you any amount of money they want. Further, they can continue to increase their prices at astronomical rates, and you will have no choice but to pay it.

Further, the features of your application become more dependent on the framework as the system grows larger. This has the added effect of increasing the switching cost should you ever want to move to another framework.

I’ll use a simple thought experiment to demonstrate how and why this happens.

Define a module as any kind of organizational grouping of code. It could be a function, class, package, or namespace.

Suppose that you have two modules: module A and module B. Suppose that module B has features that module A needs; so, we make module A use module B.

In this case we can say that module A depends on module B, and we can visualize that dependency with the following picture.

Suppose that we introduce another module C, and we make module B use module C. This makes module B depend on module C, and consequently makes module A depend on module C.

Suppose that I created some module D that uses module A. That would make module D depend on module A, module B, and module C.

This demonstrates that dependencies are transitive, and every new module we add to the system will progressively make the transitive dependencies worse.
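The thought experiment can be sketched directly in code. Here is a minimal Python version (the module names follow the text; the dependency graph itself is the only assumption):

```python
# Direct dependencies from the thought experiment: D uses A, A uses B, B uses C.
deps = {"A": {"B"}, "B": {"C"}, "C": set(), "D": {"A"}}

def transitive_deps(module):
    """Collect every module reachable from `module` through the dependency graph."""
    seen = set()
    stack = list(deps[module])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(deps[dep])
    return seen

# Module D drags in A, B, and C, even though it only uses A directly.
print(sorted(transitive_deps("D")))  # ['A', 'B', 'C']
```

Note that module C, the leaf, has an empty transitive set, which is exactly why it is the easy module to reuse.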

“What’s so bad about that?” you might ask.

Suppose that we make a modification to module C. We could inadvertently break module D, module A, and module B since they depend on module C.

Now, you might argue that a good test suite would prevent that … and you would be right.

However, consider the amount of work to maintain your test suite.

If you changed module C then you would likely need to add test cases for it, and since dependencies are transitive you would likely need to alter/add test cases for all the dependent modules.

The transitive dependencies make it nearly impossible to reuse your modules.

Suppose you wanted to reuse module A. Well, you would also need to reuse module B and module C since module A depends on them. However, what if module B and module C don’t do anything that you really need, or, even worse, do something that impedes you?

You have forced a situation where you can’t use the valuable parts of the code without also using the worthless parts of the code.

When faced with this situation, it is often easier to create an entirely new module with the same features as module A.

In my experience, this happens more often than not.

There is a module in this chain of dependencies that does not suffer this problem: module C.

Module C does not depend on anything; so, it is very easy to reuse. Since it is so easy to reuse, everyone has the incentive to reuse it. Eventually you will have a system that looks like this.

So it seems that we are at an impasse: we want to reuse infrastructure code from a framework, but we also want our business logic to be independent of it.

Can’t we have both?

Actually, we can.

The direction of the arrows in our dependency graph is what makes our code immobile.

For example, if module A depends on module B then we have to use module B every time we use module A.

However, if we inverted the dependencies such that module A and module B both depended on a common module — call it module C — then we could reuse module C since it has no dependencies. This makes module C a reusable component; so, this is where you would place your reusable code.
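A minimal Python sketch of this inversion (the `Formatter` classes are hypothetical stand-ins for the modules in the thought experiment):

```python
import json

# Module C: the shared abstraction with no dependencies of its own.
class Formatter:
    def format(self, value):
        raise NotImplementedError

# Module B: depends only on module C's abstraction.
class JsonFormatter(Formatter):
    def format(self, value):
        return json.dumps(value)

# Module A: also depends only on module C, never on module B directly.
def report(data, formatter):
    return formatter.format(data)

# A and B are wired together only at the edge of the application.
print(report({"total": 3}, JsonFormatter()))  # {"total": 3}
```

Because `report` knows nothing about `JsonFormatter`, either side can be reused or swapped without touching the other.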

For example, standard Component Oriented Architecture will break large C++ applications into multiple DLL files, large Java applications into multiple JAR files, or large .NET applications into multiple assemblies.

There are rules about how to structure these packages, however.

You could argue that we would spend too much time on up-front design with this type of system.

I agree that there is some up-front cost, but that cost is easily offset by the time savings on maintenance, running tests, adding features, and anything else that we want to do to the system in the future.

We should view writing mobile code as an investment that will pay big dividends over time.

I plan to create a very simple demo application that showcases this design in the near future. Stay tuned.

We will only need the cs-training.csv and DataDictionary.xls files for this project.

Create application folder

Use the following commands to create a directory and move into it.

In the directory create a file called create_credit_classifier.py. We will incrementally build the program in that file.

Load Data Set

We need to parse the cs-training.csv file so that we can make sense of the data. The following shows the first 5 lines of data from cs-training.csv:

We can observe the following fields in the file:

SeriousDlqin2yrs

RevolvingUtilizationOfUnsecuredLines

age

NumberOfTime30-59DaysPastDueNotWorse

DebtRatio

MonthlyIncome

NumberOfOpenCreditLinesAndLoans

NumberOfTimes90DaysLate

NumberRealEstateLoansOrLines

NumberOfTime60-89DaysPastDueNotWorse

NumberOfDependents

DataDictionary.xls contains a description of each column, except for the first column. However, the first column is obviously an identification (ID) column.

For this example, we want to predict if someone is likely to be a credit risk based on past data.

The column “SeriousDlqin2yrs” is our “outcome” feature and the rest of the columns are our “input” features.

We want to create a classifier that, given some input features, can predict the outcome feature. In order to do that we need to do what is known as “feature extraction”. The following code will do that with pandas.
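The original snippet isn’t reproduced here, but a feature-extraction sketch with pandas might look like this (the inline sample stands in for cs-training.csv and uses only a subset of its columns; the mean-fill strategy is my assumption):

```python
import io
import pandas as pd

# The real project would do: df = pd.read_csv("cs-training.csv", index_col=0)
# Here a tiny inline sample keeps the sketch self-contained.
sample = io.StringIO(
    ",SeriousDlqin2yrs,RevolvingUtilizationOfUnsecuredLines,age,MonthlyIncome\n"
    "1,1,0.766,45,9120\n"
    "2,0,0.957,40,2600\n"
    "3,0,0.658,38,\n"
)
df = pd.read_csv(sample, index_col=0)

# Fill missing values (e.g. the blank MonthlyIncome) with the column mean.
df = df.fillna(df.mean())

# "SeriousDlqin2yrs" is the outcome; every other column is an input feature.
outcome = df["SeriousDlqin2yrs"]
features = df.drop("SeriousDlqin2yrs", axis=1)
print(features.shape)  # (3, 3)
```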

Generate Training and Testing Set

We now have to separate our data into two disjoint sets: a training set and a testing set.

We have to do this because we will use “cross-validation” to measure the accuracy of our predictive model.

We will train our classifier on the training set and test its accuracy on the testing set.

Intuitively, our classifier should classify credit risks in the testing set the same way it would in the real world. This makes the testing set a proxy for how the classifier would behave in production.
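A split sketch using scikit-learn’s `train_test_split` (the 80/20 ratio and the synthetic stand-in data are my assumptions, not the original’s):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data: 100 rows of 3 input features and a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)

# Hold out 20% of the rows as a disjoint testing set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
print(X_train.shape, X_test.shape)  # (80, 3) (20, 3)
```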

Define Classifier Type

Scikit-learn comes with a bunch of baked-in classifiers. We will use the default Naive Bayes classifier for this example.

Train Classifier

To train the model we simply have to feed the classifier the input features and the outcome variable.
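Combining the last two steps, a sketch with scikit-learn’s `GaussianNB` (synthetic stand-in data in place of the real training split):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Stand-in training split; the real project would use the split from the previous step.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(80, 3))
y_train = rng.integers(0, 2, size=80)

# Define the classifier and train it on the input features and outcomes.
classifier = GaussianNB()
classifier.fit(X_train, y_train)
print(classifier.classes_)  # the outcome labels the model learned
```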

Validate Classifier

Now that we have our classifier, we can cross-verify the results against our test set.

The output for this script is the following:

The output shows that we have a 92% accuracy, with the following error types:

55737 true positives

110 true negatives

212 false positives

4076 false negatives
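Those counts come from the original run. As a sketch, the validation step might look like the following (hedged: synthetic stand-in data, and `accuracy_score`/`confusion_matrix` from scikit-learn are my choice of helpers):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.naive_bayes import GaussianNB

# Stand-in data with a learnable rule, so the sketch produces a sensible score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
X_train, y_train, X_test, y_test = X[:160], y[:160], X[160:], y[160:]

classifier = GaussianNB().fit(X_train, y_train)
predictions = classifier.predict(X_test)

# Overall accuracy plus the true/false positive/negative breakdown.
print(accuracy_score(y_test, predictions))
tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
print(tn, fp, fn, tp)
```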

Save Classifier

With our classifier done, we can save it so that we can use it in a separate program.
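A save/load sketch using pickle (the file name and the choice of pickle are my assumptions; joblib would work just as well):

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Train a stand-in model to save; the real project would save the validated classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = rng.integers(0, 2, size=50)
classifier = GaussianNB().fit(X, y)

# Serialize the trained classifier so a separate program can load it later.
path = os.path.join(tempfile.gettempdir(), "credit_classifier.pkl")
with open(path, "wb") as fh:
    pickle.dump(classifier, fh)

# The separate program just unpickles the file and calls predict().
with open(path, "rb") as fh:
    restored = pickle.load(fh)
print(np.array_equal(restored.predict(X), classifier.predict(X)))  # True
```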

Create Web API

With our model created, we can now create our web service that can decide if we should give credit to someone based on certain demographic information.
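As a sketch of such a service (assuming Flask; the route name and payload shape are hypothetical, and the stand-in model replaces the unpickled classifier):

```python
import numpy as np
from flask import Flask, jsonify, request
from sklearn.naive_bayes import GaussianNB

app = Flask(__name__)

# Stand-in model; the real service would unpickle the classifier saved earlier.
rng = np.random.default_rng(0)
model = GaussianNB().fit(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))

@app.route("/api/credit-decision", methods=["POST"])
def credit_decision():
    # Expects a JSON body like {"features": [0.5, 45, 9120.0]}.
    features = np.array(request.get_json()["features"]).reshape(1, -1)
    risky = int(model.predict(features)[0])
    return jsonify({"approve": risky == 0})

# app.run()  # start the development server
```

You can exercise the route with Flask’s built-in test client before wiring up a real front end.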

Recently, cookie authentication has fallen out of favor in the web community for its inability to port across domains and scale.

However, there exists a method to use cookies in a distributed fashion that is both portable across domains and scalable: macaroons (it’s a better kind of cookie).

Google introduced the world to macaroons in 2014 with a white paper. Unfortunately, there does not exist much documentation or code examples for it; so, I thought I’d help fill that void with this blog post.

(Not) Introducing Macaroons

I will not talk about the theory of macaroons. You can use the following resources to get the theory:

Suppose that you had a website that needs to authorize a user and authenticate their access before it can do anything. You probably would have the server send the user a cookie. The user would then send the cookie value to the server with each request, and the server would use that value to determine authentication and authorization details.

This has severe limitations because we cannot easily share that cookie across domains, and most attempts to do so usually introduce network bottlenecks. I will not go into the details of why this happens; you can easily google it.

Macaroons overcome these limitations by allowing you to put arbitrary restrictions on a “cookie”. Further, these “cookies” can include information from third parties.

This allows you to easily share them across domains and distribute them arbitrarily.

Let’s consider a simple example.

Suppose we had an application that we logically divided into (a) the browser, (b) the service provider, and (c) the identity provider.

The browser contains the code that powers user interaction, the service provider contains the code that powers data access and manipulation, and the identity provider powers user management of the system.

Suppose that the user wants to see his stock portfolio on a web page.

In order to satisfy this use case, our system needs to complete the following tasks:

Verify the user’s identity

Verify the user’s rights

Find the data

Show the data

The following sequence diagram shows how we can satisfy those tasks with the help of macaroons.

I will use node.js and the macaroon.js module to power this example. I will only provide code snippets that are relevant to the most important elements of the use case.

Request Macaroon

For this example, the browser will kick-off the macaroon process by making a request to sp.domain.com for a macaroon. Let’s suppose that sp.domain.com exposed the url “/api/macaroons” for that purpose.

Generate Macaroon

The code to generate and return a macaroon could look like the following:

This code creates a first party caveat for the macaroon.

A first party caveat is an enforceable restriction on the macaroon created by the minter of the macaroon. In this case, we restrict this macaroon to a particular IP address.

The code also creates a third party caveat for idp.domain.com. A third party caveat is a restriction that we delegate to another party. In this case, we are requiring the recipient of the macaroon to pass the macaroon to the idp to authorize his requests.

The ability to delegate attenuation to third parties is the secret sauce for macaroons. With normal cookies the service provider would have to manage and coordinate all the services associated with authentication/authorization. However, with macaroons, we can simply delegate authority to a trusted third party.

For security reasons, we generate the caveat key on sp.domain.com and encrypt it using the public key for idp.domain.com. We have to pass the encrypted caveat key to the browser along with the macaroon.

The following is the code that could handle the response from /api/macaroons.

The browser is responsible for coordinating the authentication for the macaroon. In this case, the callback sends the encrypted caveat key and the macaroon to idp.domain.com along with a username and password.

Verify Third Party Caveats

The idp will verify the username and password of the user. If that passes then we can create a discharge macaroon and apply it to our original macaroon. When we create a discharge macaroon, we are satisfying the delegation request from sp.domain.com. In this case, we also assign a timestamp to the macaroon to enable revocation. We are telling sp.domain.com that this macaroon is only good until the time set.

We expect sp.domain.com to obey that restriction.

idp.domain.com will send the original macaroon and the discharge macaroon to the callback. We need them both because the macaroon HMAC-chaining protocol requires both to verify the authenticity of the macaroon.
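The chaining itself is simple to illustrate. This sketch is Python rather than macaroon.js, and it omits the third-party-caveat key encryption, but it shows how each caveat folds into a running HMAC signature that the minter can later replay and verify (the keys and caveat strings are made up):

```python
import hashlib
import hmac

def chain(key, message):
    """One link of the chain: HMAC the message, and use the result as the next key."""
    return hmac.new(key, message.encode(), hashlib.sha256).digest()

root_key = b"secret-root-key"            # known only to the minter (sp.domain.com)
sig = chain(root_key, "macaroon-id-1")   # signature over the macaroon identifier

# Each caveat is folded into the signature, so caveats can be added but never removed.
for caveat in ("ip = 127.0.0.1", "time < 2015-01-01T00:00"):
    sig = chain(sig, caveat)

# The verifier, who also knows the root key, replays the chain and compares signatures.
check = chain(root_key, "macaroon-id-1")
for caveat in ("ip = 127.0.0.1", "time < 2015-01-01T00:00"):
    check = chain(check, caveat)
print(hmac.compare_digest(sig, check))  # True
```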

The above code makes a request to sp.domain.com for the portfolio information. It also sends the root macaroon and discharge macaroon with the request.

Request Data

We now have enough information to verify the authenticity of the macaroon. I know that we have some handwavy helper functions that do the verification. The particulars are not important, though. The verification details will always change depending on the application.

Conclusion

This is not production ready code. There is no error checking or edge case handling. This is just a bare-bones example of how you could use macaroons in a distributed environment. We can easily scale this process up to an arbitrary number of servers. Hopefully, you’ll be able to use this as a starting point to building your own macaroons.

I recently found a way to unit test d3 code, and it has transformed my approach to writing d3 applications.

I want to share this with the d3 community because they typically don’t emphasize unit testing.

Consider the following d3 code.

This code does the following:

append a red circle to a canvas

change the color of the circle to green when the user clicks on it.

Most of the time, we rely on our eyeballs to verify the correctness of d3 applications.

For example, this is what the canvas looks like when D3 renders it.

and D3 will render the following when the user clicks on the circle.

In this case, we can manually verify that the code works.

Unfortunately, we can’t always rely on human eyeball testing.

However, with the jsdom module, we can programmatically verify whether the browser updated the DOM properly: we can (a) set expectations for our DOM, (b) write d3 code to fulfill our expectations, and (c) automatically run the tests with gulp.

First let’s setup our project.

Setup Project

Create a directory for our project, and initialize it as an npm package

Create a directory for our source code and our tests

Install the necessary npm modules

Create gulpfile.js

Create the d3 Application

Put the following code under src/Circle.js

The Circle module takes two arguments. The first argument is the DOM element that we will append the svg canvas to. The second argument is the id we will give the circle. We will need this id for testing.

Testing the d3 Application

Create the file test/Circle.spec.js with the following contents

So far we have not added any tests to the test suite. This is only boilerplate code that we will need to write our tests.

The function d3UnitTest encapsulates the jsdom environment for us. We will use this function in each test to alter the DOM and test the DOM afterwards.

Let’s create our first test.

Append the first test to our test suite.

In this test, we simply check whether the id “circleId” exists within the DOM. We assume that if it exists then the circle exists.

Notice the done parameter in our function. We use it to trigger an asynchronous test. jsdom runs asynchronously and requires a callback; so, we manually call done() to inform jasmine when we are ready for it to test.

Append the second test to our test suite.

Append the third test to our test suite

This code simulates a click on the circle. You have to know a little bit about the internals of d3 to make it work. In this case, we have to pass the circle object itself, and the datum associated with the circle to the click function.

FYI, I couldn’t find a way to get jsdom to simulate user behavior; so, you have to understand the d3 internals if you want to test user behavior. If you know how to use jsdom to simulate user behavior then please let me know how you do it because this method is very error prone.

Run the Tests

When we run gulp from the command line we should see the following

Conclusion

I understand that I gave a pretty simple and contrived example. However, the principles that I demonstrated apply to more sophisticated situations. I can vouch for this because I have already written tests for very complicated behaviors.

There is a downside to this approach. You have to use node.js and write your d3 code as node modules. Some people might not like this dependency. Personally, I don’t find it very inconvenient. In fact, it has forced me to write very modular d3 code.

It is also extremely easy to deploy my d3 code using browserify with this method. Your mileage may vary with this approach. I’m sure that there are ways you can achieve the same result with browser-based tools. For example, I’m pretty sure that you could do something similar with protractor.