This post is going to explain the importance of interfaces, and the concept of programming to abstractions (using the Go programming language), by way of a simple example.

While this may be familiar ground for some readers, it's a fundamental skill to understand because it enables you to design more flexible and maintainable services.

Interfaces in Go

An ‘interface’ in Go looks something like the following:

type Foo interface {
    Bar(s string) (string, error)
}

If an object in your code implements a Bar function with the exact same signature (i.e. it accepts a string and returns both a string and an error), then that object is said to implement the Foo interface.
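The post goes on to discuss a larger FooBeeper interface, whose code isn't included in this excerpt. Based on the description below it might have looked something like this (the Beep signature and the struct definitions are assumed):

```go
package main

import "fmt"

// FooBeeper requires both a Bar and a Beep method
// (the Beep signature is assumed for this sketch).
type FooBeeper interface {
	Bar(s string) (string, error)
	Beep()
}

// thing defines both Bar and Beep, so it fulfils FooBeeper.
type thing struct{}

func (t thing) Bar(s string) (string, error) { return "bar: " + s, nil }
func (t thing) Beep()                        { fmt.Println("beep") }

// differentThing defines only Bar, so it does not fulfil FooBeeper.
type differentThing struct{}

func (d differentThing) Bar(s string) (string, error) { return s, nil }

// anotherThing defines only Beep, so it does not fulfil FooBeeper.
type anotherThing struct{}

func (a anotherThing) Beep() { fmt.Println("beep") }

func main() {
	var fb FooBeeper = thing{} // only thing satisfies the interface
	out, _ := fb.Bar("hello")
	fmt.Println(out)
	fb.Beep()
}
```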

In the above example we’ve defined a FooBeeper interface that requires two methods: Bar and Beep. Now if we look at the various objects we’ve defined (thing, differentThing and anotherThing), we’ll find:

thing: fulfils the FooBeeper interface

differentThing: does not fulfil the FooBeeper interface

anotherThing: does not fulfil the FooBeeper interface

Alternatively, if we were to break the FooBeeper interface up into separate, smaller interfaces (as we demonstrated earlier), then differentThing and anotherThing would become more re-usable.
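A sketch of that split (the interface names are assumed): with separate Fooer and Beeper interfaces, differentThing satisfies Fooer and anotherThing satisfies Beeper, even though neither satisfies FooBeeper.

```go
package main

import "fmt"

// Splitting FooBeeper into two smaller interfaces
// (names and signatures assumed for this sketch).
type Fooer interface {
	Bar(s string) (string, error)
}

type Beeper interface {
	Beep()
}

// differentThing only defines Bar, but that's enough to satisfy Fooer.
type differentThing struct{}

func (d differentThing) Bar(s string) (string, error) { return "bar: " + s, nil }

// anotherThing only defines Beep, but that's enough to satisfy Beeper.
type anotherThing struct{}

func (a anotherThing) Beep() { fmt.Println("beep") }

func main() {
	var f Fooer = differentThing{}
	var b Beeper = anotherThing{}
	out, _ := f.Bar("hello")
	fmt.Println(out)
	b.Beep()
}
```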

This means that for the response body object to be valid, it must support the Read and Close functions defined by these interfaces (the returned object will likely include other functions, but it needs Read and Close at a minimum).

The next thing that happens in the code is that we pass http.Response.Body to an input/output function called ioutil.ReadAll.

If we look at the signature of ioutil.ReadAll we’ll see that it accepts an io.Reader, which we’ve seen already, and so this is another indication of why smaller interfaces enable re-usability.

What the io.Reader interface means for our code is that the input we provide to ioutil.ReadAll must support a Read function; and because http.Response.Body implements the io.ReadCloser interface, we know it does support that required function.

So already we’ve seen quite a few built-in interfaces being utilised to support the standard library code we’re using. More importantly, you’ll find these interfaces (io.ReadCloser, io.Reader, io.Closer and others) used everywhere in the Go codebase (highlighting again how small interfaces enable greater code re-usability).
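The process function under discussion isn't reproduced in this excerpt; based on the surrounding description it might look something like this (the URL parameter and error handling are assumed):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// process is a sketch of the function discussed below: it is tightly
// coupled to net/http because it performs the HTTP GET itself.
func process(url string) ([]byte, error) {
	res, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()

	// res.Body satisfies io.ReadCloser, so it can be handed
	// straight to ioutil.ReadAll (which wants an io.Reader).
	return ioutil.ReadAll(res.Body)
}

func main() {
	// Note: this performs a real network request.
	data, err := process("https://httpbin.org/get")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(string(data))
}
```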

Tight Coupling

Now there’s an issue with the above code, specifically the process function: we’ve tightly coupled the net/http package to it.

What this means is that the process function has to intrinsically know about HTTP, and how to deal with the various methods available in that package.

Also, if we want to test this function we’re going to have a harder time because the http.Get call would need to be mocked somehow. We don’t want our test suite to have to rely on a stable network connection or the fact that the endpoint being requested might be down for maintenance.

The solution to this problem is to invert the responsibility of the process function by injecting its dependencies, a technique known as ‘dependency injection’. This is closely related to one of the S.O.L.I.D principles: the ‘dependency inversion’ principle.

Dependency Injection

If we call a function, then it is our responsibility to provide it with all the things it needs in order to do its job.

In the case of our process function, it needs to be able to acquire data from somewhere (that could be a file, it could be a remote procedure call, it shouldn’t matter). The most important aspect to consider is how it acquires that data.

The how is not the responsibility of the process function, especially if we decide later on that we want to change the implementation from HTTP to gRPC or some other data source.

This means we need to provide that functionality to the process function. Let’s see what this might look like in practice (this is just a first iteration, so it’s not a great solution, but it is a solution):
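That first-iteration code isn't reproduced in this excerpt, but based on the surrounding description it might look something like this (the interface name and method signatures are assumed):

```go
package main

import (
	"io/ioutil"
	"net/http"
)

// dataSource is the injected dependency: anything that can perform a
// GET and hand back a response (name and signature assumed).
type dataSource interface {
	Get(url string) (*http.Response, error)
}

// process no longer creates its own HTTP client, but it still knows
// the returned object is an *http.Response (it references the Body
// field), so it remains coupled to HTTP as the transport mechanism.
func process(d dataSource, url string) ([]byte, error) {
	res, err := d.Get(url)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	return ioutil.ReadAll(res.Body)
}

// httpBin is the real implementation, delegating to net/http.
type httpBin struct{}

func (h httpBin) Get(url string) (*http.Response, error) {
	return http.Get(url)
}

func main() {
	// process(httpBin{}, "https://httpbin.org/get") would perform a
	// real network request, so it isn't executed here.
}
```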

By using this interface as the accepted type in the process function signature, we’re able to decouple the function from having to acquire the data itself, which makes testing much easier (as we’ll see shortly). But the process function is still fundamentally coupled to HTTP as the underlying transport mechanism.

This is a problem because the process function still knows that the returned object is an http.Response: it has to reference the Body field of the response, which isn’t part of the interface we’ve injected (meaning the function intrinsically knows of the concrete type’s existence).

How far you take your interface design is up to you. You don’t necessarily have to solve all possible concerns at once (unless there really is a need to do so).

In other words, this refactor could be considered ‘good enough’ for your use cases. Alternatively, your values and standards may differ, and so you’ll need to consider your options for how you might want to design this solution so that the code isn’t so reliant on HTTP as the transport mechanism.

Note: we’ll revisit this code later and consider another refactor that will help clean up this first pass of code decoupling.

But first, let’s look at how we might test this initial refactor (testing this code teaches us some interesting things about mocking interfaces).

Testing

Below is a simple test suite that demonstrates how we’re now able to construct our own object, with a stubbed response, and pass that to the process function:
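The test code itself isn't included in this excerpt; a sketch consistent with the description below might be as follows (names are partly assumed, the earlier interface and process function are repeated so the example is self-contained, and in a real suite the main function would be a TestProcess(t *testing.T) function):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

// Repeated from the earlier sketch so this example compiles on its own.
type dataSource interface {
	Get(url string) (*http.Response, error)
}

func process(d dataSource, url string) ([]byte, error) {
	res, err := d.Get(url)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	return ioutil.ReadAll(res.Body)
}

// fakeHTTPBin stands in for the real implementation: it builds an
// *http.Response by hand instead of touching the network.
type fakeHTTPBin struct{}

func (f fakeHTTPBin) Get(url string) (*http.Response, error) {
	body := `{"origin": "127.0.0.1"}` // dummy data (contents assumed)
	return &http.Response{
		StatusCode: http.StatusOK,
		// Body must satisfy io.ReadCloser, hence the NopCloser
		// wrapper (explained below).
		Body: ioutil.NopCloser(bytes.NewBufferString(body)),
	}, nil
}

func main() {
	// In a real test suite this would be TestProcess(t *testing.T).
	data, err := process(fakeHTTPBin{}, "https://httpbin.org/get")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // {"origin": "127.0.0.1"}
}
```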

Much like we do in the real implementation, we define a struct, in this case named more explicitly: fakeHTTPBin.

The difference now, and what allows us to test our code, is that we’re manually creating an http.Response object with dummy data.

One part of this code that requires some extra explanation would be the value assigned to the response Body field:

ioutil.NopCloser(bytes.NewBufferString(body))

If we remember from earlier:

The Body field’s ‘type’ is set to the io.ReadCloser interface.

This means that when mocking the Body value we need to return something that has both a Read and a Close method. So we’ve used ioutil.NopCloser which, as its signature shows, returns an io.ReadCloser:

func NopCloser(r io.Reader) io.ReadCloser

The io.ReadCloser interface is exactly what we need (as that interface indicates the returned concrete type will indeed implement the required Read and Close methods).

But to use it we need to provide the NopCloser function with something that supports the io.Reader interface.

If we were to provide a simple string like "Hello World", then this wouldn’t implement the required interface. So we wrap the string in a call to bytes.NewBufferString.

The reason we do this is because the returned type is something that supports the io.Reader interface we need.

But that might not be immediately obvious when looking at the signature for bytes.NewBufferString:

func NewBufferString(s string) *Buffer

So yes, it accepts a string, but we want something that satisfies io.Reader, whereas this function returns a pointer to a Buffer type?

If we look at the implementation of Buffer though, we will see that it does actually implement the required Read function necessary to support the io.Reader interface.

Great! Our test can now call the process function and process the mocked dependency and the code/test works as intended.

More flexible solutions?

OK, so we’ve already explained why this implementation might not be the best we could do. Let’s now consider an alternative implementation:
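That alternative isn't reproduced in this excerpt; based on the description that follows, it might look something like this (the names are partly assumed):

```go
package main

import (
	"io/ioutil"
	"net/http"
)

// dataSource now returns plain bytes rather than an *http.Response,
// so process no longer knows anything about HTTP (signature assumed).
type dataSource interface {
	Get(url string) ([]byte, error)
}

// httpbin is the real implementation: all the HTTP-specific logic
// (making the request, closing and reading the body) lives here.
type httpbin struct{}

func (h httpbin) Get(url string) ([]byte, error) {
	res, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	return ioutil.ReadAll(res.Body)
}

// process is now transport-agnostic: it just asks its dependency for
// data and works with the returned bytes.
func process(d dataSource, url string) ([]byte, error) {
	return d.Get(url)
}

func main() {
	// process(httpbin{}, "https://httpbin.org/get") would perform a
	// real network request, so it isn't executed here.
}
```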

All we’ve really done here is move more of the HTTP-related logic up into the httpbin.Get implementation of the dataSource interface. We’ve also changed the return type from (*http.Response, error) to ([]byte, error) to reflect this change.

Now the process function has even less responsibility as far as acquiring data is concerned. This also means our test suite benefits by having a much simpler implementation:
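A sketch of that simpler suite (again with names partly assumed, the refactored interface repeated so the example is self-contained, and in a real suite the main function would be a TestProcess function):

```go
package main

import "fmt"

// Repeated from the refactored sketch so this compiles on its own.
type dataSource interface {
	Get(url string) ([]byte, error)
}

func process(d dataSource, url string) ([]byte, error) {
	return d.Get(url)
}

// fakeHTTPBin no longer needs to build an *http.Response or wrap the
// body in ioutil.NopCloser: it just returns bytes.
type fakeHTTPBin struct{}

func (f fakeHTTPBin) Get(url string) ([]byte, error) {
	return []byte(`{"origin": "127.0.0.1"}`), nil
}

func main() {
	// In a real test suite this would be TestProcess(t *testing.T).
	data, err := process(fakeHTTPBin{}, "https://httpbin.org/get")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // {"origin": "127.0.0.1"}
}
```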