Rapid API Development with Azure Functions

If you need to rapidly create a simple REST-style CRUD (Create, Read, Update, Delete) API, Azure Functions makes it really easy. You can create multiple functions, one for each operation, and then map each of the HTTP verbs (GET, POST, PUT, DELETE) to the appropriate function.

In this post, I'll show how we can create a REST API to manage TODO list items. It will support five methods:

Get all TODO items (GET)

Get a TODO item by id (GET)

Create a new TODO item (POST)

Update a TODO item (PUT)

Delete a TODO Item (DELETE)

Yes, I know, it's not exactly the most imaginative demo scenario, so to make things a bit more interesting we'll support four different backing stores:

In-memory

Azure Table Storage

Azure Blob Storage (a JSON file per TODO item)

Cosmos DB

The Azure Function app will use precompiled C# functions with the attribute-based binding syntax. All the code is available on GitHub, so do feel free to download and experiment.

In-Memory Implementation

For our first implementation, we'll just store the data in memory. This approach is great for very simple experiments, but comes with obvious limitations. When the Azure Functions runtime de-allocates the server instance running our function app, all items are lost. And if we loaded it heavily enough that there were several instances running our function app, they'd each present a different view of the world.

However, this gives us the opportunity to focus on how the routing is set up for our functions.

First of all, here's the code for our Todo entity, along with a static List<Todo> that our functions will store the items in.
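Roughly, it looks like this (a sketch: the Id and CreatedTime properties and their defaults are my assumptions based on how they're used later in the post):

```csharp
public class Todo
{
    public string Id { get; set; } = Guid.NewGuid().ToString("n");
    public DateTime CreatedTime { get; set; } = DateTime.UtcNow;
    public string TaskDescription { get; set; }
    public bool IsCompleted { get; set; }
}

public static class InMemoryTodoApi
{
    // shared across invocations, but lost when the instance is recycled
    static readonly List<Todo> items = new List<Todo>();
}
```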

Getting all Todo items is nice and easy. We specify the Route for our HttpTrigger to be todo and that we support the HTTP GET verb. Then we can simply return the entire contents of our in-memory list with an OkObjectResult:
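A minimal version might look like this (function names here and throughout are illustrative, not necessarily the ones in the GitHub repo):

```csharp
[FunctionName("Todo_GetAll")]
public static IActionResult GetAll(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo")] HttpRequest req,
    ILogger log)
{
    // return the whole in-memory list as JSON
    return new OkObjectResult(items);
}
```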

To get a specific Todo item by id, we adjust our Route to include an id with the syntax todo/{id}. This allows us to bind the value of the route that triggered this function to another parameter with the name id, that we can use to look up in our list. If the item doesn't exist, we'll return a NotFoundResult:
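Something along these lines:

```csharp
[FunctionName("Todo_GetById")]
public static IActionResult GetById(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo/{id}")] HttpRequest req,
    ILogger log, string id) // id is bound from the route
{
    var todo = items.FirstOrDefault(t => t.Id == id);
    if (todo == null)
        return new NotFoundResult();
    return new OkObjectResult(todo);
}
```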

To create our Todo items, users POST to the todo route, and we deserialize the body of the request into an object defining the new item.

I've chosen to deserialize to a type I've called TodoCreateModel - so that the caller of this function can only specify the properties I allow on creation of a new item - in this case just the task description.

Then when we've added the new Todo item to our list, we return the entire new object so the caller can discover the id of the new item:
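Here's a sketch of the create model and function:

```csharp
public class TodoCreateModel
{
    public string TaskDescription { get; set; }
}

[FunctionName("Todo_Create")]
public static async Task<IActionResult> Create(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "todo")] HttpRequest req,
    ILogger log)
{
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var input = JsonConvert.DeserializeObject<TodoCreateModel>(requestBody);
    var todo = new Todo { TaskDescription = input.TaskDescription };
    items.Add(todo);
    // return the whole item so the caller can discover its id
    return new OkObjectResult(todo);
}
```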

Again, I've defined a custom model called TodoUpdateModel as I'm allowing the TaskDescription and IsCompleted flag to be updated, but nothing else. The verb in this case will be PUT, and the id is specified in the route as we did with the get by id function.

I've made supplying an updated description optional, and I return the new state of the entire Todo item in the OK response:
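The update model and function might look like this:

```csharp
public class TodoUpdateModel
{
    public string TaskDescription { get; set; }
    public bool IsCompleted { get; set; }
}

[FunctionName("Todo_Update")]
public static async Task<IActionResult> Update(
    [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "todo/{id}")] HttpRequest req,
    ILogger log, string id)
{
    var todo = items.FirstOrDefault(t => t.Id == id);
    if (todo == null)
        return new NotFoundResult();

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var updated = JsonConvert.DeserializeObject<TodoUpdateModel>(requestBody);
    todo.IsCompleted = updated.IsCompleted;
    if (!string.IsNullOrEmpty(updated.TaskDescription))
        todo.TaskDescription = updated.TaskDescription; // description is optional

    return new OkObjectResult(todo);
}
```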

Our final method is deleting a Todo item, and things are very familiar. We use the same todo/{id} syntax for the route, and this time it's the HTTP DELETE verb we want. The implementation simply removes the item from the in-memory list and returns an OK result.
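For example:

```csharp
[FunctionName("Todo_Delete")]
public static IActionResult Delete(
    [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "todo/{id}")] HttpRequest req,
    ILogger log, string id)
{
    var todo = items.FirstOrDefault(t => t.Id == id);
    if (todo == null)
        return new NotFoundResult();
    items.Remove(todo);
    return new OkResult();
}
```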

Table Storage Implementation

Next, let's switch our backing store to Azure Table Storage. A benefit of Table Storage is that if you're using Azure Functions, you've already got a Storage Account that the Azure Functions runtime is making use of, so it's nice and convenient to use that for your tables, especially in the prototyping phase. And when testing locally, the Storage Emulator works just fine.

Before we can get started with Table Storage, we do need to adjust our entity definitions slightly.

It will help us to inherit from TableEntity, which gives us the RowKey and PartitionKey properties that Table Storage requires as a combined key, as well as an ETag, which is needed when we update rows.

I've created a TodoTableEntity plus two mapping extension methods, to convert between Todo and TodoTableEntity. For our purposes, it's sufficient to hard-code the PartitionKey to a constant value, as we're not anticipating the need to support many thousands of items in the table.
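Here's a sketch of the entity and mapping methods (property-by-property, the mappings are my reconstruction from the description):

```csharp
public class TodoTableEntity : TableEntity
{
    public DateTime CreatedTime { get; set; }
    public string TaskDescription { get; set; }
    public bool IsCompleted { get; set; }
}

public static class Mappings
{
    public static TodoTableEntity ToTableEntity(this Todo todo)
    {
        return new TodoTableEntity
        {
            PartitionKey = "TODO", // hard-coded constant partition
            RowKey = todo.Id,
            CreatedTime = todo.CreatedTime,
            TaskDescription = todo.TaskDescription,
            IsCompleted = todo.IsCompleted
        };
    }

    public static Todo ToTodo(this TodoTableEntity entity)
    {
        return new Todo
        {
            Id = entity.RowKey,
            CreatedTime = entity.CreatedTime,
            TaskDescription = entity.TaskDescription,
            IsCompleted = entity.IsCompleted
        };
    }
}
```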

With these entities in place, let's look at the code for each of our five functions.

First, to get all Todo items, we can use the Table binding. I wanted to bind this to an IEnumerable<TodoTableEntity> parameter, but this does not appear to be supported in Azure Functions v2. So instead, I bind to a CloudTable, and use ExecuteQuerySegmentedAsync to query for all rows in the table.

The Table binding attribute has already specified the table name (todos) and the name of the connection string (AzureWebJobsStorage). And for simplicity I assume that the first segment returned by the query contains all the Todo items.
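Putting that together, the function might look like this:

```csharp
[FunctionName("Todo_GetAllTable")]
public static async Task<IActionResult> GetAll(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo2")] HttpRequest req,
    [Table("todos", Connection = "AzureWebJobsStorage")] CloudTable todoTable,
    ILogger log)
{
    var query = new TableQuery<TodoTableEntity>();
    // assume all rows fit in the first segment (fine for a demo)
    var segment = await todoTable.ExecuteQuerySegmentedAsync(query, null);
    return new OkObjectResult(segment.Select(Mappings.ToTodo));
}
```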

You may also notice that I'm using a route of todo2. That's simply so that this table-storage implementation of the API doesn't clash with the in-memory one.

Next, we want to get a Todo item by id, and the Table binding helps us out a bit more here. By specifying the PartitionKey (hard-coded to TODO) and the row key (dynamically extracted from the route with the {id} syntax) we can simply bind directly to an instance of TodoTableEntity. This will be set to null if there was no matching row.
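A sketch of that binding in action:

```csharp
[FunctionName("Todo_GetByIdTable")]
public static IActionResult GetById(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo2/{id}")] HttpRequest req,
    [Table("todos", "TODO", "{id}", Connection = "AzureWebJobsStorage")] TodoTableEntity todo,
    ILogger log, string id)
{
    if (todo == null) // no matching row
        return new NotFoundResult();
    return new OkObjectResult(todo.ToTodo());
}
```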

Creating a new Todo item is also simplified by using a binding. Since our function is asynchronous, we need to bind to an instance of IAsyncCollector<TodoTableEntity> and call AddAsync to add a row to the table. But otherwise the technique is exactly the same as for the in-memory implementation - deserialize the request body into a TodoCreateModel and turn that into a TableEntity we can store in our table.
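For example:

```csharp
[FunctionName("Todo_CreateTable")]
public static async Task<IActionResult> Create(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "todo2")] HttpRequest req,
    [Table("todos", Connection = "AzureWebJobsStorage")] IAsyncCollector<TodoTableEntity> todoTable,
    ILogger log)
{
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var input = JsonConvert.DeserializeObject<TodoCreateModel>(requestBody);
    var todo = new Todo { TaskDescription = input.TaskDescription };
    await todoTable.AddAsync(todo.ToTableEntity());
    return new OkObjectResult(todo);
}
```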

Updating a Todo item is where things get a bit painful, as the binding syntax doesn't give us an easy way to do this.

We need to bind directly to CloudTable and perform two operations. First, a Retrieve to get hold of the item to be updated by its PartitionKey (hard-coded to TODO) and RowKey (the id). Then, assuming we found the item, we update its properties and perform a Replace operation to replace the row in Table Storage.
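Sketched out, those two steps look like this:

```csharp
[FunctionName("Todo_UpdateTable")]
public static async Task<IActionResult> Update(
    [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "todo2/{id}")] HttpRequest req,
    [Table("todos", Connection = "AzureWebJobsStorage")] CloudTable todoTable,
    ILogger log, string id)
{
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var updated = JsonConvert.DeserializeObject<TodoUpdateModel>(requestBody);

    // step 1: retrieve the existing row by partition and row key
    var findOperation = TableOperation.Retrieve<TodoTableEntity>("TODO", id);
    var findResult = await todoTable.ExecuteAsync(findOperation);
    if (findResult.Result == null)
        return new NotFoundResult();

    // step 2: update its properties and replace the row
    var existingRow = (TodoTableEntity)findResult.Result;
    existingRow.IsCompleted = updated.IsCompleted;
    if (!string.IsNullOrEmpty(updated.TaskDescription))
        existingRow.TaskDescription = updated.TaskDescription;
    await todoTable.ExecuteAsync(TableOperation.Replace(existingRow));

    return new OkObjectResult(existingRow.ToTodo());
}
```

The Replace relies on the ETag that came back with the Retrieve, which is why inheriting from TableEntity was needed.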

So that's how to get a basic CRUD REST-style API backed by Table Storage up and running. It's a shame that the bindings don't help us out quite as much as I'd like for all the operations. Where you'd really start running into limitations with Table Storage is if you wanted to support some kind of complex querying, as it's really optimized for lookups by partition and row key only.

Before we move onto a real database, let's see how we can do the same thing, but using Blob Storage as the backing store.

Blob Storage Implementation

Of course, Blob Storage isn't a database, so this is not a backing store you're likely to want to use in a real-world application. I'm simply using a Blob Storage container and storing each Todo item as a JSON file. This might be handy if you like the idea of quickly backing up and restoring your "database" just by moving JSON files in and out of a container with Azure Storage Explorer.

Let's see how the Blob storage bindings work to let us implement this backing store:

To get all Todo items, we need to bind with the Blob binding to a CloudBlobContainer. Then we use ListBlobsSegmentedAsync to list all the blobs in the container, and for each one, we download the contents of that blob as text and deserialize it into a Todo item, which we then put in a list to be returned.

You can also see that I'm creating the container if it doesn't exist to avoid us needing to do this manually in advance. The container name is todos and once again I'm using the storage account in the AzureWebJobsStorage connection string.
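Here's a sketch (the todo3 route is my choice to keep this implementation's routes distinct; the post only confirms todo and todo2):

```csharp
[FunctionName("Todo_GetAllBlob")]
public static async Task<IActionResult> GetAll(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo3")] HttpRequest req,
    [Blob("todos", Connection = "AzureWebJobsStorage")] CloudBlobContainer container,
    ILogger log)
{
    await container.CreateIfNotExistsAsync();
    var segment = await container.ListBlobsSegmentedAsync(null);
    var todos = new List<Todo>();
    foreach (var blob in segment.Results.OfType<CloudBlockBlob>())
    {
        // each blob is one Todo item serialized as JSON
        var json = await blob.DownloadTextAsync();
        todos.Add(JsonConvert.DeserializeObject<Todo>(json));
    }
    return new OkObjectResult(todos);
}
```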

To get a specific Todo item by id, we can make use of a nice feature of the Blob binding: we can use the {id} parameter from the function route in the binding to specify which blob we are interested in. In our case, its path will be todos/{id}.json.

We bind to a string parameter, so the contents of the blob (if it exists) will get automatically read into a string for us. All that remains is for us to deserialize it into a Todo (ironically so that it can be immediately serialized back into JSON).
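For example:

```csharp
[FunctionName("Todo_GetByIdBlob")]
public static IActionResult GetById(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo3/{id}")] HttpRequest req,
    [Blob("todos/{id}.json", Connection = "AzureWebJobsStorage")] string json,
    ILogger log, string id)
{
    if (json == null) // blob doesn't exist
        return new NotFoundResult();
    var todo = JsonConvert.DeserializeObject<Todo>(json);
    return new OkObjectResult(todo);
}
```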

For creating a Todo item in blob storage, I guess it might have been possible to use IAsyncCollector, but I found it easier to bind to a CloudBlobContainer, and then use UploadTextAsync to upload the newly created Todo item serialized as JSON into a text file in the blob container:
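Something like this:

```csharp
[FunctionName("Todo_CreateBlob")]
public static async Task<IActionResult> Create(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "todo3")] HttpRequest req,
    [Blob("todos", Connection = "AzureWebJobsStorage")] CloudBlobContainer container,
    ILogger log)
{
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var input = JsonConvert.DeserializeObject<TodoCreateModel>(requestBody);
    var todo = new Todo { TaskDescription = input.TaskDescription };

    await container.CreateIfNotExistsAsync();
    var blob = container.GetBlockBlobReference($"{todo.Id}.json");
    await blob.UploadTextAsync(JsonConvert.SerializeObject(todo));

    return new OkObjectResult(todo);
}
```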

As with Table Storage, the update operation is the most complex. We can however bind to a specific blob by using the same todos/{id}.json syntax we saw when we got an item by id. And we bind to an instance of CloudBlockBlob.

If the blob doesn't exist, we return a NotFoundResult, but otherwise, we download the blob contents with DownloadTextAsync, update the deserialized Todo item, and then re-serialize it and replace it with UploadTextAsync.
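Sketched out:

```csharp
[FunctionName("Todo_UpdateBlob")]
public static async Task<IActionResult> Update(
    [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "todo3/{id}")] HttpRequest req,
    [Blob("todos/{id}.json", Connection = "AzureWebJobsStorage")] CloudBlockBlob blob,
    ILogger log, string id)
{
    if (!await blob.ExistsAsync())
        return new NotFoundResult();

    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var updated = JsonConvert.DeserializeObject<TodoUpdateModel>(requestBody);

    // download, update, re-serialize, replace
    var existingTodo = JsonConvert.DeserializeObject<Todo>(await blob.DownloadTextAsync());
    existingTodo.IsCompleted = updated.IsCompleted;
    if (!string.IsNullOrEmpty(updated.TaskDescription))
        existingTodo.TaskDescription = updated.TaskDescription;
    await blob.UploadTextAsync(JsonConvert.SerializeObject(existingTodo));

    return new OkObjectResult(existingTodo);
}
```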

Finally, to delete a Todo item from Blob Storage, we can use the same binding technique as for updating and bind to CloudBlockBlob, only we now simply need to check if the blob exists and delete it with DeleteAsync.
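For example:

```csharp
[FunctionName("Todo_DeleteBlob")]
public static async Task<IActionResult> Delete(
    [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "todo3/{id}")] HttpRequest req,
    [Blob("todos/{id}.json", Connection = "AzureWebJobsStorage")] CloudBlockBlob blob,
    ILogger log, string id)
{
    if (!await blob.ExistsAsync())
        return new NotFoundResult();
    await blob.DeleteAsync();
    return new OkResult();
}
```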

So that's how you can implement a CRUD REST-style API backed by Blob Storage. Not something you'd likely do very often, but it does at least show off a few of the ways of binding to Blob Storage in Azure Functions. The code for this implementation is available here.

Cosmos DB Implementation

Finally, let's use a real database as the backing store. Although I've heard lots of good things about Cosmos DB, I've not played with it very much, so this was my first attempt at using it in Azure Functions.

I downloaded and installed the Cosmos DB emulator which is a great option to have at your disposal for experiments, as running even a basic Cosmos DB instance in the cloud isn't particularly cheap. Once you have the Cosmos DB emulator running, you will need to add a connection string pointing at it to your local.settings.json file.
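Something along these lines (CosmosDBConnection is my choice of setting name; the emulator's well-known account key is shown as a placeholder, which you can copy from the emulator's data explorer page):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "CosmosDBConnection": "AccountEndpoint=https://localhost:8081/;AccountKey=<emulator key>"
  }
}
```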

Also, since I'm using Azure Functions v2, the Cosmos DB bindings don't come out of the box, and so I needed to add a PackageReference in my csproj file to install the necessary extension. It's currently in pre-release:
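The reference looks something like this (the exact pre-release version will have moved on, so check NuGet for the latest):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="3.0.0-beta8" />
</ItemGroup>
```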

To get all Todo items from Cosmos DB, I needed to use the CosmosDB binding, specifying the database name (in my case tododb) and collection name (tasks). I'd already created these in the emulator UI, and decided that I would use the "SQL API" for this database.

We need to specify the name of the connection string, and a SqlQuery which already shows the benefits of using a real database - I can make use of an ORDER BY statement to order the Todo items by their timestamp.

We bind to an IEnumerable<Todo>, and since Cosmos DB stores schemaless documents, there is no problem with it deserializing directly into our own custom Todo entity.
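A sketch of the function (the todo4 route is my choice to keep the routes distinct, and ordering by the internal _ts timestamp is one way to interpret "order by their timestamp"):

```csharp
[FunctionName("Todo_GetAllCosmos")]
public static IActionResult GetAll(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo4")] HttpRequest req,
    [CosmosDB("tododb", "tasks", ConnectionStringSetting = "CosmosDBConnection",
        SqlQuery = "SELECT * FROM c ORDER BY c._ts DESC")] IEnumerable<Todo> todos,
    ILogger log)
{
    // the binding has already run the query for us
    return new OkObjectResult(todos);
}
```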

Retrieving an item by id is something the CosmosDB binding has good support for. We can set the Id property of the CosmosDB binding to use the {id} from the route, and bind directly to our Todo entity, making it really easy to return the requested item if it exists.
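For example:

```csharp
[FunctionName("Todo_GetByIdCosmos")]
public static IActionResult GetById(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "todo4/{id}")] HttpRequest req,
    [CosmosDB("tododb", "tasks", ConnectionStringSetting = "CosmosDBConnection",
        Id = "{id}")] Todo todo,
    ILogger log, string id)
{
    if (todo == null) // no document with that id
        return new NotFoundResult();
    return new OkObjectResult(todo);
}
```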

Creating our Todo item in theory ought to have been simple, but there was a nasty gotcha.

We bind to an IAsyncCollector, which we've seen before, but if I bound to an IAsyncCollector<Todo>, I ended up creating documents with two id properties: one called Id (capital I), which was my own id, and one called id (all lower case), which was the one auto-generated by Cosmos DB.

The workaround I show here is a bit ugly. I simply create an anonymous object with all the same properties, but with a lower case id property, so that my own generated Todo item id is the one that gets used. I'm sure there's other ways to work round this, so do let me know what you prefer to do in the comments.
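Here's a sketch of that workaround:

```csharp
[FunctionName("Todo_CreateCosmos")]
public static async Task<IActionResult> Create(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "todo4")] HttpRequest req,
    [CosmosDB("tododb", "tasks",
        ConnectionStringSetting = "CosmosDBConnection")] IAsyncCollector<object> todos,
    ILogger log)
{
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var input = JsonConvert.DeserializeObject<TodoCreateModel>(requestBody);
    var todo = new Todo { TaskDescription = input.TaskDescription };

    // anonymous object with a lower-case id, so Cosmos DB uses our generated id
    await todos.AddAsync(new
    {
        id = todo.Id,
        todo.CreatedTime,
        todo.IsCompleted,
        todo.TaskDescription
    });

    return new OkObjectResult(todo);
}
```

An alternative would be a [JsonProperty("id")] attribute on the Todo entity's Id property, though that changes how the entity serializes everywhere.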

As with our other backing stores, updating an existing item proves the most complicated. We need to bind to a DocumentClient to allow us to perform a query to find the item to update. We do this with CreateDocumentQuery, and can use a LINQ Where clause to find the document with our id.

Although CreateDocumentQuery returns an IQueryable, it doesn't support all the LINQ extensions you might expect, so trying to pick out the document we want directly with FirstOrDefault doesn't work - hence the need for AsEnumerable().FirstOrDefault().

Once we've found the document, we need to update it, but the syntax to change properties is a little cumbersome. We call SetPropertyValue on each field in the document we want to change.

Next, we can update a Cosmos DB document with ReplaceDocumentAsync, but we also want our update method to return the new Todo item. There's a handy trick you can use to convert a Document to a strongly typed class: cast it to a dynamic and then assign it to a strongly typed variable of the desired type. This is easier than calling GetPropertyValue, which I've shown how to do in the commented-out code:
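Putting all of that together, the update function might look like this:

```csharp
[FunctionName("Todo_UpdateCosmos")]
public static async Task<IActionResult> Update(
    [HttpTrigger(AuthorizationLevel.Anonymous, "put", Route = "todo4/{id}")] HttpRequest req,
    [CosmosDB(ConnectionStringSetting = "CosmosDBConnection")] DocumentClient client,
    ILogger log, string id)
{
    var requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var updated = JsonConvert.DeserializeObject<TodoUpdateModel>(requestBody);

    var collectionUri = UriFactory.CreateDocumentCollectionUri("tododb", "tasks");
    var document = client.CreateDocumentQuery(collectionUri)
        .Where(doc => doc.Id == id)
        .AsEnumerable().FirstOrDefault(); // FirstOrDefault directly on the IQueryable isn't supported
    if (document == null)
        return new NotFoundResult();

    document.SetPropertyValue("IsCompleted", updated.IsCompleted);
    if (!string.IsNullOrEmpty(updated.TaskDescription))
        document.SetPropertyValue("TaskDescription", updated.TaskDescription);
    await client.ReplaceDocumentAsync(document);

    // cast via dynamic to get back to a strongly typed Todo
    Todo updatedTodo = (dynamic)document;
    return new OkObjectResult(updatedTodo);
}
```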

Finally, to delete items from our Cosmos DB database, we again bind to a DocumentClient, and in order to call DeleteDocumentAsync we need to get hold of the item to delete so we can access its "self link". We can use exactly the same technique as we did in the update function to call CreateDocumentQuery to find the document with the specified id.
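For example:

```csharp
[FunctionName("Todo_DeleteCosmos")]
public static async Task<IActionResult> Delete(
    [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "todo4/{id}")] HttpRequest req,
    [CosmosDB(ConnectionStringSetting = "CosmosDBConnection")] DocumentClient client,
    ILogger log, string id)
{
    var collectionUri = UriFactory.CreateDocumentCollectionUri("tododb", "tasks");
    var document = client.CreateDocumentQuery(collectionUri)
        .Where(doc => doc.Id == id)
        .AsEnumerable().FirstOrDefault();
    if (document == null)
        return new NotFoundResult();

    // DeleteDocumentAsync needs the document's self link
    await client.DeleteDocumentAsync(document.SelfLink);
    return new OkResult();
}
```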

As you can see, the Azure Functions bindings for Cosmos DB offer us some help, but the experience could do with some improvements. I have noticed that some people have made Cosmos DB helper libraries to simplify these operations. Let me know in the comments if you have a favourite one you can recommend. The code for my Cosmos DB implementation is available here.

Summary

With Azure Functions it's really quick and easy to create a basic CRUD REST-style API, and the built-in bindings can save you time working with Blob Storage, Table Storage, and Cosmos DB. However, operations like updating existing items are currently not well supported by the bindings, so you have to be prepared to do a little bit of additional work yourself.

Also, my implementations here all ignore the concerns of paging and filtering when retrieving all Todo items, which you'd likely want to do in a real-world application. In that scenario, having a proper database like Cosmos DB would start paying dividends, while you'd quickly run into performance issues with Table or Blob Storage.

About Mark Heath

I'm a Microsoft MVP and software developer based in Southampton, England, currently working as a Software Architect for NICE Systems. I create courses for Pluralsight and am the author of several open source libraries. I currently specialize in architecting Azure based systems and audio programming. You can find me on: