A Cloudflare Workers example: How to flag a client as malicious

What are Cloudflare Workers?

In the madness that is the evolution of cloud computing, the newest and coolest thing is, without a doubt, serverless computing. In this new paradigm of consuming computing resources, we no longer deal with servers, an underlying operating system, a file system, or even system administrators. Welcome to the No-Ops world! Cloudflare offers a new serverless computing service called Cloudflare Workers that fulfills all these characteristics and more.

What do Cloudflare Workers offer?

Cloudflare Workers is a solution that allows any developer to run their code at the edge of Cloudflare’s content distribution network. Other serverless solutions let developers’ code run in a handful of geographical regions, at best a dozen or so per provider. Serverless solutions at the edge (and Cloudflare Workers is not the only one) let code run as close to the remote client as possible. As a result, user experience improves thanks to lower latency and other optimizations, such as customizing content based on geographic location.

From the developer’s point of view, serverless computing services offer a very closed and controlled development environment. In fact, some people call this type of service FaaS (Function as a Service), because that is exactly what you develop in this super-restrictive framework.

In the case of Cloudflare Workers, the programming language chosen is JavaScript. And it’s JavaScript because they decided to implement the Service Workers API on the server instead of in the browser. Service Workers are the W3C standard API for scripts that run in the background of a web browser and intercept HTTP requests. Cloudflare Workers follow the same standard API but run on Cloudflare’s servers, not in a browser.
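To give a taste of what this API looks like, here is a minimal, hypothetical Worker that intercepts a request and adds one custom header before forwarding it to the origin server. The X-Worker-Seen header name is just an illustration, not something from Cloudflare's docs:

```javascript
// Register the fetch handler. The addEventListener global is
// provided by the Workers runtime; the guard simply lets this
// sketch load outside that runtime too.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request))
  })
}

async function handleRequest(request) {
  // Forward a modified copy of the request to the origin server.
  return fetch(withFlagHeader(request))
}

// Build a copy of the incoming request with one extra header.
function withFlagHeader(request) {
  const modified = new Request(request)
  modified.headers.set('X-Worker-Seen', 'true')
  return modified
}
```

The same handleRequest pattern (intercept, modify, fetch) is what the Worker in this post builds on.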

For examples of what a developer can do, check the announcement blog post. They also claim that Cloudflare Workers are really fast: a typical Worker script executes in less than one millisecond, the difference between a response with and without a Worker is negligible, and most users are unable to measure any latency difference.

Cloudflare Workers Billing model

Workers are a paid add-on to Cloudflare: the first 10 million requests cost $5 per month, and every extra million requests costs $0.50 per month. There is no free tier for testing, which is a big change in billing policy for a company with a track record of a very generous free tier.

If you want to test Workers without paying for the service, have a look at the Cloudflare Workers Playground, where developers can test-drive this new service without an account (you can test the example from this blog post there, too). It’s totally developer-oriented, but it gives you a glimpse of the whole environment.

The Worker checks the remote client’s IP address against the Apility.io API and acts on the answer: if the request returns a list of blacklists, it creates a new header parameter, Apilityio-Badip, with that list and passes it to the origin server.

If the response is empty, it creates the header parameter Apilityio-Badip with an empty value.

If the Apility API server returns an error, it passes ‘ERROR’ to the origin server.

Finally, it passes a header parameter, Apilityio-Elapsed-Time, with the milliseconds it took to perform the request to the Apility API server.
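Put together, the steps above can be sketched roughly as follows. This is a hypothetical reconstruction, not the code from the gist: the api.apility.net/badip endpoint, the X-Auth-Token header, and the JSON response shape are my assumptions about the Apility.io API, and the fetch function is passed in as a parameter only so the logic is easy to exercise outside the Workers runtime:

```javascript
// Hypothetical sketch of the Worker's decision logic. The endpoint
// URL, auth header and response shape are assumptions, not copied
// from the gist. Replace APILITYIO_API_KEY with your own key.
const APILITYIO_API_KEY = 'YOUR_API_KEY'

async function badipHeaders(ip, fetchFn) {
  const start = Date.now()
  let badip
  try {
    const res = await fetchFn('https://api.apility.net/badip/' + ip, {
      headers: { 'X-Auth-Token': APILITYIO_API_KEY }
    })
    if (res.status === 200) {
      // The IP was found: report the blacklists it appears in.
      const body = await res.json()
      badip = (body.response || []).join(',')
    } else if (res.status === 404) {
      badip = ''          // clean IP: empty header value
    } else {
      badip = 'ERROR'     // unexpected answer from the API server
    }
  } catch (e) {
    badip = 'ERROR'       // network or parsing failure
  }
  return {
    'Apilityio-Badip': badip,
    'Apilityio-Elapsed-Time': String(Date.now() - start)
  }
}
```

The Worker then copies these two entries onto the request headers before calling fetch() against the origin.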

Hence, when a request is made to a website where the Worker has a route configured, the Worker figures out whether the IP is malicious and passes the newly created header parameters to the origin server (never back to the remote client!).

Let’s get started!

Enable Cloudflare Workers

Workers is a paid feature, so developers have to enter their billing details first and enable the Workers feature:

Once enabled, we open the Launch Editor, where developers can code and test Workers:

The Editor is a web application where developers can code, test, and preview Workers. They can also configure routes there. A route is the URL to intercept: a Worker acts like a man-in-the-middle, sitting between the remote client’s browser and the origin server, and the route controls which resources on the origin server it will handle.

In my opinion, the Editor is the least advanced part of the service: it feels like a Minimum Viable Product developed by a start-up looking for funding (obviously not the case for Cloudflare…). I hope this will improve in the future to catch up with competitors such as AWS Lambda@Edge.

Coding the Worker

Now paste into the Script window the code you can find in this gist. Don’t forget to replace APILITYIO_API_KEY with your own API key from Apility.io!

To test this Worker, we will use RequestBin. This tool gives you a URL that collects the requests made to it and lets you inspect them in a human-friendly way, so we can see exactly what the HTTP client is sending. RequestBin creates a random URL, which we paste into the textbox right under Preview. Now, every time we click Update Preview, the request made from our browser will try to connect to the random URL generated by RequestBin and will return OK if it worked.

Now, if we go to the URL given by RequestBin and refresh the page, we will see the list of header parameters, including our new Apilityio-Badip and Apilityio-Elapsed-Time:

Apilityio-Badip is empty because my public IP address is clean, and as Apilityio-Elapsed-Time shows, the lookup took 63 ms.

Our Worker is ready, so now we have to add a route with the URL we want to intercept. If we want to catch several URLs, we can use wildcards. In this example we will use our old documentation site, apidocs.apility.io:

And we are done! Now, when a user tries to connect to our site apidocs.apility.io, the origin server will receive these two new parameters. If Apilityio-Badip contains any blacklist, it’s up to the server-side code to perform whatever action is required: for example, banning access or redirecting to a different page.
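Server-side handling can be as small as a function that maps the incoming header value to an action. This is only a sketch: the Apilityio-Badip header comes from the Worker, but the /blocked.html target and the decision to fail open on ‘ERROR’ are illustrative choices of my own:

```javascript
// Sketch of origin-server logic acting on the header set by the
// Worker. The /blocked.html target and the fail-open choice on
// 'ERROR' are illustrative assumptions, not part of the article.
function actionForBadipHeader(value) {
  if (value === undefined || value === '') {
    return { action: 'allow' }       // clean IP (or header missing)
  }
  if (value === 'ERROR') {
    return { action: 'allow' }       // fail open if the API lookup failed
  }
  // The client appears in one or more blacklists.
  return {
    action: 'redirect',
    location: '/blocked.html',
    blacklists: value.split(',')
  }
}
```

An Express handler, for example, could call actionForBadipHeader(req.get('Apilityio-Badip')) and issue the redirect when the action says so.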

Why is this solution better than an integration in our code?

With this approach, the developer only has to look at the information contained in Apilityio-Badip and decide what to do. A more classic approach would need to implement the request to the Apility API on every page, plus the logic to decide what to do. The Workers solution looks simpler and more flexible, and it will probably be easier to maintain in the long run.

Still, this is only one example of the new capabilities Workers can offer, and I think there is a lot of room for improvement: for example, redirecting to CAPTCHA pages before continuing, blocking access right at the edge, or lowering access levels for suspicious users. Do you have more ideas? Let us know in the comments section!