Developing an API in the healthcare field that monetizes functionality your end users will love is a sure bet. But you don’t want anyone using it without proper authorization: you must protect it.

Securing access to your API’s endpoint is one of the fundamental aspects to consider in the design phase, and one of the most costly in resources and time. Nubentos provides the ideal interface to spare you much of this work, but you still need to apply some security measures on your side.

In this series of articles we will show you different mechanisms to secure the endpoint of your API for Health, and how to configure the Nubentos platform to connect to it.

We will start with the most basic mechanism: hiding the endpoint from the world. Simple and fast, but still too exposed.

We will then incorporate an additional basic mechanism: a username and password, so that only authorized requests reach the endpoint.

Finally, we will see an even more refined solution that will help you solve the shortcomings of the previous solutions.

Hide the endpoint from the world

We start from the following architecture, where the API provider (you) has an application server that handles requests. It also has a load balancer or reverse proxy, which will be responsible for routing requests according to the requested context.

In this way, a request to http://endpoint/api reaches the application server, which is responsible for responding. If another context is requested, for example http://endpoint/doc, the balancer directs the request to some static content.

The REST API you have developed is exposed through a URL. A first step to hide its existence is to give the domain a cryptic name and carefully control who you give that URL to.

We could generate a domain name like: d635865a-dab5-4254-804f-7f65eb99aca2.apiprovider.com

You can use the uuidgen command to generate a random string. You just have to make sure that the first character is a letter, so the hostname complies with RFC 1035.
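As a sketch, one way to do this is to generate UUIDs in a loop and retry until the first character is a letter (apiprovider.com is just the placeholder domain used in this article):

```shell
# Generate a lowercase UUID and retry until it starts with a letter,
# so the resulting host name complies with RFC 1035.
while :; do
  name=$(uuidgen | tr 'A-Z' 'a-z')
  case "$name" in
    [a-z]*) break ;;
  esac
done
# apiprovider.com is the article's placeholder domain.
echo "${name}.apiprovider.com"
```

Since roughly three out of four hexadecimal first characters are digits, the loop usually finishes in one or two iterations.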

This method is not good practice on its own, because sooner or later someone will identify the URL, whether through the configuration of the clients that access the API or by sniffing the traffic generated on the network.

How do we perform this configuration?

For this example, an instance of nginx will take on the load balancer role (although we could also use Apache HTTP Server 2.4).

We assume you have the IP address of the application server where your REST API runs (for example 192.168.0.10), and that the API lets us look up doctors by their identifier.

Using the resource “/doctor” and passing it the identifier “/1”, the result will be a JSON response with the main fields of the doctor entity.
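For illustration, a response to a request like GET /api/doctor/1 might look like the JSON below. The field names are assumptions for this example, not the API’s real schema; the sed expression shows one way a client could pick out a field without extra tools:

```shell
# Hypothetical response body for GET /api/doctor/1
# (field names are illustrative, not a real schema):
response='{"id": 1, "name": "Jane Doe", "specialty": "Cardiology"}'

# Extract the doctor's name without needing a JSON tool like jq:
echo "$response" | sed 's/.*"name": "\([^"]*\)".*/\1/'
# → Jane Doe
```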

We will start from an nginx installation that is already working correctly on the server as a load balancer. Instructions for installing nginx on the most common distributions can be found at http://docs.nginx.com. For this example we will use the installation on Red Hat Linux.

In essence, all you need to do is configure nginx with instructions on what type of connections to listen to and where to redirect them.

To do this, we create a new configuration file using the text editor of your choice, for example with nano:

sudo nano /etc/nginx/conf.d/load-balancer.conf

In the load-balancer.conf file, define two sections, upstream and server, as in the example that follows.

upstream applicationserver {
    server 192.168.0.10:8080;
}

server {
    listen 80;
    server_name d635865a-dab5-4254-804f-7f65eb99aca2.apiprovider.com;

    proxy_set_header Host $http_host;

    location /api {
        proxy_pass http://applicationserver;
    }
}

Here upstream refers to the IP address and port where the application server is listening, and server defines the port on which nginx listens and the host name it responds to.
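The /doc context mentioned earlier can be handled in the same way, with an additional location block inside the server section that serves static files. A sketch, where the root path is just an assumption:

```nginx
    # Serve the /doc context as static content.
    # /var/www/apidocs is a hypothetical directory holding the files.
    location /doc {
        root /var/www/apidocs;
    }
```

After any change, validating the file with nginx -t before reloading nginx avoids taking the balancer down with a typo.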