How to use Nginx to front your backend services with trusted CA certificates over HTTPS

Nowadays, with the adoption of serverless architectures, microservices are becoming a great way to break a problem down into smaller pieces. A common situation is multiple backend services running on technologies like NodeJS, Python, Go, etc. that need to be accessible via HTTPS. It is possible to enable SSL on each of these internal microservices directly, but a cleaner approach is to use a reverse proxy that fronts these microservices and provides a single HTTPS access channel, allowing simple internal routing.

In this blog, I show how simple it is to create this front end with Nginx, leveraging “Let’s Encrypt” to generate trusted certificates for it, with strong security policies, so that our website can score an A+ on cryptographic SSL tests conducted by third-party organisations.

Pre-requisites

You should have a non-root user who has sudo privileges.

You must own or control the registered domain name that you wish to use the certificate with.

Note: I am using Ubuntu 16.04 – Adjust accordingly if using other OS.

The instructions to install Nginx on Ubuntu 16.04 and to set up SSL certificates are based on these great articles 1 and 2. Special thanks to Mitchell Anicas for his great work that made this post possible.

Let’s Install Nginx

Install Nginx:

sudo apt-get update

sudo apt-get install nginx

Note: On Ubuntu, Nginx registers application profiles with the ufw firewall, which makes it simple to enable and disable ports. Given I am running this environment in the Oracle Public Cloud behind a separate firewall, I don’t want to add unnecessary security controls. However, if you are directly connected to the public internet, it is recommended that you enable a firewall.

Verify that Nginx is up and running:

systemctl status nginx

Also, in a browser, make sure that you can see the Nginx welcome page by typing your public IP address or domain (default port 80).
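If you prefer the command line, a quick header check run on the server itself (localhost is assumed here) confirms Nginx is answering:

```shell
# Fetch only the HTTP response headers from the local Nginx instance
curl -I http://localhost/
```

You should see a 200 response with a `Server: nginx` header.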

Basic management commands

The next commands will help you manage Nginx:

To stop it:

sudo systemctl stop nginx

To start it when it is stopped:

sudo systemctl start nginx

To restart it:

sudo systemctl restart nginx

If you are making configuration changes and want to reload Nginx without dropping connections:

sudo systemctl reload nginx

Nginx is configured to start automatically when the server boots. In order to disable this behaviour:

sudo systemctl disable nginx

To re-enable the service to start up at boot:

sudo systemctl enable nginx

Let’s add SSL into the mix

We are going to use the “Let’s Encrypt” client, Certbot, to obtain a valid SSL certificate. For this, first install Certbot:

sudo add-apt-repository ppa:certbot/certbot (You will have to press ENTER to continue)

sudo apt-get update

sudo apt-get install certbot

As part of the certificate request, Certbot will need to place a validation file under /.well-known for security validation. Make a change to the Nginx configuration so this location is served:

For this, edit the file /etc/nginx/sites-available/default. Inside the server block, enter:

location ~ /.well-known {
        allow all;
}

Once you have added this snippet, you can validate it for syntax errors by typing:

sudo nginx -t

Make sure you get a successful test.

Restart Nginx:

sudo systemctl restart nginx

Use the Webroot plugin to request an SSL certificate. In this case I am specifying all domains I want to work with this certificate.
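The exact command is not reproduced in this extract, but a typical Webroot request looks like the following. Here example.com, www.example.com and the webroot path /var/www/html are placeholders; adjust them to your own domains and Nginx document root:

```shell
# Request one certificate covering every domain you want it to serve
sudo certbot certonly --webroot --webroot-path=/var/www/html \
  -d example.com -d www.example.com
```

Certbot will place its validation file under the webroot’s /.well-known directory, which is why we exposed that location above.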

Notice that, when you reorganise the configuration for HTTPS, the current body of the server definition becomes the body of the new definition.

It is recommended to configure Nginx to automatically redirect all non-secure HTTP requests to encrypted HTTPS. However, if you need to serve both HTTP and HTTPS, use the following configuration instead:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    server_name [YOUR-DOMAIN];

    include snippets/ssl-[YOUR-DOMAIN].conf;
    include snippets/ssl-params.conf;

    . . .
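For the recommended redirect setup mentioned above, a sketch of the alternative would be a small HTTP-only server block that does nothing but redirect, alongside the HTTPS block (placeholders as before):

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name [YOUR-DOMAIN];
    # Permanently redirect all plain-HTTP traffic to HTTPS
    return 301 https://$server_name$request_uri;
}
```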

Once you are done, validate that the file is accurate by running a test: sudo nginx -t
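The two included snippets are not shown in this post. As a reference, a hardened ssl-params.conf in the spirit of Mitchell Anicas’s article might look like the sketch below; the exact cipher list and paths are illustrative, so verify them against current best practice before use:

```nginx
# snippets/ssl-params.conf -- strong TLS defaults (illustrative values)
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# HSTS: instruct browsers to insist on HTTPS for six months
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
# Stronger Diffie-Hellman parameters; generate with: openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
ssl_dhparam /etc/ssl/certs/dhparam.pem;
```

Settings like these are what push the configuration towards the A+ rating discussed later.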

Also make sure that you adjust your firewall to allow ports 443 and 80 (in case you are still serving HTTP) into your server.

That’s it, now you can restart Nginx:

sudo systemctl restart nginx

Test your domain in a browser using plain HTTP; if you decided to redirect, it should automatically present the Nginx welcome page over HTTPS.

Setting Up Auto Renewal

“Let’s Encrypt” certificates are only valid for ninety days, so you need to plan for renewing your certificate. Let’s create a simple cron job that can take care of this task.

Create a cron job:

sudo crontab -e

Copy and paste the following line at the end of the file. We are basically setting up a check every day at 12:00am.
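The crontab line itself is missing from this extract; based on the flags described in the original articles, a daily midnight check would look like this (the certbot path may differ on your system):

```
0 0 * * * /usr/bin/certbot renew --quiet --renew-hook "/bin/systemctl reload nginx"
```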

The renew command for Certbot will check all certificates installed on the system and update any that are set to expire in less than thirty days. --quiet tells Certbot not to output information nor wait for user input. --renew-hook "/bin/systemctl reload nginx" will reload Nginx to pick up the new certificate files, but only if a renewal has actually happened.

That’s it. All installed certificates will be automatically renewed and reloaded when they have thirty days or less before they expire.

Validate how secure your site is now

You can use the Qualys SSL Labs Report to see how your server configuration scores.

After various strong cipher security assessments, you should score a beautiful A+ rating!!!
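Besides the SSL Labs report, you can also spot-check the handshake from any machine with OpenSSL (replace example.com with your own domain):

```shell
# Confirm the server negotiates TLS and presents the Let's Encrypt chain
openssl s_client -connect example.com:443 -servername example.com < /dev/null
```

Look for the certificate chain and the negotiated protocol and cipher in the output.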

Finally, let’s make our reverse proxy configuration

Now that we are fully secured, let’s create our reverse proxy configuration to route to our internal services.

Edit file /etc/nginx/sites-available/default

Within the server block, add a “location /” configuration. This basically maps an external URI to an internal one. In this case we are assuming that, on the same machine where Nginx runs, there is a service running on port 3000.

Notice that the internal URL (proxy_pass) can be different from the one used externally. That is, externally you can use “location /newservice/”, while internally it wraps “proxy_pass http://127.0.0.1:3001/a/b/c/service”.
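Putting that together, a minimal reverse-proxy location for the port-3000 service mentioned above could look like the following sketch; the proxy_set_header directives are a common addition rather than part of the original post:

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    # Preserve the original request details for the backend service
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

After editing, validate with sudo nginx -t and reload Nginx so the new routing takes effect.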


Author: Carlos Rodriguez Iturria

