Archive for the ‘Docker’ Category

Docker containerization is becoming a standard for installing brand new servers, especially servers that live in self-made clusters based on orchestration technologies like Kubernetes (k8s) (check out http://kubernetes.io).

Recently, I had the task to set up a Squid Cache (Open Proxy) server on a custom port number, to make it harder for internet open-proxy scanners to identify, and to ship it to a customer.

What is Squid Open Proxy?

An open proxy is a proxy server that is accessible by any Internet user, in other words anyone could access the proxy without any authentication.

Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP and other protocols.
It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages.
Squid has extensive access controls and makes a great server accelerator.
It runs on most available operating systems, including Windows, and is licensed under the GNU GPL.

What is Docker?

For those who hear about Docker for the first time: Docker is an open-source software platform used to create, deploy and manage virtualized application containers on a common OS such as GNU/Linux or Windows, with a surrounding ecosystem of tools. Besides the open-source version, there is also a commercial version of the product by Docker Inc., the company that originally developed Docker and still actively maintains the project today.

Docker components – picture source docker.com

What is Kubernetes?

Kubernetes, in short, is an open-source system for managing clusters of containers. To do this, it provides tools for deploying applications, scaling those applications as needed, managing changes to existing containerized applications, and optimizing the use of the underlying hardware beneath your containers.
Kubernetes is designed to be extensible and fault-tolerant by allowing application components to restart and move across systems as needed.

Kubernetes is itself not a Platform as a Service (PaaS) tool, but it serves as more of a basic framework, allowing users to choose the types of application frameworks, languages, monitoring and logging tools, and other tools of their choice. In this way, Kubernetes can be used as the basis for a complete PaaS to run on top of; this is the architecture chosen by the OpenShift Origin open source project in its latest release.

1. Install Docker Containerization Software Community Edition

Docker containers are similar to virtual machines, except that they run as normal processes (containers), do not use a Type 1 or Type 2 hypervisor, consume fewer resources than VMs and are easier to manage, no matter what the OS environment is.

Docker uses cgroups and namespaces to allow independent containers to run within a single Linux instance.

Docker Architecture – Picture source docker.com

The docker install instructions below are for Debian / Ubuntu Linux; the instructions for RPM-based distros (Fedora / CentOS / RHEL) are very similar, except that the yum or dnf tool is to be used.

a) Uninstall older versions of docker , docker-engine if present

apt-get -y remove docker docker-engine docker.io

! Previously created Docker data such as volumes, images and networks will be preserved in /var/lib/docker/

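With the old packages removed, the install itself can be done via Docker's convenience script from get.docker.com. A minimal sketch (for production machines you may prefer adding the apt repository manually, as described on docs.docker.com):

```shell
#!/bin/sh
# Sketch: install Docker CE on Debian / Ubuntu via Docker's convenience
# script. Set INSTALL_DOCKER=yes to actually run it (needs root + network).
GET_DOCKER_URL="https://get.docker.com"
if [ "${INSTALL_DOCKER:-no}" = "yes" ]; then
    curl -fsSL "$GET_DOCKER_URL" -o /tmp/get-docker.sh
    sh /tmp/get-docker.sh
    docker --version   # verify the install succeeded
fi
```

The same script detects RPM distros and uses yum / dnf there, so it doubles as the Fedora / CentOS / RHEL path.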

2. Build Docker image with Ubuntu Linux OS and Squid inside

To build a docker image, all you need is a Dockerfile (the docker build-definitions file), an official Ubuntu Linux OS image (provided / downloaded from the dockerhub repo) and a handful of docker commands that use apt / apt-get to install the Squid proxy inside the container.

In a Dockerfile it is common to define an entrypoint.sh, a file with shell script commands that gets executed on top of the newly run OS, immediately after Docker fetches the OS from its remote repository. It is pretty much as if you had configured your own Linux distribution, like Linux From Scratch!, to run on a bare-metal (hardware) server, and as part of the OS installation made Linux run a number of scripts or commands that are not part of its regular installation process.

a) Go to https://hub.docker.com/ and create an account for free

The docker account is necessary in order to push the built docker image later on.
Creating the account takes just a few minutes.

b) Create a Dockerfile with definitions for Squid Open Proxy setup

I'll not get into details on the syntax that a Dockerfile accepts, as this is well documented on Docker's official website, but in general getting the basics and starting out takes from about 30 minutes to at most an hour.

After playing around a bit to get my Linux distribution (Ubuntu Xenial) with Squid installed on top of it, with the right SQUID cache configuration to serve as an open proxy, I ended up with the following Dockerfile.
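A minimal sketch of such a Dockerfile (the exact file may differ; the COPY file names are assumptions matching the files described further down):

```dockerfile
# Sketch: Ubuntu Xenial base with Squid, config and entrypoint copied in.
FROM ubuntu:xenial

RUN apt-get update && \
    apt-get install -y squid && \
    rm -rf /var/lib/apt/lists/*

# Ship the prepared squid.conf template and the entrypoint script
COPY squid.conf /etc/squid/squid.conf
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

EXPOSE 3128
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```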

Apart from that, I've used the following entrypoint.sh (which creates the necessary caching and logging directories, sets the permissions for the SQUID proxy files and launches Squid on container start-up); it is loaded from the Dockerfile at docker image build time.
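A sketch of what such an entrypoint.sh typically does (the directory paths assume Ubuntu's default Squid layout; the real script may differ):

```shell
#!/bin/sh
# entrypoint.sh sketch: prepare Squid's cache and log directories, fix
# ownership, then run Squid in the foreground so it stays PID 1.
CACHE_DIR=${CACHE_DIR:-/var/spool/squid}
LOG_DIR=${LOG_DIR:-/var/log/squid}

mkdir -p "$CACHE_DIR" "$LOG_DIR" 2>/dev/null
chown -R proxy:proxy "$CACHE_DIR" "$LOG_DIR" 2>/dev/null

if command -v squid >/dev/null 2>&1; then
    squid -z 2>/dev/null || true   # build the cache directory structure
    exec squid -NYCd 1             # -N keeps squid in the foreground
fi
```

Running Squid with -N (no daemon mode) matters in a container: if the process daemonized, Docker would think the container exited.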
To have the right SQUID configuration shipped into the newly built docker container, it is necessary to prepare a template configuration file, which is pretty much a standard squid.conf file with the following SQUID proxy configuration for an open proxy.
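The open-proxy relevant part of such a squid.conf boils down to a handful of directives (a sketch; the cache sizes are arbitrary):

```
# Listen port (remapped to a custom external port when the container runs)
http_port 3128
# Open proxy: no ACL restrictions, anyone may use it
http_access allow all
# Cache and log locations
cache_dir ufs /var/spool/squid 1024 16 256
access_log /var/log/squid/access.log
coredump_dir /var/spool/squid
```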

c) Create a shell script to build and push the docker image

The script uses the docker login command to authenticate non-interactively to https://hub.docker.com, then runs docker build with a properly set DOCKER_ACC (docker account, which is the username of your hub.docker.com account, as pointed out earlier in the article) and DOCKER_REPO (docker repository name). You can get the repository name either from a browser after you've logged in to dockerhub or, assuming you know your username, from a URL that looks like:

https://hub.docker.com/u/your-username-name – for example mine is hipod with repository name squid-ubuntu; my squid-ubuntu docker image build is here. You'll also need to provide the password inside the script or, if you consider that a security concern, instead type docker login manually on the command line and authenticate in advance before running the script. Finally, the last line, docker push, pushes the new build of Ubuntu + SQUID proxy to the remote docker hub with a predefined tag, which in my case is latest (as this is my latest build of Squid); if you need to keep multiple Squid versions in the repository, just change the tag to a version tag.
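A sketch of such a build-docker-image.sh (the variable names DOCKER_ACC / DOCKER_REPO / IMG_TAG are assumptions following the description above; docker login is left to be done beforehand):

```shell
#!/bin/sh
# build-docker-image.sh sketch: build the image and push it to docker hub.
# Run `docker login` beforehand, or add it here with your credentials.
DOCKER_ACC=${DOCKER_ACC:-hipod}
DOCKER_REPO=${DOCKER_REPO:-squid-ubuntu}
IMG_TAG=${IMG_TAG:-latest}
IMAGE="$DOCKER_ACC/$DOCKER_REPO:$IMG_TAG"

echo "Building and pushing $IMAGE"
if command -v docker >/dev/null 2>&1; then
    docker build -t "$IMAGE" .
    docker push "$IMAGE"
fi
```

Overriding IMG_TAG on the command line (e.g. IMG_TAG=3.5 sh build-docker-image.sh) is how you would keep multiple Squid versions in the same repository.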

d) Use the script to build Squid docker image

Next run the script to make and push into docker your new image:

sh build-docker-image.sh

Please consider that in order to work with docker hub push / pull you will need a firewall that allows connections to the dockerhub site repo; if for some reason the push / pull fails, check your firewall closely, as it is the most likely cause of failure.

3. Run the new docker image to test Squid runs as expected

To make sure the docker image runs properly, you can test it on any machine that has docker.io installed. This is done with a simple command:

docker run -d --restart=always -p 3128:3128 hipod/squid-ubuntu:latest

The -d option tells docker to run the process in the background (detached mode). The -p option tells docker to expose the port (i.e. to NAT with iptables from the docker container, with Linux OS + SQUID listening inside the container on port 3128, to TCP/IP port 3128 on the server).
You can use iptables to check the created Network Address Translation rules.
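A quick way to confirm the proxy answers (assuming the container above is running on the same host) is to request a page through it with curl:

```shell
# Fetch response headers through the proxy; a working Squid answers and
# logs the request. 127.0.0.1:3128 matches the port mapping used above.
PROXY_URL="http://127.0.0.1:3128"
if command -v curl >/dev/null 2>&1; then
    curl -sS -x "$PROXY_URL" -I http://www.example.com 2>/dev/null || \
        echo "proxy not reachable at $PROXY_URL"
fi
```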

The --restart=always option sets the docker restart policy (i.e. when the container terminates, it tells docker to restart the container (OS) after exit). Other restart policies you can use are no, on-failure[:max_retries] and unless-stopped.
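For an already-running container, the restart policy can be changed in place with docker update, without recreating the container (the container name below is hypothetical):

```shell
# Change the restart policy of a running container without recreating it.
POLICY="unless-stopped"
if command -v docker >/dev/null 2>&1; then
    docker update --restart="$POLICY" my-squid-container
fi
```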

The task included deploying two different open proxy Squid servers on separate ports in order to add external cluster Ingress load balancing via Amazon AWS, thus I actually used the following 2 yaml files.
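A sketch of one deployment + service pair (object names are assumptions; the second pair differs only in name and in using external port 33129):

```yaml
# Sketch: one of the two Squid deployments plus its LoadBalancer service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: squid-proxy-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: squid-proxy-1
  template:
    metadata:
      labels:
        app: squid-proxy-1
    spec:
      containers:
      - name: squid
        image: hipod/squid-ubuntu:latest
        ports:
        - containerPort: 3128
---
apiVersion: v1
kind: Service
metadata:
  name: squid-proxy-1
spec:
  type: LoadBalancer
  selector:
    app: squid-proxy-1
  ports:
  - port: 33128        # external port exposed via the AWS load balancer
    targetPort: 3128   # Squid's listen port inside the container
```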

The service is externally exposed via a LoadBalancer configured later, to make the 2 squid servers deployed in the k8s cluster accessible from the Internet by anyone without authorization (as normal open proxies) via TCP/IP ports 33128 and 33129.

Conclusion

Though it all looks quite simplistic, I should say creating the .yaml files took me a long time. Creating system configuration this way is not as simple as using the good old .conf files, and getting used to the indentation takes time.

Now, once the LBs are configured to play with k8s, you can enjoy the 2 proxy servers. If you need to get some similar task done and don't want to do it yourself, contact me and I can do it for a small fee.