
As Docker containerization is becoming a standard for setting up brand-new servers, especially servers that live in self-made clusters based on orchestration technologies like Kubernetes (k8s) (check out http://kubernetes.io), I recently had the task to set up a Squid Cache (Open Proxy) server on a custom port number (to make it harder for Internet open-proxy scanners to identify) and ship it to a customer.

What is Squid Open Proxy?

An open proxy is a proxy server that is accessible by any Internet user, in other words anyone could access the proxy without any authentication.

Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP and other protocols.
It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages.
Squid has extensive access controls and makes a great server accelerator.
It runs on most available operating systems, including Windows and is licensed under the GNU GPL.

What is Docker?

For those who hear about Docker for the first time: Docker is an open-source software platform to create, deploy and manage virtualized application containers on a common OS such as GNU/Linux or Windows, and it has a surrounding ecosystem of tools. Besides the open source version there is also a commercial version of the product by Docker Inc., the original company that developed Docker and today actively supports the project.

Docker components – picture source docker.com

What is Kubernetes?

Kubernetes, in short, is an open source system for managing clusters of containers. To do this, it provides tools for deploying applications, scaling those applications as needed, managing changes to existing containerized applications, and helps you optimize the use of the underlying hardware beneath your containers.
Kubernetes is designed to be extensible and fault-tolerant by allowing application components to restart and move across systems as needed.

Kubernetes is itself not a Platform as a Service (PaaS) tool, but it serves as more of a basic framework, allowing users to choose the types of application frameworks, languages, monitoring and logging tools, and other tools of their choice. In this way, Kubernetes can be used as the basis for a complete PaaS to run on top of; this is the architecture chosen by the OpenShift Origin open source project in its latest release.

1. Install Docker Containerization Software Community Edition

Docker containers are similar to virtual machines, except they run as normal processes (containers) that do not require a Type 1 or Type 2 hypervisor, consume fewer resources than VMs and are easier to manage, no matter what the OS environment is.

Docker uses cgroups and namespaces to allow independent containers to run within a single Linux instance.

Docker Architecture – Picture source docker.com

The Docker install instructions below are for Debian / Ubuntu Linux; the instructions for RPM-based distros (Fedora / CentOS / RHEL) are very similar, except the yum or dnf tool is used instead of apt.

a) Uninstall older versions of docker / docker-engine, if present

apt-get -y remove docker docker-engine docker.io

! Note: previously created Docker data such as volumes, images and networks will be preserved in /var/lib/docker/

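b) Install the Docker CE packages

The install commands themselves are not shown in this excerpt; the usual Debian / Ubuntu steps for installing Docker CE from the official docker.com apt repository look roughly like this (a sketch assuming the repository layout documented on docs.docker.com; replace ubuntu with debian in the URLs on a Debian host):

apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
# add Docker's official GPG key and apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce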

2. Build Docker image with Ubuntu Linux OS and Squid inside

To build a Docker image, all you need is a Dockerfile (the Docker build definitions file), an official Ubuntu Linux OS image (provided / downloaded from the Docker Hub repository) and a bunch of Dockerfile commands that use apt / apt-get to install the Squid Proxy inside the Docker container.

In the Dockerfile it is common to define an entrypoint.sh, a file with shell script commands that gets executed on top of the newly started OS immediately after Docker fetches the base OS from its remote repository. It is pretty much as if you had configured your own Linux distribution (like with Linux From Scratch!) to run on a bare-metal (hardware) server and, as part of the OS installation process, made Linux run a number of scripts or commands that are not part of its regular installation.

a) Go to https://hub.docker.com/ and create an account for free

The Docker account is necessary in order to push the built Docker image later on.
Creating the account takes just a few minutes.

b) Create a Dockerfile with definitions for Squid Open Proxy setup

I'll not get into details on the syntax that a Dockerfile accepts, as this is well documented on the official Docker website, but in general getting the basics and starting out takes about 30 minutes to at most an hour.

After playing a bit to get my Linux distribution of choice (Ubuntu Xenial) with Squid installed on top of it, configured as an Open Proxy, I ended up with the following Dockerfile.
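The actual Dockerfile of the build is not reproduced in this excerpt; a minimal sketch of a Dockerfile doing what is described (Ubuntu Xenial base image, Squid installed via apt, squid.conf and entrypoint.sh copied in at build time) could look roughly like this:

FROM ubuntu:xenial

# Install the Squid proxy from the Ubuntu repositories
RUN apt-get update && apt-get install -y squid && rm -rf /var/lib/apt/lists/*

# Ship the prepared Open Proxy configuration and the entrypoint script
COPY squid.conf /etc/squid/squid.conf
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh

# Squid listens on its default port inside the container
EXPOSE 3128

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]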

Apart from that, I've used the following entrypoint.sh (which creates the necessary caching and logging directories, sets the right permissions and launches Squid on container start-up); the script is loaded by the Dockerfile at docker image build time.
To have the right SQUID configuration shipped into the newly built Docker container, it is necessary to prepare a template configuration file, which is pretty much a standard squid.conf file with the following SQUID Proxy configuration for an Open Proxy:
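Neither the original squid.conf nor the entrypoint.sh is reproduced in this excerpt, so the following are only sketches. The essential Open Proxy directives in such a squid.conf template would be along these lines:

# Listen on the standard Squid port inside the container
http_port 3128
# Open proxy: allow requests from anyone, no authentication
http_access allow all
# Cache and log locations
cache_dir ufs /var/spool/squid 1024 16 256
access_log /var/log/squid/access.log
coredump_dir /var/spool/squid

And a minimal entrypoint.sh doing the set-up described above (create the cache / log directories, fix their ownership, then run Squid in the foreground so the container stays up) could be:

#!/bin/sh
# Create Squid cache and log directories and hand them to the proxy user
mkdir -p /var/spool/squid /var/log/squid
chown -R proxy:proxy /var/spool/squid /var/log/squid
# Initialize the on-disk cache structure
squid -z -f /etc/squid/squid.conf
# Run Squid in the foreground (no daemonizing) so Docker keeps the container alive
exec squid -N -f /etc/squid/squid.conf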

c) Create a script to build and push the Docker image

The script uses the docker login command to authenticate non-interactively to https://hub.docker.com, and the docker build command with a properly set DOCKER_ACC (Docker account – which is the username of your hub.docker.com account, as I pointed out earlier in the article) and DOCKER_REPO (Docker repository name). You can get the repository name either from a browser after you've logged in to Docker Hub or, assuming you know your username, from a URL that should look like:
https://hub.docker.com/u/your-username-name – for example mine is hipod with repository name squid-ubuntu, my squid-ubuntu docker image build is here. You'll also need to provide the password inside the script or, if you consider that a security concern, instead type docker login manually from the command line and authenticate in advance before running the script. Finally, the last line, docker push, pushes the new build of Ubuntu + SQUID Proxy to the remote Docker Hub with a predefined TAG, which in my case is latest (as this is my latest build of Squid); if you need to keep multiple Squid versions in the repository, just change the tag to a version tag.
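The script itself is not reproduced in this excerpt; a minimal build-docker-image.sh along the lines described above might look like this (DOCKER_ACC, DOCKER_REPO and the tag are taken from the description; the password variable name and values are only illustrative):

#!/bin/sh
# Account / repository / tag on hub.docker.com (example values)
DOCKER_ACC="hipod"
DOCKER_REPO="squid-ubuntu"
IMG_TAG="latest"
DOCKER_PASS="your-dockerhub-password"

# Non-interactive login to Docker Hub (or run 'docker login' manually beforehand)
docker login -u "$DOCKER_ACC" -p "$DOCKER_PASS"
# Build the image from the Dockerfile in the current directory
docker build -t "$DOCKER_ACC/$DOCKER_REPO:$IMG_TAG" .
# Push the freshly built image to the remote Docker Hub repository
docker push "$DOCKER_ACC/$DOCKER_REPO:$IMG_TAG"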

d) Use the script to build Squid docker image

Next, run the script to build and push your new image to Docker Hub:

sh build-docker-image.sh

Please consider that in order to work with Docker Hub push / pull, your firewall needs to allow connections to the Docker Hub repository; if for some reason the push / pull fails, check your firewall closely, as it is the most likely cause of failure.

3. Run the new docker image to test Squid runs as expected

To make sure the docker image runs properly, you can test it on any machine that has docker.io installed; this is done with a simple command:

docker run -d --restart=always -p 3128:3128 hipod/squid-ubuntu:latest

The -d option tells Docker to run the container in the background (detached mode); the -p option tells Docker to expose the port (i.e. create an iptables NAT rule from port 3128, on which Squid listens inside the container's Linux OS, to TCP port 3128 on the server).
You can use iptables to check the created Network Address Translation rules.
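For example, a quick way to see the NAT rule Docker created for the exposed port (the DOCKER chain is the one Docker manages by default):

iptables -t nat -L DOCKER -n | grep 3128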

The --restart=always option sets the Docker restart policy (i.e. when the container terminates, it tells Docker to restart the container (OS) after exit); other possible restart policies are no, on-failure[:max_retries] and unless-stopped.

The task included deploying two different Open Proxy Squid servers on separate ports, in order to add external cluster ingress load balancing for them via Amazon AWS, thus I actually used the following 2 YAML files.

The service is externally exposed via a later-configured LoadBalancer, to make the 2 Squid servers deployed into the k8s cluster accessible from the Internet by anyone without authorization (as normal open proxies) via TCP ports 33128 and 33129.
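The two YAML files themselves are not included in this excerpt; a sketch of what one of them could look like is below (a Deployment plus a LoadBalancer Service; the resource names are illustrative, the image and ports follow the description above, and the second file would differ only in names and in using port 33129):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: squid-proxy-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: squid-proxy-1
  template:
    metadata:
      labels:
        app: squid-proxy-1
    spec:
      containers:
      - name: squid
        image: hipod/squid-ubuntu:latest
        ports:
        - containerPort: 3128
---
apiVersion: v1
kind: Service
metadata:
  name: squid-proxy-1-lb
spec:
  type: LoadBalancer
  selector:
    app: squid-proxy-1
  ports:
  - protocol: TCP
    port: 33128
    targetPort: 3128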

Conclusion

Though it all looks quite simple, I should say creating the .yaml files took me a while. Creating system configuration this way is not as simple as using the good old .conf files, and getting used to the indentation takes time.

Now, once the LBs are configured to play with k8s, you can enjoy the 2 proxy servers. If you need a similar task done and can't do it yourself, contact me and I can do it for a small fee.

If you're in the IT industry, even if you don't like installing Windows frequently or you're a complete Linux / BSD user, you will certainly have a lot of friends who will want help from you to re-install or fix their Windows 7 / 8 / 10 OS. At least this is the case with me: every year I'm kind of obliged to install fresh Windows copies on friends' or relatives' newly bought notebooks / desktop PCs.

Of course, depending on whom the new Windows OS is installed for, the preferences for necessary software vary; however, more or less there is a standard list of Windows software used daily by most average computer users, such as:

– WinDirStat or SpaceSniffer (tools that can show you which are the biggest files and directories inside a directory tree)
– WinGrep (Grep for Windows)
– Everest Home Edition (hardware system information – shows you what the PC hardware is)

These are the programs I tend to install on new Windows installs, and thus I have more or less systematized the process.

As a Free Software enthusiast I usually try to stick to free software where possible for each of the above categories, and luckily nowadays there is a lot of non-proprietary, or at least free-as-in-beer, software available out there.

For Windows sysadmins, for college and other public institution networks with multiple Windows computers that are not inside a domain, and also for people in computer repair shops where dozens of Windows pre-installs or automatic software updates are necessary daily, make sure to take a look at Ninite.

As the official website introduces Ninite:

Ninite – Install and Update All Your Programs at Once

Of course, as Ninite is used by organizations such as NASA, Harvard Medical School etc., it is likely the tool reports your installed list of Windows software and various other PC statistics to the Ninite developers and most likely the NSA, but this probably doesn't matter much, as that ship has probably already sailed the moment you chose to install a Windows OS on your PC.

For Windows system administrators managing small and medium-sized networks of PCs that are not inside a domain, Ninite could definitely save hours and in some cases even days of boring install and maintenance work. HP Enterprise or HP Inc. employees or ex-employees would definitely love Ninite, because what Ninite does is pretty much like the well-known HP internal tool PC COE.

Ninite can also prepare an installer containing multiple applications based on your choices on Ninite's website, which is also a great thing, especially if you need to deploy different types of user PCs (scientific / gaming / work etc.).

Perhaps there are other useful things to install on a fresh Windows installation; if you're using something I'm missing, let me know in the comments.

If you happen to be installing the Qmail mail server on a Debian or Ubuntu (.deb based) Linux, you will notice that by default some kind of MTA (Mail Transport Agent) providing the mail-transport-agent package is already installed, and because of the Debian .deb package dependency requiring an MTA to always be present on the system, you will be unable to remove the Exim MTA without installing some other MTA (Postfix / Qmail etc.).

This will be a problem for those like me who prefer to compile and install Qmail from source; thus, to get around it, it is necessary to create a dummy package that will trick the deb packaging dependencies into believing the mta-local MTA package is present on the server.

The way to go here is to use equivs (Circumvent Debian package dependencies):

debian:~# apt-cache show equivs|grep -i desc -A 10

Description: Circumvent Debian package dependencies
This package provides a tool to create trivial Debian packages.
Typically these packages contain only dependency information, but they
can also include normal installed files like other packages do.
.
One use for this is to create a metapackage: a package whose sole
purpose is to declare dependencies and conflicts on other packages so
that these will be automatically installed, upgraded, or removed.
.
Another use is to circumvent dependency checking: by letting dpkg
think a particular package name and version is installed when it

By the way, creating a .deb dummy package will be necessary in many other cases, e.g. when you have to install from third-party Debian repositories or from old and already unmaintained deb-src packages for the sake of resurrecting some archaic software, so sooner or later, even if you're not into mail servers, you will certainly need equivs.

Then install equivs and proceed with creating the dummy mail-transport-agent package:
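The exact commands are not shown in this excerpt; the usual equivs workflow that ends up producing the /tmp/mta-local_1.0_all.deb package mentioned below is roughly this (a sketch, the control field values follow the package name and version used below):

debian:~# apt-get install -y equivs
debian:~# cd /tmp
debian:~# equivs-control mta-local
debian:~# editor mta-local
# in the generated control template set at least:
#   Package: mta-local
#   Version: 1.0
#   Provides: mail-transport-agent
#   Conflicts: mail-transport-agent
#   Replaces: mail-transport-agent
debian:~# equivs-build mta-local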

The above commands will build and package the /tmp/mta-local_1.0_all.deb dummy package.
So continue and install it with dpkg, as you usually install Debian packages:

debian:~# dpkg -i /tmp/mta-local_1.0_all.deb
…

From then on you can continue your standard LWQ – Life with Qmail or any other source based qmail installation with:

./config-fast mail.yourmaildomain.net
…

So that's it: the .deb packaging system's consistency will now be complete, so standard security package updates with apt-get / aptitude, or dpkg -i installs of third-party custom software, will not break any more.

If you administer a shared free-shell university Linux server, offer free accounts to a small community of *NIX users, or are responsible for a Linux software company's development servers where programmers log in daily to work on software / websites, it is necessary to have tightened security rules, with a major goal of keeping different user accounts' processes separate from one another (hiding all system and other users' processes from each logged-in user).

Preventing users from seeing other users' processes is essential for Linux servers at high risk of being hacked. In earlier times, hiding all processes besides one's own from a logged-in user was possible by using the Grsecurity kernel security module. In recent Linux kernels, version 3.2+ (on Debian (unstable) / Ubuntu 14.04 / RHEL / CentOS v6.5 and above), you can hide processes from other users, so that only root (the superuser) can see all running processes (with ps auxwwf), via the native kernel option hidepid.

Configuring Hidepid

To enable the hidepid option you have to remount the /proc filesystem with the Linux kernel hardening hidepid option; to apply it one time on an already-running server, issue:

mount -o remount,rw,hidepid=2 /proc

To make the hidepid setting permanently active, it is necessary to modify the /proc filesystem mount options in /etc/fstab:
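A typical /etc/fstab line for that looks like this (a sketch; merge the option into your existing proc entry if you already have one):

proc    /proc    proc    defaults,hidepid=2    0    0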

hidepid=1 – means users may not access any /proc/<pid>/ directories other than the ones owned by them. Important files like cmdline, sched*, status are then protected from being read by other users.

hidepid=2 – means hidepid=1 plus all /proc/<pid>/ directories of other users become completely invisible to the logged-in user. Using this option stops crackers from gathering info about running processes: indications of daemons (services) running with elevated privileges, or other users' running processes (which might have a password or other sensitive data passed as an argument). Revealing such data is frequently used to learn versions of locally / remotely running services that can be exploited.

Below is the htop output for a logged-in user on a hidepid-activated server:

I'm experimenting with different virtual machines these days, because often running VMware together with other virtual machine software (like VirtualBox) can cause crashes or VM instability; hence it is always best to have VMware completely stopped. Unfortunately, VMware keeps running a number of respawning processes (vmnat.exe, vmnetdhcp.exe, vmware-authd.exe, vmware-usbarbitrator64.exe) which cannot be killed from Task Manager with the Process Kill – End Tree option. Thus, to make these services stop, it is necessary to run the following from cmd.exe (started with Run as Administrator):
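The exact commands are not shown in this excerpt; stopping the corresponding Windows services normally does the job, roughly like below (the service display names may vary slightly between VMware Workstation versions):

net stop "VMware NAT Service"
net stop "VMware DHCP Service"
net stop "VMware Authorization Service"
net stop "VMware USB Arbitration Service"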

This, of course, applies in case it is necessary to run websites written with PHP code which is not thread-safe; such sites need Apache's child prefork technology to manage processes, so for them I install apache2-mpm-prefork. Websites written to be thread-safe (ones that do not use forking PHP functions like exec(), fork() etc.) can instead run under a threaded MPM such as apache2-mpm-worker for better webserver performance and speed.

This minimal collection of packages is good only for basic websites; most Joomla, WordPress, Drupal or custom PHP websites that have to be hosted usually require many more PHP functions which are not part of this basic bundle. Hence, as I said before, on almost all new Debian / Ubuntu (.deb package based) Linux servers you need to install the following list of extra PHP deb packages:
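The package list itself is not reproduced in this excerpt; for the php5-era Debian / Ubuntu releases the extra modules typically installed look something like this (an illustrative, not exhaustive, selection):

apt-get install -y php5-cli php5-mysql php5-gd php5-curl php5-mcrypt php5-imagick php5-memcache php5-imap php5-intl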

The cause of this warning error is the way the /etc/init.d/mysql script is written, and in particular the custom, Debian-specific MySQL start-up philosophy.

/etc/init.d/mysql is written in a way that on every restart a check of database consistency is done. There, in the script, the user debian-sys-maint (a user with MySQL administrator (root) privileges) is used to do the quick consistency check. The debian-sys-maint password used on start-up is stored in /etc/mysql/debian.cnf:
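The file typically looks like this (a sketch; the password is auto-generated at MySQL package install time):

# Automatically generated for Debian scripts. DO NOT TOUCH!
[client]
host     = localhost
user     = debian-sys-maint
password = <generated-password>
socket   = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host     = localhost
user     = debian-sys-maint
password = <generated-password>
socket   = /var/run/mysqld/mysqld.sock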

The whole problem is that during the old SQL import, the password set for user debian-sys-maint was different, and once MySQL starts, the init script reads this password and fails to log in to the SQL server.

The warning (error):

ERROR 1045 (28000): Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)

hence appears on every MySQL start (including on every system boot). The error is generally harmless and MySQL seems to work fine with or without it. However, since the consistency check is then not done at start-up, not initiating the start-up check is not a good idea if there are some corrupt tables.

There are two options to get rid of the warning; the better one is to check the password string in /etc/mysql/debian.cnf and set that password for debian-sys-maint with the mysql CLI, e.g.:
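A sketch of the procedure (substitute the password string actually found in /etc/mysql/debian.cnf):

debian:~# grep password /etc/mysql/debian.cnf
debian:~# mysql -u root -p
mysql> SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('password-from-debian.cnf');
mysql> FLUSH PRIVILEGES;
mysql> quit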

I've used K3b just recently to rip an audio CD with music to MP3. K3b has done a great job ripping the tracks; the only problem was that by default K3b rips songs to OGG Vorbis (.ogg) and not MP3. I personally prefer OGG Vorbis as it is a free, freedom-respecting audio format; however, the problem is that .ogg files cannot be read on many audio players, and reading the ripped oggs on Windows could be a problem. I did the rip not for myself but for a Belarusian girlfriend of mine, and she is completely computer illiterate; if I passed her the songs in .ogg, there would be no chance she would succeed in listening to them. I've seen later that K3b has an option to convert directly to MP3 using the Linux lame MP3 library; this however is time-consuming, and I would have to wait another 10 minutes or so for the songs to be ripped again. To shorten the time, I decided to directly convert the existing .ogg files to .mp3 on my Debian Linux. There are probably many ways to convert .ogg to .mp3 on Linux, and likely many GUI frontends (like SoundConverter) to use in a graphical environment.

I, however, am a console freak, so I preferred doing it from the terminal. I did quick research on the net and figured out the good old ffmpeg is capable of converting .oggs to .mp3s. To convert all the .oggs just ripped into a separate directory, I had to run ffmpeg in a tiny bash loop:
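The loop itself is not included in this excerpt; a minimal version of it (using ffmpeg's libmp3lame encoder at 192 kbit/s; the directory path is just an example) would be:

cd ~/ripped-cd
for f in *.ogg; do
    # convert each OGG Vorbis file to an MP3 with the same base name
    ffmpeg -i "$f" -acodec libmp3lame -ab 192k "${f%.ogg}.mp3"
done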

Those who are familiar with older UNIXes, UNIX BSD derivatives and GNU/Linux should certainly remember the times when we hackers used to talk to each other using the talk service.

For those who don't know what the talk command is: it is a simple console / SSH utility to talk to other logged-in users.

Talk is very similar to the write and mesg one-liner messaging utilities available for *nixes; the difference is that it is intended to provide an interactive chat between two logged-in users. People who came to know UNIX or free software only in more recent times most likely don't know talk; however, I still remember how precious this tool was for communication back in the day.

I believe it can still be useful, so I decided to install it on one FreeBSD host.

In order to have the talk service running on BSD, it is necessary to have /usr/libexec/ntalkd installed on the system; this, however, is installed by default with standard BSD OS installs, so there is no need for any external ports install to run it.

talk doesn't have its own init script and is not written to run as its own service; in order to run, it is necessary to enable it via inetd.

Enabling it is done by:

1. Editing /etc/inetd.conf

Inside the conf, the line:

#ntalk dgram udp wait tty:tty /usr/libexec/ntalkd ntalkd

should be uncommented, i.e. become:

ntalk dgram udp wait tty:tty /usr/libexec/ntalkd ntalkd

2. Restart inetd

freebsd# /etc/rc.d/inetd restart
Stopping inetd.
Starting inetd.

talk is meant to be used for peer-to-peer conversations over SSH, so in a way it is the GRANDFATHER 🙂 of IRC, ICQ and Skype.

Here is an example of how talk is used. Let's say there are three logged-in users:
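The rest of the example is cut off in this excerpt; in general, a talk session between two of the logged-in users is initiated like this (the user names and terminals below are purely illustrative):

$ who
admin      ttyv0    Dec  1 10:00
user1      pts/0    Dec  1 10:05
user2      pts/1    Dec  1 10:07

# user1 invites user2, specifying user2's terminal
$ talk user2 pts/1

# user2 accepts the invitation by replying from his side
$ talk user1 pts/0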