The overall picture is like this: we have two servers, Server 1 and Server 2, both running a Unix / BSD / Linux OS with a recent version of the OpenSSH service up and running.
Server 1 has to log in to Server 2 without a password and be able to run only one specific (let's say privileged) command – for the sake of security only this restricted command should be runnable.

Allowing a server (an LDAP or local user account) to authenticate without a password to a remote server and execute an arbitrary set of commands poses a security risk, thus it is a good idea to restrict the OpenSSH connection to the remote server so that only one single command can be run.

This is luckily possible by default in any modern OpenSSH, which allows authenticated remote users to be limited to running just a single sysadmin script or a defined set of scripts on the remote system. This has been a very good alternative to using complicated sudo /etc/sudoers rules files, which anyway can almost always be compromised relatively easily.

Here is the task: let's say a username 'sync' has to be able to run commands from your Linux server Server 1 (with a hostname – remote-server.com) to execute a remote script mysqlbackupper.sh that does a daily Linux MySQL backup via a predefined cron job scheduled task that is triggered daily.

1. DSA or RSA SSH encryption – pros and cons, and which one to choose?

Note these few interesting things on which one to choose:

DSA (Digital Signature Algorithm) is faster at generating keys, whereas in terms of encryption RSA is faster than DSA.

RSA is faster than DSA in verifying digital signatures.

DSA is faster than RSA in generating digital signatures.

In data decryption DSA is faster than RSA.

Due to the fact that encryption is faster in RSA and decryption is faster in DSA, if performance on the client side is targeted the better one to use is RSA, but if the goal is to offload the remote sshd server (let's say it is old hardware or a busy machine where you don't want to put extra load), then DSA is the better choice. id_rsa and id_rsa.pub (the private and public keys) are used to encrypt and decrypt the ssh (tunnel) session between the client and the server, so most people would be curious which one is more secure, RSA or DSA encryption.
Though there are some claims that in terms of security they're more or less the same, RSA is generally preferred, because its keys can be up to 4096 bits, where DSA has to be exactly 1024 bits (in the opinion of ssh-keygen). 2048 bits is ssh-keygen's default length for RSA keys, and I don't see any particular reason to use shorter ones. (The minimum possible is 768 bits; whether that's "acceptable" is situational and not recommended.)

2. Generating the SSH restricted private and public (.pub) user key pair

The key string value added to authorized_keys is to be generated beforehand with the ssh-keygen command, with which we generate a key pair of files:

id_dsa and id_dsa.pub

or

id_rsa and id_rsa.pub

What kind of files will be generated depends on the type of encryption chosen, be it DSA or RSA etc.; the full list of available ones you can read in the manual (man ssh-keygen).

$ ssh Remote-Server-1 -v

$ cd /home/postgresqlback

$ ssh-keygen -t dsa

or

$ ssh-keygen -t rsa

Provide a filename and passphrase; the output files will be the id_dsa / id_dsa.pub or id_rsa / id_rsa.pub key pair, stored in ~/.ssh of the username with which the command was run.

3. Set up /home/username/.ssh/authorized_keys on Server 2

$ ssh Remote-server2 -v

Next you will have to create the authorized_keys file on the remote server, whereby you will be accessing it without a password, and copy the content of the id_rsa.pub / id_dsa.pub key to /home/username/.ssh/authorized_keys (in that case it will be /home/postgresqlback/.ssh/authorized_keys); the postgresqlback user was previously created with adduser on Server 2.
$ chmod 600 /home/postgresqlback/.ssh/authorized_keys

There is no special need to do anything extra to make the SSH command restriction functionality available; you just need the right record in /home/username/.ssh/authorized_keys (or, if it is supposed to be used by the root user, in /root/.ssh/authorized_keys) on the server where you want to place the restriction, following a file syntax like OPTIONS-1 KEY_TYPE / OPTIONS-2 KEY_TYPE / OPTIONS-3 KEY_TYPE.

To authenticate with a key from a remote PC using the sshd service on Server 1, you need to have copied the public key from Server 1 to Server 2 into /home/user/.ssh/authorized_keys (with a favourite text editor, let's say vim), so the file contains the ssh public keys of the users (a list) allowed to passwordless login to the server.
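A minimal sketch of the copy step, assuming the postgresqlback user and key filenames from above (ssh-copy-id does the same in one go):

$ cat ~/.ssh/id_rsa.pub | ssh postgresqlback@Remote-server2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'

or simply:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub postgresqlback@Remote-server2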

authorized_keys's data ordering (as mentioned above) is in the form:

OPTIONS-1 KEY_TYPE PUBLIC_KEY_STRING COMMENT-1

…
OPTIONS-2 KEY_TYPE PUBLIC_KEY_STRING COMMENT-2

…

OPTIONS-3 KEY_TYPE PUBLIC_KEY_STRING COMMENT-3

…

For more clarity, the 4 fields of a public key string are the OPTIONS (1), the KEY_TYPE (2), the PUBLIC_KEY_STRING (3) and the COMMENT (4).

The reason for the options is that we want to restrict port forwarding and agent forwarding (no X will be used at all), and we don't want to have SSH local / remote or dynamic SSH tunneling enabled, for obvious security (improvement) reasons.

In case no interactive terminal will be used by the script, as is the case here, it is also a good idea to put no-pty next to the OPTIONS string.
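A sketch of what such a restricted authorized_keys record could look like (the key string is truncated, and the forced script path is the one used further below in this example):

command="/usr/sbin/postgresqlbackup.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2E... sync@remote-server.com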

If you wonder why ssh-dss and not ssh-dsa, it's actually a naming convention, as the Digital Signature Algorithm (DSA) is published in the Digital Signature Standard (DSS) by NIST in FIPS 186.

4. Testing the restricted SSH command user run set-up

From Server 1 (after, let's say, logging in to it via ssh) issue:

$ ssh -i FILENAME_with_private_auth_key username@Linux-Server2 -v

Here note that the FILENAME_with_private_auth_key (your earlier generated id_rsa / id_dsa) will let anyone who has it at hand log in to Linux-Server2 without any password authentication prompt, so you have to make sure the file permissions are well restricted – readable only by its user owner, or if run as root, by root (chmod 600 might be a very good idea here).

For further executing the script via a simple user ssh to Linux-Server2 you might want to use in your trigger script or cron job (situated on Linux Server 1) also the -q (ssh quiet output) command argument:
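A minimal sketch of such an invocation (BACKUP_DIRS is a hypothetical shell variable holding whatever input you want to feed the remote script over the pipe):

$ echo "$BACKUP_DIRS" | ssh -q -i FILENAME_with_private_auth_key sync@Linux-Server2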

This way the SHELL variables might contain anything, and the remote script postgresqlbackup.sh (on Server 2) will accept it as pipeline input from Server 1.
Be aware that passing string arguments which have spaces or special characters inside might be problematic, so always try to use straightforward SHELL variables such as PATH, TEMP, PWD etc.

If not only predefined strings should be accepted as arguments, but any arbitrary argument should be allowed to be passed to the command, there is a special variable
understood by the sshd daemon:

$SSH_ORIGINAL_COMMAND

The $SSH_ORIGINAL_COMMAND variable used in authorized_keys is a very interesting one, and it really puzzled me the first time I saw it in a Bash shell script, as I couldn't fully grasp its meaning. It turned out to be very simple: it can be used inside /usr/sbin/postgresqlbackup.sh to receive any number of passed arguments, let's say backup location directories ( /usr/local /var/log /usr/bin /bin …), that would be read and processed by the script.
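For instance (a sketch – the directories are just examples), the client could pass its argument string like this, and sshd on Server 2 will run the forced command while exposing the client's command line to it as $SSH_ORIGINAL_COMMAND:

$ ssh -i FILENAME_with_private_auth_key sync@Linux-Server2 "/usr/local /var/log /usr/bin /bin"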

!!! AGAIN BE CAUTIOUS AND BE WARNED that without a properly crafted script, any minor error in it might be fatal; for example, if the script is running with superuser credentials (root) on the remote machine, some local user or a malicious attacker who gets access to the server might decide to run something like rm -rf /* and destroy your server !!!

Instead, a /usr/sbin/run-script.sh might contain something like the sketch below:
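(The original snippet is not preserved here; this is a minimal sketch of such a forced-command wrapper, whitelisting what $SSH_ORIGINAL_COMMAND may contain – the paths and the allowed command are assumptions for illustration.)

#!/bin/sh
# run-script.sh - forced-command wrapper: allow only whitelisted commands
case "$SSH_ORIGINAL_COMMAND" in
    "/usr/sbin/postgresqlbackup.sh")
        exec /usr/sbin/postgresqlbackup.sh
        ;;
    *)
        echo "Access denied: $SSH_ORIGINAL_COMMAND" >&2
        exit 1
        ;;
esac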

For me as a GNU / Linux sysadmin it is intuitive to check on a server the number of established connections, connections in TIME_WAIT state and so on.

I will not explain why this is necessary, as every system administrator out there who has had performance or network issues due to server / application connection overload, or has been a target of Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks, is well aware that a large number of connections in different states such as SYN_ACK / TIME_WAIT or ESTABLISHED could be a very nasty thing and could take a productive application or infrastructure service down for some time, costing from thousands of Euros to even millions for some businesses, as well as some amount of data loss …

To prevent this, sysadmins should therefore always take a look periodically at the connection states on the administered server (and by this I mean not only sysadmins but also the DevOps guys who are deploying micro-services for a customer in the Cloud – yes, I believe Richard Stallman is right here: they're clouding your minds :).

Even though cloud services can provide a very high amount of hardware (CPU / Memory / Storage) resources, for custom applications migrating the application to the Cloud often does not solve its design faults, or even problems on a purely classical system administration level.

1. Get a statistic for FIN_WAIT1, SYN_RECV, LAST_ACK, TIME_WAIT, LISTEN and ESTABLISHED connections on GNU / Linux

On GNU / Linux and other Linux-like UNIXes the way to do it is to grep out the TCP / UDP connection type you need via netstat; a very useful command in that case is:
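A classic one-liner of this kind (a sketch – it counts TCP connections per state by summarizing netstat's State column):

# netstat -tan | awk '{print $6}' | sort | uniq -c | sort -rn

or, for a single state:

# netstat -tan | grep -c ESTABLISHED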

As I'm forced to optimize a couple of Microsoft Windows DNS servers which are really slow to resolve, the logical question for me was how the ESTABLISHED and TIME_WAIT state connections could then be checked on Windows OS; after a quick investigation online I've come up with this:

C:\Users\admin> netstat -nao | find /i "estab" /c
78

C:\Users\admin> netstat -nao | find /i "time_wait" /c
333

If you're used to the Linux watch command, then to do the same on Windows OS (e.g. check the output of the netstat command every second and print it) use:

netstat -an 1 | find "3334"

This command will show stats for connections on TCP port 3334, refreshed every second.

To find out which process on the system sends packets to a remote destination:

netstat -ano 1 | find "Dest_IP_Addr"

The -o parameter outputs the process ID (PID) responsible for the connection; if you need to go further, you can find the respective process name with the tasklist command.
Another handy Windows netstat option is -b, which will show the EXE file making the connection as well as the related DLL libraries it uses for TCP / UDP.

Another useful netstat Windows example is to grep for a port and show all established connections for it with:
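Something along these lines (3334 is just an example port):

C:\Users\admin> netstat -ano | find "ESTABLISHED" | find "3334"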

I'll be interested to hear from sysadmin colleagues about other useful ways to track connections, perhaps with something like the ss tool (a utility to investigate sockets). Also any optimization hints that would cause servers less downtime and improve network / performance throughput are most welcome.
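For completeness, rough ss equivalents of the netstat one-liners above:

# ss -s                        # summary counts per socket state
# ss -tan state established    # list established TCP connections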

I started a new job position – Linux Architect – last November 2018 at Itelligence AG as a contractor (external service). It is a great German company which hires the best IT specialists out there and offers flexible time schedules for employees doing various very cool advanced IT operations and strategic advancement of SAP's Cloud technology and services for SAP SE – SAP S4HANA and HEC (HANA Enterprise Cloud). I was given for work a shiny Lenovo Thinkpad 500 laptop with Windows 10 OS (SAP pre-installed), and I needed to make some SSH tunnels to machines (Hop Station / Jump hosts). After some experimenting with MobaXterm Free (Personal Edition 11.0), the presumable tunnel limitations of the free client, and my laziness to add the multiple ssh tunnels to the different ssh / rdp / vnc etc. servers by hand, I finally decided to just copy all the tunnels from a colleague who runs Putty and again use the good old Putty – the old-school Winblows SSH terminal client – but just for creating the SSH tunnels, and use MobaXterm for the rest, just like in the old times while still employed at Hewlett Packard. For that reason I had to copy the tunnels from my dear German colleague Henry Beck (a good-hearted colleague who works in the field of Storage, dealing with NetApp / filer clusters, QNap etc.).

Till that moment I had no idea how copying saved SSH tunnel definitions is possible. A quick research showed this is done not with the Putty interface itself but instead by dumping the Putty-stored Windows registry records into a file, transferring it to the PC where the tunnels need to be imported, and then loading it into the registry – either by double-clicking the registry file or by using the Windows registry editor command line interface reg; here is how:
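A sketch of the export / import steps (PuTTY keeps its sessions under HKCU\Software\SimonTatham\PuTTY\Sessions):

C:\> reg export HKCU\Software\SimonTatham\PuTTY\Sessions putty-sessions.reg

then, on the destination PC:

C:\> reg import putty-sessions.reg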

These days my home server is experiencing a lot of issues due to electricity power outages; construction dig operations to fix / change the water pipe tubes near my home are in action, and perhaps the power cables got ruptured by the digger machine.
The effect of all this was that my server's network accessibility was affected, and as I didn't have network I couldn't access it remotely anymore. At a certain point the electricity was restored (and the UPS charge could keep the server up), however the server's accessibility did not restore until I asked a relative to restart it, or, in more complicated cases, a tech-acquainted guy had to help – Alexander (Alex), a close friend from school years (check his old site here – alex.pc-freak.net), helps a lot – to restart the machine physically, run quick restoration commands on a root TTY terminal, or generally check whether the default router is reachable.

This kind of Pc-Freak.net downtime became too frequent over the last month (the machine was down about 5 times for 2 to 5 hours, and this was too much); weirdly enough it was not accessible from the internet even after the electricity network was restored, and the only solution was a physical server restart (from the power button).

To decrease the number of cases in which relatives or friends have to physically go to the server and restart it each time after a network or electricity outage, I wrote a small script that checks accessibility towards the default defined network gateway for my server with a few ICMP packets sent with the good old PING command,
and triggers a network restart and a system reboot (in case the network restart fails) in a row.

1. Create the reboot-if-nwork-is-down.sh script under /usr/sbin or another dir

Here is the script itself:

#!/bin/sh
# Script checks with ping 5 ICMP pings 10 times to DEF GW and if it fails
# triggers networking restart /etc/init.d/networking restart
# Then does another 5 x 10 PINGs and if the ping command returns errors,
# Reboots machine
# This script is useful if you run a home router with Linux and you have
# electricity outages and the machine doesn't come up unless rebooted
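(The script body itself is not reproduced here; below is a minimal sketch matching the description in the header and in the text that follows – the gateway detection, log file and counter paths are assumptions consistent with that description.)

GW=$(ip route | awk '/^default/ {print $3; exit}')
LOG=/var/log/reboot.log
COUNT_F=/tmp/rebooted.txt

gw_reachable() {
    # up to 10 rounds of 5 ICMP pings each towards the default gateway
    for i in 1 2 3 4 5 6 7 8 9 10; do
        ping -q -c 5 "$GW" >/dev/null 2>&1 && return 0
    done
    return 1
}

if gw_reachable; then
    echo "$(date): gateway $GW reachable, nothing to do" >> "$LOG"
    exit 0
fi

echo "$(date): gateway $GW unreachable, restarting networking" >> "$LOG"
/etc/init.d/networking restart >> "$LOG" 2>&1

if gw_reachable; then
    echo "$(date): connectivity restored after networking restart" >> "$LOG"
    exit 0
fi

# reboot counter keeps the box from rebooting endlessly (explained below)
COUNT=$(cat "$COUNT_F" 2>/dev/null || echo 0)
if [ "$COUNT" -ge 5 ]; then
    echo "$(date): already rebooted $COUNT times, sleeping 30 min" >> "$LOG"
    sleep 1800
    echo 0 > "$COUNT_F"
    exit 0
fi
echo $((COUNT + 1)) > "$COUNT_F"
echo "$(date): rebooting" >> "$LOG"
/sbin/reboot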

As you see in the script, successful runs as well as failures are logged on the server in /var/log/reboot.log with a respective timestamp.
Also a counter up to 5 is kept in /tmp/rebooted.txt, incremented on each and every script-triggered reboot; if the count of 5 is reached,
a sleep of 30 minutes is executed and the counter is restarted.
The counter check against 5 guarantees the server will not get restarted like crazy all the time if access to the gateway stays down for a long period.

2. Create a cron job to run reboot-if-nwork-is-down.sh every 15 minutes or so

I've set the script to re-run as a scheduled (root user) cron job every 15 minutes with the following job:
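(Assuming the script was placed under /usr/sbin as suggested above:)

*/15 * * * * /usr/sbin/reboot-if-nwork-is-down.sh >/dev/null 2>&1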

To add the script to the existing cron rules without rewriting my old cron jobs and without being tempted to use crontab -u root -e (e.g. to do the cron job addition in non-interactive mode with a single bash one-liner), I had to run the following command:
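A sketch of that non-interactive append (it dumps the current root crontab, adds the new line, and loads the result back):

# (crontab -u root -l 2>/dev/null; echo '*/15 * * * * /usr/sbin/reboot-if-nwork-is-down.sh >/dev/null 2>&1') | crontab -u root -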

I know restarting a server to restore accessibility is a stupid practice, but for home use or small client servers on unguaranteed networks with cheap Uninterruptible Power Supply (UPS) devices it is useful.

Summary

Time will show how efficient such a "self-healing" script practice is.
Even so, I'm pretty sure that even in corporate businesses and large Public / Private / Hybrid Clouds, where access to remotely mounted NFS / XFS / ZFS filesystems fails, a modification of the script could save you a lot of nerves and troubles and unhappy customers / managers screaming at you on the phone 🙂

I'll be interested to hear from others who have better ideas on how to restore ( resurrect ) access to an inaccessible Linux server after an outage.

Today I had a report of a server whose Load Average stays at the high level of 86. The machine runs on bare-metal, rock-solid hardware, and even with such high loads the kernel runs fine, but due to the I/O overhead the SAN reads from a remote NetApp storage device started to be sluggish and hence needed to be reviewed, so I jumped in via the hop station (jump host) into the server.

1. Short investigation on the root cause for the high server load

After a short investigation, I found an rsync job set by someone as a cron job to be routinely run every 30 minutes; the old scheduled rsync seemed to run multiple times on the server (about 50 processes of the same rsync file-system synchronization were running), and as expected the storage was saddled with multiple Input / Output requests.

A process list showed the following high number of running mirrored rsyncs:

server:~# ps axuwwf | grep -i rsync | wc -l
80

2. The Fix – Run rsync via cron only in case it is not already running in the background

In order to fix it, I had to kill all currently running rsyncs (here luckily only instances of the same single rsync job were running, but generally I was cautious to check that no other rsync jobs were running – otherwise I would have mistakenly killed some other ongoing rsync job …)

Then I set the following new cron job – a one-liner quick shell script that does the job by creating a pid file before rsync and deleting it after rsync completes:
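A sketch of such a cron entry (the pid-file path and the rsync source / destination are assumptions for illustration):

*/30 * * * * [ ! -f /var/run/rsync-backup.pid ] && { touch /var/run/rsync-backup.pid; rsync -a /data/ backuphost:/data/; rm -f /var/run/rsync-backup.pid; }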

Just in case you're wondering:
a trap should be used to make sure the lock file is removed when the script exits for any reason.
This way the lock file will be removed even if the script exits before its end.
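Moved into a small script, that could look something like this (again a sketch, with the same assumed paths):

#!/bin/sh
PIDFILE=/var/run/rsync-backup.pid
[ -f "$PIDFILE" ] && exit 0            # another run is still in progress
echo $$ > "$PIDFILE"
trap 'rm -f "$PIDFILE"' EXIT INT TERM  # remove the lock however the script exits
rsync -a /data/ backuphost:/data/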

Ansible can be used to configure or deploy GNU / Linux tools and services such as Apache / Squid / Nginx / MySQL / PostgreSQL etc. It is pretty much like the Puppet (server / services lifecycle management) tool, except it's less complicated to start with, which often makes it the chosen tool for mass deployment (DevOps) automation.

Ansible is used for multi-node deployments and remote task execution on groups of servers; the big pro of it is that it does all its stuff over simple SSH on the remote nodes (servers) and does not require extra services or listening daemons as with Puppet. Combined with Docker containerization, it is used very much for deploying inside Cloud environments such as Amazon AWS / Google Cloud Platform / SAP HANA / OpenStack etc.

0. Installing ansible on Debian / Ubuntu Linux

Ansible is a python script and because of that depends heavily on python, so to make it run you will need to have a working python installed on the local and remote servers.

sshpass needs to be installed only if you plan to use ssh password prompt authentication with ansible.
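On Debian / Ubuntu that would be something like:

# apt-get install --yes ansible
# apt-get install --yes sshpass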

Ansible is also installable via the python pip tool; if you need to install a specific version of ansible you have to use pip instead, as the distro package is whatever version the distribution ships.
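For example (the version number here is just illustrative):

# pip install 'ansible==2.7.10'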

Ansible has a lot of pros and cons, and there are multiple articles already written by people for and against it in favour of Chef or Puppet, as I found when I recently started learning Ansible. The most important thing to know about Ansible is that though many things can be done directly using a simple command line, the tool is intended for remote installation of server services using specially prepared .yaml format configuration files. The power of Ansible comes from the use of Ansible Playbooks, which are yaml scripts that tell ansible how to do its activities step by step on the remote server. In this article, I'm giving a quick cheat sheet to start quickly with it.

1. Remote command execution with Ansible

The first thing to do to start with it is to add the desired hostnames ansible will operate with; this can be done either globally (if you have a number of remote nodes to deploy stuff to periodically) by using /etc/ansible/hosts, or with a custom hosts file for each and every custom ansible script developed.

a. Ansible main config files

A common ansible /etc/ansible/hosts definition looks something like this:
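(A sketch of such an inventory – the hostnames are placeholders:)

[webservers]
lin-server1.example.com
lin-server2.example.com

[dbservers]
db-server1.example.com ansible_port=2222

With the inventory in place, a quick ad-hoc connectivity test and remote command run would be:

$ ansible all -m ping
$ ansible webservers -a "uptime"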

There is a lot to say about playbooks; just to give the brief, they have their own language-like elements – templates, tasks, handlers – and a playbook could have one or multiple plays inside (for instance, instructions for the deployment of one or more services).
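A minimal playbook sketch (the package name and host group here are assumptions for illustration):

---
- hosts: webservers
  become: yes
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present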

The downside of playbooks is that they're rather hard to write from scratch and edit, because yaml syntax is much stricter than a normal old-school sysadmin configuration file.
I've been stuck with problems modifying and writing .yaml files, and I should say the community in #ansible on irc.freenode.net was very helpful in debugging the obscure errors.

yamllint (the YAML linter tool) comes in handy at times when facing yaml syntax errors; to use it, install via apt:

# apt-get install --yes yamllint
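Then simply point it at the problematic file:

$ yamllint my-playbook.yml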

a) Running ansible in "dry mode" – just show what ansible would do, without changing anything
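That is what the --check flag is for:

$ ansible-playbook --check my-playbook.yml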

The available packages you can use as templates for your purpose are not as many as with Puppet, as Ansible is younger and not corporate-backed like Puppet; anyhow, there are a lot of them and they cover most basic sysadmin needs for mass deployments. Besides, there are plenty of other unofficial yaml ansible scripts in various GitHub repos.

Those who remember the times of IRC chatting long nights and the need to be a c00l guy and enter a favorite IRC server through a really bizarre hostname should certainly remember the usefulness of reverse SSH tunnels – to appear in an IRC /whois as if connecting from a remote host (masking from the other IRC guys where you physically are).

The idea of reverse SSH is to be able to connect via SSH (or other protocols) to IPs that are situated behind a NAT server/s.
Creating an SSH reverse tunnel is an easy task and takes up to 2 simple SSH commands.

To better explain how SSH tunnel is achieved, here is a scenario:

A. Linux host behind NAT, IP: 192.168.10.70 (destination host)
B. (Source host) machine with external public Internet IP 83.228.93.76, through which the SSH tunnel will be established to 192.168.10.70.

If you have generated a .pem formatted SSL certificate, or you have multiple .pem SSL certificates and you're not sure which .pem file was generated for which domain / subdomain, it is useful to display the content of the SSL certificate .PEM file with the openssl command.

Viewing a certificate's content is also very useful if you have multiple websites hosted on a server and you want to check which of the SSLs assigned in the VirtualHosts has expired (for example if you have certificates that expire within a short-term period, like 365 days).
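Assuming the certificate file is called cert.pem, the command is:

$ openssl x509 -in cert.pem -text -noout

Among the output you will see sections like the below: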

X509v3 Subject Alternative Name:
    DNS:mail.pc-freak.net
X509v3 Certificate Policies:
    Policy: 2.23.140.1.2.1
    Policy: 1.3.6.1.4.1.44947.1.1.1
      CPS: http://cps.letsencrypt.org
      User Notice:
        Explicit Text: This Certificate may only be relied upon by Relying Parties and only in accordance with the Certificate Policy found at https://letsencrypt.org/repository/

If you're a sysadmin / developer whose boss requires a migration of stored data, database structures or web objects to Amazon Web Services / Google Cloud, or you happen to be a DevOps engineer, you will certainly need to have installed, as a minimum, the Amazon AWS and Google Cloud clients to do daily routines and script stuff for managing cloud resources without having to use the Web GUI interface.

Here is how to install the aws, gcloud, oc, az and cf next to your kubernetes client (kubectl) on your Linux Desktop.
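A rough sketch of the install steps (package names and channels vary per distro, and oc and cf are typically fetched as pre-built binaries from their respective release pages):

$ sudo pip install awscli
$ sudo snap install google-cloud-sdk --classic
$ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash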

Once you have the oc client, you can play with it to install software and create services on the Red Hat cloud.

Closure

These are just a few of the numerous tools available, and I definitely understand there is much more to be said on the topic.
If you can remember other tools or interesting cloud starting-up tips about stuff to do on a freshly installed Linux PC to make life easier as a Cloud / PaaS / SaaS / DevOps engineer, please drop a comment.

Like most system administrators, and perhaps most people :), I dislike the Java Virtual Machine. However, because of its multi-platform support it is largely adopted, and so many things are already written in Java that, even though I hate it, I need it to run things every now and then on my personal desktop machine with Debian Linux 9 (Stretch).

From a programmer's point of view Java applications are scalable and flexible; from the point of view of a person who has had to support computers and servers with Java, it sucks.
To have a running Java Virtual Machine and run Java applications on your Linux PC you can use the JRE (Java Runtime Environment) and the JDK (Java Development Kit), which is a set of Java tools and compilers to translate Java code to .JAR, .WAR and the rest of the Java machine-runnable formats.

OpenJDK (Open JDK) is a free (open source) implementation of Oracle Sun Microsystems' Java SE 7, mostly licensed under GPLv2 (but with some linking to a Java class library that is not truly free). OpenJDK includes as components the backend Virtual Machine (HotSpot), the Java Class Library, javac (the Java compiler) and IcedTea (which is Red Hat's free implementation of the Java Web Start plugin).

Install OpenJDK 8 JDK and JRE

OpenJDK is installable by default on Debian and most other distros; to install it on Debian:
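(On Debian 9 Stretch the version-8 packages would be:)

# apt-get install --yes openjdk-8-jdk openjdk-8-jre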

I have used OpenJDK, but as there are issues with some Java programs because of Java compatibility issues, nowadays most of the time I choose to install the official latest Oracle Java 8. The reason is I often have to install on servers application servers such as:

Tomcat

JBoss

WildFly

Jetty

Glassfish

WebLogic

Cassandra

Jenkins

Install Latest Official Oracle Java 8

1. Download Oracle Java installable binary

To download the latest official release, check out Oracle's download page and copy the link to the latest Java archive, selecting the appropriate architecture x64 / 32-bit / ARM etc.; as of the time of writing this article, the latest stable Java version is JDK-8U181.

If you have used Java OpenJDK beforehand and installed Oracle Java according to the instructions above, you might end up with multiple Javas installed; by default, however, Debian Linux will have symlinks to java, javac (the Java compiler) and javaws (Java Web Start).
Thus just executing java will return the version of whichever alternative is currently active.
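To see which Java the symlink currently points to, and to switch between the installed ones, Debian's alternatives mechanism can be used:

$ java -version
# update-alternatives --config java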