The overall picture is like this: we have 2 servers running a Unix / BSD / Linux OS, Server 1 and Server 2, and both have a recent version of the OpenSSH service up and running.
Server 1 has to log in without a password to Server 2 and be able to run only one specific (let's say privileged) command; for the sake of security, only this restricted command should be runnable.

Allowing a server (LDAP or local user account) to authenticate without a password to a remote server and execute an arbitrary set of commands poses a security risk, thus it is a good idea to restrict the OpenSSH connection to the remote server so that only one single command can be run.

This is luckily possible by default in any modern OpenSSH, which allows authenticated remote users to be limited to running just a single sysadmin script (or a defined set of scripts) on the remote system. This has been a very good alternative to using complicated sudo /etc/sudoers rules files, which could anyway almost always be compromised relatively easily.

Here is the task: let's say a username 'sync' has to be able to run commands on your Linux server Server 1 (with a hostname – remote-server.com) to execute a remote script mysqlbackupper.sh that does a daily Linux MySQL backup via a predefined cronjob scheduled task that is triggered daily.

1. DSA or RSA SSH encryption pros and cons which one to choose?

Note these few interesting things on which one to choose:

DSA (Digital Signature Algorithm) is faster at key generation, whereas in terms of encryption RSA is faster than DSA

RSA is faster than DSA in verifying digital signature

DSA is faster than RSA generating digital signature

In Data Decryption DSA is faster than RSA

Due to the fact that encryption is faster in RSA and decryption faster in DSA, if performance on the client side is targeted the better one to use will be RSA; but if the aim is to offload the remote sshd server (let's say it is old hardware or a busy machine where you don't want to put extra load on it), then DSA is the better choice. id_rsa and id_rsa.pub (the private and public keys) are used to encrypt and decrypt the ssh (tunnel) session between the client and the server, so most people would be curious which one is more secure, RSA or DSA encryption.
Though there are some claims that in terms of security they're more or less the same, RSA is generally preferred, because the key can be up to 4096 bits, where DSA has to be exactly 1024 bits (in the opinion of ssh-keygen). 2048 bits is ssh-keygen's default length for RSA keys, and I don't see any particular reason to use shorter ones. (The minimum possible is 768 bits; whether that's "acceptable" is situational and not recommended.)

2. Generating the SSH restricted private and public (.pub) user key pair

The key string value added to authorized_keys is to be generated beforehand with the ssh-keygen command, with which we generate a key pair of files:

id_dsa and id_dsa.pub

or

id_rsa and id_rsa.pub

What kind of files will be generated depends on the type of encryption chosen, be it DSA or RSA etc.; the full list of available ones you can read in the manual (man ssh-keygen)

$ ssh Remote-Server-1 -v

$ cd /home/postgresqlback

$ ssh-keygen -t dsa

or

$ ssh-keygen -t rsa

Provide a filename and passphrase; the output files will be the id_dsa / id_dsa.pub or id_rsa / id_rsa.pub key pair, stored in the ~/.ssh of the username with which the command was run.

3. Set up /home/username/.ssh/authorized_keys on Server 2

$ ssh Remote-server2 -v

Next you will have to create the authorized_keys file on the remote server which you will be accessing without a password, and copy the content of id_rsa.pub / id_dsa.pub into /home/username/.ssh/authorized_keys (in this case it will be /home/postgresqlback/.ssh/authorized_keys); the postgresqlback user was previously created with adduser on Server 2

$ chmod 600 /home/postgresqlback/.ssh/authorized_keys

There is no need to do anything too special to make the SSH command restriction functionality available; just place the right record in /home/username/.ssh/authorized_keys (or, if it is supposed to be run as the root user, in root's ~/.ssh/authorized_keys) on the server where you want to place the restriction, following a file syntax like OPTIONS-1 KEY_TYPE / OPTIONS-2 KEY_TYPE / OPTIONS-3 KEY_TYPE.

To authenticate with a key from a remote PC using the sshd service on Server 2, you need to have copied the key into /home/user/.ssh/authorized_keys (with a favourite text editor, let's say vim) from Server 1 to Server 2, so that the file contains the ssh public keys of the users (a list) allowed to log in to the server without a password.

The authorized_keys data ordering (as mentioned above) is in the form:

OPTIONS-1 KEY_TYPE PUBLIC_KEY_STRING COMMENT-1

…
OPTIONS-2 KEY_TYPE PUBLIC_KEY_STRING COMMENT-2

…

OPTIONS-3 KEY_TYPE PUBLIC_KEY_STRING COMMENT-3

…

For more clarity, the 4 fields of a public key string are the OPTIONS, the KEY_TYPE, the PUBLIC_KEY_STRING and the COMMENT.

The reason for the options is that we want to restrict port forwarding and agent forwarding (no X will be used at all), and we don't want to have SSH local / remote or dynamic SSH tunneling enabled, for obvious security reasons.

In case no interactive terminal will be used by the script, as is the case here, it is also a good idea to put no-pty next to the OPTIONS string.

If you wonder why it is ssh-dss and not ssh-dsa: it's actually a naming convention, as the Digital Signature Algorithm (DSA) is published in the Digital Signature Standard (DSS) by NIST in FIPS 186.
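Putting the above options together, a restricted authorized_keys record would look something like this (the public key string is truncated here, and the script path is the example one from this post; adapt both to your setup):

```
command="/usr/sbin/mysqlbackupper.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...== sync@remote-server.com
```

The command= option is what enforces the single-command restriction: whatever the client asks to run, sshd executes only the script given in command=.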

4. Testing the restricted SSH command user run set-up

From Server-1 (after lets say logging in via ssh to it) issue:

$ ssh -i FILENAME_with_private_auth_key username@Linux-Server2 -v

Here note that the FILENAME_with_private_auth_key (your earlier generated id_rsa / id_dsa) will let
anyone who has it at hand log in to Linux-Server2 without any password authentication prompt, so you
have to make sure this file's permissions are well restricted: readable only by its user owner, or if
run as root, by root (chmod 600) would be a very good idea here.

For further executing the script via a simple user ssh to Linux-Server2, you might want to use in your trigger script or cronjob (situated on Linux Server 1) also the -q (ssh quiet output) cmd argument:
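For instance, a crontab entry on Server 1 might look like this (hypothetical schedule; the key path matches the example user from this post):

```
# run the restricted remote backup every day at 03:30
30 03 * * * ssh -q -i /home/postgresqlback/.ssh/id_rsa postgresqlback@Linux-Server2
```

Because of the command= restriction in authorized_keys, whatever command the client requests is not executed directly but only exposed via SSH_ORIGINAL_COMMAND, so the entry can stay this simple.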

This will make the remote script understand SHELL variables, which might contain anything that the remote script postgresqlbackup.sh (on Server 2) will accept as pipeline input from Server 1.
Be aware that passing string arguments which have spaces or special characters inside might be problematic, so always try to use straightforward SHELL variables such as PATH, TEMP, PWD etc.

If not only predefined strings should be accepted as arguments, but any arbitrary argument should be allowed to be passed to the command, there is a special variable
understood by the sshd daemon:

$SSH_ORIGINAL_COMMAND

The $SSH_ORIGINAL_COMMAND variable used in authorized_keys is a very interesting one, and it really puzzled me the first time I saw it in a Bash shell script, as I couldn't fully grasp the meaning. But it turned out to be very simple: it can be used inside /usr/sbin/postgresqlbackup.sh to receive any number of passed arguments, let's say backup location directories ( /usr/local /var/log /usr/bin /bin … ), which would then be read and processed by the script.

!!! AGAIN BE CAUTIOUS AND BE WARNED that without a properly crafted script, any minor error in it might be fatal; for example, if the script is running with superuser credentials (root) on the remote machine, some local user or a malicious attacker that gets access to the server might decide to run something like the above's rm -rf /*, which might destroy your server !!!

Instead, your /usr/sbin/run-script.sh might contain something like:
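A minimal sketch of such a wrapper, assuming a fixed whitelist of backup target directories (this is my own illustration, not the exact production script):

```shell
#!/bin/sh
# run-script.sh - invoked via command="/usr/sbin/run-script.sh" in authorized_keys.
# sshd places the client's requested command line in SSH_ORIGINAL_COMMAND;
# only a fixed whitelist of backup directories is accepted as arguments.

validate_args() {
    # echo each accepted directory; fail on the first unknown argument
    for arg in $1; do
        case "$arg" in
            /usr/local|/var/log|/usr/bin|/bin)
                echo "backing up $arg" ;;   # the real backup action would go here
            *)
                echo "rejected argument: $arg" >&2
                return 1 ;;
        esac
    done
    return 0
}

validate_args "${SSH_ORIGINAL_COMMAND:-}"
```

Anything not in the whitelist (including shell metacharacters smuggled into the argument list) is rejected before any action is taken.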

If you're sysadmining a large number of shared hosted websites which use the Nginx webserver to interpret PHP scripts and serve HTML, Javascript, CSS … whatever data, you realize the high amount of risk that comes with a possible successful security breach / hack into a server by a malicious cracker. Compromising the Nginx webserver would automatically mean that not only all users' web data gets compromised, but the attacker would get immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not so common to have multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course the best thing to do is to isolate each and every website into a separate virtual container; however, as this is a lot of work and small and mid-sized companies refuse to spend money on mostly anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx either as a load balancer, reverse proxy server etc., even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by a script kiddie freak that just got some 0day exploit on the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.

For those who hear about a Linux jail for the first time: a chroot() jail is a way to isolate a process / processes and its forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (the administrator user), because the superuser could break out of (escape) the jail pretty easily.

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and it was first implemented into the BSD Operating System ver. 4.2 by Bill Joy (a notorious computer scientist and co-founder of Sun Microsystems). Its original use was for the creation of the so-called HoneyPot – a computer security mechanism set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems, which appears to be a completely legitimate service or part of a website, whose only goal is to track, isolate, and monitor intruders – very similar to police sting operations (baiting) of a suspect. It is pretty much like a bait set to collect the fish (which in this case is the possible cracker).

BSD jails nowadays became very popular via the iPhone environment, where applications are deployed inside a custom-created chroot jail; the principle is exactly the same as in Linux.

But anyway, enough talk; let's create a new jail and deploy a set of system binaries for our Nginx installation. Here are the things you will need:

1. You need to have set up a directory where a copy of /bin/ls, /bin/bash, /bin/cat … the /usr/bin binaries, /lib and the other base Linux system binaries will reside.

The NGINX webserver is compiled to depend on various libraries from the Linux system root, e.g. /lib/* and /lib64/*; therefore, in order for the server to work inside the chroot-ed environment, you need to transfer these libraries to the jail folder /usr/local/chroot/nginx

If you are curious to find out which libraries exactly the nginx binary depends on, run:
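ldd lists the shared objects a dynamically linked binary needs. Here is a small sketch that also copies each resolved library into the jail; the nginx path is the usual source-install default and the extract_libs helper is my own naming:

```shell
# extract_libs reads ldd output on stdin and prints the resolved library paths,
# i.e. only lines of the form "libfoo.so => /path/to/libfoo.so (0x...)"
extract_libs() {
    awk '/=>/ && $3 ~ /^\// {print $3}'
}

BINARY=${BINARY:-/usr/local/nginx/sbin/nginx}
JAIL=${JAIL:-/usr/local/chroot/nginx}

if [ -x "$BINARY" ]; then
    # copy each dependency into the jail, preserving its /lib/... hierarchy
    for lib in $(ldd "$BINARY" | extract_libs); do
        cp -v --parents "$lib" "$JAIL/"
    done
fi
```

cp --parents recreates the original directory structure under the jail root, which is exactly what the dynamic loader expects to find inside the chroot.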

However, if you're in a hurry (not a recommended practice) and you don't care about maximum security anyway (you don't worry that the jail could be exploited via one of the many lib files not used by nginx, and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:

server: ~# cp -rpf /lib/ /usr/local/chroot/nginx/usr/local/nginx/lib

NOTE! Once again, copying the whole /lib directory is a very bad practice, but for time-pressing activities sometimes you can do it …

This could take really long if the websites are multiple gigabytes with millions of files, but anyway the nice command should reduce the load on the server a little bit. It is best practice to set some kind of temporary server maintenance page to show on the websites' index, in order to prevent the accessing clients from experiencing interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).

8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail

a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script in /etc/init.d/nginx (if you have such installed) or directly kill the running webserver with:

server:~# killall -9 nginx

b) Test that the chrooted nginx installation is correct and ready to run inside the chroot environment
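Assuming the jail layout used above, the chrooted binary and its configuration can be sanity-checked through chroot itself (the paths are the ones used in this example):

```
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
```

If the configuration test passes, start the server the same way, just without the -t flag.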

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/nginx/etc/conf/nginx.conf (I'm saying that in case you by mistake forget and try to edit the old config, which is usually under /usr/local/nginx/conf/nginx.conf).

I wanted to test a MySQL Cluster following the MySQL Cluster Install Guide; for that purpose, I installed 2 instances of CentOS 6.5 inside VirtualBox, and I wanted to make the 2 Linux hosts reachable inside a local LAN network. I consulted some colleagues who advised me to configure the two Linux hosts to use Bridged Adapter VirtualBox networking (network configuration in VirtualBox is done on a per-Virtual-Machine basis from):

Devices -> Network Settings

(Attached to: Bridged Adapter)

Note!: by default the Cable Connected tick is not selected, so when imposing changes on Network the tick should be set.
After specifying Attached to: Bridged Adapter, to make CentOS Linux refresh its network settings run in gnome-terminal:
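On CentOS that would typically be (run as root; eth0 being the usual first interface name):

```
[root@centos ~]# /etc/init.d/network restart
[root@centos ~]# dhclient eth0
```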

To test whether there is a connection between the 2 VM hosts, I tried pinging 192.168.10.2 (from 192.168.10.1) and tested with telnet whether I can remotely access SSH (protocol), from CentOS VM1 to CentOS VM2 and vice versa, i.e.:

to choose in the Attached to drop-down menu. According to the Internal Networking VirtualBox instructions, to put two Virtual Machine hosts inside an internal network they should both be set to Internal Network with an identical name.
P. S. It is explicitly stated that using Internal Network will enable access between the Guest Virtual Machine OSes, but the hosts will not have access to the Internet (which in my case didn't really matter, as I needed the two Linux VMs just as a testbed).

I tried this option but it didn't work for me for some reason; after some time of research online on how to create a local LAN network between 2 Virtual Machines, I luckily decided to test all available VirtualBox networking choices and noticed the Host-only Adapter.

Selecting Host-only Adapter and using the terminal to re-fetch an IP address over DHCP:

On CentOS VM1

dhclient eth0

On CentOS VM2

dhclient eth1

assigned me two adjoining IPs – (192.168.56.101 and 192.168.56.102).

Connection between the 2 IPs 192.168.56.101 and 192.168.56.102 over the TCP, UDP and ICMP protocols works; now all that is left is to install MySQL Cluster on both nodes.

If you're configuring a new webserver or adding a new VirtualHost to an existing Apache configuration, you will need to restart Apache (with or without the graceful option). Once Apache is restarted, to assure Apache is continuously running on the server (depending on the Linux distribution), issue:
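Depending on the distribution, the restart and the listen check would be something like the following (the netstat output line is illustrative; on RedHat-style systems the init script is /etc/init.d/httpd and supports the graceful option):

```
debian:~# /etc/init.d/apache2 restart
debian:~# netstat -tnlp | grep -iE 'apache|http'
tcp   0   0 0.0.0.0:80   0.0.0.0:*   LISTEN   1234/apache2
```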

The meaning of 0.0.0.0 is that Apache is configured to listen on any VirtualHost IPs and interfaces. This output is usually returned when, in the Apache config httpd.conf / apache2.conf, the webserver is configured with the directive:

Listen *:80

If in the netstat output some IP pops up, for example "192.168.1.1:http", this means that only connections to the "192.168.1.1" IP address will be accepted by Apache.

Another way to look for Apache in netstat (in case Apache is configured to listen on some non-standard port number) is with:

netstat -l |grep -E 'http|www'

tcp 0 0 *:www *:* LISTEN

As sometimes it might be possible that Apache is listening but its processes are in a defunct (zombie) state, it is always a good idea also to check whether pages served by Apache are opening in a browser (check it with elinks, lynx or curl).

To get more thorough information on Apache's listened ports, protocol, and the user with which Apache is running, no matter the Linux distribution, use the lsof command:
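For example (sample output shortened; the PID, user and file descriptors will differ on your system):

```
server:~# lsof -i TCP:80 | grep -i listen
apache2   1234   root   4u   IPv4   5678   0t0   TCP *:www (LISTEN)
```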

Now you probably wonder why there are two lines in /etc/inetd.conf for

in.talkd and in.ntalkd

The in.talkd daemon's aim is to deliver talk sessions between logged-in users on one Linux host, i.e. a few logged-in users willing to talk to each other locally,
whereas in.ntalkd is designed to serve interactive user talks between the host where in.ntalkd is installed and remote systems which have the talk client program installed. Of course, in order for remote talks to work properly, the firewall (if such exists) has to be modified to allow in.ntalkd chats. I've never used in.ntalkd, and on most machines having in.ntalkd hanging around from inetd could be a potential security hole; so, for people not planning to initiate remote TALKs between Unix / Linux / BSD hosts on a network, it is a good practice for the ntalkd line seen above in inetd.conf to be commented out.

Onwards, to use talk between two users the syntax is the same as on other BSDs; as a matter of fact, TALK – a console / terminal interactive chat – was originally developed for the 4.2BSD UNIX release; the Linux code is a port of this BSD talk and not a rewrite from scratch.

Using talk between two logged-in users on pts/1 (let's say user test) and tty1 (user logged in as root) is done with:

noah:~$ tty
/dev/pts/1
noah:~$ talk root@localhost tty1

On tty1 the user has to have talk session requests enabled; by default this behaviour in Debian, and probably other Debian-based Linuxes (Ubuntu) for instance, is configured to have talks disabled, i.e.:

root@noah:~# mesg
is n

Enabling it on root console is done with:

root@noah:~# mesg y

Once enabled, root will be able to see the TALK service requests on tty1; otherwise, the user gets nothing. With messaging enabled, the root user will get on his tty:

I. What is the meaning of nf_conntrack: table full dropping packet error message

In short, this message is received because the nf_conntrack kernel maximum assigned value gets reached.
The common reason for that is heavy traffic passing through the server, or very often a DoS or DDoS (Distributed Denial of Service) attack. Sometimes encountering the error is a result of bad server planning (incorrect data about the expected traffic load by a company / companies) or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65536

– Alternative way to check the current kernel values for nf_conntrack is through:
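The sysctl key mirrors the /proc entry, and on recent kernels the number of currently tracked connections can be read from /proc as well:

```
linux:~# sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 65536
linux:~# wc -l /proc/net/nf_conntrack
```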

The shown connections are assigned dynamically on each new successful TCP / IP NAT-ted connection. Btw, on systems that work normally, without the dmesg log being flooded with the message, the output of lsmod is:

II. Remove nf_conntrack support completely if it is not really necessary

It is a good practice to limit, or try to omit completely, the use of any iptables NAT rules, to prevent yourself from ending up flooding your kernel log with the messages and respectively to stop your system from dropping connections.

Another option is to completely remove any modules related to nf_conntrack, iptables_nat and nf_nat.
To remove nf_conntrack support from the Linux kernel, if for instance the system is not used for Network Address Translation, use:
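A plausible module removal sequence is the following (exact module names vary between kernel versions; check lsmod first and unload in dependency order):

```
linux:~# rmmod iptable_nat
linux:~# rmmod nf_nat
linux:~# rmmod nf_conntrack_ipv4
linux:~# rmmod nf_conntrack
```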

Once the modules are removed, be sure not to use iptables -t nat .. rules. Even an attempt to list whether there are any NAT-related rules with iptables -t nat -L -n will force the kernel to load the nf_conntrack modules again.

Btw, the nf_conntrack: table full, dropping packet. message is observable across all GNU / Linux distributions, so this is not some kind of local distribution bug or Linux kernel (distro) customization.

III. Fixing the nf_conntrack … dropping packets error

– One temporary fix, if you need to keep your iptables NAT rules, is:

linux:~# sysctl -w net.netfilter.nf_conntrack_max=131072

I say temporary, because raising nf_conntrack_max doesn't guarantee things will go smoothly from now on.
However, on many not-so-heavily traffic-loaded servers, just raising net.netfilter.nf_conntrack_max=131072 to a high enough value will be enough to resolve the hassle.

– Increasing the size of nf_conntrack hash-table

The hash table hashsize value, which stores lists of conntrack entries, should be increased proportionally whenever net.netfilter.nf_conntrack_max is raised.

linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

The rule to calculate the right value to set is: hashsize = nf_conntrack_max / 4
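To double-check the arithmetic with the values used in this post:

```shell
# hashsize = nf_conntrack_max / 4, per the rule above
nf_conntrack_max=131072
hashsize=$((nf_conntrack_max / 4))
echo "$hashsize"    # 32768 - the value echoed into /sys above
```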

– To permanently store the changes made, a) put into /etc/sysctl.conf:
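For example, with the values used above:

```
net.netfilter.nf_conntrack_max = 131072
```

and b) since hashsize is a module parameter rather than a sysctl, re-apply the echo into /sys/module/nf_conntrack/parameters/hashsize from /etc/rc.local (or pass it as a module option), so it survives reboots as well.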

Note: Be careful with this variable; according to my experience, raising it to a too high value (especially on XEN-patched kernels) could freeze the system.
Also, raising the value to a too high number can freeze a regular Linux server running on old hardware.

Generally, the default values for the nf_conntrack_* time-outs are (unnecessarily) large.
Therefore, for large flows of traffic, even if you increase nf_conntrack_max, you can still shortly get an nf_conntrack table overflow, resulting in dropped server connections. To prevent this from happening, check and decrease the other nf_conntrack timeout connection tracking values:

All the timeouts are in seconds. net.netfilter.nf_conntrack_generic_timeout, as you see, is quite high – 600 secs (= 10 minutes).
This kind of value means any NAT-ted connection not responding can stay hanging for 10 minutes!

The value net.netfilter.nf_conntrack_tcp_timeout_established = 432000 is quite high too (5 days!)
If these values are not lowered, the server will be an easy target for anyone who would like to flood it with excessive connections; once this happens, the server will quickly reach even the raised value for net.nf_conntrack_max and the initial connection dropping will re-occur again …

With all that said, to protect the server from malicious users situated behind the NAT plaguing you with Denial of Service attacks:

Lower net.ipv4.netfilter.ip_conntrack_generic_timeout to 60 – 120 seconds and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established to something like 54000.

These timeouts should work fine on the router without creating interruptions for regular NAT users. After changing the values and monitoring for at least a few days, make the changes permanent by adding them to /etc/sysctl.conf.

I decided to start this post with a picture I found in an onlamp.com article called "Simplify Your Life with Apache VirtualHosts". I put it here because I think it illustrates quite well the Apache webserver's internal processes. The picture also gives a good clue as to when Virtual Hosts get loaded; anyway, I'll go back to the main topic of this article, hoping the above picture gives some more insight into how Apache works.
Here is how to list all the enabled VirtualHosts in Apache on a Debian GNU / Linux serving pages:
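That is done with apache2ctl's -S option; the output will look something like this (domains and paths are illustrative):

```
debian:~# apache2ctl -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:* is a NameVirtualHost
        default server exampleserver1.com (/etc/apache2/sites-enabled/000-default:2)
        port * namevhost exampleserver2.com (/etc/apache2/sites-enabled/000-default:2)
Syntax OK
```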

The line *:* is a NameVirtualHost; it means the Apache VirtualHosts module will be able to use VirtualHosts listening on any IP address (configured on the host), on any port configured for the respective VirtualHost to listen on.

The next output line, port * namevhost exampleserver2.com (/etc/apache2/sites-enabled/000-default:2), shows that requests to the domain on any port will be accepted (port *) by the webserver, and indicates that the <VirtualHost> in the file /etc/apache2/sites-enabled/000-default is defined on line 2 (e.g. :2).

To see the same – all enabled VirtualHosts – on FreeBSD, the command to be issued is:

On Fedora and the other RedHat Linux distributions, apache2ctl -S should display the enabled VirtualHosts.

One might wonder what might be the reason to check the VirtualHosts loaded by the Apache server, since this could also be checked by reviewing Apache / Apache2's config file. Well, the main advantage is that checking directly in the file might sometimes take more time, especially if the file contains thousands of similarly named virtual host domains. Another time using the -S option is better is when some enabled VirtualHost in a config file seems not to be accessible; checking directly whether Apache has properly loaded the VirtualHost directives ensures there is no problem with loading the VirtualHost. Another scenario is when there are multiple Apache config files / installs located on the system and you're unsure which one to check for the exact list of virtual domains loaded.

I’ve recently had to build a Linux server with some other servers behind the router with NAT.
One of the hosts behind the Linux router was running a Window GRE encrypted tunnel service. Which had to be accessed with the Internet ip address of the server.
In order < б>to make the GRE tunnel accessible, a bit more than just adding a normal POSTROUTING DNAT rule and iptables FORWARD is necessery.

As far as I’ve read online, there is quite of a confusion on the topic of how to properly configure the GRE tunnel accessibility on Linux , thus in this very quick tiny tutorial I’ll explain how I did it.

These two modules absolutely need to be loaded before the remote GRE tunnel can be properly accessed. I've seen many people complaining online that they can't make the GRE tunnel work, and I suppose in many of the cases the reason for the failure is omitting to load these two kernel modules.

2. Make the ip_nat_pptp and ip_conntrack_pptp modules load at system boot time
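On Debian-style systems, list them in /etc/modules (one module name per line):

```
ip_conntrack_pptp
ip_nat_pptp
```

3. Add DNAT rules for the PPTP control connection and the GRE traffic

The example rules referred to below are not visible in this excerpt; typically they would look like this (using the addresses from this post):

```
linux-router:~# /sbin/iptables -t nat -A PREROUTING -d 111.222.223.224 -p tcp --dport 1723 -j DNAT --to-destination 192.168.1.3
linux-router:~# /sbin/iptables -t nat -A PREROUTING -d 111.222.223.224 -p gre -j DNAT --to-destination 192.168.1.3
```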

In the above example rules it is necessary to substitute the 111.222.223.224 IP address with the external Internet (real IP) address of the router.

Also, the IP address 192.168.1.3 is the internal IP address of the host where the GRE host tunnel is located.

Next it’s necessery to;

4. Add iptables rule to forward tcp/ip traffic to the GRE tunnel

linux-router:~# /sbin/iptables -A FORWARD -p gre -j ACCEPT

Finally it’s necessery to make the above iptable rules to be permanent by saving the current firewall with iptables-save or add them inside the script which loads the iptables firewall host rules.
Another possible way is to add them from /etc/rc.local , though this kind of way is not recommended as rules would add only after succesful bootup after all the rest of init scripts and stuff in /etc/rc.local is loaded without errors.
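On Debian a common pattern is (the rules file name is conventional, not mandatory):

```
linux-router:~# iptables-save > /etc/iptables.up.rules
```

and then a pre-up iptables-restore < /etc/iptables.up.rules line under the WAN interface in /etc/network/interfaces, so the rules are loaded before the interface comes up.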

Afterwards, access to the GRE tunnel on the local IP 192.168.1.3 using port 1723 and host IP 111.222.223.224 is possible.
Hope this is helpful. Cheers 😉