Wednesday, December 30, 2009

If you can’t get the tail command to continuously monitor a file, read on. I was working on a script yesterday, a part of which depended on continuous monitoring of a text file.

I had used our trusty old “tail” command for this, but while testing it by manually putting some data into the file, it kept failing. Curiously, it worked fine in the actual scenario.

Befuddled, I did a simple test. I created a simple text file “a.txt” with a few lines of data and then ran the following command.

# tail -f a.txt

It showed the last few lines of the file and kept waiting. So far so good. Then I opened the file in the vim editor, wrote a few more lines, saved the file and waited, but nothing appeared in the window that was running the tail command.

Thinking that the data might be buffered and not yet flushed to the disk, I ran the sync command, but still nothing.

Then I got a hint: when I used the “-F” (or “--follow=name”) option instead of “-f”, tail was able to detect the change just fine. The only problem was that in this mode it prints the last few lines again, not just the newly added ones.

The main difference with these options is that tail tracks the file for changes by its name rather than by the file descriptor, and then it dawned on me.

The problem was not in the tail command but in my testing method itself. When I save the file in vim, it writes a new file with a new inode, while the one held open by tail is still the old one (now an unlinked file that the kernel keeps around only as long as tail has it open).

When I quit tail, the kernel frees the old file automatically. This is also confirmed by running “lsof | grep a.txt” (lsof lists open files, and we then find the ones related to a.txt).

The -F option works around this because tail then periodically reopens the file by name and reads it again, bypassing the issue.

Then I simply tried running tail again on the same file and doing something like “echo abc >> a.txt”, and tail behaved as expected, immediately detecting the change and displaying it in its window.
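The inode change is easy to observe directly. The sketch below simulates an editor’s save (write a new file, then move it over the old name) and compares the inode numbers before and after; filenames are illustrative:

```shell
#!/bin/sh
# Show why "tail -f" loses a file that an editor rewrites:
# the name stays the same, but the inode changes.
echo "first line" > a.txt
before=$(ls -i a.txt | awk '{print $1}')

# Simulate vim's save: write a new file, then move it over the old name.
printf 'first line\nsecond line\n' > a.txt.new
mv a.txt.new a.txt
after=$(ls -i a.txt | awk '{print $1}')
echo "inode before: $before, after: $after"

# A plain append keeps the same inode, so "tail -f" keeps working:
echo "third line" >> a.txt
```

Running `ls -i` on the file before and after saving in vim shows the same effect; “tail -F” survives it precisely because it reopens the file by name.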

Hope this helps if you have been pulling your hair out, thinking you have gone crazy because your favourite little tool, which you have used for so many years, has suddenly stopped working and no one else apart from you is even complaining.

Tuesday, December 29, 2009

For most small businesses, a reliable connection to the Internet is vital for both communication and commerce.

A key component of Internet access is the Domain Name System (DNS), which allows you to reach sites using familiar and user-friendly names like smallbusinesscomputing.com rather than inscrutable and difficult to remember IP addresses like 63.236.73.55.

Whenever you access a Web site, send or receive e-mail, chat via instant messaging, or use any other type of Internet application, DNS is working behind the scenes matching domain names to IP addresses.

As you read this, your business is probably relying on ISP-provided DNS servers to reach sites and services on the Internet.

They often do an adequate job, but they’re prone to sluggishness (and sometimes outages). Switching your business over to the independent DNS service provider OpenDNS, on the other hand, can make Internet access a bit speedier and safer for everyone on your network. It also provides added features like content filtering, so you can determine which Web sites your employees can and can’t visit.

OpenDNS uses a combination of caching technology and a network of strategically located servers that generally perform DNS lookups much quicker than ISP servers do.

Considering that loading all the components of a single Web page can often involve lots of individual DNS lookups, saving even a fraction of a second on each can really add up.

OpenDNS also provides a phishing filter and checks every site you visit to make sure it’s legitimate before taking you to it.

Best of all, you can take advantage of OpenDNS for free (or at minimal cost) and without having to make any major configuration changes to your network or any of your computers.

Getting Started with OpenDNS

Getting up and running with OpenDNS ranges from easy to very easy.

If you’re a small firm that relies exclusively on ISP-provided DNS — that is, you don’t maintain your own DNS server — all you need to do is make a quick tweak to your router settings so that it uses OpenDNS’s DNS servers rather than your ISP’s.
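If you would rather test OpenDNS on a single machine before touching the router, the same change can be made locally. On a Linux box, for example, it is a two-line edit to /etc/resolv.conf (the addresses below are OpenDNS’s public resolvers):

```
# /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
```

Making the change on the router instead applies it to every device on the network at once, which is why the article recommends that route.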

Content Filtering
To take advantage of the aforementioned Web content filtering, you’ll need to take the extra step of creating an OpenDNS Basic account (still free), so that the service can identify your specific network and apply unique settings to it.

OpenDNS identifies your network by the public IP address assigned to it by your ISP.

Although it’s not too common with business-class Internet service, if your network’s public IP address is dynamic — i.e. subject to periodic changes — you’ll need to run a small utility on one of your systems (preferably one that’s left running all the time).

This will detect any changes to your public IP and update OpenDNS accordingly. (You’ll see a link to the Dynamic IP software when you set up your network, but you can also download the utility.)

For content filtering, you can use general settings (minimal, low, moderate, high) or customized ones to filter almost 60 specific categories of inappropriate or time-wasting content (e.g. adult, games, social networking, Webmail, etc.).

You’ll also have the option to allow or block access to particular domain names, known as whitelisting and blacklisting respectively. (See Figure 2.)

Other benefits of using OpenDNS with an account include the capability to view statistics about your network’s DNS usage, such as which domains were visited most and which access attempts were blocked.

You’ll also be able to customize the message that’s displayed when the phishing or content filter blocks a site, as well as on the guide page, which presents a list of suggested alternatives when someone types in an invalid or unresponsive address.

It’s worth noting that OpenDNS only knows about your network and not its users, so it won’t allow you to apply different settings to individual employees.

Similarly, OpenDNS collects network stats in aggregate; it will be able to tell you when someone attempts to access a forbidden site, but not that it was Fred in accounting. (Sorry to narc on you, Fred.)

What’s the Catch?
At this point you might be wondering how OpenDNS manages to provide its service for free. As is so often the case, “free” really means “advertising supported,” and the upshot is that sponsored links will appear on every block and guide page.

If you’d rather not deal with the ads and are willing to ante up $5 per user per year (still pretty cheap) to make them go away, you can upgrade to OpenDNS Deluxe.

Note: Google recently released a DNS service of its own called Google Public DNS, which promises speed and security benefits similar to OpenDNS, but it doesn’t currently offer any advanced/customizable features.

Switching to OpenDNS isn’t going to make an inherently slow Internet connection lightning-quick, nor will it protect you against every form of Internet-borne malady. But if you want faster, safer Internet access, plus more control and insight over how your small business’s Internet connection is used, it’s worth checking out.

Modern Linux distributions are capable of identifying a hardware component which is plugged into an already-running system.

There are a lot of user-friendly distributions like Ubuntu, which will automatically run specific applications like Rhythmbox when a portable device like an iPod is plugged into the system.

Hotplugging (which is the word used to describe the process of inserting devices into a running system) is achieved in a Linux distribution by a combination of three components: Udev, HAL, and Dbus.

Udev supplies a dynamic device directory containing only the nodes for devices which are connected to the system.

It creates or removes the device node files in the /dev directory as they are plugged in or taken out. Dbus is like a system bus which is used for inter-process communication.

HAL gets information from the Udev service when a device is attached to the system, and it creates an XML representation of that device.

It then notifies the corresponding desktop application, such as Nautilus, through Dbus, and Nautilus opens the mounted device’s files.

This article focuses only on Udev, which does the basic device identification.

What is Udev?
Udev is the device manager for the Linux 2.6 kernel that creates/removes device nodes in the /dev directory dynamically.

It is the successor of devfs and hotplug. It runs in userspace and the user can change device names using Udev rules.

Udev depends on the sysfs file system which was introduced in the 2.5 kernel. It is sysfs which makes devices visible in user space.

When a device is added or removed, kernel events are produced which will notify Udev in user space.

The external binary /sbin/hotplug was used in earlier releases to inform Udev about device state change. That has been replaced and Udev can now directly listen to those events through Netlink.

Why Do We Need It ?
In the older kernels, the /dev directory contained static device files. With dynamic device creation, device nodes are created only for those devices which are actually present in the system.

Let us see the disadvantages of the static /dev directory, which led to the development of Udev.

Problems Identifying the Exact Hardware Device for a Device Node in /dev
The kernel will assign a major/minor number pair when it detects a hardware device while booting the system. Let us consider two hard disks.

They are connected in such a way that one is the master and the other the slave. The Linux system will call them /dev/hda and /dev/hdb.

Now, if we interchange the disks the device name will change.

This makes it difficult to identify the correct device that is related to the available static device node. The condition gets worse when there are a bunch of hard disks connected to the system.

Udev provides a persistent device naming system through the /dev directory, making it easier to identify the device.

The following is an example of persistent symbolic links created by Udev for the hard disks attached to a system.
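On a modern system such symlinks live under /dev/disk/, keyed by stable properties like the drive’s model and serial number. The listing below is illustrative (the model and serial strings are made up), showing name-stable links pointing at whichever node the disk currently holds:

```
$ ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root  9 ata-EXAMPLEDISK_SERIAL123       -> ../../sda
lrwxrwxrwx 1 root root 10 ata-EXAMPLEDISK_SERIAL123-part1 -> ../../sda1
```

Swapping the disks changes which sda/sdb node each one gets, but the by-id symlink for a given physical disk keeps pointing at it.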

Persistent device naming helps to identify the hardware device without much trouble.

Huge Number of Device Nodes in /dev
In the static model of device node creation, no method was available to identify the hardware devices actually present in the system.

So, device nodes were created for all the devices that Linux was known to support at the time. The huge mess of device nodes in /dev made it difficult to identify the devices actually present in the system.

Not Enough Major/Minor Number Pairs
The number of static device nodes to be included increased a lot over time, and the 8-bit major/minor numbering scheme that was used proved insufficient for handling all the devices.

As a result the major/minor number pairs started running out.

Character devices and block devices have a fixed major/minor number pair assigned to them. The authority responsible for assigning the major/minor pairs is the Linux Assigned Names And Numbers Authority (LANANA).

But, a machine will not use all the available devices. So, there will be free major/minor numbers within a system.

In such a situation, the kernel of that machine can borrow major/minor numbers from those unused devices and assign them to other devices which require them.

This can create issues at times. The user space application which handles the device through the device node will not be aware of the number change.

For the user space application, the device number assigned by LANANA is very important. So, the user space application should be informed about the major/minor number change.

This is called dynamic assignment of major/minor numbers and Udev does this task.

Udev’s Goals

Run in user space.

Create persistent device names, take the device naming out of kernel space and implement rule based device naming.

Create a dynamic /dev with device nodes for devices present in the system and allocate major/minor numbers dynamically.

Provide a user space API to access the device information in the system.

Installation of Udev

Udev is the default device manager in the 2.6 kernel. Almost all modern Linux distributions come with Udev as part of the default installation.
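To see the information udev holds for a particular device, query it with udevadm. The sysfs path below is only an example; substitute any device present on your system:

```shell
# Print all recorded properties of a device, including the MODALIAS line
udevadm info --query=all --path=/sys/class/net/eth0
```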

You can see that it provides a lot of information about the device. This includes the modalias variable that tells Udev to load a particular module.

The modalias data will look like:

MODALIAS=pci:v000010ECd00008169sv00001385sd0000311Abc02sc00i00

The modalias data contains all the information required to find the corresponding device driver:

pci: it is a PCI device
v: vendor ID of the device. Here it is 000010EC (i.e. 10EC)
d: device ID of the device. Here it is 00008169 (i.e. 8169)
sv and sd: the subsystem vendor and subsystem device IDs.

The best place to find the vendor/product from the id of a PCI device is http://www.pcidatabase.com.
Udev uses the modalias data to find the correct device driver from /lib/modules/`uname -r`/modules.alias.
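This lookup can be reproduced by hand. Assuming the modalias shown above, grep finds the matching module in modules.alias, and modinfo then shows that module’s details, including its "depends" line:

```shell
# Which module claims this vendor/device pair? (10EC:8169 from the modalias)
grep -i '10ec.*8169' /lib/modules/$(uname -r)/modules.alias

# Inspect the module; the "depends:" field lists prerequisite modules
modinfo r8169
```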

Check out the line starting with “depends”. It lists the other modules which the r8169 module depends on; Udev will load these modules as well.

Rule Processing and Device Node Creation

As already mentioned, Udev parses the rules in /etc/udev/rules.d/ for every device state change in the kernel.

The Udev rule can be used to manipulate the device node name/permission/symlink in user space.

Let us see some sample rules that will help you understand Udev rules better.

The data supplied by the kernel through netlink is used by Udev to create the device nodes. The data includes the major/minor number pair and other device specific data such as device/vendor id, device serial number etc.

The Udev rule can match all this data to change the name of the device node, create symbolic links or register the network link.

The following example shows how to write a Udev rule to rename the network device in a system.
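A minimal rule of that kind might look as follows. The MAC address and the name "lan0" are illustrative, and the file name simply needs to sort into /etc/udev/rules.d/:

```
# /etc/udev/rules.d/70-net-rename.rules
# Match the NIC by its MAC address and always name it "lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:1e:8c:aa:bb:cc", NAME="lan0"
```

Because the rule matches on a persistent property (the MAC address), the interface gets the same name no matter what order the kernel detects the hardware in.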

Friday, December 25, 2009

There is an old saying that goes "you can't miss what you never had", meaning that those who have never had something will have no idea what they are missing out on.

Typically I use Ubuntu or some other Linux flavor as my operating system for everyday tasks; however, as most techs know, using Windows is unavoidable at times (whether because I am fixing someone else's machine, at work/school, or queuing up some Netflix watch instantly on my home system).

That being said the following are the top ten features/programs I find myself grumbling about/missing the most when I am working on the Windows platform:

10.) Klipper/Copy & Paste Manager - I use this one a lot when I am either coding or writing a research paper for school.

More often than not I find I have copied something new only to discover I need to paste a link or block of code again from two copies back.

Having a tray icon where I can recall the last ten copies or so is mighty useful.

9.) Desktop Notifications - This is something that was first largely introduced in Ubuntu 9.04 and something I quickly grew accustomed to having.

Basically it is a small message (notification) that pops up in the upper right hand corner of your screen for a few moments when something happens in one of your programs (a torrent finishes, you get a new instant message, etc.) or you adjust the volume/brightness settings on your system.

8.) "Always on Top" Window Option - This is something I find useful when I am instant messaging while typing a paper, surfing the net, or watching a movie on my computer.

Essentially what it does is make sure that the window you have this option toggled on always stays on top of your view, regardless of which program you have selected/are working in.

It is useful because it allows me to read instant messages without having to click out of something else that I am working on.

7.) Multiple Work Spaces - When I get to really heavy multitasking on a system, having multiple desktops to assign applications to is a godsend.

It allows for better organization of the different things I am working on and keeps me moving at a faster pace.

6.) Scrolling in the Window/Application the Cursor is Over - This one again is mostly applicable when some heavy multitasking is going on (but hey, it's almost 2010, who isn't always doing at least three things at once, right?).

Basically, in the Ubuntu/Gnome desktop, when I use the scroll on my mouse (whether it is the multi-touch on my track pad or the scroll wheel on my USB mouse), it will scroll in whatever program/window my mouse is currently over, instead of only scrolling in whatever application I have selected.

5.) Gnome-Do - Most anyone who uses the computer in their everyday work will tell you that less mouse clicks means faster speed and thus (typically) more productivity.

Gnome-Do is a program that allows you to cut down on mouse clicks (so long as you know what program you are looking to load).

The gist of what it does is this: you assign a series of hot keys to call up the search bar (personally I use control+alt+space), then you start typing the name of an application or folder you want to open and it will start searching for it. Once the correct thing is displayed, all you need to do is tap enter to load it up.

The best part is that it remembers which programs you use most often. Meaning that most times you only need to type the first letter or two of a commonly used application for it to find the one you are looking for.

4.) Tabbed File Browsing - Tabs are very useful and are a much cleaner option when sorting through files, as opposed to having several windows open on your screen.

3.) Removable Media Should Not Have a Drive Letter - The system Windows uses for assigning letters to storage devices was clearly invented before flash drives existed, and I feel it works very poorly for handling such devices.

It is confusing to new computer users that their removable media appears as a different drive letter on most every machine (and even on the same machine sometimes if you have multiple drives attached).

A better solution is something like what Gnome/KDE/OSX do: have the drive appear as an icon on the desktop, displaying the name of the drive rather than a drive letter (it's fine if the letter still exists; I understand the media needs a mount point, it just adds confusion to display this letter instead of the drive name).

2.) Hidden Files that are Easy/Make Sense - I love how Linux handles hidden files. You simply prefix your file name with a "." and then, poof, it's gone unless you have your file browser set to view hidden files.

I think it is goofy to have it set up as a toggleable option within the file's settings. Beyond that, Windows has both "hidden" files and hidden system files, to further confuse things.

1.) System Updates that Install/Configure Once - I've done more than my fair share of Windows installs and the update process it goes through each time irks me beyond belief.

The system downloads and "installs" the updates, then it needs to restart. Upon shutting down it "installs" the updates again and then proceeds to "configure" them.

Then once it comes back online it "installs" and "configures" the updates one last time. Why? On Ubuntu the only update I need to restart for is a kernel update, and even then I usually stick with my older kernel unless I have a specific reason for changing to the new one.

0.) Wobbly Windows - This one doesn't affect productivity or usability like the other ten, but I must say, after using mostly Ubuntu for the last year and a half, not having the windows wobble when I drag them around the screen is a huge kill joy.

I'm aware that a few of the above mentioned things can be added to Windows through third party software; however, like I said, most times when I am using Windows it is at work, school, or for a few moments on a friend's system, meaning I'm not about to go installing extra things on them or changing configurations.

Anyone else have some other key things/features they miss when using the Windows platform when coming from elsewhere?

Thursday, December 17, 2009

Good time keeping is not an obvious priority for network administrators, but the more you think about it the clearer it is that accurate clocks have a crucial role to play on any network.

Let the clocks on your networked devices get out of sync and you could end up losing valuable corporate data.

Here are just a few things that rely on hardware clocks which are accurately set and in sync with each other:

Scheduled data backups

Successful backups are vital to any organization. Systems that are too far out of sync may fail to back up correctly, or even at all.

Network accelerators

These and other devices that use caching and wide area file systems may rely heavily on file time stamps to work out which version of a piece of data is the most current.

Bad time syncing could cause these systems to work incorrectly and use the wrong versions of data.

Network management systems

When things go wrong, examining system logs is a key part of fault diagnosis. But if the timing in these logs is out of sync, it can take much longer than necessary to figure out what went wrong and to get systems up and running again.

Intrusion analysis

In the event of a network intrusion, working out how your network was compromised and what data was accessed may only be possible if you have accurately time-stamped router and server logs.

Hackers will often delete logs if they can, but even when they don't, inaccurate time data makes the analysis far harder, giving hackers more time to exploit your network.

Compliance regulations

Sarbanes-Oxley, HIPAA, GLBA and other regulations do, or may in the future, require accurate time stamping of some categories of transactions and data.

Trading systems

Companies in some sectors may make thousands of electronic trades per second. In this sort of environment system clocks need to be very accurate indeed.

Many companies set and synchronize their devices using Network Time Protocol (NTP), with NTP clients or daemons connecting to time servers on the network known as stratum-2 devices.
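On a typical Linux client this amounts to a few lines of ntp.conf; the hostnames below are placeholders for your own stratum-2 servers:

```
# /etc/ntp.conf (client side)
server ntp1.example.com iburst
server ntp2.example.com iburst
driftfile /var/lib/ntp/ntp.drift
```

The iburst option speeds up the initial synchronization, and the driftfile lets the daemon remember the local clock's frequency error across restarts.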

To ensure these stratum-2 time servers are accurate, they are synced over the Internet through port 123 with a stratum-1 device.

This public time server is connected directly (i.e. not over a network) to one or more stratum-0 devices: extremely accurate reference clocks.

Unfortunately, there are a number of potential problems with this approach. The most basic one is that the time that a stratum-2 server on a corporate network receives over the Internet from a stratum-1 server is not very precise.

That's because the time data has to travel over the Internet - from the time server to the corporate time source - in an unpredictable way, and at an unpredictable speed.

This means it always has a varying, and unknown, error factor. Although all the devices on a local area network that update themselves from the same corporate stratum-2 time server may be reasonably well synchronized (to within anything from 1 to about 100 milliseconds), keeping the time synchronized between stratum-2 devices on different local area networks to a reasonable degree of accuracy can be difficult.
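Whichever servers a client uses, the quality of its synchronization can be checked with the standard NTP query tool, which reports the offset and jitter against each peer in milliseconds:

```shell
# List the peers the local NTP daemon is using, with offset and jitter
ntpq -p
```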

Security Risks with NTP Servers

There are also security risks involved in using public stratum-1 NTP servers, most notably:

NTP clients and daemons are in themselves a potential security risk. Vulnerabilities in this type of software could be (and have in the past been) exploited by hackers sending appropriately crafted packets through the corporate firewall on port 123.

Organizations that use public NTP servers are susceptible to denial of service attacks by a hacker sending spoofed NTP data, making time syncing impossible during the attack.

For companies involved in activities such as financial trading—which requires very precise timing information—this could be very damaging.

One way to both avoid these potential security issues and to get more accurate time data is simply to run one or more stratum-1 servers inside your network, behind your corporate firewall.

Running Your Own Stratum-1 Servers
Stratum-1 time servers are available in a single 1U rack-mountable form factor that can easily be installed in your server room or data center and connected to your network, and most have a way of connecting to a stratum-0 reference clock built in.

The most commonly used ways to connect to a stratum-0 device are by terrestrial radio or GPS signals.

Terrestrial radio based connections use radio signals such as WWVB out of Fort Collins, Colorado, MSF from Anthorn, UK, or DCF77 from Frankfurt, Germany.

This is similar to the way consumer devices such as watches and alarm clocks update themselves with signals from reference clocks to keep accurate time.

Stratum-1 time servers that sync with GPS satellite signals are more accurate, but are less convenient to install as they need to be connected to an antenna fitted in a suitable position on the roof of the building.

Using time data from a number of satellites, and by calculating the distance of each satellite from the antenna, a stratum-1 time server that uses GPS reference clock signals is able to get the precise time to within 50 or so nanoseconds.

More importantly, two or more of these servers at separate locations and running on separate local area networks can also remain in sync with each other to a similar degree of accuracy.

To provide redundancy, some larger organizations install multiple GPS-based time servers at each location.

An alternative is to have a radio-based time server as a backup to a GPS-based one, in case the GPS server itself fails or, more likely, the GPS antenna is damaged, perhaps during bad weather.

Given that most radio and GPS based time servers cost between $1,000 and $5,000, purchasing two or more time servers is not a major investment for a medium or large organization.

Smaller companies, including those at isolated sites which are not connected to the Internet, can also use a low cost stratum-1 GPS PCI card (connected to an appropriate antenna) to enable a standard PC to act as a time server for the local area network, using the satellites as an external time source.

In the concluding piece in this series we'll take a look at how to implement a GPS-based time server in your data center.

Anyone who has never made a mistake has never tried anything new. -- Albert Einstein.

Here are a few mistakes that I made while working at the UNIX prompt. Some of them caused a good amount of downtime.

Most of these mistakes are from my early days as a UNIX admin.

userdel Command
The file /etc/deluser.conf was configured to remove the home directory and mail spool of any user being removed (this had been done by the previous sysadmin, and it was my first day at work).

I just wanted to remove the user account, but I ended up deleting everything (note: the -r behaviour was activated via deluser.conf):

# userdel foo

Rebooted Solaris Box
On Linux, the killall command kills processes by name (killall httpd). On Solaris, it kills all active processes.

As root, I killed all processes on what was our main Oracle DB box:

# killall process-name

Destroyed named.conf
I wanted to append a new zone to the /var/named/chroot/etc/named.conf file, but ended up running the following, which truncated it (> overwrites where >> appends):

# ./mkzone example.com > /var/named/chroot/etc/named.conf

Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to extract a directory called functions. I ended up overwriting the entire backup (note the -c switch instead of -x):

# cd /mnt/bacupusbharddisk
# tar -zcvf project.tar.gz functions

I had no backup. Similarly, I once ran the rsync command with source and destination swapped and deleted all my new files by overwriting them from the backup set (I have since switched to rsnapshot):

# rsync -av --delete /dest /src

Again, I had no backup.

Deleted Apache DocumentRoot
I had symlinks for my web server docroot (/home/httpd/http was symlinked to /www). I forgot about the symlink. To save disk space, I ran rm -rf on the http directory. Luckily, I had a full working backup set.

Accidentally Changed Hostname and Triggered False Alarm
I accidentally changed the current hostname (I only wanted to see the current hostname setting) on one of our cluster nodes.

Within minutes I received an alert message on both mobile and email.

# hostname foo.example.com

Public Network Interface Shutdown
I wanted to shut down the VPN interface eth0, but ended up shutting down eth1 while I was logged in via SSH:

# ifconfig eth1 down

Firewall Lockdown
I made changes to sshd_config, changing the SSH port number from 22 to 1022, but failed to update the firewall rules.

After a quick kernel upgrade, I rebooted the box and had to call the remote data center tech to reset the firewall settings. (Now I use a firewall reset script to avoid lockdowns.)

Typing UNIX Commands on Wrong Box
I wanted to shut down my local Fedora desktop system, but I issued halt on a remote server (I was logged into the remote box via SSH):

# halt
# service httpd stop

Wrong CNAME DNS Entry
Created a wrong DNS CNAME entry in the example.com zone file. The end result: a few visitors went to /dev/null.

Lessons Learned

Never use rsync with a single backup directory. Create snapshots using rsync or rsnapshot.

Use CVS to store configuration files.

Wait and read the command line again before hitting the damn [Enter] key.

Use your well-tested Perl / shell scripts and open source configuration management software such as Puppet, Cfengine or Chef to configure all servers. This also applies to day-to-day jobs such as creating users and so on.

Mistakes are inevitable, so have you made any that caused some sort of downtime? Please add them in the comments below.

/etc/hosts.allow and /etc/hosts.deny : Access control lists that should be enforced by tcp-wrappers are defined here.

SSH default port : TCP 22

SSH Session in Action

#1: Disable OpenSSH Server

Workstations and laptops can work without an OpenSSH server. If you do not need to provide the remote login and file transfer capabilities of SSH, disable and remove the sshd server. CentOS / RHEL / Fedora Linux users can disable and remove openssh-server with the yum command:

# chkconfig sshd off
# yum erase openssh-server

Debian / Ubuntu Linux users can disable and remove the same with the apt-get command:
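A sketch of the equivalent Debian / Ubuntu steps on sysvinit-era releases, where the init script is named "ssh":

```shell
# Stop sshd from starting at boot, then remove the package
update-rc.d -f ssh remove
apt-get remove openssh-server
```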

Saying "don't login as root" is h******t. It stems from the days when people sniffed the first packets of sessions so logging in as yourself and su-ing decreased the chance an attacker would see the root pw, and decreast the chance you got spoofed as to your telnet host target, You'd get your password spoofed but not root's pw. Gimme a break. this is 2005 - We have ssh, used properly it's secure. used improperly none of this 1989 will make a damn bit of difference. -Bob

#8: Enable a Warning Banner

Set a warning banner by updating sshd_config with the following line:

Banner /etc/issue

Sample /etc/issue file:

----------------------------------------------------------------------------------------------
You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
By using this IS (which includes any device attached to this IS), you consent to the following conditions:
+ The XYZG routinely intercepts and monitors communications on this IS for purposes including, but not limited to,
penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM),
law enforcement (LE), and counterintelligence (CI) investigations.
+ At any time, the XYZG may inspect and seize data stored on this IS.
+ Communications using, or data stored on, this IS are not private, are subject to routine monitoring,
interception, and search, and may be disclosed or used for any XYZG authorized purpose.
+ This IS includes security measures (e.g., authentication and access controls) to protect XYZG interests--not
for your personal benefit or privacy.
+ Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching
or monitoring of the content of privileged communications, or work product, related to personal representation
or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work
product are private and confidential. See User Agreement for details.
----------------------------------------------------------------------------------------------

By default, SSH listens on all available interfaces and IP addresses on the system. Limit the ssh port binding and change the ssh port (by default, brute-forcing scripts only try to connect to port # 22). To bind to the 192.168.1.5 and 202.54.1.5 IPs and to port 300, add or correct the following lines:

Port 300
ListenAddress 192.168.1.5
ListenAddress 202.54.1.5
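One way to sanity-check that these directives are set as intended is to parse the sshd_config text; the following is a minimal Python sketch (the parsing rules are a simplification of the real sshd_config grammar):

```python
# Minimal sketch: collect Port and ListenAddress directives from
# sshd_config-style text so the binding restrictions can be verified.
def parse_sshd_config(text):
    settings = {"Port": [], "ListenAddress": []}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] in settings:
            settings[parts[0]].append(parts[1])
    return settings

config = """\
Port 300
ListenAddress 192.168.1.5
ListenAddress 202.54.1.5
"""
print(parse_sshd_config(config))
```

Note that clients then need to specify the non-standard port explicitly, e.g. `ssh -p 300 user@192.168.1.5`.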

A better approach is to use proactive tools such as fail2ban or DenyHosts (see below).

#10: Use Strong SSH Passwords and Passphrases
It cannot be stressed enough how important it is to use strong user passwords and strong passphrases for your keys.
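As an illustration, a high-entropy random password can be generated with Python's standard-library secrets module (the length and symbol set below are this sketch's assumptions, and no particular character-class mix is guaranteed):

```python
import secrets
import string

# Build a 16-character random password from letters, digits, and a few
# common symbols using a cryptographically secure random source.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)
```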

Use a public/private key pair with password protection for the private key.

See how to use RSA and DSA key-based authentication. Never use a passphrase-free (passwordless) key for login.

#12: Use Keychain Based Authentication
keychain is a special bash script designed to make key-based authentication incredibly convenient and flexible.

It offers various security benefits over passphrase-free keys. See how to setup and use keychain software.

#13: Chroot SSHD (Lock Down Users To Their Home Directories)
By default, users are allowed to browse server directories such as /etc/, /bin and so on. You can protect ssh using an OS-based chroot or special tools such as rssh.

With the release of OpenSSH 4.8p1 or 4.9p1, you no longer have to rely on third-party hacks such as rssh or complicated chroot(1) setups to lock users to their home directories.

See this blog post about new ChrootDirectory directive to lock down users to their home directories.

#15: Disable Empty Passwords
You need to explicitly disallow remote login from accounts with empty passwords; update sshd_config with the following line:

PermitEmptyPasswords no

#16: Thwart SSH Crackers (Brute Force Attack)

Brute force is a method of defeating a cryptographic scheme by trying a large number of possibilities using a single or distributed computer network.

To prevent brute-force attacks against SSH, use the following software:

DenyHosts is a Python based security tool for SSH servers. It is intended to prevent brute force attacks on SSH servers by monitoring invalid login attempts in the authentication log and blocking the originating IP addresses.

Brute Force Detection is a modular shell script for parsing application logs and checking for authentication failures. It does this using a rules system where application-specific options are stored, including regular expressions for each unique auth format.
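The core idea behind these tools can be sketched in a few lines of Python: scan auth-log lines for failed SSH logins and flag repeat-offender IPs. This is an illustrative toy (the log lines, regex, and threshold are this sketch's assumptions), not how DenyHosts itself is implemented:

```python
import re
from collections import Counter

# Match "Failed password for [invalid user] NAME from IP" auth-log lines
# and capture the source IPv4 address.
FAILED_RE = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)")

def offenders(log_lines, threshold=3):
    """Return IPs with at least `threshold` failed login attempts."""
    counts = Counter()
    for line in log_lines:
        m = FAILED_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]

sample = [
    "sshd[123]: Failed password for root from 10.0.0.9 port 4321 ssh2",
    "sshd[124]: Failed password for invalid user test from 10.0.0.9 port 4322 ssh2",
    "sshd[125]: Failed password for root from 10.0.0.9 port 4323 ssh2",
    "sshd[126]: Accepted password for bob from 10.0.0.5 port 4000 ssh2",
]
print(offenders(sample))
```

A real tool would then feed the offending IPs into /etc/hosts.deny or a firewall rule.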

#19: Use Log Analyzer

Read your logs using logwatch or logcheck. These tools make your log reading life easier. They will go through your logs for a given period of time and make a report on the areas that you wish with the detail that you wish.

Make sure LogLevel is set to INFO or DEBUG in sshd_config:

LogLevel INFO

#20: Patch OpenSSH and Operating Systems

It is recommended that you use tools such as yum, apt-get, freebsd-update and others to keep systems up to date with the latest security patches.

Other Options
To hide the OpenSSH version, you need to update the source code and compile OpenSSH again. Also make sure the following options are enabled in sshd_config:

# Turn on privilege separation
UsePrivilegeSeparation yes
# Prevent the use of insecure home directory and key file permissions
StrictModes yes
# Turn on reverse name checking
VerifyReverseMapping yes
# Do you need port forwarding?
AllowTcpForwarding no
X11Forwarding no
# Specifies whether password authentication is allowed. The default is yes.
PasswordAuthentication no

GnuPG allows you to encrypt and sign your data and communications, and features a versatile key management system as well as access modules for all kinds of public key directories.

Fugu is a graphical frontend to the command-line Secure File Transfer application (SFTP). SFTP is similar to FTP, but unlike FTP, the entire session is encrypted, meaning no passwords are sent in cleartext, making it much less vulnerable to third-party interception. Another option is FileZilla - a cross-platform client that supports FTP, FTP over SSL/TLS (FTPS), and SSH File Transfer Protocol (SFTP).

For example, SELinux provides a variety of security policies for the Linux kernel.

#5.1: SELinux
I strongly recommend using SELinux which provides a flexible Mandatory Access Control (MAC). Under standard Linux Discretionary Access Control (DAC), an application or process running as a user (UID or SUID) has the user's permissions to objects such as files, sockets, and other processes.

Running a MAC kernel protects the system from malicious or flawed applications that can damage or destroy the system.

See the official Redhat documentation which explains SELinux configuration.

#6: User Accounts and Strong Password Policy
Use the useradd / usermod commands to create and maintain user accounts. Make sure you have a good and strong password policy.

For example, a good password is at least 8 characters long and includes a mixture of upper- and lower-case letters, numbers, and special characters. Most importantly, pick a password you can remember.
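The policy described above can be expressed as a small check; the following Python sketch encodes one possible rule set (the exact rules are this example's assumption, not a standard):

```python
import string

# Illustrative strength check: 8+ characters and at least one lower-case
# letter, upper-case letter, digit, and punctuation character.
def is_strong(password):
    return (len(password) >= 8
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("S3cure!pass"))  # True
print(is_strong("password"))     # False
```

On real systems this kind of policy is usually enforced with PAM modules rather than ad-hoc scripts.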

#6.1: Password Aging
The chage command changes the number of days between password changes and the date of the last password change.

This information is used by the system to determine when a user must change his/her password. The /etc/login.defs file defines the site-specific configuration for the shadow password suite including password aging configuration.
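The aging check that chage configures can be illustrated with a short Python sketch (the dates and 60-day maximum below are hypothetical values, not defaults):

```python
from datetime import date, timedelta

# Given the date of the last password change and a maximum password age,
# decide whether the password has expired and must be changed.
def password_expired(last_change, max_days, today):
    return today > last_change + timedelta(days=max_days)

print(password_expired(date(2009, 10, 1), 60, date(2009, 12, 30)))  # True
print(password_expired(date(2009, 12, 1), 60, date(2009, 12, 30)))  # False
```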

Set BIOS and grub boot loader passwords to protect these settings. All production boxes must be locked in IDCs (Internet Data Centers), and all persons must pass some sort of security check before accessing your server.

#9: Disable Unwanted Services

Disable all unnecessary services and daemons (services that run in the background). Remove all unwanted services from the system start-up. Type the following command to list all services which are started at boot time in run level # 3:

# chkconfig --list | grep '3:on'

To disable a service, enter:

# service serviceName stop

# chkconfig serviceName off

#9.1: Find Listening Network Ports
Use the following command to list all open ports and associated programs:

# netstat -tulpn | grep LISTEN

#16.1: Kerberos

Kerberos performs authentication as a trusted third party authentication service by using cryptographic shared secrets, under the assumption that packets traveling along the insecure network can be read, modified, and inserted.

Kerberos builds on symmetric-key cryptography and requires a key distribution center. You can make remote login, remote copy, secure inter-system file copying and other high-risk tasks safer and more controllable using Kerberos.

#17.2: System Accounting with auditd
The auditd daemon is provided for system auditing and is responsible for writing audit records to disk. During startup, the rules in /etc/audit.rules are read by this daemon.

You can open the /etc/audit.rules file and make changes such as setting the audit log file location and other options. With auditd you can track the following:

System startup and shutdown events (reboot / halt).

Date and time of the event.

User responsible for the event (such as trying to access /path/to/topsecret.dat file).
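As a hypothetical example of what an /etc/audit.rules fragment might look like (the watched paths and key names here are illustrative assumptions):

```
# Watch /etc/passwd for writes and attribute changes, tagged for searching
-w /etc/passwd -p wa -k passwd_changes
# Watch the sshd configuration file for any write access
-w /etc/ssh/sshd_config -p wa -k sshd_config
```

Matching records can later be retrieved with ausearch using the same key, e.g. `ausearch -k passwd_changes`.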

#19: Install And Use Intrusion Detection System

A network intrusion detection system (NIDS) is an intrusion detection system that tries to detect malicious activity such as denial of service attacks, port scans or even attempts to crack into computers by monitoring network traffic.

It is a good practice to deploy any integrity checking software before system goes online in a production environment.

If possible, install AIDE before the system is connected to any network. AIDE is a host-based intrusion detection system (HIDS); it can monitor and analyse the internals of a computing system. Snort is intrusion detection software capable of performing packet logging and real-time traffic analysis on IP networks.

However, permissions set by Linux are irrelevant if an attacker has physical access to a computer and can simply move the computer's hard drive to another system to copy and analyze the sensitive data.

You can easily protect files and partitions under Linux using encryption tools such as LUKS (dm-crypt) for block devices and GnuPG for individual files.

Other Recommendation:

Backups - It cannot be stressed enough how important it is to make a backup of your Linux system. A proper offsite backup allows you to recover from a cracked server, i.e. an intrusion. The traditional UNIX backup programs dump and restore are also recommended.