As you can see above, there are six lines (one for each virtual console); simply placing "#" at the beginning of a line disables that particular console. Suppose I need to disable consoles 4, 5, and 6. In this case, my inittab file will look like this:

Monitoring your hard disk's health is very important. You do not want to wake up one day, turn on your computer, and discover that your hard disk has crashed and all your valuable data is gone with the wind. At that point, crying will not get your data back. As the saying goes, prevention is better than cure. Apart from backing up your data regularly, monitoring the health of your hard disk is an essential task: it ensures that any symptoms of bad sectors or other failures are detected early, so steps can be taken to deal with them sooner. One of the tools that can do this job is smartmontools. According to the yum description, smartmontools provides "Tools for monitoring SMART capable hard disks".
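Once smartmontools is installed, a quick check might look like the following sketch (the device name /dev/sda is an example; substitute your own disk):

```
# Show the overall SMART health verdict for the disk
smartctl -H /dev/sda

# Show all SMART attributes (reallocated sectors, temperature, ...)
smartctl -a /dev/sda

# Kick off a short self-test in the background
smartctl -t short /dev/sda
```

The `-H` check is the quickest way to spot a disk that is already reporting impending failure.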

If problems occur booting your system using a boot manager or if the boot manager cannot be installed on the MBR of your hard disk or a floppy disk, it is also possible to create a bootable CD with all the necessary start-up files for Linux. This requires a CD writer installed in your system.

Creating Boot CDs
Change into a directory in which to create the ISO image, for example: cd /tmp

Create a subdirectory for GRUB:

mkdir -p iso/boot/grub

Copy the kernel, the files stage2_eltorito, initrd, menu.lst, and message to iso/boot/.

Adjust the path entries in iso/boot/grub/menu.lst to make them point to a CD-ROM device. Do this by replacing the device name of the hard disks, listed in the format (hdx,y), in the pathnames with the device name of the CD-ROM drive, which is (cd). You may also need to adjust the paths to the kernel and the initrd—they need to point to /boot/vmlinuz and /boot/initrd, respectively. After having made the adjustments, menu.lst should look similar to the following example:
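A sketch of what iso/boot/grub/menu.lst might look like after the adjustment (the kernel parameters are illustrative):

```
timeout 8
default 0
gfxmenu (cd)/boot/message

title Linux
    kernel (cd)/boot/vmlinuz root=/dev/hda3
    initrd (cd)/boot/initrd
```

The bootable ISO image can then be created with mkisofs using the El Torito boot extension, for example:

```
mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
    -boot-load-size 4 -boot-info-table -o grub.iso iso
```

The resulting grub.iso can be burned to CD with any CD writing tool.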

Step Three
For i386 machines, download and install every rpm that you can find from this link
To install, simply run:
# rpm -ivh curl-7.18.2-3.fc9.i386.rpm
# rpm -ivh curl-debuginfo-7.18.2-3.fc9.i386.rpm
# rpm -ivh libcurl-7.18.2-3.fc9.i386.rpm
# rpm -ivh libcurl-devel-7.18.2-3.fc9.i386.rpm
If you have an existing version of any of these packages installed, simply use rpm's upgrade mode instead:
# rpm -Uvh curl-7.18.2-3.fc9.i386.rpm
# rpm -Uvh curl-debuginfo-7.18.2-3.fc9.i386.rpm
# rpm -Uvh libcurl-7.18.2-3.fc9.i386.rpm
# rpm -Uvh libcurl-devel-7.18.2-3.fc9.i386.rpm
If your platform is not i386, locate your machine’s architecture here, download the rpms under your platform’s folder, and install them on your F9 system using the same rpm commands shown above.
If you have an earlier Fedora version, you can get the libcurl rpms for your version of Fedora here

Step Four
Now, get back to your terminal window. Ensure that the curl library is within the search path:
# updatedb ; locate libcurl.so.3
/usr/lib/libcurl.so.3
Ensure also that the Firefox binary can be found in the default path:
# which firefox
/usr/bin/firefox

Troubleshooting
Note that the nspluginwrapper rpm is not required for a successful installation on Fedora 9. If the installation does not work and you have an earlier version of Fedora, try installing the nspluginwrapper rpm using yum:
# yum -y install nspluginwrapper

The problem with traditional passwd files is that they have to be world readable in order for programs to extract information about a user, such as the user's full name. This means that everyone can see the encrypted password in the second field. Anyone can copy another user's password field and then try billions of different passwords to see if one matches.

The shadow password file is used only for authentication and is not world readable: there is no information in the shadow password file that a common program will ever need, and no regular user has permission to see the encrypted password field. The fields are colon separated, just like in the passwd file.

Here is an example line from a /etc/shadow file:

nik:Q,Jpl.or6u2e7:10795:0:99999:7:-1:-1:134537220

nik - The user's login name.

Q,Jpl.or6u2e7 - The user's encrypted password, known as the hash of the password.

10795 - Days since January 1, 1970 that the password was last changed.

0 - Days before which password may not be changed. Usually zero. This field is not often used.

99999 - Days after which password must be changed. This is also rarely used, and will be set to 99999 by default.

7 - Days before password is to expire that user is warned of pending password expiration.

-1 - Days after the password expires before the account is considered inactive and disabled. -1 is used to indicate infinity -- i.e. to mean we are effectively not using this feature.

-1 - Days since January 1, 1970 after which the account itself is disabled (the account expiration date). Again, -1 means the feature is not used.

134537220 - A reserved field for future use.
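The day-count fields can be turned into calendar dates with a little shell arithmetic. A small sketch using the example line above (requires GNU date):

```shell
# Example /etc/shadow entry from the text
entry='nik:Q,Jpl.or6u2e7:10795:0:99999:7:-1:-1:134537220'

# Field 1: login name; field 3: days since 1970-01-01 of the last change
user=$(echo "$entry" | cut -d: -f1)
lastchg=$(echo "$entry" | cut -d: -f3)

# Convert the day count to a calendar date (GNU date relative syntax)
changed_on=$(date -u -d "1970-01-01 +${lastchg} days" +%Y-%m-%d)

echo "$user last changed their password on $changed_on"
```

For this entry, 10795 days after the epoch works out to 1999-07-23.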

Proprietary software is often looked down upon in the free software world for many reasons:

* It is closed to external scrutiny.
* Users are unable to add features to the software
* Users are unable to correct errors (bugs) in the software

The result of this is that proprietary software,

* does not conform to good standards for information technology.
* is incompatible with other proprietary software.
* is buggy.
* cannot be fixed.
* costs far more than it is worth.
* can do anything behind your back without you knowing.
* is insecure.
* tries to be better than other proprietary software without meeting real technical needs.
* wastes a lot of time duplicating the effort of other proprietary software.
* often does not build on existing software because of licensing issues or ignorance.

GNU software, on the other hand, is open for anyone to scrutinize. Users can (and do) freely fix and enhance software for their own needs, then let others benefit from their extensions. Many developers with different expertise collaborate to find the best way of doing things. Open industry and academic standards are adhered to, making software consistent and compatible. Collaboration between different developers means that code is shared and effort is not duplicated. Users have close and direct contact with developers, ensuring that bugs are fixed quickly and users' needs are met. Because source code can be viewed by anyone, developers write code more carefully and are more meticulous.

Another partial reason for this superiority is that GNU software is often written by people from academic institutions who are at the centre of IT research and are well qualified to design software solutions. In other cases, authors write software for their own use out of dissatisfaction with existing proprietary software - a powerful motivation.

Backup Capabilities of Zmanda
* ZRM for MySQL can back up multiple MySQL databases that are managed by a MySQL server.
* It can back up multiple databases hosted on multiple MySQL servers.
* It can also back up individual tables in a single database.
* It can perform hot backups of the databases.
* It supports multiple backup methods depending on the storage engine used by the MySQL tables.
* It has two levels of backups: full and incremental database backups.
* It can use mysqldump, mysqlhotcopy, snapshots (Linux LVM/Solaris ZFS), and MySQL replication as backup methods.
* It creates consistent backups of the database irrespective of the storage engines used by the database's tables.
* It supports SSL authentication between the local ZRM for MySQL and a remote MySQL server to allow secure backups over the Internet or across firewalls.
* It can verify backed-up data images.
* Backup images can be compressed as well as encrypted using standard tools such as gzip, GPG, etc.
* The system administrator can abort backup jobs.

Recovery Capabilities
* ZRM for MySQL makes it easy to recover backed up data.
* It supports the use of a backup index that stores information about each backup run.
* It has a reporting tool that can be used for browsing the index.
* It can recover full and incremental database backups.
* It does selective incremental restores based on binary log position or a point in time. This permits recovery from database operator errors.
o Such a point could be a point in time or a point in the binary logs of the Database.
o ZRM for MySQL provides an easy way to filter in / filter out database events from the binary logs.
o This helps in deciding what to restore and what to keep out.
* Depending on the type of backups you have been doing, the backed up data could be recovered on to the same machine or to an entirely different machine.

If somebody accidentally drops a critical table in MySQL, the application no longer works. The solution to this problem is to utilize the (open source) Zmanda Recovery Manager.

You are a MySQL database administrator. You take regular backups of your MySQL database. Somebody drops a table critical to the MySQL application (for example, the "accounts" table in a SugarCRM application). The MySQL application no longer works. How can you recover from the situation?

The answer is MySQL binary logs. Binary logs track all updates to the database with minimal impact on database performance. MySQL binary logs have to be enabled on the server. You can use the mysqlbinlog MySQL command to recover from the binary logs.
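For example, to replay everything in a binary log up to (but not including) a bad statement at a known log position, mysqlbinlog's output can be piped back into the server. A sketch (the binlog file name and position are illustrative):

```
# Replay all events up to, but not including, log position 11159
mysqlbinlog --stop-position=11159 /var/lib/mysql/mysql-bin.000001 | mysql -u root -p
```

Events after the offending statement can be replayed similarly with --start-position.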

A more comprehensive solution is to use the Zmanda Recovery Manager for MySQL. The mysql-zrm tool allows users to browse the binary logs and selectively restore the database from incremental backups:

Here we are doing a selective recovery from incremental backups, skipping the DROP of the customer table in the SugarCRM database. Run two selective restore commands to restore from the incremental backup taken on Sept 15, 2006, without executing the database event DROP TABLE at log position 11159:

Let me start off by saying that openSUSE 11.0 is the best Linux distribution I have ever used. There are some rough edges surrounding KDE 4, but the package management in openSUSE 11.0 makes huge strides over that offered in previous versions. If you want to get up and running with openSUSE 11.0 then there are likely a few customizations you’ll want to make.

Setup Multimedia
This is a perennial setup step on Linux distributions. We’ll install the codecs needed to watch DVDs, handle MP3s, etc. We’ll also set up Firefox to be able to handle Windows Media streams.

Install NVIDIA drivers
If you have an NVIDIA card, then you’ll want to install the drivers.

YaST > “Software” > “Software Repositories”

Click “Add”

Select “Community Repositories”

Select “NVIDIA Repository”

YaST > “Software” > “Software Management”

Install “nvidia-gfxG01-kmp-default”

Install CD ripper and ID3 tagger
For some reason, openSUSE 11.0 no longer ships with KAudioCreator or an ID3 tagger installed by default. My guess would be that they haven’t been ported to KDE4 yet, but they’re nice to have, so we’ll go ahead and install them anyway. We’ll also change KAudioCreator’s (stupid) default setting of not looking up CDDB information that hasn’t been cached on the local system.

YaST > “Software” > “Software Repositories”

Click “Add”

Select “Community Repositories”

Select “openSUSE BuildService - KDE:Community”

YaST > “Software” > “Software Management”

Install “kid3” and “kdemultimedia3-CD”

Open kaudiocreator

Select “Settings” > “Configure KAudioCreator …” > “CDDB”

Set lookup to “Cache and remote”

Upgrade WINE
WINE is continuing to evolve and getting closer every day to reaching maturity. You’ll likely want the latest version instead of the one that was the latest when openSUSE shipped.

YaST > “Software” > “Software Repositories”

Click “Add”

Select “Community Repositories”

Select “openSUSE BuildService - Wine CVS Builds”

YaST > “Software” > “Software Management”

Do a search for wine and click the check mark until version upgrade is selected

Setup a static IP address
Having a static IP address is very nice when you want to remote desktop to your server or access it in some other way without worrying about what the IP address is. There may also need to be some configuration done on your router for this one. Or you may prefer to investigate DHCP reservations if your router supports them.

YaST > “Network Devices” > “Network Settings”

Under “Overview”, select your network card and click “Edit”

Enter your static IP and save it


Under “Hostname/DNS”, enter your DNS servers and hit “Finish”

Setup remote desktop through NX
The two main remote desktop systems for Linux are VNC and NX. NX is much faster, and KDE’s VNC server, KRfb, is broken in openSUSE 11.0. An NX server ships with openSUSE 11.0, but we want to install at least version 3.0 in order to do desktop sharing. We’ll also open the SSH port in the firewall (NX is built on top of SSH) so that we can connect from another machine.

Copy the contents of “/usr/NX/share/keys/default.id_dsa.key” into the key window and save it

Open “/usr/NX/etc/server.cfg”

Change line 563 from ‘EnableSessionShadowingAuthorization = “1”’ to ‘EnableSessionShadowingAuthorization = “0”’, which will enable you to select “Shadow” in the client under the “General” tab’s “Desktop” framebox if you’d like to do desktop sharing

YaST > “Security and Users” > “Firewall” > “Allowed Services”

Allow “Secure Shell Server”

Setup Network File Share using Samba

YaST > “Software” > “Software Management”

Install “samba” if it is not already installed

YaST > “Network Services” > “Samba Server”

Change sharing settings as you’d like and hit “Finish”

Add a user to Samba by running “smbpasswd -a username” where username is the user you’d like to create.

OpenVAS stands for Open Vulnerability Assessment System and is a network security scanner with associated tools like a graphical user front-end. The core component is a server with a set of network vulnerability tests (NVTs) to detect security problems in remote systems and applications.

About OpenVAS Server

The OpenVAS Server is the core application of the OpenVAS project. It is a scanner that runs many network vulnerability tests against many target hosts and delivers the results. It uses a communication protocol to have client tools (graphical end-user or batched) connect to it, configure and execute a scan and finally receive the results for reporting. Tests are implemented in the form of plugins which need to be updated to cover recently identified security issues.

The server consists of 4 modules: openvas-libraries, openvas-libnasl, openvas-server and openvas-plugins. All need to be installed for a fully functional server.

OpenVAS server is a forked development of Nessus 2.2. The fork happened because the major development (Nessus 3) changed to a proprietary license model and the development of Nessus 2.2.x is practically closed for third party contributors. OpenVAS continues as Free Software under the GNU General Public License with a transparent and open development style.
About OpenVAS-Client

OpenVAS-Client is a terminal and GUI client application for both OpenVAS and Nessus. It implements the Nessus Transfer Protocol (NTP). The GUI is implemented using GTK+ 2.4 and allows for managing network vulnerability scan sessions. OpenVAS-Client is the successor of NessusClient 1.X.

The fork happened with NessusClient CVS HEAD 20070704. The reason was that the original authors of NessusClient decided to stop active development for this (GTK-based) NessusClient in favor of a newly written QT-based version released as proprietary software.

OpenVAS-Client is released under GNU GPLv2 and may be linked with OpenSSL.

Grendel-Scan is an open-source web application security testing tool. It has an automated testing module for detecting common web application vulnerabilities, and features geared toward aiding manual penetration tests. The only system requirement is Java 5; Windows, Linux, and Macintosh builds are available.

SQL Dump
The idea behind the SQL-dump method is to generate a text file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:

pg_dump dbname > outfile

As you can see, pg_dump writes its results to standard output.

Restoring the dump
The text files created by pg_dump are intended to be read in by the psql program. The general command form to restore a dump is

psql dbname < infile

where infile is what you used as outfile for the pg_dump command. The database dbname will not be created by this command; you must create it yourself before running psql.
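A minimal round trip might look like this (the database names are examples, and both commands need access to a running PostgreSQL server):

```
# Dump the source database to a plain SQL file
pg_dump mydb > mydb.sql

# Create an empty target database, then feed the dump back in
createdb newdb
psql newdb < mydb.sql
```

pg_dump can also be piped straight into psql to copy a database between servers without an intermediate file.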

If you need to simulate a load on an Apache server (or any web server actually), you can use Apache Bench, which is included in the standard Apache HTTPd distribution. This tool will launch connections to your webserver as instructed to simulate multiple users and will help you to tune your Apache settings.

You can find the synopsis at the Apache website. Most common options are :

* -n : number of requests to perform
* -c : number of concurrent requests

Other options allow you to control precisely the request to send, proxy settings, user authentication, cookies and much more.
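A typical invocation (the URL and numbers are examples, assuming a web server is listening on localhost):

```
ab -n 1000 -c 10 http://localhost/
```

This sends 1000 requests in total, at most 10 concurrently, and reports throughput (requests per second), latency percentiles, and the number of failed requests.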

For security reasons, you may need to block certain users from SSH access to a Linux box.

Edit the sshd_config file; the location sometimes differs depending on the Linux distribution, but it’s usually in /etc/ssh/.

Open the file up while logged on as root:

vi /etc/ssh/sshd_config

Insert a line:

DenyUsers username1 username2 username3 username4

Referring to man sshd_config:

DenyUsers
        This keyword can be followed by a list of user name patterns,
        separated by spaces. Login is disallowed for user names that
        match one of the patterns. '*' and '?' can be used as wildcards
        in the patterns. Only user names are valid; a numerical user ID
        is not recognized. By default, login is allowed for all users.
        If the pattern takes the form USER@HOST then USER and HOST are
        separately checked, restricting logins to particular users from
        particular hosts.
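After saving sshd_config, the SSH daemon must reread its configuration before the change takes effect. On most distributions of that era this was done with the init script (the path may vary by distribution):

```
/etc/init.d/sshd restart
```

Note that existing sessions for the denied users are not terminated; only new logins are refused.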

While Kiba-dock focuses on launchers, AWN supports both launchers and a task list.
While both of them provide you with a fancy desktop, they require compositing (e.g. Compiz Fusion) to be enabled. So be aware if you have an old system.

nscd (Name Service Cache Daemon) is part of the GNU C Library: a daemon that handles passwd, group, and host lookups for running programs and caches the results for the next query. You should install this package only if you use slow name services like LDAP, NIS, or NIS+.

The nscd service comes as part of glibc , which means every Linux distribution will provide it. It is also extremely simple to set up. Once installed, edit the /etc/nscd.conf file to look similar to this:

enable-cache passwd yes
positive-time-to-live passwd 3600
negative-time-to-live passwd 60
suggested-size passwd 211
check-files passwd yes
persistent passwd yes
shared passwd yes

enable-cache group yes
positive-time-to-live group 3600
negative-time-to-live group 60
suggested-size group 211
check-files group yes
persistent group yes
shared group yes

enable-cache hosts no

Now start the nscd service. The above configuration tells nscd to cache group and passwd entries and to let them persist for 3600 seconds.

Once nscd has started and has a few cached entries under its belt, you will be able to keep using the system as if you were on the network even after you disconnect from it -- apart from accessing shares and printers, using Kerberos, and starting new login sessions.

Quotas are defined per-filesystem. Most distros support quotas, although not all do it out-of-the-box, and you may have to install the quota package. To enable quota support, edit /etc/fstab as root and add the usrquota and grpquota options to the filesystems you wish to enable quota support for. For instance:

/dev/hda3 /home ext3 defaults,nosuid,nodev,usrquota,grpquota 1 2

Once you have made the changes, remount the filesystem(s) you have changed:

# mount -o remount /home

To check that quota support is indeed enabled, execute:

# quotacheck -augmv

This will instruct quotacheck to check all filesystems for user and group quotas without remounting them as read-only. Now you can enable quotas with the quotaon command:

# quotaon -augv

Once quotas have been turned on, use edquota to edit the quotas for a particular user:

# edquota -u nikesh

This will open the default system editor (usually vim) where you can edit the hard and soft limits for both blocks and inodes for each filesystem that supports quotas.
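The editor buffer looks roughly like the following (the values shown are illustrative); setting a soft or hard limit to 0 disables it:

```
Disk quotas for user nikesh (uid 1001):
  Filesystem    blocks    soft    hard  inodes  soft  hard
  /dev/hda3      24000  500000  550000    1500     0     0
```

Blocks limits cap total disk usage, while inodes limits cap the number of files the user may create.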

You can then view current quota usage by using the repquota tool:

# repquota -a

Once a soft quota has been exceeded, the user is notified that they have exceeded their quota but can continue writing to the filesystem until they reach the hard quota, at which point any new files created will be 0 bytes in size.

This guide will help newbies set up a fully working LAMP (Linux, Apache, MySQL, PHP) server on Ubuntu 8.04 Hardy Heron. This will allow you to use various PHP applications such as the popular phpBB forums and the WordPress blog, in addition to basic HTML pages and files.

Install Packages

First, install the required packages by typing the following into the terminal:
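The exact package list is missing from this excerpt; on Ubuntu 8.04, a typical LAMP installation (package names as they appear in Hardy's repositories) would be:

```
sudo apt-get update
sudo apt-get install apache2 mysql-server php5 libapache2-mod-php5 php5-mysql
```

Alternatively, `sudo tasksel install lamp-server` installs the same stack as a predefined task.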

Wine-doors is an application designed to make installing Windows software on Linux, Solaris, or other Unix systems easier. Wine-doors is essentially a package management tool for Windows software on Linux systems. Most Linux desktop users are familiar with package-management-style application delivery, so it made sense to apply this model to Windows software.

Wine-doors has a community that constantly tests existing Windows applications for compatibility with Wine and adds them to the Wine-doors repository. These applications are then available to install with a single click, using a Synaptic-like package manager interface. They are also known as application packs.

Locate the line #tcpip_socket = false and change it to tcpip_socket = true.

By default, user credentials are not set for MD5 client authentication. So, it is first necessary to configure the PostgreSQL server to use trust client authentication, connect to the database, configure the password, and then revert the configuration back to MD5 client authentication. To enable trust client authentication, edit the file /etc/postgresql//main/pg_hba.conf

Comment out all the existing lines which use ident and MD5 client authentication and add the following line:

local all postgres trust

Then, run the following command to start the PostgreSQL server:

sudo /etc/init.d/postgresql start

Once the PostgreSQL server has started successfully, run the following command at a terminal prompt to connect to the default PostgreSQL template database:

psql -U postgres -d template1

The above command connects to PostgreSQL database template1 as user postgres. Once you connect to the PostgreSQL server, you will be at a SQL prompt. You can run the following SQL command at the psql prompt to configure the password for the user postgres.

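The SQL statement itself is missing from this excerpt; the usual form (replace the password placeholder with your own) is:

```
ALTER USER postgres WITH ENCRYPTED PASSWORD 'your_password';
```

Afterwards, edit pg_hba.conf back to MD5 client authentication and restart PostgreSQL so that password logins are enforced.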
AIDE (Advanced Intrusion Detection Environment) is a free replacement for Tripwire(tm). It generates a database that can be used to check the integrity of files on a server. It uses regular expressions to determine which files get added to the database. You can use several message digest algorithms to ensure that the files have not been tampered with.

The default configuration of AIDE is fine, but we are going to tweak it slightly.

Send the report

Reports, which are created once a day, can be sent to a custom address. You need to change the MAILTO variable to whichever address you like; the default is to send them to root on localhost.
To change it, open and edit /etc/default/aide

Configuring aide

Most AIDE configuration is in the file /etc/aide/aide.conf. This file is pretty well documented and the default rules are decent, but we are going to make some slight changes.

AIDE aims at reporting files that have changed since the last snapshot (/var/lib/aide/aide.db). A good security measure is to keep that file on a read-only device such as a floppy disk or a CD-ROM. If your machine has such a device, you can use the snapshot from it. So let's say that you have a copy of aide.db on a CD-ROM.

To use that snapshot, you could change:

database=file:/var/lib/aide/aide.db
to
database=file:/media/cdrom/aide.db

That way, if an intruder gets into your machine, he won’t be able to modify aide.db.

By default, AIDE checks for changes in the binaries and libraries directories. Those changes are matched against the BinLib rule, which basically checks for any changes in permissions, ownership, modification, access and creation dates, size, md5 and sha1 signatures, inode, number of links, and block count. It also checks for modifications in the log files against the Logs rule. Because log files tend to grow, you cannot use a signature there, and you also have to ask aide not to check for size modification (S). Okay, this should be enough to understand how aide works. Reading through /etc/aide/aide.conf is a good place to learn more.

To make aide also watch /etc/, add the line "/etc ConfFiles" to /etc/aide/aide.conf; this will check for changes in /etc/.

Updating aide

aide is run on a daily basis through the script /etc/cron.daily/aide. The default settings in /etc/default/aide tell aide to update its database. Using the database_out value in /etc/aide/aide.conf, aide outputs a new database each time it runs, to /var/lib/aide/aide.db.new if you keep the default settings.

Any time you install new packages or change configuration settings, it is worth using an up-to-date database so aide won't report changes or additions in /etc/mynewsoft, /bin/mynewsoft, and so on.
So, when you install new software or make configuration changes, run:

# /etc/cron.daily/aide

Then, check in the report that modifications were only brought to the files you intended to modify and that added files are only coming from packages you have just installed.

Once you are sure that everything is fine, copy the new database to wherever your database points to (CD-ROM, floppy, somewhere on your filesystem, ...). This way, you will get lighter reports the next time aide runs.

There are cases when you get a lot of error messages about "running out of file handles"; increasing this limit can solve the issue. To change the value, just write the new number into the file as below:

# cat /proc/sys/fs/file-max
8192

# echo 943718 > /proc/sys/fs/file-max

# cat /proc/sys/fs/file-max
943718

This value can also be changed using the "sysctl" command. To make the change permanent, add the entry to /etc/sysctl.conf
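For example, using the same value as above:

```
# Change the limit at runtime
sysctl -w fs.file-max=943718

# To make it permanent, add this line to /etc/sysctl.conf:
#   fs.file-max = 943718
```

Settings in /etc/sysctl.conf are applied at boot (or with `sysctl -p`).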

Read (r)

Users can view the contents of the file.

Users can view the contents of the directory. Without this permission, users cannot list the contents of the directory with ls -l, for example. However, if they only have execute permission for the directory, they can nevertheless access certain files in it if they know of their existence.

Write (w)

Users can change the file: They can add or drop data and can even delete the contents of the file. However, this does not include the permission to remove the file completely from the directory as long as they do not have write permissions for the directory where the file is located.

Users can create, rename or delete files in the directory.

Execute (x)

Users can execute the file. This permission is only relevant for files like programs or shell scripts, not for text files. If the operating system can execute the file directly, users do not need read permission to execute it. However, if the file must be interpreted, like a shell script or a Perl program, read permission is also needed.

Users can change into the directory and execute files there. If they do not have read access to that directory, they cannot list the files, but can access them nevertheless if they know of their existence.

Tcpdump is a popular computer network debugging and security tool which allows the user to intercept and display TCP/IP packets being transmitted or received over a network to which the computer is attached. Tcpdump allows us to precisely see all the traffic and enables us to create statistical monitoring scripts.

On an Ethernet segment, tcpdump operates by putting the network card into promiscuous mode in order to capture all the packets going through the wire. Using tcpdump, we have a view of any TCP/UDP connection establishment and termination, and we can measure response times and packet loss percentages.

Some simple usage:

To print all packets arriving at or departing from 192.168.0.2:
# tcpdump -n host 192.168.0.2

To print traffic between 192.168.0.2 and either 10.0.0.4 or 10.0.0.5:
# tcpdump -n host 192.168.0.2 and \( 10.0.0.4 or 10.0.0.5 \)

To print all IP packets between 192.168.0.2 and any host except 10.0.0.5:
# tcpdump -n ip host 192.168.0.2 and not 10.0.0.5

To print all traffic between local hosts and hosts at Berkeley:
# tcpdump net ucb-ether

"Since everybody seems to be having fun building new filesystems these days, I thought I should join the party," began Daniel Phillips, announcing the Tux3 versioning filesystem. He continued, "Tux3 is a write anywhere, atomic commit, btree based versioning filesystem. As part of this work, the venerable HTree design used in Ext3 and Lustre is getting a rev to better support NFS and possibly become more efficient." Daniel explained:

"The main purpose of Tux3 is to embody my new ideas on storage data versioning. The secondary goal is to provide a more efficient snapshotting and replication method for the Zumastor NAS project, and a tertiary goal is to be better than ZFS."

In his announcement email, Daniel noted that implementation work is underway, "much of the work consists of cutting and pasting bits of code I have developed over the years, for example, bits of HTree and ddsnap. The immediate goal is to produce a working prototype that cuts a lot of corners, for example block pointers instead of extents, allocation bitmap instead of free extent tree, linear search instead of indexed, and no atomic commit at all. Just enough to prove out the versioning algorithms and develop new user interfaces for version control."

The question, of course, is which filesystem will make the cut: Tux3 is still at the beginning, while Btrfs could see a first beta in the coming months. But there are still rumors that ZFS might be released under the GPL, and Hammer could also be ported to Linux.