
Unix Host Security: Hacks 11-20

Security isn’t a noun, it’s a verb; not a product, but a process. Today, learn hacks for reducing the risks of offering services on a Unix-based system. This is the second part of Chapter 1 of Network Security Hacks, by Andrew Lockhart (ISBN 0-596-00643-8, O’Reilly & Associates, 2004).

Use proftpd with a MySQL Authentication Source Hack #11

Lock down FTP access by authenticating users against a MySQL database rather than against local system accounts.

proftpd is a powerful FTP daemon with a configuration syntax much like Apache. It has a whole slew of options not available in most FTP daemons, including ratios, virtual hosting, and a modularized design that allows people to write their own modules.

One such module is mod_sql, which allows proftpd to use a SQL database as its back-end authentication source. Currently, mod_sql supports MySQL and PostgreSQL. This can be a good way to help lock down access to your server, as inbound users will authenticate against the database (and therefore not require an actual shell account on the server). In this hack, we’ll get proftpd authenticating against a MySQL database.
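The directives discussed next live in proftpd.conf. A minimal sketch follows; the database name and credentials are placeholders for your own, and the 111 values match the www user described below:

```
# Hypothetical mod_sql section; adjust names and values to your site.
SQLConnectInfo  proftpd somebody somepassword
SQLAuthTypes    Crypt Backend
SQLMinUserGID   111
SQLMinUserUID   111
```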

The SQLConnectInfo line takes the form database user password. You could also specify a database on another host (even on another port) with something like:

SQLConnectInfo proftpd@dbhost:5678 somebody somepassword

The SQLAuthTypes line lets you create users with passwords stored in the standard Unix crypt format or with MySQL’s PASSWORD( ) function. Be warned that if you’re using mod_sql’s logging facilities, the password may be exposed in plain text, so keep those logs private.

The SQLAuthTypes line as specified won’t allow blank passwords; if you need that functionality, also include the empty keyword. The SQLMinUserGID and SQLMinUserUID lines specify the minimum group and user ID that proftpd will permit on login. It’s a good idea to make this greater than 0 (to prohibit root logins), but it should be as low as you need to allow proper permissions in the filesystem. On this system, we have a user and group called www, with both its uid and gid set to 111. As we’ll want web developers to be able to log in with these permissions, we’ll need to set the minimum values to 111.

Finally, we’re ready to create users in the database. This will create the user jimbo, with effective user rights as www/www, and dump him in the /usr/local/apache/htdocs/ directory at login:
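Assuming a users table whose columns are named userid, passwd, uid, gid, homedir, and shell (these names are an assumption and must match your mod_sql column mapping), the INSERT might look roughly like:

```sql
-- Column names here are illustrative; adjust them to your schema.
INSERT INTO users (userid, passwd, uid, gid, homedir, shell)
VALUES ('jimbo', PASSWORD('sHHH'), 111, 111,
        '/usr/local/apache/htdocs', '/bin/bash');
```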

The password for jimbo is encrypted with MySQL’s PASSWORD( ) function before being stored. The /bin/bash entry is passed to proftpd to satisfy proftpd’s RequireValidShell directive; it has no bearing on granting actual shell access to the user jimbo.

At this point, you should be able to fire up proftpd and log in as user jimbo, with a password of sHHH. If you are having trouble getting connected, try running proftpd in the foreground with debugging on, like this:

# proftpd -n -d 5

Watch the messages as you attempt to connect, and you should be able to track down the source of difficulty. In my experience, it’s almost always due to a failure to set something properly in proftpd.conf, usually regarding permissions.

The mod_sql module can do far more than I’ve shown here; it can connect to existing MySQL databases with arbitrary table names, log all activity to the database, modify its user lookups with an arbitrary WHERE clause, and much more.

Prevent Stack-Smashing Attacks Hack #12

In C and C++, memory for local variables is allocated in a chunk of memory called the stack. Information pertaining to the control flow of a program is also maintained on the stack. If an array is allocated on the stack and that array is overrun (that is, more values are pushed into the array than the available space provides), an attacker can overwrite the control flow information that is also stored on the stack. This type of attack is often referred to as a stack-smashing attack.

Stack-smashing attacks are a serious problem, since an otherwise innocuous service (such as a web server or FTP server) can be made to execute arbitrary commands. Several technologies have been developed that attempt to protect programs against these attacks. Some are implemented in the compiler, such as IBM’s ProPolice (http://www.trl.ibm.com/projects/security/ssp/) and the StackGuard (http://www.immunix.org/stackguard.html) versions of GCC. Others are dynamic runtime solutions, such as LibSafe (http://www.research.avayalabs.com/project/libsafe/). While recompiling the source gets to the heart of the buffer overflow attack, runtime solutions can protect programs when the source isn’t available or recompiling simply isn’t feasible.

All of the compiler-based solutions work in much the same way, although there are some differences in the implementations. They work by placing a “canary” (which is typically some random value) on the stack between the control flow information and the local variables. The code that is normally generated by the compiler to return from the function is modified to check the value of the canary on the stack; if it is not what it is supposed to be, the program is terminated immediately.

The idea behind using a canary is that an attacker attempting to mount a stack-smashing attack will have to overwrite the canary to overwrite the control flow information. By choosing a random value for the canary, the attacker cannot know what it is and thus cannot include it in the data used to “smash” the stack.

When a program is distributed in source form, the developer of the program cannot enforce the use of StackGuard or ProPolice, because they are both nonstandard extensions to the GCC compiler. It is the responsibility of the person compiling the program to make use of one of these technologies.

For Linux systems, Avaya Labs’s LibSafe technology is not implemented as a compiler extension, but instead takes advantage of a feature of the dynamic loader that causes a dynamic library to be preloaded with every executable. Using LibSafe does not require the source code for the programs it protects, and it can be deployed on a system-wide basis.
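In practice, system-wide deployment means telling the dynamic loader to preload the library for every dynamically linked program, typically by listing it in /etc/ld.so.preload. The install path shown is an assumption; check where your LibSafe package actually puts the library:

```
/lib/libsafe.so.2
```

For a single session, exporting LD_PRELOAD with the same path achieves the same effect for programs started from that shell.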

LibSafe replaces the implementation of several standard functions that are known to be vulnerable to buffer overflows, such as gets( ), strcpy( ), and scanf( ). The replacement implementations attempt to compute the maximum possible size of a statically allocated buffer used as a destination buffer for writing, using a GCC built-in function that returns the address of the frame pointer. That address is normally the first piece of information on the stack following local variables. If an attempt is made to write more than the estimated size of the buffer, the program is terminated.

Unfortunately, there are several problems with the approach taken by LibSafe. First, it cannot accurately compute the size of a buffer; the best it can do is limit the size of the buffer to the difference between the start of the buffer and the frame pointer. Second, LibSafe’s protections will not work with programs that were compiled using the -fomit-frame-pointer flag to GCC, an optimization that causes the compiler not to put a frame pointer on the stack. Although it provides little real benefit, this is a popular optimization for programmers to employ. Finally, LibSafe will not work on SUID binaries without static linking or a similar trick.

In addition to providing protection against conventional stack-smashing attacks, the newest versions of LibSafe also provide some protection against format-string attacks. The format-string protection also requires access to the frame pointer because it attempts to filter out arguments that are not pointers into either the heap or the local variables on the stack.

In addition to user-space solutions, you can also opt to patch your kernel to use nonexecutable stacks and detect buffer overflow attacks. We’ll do just that in “Lock Down Your Kernel with grsecurity” [Hack #13].

If you’ve enjoyed what you’ve seen here, or to get more information, click on the “Buy the book!” graphic. Pick up a copy today!

Lock Down Your Kernel with grsecurity Hack #13

Hardening a Unix system can be a difficult process. It typically involves setting up all the services that the system will run in the most secure fashion possible, as well as locking down the system to prevent local compromises. However, putting effort into securing the services that you’re running does little for the rest of the system and for unknown vulnerabilities. Luckily, even though the standard Linux kernel provides few features for proactively securing a system, there are patches available that can help the enterprising system administrator do so. One such patch is grsecurity (http://www.grsecurity.net).

grsecurity started out as a port of the OpenWall patch (http://www.openwall.com) to the 2.4.x series of Linux kernels. This patch added features such as nonexecutable stacks, some filesystem security enhancements, restrictions on access to /proc, as well as some enhanced resource limits. These features helped to protect the system against stack-based buffer overflow attacks, prevented filesystem attacks involving race conditions on files created in /tmp, limited a user to only seeing his own processes, and even enhanced Linux’s resource limits to perform more checks.

Since its inception, grsecurity has grown to include many features beyond those provided by the OpenWall patch. grsecurity now includes many additional memory address space protections to prevent buffer overflow exploits from succeeding, as well as enhanced chroot( ) jail restrictions, increased randomization of process and IP IDs, and increased auditing features that enable you to track every process executed on a system. grsecurity also adds a sophisticated access control list (ACL) system that makes use of Linux’s capabilities system. This ACL system can be used to limit the privileged operations that individual processes are able to perform on a case-by-case basis.

Configuration of ACLs is handled through the gradm utility. If you already have grsecurity installed on your machine, feel free to skip ahead to “Restrict Applications with grsecurity” [Hack #14].

To compile a kernel with grsecurity, you will need to download the patch that corresponds to your kernel version and apply it to your kernel using the patch utility.
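The commands look something like this sketch; the kernel path and patch filename are placeholders (use whatever patch matches your source tree, and note that the -p strip level depends on how the diff was generated):

```
# cd /usr/src/linux
# patch -p1 < ~/grsecurity-x.y.z.patch
```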

While the command is running, you should see a line for each kernel source file that is being patched. After the command has finished, you can make sure that the patch applied cleanly by looking for any files that end in .rej. The patch program creates these when it cannot apply the patch cleanly to a file. A quick way to see if there are any .rej files is to use the find command:

# find ./ -name '*.rej'

If there are any rejected files, they will be listed on the screen. If the patch applied cleanly, you should be returned back to the shell prompt without any additional output.
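To see the mechanics in miniature, here is a self-contained sketch using synthetic files in a temporary directory (nothing below touches a real kernel tree): a hunk that cannot apply is rejected, and find then locates the resulting .rej file.

```shell
cd "$(mktemp -d)"
printf 'old line\n' > file.c
# This hunk expects a line that file.c does not contain, so patch
# will fail and save the rejected hunk to file.c.rej.
cat > bad.patch <<'EOF'
--- a/file.c
+++ b/file.c
@@ -1 +1 @@
-different line
+new line
EOF
patch -p1 < bad.patch || true
find ./ -name '*.rej'
```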

After the patch has been applied, you can configure the kernel to enable grsecurity’s features by running make config to use text prompts, make menuconfig for a curses-based interface, or make xconfig to use a Tk-based GUI. If you went the graphical route and used make xconfig, you should then see a dialog similar to Figure 1-1. If you ran make menuconfig or make config, the relevant kernel options have the same name as the menu options described in this example.

To configure which grsecurity features will be enabled in the kernel, click the button labeled Grsecurity. After doing that, you should see a dialog similar to Figure 1-2.

To enable grsecurity, click the y radio button. After you’ve done that, you can enable predefined sets of features with the Security Level drop-down list, or set it to Custom and go through the menus to pick and choose which features to enable.

Choosing Low is safe for any system and should not affect any software’s normal operation. Using this setting will enable linking restrictions in directories with mode 1777. This prevents race conditions in /tmp from being exploited, by only following symlinks to files that are owned by the process following the link. Similarly, users won’t be able to write to FIFOs that they do not own if they are within a directory with permissions of 1777.

In addition to the tighter symlink and FIFO restrictions, the Low setting increases the randomness of process and IP IDs. This helps to prevent attackers from using remote detection techniques to correctly guess the operating system your machine is running (as in “Block OS Fingerprinting” [Hack #40]), and it also makes it difficult to guess the process ID of a given program. The Low security level also forces programs that use chroot( ) to change their current working directory to / after the chroot( ) call. Otherwise, if a program left its working directory outside of the chroot environment, it could be used to break out of the sandbox. Choosing the Low security level also prevents nonroot users from using dmesg, a utility that can be used to view recent kernel messages.

Choosing Medium enables all of the same features as the Low security level, but this level also includes features that make chroot( )-based sandboxed environments more secure. The ability to mount filesystems, call chroot( ), write to sysctl variables, or create device nodes within a chrooted environment are all restricted, thus eliminating much of the risk involved in running a service in a sandboxed environment under Linux. In addition, TCP source ports will be randomized, and failed fork( ) calls, changes to the system time, and segmentation faults will all be logged. Enabling the Medium security level will also restrict total access to /proc to those who are in the wheel group. This hides each user’s processes from other users and denies writing to /dev/kmem, /dev/mem, and /dev/port. This makes it more difficult to patch kernel-based root kits into the running kernel. Also, process memory address space layouts are randomized, making it harder for an attacker to successfully exploit buffer overrun attacks. Because of this, information on process address space layouts is removed from /proc as well. Because of these /proc restrictions, you will need to run your identd daemon (if you are running one) as an account that belongs to the wheel group. According to the grsecurity documentation, none of these features should affect the operation of your software, unless it is very old or poorly written.

To enable nearly all of grsecurity’s features, you can choose the High security level. In addition to the features provided by the lower security levels, this level implements additional /proc restrictions by limiting access to device and CPU information to users who are in the wheel group. Sandboxed environments are also further restricted by disallowing chmod to set the SUID or SGID bit when operating within such an environment. Additionally, applications that are running within such an environment will not be allowed to insert loadable modules, perform raw I/O, configure network devices, reboot the system, modify immutable files, or change the system’s time. Choosing this security level will also cause the kernel’s stack to be laid out randomly, to prevent kernel-based buffer overrun exploits from succeeding. In addition, the kernel’s symbols will be hidden—making it even more difficult for an intruder to install Trojan code into the running kernel—and filesystem mounting, remounting, and unmounting will be logged.

The High security level also enables grsecurity’s PaX code, which enables nonexecutable memory pages. Enabling this will cause many buffer overrun exploits to fail, since any code injected into the stack through an overrun will be unable to execute. However, it is still possible to exploit a program with buffer overrun vulnerabilities, although this is made much more difficult by grsecurity’s address space layout randomization features. PaX can also carry with it some performance penalties on the x86 architecture, although they are said to be minimal. In addition, some programs—such as XFree86, wine, and Java™ virtual machines—will expect that the memory addresses returned by malloc( ) will be executable. Unfortunately, PaX breaks this behavior, so enabling it will cause those programs and others that depend on it to fail. Luckily, PaX can be disabled on a per-program basis with the chpax utility (http://chpax.grsecurity.net).

To disable PaX for a program, you can run a command similar to this one:

# chpax -ps /usr/bin/java

There are also other programs that make use of special GCC features such as trampoline functions. A trampoline allows a programmer to define a small function within another function, so that the inner function is visible only within the scope of the function in which it is defined. Unfortunately, GCC puts the trampoline function’s code on the stack, so PaX will break any programs that rely on it. However, PaX can provide emulation for trampoline functions, which can likewise be enabled on a per-program basis with chpax, using the -E switch.
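For example, to turn on trampoline emulation for a single binary (the path here is hypothetical):

```
# chpax -E /usr/local/bin/someprog
```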

If you do not like the sets of features that are enabled with any of the predefined security levels, you can just set the kernel option to “custom” and enable only the features you need.

After you’ve set a security level or enabled the specific options you want to use, just recompile your kernel and modules as you normally would. You can do that with commands similar to these:

# make dep clean && make bzImage
# make modules && make modules_install

Then reboot with your new kernel. In addition to the kernel restrictions already in effect, you can now use gradm to set up ACLs for your system. We’ll see how to do that in “Restrict Applications with grsecurity” [Hack #14].

As you can see, grsecurity is a complex but tremendously useful modification of the Linux kernel. For more detailed information on installing and configuring the patches, consult the extensive documentation at http://www.grsecurity.net/papers.php.


Restrict Applications with grsecurity Hack #14

Use Linux capabilities and grsecurity’s ACLs to restrict applications on your system.

Now that you have installed the grsecurity patches, you’ll probably want to make use of its flexible ACL system to further restrict the privileged applications on your system, beyond what grsecurity’s kernel security features provide. If you’re just joining us and are not familiar with grsecurity, read “Lock Down Your Kernel with grsecurity” [Hack #13] first.

To restrict specific applications, you will need to make use of the gradm utility, which can be downloaded from the main grsecurity site (http://www.grsecurity.net). You can compile and install it in the usual way: unpack the source distribution, change into the directory that it creates, and then run make && make install. This will install gradm in /sbin, create the /etc/grsec directory containing a default ACL, and install the manpage.

After gradm has been installed, the first thing you’ll want to do is create a password that gradm will use to authenticate itself to the kernel. You can do this by running gradm with the -P option:
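For example (gradm will prompt you to enter and confirm the new password):

```
# gradm -P
```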

Once you’re finished setting up your ACLs, you can enable the ACL system by running gradm -E. You’ll probably want to add that command to the end of your system startup, by appending it to /etc/rc.local or a similar script that is designated for customizing your system startup.

The default ACL installed in /etc/grsec/acl is quite restrictive, so you’ll want to create ACLs for the services and system binaries you want to use. For example, after the ACL system has been enabled, ifconfig will no longer be able to change interface characteristics, even when run as root.

The easiest way to set up an ACL for a particular command is to specify that you want to use grsecurity’s learning mode, rather than specifying each ACL manually. If you’ve enabled ACLs, you’ll need to temporarily disable them for your shell by running gradm -a. You’ll then be able to access files within /etc/grsec; otherwise, the directory will be hidden to you.

Add an entry like this to /etc/grsec/acl:

/sbin/ifconfig lo {
        /               h
        /etc/grsec      h
        -CAP_ALL
}

This is about the most restrictive ACL possible because it hides the root directory from the process and removes any privileges that it may need. The lo next to the binary to which the ACL applies says to use learning mode and to override the default ACL. After you’re done editing the ACLs, you’ll need to tell grsecurity to reload them by running gradm -R.

Now you can replace the learning ACL for /sbin/ifconfig in /etc/grsec/acl with the one that the learning mode generated, and ifconfig should work. You can then follow this process for each program that needs special permissions to function. Just make sure to try out anything you will want to do with those programs, to ensure that grsecurity’s learning mode will detect that it needs to perform a particular system call or open a specific file.

Using grsecurity to lock down applications can seem like tedious work at first, but it will ultimately create a system that gives each process only the permissions it needs to do its job—no more, no less. When you need to build a highly secured platform, grsecurity can provide very finely grained control over just about everything the system can possibly do.


Restrict System Calls with Systrace Hack #15

One of the more exciting new features in NetBSD and OpenBSD is systrace, a system call access manager. With systrace, a system administrator can specify which programs can make which system calls, and how those calls can be made. Proper use of systrace can greatly reduce the risks inherent in running poorly written or exploitable programs. Systrace policies can confine users in a manner completely independent of Unix permissions. You can even define the errors that the system calls return when access is denied, to allow programs to fail in a more proper manner. Proper use of systrace requires a practical understanding of system calls and what functionality programs must have to work properly.

First of all, what exactly are system calls? A system call is a function that lets you talk to the operating-system kernel. If you want to allocate memory, open a TCP/IP port, or perform input/output on the disk, you’ll need to use a system call. System calls are documented in section 2 of the manpages.

Unix also supports a wide variety of C library calls. These are often confused with system calls but are actually just standardized routines for things that could be written within a program. For example, you could easily write a function to compute square roots within a program, but you could not write a function to allocate memory without using a system call. If you’re in doubt whether a particular function is a system call or a C library function, check the online manual.

You may find an occasional system call that is not documented in the online manual, such as break( ). You’ll need to dig into other resources to identify these calls (break( ) in particular is a very old system call used within libc, but not by programmers, so it seems to have escaped being documented in the manpages).

Systrace denies all actions that are not explicitly permitted and logs the rejection using syslog. If a program running under systrace has a problem, you can find out which system call the program wants to use and decide if you want to add it to your policy, reconfigure the program, or live with the error.

Systrace has several important pieces: policies, the policy generation tools, the runtime access management tool, and the sysadmin real-time interface. This hack gives a brief overview of policies; in “Automated Systrace Policy Creation” [Hack #16], we’ll learn about the systrace tools.

The systrace(1) manpage includes a full description of the syntax used for policy descriptions, but I generally find it easier to look at some examples of a working policy and then go over the syntax in detail. Since named has been a subject of recent security discussions, let’s look at the policy that OpenBSD 3.2 provides for named.

Before reviewing the named policy, let’s review some commonly known facts about the name server daemon’s system-access requirements. Zone transfers and large queries occur on port 53/TCP, while basic lookup services are provided on port 53/UDP. OpenBSD chroots named into /var/named by default and logs everything to /var/log/messages.

Each systrace policy lives in a file named after the full path of the program it controls, with slashes replaced by underscores. The policy file usr_sbin_named contains quite a few entries that allow access beyond binding to port 53 and writing to the system log. The file starts with:

# Policy for named that uses named user and chroots to /var/named
# This policy works for the default configuration of named.
Policy: /usr/sbin/named, Emulation: native

The Policy statement gives the full path to the program this policy is for. You can’t fool systrace by giving the same name to a program elsewhere on the system. The Emulation entry shows which ABI this policy is for. Remember, BSD systems expose ABIs for a variety of operating systems. Systrace can theoretically manage system-call access for any ABI, although only native and Linux binaries are supported at the moment.

The remaining lines define a variety of system calls that the program may or may not use. The sample policy for named includes 73 lines of system-call rules. The most basic look like this:

native-accept: permit

When /usr/sbin/named tries to use the accept( ) system call to accept a connection on a socket, under the native ABI, it is allowed. Other rules are far more restrictive. Here’s a rule for bind( ), the system call that lets a program request a TCP/IP port to attach to:

native-bind: sockaddr match “inet-*:53” then permit

sockaddr is the name of an argument taken by the bind( ) system call. The match keyword tells systrace to compare the given variable with the string inet-*:53, according to the standard shell pattern-matching (globbing) rules. So, if the variable sockaddr matches the string inet-*:53, the call is permitted. This program can bind to port 53, over both TCP and UDP protocols. If an attacker had an exploit to make named attach a command prompt on a high-numbered port, this systrace policy would prevent that exploit from working.
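The policy’s chdir( ) rules, which the eq discussion below refers to, look like this (reconstructed from the stock OpenBSD named policy; your version may differ slightly):

```
native-chdir: filename eq "/" then permit
native-chdir: filename eq "/namedb" then permit
```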

The eq keyword compares one string to another and requires an exact match. If the program tries to go to the root directory, or to the directory /namedb, systrace will allow it. Why would you possibly want to allow named to access the root directory? The next entry explains why:

native-chroot: filename eq “/var/named” then permit

We can use the native chroot( ) system call to change our root directory to /var/named, but to no other directory. At this point, the /namedb directory is actually /var/named/namedb. We also know that named logs to syslog. To do this, it will need access to /dev/log:

native-connect: sockaddr eq “/dev/log” then permit

This program can use the native connect( ) system call to talk to /dev/log and only /dev/log. That device hands the connections off elsewhere.

Systrace aliases certain system calls with very similar functions into groups. You can disable this functionality with a command-line switch and only use the exact system calls you specify, but in most cases these aliases are quite useful and shrink your policies considerably. The two aliases are fsread and fswrite. fsread is an alias for stat( ), lstat( ), readlink( ), and access( ) under the native and Linux ABIs. fswrite is an alias for unlink( ), mkdir( ), and rmdir( ), in both the native and Linux ABIs. As open( ) can be used to either read or write a file, it is aliased by both fsread and fswrite, depending on how it is called. So named can read certain /etc files, it can list the contents of the root directory, and it can access the groups file.
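In the named policy, the alias-based rules look something like these (reconstructed examples; the exact set in your policy file may differ):

```
native-fsread: filename eq "/" then permit
native-fsread: filename eq "/etc/group" then permit
```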

Systrace supports two optional keywords at the end of a policy statement, errorcode and log. The errorcode is the error that is returned when the program attempts to access this system call. Programs will behave differently depending on the error that they receive. named will react differently to a “permission denied” error than it will to an “out of memory” error. You can get a complete list of error codes from the errno manpage. Use the error name, not the error number. For example, here we return an error for nonexistent files:

filename sub “<non-existent filename>” then deny[enoent]

If you put the word log at the end of your rule, successful system calls will be logged. For example, if we wanted to log each time named attached to port 53, we could edit the policy statement for the bind( ) call to read:

native-bind: sockaddr match “inet-*:53” then permit log

You can also choose to filter rules based on user ID and group ID, as this example demonstrates:

native-setgid: gid eq “70” then permit

This very brief overview covers the vast majority of the rules you will see. For full details on the systrace grammar, read the systrace manpage. If you want some help with creating your policies, you can also use systrace’s automated mode [Hack #16].

Automated Systrace Policy Creation Hack #16

In a true paranoid’s ideal world, system administrators would read the source code for every application on their system and be able to build system-call access policies by hand, relying only on their intimate understanding of every feature of the application. Most system administrators don’t have that sort of time, and would have better things to do with that time if they did.

Luckily, systrace includes a policy-generation tool that will generate a policy listing for every system call that an application makes. You can use this policy as a starting point to narrow down the access you will allow the application. We’ll use this method to generate a policy for inetd.

Use the -A flag to systrace, and include the full path to the program you want to run:

# systrace -A /usr/sbin/inetd

To pass flags to inetd, add them at the end of the command line.

Then use the program for which you’re developing a policy. This system has ident, daytime, and time services open, so run programs that require those services. Fire up an IRC client to trigger ident requests, and telnet to ports 13 and 37 to get time services. Once you have put inetd through its paces, shut it down. inetd has no control program, so you need to kill it by process ID.

Do not kill the systrace process (PID 12929 in this example)—that process has all the records of the system calls that inetd has made. Just kill the inetd process (PID 24421), and the systrace process will exit normally.

Now check your home directory for a .systrace directory, which will contain systrace’s first stab at an inetd policy. Remember, policies are placed in files named after the full path to the program, replacing slashes with underscores.

Here’s the output of ls:

# ls .systrace
usr_libexec_identd    usr_sbin_inetd

systrace created two policies, not one. In addition to the expected policy for /usr/sbin/inetd, there’s one for /usr/libexec/identd. This is because inetd implements time services internally, while ident calls a separate program to service requests. When inetd spawned identd, systrace captured the identd system calls as well.

By reading the policy, you can improve your understanding of what the program actually does. Look up each system call the program uses, and see if you can restrict access further. You’ll probably want to look for ways to further restrict the policies that are automatically generated. However, these policies make for a good starting point.

Applying a policy to a program is much like creating the systrace policy itself; just run the program as an argument to systrace, using the -a option:

# systrace -a /usr/sbin/inetd

If the program tries to perform system calls not listed in the policy, they will fail. This may cause the program to behave unpredictably. Systrace will log failed entries in /var/log/messages.

To edit a policy, just add the desired rule to the end of the policy file, and it will be picked up. You could do this by hand, of course, but that's the hard way. systrace includes a tool that lets you edit policies in real time, as each system call is made. This is excellent in a network operations center environment, where the person responsible for watching the network monitor can also be assigned to watch for system calls and bring them to the attention of the appropriate personnel. You can specify which program you wish to monitor by using systrace's -p flag. This is called attaching to the program.

For example, earlier we saw two processes containing inetd. One was the actual inetd process, and the other was the systrace process managing inetd. Attach to the systrace process, not the actual program (to use the previous example, this would be PID 12929), and give the full path to the managed program as an argument:

# systrace -p 12929 /usr/sbin/inetd

At first nothing will happen. When the program attempts to make an unauthorized system call, however, a GUI will pop up. You will have the options to allow the system call, deny the system call, always permit the call, or always deny it. The program will hang until you make a decision, however, so decide quickly.

Note that these changes remain in effect only as long as the current process is running. If you restart the program, you must also restart the attached systrace monitor, and any changes you made in the monitor are gone. You must add those rules to the policy file if you want them to be permanent.

Seize fine-grained control of when and where your users can access your system.

In traditional Unix authentication there is not much granularity available in limiting a user’s ability to log in. For example, how would you limit the hosts that users can come from when logging into your servers? Your first thought might be to set up TCP wrappers or possibly firewall rules [Hack #33] and [Hack #34]. But what if you wanted to allow some users to log in from a specific host, but disallow others from logging in from it? Or what if you wanted to prevent some users from logging in at certain times of the day because of daily maintenance, but allow others (i.e., administrators) to log in at any time they wish? To get this working with every service that might be running on your system, you would traditionally have to patch each of them to support this new functionality. This is where PAM enters the picture.

PAM, or pluggable authentication modules, allows for just this sort of functionality (and more) without the need to patch all of your services. PAM has been available for quite some time under Linux, FreeBSD, and Solaris, and is now a standard component of the traditional authentication facilities on these platforms. Many services that need to use some sort of authentication now support PAM.

Modules are configured for services in a stack, with the authentication process proceeding from top to bottom as the access checks complete successfully. You can build a custom stack for any service by creating a file in /etc/pam.d with the same name as the service. If you need even more granularity, an entire stack of modules can be included by using the pam_stack module, which lets you specify another external file containing a stack. If a service does not have its own configuration file in /etc/pam.d, it defaults to the stack specified in /etc/pam.d/other.

When configuring a service for use with PAM, there are several types of entries available. These types allow one to specify whether a module provides authentication, access control, password change control, or session setup and teardown. Right now, we are interested in only one of the types: the account type. This entry type allows you to specify modules that will control access to accounts that have been authenticated. In addition to the service-specific configuration files, some modules have extended configuration information that can be specified in files within the /etc/security directory. For this hack, we’ll mainly use two of the most useful modules of this type, pam_access and pam_time.

The pam_access module allows one to limit where a user or group of users may log in from. To make use of it, you’ll first need to configure the service you wish to use the module with. You can do this by editing the service’s PAM config file in /etc/pam.d.

Here’s an example of what /etc/pam.d/login might look like under Red Hat 9:
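The file itself did not survive in this excerpt; on a stock Red Hat 9 system it is typically close to the following (reconstructed from a typical install, so your copy may differ slightly):

```
#%PAM-1.0
auth       required     pam_securetty.so
auth       required     pam_stack.so service=system-auth
auth       required     pam_nologin.so
account    required     pam_stack.so service=system-auth
password   required     pam_stack.so service=system-auth
session    required     pam_stack.so service=system-auth
session    optional     pam_console.so
```

Note how pam_stack pulls in the shared system-auth stack at each stage.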

To add the pam_access module to the login service, you could add another account entry to the login configuration file, which would enable the module for the login service only. Alternatively, you could add the module to the system-auth file, which would enable it for most of the PAM-aware services on the system.

To add pam_access to the login service (or any other service for that matter), simply add a line like this to the service’s configuration file after any preexisting account entries:

account required pam_access.so

Now that we’ve enabled the pam_access module for our services, we can edit /etc/security/access.conf to control how the module behaves. Each entry in the file can specify multiple users, groups, and hostnames to which the entry applies, and specify whether it’s allowing or disallowing remote or local access. When pam_access is invoked by an entry in a service configuration file, it will look through the lines of access.conf and stop at the first match it finds. Thus, if you want to create default entries to fall back on, you’ll want to put the more specific entries first, with the general entries following them.

The general form of an entry in access.conf is:

permission : users : origins

where permission is either + or -, denoting whether the rule grants or denies access, respectively.

The users portion allows you to specify a list of users or groups, separated by whitespace. In addition to simply listing users in this portion of the entry, you can use the form user@host, where host is the local hostname of the machine being logged into. This allows you to use a single configuration file across multiple machines, but still specify rules pertaining to specific machines. The origins portion is compared against the origin of the access attempt. Hostnames can be used for remote origins, and the special LOCAL keyword can be used for local access. Instead of explicitly specifying users, groups, or origins, you can also use the ALL and EXCEPT keywords to perform set operations on any of the lists.

Here’s a simple example of locking out the user andrew (Eep! That’s me!) from a host named colossus:

- : andrew : colossus

Note that if a group that shares its name with a user is specified, the module will interpret the rule as applying to both the user and the group.
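Because matching stops at the first hit, a fuller access.conf usually lists specific rules first and ends with a catch-all. For example (an illustrative policy, not one from the book):

```
# root may log in only at the console
- : root : ALL EXCEPT LOCAL
# members of wheel may log in from anywhere
+ : wheel : ALL
# everyone else is denied
- : ALL : ALL
```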

Now that we’ve covered how to limit where a user may log in from and how to set up a PAM module, let’s take a look at how to limit what time a user may log in by using the pam_time module. To configure this module, you need to edit /etc/security/time.conf. The format of the entries in this file is a little more flexible than that of access.conf, thanks to the availability of the NOT (!), AND (&), and OR (|) operators.

The general form for an entry in time.conf is:

services;devices;users;times

The services portion of the entry specifies what PAM-enabled service will be regulated. You can usually get a full list of the available services by looking at the contents of your /etc/pam.d directory.

For instance, here’s the contents of /etc/pam.d on a Red Hat Linux system:
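The directory listing was lost in this excerpt; on a typical Red Hat system it contains entries along these lines (yours will vary with the packages installed):

```
# ls /etc/pam.d
chfn    halt     other    reboot  rlogin  sshd  system-auth
chsh    login    passwd   rexec   rsh     su    vsftpd
```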

To set up pam_time for use with any of these services, you’ll need to add a line like this to the file in /etc/pam.d that corresponds to the service that you want to regulate:

account required /lib/security/$ISA/pam_time.so

The devices portion specifies the terminal device from which the service is being accessed. For console logins, use !ttyp*, which matches all TTY devices except pseudo-TTYs. If you want the entry to affect only remote logins, use ttyp* instead. To apply the entry to all logins (console, remote, and X11), use tty*.

For the users portion of the entry, you can specify a single user or a list of users separated by | characters. The times portion specifies the times that the rule will apply. Each time range is given as a combination of two-character abbreviations denoting the days the rule applies to, followed by a range of hours for those days. The abbreviations for the days of the week are Mo, Tu, We, Th, Fr, Sa, and Su. For convenience, you can use Wk to specify weekdays, Wd to specify the weekend, and Al to specify every day of the week. These last three simply expand to the set of days that compose each period. This is important to remember, since repeated days are subtracted from the set of days that the rule will apply to (e.g., WdSu would effectively be just Sa). The range of hours is specified as two 24-hour times, minus the colons, separated by a dash (e.g., 0630-1345 is 6:30 A.M. to 1:45 P.M.).

If you wanted to disallow access to the user andrew from the local console on weekends and during the week after hours, you could use an entry like this (the times field lists when access is permitted, so the blocked ranges are negated):

system-auth;!ttyp*;andrew;!Wk1700-0800 & !Wd0000-2400

Or perhaps you want to limit remote logins through SSH during a system maintenance window lasting from 7 P.M. Friday to 7 A.M. Saturday, but want to allow a sysadmin to log in. Since the times field lists permitted times, this entry denies everyone except andrew during the window, while andrew matches no rule and is never restricted:

sshd;ttyp*;!andrew;!Fr1900-0700

As you can see, there’s a lot of flexibility for creating entries, thanks to the logical Boolean operators that are available. Just make sure that you remember to configure the service file in /etc/pam.d for use with pam_time when you create entries in /etc/security/time.conf.

If you’ve enjoyed what you’ve seen here, or to get more information, click on the “Buy the book!” graphic. Pick up a copy today!

Sometimes a sandboxed environment [Hack #10] is overkill for your needs. If you want to set up a restricted environment for a group of users that only allows them to run a few particular commands, you’ll have to duplicate all of the libraries and binaries for those commands for each user. This is where restricted shells come in handy. Many shells include such a feature, which is usually invoked by running the shell with the -r switch. While not as secure as a system call–based sandbox environment, it can work well if you trust your users not to be malicious, but worry that some might be curious to an unhealthy degree.

Some common features of restricted shells are the ability to prevent a program from changing directories, to forbid command names containing slashes (so commands can be run only from the fixed search path), and to prohibit modifying environment variables such as PATH. In addition to these restrictions, all of the command-line redirection operators are disabled. With these features, restricting the commands a user can execute is as simple as picking and choosing which commands should be available and making symbolic links to them inside the user’s home directory. If a sequence of commands needs to be executed, you can also create shell scripts owned by another user. These scripts will execute in a nonrestricted environment and can’t be edited within the environment by the user.
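These restrictions are easy to see firsthand with bash's restricted mode (a quick sketch; rbash and bash -r are equivalent wherever bash is installed):

```shell
# Each of these is refused by the restricted shell with a "restricted" error:
bash -r -c 'cd /tmp'            # fails: changing directories is forbidden
bash -r -c '/bin/ls'            # fails: slashes in command names are forbidden
bash -r -c 'echo hi > /tmp/x'   # fails: output redirection is forbidden
```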

Restricted shells are incredibly easy to set up and can provide minimal restricted access. They may not be able to keep out determined attackers, but they certainly make a hostile user’s job much more difficult.


Whether it’s through malicious intent or an unintentional slip, having a user bring your system down to a slow crawl by using too much memory or CPU time is no fun at all. One popular way of limiting resource usage is to use the ulimit command. This method relies on a shell to limit its child processes, and it is difficult to use when you want to give different levels of usage to different users and groups. Another, more flexible way of limiting resource usage is with the PAM module pam_limits.

pam_limits is preconfigured on most systems that have PAM installed. All you should need to do is edit /etc/security/limits.conf to configure specific limits for users and groups.
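If your distribution hasn't already wired it up, pam_limits is normally loaded as a session module in the relevant service file; the module path below is typical of Red Hat-style systems and may differ on yours:

```
session    required     /lib/security/pam_limits.so
```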

The limits.conf configuration file consists of single-line entries describing a single type of limit for a user or group of users. The general format for an entry is:

domain type resource value

The domain portion specifies to whom the limit applies. Single users may be specified here by name, and groups can be specified by prefixing the group name with an @. In addition, the wildcard character * may be used to apply the limit globally to all users except for root. The type portion of the entry specifies whether the limit is a soft or hard resource limit. Soft limits may be increased by the user, whereas hard limits can be changed only by root. There are many types of resources that can be specified for the resource portion of the entry. Some of the more useful ones are cpu, memlock, nproc, and fsize. These allow you to limit CPU time, total locked-in memory, number of processes, and file size, respectively. CPU time is expressed in minutes, and sizes are in kilobytes. Another useful limit is maxlogins, which allows you to specify the maximum number of concurrent logins that are permitted.
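Putting those pieces together, a limits.conf might contain entries like these (illustrative values):

```
# members of the students group may run at most 50 processes
@students    hard    nproc        50
# no process (root excepted) may use more than 10 minutes of CPU time
*            hard    cpu          10
# the ftp user gets at most 4 simultaneous logins
ftp          hard    maxlogins    4
```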

One nice feature of pam_limits is that it can work together with ulimit to allow the user to raise her limit from the soft limit to the imposed hard limit.

Let’s try a quick test to see how it works. First we’ll limit the number of open files for the guest user by adding these entries to limits.conf:

guest soft nofile 1000
guest hard nofile 2000

Now the guest account has a soft limit of 1,000 concurrently open files and a hard limit of 2,000. Let’s test it out:

There you have it. In addition to open files, you can create resource limits for any number of other resources and apply them to specific users or entire groups. As you can see, pam_limits is quite powerful and useful in that it doesn’t rely upon the shell for enforcement.

Automate System Updates Hack #20

Patch security holes in a timely manner to prevent intrusions.

Updating and patching a system in a timely manner is one of the most important things you can do to help protect it from the deluge of newly discovered security vulnerabilities. Unfortunately, this task often falls by the wayside in favor of “more pressing” issues, such as performance tuning, hardware maintenance, and software debugging. In some circles, it’s even viewed as wasted overhead that doesn’t contribute to the primary function of a system. Coupled with management demands to maximize production, keeping a system up-to-date is often pushed even further down the to-do list.

Updating a system can be very repetitive and time consuming if you’re not using scripting to automate it. Fortunately, most Linux distributions make their updated packages available for download from a standard online location. We can monitor that location for changes and automatically detect and download the new updates when they’re made available. To demonstrate how to do this on an RPM-based distribution, we’ll use AutoRPM (http://www.autorpm.org).

AutoRPM is a powerful Perl script that monitors multiple FTP sites for changes. It automatically downloads new or changed packages and either installs them or alerts you so that you can do so. In addition to monitoring single FTP sites, it can monitor a pool of mirror sites, to ensure that you still get your updates even when an FTP server is busy. AutoRPM keeps track of how many connection attempts have been made to each server and uses this to score the FTP sites configured within a given pool, so that the server that is available most often is checked first.

To install AutoRPM, download the latest package and install it like this:

# rpm -ivh autorpm-3.3-1.noarch.rpm

Although a tarball is also available, installing it is a little trickier than the typical make; make install, so it’s recommended that you stick with the RPM package.

By default, AutoRPM is configured to monitor for updated packages for Red Hat’s Linux distribution. However, you can configure it to monitor any file repository of your choosing, such as one for SuSE or Mandrake.
