fetchmail and POP3 Correction

One of your answers in this month's letters column was slightly in
error.

Fetchmail no longer has the old popclient option to dump
retrieved mail to a file; I removed it. Fetchmail, unlike its ancestor
popclient, is designed to be a pure MTA, a pipefitting that connects a
POP or IMAP server to your normal, SMTP-based incoming-mail path.

Fetchmail's "multidrop" mode does what Moe Green wants. It allows
fetchmail, in effect, to serve as a mail collector for a host or
subdomain.

Fetchmail is available at Sunsite, under the system/mail/pop
directory.
Eric S. Raymond

Eric is the author (compiler) of _The_New_Hacker's_Dictionary_,
a maintainer of the Jargon File (on which the NHD is based),
and the current maintainer of the termcap file that's
used by Linux (and probably other Unixes as well).
He's also the author of 'fetchmail'.
-- Jim

Automated File Transfer over Firewall

Hi,
Because of the security risk involved in using rcp,
I disabled this service on our Linux host. But the
main advantage of rcp (over the more secure ftp) is
that you can run it non-interactively (from cron,
for example). Is there a way to "simulate" this
functionality with ftp?

Technically, non-anonymous ftp isn't
more secure than rcp (unless you're using
the "guestgroup" feature of wu-ftpd). The
security concerns are different, and under
some circumstances ftp is less secure.

FTP passes your account password across
the untrusted wire in "clear text" form.
Any sniffer on the same LAN segment can
search for the distinctive packets that mark
a new session and grab the next few packets --
which are almost certain to contain the password.

rcp doesn't send any sort of password. However,
the remote host has to trust the IP addresses
and the information returned by reverse DNS
lookups -- and possibly the responses of the
local identd server. Thus it is vulnerable
to IP spoofing and DNS hijacking attacks.

Ultimately any automated file transfer will
involve storing a password, hash, or key on each
end of the link, or it will involve "trusting"
some meta-information about the connection
(such as the IP address or reverse DNS lookups
of the incoming connections).

If the initiating host is compromised it can
always pass bad data to the remote host (the
target of the file transfers). If the
remote host (the target) is compromised, its
data can be replaced. So we'll limit our
discussion to how we can trust the wire.

I'd suggest that you look at ssh. Written
by Tatu Ylönen in Finland, this
is a secure replacement for rsh. It comes
with scp (a replacement for rcp).

ssh uses public key cryptographic methods (RSA)
for authentication and to exchange a random
session key. This key is then used with a
symmetric algorithm (IDEA, or your choice among
others) for the end-to-end encryption throughout
the session.

It is free for non-commercial use. You can grab
a copy from ftp.cs.hut.fi (if I remember correctly)
or via http://www.cs.hut.fi. If you are in the
U.S. you should obtain a copy of the rsaref library
from mit.edu (I don't remember the exact hostname there)
and compile against that (this is to satisfy the patent
license from RSA). If you need a commercial license for
it you should contact Data Fellows -- look at those web
pages for details -- or look at http://www.ssh.com.

This combination may seem like overkill -- but
it is necessary over untrusted wires.
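
For unattended transfers you'd generate an RSA identity with an
empty passphrase and install its public half on the remote host.
A sketch (ssh 1.x command names and default paths; the remote
hostname and filename are examples -- and note that a key with no
passphrase must itself be kept well protected):

```
ssh-keygen -f ~/.ssh/identity -N ""
cat ~/.ssh/identity.pub | ssh remotehost 'cat >> ~/.ssh/authorized_keys'
scp /var/spool/outgoing/report.dat remotehost:/incoming/
```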

It is possible to run rdist (the remote file
distribution program) over an ssh link. This will
further automate the process -- allowing you to
push and pull files from or to multiple servers,
recurse through directories, automate the removal
of files, and only transfer new or changed files.
It is significantly more efficient than just rcp
scripts.
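
For example, newer rdist versions let you name an alternate
transport program, so the whole transfer rides over ssh (the -P
option appeared in rdist 6.x; the hostname and paths here are
examples, not from the question):

```
rdist -P /usr/bin/ssh -c /home/ftp/incoming webhost:/var/spool/incoming
```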

There are other methods by which you can automate
file transfers within your organization. One which
may seem downright baroque is to use the venerable old
UUCP.

UUCP can be used over tcp. You create accounts on each host
for each host (or you can have them share accounts in various
combinations -- as you like). In addition to allowing
cron driven and on demand file transfers using the 'uucp'
command (which uses the UUCP protocols -- if you catch the
distinction) you can also configure specific remote scripts
and allow remote job execution to specific accounts.

UUCP offers a great deal of flexibility in scheduling
and job prioritization. It is extremely automation friendly
and is reasonably secure (although the concerns about
text passwords over your ethernet are still valid).

You could also use a modern kermit (C-Kermit from Columbia
University) which can open sessions over telnet and perform
file transfers through them. kermit comes with a rich
scripting language and is almost universally supported.

It is also possible -- if you insist on sticking with
ftp as the protocol -- to automate ftp. You can use
the ncftp "macro" feature by putting entries in the
.ncftprc file. This allows you to create a "startup"
macro for each host you list in your rc file. It is
possible to have multiple "host" entries which actually
open connections to the same host to do different operations.
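
As a sketch, an entry might look like this (the syntax varies
between ncftp versions; this follows the .netrc-style format of the
older releases, and the host, account, and macro body are all
examples -- note the blank line that terminates the macro):

```
machine ftp.example.com
    login xferacct
    password s3cret
    macdef init
        cd /incoming
        binary
        mget *

```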

It is also possible to use 'expect' with your standard
ftp client shell. Expect is a programming languages built
around TCL which is specifically focused on automating
interactive programs.

Obviously these last three options would involve
storing the password in plain text on the host in
the script files. However you can initiate the
connection from either end and transfer files both
ways. So it's possible to configure the more
secure host to initiate all file transfer sessions
(the ones involving any password) and it's possible
to set up a variety of methods for the exposed host
to request a session. (An attacker might spoof a
connection request -- but the more secure host
will only connect to one of its valid clients --
not some arbitrary host.)
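
For example, a cron-driven push from the more secure side can be
scripted with a shell here-document (the hostname, account, and
filenames are examples; as noted above, the password sits in the
script in plain text, so protect the script's permissions):

```
#!/bin/sh
# scripted ftp push -- run from cron on the more secure host
ftp -n ftp.example.com <<'EOF'
user xferacct s3cret
binary
cd /incoming
put report.dat
bye
EOF
```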

Example 1:
Internet users can upload a file on our public linux
host on the Internet. A cron job checks at 10 minute
intervals whether there are files in the incoming
directory (e.g., /home/ftp/incoming). If there are files,
they should be automatically transferred to another
host on our secure network (intranet) for further
processing. With rcp this would be easy, but rcp
is not a secure service, so it cannot be allowed on a
public Internet host. Its "competitor", ftp, is more
secure, but can it be done?

This is a "pull" operation.

In this context ftp, initiated from the exposed
host and going to a non-anonymous account on
your internal host, would be less secure than
rcp (presuming that you are preventing
address spoofing at your exterior routers).

I'd use uucp over tcp (or even consider running
a null modem cable if the hosts are physically close
enough) and initiate the session from the inside.
TCP wrappers can be used to ensure that all
requests to this protocol come from the appropriate
addresses (again, assuming you've got your anti-spoofing
in place at the routers).
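
A sketch of the wrapper configuration (the internal network number
is an example, and uucico must be started from inetd via tcpd for
this to apply):

```
# /etc/hosts.allow
uucico: 192.168.1.

# /etc/hosts.deny
uucico: ALL
```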

TCP wrappers should also be used for your telnet,
ftp, and r* sessions.

The best security would be via rdist over ssh.

Example 2:
We extract data from our database on the intranet,
and translate them into HTML-pages for publishing
on our public WWW host on the Internet. Again,
we wish to do this automatically from cron. Normally,
one would use rcp, but for security reasons, we won't
allow it. Can ftp be used here?

This would be a "push" operation.

Exactly the same methods will work as I've discussed
above.

-- Jim

chown Question

Hi Jim....
My question concerns the chown command. The problem that I have is as
follows:

In a directory that I have access to I have several files that I own and
also have group ownership. I want to change the ownership and group to
something else. I am also webmastr and in the weaver group.

example: filename country.html, permissions rw-rw-r--, owner tpaton, group tpaton

I want to change to owner webmastr group weaver.
The command I used is chown webmastr.weaver country.html
The response the system gives is Operation not permitted.

Any ideas how come??

Of course. Under Unix there are two approaches to
'chown' -- "giveaway" and "privileged only." Linux
installations almost always take the latter approach
(as do most systems which support quotas).

You want the 'chgrp' command.

You can use 'chgrp' to give group ownership of files
away to any group of which you are a member.

Another approach is to use the SGID bit on the
directory.

If you have a directory which you share among several
users -- such as a staging area for your web server --
you can set that directory to a group ownership of a
group (such as 'webauth') and use the 'chmod g+s'
to set the SGID bit. On a directory this has a special
meaning.

Any directory that is SGID will automatically set the
group ownership of any files created in that directory to
match that of the directory. This means that your
webauthors can just create or copy files into the directory
and not worry about using the chgrp (or chown) commands.

I suspect that this is what you really wanted. Note:
You'll want your web authors to adjust their umask to
allow g+rw to make the best use of these features.

Also note: if this doesn't seem to work you might want to
check your /etc/fstab or the mount options on that filesystem.
This behavior can be overridden with options to the mount
command and may not be available on some filesystem types.
It is the default on ext2 filesystems.
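
As a sketch of the setup described above (the 'webauth' group and
the staging path are examples, not from the original question):

```shell
# create a shared staging directory with group inheritance
STAGE=/tmp/webstage
mkdir -p "$STAGE"
# give it to the shared group; this fails harmlessly if you aren't
# a member of 'webauth' (an example group name)
chgrp webauth "$STAGE" 2>/dev/null || true
chmod 2775 "$STAGE"    # rwxrwsr-x -- the 's' is the SGID bit
ls -ld "$STAGE"        # new files created here inherit the group
```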

There is also a special meaning to the "t" (sticky) bit
when it is applied to directories. Originally (in the
era of PDP-7's and PDP-11's -- on which Unix was originally
written) the sticky bit was a hint to the kernel to keep
the images of certain executable files cached in preference
to "non-sticky" files. The sysadmin could then set this
bit on things like "grep" which were used frequently --
giving the system a slight performance boost.

Given modern caching techniques, usage patterns, and
storage systems the "sticky" bit has become useless on files.

However, most modern Unix systems still have a use for
the 't' bit on directories. It modifies the meaning of the
"write" bit so that users with write permission to a directory
can only affect *THEIR OWN* files.

You should always set the 't' bit on /tmp/ and similar
(world-writeable) directories.
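
The same idea in commands (using a scratch directory here rather
than touching the real /tmp):

```shell
# world-writable directory with the sticky bit, as on /tmp
SCRATCH=/tmp/shared-scratch
mkdir -p "$SCRATCH"
chmod 1777 "$SCRATCH"   # rwxrwxrwt -- the trailing 't' is the sticky bit
ls -ld "$SCRATCH"       # users can now delete only their own files here
```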

Perhaps one of these days someone will find a use for the 't'
bit on files again. I don't know of a meaning for the SUID
bit on directories (but there might be one in some forms
of Unix -- even Linux). Notice that "sticky" is not the
same as SUID or SGID. This is a fairly common misconception.

-- Jim

Copy from Xterm to TkDesk

I have a question; maybe someone knows a simpler solution for this.
I'm using TkDesk because it's very easy to use and most of the
keystrokes are the same as in Win95.
If I want to copy something from an xterm to an editable file I do
the following:

Select area in xterm

Open Emacs

Paste recent selection

Save file

Open this file with the TkDesk editor and work with it comfortably,
as in the Win95 environment.

Is there any simpler procedure to copy something directly from xterm
to the TkDesk editor?

Thanks: Steve

The usual way to paste text in X is to use the
"middle" mouse button. If you're using a two-button
mouse you'd want your X server configured to
"Emulate3Buttons" -- allowing you to "chord"
the buttons (press and hold the left button then
click with the other).
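
If you're on a two-button mouse, the relevant XF86Config fragment
(the XFree86 3.x "Pointer" section; the protocol and device path are
examples that vary by system) looks something like:

```
Section "Pointer"
    Protocol        "PS/2"
    Device          "/dev/mouse"
    Emulate3Buttons
    Emulate3Timeout 50
EndSection
```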

I realize that this is different from Windows and
Mac -- where you expect a menu option to be
explicitly available for "Edit, Paste" -- but
this follows the X principle of "providing
mechanisms" rather than "dictating policy"
(requiring that every application have an
Edit menu with a Paste option would be a policy).

Personally I always preferred DESQview and
DESQview/X's "Mark and Transfer" feature -- which
was completely keyboard driven. It let me keep my
hands on the keyboard and it allowed me to make
interesting macros to automate the process.
It was also nice because the application wasn't
aware of the process -- if you could see text on
your screen -- you could mark and transfer it.

However this sort of interface doesn't currently exist
for Linux or XFree86 -- and I'm not enough of a programmer
yet to bring it to you. So try "chording" directly
into the text entry area of your TkDesk window after
making your text selection. Remember -- you'll probably
have to press on the left button first and hold it while
clicking on the other button. If you try that in the
other order it probably won't work (never does for me).

-- Jim

File System Debugger

What I want to do is take apart the current filesystem down to the
layout of the superblock. On an IBM AIX machine we used a program
called FSDB. I just want to try to get my hands on it and the
filesystem layout.

FSDB would probably be "filesystem debugger."
The closest equivalent in Linux would probably be
the debugfs command.

If you start this with a command like:

debugfs /dev/hda1

... it will provide you with a shell-like interface
(similar to the traditional ftp client) which provides
you about forty commands for viewing and altering
links and inodes in your filesystem. You can also
select the filesystem you wish to use after you've
started the program.
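
With newer versions of debugfs you can also issue a single command
non-interactively with the -R option, which is handy for read-only
poking around (run as root; /dev/hda1 is the device from the
example above):

```
debugfs -R stats /dev/hda1          # dump the superblock summary
debugfs -R 'ls -l /' /dev/hda1      # list the root directory's inodes
```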

From the man page:
debugfs was written by Theodore Ts'o, tytso@mit.edu.

There is another program that might be of interest to you.
It's called lde (Linux Disk Editor). This provides a
nice ncurses (with optional color) interface to many of the
same operations. You can find lde-2.3.tar.gz at any of
the Sunsite mirrors.

There is yet another editor which is included with
some versions of Red Hat (and probably other distributions)
called ext2ed.

There are also FAQ's and HOWTO's on the ext2fs structure
and internals available.

IP Fragmentation attack description

IP fragmentation is an old attack, used to send data to a port behind
a packet filtering 'firewall'.

Now, wouldn't it be possible to prevent an attack by packet
fragmentation by simply adding a second router that would receive
and recheck the packets reassembled by the first one?

Regards, Fabien.

Most routers don't do reassembly and most packet
filtering systems don't track connections. In
these systems each packet is judged purely on its own
merits.

There is a newer, more advanced class of packet
filtering packages which do "stateful inspection."

These are currently mostly implemented in software on
various sorts of Unix systems. From what I've heard
these are largely experimental at this point.

For those that are curious there is a team working
on a "stateful inspection module" for the Linux
2.x kernel. The "IP Masquerading" features that are
built into this kernel (A.K.A. "Network Address
Translation" or NAT) provide most of the support
that's necessary for "stateful inspection."

Here are a couple of links (courtesy of the Computer:
Security section of Yahoo!, and Alta Vista):

(There is also a package called the Mazama Packet Filters
for Unix/Linux -- but I didn't see whether it supports the
"stateful" stuff).

I didn't find anything on stateful packet filtering under
NT -- but Checkpoint's Firewall-1 (listed above) is
available for NT -- and might support it.

-- Jim

Mail Server Problem

From: Panoy Tan

Hi,
First let me say that I enjoy Linux Journal very much and get a lot
out of every issue, esp. 'Letters to the Editor'.
If you have time to help me, I will be very glad and here is my
trouble :
My mail server runs Red Hat Linux with kernel 2.0 and I use Netscape
Mail (as a POP user) to read my e-mail on the server.
POP was designed to support "offline" mail processing, not "online" and
"disconnected", therefore I have a problem when I read my e-mail from
different computers. What I need is for my mail to stay on the
mail server, but whenever I delete one of my messages, which

This has become a recurring problem in the years
since POP (post office protocol) was created.

You can configure most POP clients to keep your
mail -- but then you'll be downloading a new
copy of every message to each machine -- each time
you connect.

Apparently (searching through Netscape's site) there
is a hack to the POP3 protocol which would allow
some of what you're looking for. This appears to be
called UIDL: Here's what I read:

Unfortunately they didn't have any pointers to a
POP server with UIDL support. A search at Yahoo!
sent me straight to Alta Vista -- and to a number of USENet
and mailing list postings that referred to a variety of
patches. I'll leave that as an exercise for the reader.
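
For the curious, UIDL adds a POP3 command that returns a stable,
unique ID for each message, which lets a client tell which messages
it has already fetched. A hand-driven session might look like this
(the hostname is an example; the IDs are the samples from RFC 1939):

```
$ telnet pop.example.com 110
+OK POP3 server ready
USER myaccount
PASS ********
+OK maildrop has 2 messages
UIDL
+OK
1 whqtswO00WBw418f9t5JxYwZ
2 QhdPYR:00WBw1Ph7x7
.
```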

I have read, it will be deleted from the server.
I have heard that IMAP supports "online" mail processing and that is
the reason for my questions:

I've heard similar rumors. The question I was trying
to answer by looking at Netscape's site is whether they
support the client side of IMAP. Here's some more
background info:

IMAP (Internet Mail Access Protocol) is intended to be
a more advanced mail service. The proposed standards
are covered in RFC1730 through RFC1733 (which are
conveniently consecutive) and RFC2060. You can search
for RFC's at the ds.internic.net web site or use
ftp.isi.edu.

RFC's are the documents which become the standards of the
Internet. They start as "requests for comments" and
are revised into STD's (standards documents) and
FYI's ("for your information" documents). In the anarchy
that is the 'net -- these are the results of the "rough
consensus and running code" that gets all of our systems
chatting with one another.

I did a quick Yahoo search using the keywords IMAP and Linux
and came up with the following:

What is IMAP?
IMAP stands for Internet Message Access Protocol. It is a method of
accessing electronic mail or bulletin board messages that are kept on
a (possibly shared) mail server. In other words, it permits a "client"
email program to access remote message stores as if they were local.
For example, email stored on an IMAP server can be manipulated from a
desktop computer at home, a workstation at the office, and a notebook
computer while traveling, without the need to transfer messages or
files back and forth between these computers.

IMAP's ability to access messages (both new and saved) from more than
one computer has become extremely important as reliance on electronic
messaging and use of multiple computers increase, but this
functionality cannot be taken for granted: the widely used Post Office
Protocol (POP) works best when one has only a single computer, since
it was designed to support "offline" message access, wherein messages
are downloaded and then deleted from the mail server. This mode of
access is not compatible with access from multiple computers since it
tends to sprinkle messages across all of the computers used for mail
access. Thus, unless all of those machines share a common file system,
the offline mode of access that POP was designed to support

There is *much* more info at this site -- I only clipped
the first two paragraphs.

Some related work is the ACAP (Application Configuration
Access Protocol) and the IMSP (Internet Message Support
Protocol) which are other drafts that are currently on
the table at the IETF (www.ietf.org).

To quote another site that came up in my search:

ACAP is a solution for the problem of client mobility on the
internet. Almost all Internet applications currently store
user preferences, options, server locations, and other
personal data in local disk files. This leads to the
unpleasant problems of users having to recreate configuration
set-ups, subscription lists, addressbooks, bookmark files,
folder storage locations, and so forth every time they change
physical locations.

If you're getting confused -- don't worry -- we all
are. I've been bumping into references to IMAP, and
ACAP for a few months now. They are pretty new and
intended to address issues that only recently grew
up to be problems for enough people to notice them.

The short form is: IMAP is an advanced protocol for
accessing individual headers and messages from a remote
mail box. ACAP (which I guess replaces or is built over
IMSP) provides access to more advanced configuration
options to affect how IMAP (and potentially other
remotely accessed applications) behave for a given account.

1) Is there any IMAP for Linux, esp. Red Hat?

There is an IMAP server included with some
Linux distributions (Red Hat 3.03 or later, I suspect).
I'm not sure about the feature set -- and the
man page on my Red Hat 3 system here is pretty sparse.

However the server is not the real problem here.
What you really need is a client program that can
talk to your IMAP server.

2) Where can I get it ?

The CMU (Carnegie-Mellon University) Cyrus IMAP project
looks promising -- so I downloaded a copy of that
as I typed this and looked up some of these other references.

It's about 400K and can be found somewhere at:

ftp://ftp.andrew.cmu.edu/

3) What must I be careful about when I install it?

You must have a client that supports the IMAP features
that you're actually looking for. It's possible to
have a client that treats an IMAP server just like a
POP3 server (fetchmail for example). It may be that
Netscape's UIDL support is all you need for your
purposes.

I didn't find any reference to IMAP anywhere on
Netscape's site -- which suggests that they don't
offer it yet. I'm blind copying a friend of mine
that is a programmer for them -- and specifically
one who worked (works?) on the code for the mail
support in the Navigator. Maybe he'll tell me
something about this (or maybe it's covered by his
NDA).

I also looked at Eudora and Pegasus web pages and
found no IMAP support for these either. It was a
long shot since neither of these has a Linux port
(so far as I know) -- and I doubt you want to run
WABI to read all of your mail -- nor even DOSEmu
to run the Pegasus for DOS.

pine seems to support IMAP. XF-Mail (a popular
free X mail user agent) and Z-Mail (a popular
commercial one) also seem to have some support.
More info on IMAP clients is available at the
IMAP Info Center (see below).

The most informative web sites I visited in my
research for this question were:

The most active discussion about UIDL seems to have been on
the mh-users mailing list. Archives can be found at:
http://www.rosat.mpe-garching.mpg.de/mailing-lists/mh-users/

Thank you for your time reading my questions and I hope to hear
from you soon.
Regards, Nga

It's a hobby. I really only had about 2 hours to spare on this
research (and I took about three) -- and I don't have an
environment handy to do any real testing.

As I said -- I've been bumping into references about
IMAP and ACAP and wanted to learn more myself. At the last
IETF conference (in San Jose) I had lunch with one of the
sysadmins at CMU -- who talked a bit about it.

Sorry this article is so rambling and disorganized.
I basically tossed it together as I searched.
To paraphrase Blaise Pascal:

This letter is so long because I lack the
time to make it brief.

-- Jim

Mail & Sendmail

Hi There,
I just read your article in Linux Gazette, and got a lot of
good tips on securing my Linux machine, thanks. As
always, I have one more question I was hoping you could
answer: I'd like to send mail from my Linux machine w/o
installing sendmail, and I need this e-mail to be sent
by a script initiated by crond.

Right now (w/ sendmail installed) I can do it with
a "mail -s subject noy@ayala.com.ph < my_message".
I'd really like to remove sendmail from my system.

Which article? I'm trying to submit at least
one a month.

Well, you can use smail or qmail. These are
replacements for sendmail.

I haven't installed either of these but I've
fetched a copy of qmail and read a bit of the
documentation. I might be implementing a
system with that pretty soon.

However I'm not sure how much you gain this
way. It's possible to configure 'sendmail'
to send only so that it doesn't listen to
incoming mail at all. This is most easily
done by simply changing the line in your
rc files that invokes sendmail (that would be
/etc/rc.d/init.d/sendmail.init on a typical
Red Hat or Caldera system). Just take the
"-bd" off of that line like so:

/usr/lib/sendmail -bd -q1h

... would become:

/usr/lib/sendmail -q1h

... or

/usr/lib/sendmail -q15m

(changing the queue processing frequency
from every hour to every 15 minutes).

You can also remove sendmail from memory entirely
and use a cronjob to invoke it like:

00,30 * * * * root /usr/lib/sendmail -q

(to process the queue on the hour and at
half past every hour).

If your concerns are about remote attacks through
your smtpd service then any of these methods will
be sufficient.

You should also double check your /etc/inetd.conf for
the smtp service line. This is normally commented out
since most hosts default to loading a sendmail daemon.
It should stay that way.

If you are using fetchmail (and getting your
mail via POP or IMAP) you either have to load
some sort of smtp listener (such as sendmail,
smail, or qmail) or you have to override
fetchmail's defaults with some command line
options.

'fetchmail' defaults to a mode whereby it
connects to the remote POP or IMAP server,
and to the localhost's smtpd and relays the
mail from one through the other. This allows
for any aliases, .forwards, and procmail processing
to work properly on the local system and it
allows fetchmail to benefit from sendmail's
queue handling (to make sure you have sufficient
disk space etc).

However you can configure sendmail to run out
of inetd.conf with TCP Wrappers (the tcpd entry that
appears on almost all of the other services in that file)
and limit the listener to only accept connections from
the local host.

(the -bs switch tells sendmail to "be" an "smtp"
handler for one transaction. It handles one
connection on stdin/stdout and exits).
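
A sketch of that arrangement (the paths vary by distribution, and
tcpd matches the rule on the daemon name):

```
# /etc/inetd.conf -- hand smtp connections to sendmail through tcpd
smtp  stream  tcp  nowait  root  /usr/sbin/tcpd  /usr/lib/sendmail -bs

# /etc/hosts.allow -- accept smtp only from the local host
sendmail: 127.0.0.1

# /etc/hosts.deny
sendmail: ALL
```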

All of this discussion assumes that you want to
be able to use local mailers (like elm and mailx)
to send your mail and fetchmail to fetch it
from a POP or IMAP server.

If your client is capable of it (like the
mail reader in Netscape) you could configure
it to use a remote smtpd gateway directly
(it would make the connection to the remote
host's smtp port and let it relay the mail from
there). Then you'd have no sendmail, qmail, or
smail anywhere on the system.

pine might be able to send directly via smtp
(it does have an IMAP client so this would be a
logical complement to that).

I hope all of this discussion gives you some ideas.
As you can see there are lots of options.

Mounted vfat filesystems

I have 2 vfat filesystems mounted. They belong to root; is there any way
to give normal users read/write access to these filesystems? chown has
no effect on vfat directories and files.

man 8 mount

I think this answer was a waste of bandwidth.
Perhaps Andries didn't know this -- or perhaps he
tried and the man page didn't make any sense.

In either event it doesn't do a thing for any of us
(that didn't know the answer) and is an obvious and
public slap in the face.

You could have at least added:

'look for gid= and umask= under options'

Me, I don't know these well enough so let me
switch over to another VC, pull up the man page myself,
and play with that a bit...

mount -t msdos -ogid=10,umask=007 /dev/hda1 /mnt/c

This command mounts a filesystem of type msdos (-t)
with options (-o) that specify that all files are to
be treated as being owned by gid 10 ('wheel' on my system)
and that they should have an effective umask of 007
(allowing members of group 'wheel' to read, write, and
execute anywhere on the partition). My C: drive is
/dev/hda1 and I usually mount it under /mnt/c.

I tried specifying the gid by name -- no go. You have to
look up the numeric in the /etc/group file. I tried
different ownership and permissions on the underlying
directory -- they are ignored.

This set of parameters does seem to work with vfat and
umsdos mountings. Using the msdos or vfat types
means that the chmod and chown/chgrp commands don't work
on that fs. Using -t umsdos allows me to change the
ownership and permissions -- and the changes seem to be
effective. However there are some oddities in what happens
when you umount and remount the drive (the removal of the
write permission on files seems to stick, but the ownership
changes are lost and the owner/group r-x bits seem to
"come back").

Obviously I haven't done much testing with this sort of
thing. I usually don't write to my DOS partitions
from within Linux. In fact I haven't seen my DOS hard drive
partition on this system in months (ever since I started
compiling the msdos, vfat, and umsdos filesystems as
modules -- so I don't automount them).

I hope that helps.

Personally I wish that the mount command would take some
hints from the permissions of the directory that I'm
mounting onto. I'm copying you two on this in the hopes
that you'll share your thoughts on this idea.

What if the default for mount were to set the gid and umask
of an msdos/vfat directory based on the ownership and
permissions of the mount point? In other words I set up
/mnt/c to look like:

drwxrwx--- 2 root wheel 1024 Aug 5 1996 c

(which I have) and mount would look up the gid for
wheel and use that and the umask for the mount options.

Re: Answer Guy - POP3 Email

In reading your answer in LG#14 on "Dealing with e-mail on a pop3 server",
I have almost the same challenge. I have an ISP that is providing a
25-user POP3 Virtual Mail Server. The problem is that each user
must connect with the ISP individually and then to the mail server.
I would like to find some method to allow Linux to connect with the Mail
Server, individually poll each users account, and then transfer it into a
POP3 server
on the local network (possibly on the Linux box itself). Any suggestions??

If I understand you correctly you have a LAN at your
place with about 25 users/accounts on it.
Your provider has set up 25 separate POP3 mailboxes.

You'd like to set up your Linux (or other Unix) box to
fetch the contents of all of these accounts (perhaps via
a cron job) and to have it process your outgoing mail queue.

Then your users would fetch their mail from the Linux
box (using their own Linux user agents, or perhaps using
Pegasus or Eudora under Windows or from Macs).

This is relatively straightforward (especially the POP3
part).

First get a copy of 'fetchmail' (I'm using 2.5 from
ftp://sunsite.unc.edu). Build that.

Now, for each user, configure fetchmail using a
.fetchmailrc file in their home directory.

Each will have a line that looks like:

poll $HOST.YOURISP.COM proto pop3 user $HISACCT password $HISPASS

The parts of the form $ALLCAPS you replace with
the name of the pop server, the account holder's name
and the account holder's password. (I presume that you,
as the admin for this Unix box, are already entrusted
with the passwords for these e-mail accounts -- since
the admin of any Unix box can read any of the mail flowing
through it anyway).

You'll want to get the list of users into a form suitable
for use in your 'for' loop.

Naturally my pseudo-code is closer to bash's syntax.

This script (the pseudo-code one) will just bring the
PPP link up; for each user in the list (perhaps from a
group named "popusers") it will check for a .fetchmailrc
file in their home directory and run fetchmail for those
that have one. It will then call sendmail to process your
outgoing queue and bring the PPP link down.
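
Here's one way that pseudo-code might be fleshed out in Bourne
shell. The pppup/pppdown scripts and the "popusers" group name come
from the discussion here; everything else (paths, the helper
function names) is an assumption:

```shell
#!/bin/sh
# Sketch of the polling script described above.

# list the members of a group from a group(5) format file
group_members() {   # usage: group_members groupname [groupfile]
    awk -F: -v g="$1" '$1 == g { gsub(/,/, " ", $4); print $4 }' \
        "${2:-/etc/group}"
}

poll_all() {
    pppup                                    # bring the PPP link up
    for user in $(group_members popusers); do
        home=$(awk -F: -v u="$user" '$1 == u { print $6 }' /etc/passwd)
        # only poll users who have configured fetchmail
        [ -f "$home/.fetchmailrc" ] && su "$user" -c /usr/bin/fetchmail
    done
    /usr/lib/sendmail -q                     # flush the outgoing queue
    pppdown                                  # drop the link
}
```

The script only defines the functions; a cron entry would call
poll_all at whatever interval suits your phone rates.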

(Note: the su -c ... part of this is not
secure and there are probably some exploits
that could be perpetrated by anyone with
write access to any of those .fetchmailrc's.
However it's probably reasonably robust
-- and you could set these files to be
immutable (chattr +i) and you could write
a more secure SUID perl script to actually
execute fetchmail. My scripts, pppup and
pppdown, are SUID perl scripts.)

I haven't written this as real code and tested it since
I don't have a need of it myself. I recommend that
disconnected networks avoid using POP/SMTP for their
mail feed. UUCP has been solving the problems of
dialup mail delivery for 25 years and doesn't involve some
of the overhead and kludges necessary to do SMTP for
intermittently connected systems.

I do recommend POP/SMTP within the organization --
and it's absolutely necessary for the providers.

Anyway -- fetchmail will then have put each user's
mail into his or her local spool file (and processed
it through any procmail scripts that they might have
set up).

Now each of your users can use any method they prefer
(or that you dictate) to access their mail. DOS/Windows
and Mac users can use Pegasus or Eudora; Linux or other
Unix users can use fetchmail (or any of several other
programs -- popclient, getpop, etc.) to get the
messages delivered to their workstation; or anyone in
the organization can telnet into the mailhost and
use elm, pine, the old UCB mail, the RAND MH system
or whatever.

All of these users point their POP and mail clients
to your mailhost. Your host then acts as their spool.
This is likely to result in fewer calls to your ISP and
more efficient mail handling all around.

You may want to ask your ISP -- or look around -- for
UUCP providers. One of the big benefits to this is that
you gain complete control of mail addressing within your
domain. Typical UUCP rates go for about $50/mo for a
low volume account and about $100/mo for anything over
100Mb per month. However it's still possible to find
bargains.

(Another nice thing about UUCP is that you can choose
specific sites, with which you exchange a lot of mail,
and configure your mail to be exchanged directly with
them -- if they have the technical know-how at their
end or are willing to let you do it for them. This
can be done via direct dialup or over TCP connections).

uu.net is the Cadillac of UUCP providers (and
a bit pricey for me -- I use a small local provider
who gives me a suite of UUCP, PPP, shell, virtual hosting,
virtual ftp, and other services -- and who is of little
interest to you unless you're in the Bay Area).

You can also find information on Yahoo! using
a search for "uucp providers" (duh!). I also
seem to recall that win.net used to provide
reasonable UUCP (and other) services.

Hope this helps. If you need more specific help in
writing these scripts you may want to consider paying
a consultant. It should be less than three hours' work
for anyone who's qualified to do it (not including
the configuration of all your local clients).

-- Jim

Pseudo Terminal Device Questions

From: Jeong Sung Won

Hello ?
My name is Jeong Sung Won. May I ask you a question ?
I'll make a program that uses PSEUDO TERMINAL DEVICE.

But linux has an 8 bit MINOR NUMBER, so the total number of
pseudo terminal devices CANNOT EXCEED 256.

That does seem to be true -- but it is a rather
obscure detail about the kernel's internals.

Linus' work on the 64-bit Alpha port may change
this.

Is there any possible way to OVERCOME THIS LIMIT?

Only two that I can think of. Both would
involve patching the kernel.

You might be able to instantiate multiple
major devices -- which implement the same
semantics as major device number 4 (the
current driver for the virtual consoles and
all of the pty's).

I'm frankly not enough of a kernel hacker to tell
you how to do this or what sorts of problems it would
raise.

The other would involve a major overhaul of the
kernel code and all the code that depends on it.

For example, on the HP9000 the minor number is 24 bits,
and I have actually used 800 pseudo terminal devices
concurrently. More than 1000 is also possible.

I wonder what it is on RS/6000, DEC OSF/1, and Sun/Solaris.

If it is impossible to do this on Linux, let me know how I could tell
LINUS that an upgrade of the minor number scheme from 8-bit to 16-bit
(or more) is needed.

Linus Torvalds' e-mail address has been included with
every copy of the sources ever distributed.

However it is much better to post a message to the
comp.os.linux.development.system newsgroup than
to mail him (or any other developer) directly.

As for "telling LINUS [to] upgrade" -- while it would
probably be reasonably well received as a suggestion --
I'm not sure that "telling" him what to do is appropriate.

It's easy to forget that Linus has done all of his work
on the Linux kernel for free. I'm not sure but I
imagine that the work he puts in just dealing with all the
people involved with Linux is more time consuming and
difficult than the actual coding.

As many of the people who are active in the Linux community
are aware Linus has been very busy recently. He's accepted
a position with a small startup and will be moving
to the San Francisco Bay Area (Silicon Valley, actually)
-- and he and Tove have just had a baby girl.

I will personally understand if these events keep him
from being as active with Linux as he has been for the
last few years.

-- Jim

root login Bug in Linux

The root password is an 8 character random series. For going live online
I updated the root password to a 16 character random series. I can log in
with the 16 character series, but also using the first eight and any
random characters after that, or just the first eight. This creates an
infinite number of root passwords and worries me more than a little.

About Unix Passwords and Security

This is a documented and well known limitation of
conventional Unix login and authentication.

You can overcome this limit if you upgrade to the
shadow password suite (replace all authenticating
programs with the corresponding shadow equivalents)
and enable the MD5 option (as opposed to the traditional
DES hash).
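As a quick illustration of the difference (using openssl's MD5-crypt mode; the salt and passwords here are arbitrary examples), an MD5-style hash is sensitive to characters past the eighth, which the traditional DES hash simply ignores:

```shell
# Two passwords that agree in their first eight characters:
h1=$(openssl passwd -1 -salt xyz abcdefgh)
h2=$(openssl passwd -1 -salt xyz abcdefghEXTRA)
# MD5-crypt produces different "$1$..." hashes for them;
# traditional DES crypt would hash both identically, since it
# never looks past the first eight bytes of the password.
[ "$h1" != "$h2" ] && echo "MD5 distinguishes the long password"
```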

Note -- there is probably an "infinite" number of valid
passwords under either of these schemes. The password
entry on your system is not encrypted. That is a
common misconception. What is stored on your system
is a "hash" (a complex sort of checksum).

Specifically the traditional Unix DES hash uses your password as
the key to encrypt a string of nulls. DES is a one-way algorithm
-- so there is no known *efficient* way to reclaim the key even
if one has copies of the plaintext and the ciphertext.

'Crack' and its brethren find passwords by trying dictionaries
of words and common word variations (reverse, replace certain
letters with visually similar numerics, various abbreviations,
prepending/appending one or two digits, etc) -- and using the
crypt() function (or an equivalent) on a string of nulls to find
matches. This isn't particularly "efficient" -- but it is several
orders of magnitude better than an exhaustive brute force attack.

The only two defenses against 'Crack' are:

Don't let anyone have copies of the password hashes
(which is why the shadow suite puts those in a separate
file -- that is only readable by SUID or SGID programs,
and not normal users)

Don't allow users to use words, names, or simple
variations of words as their passwords. This is done
by installing npasswd or passwd+ (replacements for the
stock passwd program).

Use both of these strategies on all multi-user systems.
That way, if someone exploits some newly discovered
bug to get a copy of the shadow file, he is less likely
to get any good passwords (since that will entail a
password that is more clever than your npasswd rules and
less clever than your attacker's custom 'crack' dictionaries).

It is possible that two different passwords (keys) will
result in the same hashed value (I don't know if there are
any examples with 56-bit DES within the domain of all
ASCII sequences up to eight characters -- but it is possible).

Using MD5 allows you to have passwords as long as you like.
Again -- it is possible (quite likely, in fact) that a
number of different inputs will hash to the same value.
Probably you would be looking at strings of incomprehensible
ASCII that were several thousand bytes long before you found
any collisions.

Considering that the best supercomputers and parallel
computer clusters that are even suspected to exist take
days or weeks to exhaustively brute force a single DES
hash (with a max of only 8 characters and only a 56-bit
key) -- it is unlikely that anyone will manage to find
one of the "other" valid keys for any well chosen password
without expending far more energy and computing time than
most of our systems are worth. (Even in these days of
cheap PC's -- computer time is a commodity with a
pricetag).

There are other ways to get long password support on your
system. However the only reasonable one is to use the shadow
suite compiled with the MD5 option. This is the way that
FreeBSD (and its derivatives) are installed by default --
so the code and systems have been reasonably well tested.

In fact -- if security and robustness are more important
to you than other features you may want to consider
FreeBSD (or NetBSD, or OpenBSD) as an alternative.
These are freely distributed Unix implementations which
have been around as long as Linux. Obviously they have a
much smaller user base. However each has a tightly knit
group of developers and a devoted following, which makes
for an extremely robust and well-tested system.

As much as I like Linux -- I often recommend FreeBSD for
dedicated web and ftp servers. Linux is better suited to
the desktop and to use with exotic hardware -- or in
situations where the machine needs to interact with
Netware, NT and other types of systems.
[Oh, Oh! Here come the fireballs!]

FreeBSD has a much more conservative set of features
(no gpm support, for one example -- and IP packet filtering
is a separate package in FreeBSD while it's built into
the Linux kernel).

Another consideration is the local expertise.
Linux and FreeBSD are extremely similar in most
respects (as they both are to most other Unix
implementations). In some ways they are more similar
to one another than either is to any non-PC Unix.
However the little administrative differences might
very well drive your sysadmin crazy. Particularly if
he has a bunch of Linux machines and is used to them --
and you specify one or two FreeBSD systems for
your "DMZ" (Internet exposed LAN segment).

Back to your original question:

You said that you are using a "random" string of characters
for your password. In terms of cryptography and security
you should be quite careful of that word: "random".

Several cryptographically strong systems have been
compromised over the years by attacking the randomizers
that were used to generate keys. A perfect example of
this is the hack of SSL by a student in France (which
was published last spring). He cracked a Netscape
challenge and got a prize from them for the work
(and Netscape implemented a better random seed generation
algorithm).

In the context of creating "strong" passwords (ones
that won't be tested by the best crack dictionaries out
there) you don't need to go completely overboard.
However -- if a specific attacker knows a little bit
about how you generate your random keys -- he or she
can generate a special dictionary tailored for that
method.

Next bug:
Two users with consecutive login entries. Both simply information logins,
never to be logged in to, just for fingering to for status information.
If you finger the second, OK. But if you finger the first, it fingers
both. UID numbers 25 and 26. If I comment 26, but have a third login on
UID 27 then it is OK. I have tried unassigning the groups and reassigning
them. They both have real home directories, shell is /dev/null, and are in
a group called 'private' on their own. There are no groups by the same
name as the login.

This sounds very odd. I would want to look at the
exact passwd entries (less the password hashes) and to
know a lot about the specific implementation of 'finger'
that you were using (is it the GNU cfingerd?).

I would suggest that you look at the GNU cfingerd.
I think it's possible to configure it to respond
to "virtual" finger requests (i.e. you can configure
cfingerd to respond to specific finger requests by
returning specific files and program outputs without
having any such accounts on your system). This is
probably safer and easier than having a couple of
non-user pseudo accounts and using the traditional
finger daemon. (In addition, the older fingerd is
notoriously insecure -- an overflow in it was one
of the exploits used by the "Morris Internet Worm"
almost a decade ago.)

Given these concerns I would seriously consider
running any finger daemon in a chroot'd jail.
Personally I disable this and most other services
in /etc/inetd.conf whenever I set up a new system.
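For example, disabling finger amounts to commenting out its line in /etc/inetd.conf and signalling inetd (the paths and the exact service line vary between distributions):

```shell
# In /etc/inetd.conf, comment the service out:
#finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd
# ...then tell inetd to reread its configuration:
#   kill -HUP `cat /var/run/inetd.pid`
```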

When I perform RASA (risk assessment and security auditing)
/etc/inetd.conf is the second file I look at (after looking
for a /etc/README file -- which no one but me ever keeps; and
inspecting the /etc/passwd file).

-- Jim

Sendmail-8.8.4 and Linux

After setting up fetchmail and the PPP link to my ISP, everything has worked
perfectly retrieving mail from the POP3 account.

Now, I've stumbled on another problem I require some help with. Compiling
and Installing Sendmail-8.8.4 (or 8.8.5). I downloaded the 8.8.4 source
from sunsite and set it up in the /usr/src directory and using the O'Reilly
"Sendmail" book as my guide, I modified the Makefile.Linux for no DNS
support by setting ENVDEF = -DNAMED_BIND=0, and removed Berkeley DB
support (removing -DNEWDB).
After compiling and executing ./sendmail -d0.1 -bt < /dev/null in the obj
dir, I receive the following:

and the program hangs at this point.
I am running Linux.2.0.29 on a 486DX40 with 8 megs. My gcc is version 2.7.0.

Any hints you could provide are greatly appreciated!,

I fetched a copy of 8.8.5 and used the .../src/makesendmail
script -- and only encountered the problems with NEWDB.
Removing that seemed to work just fine.

I noticed you said -- .../src/obj -- did you mean something
like: .../src/obj/obj.Linux.2.0.27.i386/

If you properly used the makesendmail script then the
resulting .o and binaries should have landed in a directory
such as that.

Other than that I don't know.

I don't disable the DNS stuff -- despite the fact that
my mail delivery is almost all done via UUCP.

As for using this with fetchmail -- I have my sendmail
configured in /etc/inetd.conf like so:

# do not uncomment smtp unless you *really* know what you are doing.
# smtp is handled by the sendmail daemon now, not smtpd. It does NOT
# run from here, it is started at boot time from /etc/rc.d/rc#.d.
## jtd: But I *really do* know what I'm doing.
## jtd: I want fetchmail to handle mail transparently and I
## jtd: want tcpd to enforce the local only restriction
smtp stream tcp nowait root /usr/sbin/tcpd /usr/local\
/sbin/sendmail -bs

(Note -- the line break/backslash is for this mail only -- remove it
before attempting to use this line. Also note the -bs:
"be an SMTP handler on stdin/stdout".)

This arrangement allows me to fetchmail, lets fetchmail
transparently talk to sendmail, and keeps the rest of the
world from testing their latest remote sendmail exploit
on me while my ppp link is up (I wouldn't recommend this
for a high-volume mail server!).
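The "local only" restriction mentioned above is enforced through tcpd's access-control files; a sketch (the 192.168.1. network is a placeholder for your own LAN):

```shell
# /etc/hosts.deny -- refuse anything not explicitly allowed:
#   sendmail: ALL
# /etc/hosts.allow -- accept SMTP only from localhost and the LAN:
#   sendmail: 127.0.0.1 192.168.1.
```

The daemon name ("sendmail" here) must match the process name that tcpd is asked to run.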

(I also run 'sendmail -q' periodically, which processes
any mail that elm, pine, mh-e or any other mailers have
left in the local queue -- awaiting its trip through
uucp's rmail out to the rest of the world).

If you continue to have trouble compiling sendmail
then you may want to just rely on the RPM updates.
Compiling it can be tricky, so I avoid doing it
unless I see a bugtraq or CERT advisory with the
phrase "remotely exploitable" in it.

Re: O'Reilly's "bat" book. Do you have the 2nd Edition?
If not -- get it (and ask them about their "upgrade"
pricing/discount if that's still available).

-- Jim

wu-ftpd Problems

On BSDI, I've read ALL of the doc for wu-ftpd, and have ftp logins
limited to the chroot dir, but still have these problems:
1) I cannot force ftp only. The guestgroup "guests" can telnet, and go
everywhere. I've put /bin/true in /etc/shells; I've edited passwd and
master.passwd for that; no effect

Usually I set their shell (in the passwd file) to /bin/false
or /usr/bin/passwd. I make sure that I use the path-filter
alias to prevent uploads of .rhosts and .forward files into
their home directory under the chroot, and I put entries like:

/home/.ftp/./home/fred

... for their home directory field in the (true-root)/etc/passwd
file.

Also make sure that you have the -a switch on the ftpd
(or in.ftpd) line in your inetd.conf. The -a tells ftpd to
use the /etc/ftpaccess file (or /usr/local/etc/ftpaccess --
depending on how you compiled it).
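For reference, a sketch of the relevant ftpaccess entries (the message-file path /etc/pathmsg is an assumption; "guests" is your group name):

```shell
# /etc/ftpaccess (fragment)
guestgroup guests
# path-filter rejects uploaded names that start with "." or "-",
# which blocks .rhosts and .forward uploads:
path-filter guest /etc/pathmsg ^[-A-Za-z0-9_\.]*$ ^\. ^-
```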

Personally I also configure each "ftponly" account into
the sendmail aliases file -- to ensure that mail gets
properly bounced. I either set it to the user's "real"
e-mail address (anywhere *off* of that machine) or I set it
to point at nobody's procmail script (which autoresponds to
it).

2) "guests" ftp to the proper directory, but get no listing. I have set
up an executable 'ls' in /bin under the ftp chroot dir; no effect.

How do you know that they are in the proper directory?
What happens if you use the chroot(8) command to go to
that dir and try it? Is this 'ls' statically linked?
Do you have a /dev/zero set up under your (chroot)/?

The most common cause of this situation is an incomplete
(chroot) environment -- usually missing libraries or
missing device nodes.
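A couple of quick checks along those lines (/home/.ftp stands in for your actual chroot directory; the chroot step itself requires root):

```shell
# 1. Try 'ls' inside the jail (run as root; path is an example):
#      chroot /home/.ftp /bin/ls /
# 2. If the jailed 'ls' is dynamically linked, every library that
#    ldd reports must also be copied under the chroot (e.g. into
#    /home/.ftp/lib); a statically linked ls avoids the issue.
ldd /bin/ls
```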