How to install NFS under OpenSlug

(Note: "nfsd", described here, is the module for providing an NFS server on the slug. To mount files on the slug from other servers, the module you need is "nfs".)

Note: I wasn't able to install nfsd, because update-rc.d was missing from the default PATH. If this happens, try "export PATH=$PATH:/usr/sbin". If it works, add this path in your /etc/profile.
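As a sketch (this assumes update-rc.d was installed into /usr/sbin, the usual location):

```shell
# Make /usr/sbin (where update-rc.d normally lives) searchable in this session
export PATH="$PATH:/usr/sbin"
```

If that makes update-rc.d findable, append the same export line to /etc/profile so it survives future logins.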


then install portmap-utils and restart portmap/nfs again.

Some exportfs problems

This applies whether you are using an /etc/exports file, or whether you
are using separate exportfs commands. Essentially, for each file to be
exported, you have to specify 3 things:

The host(s) to export it to ('*' means <world>);

The directory or file to export;

The options to export it with.

For example

exportfs -o ro,sync,no_root_squash clientname:/media/sda1

or, in /etc/exports

/media/sda1 clientname(ro,sync,no_root_squash)
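Pulling the examples on this page together, a small /etc/exports could look like the following sketch (host names, networks and paths all reuse examples given elsewhere on this page; the fsid option is explained further down):

```
/media/sda1  clientname(ro,sync,no_root_squash)
/usr/public  192.168.1.0/24(rw)
/tmp         clientname(sync,fsid=2)
```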

The exportfs command always succeeds (bar obvious syntactic glitches), and records any remotely sensible request in /var/lib/nfs/etab (not in xtab, as some manual pages seem to imply). If something does appear in xtab, it indicates that exportfs got as far as trying to tell the kernel about it. So if, for example, the thing to be exported does not exist, it goes in etab only.

There is a further file rmtab which is supposed to keep count of what
external clients have actually been mounted so that, if SlugOS crashes, those
clients will continue to see those things as soon as it is rebooted. However,
Linux sometimes fails to keep rmtab up-to-date, as we shall see.

There are two modes in which exportfs can be used:

legacy mode (all systems prior to Linux 2.6, and even some after that, including SlugOS in its default state);

new-cache mode.

See the Man page for exportfs(8).

In legacy mode, if the path to the object to be exported exists, exportfs immediately sends the export request to the kernel, and records the fact in /var/lib/nfs/xtab. If it does not exist, it just goes in etab on the off-chance that you will create it later. This can be handy if you subsequently hotplug a USB stick which causes /media/sda1 to be created. Nothing happens immediately, but as soon as clientname tries to mount it, the mountd will sort it all out, and it will then appear in xtab.

In new-cache mode, no request is ever sent to the kernel until some
clientname attempts to mount it. So, although the xtab file still
exists, it is always empty.

To use your slug in new-cache mode, include the following in /etc/fstab:

nfsd /proc/fs/nfsd nfsd defaults 0 0

(Note: if you do that manually later on, you will need to restart the mountd by giving "/etc/init.d/nfsserver restart".) In new-cache mode, you will find various interesting things in /proc/fs/nfsd, including a file exports which indicates what the kernel has actually exported.

Note however that, although new-cache mode works fine for exporting,
mounting and reading/writing files from remote clients, it sometimes records
more mounts in rmtab than have actually happened, leading possibly to
exporting too much when rebooting after a crash. This is a bug in Linux
which they have not yet fixed (and seem unsure whether and how to fix) so if
that matters to you, then stick with legacy mode.

So if you exported something (and especially if, in legacy mode, it got into
xtab) does this mean it got, or will get, exported? No! Not if the
kernel takes a dislike to it. You will get an accurate view of what the kernel
thinks if you look in /proc/net/rpc/nfsd.export/content (or in
/proc/fs/nfsd/exports in new-cache mode), but to find out
why something is not there, you will have to go look in
HowTo.ProbingTheKernel. So surely all this uncertainty is a bug in
exportfs? Not so; it was intended that way - it is a "feature".

So here are some of the things that the kernel dislikes:

Anything other than a file or a directory (so no devices, or fifos, or streams, or other funnies, though soft links work if the thing at the far end works). Note the thing that you export does not have to be a complete filesystem; once a filesystem is mounted somewhere, you can export any file or directory within it.

Things such as /proc, /sys, etc, which are not genuine files at all.

Certain types of filesystem which NFSD just does not understand, notably NFS, SMBFS, NCPFS, CODA and AFS. These are all either mounted from elsewhere, or are part of some distributed file system; either way, they are not stored on this server, and the client would be better off mounting them directly from elsewhere.

Filesystems where nfsd support has not been coded in. The only example likely to be encountered on a slug is JFFS2, which is used for the onboard Flash (so don't expect to be able to mount that from outside, though it is rumoured that this is being worked on). The other examples are mostly weird or obsolete systems unlikely to be encountered.

But there is another hurdle to overcome. Anything you want to export is either itself a mount point, or one of its parents will be a mount point (ultimately, even root (/) must be mounted somewhere). If that mount point is a genuine device (i.e. you can find it in /dev), well and good - the kernel can construct a filehandle out of its major and minor device numbers. But if the mount point is not a device (anything on TMPFS is the commonest example), then you have to specify a unique fsid number yourself, e.g.

exportfs -o sync,fsid=2 clientname:/tmp
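The same export expressed as an /etc/exports line (clientname as in the earlier examples) would be:

```
/tmp clientname(sync,fsid=2)
```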

(which actually exports /var/volatile/tmp, of course). To check that you do not use the same number twice, look in /proc/net/rpc/nfsd.fh/content. There you will see the normal fsidtype 0 entry for the /media/sda1 from my first example, on which the device /dev/sda1 (a USB stick) had been mounted earlier, together with an fsidtype 1 entry (for numeric fsids) from my /tmp example. Note that the fsid value zero is reserved for the root of the whole filesystem, which also appears there.

To see all the filesystem types your system currently knows about, look in /proc/filesystems, which marks with "nodev" all those which will need this treatment.
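For example, to list just the filesystem types that would need an explicit fsid:

```shell
# "nodev" filesystems have no backing device, so exports from them need fsid=
grep nodev /proc/filesystems
```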

NFS Version 4

Currently, SlugOS (and indeed most/all Linux systems) uses NFSv3 when acting as an NFS client to mount files from outside. However, we are concerned here with SlugOS acting as an NFS server, in which case it may well encounter outside clients that will try to connect to it using NFSv4. This is not yet a supported feature (as of SlugOS 4.8), though Linux claims to contain all that is necessary for the purpose. In fact, it does work sometimes (and would likely work even better if SlugOS were to upgrade to version 1.0.8 or later of the Linux nfs-utils). However, the first thing to ensure is that root (/) is given fsid=0 (which I mentioned above is the conventional value). If you do not actually need to export /, exporting it to the fictitious client DEFAULT with fsid=0 might be sufficient.
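As a sketch, such an /etc/exports entry might read as follows (the client name DEFAULT and fsid=0 are taken from the text above; the ro,sync options are an assumption, adjust to taste):

```
/ DEFAULT(ro,sync,fsid=0)
```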

Note also that the problems with rmtab are likely to be even worse with NFSv4, so it is not (yet) for the faint-hearted.

Hanging on Startup

If /etc/init.d/nfsserver start is hanging at "starting 8 nfsd kernel threads:" every time, it helped for me to reinstall portmap (I know it is already installed, but something apparently went wrong):

ipkg -force-reinstall install portmap

After that, the NFS server started flawlessly for me.


Failed to start NFSD

When running /etc/init.d/nfsserver start, if you receive the "FATAL: Module nfsd not found" message, either restart the box, or run the command:

depmod -a



Problem with NFSD

If you have the problem when you type /etc/init.d/nfsserver start and the following line is in the log:

Sep 30 15:49:07 (none) daemon.err nfsd[2030]: nfssvc: No such device

you must launch the command:

depmod -a


Some additional options for the /etc/exports file

The basic example on this page, "/usr/public 192.168.1.4(rw)", is a fairly secure way to export the /usr/public directory - it permits only the computer at IP address 192.168.1.4 to mount the /usr/public directory. In cases where multiple computers wish to mount the directory, or if you are using DHCP where you can't guarantee exactly which IP address a computer will be using from day to day, a slightly different syntax can be used to specify a range of accepted IP addresses. For example:

/usr/public 192.168.1.0/24(rw)

specifies that any IP address in the range 192.168.1.0 to 192.168.1.255 will be allowed. If you prefer the traditional netmask way of expressing this, this format:

/usr/public 192.168.1.0/255.255.255.0(rw)

is equivalent.

The (rw) option should be self-explanatory; it can be replaced by (ro) in order to ensure that other systems can only mount this exported directory for read access.

NFS and the super-user (root) have an interesting relationship, one which can trip up unsuspecting users. Specifically, there's no guarantee that the owner of the root account on an NFS client should have the same root privileges on the NFS server. In order to make sure that the client does not gain more privileges than it should, NFS maps the client's root user to the "nobody" user on the server. This behavior is generally regarded as a feature, but it may not be how you wish your network to operate. If you can say that the root user in your network is "trusted", then you can explicitly inform NFS that it is not to perform this mapping; i.e. the root account on the client is to have the full privileges accorded with the superuser on the mounted filesystem:

/usr/public 192.168.1.0/255.255.255.0(rw,no_root_squash)

Fine print: a note on permissions with NFS in general. NFS uses the numeric user and group IDs associated with a process, not the textual user names or group names. Basically this means that if you do not keep your passwd files in close synchronization, you can introduce a great deal of confusion into your network, if not an outright security problem. For example, assume user "fred" is added on one system and assigned UID number 501, while on another system somewhere, user "ethel" is added and is given the UID number 501 on that system. Since NFS uses only the numeric ID, on an NFS filesystem shared between these two systems fred and ethel have the same privileges -- the two users are one and the same from the NFS filesystem's point of view. This is easily avoided by making sure that the UID and GID numbers in the passwd files are consistent across the various systems in the network.
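A quick way to spot such a clash is to ask each machine which local account owns a given UID. A sketch (501 is the number from the fred/ethel example above; run it on each system and compare the answers):

```shell
# Print the local account name, if any, that owns UID 501
awk -F: '$3 == 501 { print $1 }' /etc/passwd
```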


touch /etc/default/nfsd (actually this step might not be needed)

To get NFS to automatically start at boot time:

Run the command "update-rc.d nfsserver defaults" which will create the scripts to start it in levels 2-5 and stop it in 0, 1 and 6. Some documentation on update-rc.d is here http://wiki.linuxquestions.org/wiki/Update-rc.d

Installation

This HowTo might be a little bit chaotic and incomplete. It took me a little bit of tweaking to get things running, and some steps might not be needed or out of order. If you use this HowTo and find flaws please correct!

Comment out the line "modprobe -n nfsd || exit 0" in /etc/init.d/nfsserver. Apparently this check thinks nfsd is not loaded even though it was loaded in the previous step.

Create a file /etc/exports. The format of this file is:

<directory to share> <computer to share to>(options)

E.g.:

/usr/public 192.168.1.4(rw)

where 192.168.1.4 is the PC to which you want to export the directory.

Start the daemons by running "sh /etc/init.d/nfsserver start".

On the PC with IP address 192.168.1.4 you can then just say "mount 192.168.1.77:/usr/public /mnt", assuming your slug is on 192.168.1.77. You can of course also use the name of the slug instead of the IP address.
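To make that client-side mount permanent, a sketch of an /etc/fstab entry on the client (the rw,hard,intr options are common NFS-client defaults, not taken from this page; adjust to taste):

```
192.168.1.77:/usr/public  /mnt  nfs  rw,hard,intr  0 0
```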