Add setzfsquotas script to handle fixup of existing quotas, add update
script to do a one-time invocation of this script at boss-install time,
and fix accountsetup so it will properly set both quotas going forward.

* Move user mod (gecos,password) into the accountsetup proxy instead of
ssh chpass. Wrap all usermod/chpass system calls in a loop that looks
for the busy-file error, backing off and retrying for a while.
* Add the same wrapping to local (boss) calls of usermod/chpass. I put that
function into emutil.
* Rename the old modgroups in the proxy to setgroups, since that is what
it was actually doing.
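The busy-file retry described in the first bullet can be sketched as a small shell wrapper (the real code is the Perl function added to emutil; the name retry_busy and the retry/backoff parameters here are my own illustration, not the actual implementation):

```shell
#!/bin/sh
# Retry a command a bounded number of times, backing off between
# attempts. In the account scripts the failure being retried is
# usermod/chpass bailing out with the "user database busy" lock error.
RETRIES=5
BACKOFF=2   # seconds between attempts

retry_busy() {
    i=0
    while [ "$i" -lt "$RETRIES" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$BACKOFF"
    done
    return 1
}
```

Usage is just `retry_busy chpass ...`; if the command never succeeds within the retry budget, the wrapper gives up and returns nonzero so the caller can report the error.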

wanting to call setgroups because it is so slow. Also refactor the code to
chown/chgrp user dot files so we can call it from reactivate.
Refactor the code that bumps user/project activity and calls exports
setup so that we can call it from reactivate.
When deleting a ZFS home/proj directory, do the ZFS rename and then
set the mountpoint=none, no need to have it mounted.
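The delete flow above comes down to two zfs operations (a sketch only: the dataset paths assume the ZFS_ROOT=z default, and the deleted/ namespace is my hypothetical rename target, not taken from the scripts):

```shell
# Rename the dataset aside rather than destroying it in place.
zfs rename z/proj/<pid> z/deleted/proj-<pid>
# It is only awaiting destruction, so no need to keep it mounted.
zfs set mountpoint=none z/deleted/proj-<pid>
```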

many ZFS mounts on ops, which on the Mothership is on the order of 8000
or so. Deactivate/reactivate a user with:
boss> wap tbacct deactivate -u <user>
boss> wap tbacct reactivate -u <user>
Deactivate will set the shell to nologin and set the ZFS mountpoint=none.
Reactivate will undo that. Note that these do not HUP mountd.
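In ZFS terms, the mountpoint toggling that tbacct performs is roughly the following (dataset names assume the ZFS_ROOT=z default; the shell change happens separately via the usual account tools):

```shell
# Deactivate: keep the dataset but stop it from mounting/exporting.
zfs set mountpoint=none z/users/<user>
# Reactivate: restore the mountpoint so the dataset mounts again.
zfs set mountpoint=/users/<user> z/users/<user>
```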

Affects user, project and group directories. Gotta take all the directory
creation/removal/moving out of the boss-side scripts and get it into the
ops-side scripts.
Current state is...not even syntactically correct in some scripts!

creating the initial ZFS volumes; that is described in Mike's notes
file on how to set up ZFS on ops. But once that is done, the runtime
support takes care of creating volumes for users and projects/groups.
New configure variables, with their defaults:
WITHZFS=0
ZFS_ROOT=z
ZFS_QUOTA_USER="1G"
ZFS_QUOTA_PROJECT="100G"
ZFS_QUOTA_GROUP="10G"
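With WITHZFS=1 and the defaults above, runtime creation of a user volume comes down to something like this (the dataset layout and mountpoint are assumptions based on the ZFS_ROOT=z default, not taken from the actual ops-side scripts; setzfsquotas would use the same property on existing datasets):

```shell
# Create a user's home dataset under the configured root with the
# per-user quota; projects and groups are analogous, using
# ZFS_QUOTA_PROJECT and ZFS_QUOTA_GROUP.
zfs create -o quota=1G -o mountpoint=/users/<user> z/users/<user>

# Fixing up the quota on an existing dataset (what setzfsquotas does,
# per the summary above):
zfs set quota=1G z/users/<user>
```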