The output of ./configure looks fine and clearly states that it will use the neon library in /usr. That means the headers from the libneon27-dev package in /usr/include and the library from the libneon27 package in /usr/lib. When it nevertheless tries to use libneon27-gnutls, something is seriously broken with these packages: either a bug in the packages or something went wrong with the package management.

Even if you now have a working davfs2 with your locally built neon library, you should consider reporting this to Ubuntu.

It looks like your version of davfs2 was not compiled against the neon library of your system.

Did you configure davfs2 before running make and make install?

If not:
Please read the file INSTALL in the top-level source directory of davfs2.
cd into this directory and run "./configure". When finished, the configure script will print the location of the neon library it will use. It should be /usr.
Run "make" and, as root, run "make install".

If yes:
Look at the output of configure for anything neon-related and send it.

Did you purge the davfs2 package from Ubuntu? Is /sbin/mount.davfs a symbolic link to your version of mount.davfs which should be in /usr/local/sbin?

"ldd /sbin/mount.davfs" and "ldd /usr/local/sbin/mount.davfs" will show you all of the used libraries. Both commands should produce the same output.

I just looked at the Ubuntu packages. They offer the same packages as Debian, and most probably they are just the Debian packages.

Your output from neon-config looks like it is not from a Debian package.

Do you have another neon library on your system? In /usr/local?
Please try /usr/bin/neon-config --libs.
If you have another neon library, you should use the option --with-neon=/usr when configuring davfs2 so that it uses the library from the package.
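As a small illustration (the helper name is made up), the --libs output can be classified like this: -lneon-gnutls indicates the GnuTLS build, a plain -lneon the OpenSSL build.

```shell
# tls_flavour: guess the TLS backend from a `neon-config --libs` string.
tls_flavour() {
    case "$1" in
        *-lneon-gnutls*) echo "GnuTLS" ;;
        *-lneon*)        echo "OpenSSL" ;;
        *)               echo "unknown" ;;
    esac
}

# tls_flavour "$(/usr/bin/neon-config --libs)"
```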

I'm confused about Ubuntu. I would have expected them to offer the same packages as Debian for this. These would be
libneon27
libneon27-gnutls
libneon27-dev
libneon27-gnutls-dev

If you install all but libneon27-gnutls-dev, the configure script of davfs2 will select libneon27 (the OpenSSL version).

neon-config is part of the -dev package. That you have libneon27-dev but neon-config shows -lneon-gnutls is strange. Has Ubuntu dropped the OpenSSL version of neon? Please check again with your package manager.

If you really need to build your own neon library you will need
openssl (you should already have this)
libgnutls-dev
the neon sources (take version 3 from the neon website)

In the top-level source directory of neon, please run ./configure --help. It will show the option to select the OpenSSL library. You should not change the prefix, so it will get installed in /usr/local. You may also need to run ldconfig; please read the neon documentation for this.

When building davfs2 you will have to call the configure script with option '--with-neon=/usr/local' to select your neon library.

Today I finished my long-running test and I could reproduce the problem.

For both versions of the Neon library (with OpenSSL and with GnuTLS) I did:

- create 10000 files in 100 directories, each about 15 kB

- read some hundred of these files

- delete all the files and directories

- run the complete cycle up to 7 times in a row
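The cycle above can be sketched as a shell function (my own reconstruction of the test, not the script actually used; point the root argument at a davfs2 mount and use 100/100 to match the numbers above):

```shell
# run_cycle ROOT DIRS FILES_PER_DIR: create, read, and delete files,
# mimicking the test cycle described above (about 15 kB per file).
run_cycle() {
    root=$1; dirs=$2; files_per_dir=$3
    i=0
    while [ "$i" -lt "$dirs" ]; do
        mkdir -p "$root/d$i"
        j=0
        while [ "$j" -lt "$files_per_dir" ]; do
            head -c 15360 /dev/urandom > "$root/d$i/f$j"
            j=$((j + 1))
        done
        i=$((i + 1))
    done
    cat "$root"/d0/f* > /dev/null   # read some of the files back
    rm -r "$root"/d*                # delete all files and directories
}

# e.g. seven cycles in a row against a davfs2 mount:
# for n in 1 2 3 4 5 6 7; do run_cycle /mnt/dav 100 100; done
```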

Results:

Neon compiled with OpenSSL:
During the first cycle it allocated about 5 MiB of real memory. This is what I expected from the size of 10000 nodes. It never released the memory, but during the next 6 cycles it did not allocate any additional real memory. As far as I know it is quite common not to release memory once allocated. But it could obviously reuse the freed memory in cycles 1 to 6. I think there is no memory leak.

Neon compiled with GnuTLS:
During the first cycle it allocated about 30 MiB of real memory. This is one order of magnitude more than I expected. In the next cycles it allocated additional memory. After 4 cycles, real memory was up to 100 MiB and I stopped the test. There is a memory leak.

I will have to investigate this further. It will take some time. Please tell me your versions of libgnutls and libneon (I used libgnutls 2.12.20 and libneon 0.29.6, both from Debian Wheezy).
Maybe there is already a patch somewhere or I can create one.

Workaround:
The easy way to solve this problem at the moment is to use a neon version that is compiled against OpenSSL. Your distribution probably has one, and you can have both versions of neon installed at the same time.
To compile davfs2 (including the patch for the minor memory leak) you need the -devel package for the OpenSSL version of neon (the GnuTLS version of the -devel package has to be removed).
neon-config --libs should show which TLS library is used.

Cheers
Werner

P.S.: Please use my mail-address only when necessary. As much information as possible should be publicly available on this support tracker.

Regarding option minimize_mem:
This option is only useful if the directory tree is very big. For a rough estimate you may assume about 400 bytes of working memory for every file or directory (depending on the length of file names and etags). If this amount of working memory is not a problem, it is better not to use this option.
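For example, with the rough figure of 400 bytes per node, a tree of one million files and directories works out to roughly 381 MiB of working memory:

```shell
# Rough working-memory estimate for the davfs2 node tree,
# using the ~400 bytes per file or directory mentioned above.
nodes=1000000
echo "$((nodes * 400 / 1024 / 1024)) MiB"   # prints: 381 MiB
```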

Please, after running the file system for long enough, tell me whether this patch fixed the problem.

The most obvious problem is that there are about 6000 files in lost+found. davfs2 will put files there when it can't upload new or changed files to the server.

So please do:

- check the real size of your cache directory (it is in /var/cache/mount.davfs/<name built from server name and mount point>).

- look at the files in lost+found and try to find out where they come from.

There seems to be something wrong with file uploads. To check this, do the following:

- set debug option 'debug most'.

- mount the file system.

- try to copy one of the files in lost+found into some other directory of your WebDAV file system. For the target, use a name that is the same as the one in lost+found but without the random string at the end of the file name that begins with a '-' character.

- wait about 15 seconds.

- unmount and send me the logs. Please do not include the zillions of messages from snmpd; I'm only interested in messages from mount.davfs.
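The 'debug most' setting from the first step goes into the davfs2 configuration file; a minimal sketch (the system-wide file is shown, ~/.davfs2/davfs2.conf works too):

```
# /etc/davfs2/davfs2.conf
debug most
```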

After that you should remove all the files in lost+found. If you don't need them you may just remove all files in the cache directory (including the index file). When running the file system again you should regularly check the lost+found directory.

davfs2 will only remove information from memory if the file or directory has not been accessed for some time (typically about 5 seconds, depending on option file_refresh).

You should first look for any application or user that scans the file system regularly. Some graphical file managers do that, as do some daemons that check for changes in the file system. If there is any, you will have to disable it.

If you can't find any such process, please set the options 'debug config' and 'debug cache', mount the file system, do some directory listings, wait about 1 minute and unmount the file system.
Search your log files for entries from mount.davfs and send them to me. It may be quite a lot of messages. You may want to send them to my private email address: -unavailable-
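The two debug options go into the davfs2 configuration file, e.g.:

```
# /etc/davfs2/davfs2.conf (or ~/.davfs2/davfs2.conf)
debug config
debug cache
```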

Hi Werner,
I installed the latest version 1.5.0, but we still have
a memory problem....
The RES counter in "top" for the mount.davfs process keeps increasing, and after a while the machine begins to swap and kills another important process.
The new config parameter minimize_mem is set to 1, but without success.

As I suspected, this periodic cleaning of metadata keeps the performance good, at least with our use. Naturally, cleaning too often adds some extra work, but it's negligible and nothing compared with the worst-case scenarios which are avoided by this. And one can tweak the frequency with the file_refresh parameter.

The problem was that a directory may be open for quite some time, and the kernel will use inode information after the directory has been closed. It should only do this for 1 second (that's the lifetime of inode information set by davfs2).

The old patch only took the time when the directory was last refreshed to calculate whether a file node can be deleted. The new patch also takes into account the time when the directory was closed (atime).

I patched the source code of version 1.4.7 (even though the name in the diff file implied version 1.4.6). It compiled OK and I tested against a setup similar to the one we run daily. As you said, 'minimize_dirs 1' can only be used if one never sets the current directory inside the davfs2 file system. Just using the Linux find command is enough to trigger this problem (find seems to move its cwd while running, easily seen with lsof).

But I also ran into problems when running with just 'minimize_mem 1'. Most of the time it works as expected (as seen from the debug log). But sometimes it triggers a situation where a file or files cannot be copied. And these files vary between runs.

When testing my old patch I noticed a major problem: if the working directory is set to be in the davfs2 file system, the kernel will cache the inode number of PWD. davfs2 does not know of this and will remove the node, which will cause file system errors.

Because of this I changed the patch and you now have two new configuration options:

'minimize_mem 1' will tell davfs2 to remove unused file nodes, but it will not remove directory nodes. Note that local file times may change unexpectedly because the local times stored in the node may differ from the Last-Modified time stored on the server. You should not use this with applications that depend on file times (like backup programs).

Additionally setting 'minimize_dirs 1' will tell davfs2 to also remove unused directory nodes. You may use this when your file system has a lot of directories and you are sure that you never cd into the davfs2 file system.
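With the patch applied, the two new options would be set in davfs2.conf like any other option; a sketch:

```
# /etc/davfs2/davfs2.conf (requires the patch)
minimize_mem 1
# only add this if nothing ever cds into the davfs2 mount:
# minimize_dirs 1
```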

Attached is a patch.

Please report about your experience so I can decide whether to include it in the next release or not.

Back in 2008 I reported the davfs2 cache growing problem, and you got it fixed back then. We've been using it ever since (currently version 1.4.7). And it is under heavy use (nightly packaging scripts with tens of thousands of files going through).

But we are just using it for direct read-only file system access from Windows clients to an svn repository through a combination of davfs2, tmpfs, autofs & samba. The only problem we've encountered is that over time it slows down if the automounter is not given the opportunity to clean up! This doesn't happen with the automatic scripts themselves, but with people leaving just some executables running. The memory consumption is not really an issue for us, but my educated guess is that cleaning metadata regularly would help this speed issue too.

This is a design issue of davfs2 which is not well suited for machines with a small amount of memory.

Details:
davfs2 stores metadata of all ever-requested resources in a tree of nodes in working memory. It only removes nodes that become invalid. When davfs2 has been running for some time it might finally store meta information about all resources in memory. This might get rather big if the server holds many resources.

Possible solution:
About two years ago, at the request of a user, I added code that would regularly scan the node structure and remove nodes that had not been used for some time. Unfortunately I got no response from the requester of this feature and was not able to seriously test it myself. So I removed that code. (See https://savannah.nongnu.org/support/?107437)

If you are interested to test this feature (and report your experience) I could create a patch for you (or a patched source package). If your tests are successful I would include it as an option in the next release.