It's not slow because of the encryption, it's slow because it's FUSE and it keeps checking the file system state.
– w00t May 19 '13 at 13:40

@w00t I don't think it's FUSE slowing it down rather than the encryption: changing the cipher to arcfour sped it up for me, whereas scp was just as slow as sshfs.
– Sparhawk Sep 28 '13 at 4:57

@Sparhawk there's a difference between throughput and latency. FUSE gives you pretty high latency because it has to check the filesystem state a lot using some pretty inefficient means. arcfour gives you good throughput because the encryption is simpler. In this case latency is most important because that's what causes the editor to be slow at listing and loading files.
– w00t Sep 29 '13 at 11:16

Besides the already proposed solutions of using Samba/NFS, which are perfectly valid, you could also achieve some speed boost while sticking with sshfs by using quicker encryption (authentication would be as safe as usual, but the transferred data itself would be easier to decrypt) by supplying the -o Ciphers=arcfour option to sshfs. It is especially useful if your machine has a weak CPU.
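For reference, a full invocation might look like this; the user, host, and paths are placeholders, and note that current OpenSSH releases have removed arcfour entirely, so this only works against older servers:

```shell
# Mount a remote directory over sshfs using the weaker-but-faster arcfour
# cipher. user, example.com, and both paths are placeholders; arcfour is gone
# from modern OpenSSH, so check availability first with: ssh -Q cipher
sshfs -o Ciphers=arcfour user@example.com:/remote/dir /mnt/remote
```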

-oCipher=arcfour made no difference in my tests with a 141 MB file created from random data.
– Sparhawk Sep 28 '13 at 4:39

That's because there were multiple typos in the command. I've edited it. I noticed a 15% speedup from my Raspberry Pi server. (+1)
– Sparhawk Sep 28 '13 at 4:56

The chacha20-poly1305@openssh.com cipher is also an option worth considering now that arcfour is obsolete. ChaCha20 is faster than AES on ARM processors, but far slower on x86 processors with AES instructions (which all modern desktop CPUs have as standard these days). klingt.net/blog/ssh-cipher-performance-comparision You can list supported ciphers with "ssh -Q cipher".
– TimSC Nov 20 '17 at 20:48

I do not have any alternatives to recommend, but I can provide suggestions for how to speed up sshfs:

sshfs -o cache_timeout=115200 -o attr_timeout=115200 ...

This should avoid some of the round trip requests when you are trying to read content or permissions for files that you already retrieved earlier in your session.
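Spelled out as a complete invocation (an illustrative sketch; the user, host, and paths are placeholders, not part of the original answer):

```shell
# Mount with very long caching of both directory contents and file
# attributes (115200 seconds = 32 hours), so repeated stats of already-seen
# files skip the network round trip. user, example.com, and the paths
# are placeholders.
sshfs -o cache_timeout=115200 -o attr_timeout=115200 \
    user@example.com:/remote/project /mnt/project
```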

sshfs simulates deletes and changes locally, so new changes made on the local machine should appear immediately, despite the large timeouts, as cached data is automatically dropped.

But these options are not recommended if the remote files might be updated without the local machine knowing, e.g. by a different user, or a remote ssh shell. In that case, lower timeouts would be preferable.

Here are some more options I experimented with, although I am not sure if any of them made a difference:

Recursion

The biggest problem in my workflow is when I try to read many folders, for example in a deep tree, because sshfs performs a round trip request for each folder separately. This may also be the bottleneck that you experience with Eclipse.

Making requests for multiple folders in parallel could help with this, but most apps don't do that: they were designed for low-latency filesystems with read-ahead caching, so they wait for one file stat to complete before moving on to the next.

Precaching

But something sshfs could do would be to look ahead at the remote file system, collect folder stats before I request them, and send them to me when the connection is not immediately occupied. This would use more bandwidth (from lookahead data that is never used) but could improve speed.

We can force sshfs to do some read-ahead caching, by running this before you get started on your task, or even in the background when your task is already underway:

find project/folder/on/mounted/fs > /dev/null &

That should pre-cache all the directory entries, reducing some of the later overhead from round trips. (Of course, you need to use the large timeouts like those I provided earlier, or this cached data will be cleared before your app accesses it.)

But that find will take a long time. Like other apps, it waits for the results from one folder before requesting the next one.

It might be possible to reduce the overall time by asking multiple find processes to look into different folders. I haven't tested to see if this really is more efficient. It depends whether sshfs allows requests in parallel. (I think it does.)
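An untested sketch of that idea, with hypothetical subfolder names: each background find walks a different subtree, so their round trips to the server can overlap.

```shell
# Pre-cache three subtrees in parallel; the folder names are placeholders.
# Each find issues its own stream of stat requests, overlapping the latency
# instead of paying for it serially.
find /mnt/remote/project/src  > /dev/null &
find /mnt/remote/project/lib  > /dev/null &
find /mnt/remote/project/docs > /dev/null &
wait  # return once all background walks have completed
```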

If you want to read file contents to get them into the cache, wc -l works well. It just counts newline (0x0A) bytes in the file, so it reads the file once without outputting the contents.
– Mikko Rantalainen Mar 26 at 15:04
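Combining that with find gives a one-liner that pulls every file's contents into the cache; the project path is a placeholder.

```shell
# Read every regular file under the mount once, discarding the line counts.
# wc -l forces a full read of each file's contents without printing them.
find /mnt/remote/project -type f -exec wc -l {} + > /dev/null
```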

It is efficient with mv. Unfortunately when you run cp locally, FUSE only sees requests to open files for reading and writing. It does not know that you are making a copy of a file. To FUSE it looks no different from a general file write. So I fear this cannot be fixed unless the local cp is made more FUSE-aware/FUSE-friendly. (Or FUSE might be able to send block hashes instead of entire blocks when it suspects a cp, like rsync does, but that would be complex and might slow other operations down.)
– joeytwiddle Sep 8 '16 at 5:00

After some searching and trial and error, I found that adding -o Compression=no speeds it up a lot. The delay may be caused by the compression and decompression process. Besides, using 'Ciphers=aes128-ctr' seems faster than the others, as some posts have done experiments on this. My command then looks something like this:
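The exact command did not survive in this excerpt; based on the options named above, it presumably resembled the following, with the user, host, and paths as placeholders:

```shell
# Disable SSH-level compression and pick a fast AES-CTR cipher.
# user, example.com, and both paths are placeholders, not the original
# poster's actual values.
sshfs -o Compression=no -o Ciphers=aes128-ctr \
    user@example.com:/remote/dir /mnt/remote
```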

Either NFS or Samba if you have large files. Using NFS with something like 720p movies and such is really a PITA. Samba will do a better job, though I dislike Samba for a number of other reasons and wouldn't usually recommend it.

If everything worked on your end, by this point you should have a successful mount. You might want to check and make sure the destination directory is shared, using the "exportfs" command, to guarantee it can be found.

Hope this helps. This is not from a live environment; it has been tested on a LAN using VMware and Fedora 16.