2 Answers

That kind of depends on which service is providing the NAS functionality. Samba? NFS? All of the above? At its very base, Linux will use all available unallocated memory as a file cache. That memory is consumed by default through the kernel's normal page-cache mechanisms, especially on a 64-bit kernel, which can directly address all of the RAM.
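You can watch this behavior directly: the kernel reports how much RAM is currently acting as file cache. A read-only check, safe to run on any Linux box:

```shell
# Overall memory usage; the "buff/cache" column is the page cache.
free -h

# The same figures in more detail, straight from the kernel:
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

As files are served off the NAS, Cached grows toward "all remaining RAM" and shrinks only when applications actually need the memory back.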

16GB is quite a lot of RAM for that problem, and yet it can be just right for what you need. It all depends on how much of your data is in active use at any given time. If your 'working set' of active/open files is over 12GB, then 16GB of RAM is about right. Ideally you want all of the open files to fit into the server-side cache in order to get maximum performance. What level that sits at depends on your environment, so there is no set answer.

It's good to have all that data fit in RAM for several reasons, not least for writes: it allows the server to reorder I/O to minimize hard-drive latency, something the RAID card will also do.
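The kernel exposes the knobs that govern this write buffering. A quick look at the standard Linux writeback sysctls (reading them needs no privileges; the right values depend entirely on your workload and on how much unflushed data you can afford to lose on power failure):

```shell
# How much of RAM may hold dirty (not-yet-written) pages before writers
# are throttled, and before background writeback kicks in:
sysctl vm.dirty_ratio vm.dirty_background_ratio

# How long dirty pages may sit in RAM before they must be flushed,
# in hundredths of a second:
sysctl vm.dirty_expire_centisecs
```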

Linux is going to use your unallocated RAM as a cache by default... but your scenario raises two questions immediately in my mind:

The network is likely going to be a bottleneck

What happens to your cache when the server reboots, especially unexpectedly? How are writes committed to disk?
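One concrete answer to the durability question: dirty pages live only in RAM until the kernel writes them back, so anything not yet flushed is gone after a crash. You can check your exposure and force a flush by hand:

```shell
# Data currently dirty in RAM (would be lost on power failure),
# and data actively being written back right now:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Force all dirty pages out to disk, e.g. before a planned reboot
# or after a large copy:
sync
```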

You could certainly set up a ramfs and share it over the network, but you really need to be sure to flush it to more permanent storage as well. If your data set is read-only (or even just sees occasional writes), this might work perfectly. But for user data or general file shares - yikes!
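If you do go down that road, tmpfs (which, unlike ramfs, honors its size limit and can swap) plus a periodic flush is the usual shape. A minimal config sketch; the mount point, size, and sync target here are illustrative assumptions, not recommendations:

```shell
# /etc/fstab: a RAM-backed filesystem to export over the network share
# tmpfs  /srv/ramshare  tmpfs  size=12g,mode=0775  0  0

# crontab: copy the RAM-resident data to permanent storage every 5 minutes
# */5 * * * *  rsync -a --delete /srv/ramshare/ /srv/disk-backed/ramshare/
```

Even then, anything written in the window between flushes is lost on a crash, which is why this only suits read-mostly data.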

I don't know what your specific environment looks like, but if that server has excess capacity you might be much better off virtualizing it; another play/proving/test ground is generally quite welcome.