I'm a little dismayed that Joel ranked on my question... laughing at somebody for even asking this question. At least he appreciated the answer it produced! :)
– Jason, Jun 25 '09 at 16:21

1

I believe he laughed because this is a thought pattern common to the "geek" experience. As Jeff pointed out, he blogged about exactly this too. We have all at one point or another believed we had crossed some boundary (enough hardware or enough programming experience) beyond which something no longer applies to us; usually we are wrong, but occasionally we are right. If the programmer (user) never got smarter than the language designer, we would still be programming in FORTRAN, LISP and COBOL.
– Jim McKeeth, Jun 25 '09 at 22:40

5

Jason, it definitely is a good question. That's why I took the time to carefully answer it!
– quux, Jun 26 '09 at 3:18

Drivers are given only a limited amount of memory, divided into the non-paged and paged pool sections. A page file is necessary for when the paged section fills up; as a gamer, I have seen a game complain about paged pool memory simply because I had my page file disabled on an 8 GB system. Page files are necessary: they prevent paged pool depletion and actually do speed up your system.
– Tom Wijsman, Oct 24 '11 at 16:22

13 Answers
13

TL;DR version: Let Windows handle your memory/pagefile settings. The people at MS have spent a lot more hours thinking about these issues than most of us sysadmins.

Many people seem to assume that Windows pushes data into the pagefile on demand. E.g.: something wants a lot of memory, there is not enough RAM to fill the need, so Windows begins madly writing data from RAM to disk at the last minute so that it can free up RAM for the new demand.

This is incorrect. There's more going on under the hood. Generally speaking, Windows maintains a backing store, meaning that it wants to see everything that's in memory also on the disk somewhere. Now, when something comes along and demands a lot of memory, Windows can clear RAM very quickly, because that data is already on disk, ready to be paged back into RAM if it is called for. So it can be said that much of what's in pagefile is also in RAM; the data was preemptively placed in pagefile to speed up new memory allocation demands.

Describing the specific mechanisms involved would take many pages (see chapter 7 of Windows Internals, and note that a new edition will soon be available), but there are a few nice things to note. First, much of what's in RAM is intrinsically already on the disk - program code fetched from an executable file or a DLL for example. So this doesn't need to be written to the pagefile; Windows can simply keep track of where the bits were originally fetched from. Second, Windows keeps track of which data in RAM is most frequently used, and so clears from RAM that data which has gone longest without being accessed.
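Windows' real policy uses working sets and multiple page lists rather than plain LRU, but the two ideas in this paragraph - evict what has gone longest without being accessed, and skip the write-back when a page is file-backed and unmodified - can be sketched in a toy Python model (all names here are illustrative, not any Windows API):

```python
from collections import OrderedDict

class PageCache:
    """Toy LRU model: file-backed clean pages can be dropped for free,
    while anonymous (heap) pages must be written to a pagefile first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()    # page_id -> "file" or "anon"
        self.pagefile_writes = 0

    def touch(self, page_id, kind="anon"):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # mark most-recently-used
            return
        if len(self.pages) >= self.capacity:
            victim, victim_kind = self.pages.popitem(last=False)  # evict LRU
            if victim_kind == "anon":
                self.pagefile_writes += 1     # no file backing: must write out
            # file-backed clean pages are simply discarded; Windows remembers
            # where the bits came from and can re-read them later
        self.pages[page_id] = kind

cache = PageCache(capacity=2)
cache.touch("code1", kind="file")
cache.touch("heap1", kind="anon")
cache.touch("heap2", kind="anon")   # evicts "code1": free, no pagefile write
cache.touch("heap3", kind="anon")   # evicts "heap1": costs a pagefile write
print(cache.pagefile_writes)        # 1
```

The point of the sketch: only the anonymous page cost a disk write, because the executable page already had a backing store.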

Removing the pagefile entirely can cause more disk thrashing. Imagine a simple scenario where some app launches and demands 80% of existing RAM. This would force current executable code out of RAM - possibly even OS code. Now every time those other apps - or the OS itself (!) - need access to that data, the OS must page it in from the backing store on disk, leading to much thrashing. Without a pagefile to serve as backing store for transient data, the only things that can be paged out are executables and DLLs, which had inherent backing stores to start with.
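The thrashing effect described above is easy to reproduce in a toy model: under LRU, a working set just one page larger than RAM faults on every single access (pure illustration, not a simulation of Windows' actual pager):

```python
def hard_faults(ram_pages, access_pattern):
    """Count hard page faults under simple LRU for a stream of page accesses."""
    resident = []   # front = least recently used
    faults = 0
    for page in access_pattern:
        if page in resident:
            resident.remove(page)      # refresh recency
        else:
            faults += 1                # must be paged in from disk
            if len(resident) >= ram_pages:
                resident.pop(0)        # evict the LRU page
        resident.append(page)
    return faults

# Working set fits in RAM: only the initial (cold-start) faults.
fits = hard_faults(ram_pages=4, access_pattern=[0, 1, 2, 3] * 100)
# Working set one page too big: LRU thrashes on every single access.
thrash = hard_faults(ram_pages=4, access_pattern=[0, 1, 2, 3, 4] * 100)
print(fits, thrash)   # 4 500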

There are of course many resource/utilization scenarios. It is not impossible that yours is one of the scenarios under which there would be no adverse effects from removing the pagefile, but these are the minority. In most cases, removing or reducing the pagefile will lead to reduced performance under peak-resource-utilization scenarios.

dmo noted a recent Eric Lippert post which helps in understanding virtual memory (though it is less related to the question). I'm putting it here because I suspect some people won't scroll down to other answers - but if you find it valuable, you owe dmo a vote, so use the link to get there!

On Solaris it was/is even more involved. The swap file is mirrored in a RAM disk via tmpfs, so memory is almost always full - but it is apparently provable that this is the optimal strategy.
– Martin Beckett, Jun 25 '09 at 19:40

I have long believed that, instead of allowing Windows to manage my page file size, I should set it to a fixed amount (e.g. min 2GB, max 2GB), because letting it grow and shrink can cause fragmentation problems. Is that good thinking, or should I follow your first line and let Windows handle everything?
– John Fouhy, Jul 29 '09 at 3:26

As I see from the other answers, I am the only one who disabled the page file and never regretted it. Great :-)

Both at home and at work I have Vista 64-bit with 8 GB of RAM, and both have the page file disabled. At work it's nothing unusual for me to have a few instances of Visual Studio 2008, Virtual PC with Windows XP, 2 instances of SQL Server and Internet Explorer 8 with a lot of tabs all running together. I rarely reach 80% of memory.

I'm also using hybrid sleep every day (hibernation with sleep) without any problems.

I started experimenting with this when I had Windows XP with 2 GB of RAM, and I really saw the difference. A classic example: icons in Control Panel stopped appearing one by one and instead showed up all at once. Also, Firefox/Thunderbird startup times improved dramatically. Everything started to work immediately after I clicked on something. Unfortunately 2 GB was too small for my application usage (Visual Studio 2008, Virtual PC and SQL Server), so I enabled it again.

But right now with 8 GB I never want to go back and enable page file.

For those bringing up extreme cases, take this one from my Windows XP days.
When you try to load a large Pivot Table in Excel from an SQL query, Excel 2000 increases its memory usage pretty fast.
With the page file disabled, you wait a little, then Excel blows up and the system reclaims all its memory afterwards.
With the page file enabled, you wait some time, and by the time you notice something is wrong you can do almost nothing with your system. Your HDD is working like hell, and even if you somehow manage to run Task Manager (after a few minutes of waiting) and kill excel.exe, you must wait a minute or so until the system loads everything back from the page file.
As I saw later, Excel 2003 handles the same pivot table without any problems with the page file disabled - so it was not a "too large dataset" problem.

So in my opinion, a disabled page file sometimes even protects you from poorly written applications.

In short: if you are aware of your memory usage, you can safely disable it.

Edit: I just want to add that I installed Windows Vista SP2 without any problems.

I love it how everyone is saying "Microsoft has spent many hours thinking about this problem, so don't mess with it", yet completely ignore real world experiences. I've had the paging file disabled since XP and never regretted it. It's like the computer got an injection of awesome.
– AngryHacker, Oct 20 '09 at 6:36

3

It's pretty standard practice to disable paging on servers that iSCSI boot; paging over the SAN would be noticeably slow. You just really have to watch your memory usage and stay away from the max.
– Chris S, May 18 '10 at 21:43

6

-1 I don't see any references in this answer. I actually had my system crash because the page file was disabled and my paged pool memory filled up. Yet my physical memory usage was only at 2 GB...
– Tom Wijsman, Dec 4 '11 at 13:59

You may want to do some measurement to understand how your own system is using memory before making pagefile adjustments. Or (if you still want to make adjustments), before and after said adjustments.

Perfmon is the tool for this; not Task Manager. A key counter is Memory - Pages Input/sec. This will specifically graph hard page faults, the ones where a read from disk is needed before a process can continue. Soft page faults (which are the majority of items graphed in the default Page Faults/sec counter; I recommend ignoring that counter!) aren't really an issue; they simply show items being read from RAM normally.

Above is an example of a system with no worries, memory-wise. Very occasionally there is a spike of hard faults - these cannot be avoided, since hard disks are always larger than RAM. But the graph is largely flat at zero. So the OS is paging-in from backing store very rarely.

If you are seeing a Memory - Pages Input/sec graph which is much spikier than this one, the right response is to either lower memory utilization (run fewer programs) or add RAM. Changing your pagefile settings would not change the fact that more memory is being demanded from the system than it actually has.

A handy additional counter to monitor is PhysicalDisk - Avg. Queue Length (all instances). This will show how much your changes impact disk usage itself. A well-behaved system will show this counter averaging at 4 or less per spindle.

I've run my 8GB Vista x64 box without a pagefile for years, without any problems.

Problems did arise when I really used my memory!

Three weeks ago, I began editing really large image files (~2GB) in Photoshop. One editing session ate up all my memory. Problem: I was not able to save my work since PS needs more memory to save the file!

And since it was PS itself that was eating up all the memory, I could not even free memory by closing other programs (well, I did, but it was too little to be of help).

All I could do was scrap my work, enable my pagefile and redo it all - I lost a lot of work due to this, and cannot recommend disabling your pagefile.

Yes, it will work great most of the time. But the moment it breaks might be painful.

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).
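A toy sketch of the point above (illustrative only, not Windows' actual accounting): with a pagefile, modified anonymous pages on the modified list can be flushed out and their RAM repurposed for processes or file cache; without one, they stay pinned in memory:

```python
def reclaimable_pages(pages, have_pagefile):
    """How many resident pages could the OS repurpose for better uses?
    pages: list of (backed_by_file, modified) tuples."""
    free = 0
    for backed_by_file, modified in pages:
        if backed_by_file:
            free += 1        # write back to its own file if dirty, then reuse
        elif have_pagefile:
            free += 1        # modified-list page can be written to the pagefile
        # with no pagefile, a dirty anonymous page is stuck in RAM
    return free

ram = [(True, False), (True, True), (False, True), (False, True)]
print(reclaimable_pages(ram, have_pagefile=True))   # 4
print(reclaimable_pages(ram, have_pagefile=False))  # 2
```

Same four resident pages, but half of them become dead weight the moment there is no pagefile to back them.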

+1, simply for the Mark Russinovich link. It's worth pointing out that Win7 even pops up a notification to point out that you will not be able to "trace down system problems" if you disable the swap file.
– altCognito, Mar 21 '11 at 2:59

Just tried to use Media Player Classic to load a 6GB mkv file. It ran me out of my RAM and pagefile memory. Went back to VLC pretty quick. +1 for the "you never know what you'll run into". Eventually MPC crashed and my RAM was restored, but what if you get a DLL in third party software with a memory leak? You will have a lot more mileage if you have some disk-backed memory to help you out.
– mpbloch, Apr 13 '10 at 23:00

Plus the point, what's the good of having 8GB if you have to live in constant fear of actually using it?!
– David Schwartz, Oct 27 '11 at 9:05

The "keep at least a small pagefile" seems a little weird to me since it is not clear how Windows is going to use it. For instance, it might thrash it even more than a bigger pagefile that offers more space - I'm guessing, but as long as there is no reliable source on this I would consider the small pagefile advice possibly harmful and recommend a standard practice instead.
– mafu, Jan 30 '12 at 21:39

You didn't mention if it's a 64-bit edition of Windows, but I guess yes.

The pagefile serves many things, including generating a memory dump in case of BSoD (Blue Screen of Death).

If you don't have a pagefile, Windows won't be able to page out to disk if there isn't enough memory. You may think that with 8 GB you won't reach that limit. But you may have bad programs leaking memory over time.

I think it won't let you hibernate or standby without a pagefile (but I haven't tried yet).

Windows 7 / 2008 / Vista don't change how the page file is used.

I saw one explanation from Mark Russinovich (Microsoft Fellow) explaining that Windows can be slower without a page file than with one (even with plenty of RAM), but I can't find the root cause again.

Are you out of disk space? I would keep a minimum of 1 GB to be able to have a kernel dump in case of a BSoD.

It was in a Sysinternals video with David Solomon. It had something to do with the kernel paged pool.
– Mathieu Chateau, Jun 10 '09 at 21:19

You can't post an "answer" when you have no idea: I have a Windows Vista 32-bit laptop with 4GB of RAM and I put it into standby all the time. Can you at least restrict yourself to supplying answers to questions you actually know answers to?
– PP., Jan 15 '10 at 17:50

What PP tried to say: The hibernation process uses a file separate from the swap file, so this is not an issue in this case.
– mafu, Jan 30 '12 at 21:41

Pyrolistical, it's probably far too late for you to see this, but turn your statement around and phrase it as a question: When does pagefile slow anything down? A good answer to that would prove your theory.
– quux, Mar 6 '14 at 23:39

The only person that can tell you if your servers or workstations "need" a pagefile is you, with careful use of performance monitor or whatever it's called these days. What apps are you running, what use are they seeing, what's the highest possible memory use you could potentially see?

Is stability worth possibly compromising for the sake of saving a minute amount of money on smaller hard disks?

What happens when you download a very large patch, say a service pack? If the installer service decides it needs more memory than you figured to unpack the patch, what then? If your virus scanner (rightly) decides to scan this very large pack, what sort of memory will it need while it unpacks and scans the patch file? I hope the patch archive doesn't contain any archives itself, because that would absolutely murder your memory use figures.

What I can tell you is that removing your pagefile has a far higher probability of hurting than helping. I can't see a reason why you wouldn't have one - I'm sure there might be a few specialist cases where I'm wrong on that, but that's a whole other area.

I disabled my page file (8 GB x86 laptop) and had two problems, even with 2500 MB free.

1) ASP.NET error trying to activate WCF service : Memory gates checking failed because the free memory (399556608 bytes) is less than 5% of total memory. As a result, the service will not be available for incoming requests. To resolve this, either reduce the load on the machine or adjust the value of minFreeMemoryPercentageToActivateService on the serviceHostingEnvironment config element.

Quite how 3.7GB is less than 5% of 8GB I will never know!!

2) Getting the "Close programs to prevent information loss" dialog: when 75% of my RAM is used I get a dialog box telling me to close programs. You can disable this with a registry modification (or possibly by disabling the 'Diagnostics Policy Service').

In the end I decided to just turn it back on again. Windows just plain and simple was never designed to be used without a page file. It's optimized to run with paging, not without. If you're planning on using more than 75% of your memory and you don't want to mess with your registry - then it may not be for you.

The ASP.NET error strikes me as perhaps being a 32-bit issue, but if the number you provided is correct (399556608 = 399,556,608) then the error is correct - ~400MB is approximately 5% of 8GB.
– fencepost, Oct 12 '10 at 18:11
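fencepost's arithmetic is easy to verify (the 5% figure is the floor quoted in the error message itself):

```python
total = 8 * 1024**3           # 8 GB of RAM, in bytes
free = 399_556_608            # free memory quoted in the error message
threshold = total * 0.05      # the 5% activation floor from the error

print(round(free / 1024**2))       # 381  (MB actually free)
print(round(threshold / 1024**2))  # 410  (MB required)
print(free < threshold)            # True: the error message was right
```

So the "3.7GB" reading was a misparse of the byte count; ~381 MB really is under the ~410 MB floor on an 8 GB box.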

The key question is whether your anticipated total memory usage, for all apps plus the operating system, approaches 8 GB. If your average memory usage is 2 GB and your maximum is only 4 GB, then having a page file is pointless. If your maximum memory usage is closer to 6-7 GB or greater, then it's a good idea to have a page file.

It seems a lot of severely limited people have an opinion on this subject but have never actually tried running their computer without a page file.

Few, if any, have tried. Even fewer seem to know how Windows treats the pagefile. It doesn't "just" fill up when you run out of physical RAM. I bet most of you didn't even know that your "free" RAM is used as a file cache!

You CAN get massive performance improvements by disabling your page file. Your system WILL be more susceptible to out-of-memory errors (and do you know how your applications respond in that scenario - for the most part the OS just terminates the application). Start-up times from standby or long idle periods will be far snappier.

If Microsoft actually permitted you to set an option whereby the pagefile ONLY gets used when out of physical RAM (and all the file buffers have been discarded) then I would think there was little to gain from disabling the pagefile.

Disabling the page file results in performance degradation; you won't see any performance improvement, just increased memory load. Using the page file only when you are out of memory is certainly not what you want...
– Tom Wijsman, Dec 4 '11 at 14:01

This is anecdotal, but we run a Windows Server 2003 Terminal Server for about 20 users, with 10-15 logged on at a time, and it has 8 GB of RAM. We do not run with a pagefile and the server runs faster than it did before. This obviously is not a solution for everything, but we have run like this for 2 years now and have had no issues that I am aware of.

You do have issues, but you don't notice them; consider how an increased memory load can slow down simultaneous requests. Enabling the page file makes such moments snappier...
– Tom Wijsman, Dec 4 '11 at 14:06