Posted by timothy on Saturday February 20, 2010 @04:30PM from the nothing-but-dihydrogen-monoxide-to-drink dept.

Via newsycombinator comes a reaction at Ars Technica to the recently reported claims of excessive memory use on machines running Windows 7. From the article: "I installed the XPnet performance monitoring tool and waited for it to upload my data to see what it might be complaining about. The cause of the problem was immediately apparent. It's no secret that Windows 7, just like Windows Vista before it, includes aggressive disk caching. The SuperFetch technology causes Windows to preload certain data if the OS detects that it is used regularly, even if there is no specific need for it at any given moment. Though SuperFetch is a little less aggressive in Windows 7, it will still use a substantial amount of memory—but with an important proviso. The OS will only use memory for cache when there is no other demand for that memory."

Though SuperFetch is a little less aggressive in Windows 7, it will still use a substantial amount of memory—but with an important proviso. The OS will only use memory for cache when there is no other demand for that memory.

I really wonder when people will get this. In the earlier thread I saw people commenting that Windows 95 didn't need so much memory, and so on.

To state it again: this is not RAM you need, use, or have a purpose for. If you do need it, it is zeroed out and freed to the application in something like 30 ms (about one frame in a typical FPS game).

That is for the case where you need to free something like 1 GB of RAM at once; apart from the likes of Photoshop and games, most applications don't need that much. In any case, the memory has to be zeroed out for security, so that one application cannot randomly read data left behind by another.

Setting the technical points aside, 30 ms to free that RAM is a very short time. You won't even notice it.

You might want to grab a copy of Process Explorer sometime and look at the stats it reports. You'll notice that Windows actually spends idle time pre-zeroing RAM, so this work is already done, in more than sufficient quantities. If your system is slammed, I could see having to zero pages just before use, but even then it's the sort of thing that could be done while waiting for other I/O operations to complete (since your system is slammed anyway :) )

My laptop has 2.8 million zeroed pages at the moment (it has 8 GB, and I don't have much running right now, so there's not a lot to cache).

I didn't make a snide remark. I pointed out that Windows maintains a pool of zeroed RAM and constantly refills it whenever it's able to.

Windows will allocate from that pool first, and then top it back up from the SuperFetch cache when it hits a low-free-memory threshold. It then has the luxury of handling this in the background, so you're not waiting for the system to zero memory out unless you somehow manage to totally max out the memory you've got in use. At that point you've got bigger problems, like the fact that the disk will probably become your bottleneck, not the speed at which you can zero out RAM.

I didn't miss your point. I'm suggesting that you're not thinking like an operating system designer and asking "how can I shorten the critical path of giving a process the memory it requests?" The answer is: pre-zero memory, and refill the zeroed pool from the cache as needed when it runs low. There's no waiting for zeroed-out memory at that point.
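
To make that concrete, here is a deliberately tiny C sketch of the idea. This is not how Windows actually implements it; the names and data structures are invented purely for illustration. The fast path hands out a page that was already zeroed during idle time, and zeroing only lands on the allocation path when that pool runs dry.

/* Toy "zeroed pool first" page allocator - illustrative only, not Windows
 * internals. Each page_t header is assumed to sit at the start of a real
 * 4 KB page, so memset over PAGE_SIZE bytes is meaningful. */
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

typedef struct page { struct page *next; } page_t;

static page_t *zeroed_list;   /* pages zeroed ahead of time during idle      */
static page_t *free_list;     /* pages reclaimed from cache, not yet zeroed  */

/* Critical path: hand out a pre-zeroed page whenever one is available. */
void *alloc_page(void)
{
    if (zeroed_list) {
        page_t *p = zeroed_list;
        zeroed_list = p->next;
        p->next = NULL;                /* clear the embedded link: page stays all-zero */
        return p;
    }
    if (free_list) {                   /* slow path: zero on demand */
        page_t *p = free_list;
        free_list = p->next;
        memset(p, 0, PAGE_SIZE);
        return p;
    }
    return NULL;                       /* a real OS would reclaim from the cache here */
}

/* Background/idle work: keep the zeroed pool topped up. */
void zero_pages_when_idle(void)
{
    while (free_list) {
        page_t *p = free_list;
        free_list = p->next;
        memset(p, 0, PAGE_SIZE);
        p->next = zeroed_list;
        zeroed_list = p;
    }
}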

This is another reason why having more than 2G of RAM on a 32-bit OS is dumb.

What are you smoking? Even on 32-bit Windows, without any modifications, the 2GB limit is per application. I have yet to see a machine "lose" more than 1GB to address space (admittedly, no CrossFire or SLI setups).

I have 4GB in my 32-bit development machine and regularly use it all. And I mean all - I use Gavotte's RAM Disk to create a 512MB RAM disk in the "unusable" portion that I use to hold my temp directory and other things I

Have you tried using PAE in Windows? It was supported in the original release of XP, but Microsoft removed support for it in either SP1 or SP2. Their claim was that buggy third-party device drivers didn't handle PAE correctly, so the system would become unstable. I'm not sure about Windows 7, but PAE does not work in Vista; I tried it with the original release and SP1. If you need 4GB or more, just use 64-bit Windows 7. Hardware manufacturers have finally gotten their act together and you have

No, Windows actually uses PAE, and Datacenter versions of the OS support more than 4 GB RAM under PAE. However, almost all the versions of 32-bit Windows intentionally limit the physical address space size to 4 GB, even when PAE is enabled. This is apparently for driver compatibility reasons (so that physical memory pointers will never exceed 32 bits).

Yes, and nanoseconds (10^-9) multiplied by the number of memory locations to clear (10^6 when you're talking multi-MB chunks of memory) gets us right back in the millisecond (10^-3) range. Which is just a blink of the eye for us humans, btw.

Yep, same for Linux. My Linux boxes use ALL the available memory even if I am not running many applications on them. The leftover memory SHOULD be used as buffers/cache. If Windows 7 seems to use more memory from a newbie point of view, it might be because it does what it should better than previous versions. I can't tell for sure since I have never tried Win 7.

Take this 4 GB Linux machine: it has only ~49 MB of "absolutely free" memory and uses ~449 MB of swap.

In reality, it has ~2842 MB of "available memory", since it uses ~2792 MB as buffers/cache.

Using buffers/cache makes the system an order of magnitude faster. If programs need that memory, the OS will give it to them and use less for buffers/cache.
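
If you want to compute that "really available" figure yourself instead of reading it off free, a minimal C sketch could look like the following. It assumes the usual MemFree/Buffers/Cached lines in /proc/meminfo and keeps error handling short.

/* Minimal sketch: sum MemFree + Buffers + Cached from /proc/meminfo to get
 * the "-/+ buffers/cache" style available figure. Assumes a typical
 * /proc/meminfo layout. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }

    char line[128];
    long free_kb = 0, buffers_kb = 0, cached_kb = 0, v;

    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "MemFree: %ld kB", &v) == 1)      free_kb = v;
        else if (sscanf(line, "Buffers: %ld kB", &v) == 1) buffers_kb = v;
        else if (sscanf(line, "Cached: %ld kB", &v) == 1)  cached_kb = v;
    }
    fclose(f);

    printf("absolutely free:                  %ld MB\n", free_kb / 1024);
    printf("available (free + buffers/cache): %ld MB\n",
           (free_kb + buffers_kb + cached_kb) / 1024);
    return 0;
}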

You, sir, misunderstand the basic concepts of an OS and thus the relevance of setting up a swap space.

Swap is used by my machine to swap out processes that I haven't used for days, as well as parts of processes that haven't accessed the swapped part of their memory for days. I have almost no swap activity (exchange between the on-disk swap and memory).

I am grateful that Linux is smarter than you seem to be: it figured out that it was more efficient to swap out those processes and to regain t

So, pray tell, where do I learn the meanings of the various stats in Task Manager?

You can press F1 while in task manager and then search for a particular metric, e.g. "available memory". This produces results that seem moderately useful, for example:

Under Physical Memory (MB), Total is the amount of RAM installed on your computer, listed in megabytes (MB). Cached refers to the amount of physical memory used recently for system resources. Available is the amount of memory that's immediately available for use by processes, drivers, or the operating system. Free is the amount of memory that is currently unused or doesn't contain useful information (unlike cached files, which do contain useful information).

For more details about particular counters you can check the Windows Internals book, or Memory Performance Information [microsoft.com] on MSDN. Also, many counters in Task Manager have similar or identical perfmon counters, and perfmon has its own help (IIRC there's a "Show description" option in the counter selection dialog).
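
And if you would rather pull those numbers programmatically than out of Task Manager, a minimal Win32 sketch using GlobalMemoryStatusEx (which reports total and currently available physical memory, among other things) would be:

/* Minimal sketch: query total and available physical memory on Windows.
 * "Available" is the same notion Task Manager shows: memory that can be
 * handed out immediately without hitting the disk. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);                 /* must be set before the call */

    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Total physical:     %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("Available physical: %llu MB\n", ms.ullAvailPhys / (1024 * 1024));
    printf("Memory load:        %lu%%\n", ms.dwMemoryLoad);
    return 0;
}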

More to the point, the company that wrote this little monitoring tool badly misunderstood basic principles of how the operating system works. At this point, I think we can move on and completely disregard any conclusion they came to. It demonstrated either profound ignorance or a deliberate attempt to mislead people, in what turned out to be a slashvertisement for their products and company.

From the article:

One might almost think that this whole exercise was simply a cynical ploy. Allegations of Microsoft bloatware are, of course, nothing new, and oblique references to the old canard that what Intel gives, Microsoft takes away does nothing to dispel the impression that this is another case of Microsoft bashing.

What a surprise. Fortunately, people didn't really let them get away with it even in the previous article. Microsoft deserves plenty of what Slashdot slings its way, but let's try sticking to facts.

"The OS will only use memory for cache when there is no other demand for that memory"

OK, I'm not going to bother reading the smart people; I'm going to go straight to my point.

If you are using nearly all available RAM for disk cache, then EVERY REQUEST FOR RAM WILL REQUIRE CACHE DUMP.

It's like this:

If you have 4GB RAM and are using, say, 1.5GB for applications and system, and you use 2.2GB RAM for cache, then you are left with 300MB approx for any new demand. So any demand in excess is going to make your

You seem to think that it's not a read-only cache. SuperFetch caches disk blocks as they appear on the disk. "Dumping" them just means no longer considering them valid cache; there is no need to write them out.

The vast, vast, vast, vast majority of that cached memory is read-only caches (like DLL caching and SuperFetch), which don't need to be "dumped". Some small, very small, portion of it is read/write disk cache, but that portion is never going to be dumped unless you're *completely* out of memory otherwise. And that's basically a "last resort failure mode" at that point.

You're as bad as the guys who wrote that article in the first place. If you don't know how Windows works, please don't talk about it.

You're as bad as the guys who wrote that article in the first place. If you don't know how Windows works, please don't talk about it.

Hell, it's not just Windows. All operating systems do this... and to be quite frank, programmers of all kinds should have caching techniques well understood. So the GP is neither a Windows guru nor a decent programmer. The odds are very good that he's just an I-use-software geek, rather than someone who knows anything about computers.

To state it again: this is not RAM you need, use, or have a purpose for. If you do need it, it is zeroed out and freed to the application in something like 30 ms (about one frame in a typical FPS game).

It's more like 100ms on an average PC, but yes, you are correct.

But since background stuff will be happening too, maybe 120ms...

If 120ms isn't an acceptable delay, then you need an OS where programs are geared for low disk IO usage, and low memory usage. That will prevent any software from interfering with any other software, giving very fast and consistent performance.

Selection of software is big. For example, the difference between My Uninstaller [nirsoft.net] and Add/Remove in XP is huge. You wouldn't notice on a fast

By design, your favorite apps would be precisely the ones to benefit from SuperFetch.

Exactly, I still can't believe all the complaints people have against Superfetch ever since Vista came out. The whole purpose of it is that it monitors your computer usage and keeps copies of commonly used DLLs and applications in memory. So when you want to start one of your favorite apps, the files it needs are already in memory and it can start instantly. Without Superfetch you'd be reading the files off the hard driv

To state it again: this is not RAM you need, use, or have a purpose for. If you do need it, it is zeroed out and freed to the application in something like 30 ms (about one frame in a typical FPS game).

The problem with previous versions of Windows (I haven't used anything newer than XP) is in how the OS decides that you do not "need, use or have a purpose for" certain types of memory.

The pathological, and yet all too common, case with XP is the OS's decision to dump text pages in favor of disk cache far too soon. The result is that if you have multiple apps open, a few of which you haven't touched for roughly 10 minutes, and you then go to copy a couple of gigabytes of files around, the text pages for those 'idle' applications are flushed out and the disk cache is loaded with parts of the copied files (which you are unlikely to ever need). When you click on the iconbar to bring one of those formerly idle apps back to the foreground, the system grinds away for a long time (machine dependent, obviously, but never instant and frequently well beyond the point of annoying) as it reloads those text pages from disk before the application even starts to redraw itself, much less becomes fully interactive again.

The worst part about that behavior is that, to the best of my knowledge, there are no knobs to tweak it. I can't specify how long a text page needs to be idle before it becomes a candidate for flushing, or pin it down permanently so that it is never paged out. I once went looking to see if there was a way to do it from within the application code itself - something like mlock()/mlockall() in POSIX - and I couldn't find an equivalent, which may just be a reflection of my own inexperience with the Windows API, but I figured I would throw that out there anyway.

I once went looking to see if there was a way to do it from within the application code itself - something like mlock()/mlockall() in POSIX - and I couldn't find an equivalent, which may just be a reflection of my own inexperience with the Windows API, but I figured I would throw that out there anyway.

The function you're looking for is VirtualLock [microsoft.com]. You may also look into increasing the process's minimum working set with SetProcessWorkingSetSize [microsoft.com]. This requires SeIncreaseBasePriorityPrivilege.
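
A minimal sketch of how those two calls fit together (the sizes below are arbitrary example values and error handling is kept short): VirtualLock can only pin pages that fit inside the process's minimum working set, so the working set is raised first.

/* Sketch: raise the minimum working set, then lock a buffer so it cannot be
 * paged out. The 16/64/4 MB figures are just example values. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T min_ws = 16 * 1024 * 1024;   /* minimum working set: 16 MB */
    SIZE_T max_ws = 64 * 1024 * 1024;   /* maximum working set: 64 MB */

    if (!SetProcessWorkingSetSize(GetCurrentProcess(), min_ws, max_ws)) {
        fprintf(stderr, "SetProcessWorkingSetSize failed: %lu\n", GetLastError());
        return 1;
    }

    SIZE_T len = 4 * 1024 * 1024;       /* 4 MB region to pin */
    void *buf = VirtualAlloc(NULL, len, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!buf || !VirtualLock(buf, len)) {
        fprintf(stderr, "VirtualLock failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... use buf; it stays resident until VirtualUnlock or process exit ... */

    VirtualUnlock(buf, len);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}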

A process that is scanning through a file is supposed to use the FILE_FLAG_SEQUENTIAL_SCAN hint so that the cached pages are recycled first, but that doesn't always happen. It also doesn't help that csrss will ask the kernel to minimize a process's working set when its main window is minimized.
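
For reference, that hint is just a flag passed to CreateFile. A sketch of a sequential reader that asks the cache manager to favor read-ahead and recycle these cached pages before others might look like this (the path is a placeholder):

/* Sketch: open a file with FILE_FLAG_SEQUENTIAL_SCAN so its cached pages are
 * recycled ahead of other standby memory. The path is a placeholder. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("C:\\temp\\bigfile.bin", GENERIC_READ,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    char buf[64 * 1024];
    DWORD got;
    while (ReadFile(h, buf, sizeof buf, &got, NULL) && got > 0) {
        /* process buf[0..got) sequentially */
    }

    CloseHandle(h);
    return 0;
}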

Starting with Vista, working sets of GUI processes are no longer emptied when the main window is minimized.

For the standby cache recycle problem, Superfetch can help a lot. First of all, it can detect when apps do things like read lots of files sequentially without using FILE_FLAG_SEQUENTIAL_SCAN (or when they do this through a mapped view) and deprioritize these pages so they don't affect normal standby memory. And if useful pages still end up being recycled (e.g. because some app temporarily consumed lots of memory), Superfetch can re-populate them from disk later.

#1) Linux _doesn't_ have all the best and greatest technology built in.
#2) It even has some really crappy technology in it.
#3) guess what the "-/+ buffers/cache" line in the output of "free" means
#4) guess what the "buffers" and "cached" columns of "free" mean
#5) guess what prefetch/preload is.

Linux (and in fact every recent Windows, Mac OS X, and Linux distro) uses the very same technology, and Linux did this really early too. I don't know the full story of intelligent RAM usage, but the first time I saw it in use was on Linux servers. And if you compare it to the old "demand and receive, fully" memory model, it's a lot better.

You can even (trivially) roll your own in Linux: run lsof occasionally to build some stats on commonly opened files, then cat them to /dev/null to fill the filesystem buffer cache. I'm not sure anybody would even bother to give it a name like "SuperFetch", never mind a trademark like "Microsoft Windows SuperFetch®". It's kind of sad and depressing really.
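
A rough C equivalent of that cat-to-/dev/null warm-up, for anyone curious, would be to ask the kernel to pull the files into the page cache with posix_fadvise(POSIX_FADV_WILLNEED). The file list below is a made-up example; in a real "poor man's prefetch" it would come from the lsof stats.

/* Rough sketch: hint the kernel to load some files into the page cache,
 * similar in spirit to cat'ing them to /dev/null. Paths are placeholders. */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void warm(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return; }
    /* Offset 0, length 0 means "the whole file will be needed soon". */
    if (posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED) != 0)
        fprintf(stderr, "posix_fadvise failed for %s\n", path);
    close(fd);
}

int main(void)
{
    const char *hot_files[] = { "/usr/bin/firefox", "/usr/lib/libgtk-x11-2.0.so.0" };
    unsigned i;
    for (i = 0; i < sizeof hot_files / sizeof hot_files[0]; i++)
        warm(hot_files[i]);
    return 0;
}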

You have no idea what you're talking about. SuperFetch is much more than stats on commonly opened files: it takes into account the time of day, weekends, etc., among other advanced stats. Anyway, if it's so trivial to roll your own, why doesn't such a thing run by default in Ubuntu? That's the thing that's sad and depressing, not the giving of names to technology.

SuperFetch also keeps track of what times of day that applications are used, which allows it to intelligently pre-load information that is expected to be used in the near future.
Source: Wikipedia [wikipedia.org]
Their work, says Horvitz, was able to predict which applications users would open by time of day and also by day of the week.
Source: InfoWeek [informationweek.com]

Recent versions of Ubuntu preload some stuff on boot at least. For doing it during runtime there's the "preload" package. Now the good thing is that if I think it's pointless, then I don't need to install it, and it isn't on my system.

And I concur with the grandparent: naming various mostly uninteresting technologies with some moniker to make them sound more exciting is annoying, and I enjoy the lack of that stuff in Linux.

One of the main tricks, which you're completely glossing over, is the algorithm that determines what files are likely to be wanted/needed at any particular time of day. Sure, lsof will give you the raw data on what's open when, but making something useful out of that is a touch trickier than you suggest.

Additionally - and forgive my lack of Linux kernel expertise; I never thought to check this - is there a limit on the size of the filesystem cache? If it's less than a couple gigabytes, you're not going to b

There's probably a psychological threshold. If most Linux desktop apps are small, say a couple of hundred mebibytes, then it's going to take, say, 15 seconds to get the data from disk, plus 1.5 seconds to access it in memory and 5 or so to process it; if you cache it in RAM, you're down to just the 1.5 seconds to access it in memory and 5 to process (you still have to move the data to the processor). So you've gone from a system that takes roughly 21.5 seconds to load to one that takes 6.5, about a third. Yes, that's all canned off the top of my

And though your English has some minor mistakes every now and then, it's not like you are writing in such a way that we cannot understand you. Your English is better than my grasp of Spanish by many miles. Congrats.

I chose Spanish because that is the only foreign language that I have attempted to learn, not because of failing geography class. That last bit is why I don't think Europe is a country, just like some other people. [youtube.com]

I think it's just a sign of the times. I regularly bump up against my 2GB RAM limit (once a day) if I have GIMP/Photoshop open, 3 or 4 Chrome windows open with 10-20 tabs each (many of those being YouTube videos), usually a videogame in the background (windowed no-border mode at full or almost-full screen resolution rules), along with whatever else I'm doing: a paused VLC video, Steam, any other background apps, plus whatever I'm working on at the moment. This isn't a problem with Win7; it's a problem of Leaving a Bunch of Shit Open All the Time.

I often find myself with a long list of tabs open, presenting a history of my travels during searches and allowing quick backtracking to various points. YouTube or other Flash content isn't unusual in many of those tabs. It's just easier than closing this or that tab only to find that the one you closed had a potential link or piece of information you now need. When all is said and done I just X the window and start fresh, but up until that point you could potentially have a vast number of pages of all sorts of content open

I highly, HIGHLY recommend a Flash-blocking add-on like FlashBlock [mozilla.org] for Firefox. There will be a play button where each embedded Flash video would be, and it won't load them until you click play. You can of course whitelist sites that you'd like to load all Flash from. But now you don't have to have those 10 pages in tabs, each with 2, 3 or more Flash ads or graphics eating up CPU cycles.

Sometimes it's a catchy song I want to listen to again later, but probably don't want to favorite. Or I did a search and found more than one interesting, tangential video I might want to watch later, but don't have time to now. Other times I simply forget to close them. Sometimes I leave them open to link to later in a blog or email/facebook etc. Maybe if there was some sort of intermediate between "youtube favorites" and "web browser history", I would replace my current system.

Linux uses available memory for cache, and rather aggressively. All available memory can be filled with cached file blocks. This happens routinely on systems which have big randomly-accessed files open, like databases.

There's nothing wrong with this, except that, once in a while, Linux hits a race condition in prune_one_dentry, causing an "oops" crash, when there's an unblockable need for a memory page and something is locking the file block cache.

This is one of the Great Unsolved Mysteries of Linux. Linus wrote about it in 2001 [indiana.edu] ("I'll try to think about it some more, but I'd love to have more reports to go on to try to find a pattern.."). As of 2009, this area is still giving trouble. [google.com] The locking in this area is very complex. [lwn.net]

if everyone is so afraid of their computer memory being used to the fullest, why do these people install so much of it?

I've got 8GB of ram in the machine I'm on at the moment, and I want the OS and applications to use it to the fullest and most efficient extent possible at all times. I didn't install a 64-bit OS and 8GB of ram so that I can see 6GB free at all times.

if everyone is so afraid of their computer memory being used to the fullest, why do these people install so much of it?

Most users remember back to at least the '90s. You had to install enough RAM to do what you needed (load the OS and program(s); WinNT had the audacity to require 8MB to run well). There was no caching of any useful sort, so your free memory was really a measure of how many programs you could load. Programs like Photoshop added scratch files to overcome the physical RAM limits, but at a horrible performance penalty should you actually have to use them. "Free RAM" became synonymous with "how many things you could do or open simultaneously."

All modern operating systems have moved on, but people haven't been educated about this. They remember how bad it was when they ran out of memory, and panic when the OS reports it's almost full. Honestly, it would be far better if MS had reported the cached memory differently. I don't really care how much memory is used as SuperFetch cache most of the time; I'm more concerned with the total active usage. My netbook "only" has 2GB, but I do some 24/96 audio recording with it, and will occasionally work with Photoshop images, so I am concerned if I have less than 500-600MB free when I open a session, as I'm likely to exceed the physical RAM. I can read the data, so it's not a big deal, but others freak out about it.

But it makes me feel so awesome to see 6GB free. I'm all like, "Damn, I have a lot of RAM!" When the RAM is all full, my system monitoring graphs don't look as cool. I also like seeing my CPU utilization showing 4 cores, each idling around 1%, and having multiple terabytes of free space on my hard drives. All those graphs get ruined if you actually use your computer for stuff.

The last article specifically said RAM was nearly exhausted and there was excessive paging to disk. No one cares if RAM is full or not, if it's unused it's wasted anyway. The concern is having 85% memory utilization and then paging memory out to the pagefile.

A good OS uses all the RAM, and allocates available free blocks of RAM to the programs as required.

However, using the greater part of a gigabyte plus paging to the hard drive just to display the desktop and run the low-level functions is inexcusable, and points to either a) a memory leak, b) the OS doing something legitimate you are unaware of, like indexing files, etc., or c) the OS doing something illegitimate, like sending the contents of your hard drive to someone in Redmond, the NSA/FBI or the RIAA/MPAA or

Both articles miss some very big and important points. Back in the day of Windows 2000 and XP, the Task Manager chart reported the memory commit charge. Basically, that was the amount of memory that applications (and Windows) had requested be allocated. This does not mean that much memory was actually used, but with the exception of very badly written/buggy programs, it should be close. As a rule of thumb, if you look at that and see that your commit is significantly larger than your RAM, you know you're probably in trouble and will be very reliant on swap.

Windows Vista and 7 report something completely different. The chart shows physical memory used minus cache, an almost useless metric, but it does not indicate how much 'total' memory, real and virtual, is allocated. If you look at the screenshot in the Ars article, you will see that the commit charge is over 3GB. That's a lot of memory, and it doesn't include cache!

At the end of the day, however, a bare-bones Windows XP requires about 120MB of memory, whereas Windows 7 is around 1GB. That sounds like a big difference, but we are talking several years of new features and eye candy. Ultimately, when you drill it down, it means that Windows 7 requires $20 more worth of memory. An insignificant issue, so long as you keep it in mind when designing a system for Vista / Windows 7 (i.e., make sure that any computer or device destined for those OSes has at least 2GB of RAM).

Back in the day of Windows 2000 and XP, the Task Manager chart reported the memory commit charge. Basically, that was the amount of memory that applications (and Windows) had requested be allocated. This does not mean that much memory was actually used, but with the exception of very badly written/buggy programs, it should be close

Not necessarily. Many programs commit large chunks of memory in case they need it later but only use a small portion initially. This simplifies program logic because you don't have to free and reallocate the buffer when you need more space, deal with potential reallocation failures, etc. Or a program might want to specify a larger-than-default stack commit size to make sure it doesn't hit a stack overflow if it tries to extend the stack while the system is temporarily out of commit (most services and other system-critical processes do that). Or it might map a copy-on-write view of a file, in which case commit is charged for the entire view but no extra physical memory is used until the program actually writes to the pages. And so on. The end result is that you can't really say anything conclusive about physical memory usage by looking at commit charge.
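
A small illustration of that "commit now, touch later" pattern (the sizes are arbitrary examples): commit charge rises by the full region size the moment VirtualAlloc succeeds, but physical pages are only consumed as the program actually writes to them.

/* Sketch: commit a large region up front but only touch a small part of it.
 * Commit charge grows by the full 256 MB immediately; the working set only
 * grows by the pages that are actually written. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 256 * 1024 * 1024;    /* 256 MB committed up front */
    char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) {
        fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    /* Touch only the first 1 MB: roughly 256 pages become resident, the
       rest remain "charged but unused" commit. */
    SIZE_T i;
    for (i = 0; i < 1024 * 1024; i += 4096)
        p[i] = 1;

    printf("Committed 256 MB, touched ~1 MB; compare commit charge vs. working set.\n");
    getchar();                          /* pause so you can look at Task Manager */

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}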

Commit charge is a virtual memory metric. It's great for detecting memory leaks and deciding how big your pagefile needs to be, but not so great for understanding physical memory usage. Often it might seem like there is a correlation between commit charge and physical memory, but you can also find systems that are very low on available RAM yet have plenty of available commit, and vice versa.

Task manager now shows used physical memory (defined as Total - Available). Available memory is the most straightforward way to understand whether your system needs more memory or not, and this is why in Vista/Win7 it was chosen as the main indicator of "memory usage".

Still fails badly, though, because Task Manager will show lots of available memory when a lot of caching is being done, depending on how the system is tuned (sometimes known as swappiness in Linux; I'm not entirely sure where to find the tuning parameters in Windows). It's not very hard to find systems that consider more than 30% of memory available but are considerably slowed down by swap activity. Of course, the only way to really prove that is with some kind of swap monitor that looks for excessive sw

That's why you have metrics for page faults. If you don't have any page faults, you're not swapping. If you have some, you're probably slightly over-utilized or doing something odd. If you have a lot, you have a problem.
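
As a per-process sketch of where such a number can come from, GetProcessMemoryInfo exposes a PageFaultCount; note that it counts soft faults as well as hard (disk) faults, so it is only a rough indicator, and system-wide paging is usually watched through perfmon counters such as Memory\Pages/sec instead.

/* Sketch: read the current process's page fault count and working set size.
 * PageFaultCount includes soft faults, so treat it as a rough indicator only.
 * Link with psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (!GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        fprintf(stderr, "GetProcessMemoryInfo failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Page faults:      %lu\n", pmc.PageFaultCount);
    printf("Working set size: %lu KB\n", (unsigned long)(pmc.WorkingSetSize / 1024));
    return 0;
}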

As someone commented on the last story a couple of days ago about this, if you don't want all your memory to be actually used, pull some of it out and put it in your desk drawer. What, you do want it all used? Well, that's what Windows 7 is doing, using all of it all the time, rather than leaving some of it unused much of the time. Oh, you only want it used for certain purposes? Why? If it's not being used for anything at the moment, using it for something is clearly better than that. And that's what Windows 7 (and Linux) do! If a more important use for it comes along, it repurposes it for that.

All good operating systems do this. My Mac, for instance, has "inactive memory", which is not exactly the same as in Windows, but close enough.
If your memory is free, it's not doing anything for you. End of story.

In any case, if you want to compare OS cache performance you might at least try to use the same compiler and the same language on both machines.

Use Visual C++ on both machines? Good luck getting it to run in Wine.

Use GCC on both machines? The last time I tried GCC on Windows, it produced bloated binaries for any C++ program that uses <iostream>. A quarter megabyte for Hello World [pineight.com], on two platforms (MinGW targeting Windows/x86 and devkitARM targeting GBA/ARM7)? Give me a break.

Bloated C++ binaries were caused by GCC having only a static C++ runtime on Windows.

And it still has only a static C++ runtime on, say, Nintendo DS because there's no libstdc++.dll in the BIOS.

Newer MinGW GCC releases (4.4 series) include libstdc++.dll

Which of course I'd have to distribute with each copy of each program because the user doesn't already have it. Even so, how many bytes does hello world take under newer MinGW? (I don't have Windows in front of me at the moment.)

As I understand it, it doesn't cache data, it caches applications (and I think also fonts and other often-used things). So startup time for your web browser, e-mail client, IM client, and any other applications you use often will be much faster. For example, Google Chrome loads almost instantaneously on my system, from a cold start. It won't keep a memory cache of things that applications do, and hence it won't speed up compilation, rendering, etc.

How would you do either of those activities with Linux? I know you can cat stuff into the cache, and you can control the "swappiness" and something about dirty inodes in /proc/sys, but how could you say "I want this file to be in the cache no matter what" or "what files are in the cache" other than by creating a RAM disk and putting them there manually (and then wondering why your ramdisk files are also cached because you used /dev/ram instead of tmpfs...)?

There used to be a ramdrive.sys, but it turns out the OS is smarter than us when it comes to using RAM as a cache.
Here's a reference to it in an article for troubleshooting memory problems: http://support.microsoft.com/kb/142546 [microsoft.com]

It's not "hogging" memory if it dumps it the second you start up a program that needs it... It also doesn't make your system "appear" faster, it makes it faster. I paid for all that RAM, I don't mind it being taken advantage of; that's why it's there in the first place...

Describing caching as a way Windows makes your computer "appear" faster is really a little disingenuous. If that is the only basis for your complaint, then you should be angry that your processor caches as well. After all, your processor takes the time to check two or three caches every time it issues a move instruction. If it misses every time, then it has to pick what to throw out of the cache and read directly from memory. Wouldn't it be so much better if it just made a fetch to RAM every time there

If Windows 7 actually uses that much memory, it's not scaremongering, it's memory hogging. Whether it's using it or not is a pretty fine distinction; it's still using it just because it can. If something else needs it, Windows has to decide whether it wants to let go of it or not.

So are you saying that Linux, BSD, Mac OS X and pretty much every other modern desktop OS other than Windows XP are memory hogs as well? Because they do the exact same thing: use up all of the free memory for caching and mark it as available.

Actually, even XP had a disk cache (like all the other OSes you mention, it was post-caching: storing data in case you need it again soon, rather than pre-caching things that you are probably about to use). However, as I understand it, the disk cache in XP was relatively small, and XP was (at release) used extensively on machines that really didn't have the kind of RAM it wanted. Therefore, it implemented an extremely aggressive paging algorithm, constantly writing memory pages to disk but not recovering t

fflush() just flushes stdio's buffer, so that any data written to the file is sent to the operating system (via write() on *nix), making it visible to other programs. It does not flush anything in the operating system's own buffers/caches. Also, fclose() calls fflush() automatically, so calling both is redundant.

You're probably thinking of fsync(), a system call that does actually force data to disk. And should almost never be used, unless you enjoy waiting through several seconds of disk grinding and gener
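
To make the distinction concrete, a minimal POSIX C sketch (the path is just a placeholder):

/* The three layers involved when "writing" a file:
 *   fwrite/fputs -> data sits in stdio's user-space buffer
 *   fflush       -> data handed to the kernel, visible to other processes
 *   fsync        -> kernel's dirty cache for the file forced out to disk
 * fclose() implies fflush(), but neither implies fsync(). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *fp = fopen("/tmp/example.dat", "w");   /* placeholder path */
    if (!fp) { perror("fopen"); return 1; }

    fputs("important data\n", fp);

    fflush(fp);                 /* stdio buffer  -> kernel page cache */
    fsync(fileno(fp));          /* kernel cache  -> physical disk     */

    fclose(fp);                 /* would have called fflush() anyway  */
    return 0;
}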

The issue with ext4 had to do with dirty cache not being written to disk in a timely manner. This meant that you could have disk writes you thought had already happened, and certain actions could get out of sync because of the way they'd been written.

The cache in this case is not dirty (in fact it's read-only), so there's no risk of losing data (since most of the time the data on the disk and the data in memory are identical, and when they're not, the disk is right).

Depends on the circumstances. If I'm just browsing the net, then yes, I expect to be using little memory. If I'm playing a new PC game, then I plan on using every last ounce of memory.

Because there are some tasks where I do want to use as much memory as possible, I naturally want the OS itself to use as little as possible. I don't want to have to worry about stupid little programs running in the background.

Instead of executing even more programs to cache crap to run a little faster, PC retailers need to start s

That makes no sense. Superfetch is run at the lowest priority, all other reads/writes interrupt anything it's doing and it only uses idle processor time.

At boot, Windows is doing other things, not prefetching. Turn SuperFetch on and monitor your active processes at boot; you'll see that the SuperFetch PID is sitting all the way at the bottom of everything. Its I/O is negligible, as is its CPU usage, until AFTER everything else has loaded and your system has gone back to twiddling its bits for a few hundred tho

Odd... your experience is the exact opposite of mine. I find that SuperFetch, *especially* on a laptop, is essential to speeding things up specifically because I don't have to wait for the data to be read off the disk before I can run a program. Startup doesn't seem any slower to me (I'll grant you that I haven't timed it) and unlike XP (no SuperFetch), either Vista or Win7 (both with SF enabled) is usable immediately after logging in. Sure it's slower while background stuff loads, but it's not like the com

My Xserves would run at 100% when they were doing a lot of postscript processing on the print queues, but on average the UNIX stuff there was loaded far more heavily than the Windows servers. (2003r2 at the time.)

The Windows guys would order more hardware when they got to 60% CPU load. This was hard for me to grasp at first when I took over the Citrix farms. Windows does actually have nice performance instrumentation and nice documentation to go with it.