Split the anonymous and file backed pages out onto their own pageout queues. This way we do not unnecessarily churn through lots of anonymous pages when we do not want to swap them out anyway.

This should (with additional tuning) be a great step forward in scalability, allowing Linux to run well on very large systems where scanning through the anonymous memory (on our way to the page cache memory we do want to evict) is slowing systems down significantly.

The file backed queues and anon/swap queues receive different pageout pressure. Basically, the larger the fraction of pages on each queue that were recently referenced, the more we let off the pressure, since that queue contains useful data.

For example, if 75% of the pages scanned on the anon/swap queues were referenced, but only 25% of the pages scanned on the file queues were referenced (ignoring used-once pages), we will put 3 times as much pressure on each file page as on each anon page. This is further modified by the /proc/sys/vm/swappiness parameter.

---
This patch has been stress tested and seems to work, but has not been fine tuned or benchmarked yet. For now the swappiness parameter can be used to tweak swap aggressiveness up and down as desired, but in the long run we may want to simply measure IO cost of page cache and anonymous memory and auto-adjust.
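For testers, the swappiness knob mentioned above can be adjusted at runtime through procfs or sysctl; the value 60 below is just an example setting, not a recommendation from this patch:

```shell
# Read the current swap aggressiveness
cat /proc/sys/vm/swappiness

# Raise or lower it (requires root); example value only
echo 60 > /proc/sys/vm/swappiness

# Equivalent via sysctl
sysctl -w vm.swappiness=60
```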

Please take this patch for a spin and let me know what goes welland what goes wrong.

Changelog #3:
- Change some whitespace on Andrew's request.
- Use unsigned long, not ULL, since the calculations in get_scan_ratio() no longer need numbers that big.

Changelog #2:
- Fix page_anon() to put all the file pages really on the file list.
- Fix get_scan_ratio() to return more stable numbers, by properly keeping track of the scanned anon and file pages.

--
Politics is the struggle between those who want to make their country the best in the world, and those who believe it already is. Each group calls the other unpatriotic.