
Okay, I am fairly certain my swap partition isn't being used. A few different monitors show no swap activity, while the memory monitors show RAM almost always nearly maxed out. How can I make sure swap is being used? I messed around in linuxconf but got nowhere. Any ideas are most welcome. Thanks ahead of time.

I think swap was mainly for older machines that didn't have a lot of RAM: when memory filled up, the system would use swap space on disk to hold data. If your machine has plenty of memory, I don't think it would really need the swap space unless you're running some hefty applications. Windows has something similar to swap, called virtual memory, but instead of a dedicated partition it uses a page file on an ordinary drive. I could be wrong, though; this is just my understanding of swap.

A quick way to check how much memory and swap is being used is the top command. Don't be surprised if most of your memory appears to be in use: most operating systems grab as much memory as they can so they can manage it themselves (mostly as cache).
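To put numbers on it, here is a quick check from the shell (a Linux-specific sketch; it assumes the procps `free` tool is installed, and the `/proc/meminfo` interface is standard on Linux):

```shell
# Summarize RAM and swap usage in megabytes. The "Swap:" row shows
# whether a swap area is active at all (total > 0) and how much is used.
free -m

# The same figures come straight from the kernel: SwapTotal/SwapFree
# tell you whether your swap partition is even enabled.
grep -i '^Swap' /proc/meminfo
```

If `SwapTotal` is 0, the partition was never activated; `swapon -a` (as root) enables everything marked as swap in /etc/fstab.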

What Energy said. Don't worry if it isn't using swap space; swap is a last resort, and you actually want it to go unused, since disk is much slower than RAM. If you want to see your swap in action, open up about 5 instances of Mozilla and then run top: you'll see your swap screaming in pain. Another thing, as a rule of thumb: your swap is supposed to be the same size as your RAM. Or double it. One or the other, I can't remember. : )
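If you want hard numbers instead of eyeballing top, the kernel keeps cumulative swap-activity counters you can sample (a Linux-specific sketch; the counters live in `/proc/vmstat`):

```shell
# pswpin / pswpout count pages swapped in and out since boot.
# Sample them twice a couple of seconds apart: if the numbers
# don't move while you load the machine, swap really is idle.
grep -E '^pswp(in|out)' /proc/vmstat
sleep 2
grep -E '^pswp(in|out)' /proc/vmstat
```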

Nowadays, with machines that use ~256 MB DIMMs and up, swap can be the same size as RAM; with older machines that use ~16 MB SIMMs it should be doubled, since swap will probably get used a lot more. But this isn't a written rule or anything, just my opinion; you're welcome to make your swap space whatever size you like.

Yeah, that's a little excessive IMHO. 512 MB of swap would be more than ample unless you plan to continuously run a few big programs at once. More than likely you could even get away with a little less if you wanted.

That's just the rule of thumb, keep that in mind. If the machine isn't a server you'll probably never even notice. Even on a server you probably wouldn't notice unless you were really running it ragged.

First, when RAM was very expensive, the recommendation used to be 2.5 times the amount of RAM (i.e., on a 1 GB drive with 32 MB of RAM, 80 MB was an acceptable amount of disk to give up). Remember that this figure came about because the machine was either in a multi-user environment or was a workstation running graphics-intensive programs (e.g., Pro/E, AutoCAD, etc.). For those who have never had the fortune to be in that kind of environment, we are only talking pre-1995 here.

Second, most systems today do not need this much swap space. However, in serious server environments swap can become critical. For instance, large SGI servers, if they crash and core dump (write their RAM to disk), need at least as much swap as memory; the core dump file can then be analyzed to find out why the system crashed. Large databases also often make extensive use of swap.

Third, if you get into a situation where you really need to worry about swap, there are some tuning tips. The simple ones: use multiple smaller swap partitions and spread them across disks, and if you're running a database, avoid putting swap on the same disks as the database files or transaction logs.
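As a sketch of the multiple-partition idea (the device names below are made up for illustration), Linux lets you give several swap areas equal priority in /etc/fstab, and the kernel then stripes swap writes across them, much like RAID 0:

```
# /etc/fstab entries: two swap partitions on different disks with
# equal priority (pri=1), so the kernel interleaves pages between them.
# /dev/sda2 and /dev/sdb2 are hypothetical device names.
/dev/sda2   none   swap   sw,pri=1   0   0
/dev/sdb2   none   swap   sw,pri=1   0   0
```

After editing fstab, `swapon -a` (as root) activates both, and `swapon -s` shows the priorities in effect.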

One of the advantages of commercial operating systems is how they handle I/O (input/output). Linux and the BSDs have come a long way, but fiber channels, neural design, swap, and multi-processors are still the areas where the big boys excel.

That is actually a great question. I apologize for not answering it in my last response.

As bus speeds and processors became faster through Moore's law, the old hard drive wiring became the bottleneck; 160 MB/s Ultra SCSI-3 was not fast enough. Also, as you increased the number of disks to push server capacity into multiple terabytes, the distance between the hard drives and the bus became too large for SCSI technology. They therefore went to fibre channel to increase both bandwidth and distance. (Notice I fixed my spelling. It is fibre, according to SGI.)

As a side note, when you look at the speed of IDE drives today (ATA 100, 133, and 150), people often assume these speeds make SCSI obsolete. That is actually not the case. The advantage of SCSI is that much of the machine's I/O load is offloaded to the SCSI controller, which frees the CPU to perform other activities. You will notice this when you transfer a large file (a few hundred megabytes) across a hard drive: the OS becomes very sluggish because the CPU has to manage every byte being transferred. (To notice this on Unix machines you have to move the file across partitions, since within one filesystem Unix is smart enough to just reassign the inode.)
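To illustrate that last parenthetical, here is a small sketch (the /tmp paths are made up; any two names on the same filesystem will do). A mv within one filesystem only rewrites directory entries, so the file keeps the same inode and no data is copied; a mv across filesystems has to copy every byte:

```shell
# Create a 10 MB file, note its inode number, then rename it
# within the same filesystem.
dd if=/dev/zero of=/tmp/demo_big bs=1M count=10 2>/dev/null
before=$(stat -c %i /tmp/demo_big)

mv /tmp/demo_big /tmp/demo_big_renamed   # same filesystem: no data copied
after=$(stat -c %i /tmp/demo_big_renamed)

# The inode number is unchanged: only the directory entry moved.
echo "inode before=$before after=$after"
```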

Now for neural design. In very large servers you purchase processors in groups, usually of four, not as single units. That's because those processors are designed to work much as your brain does, calculating how to balance the load between themselves and the other CPU clusters. (Sorry, "clusters" is a term I made up; SGI calls them bricks.)

Now, I don't want you to think I've slighted the free OSes by what I've said; it's just a matter of economics. When a machine costs millions of dollars, you just don't have time to port Linux to it: capitalistic principles say it needs to be in production quickly, so SGI, Sun, etc. have no reason to spend their time porting Linux. Now that 8- and 16-processor machines have dropped in price, Linux has been ported to them.

Just a side note: I think IBM also has their own version... you never know, I've seen all sorts of fun things from IBM. Everyone should go to the World Linux Expo; I'll be there this year. Anyway, sorry, I got sidetracked.