I have been running processor- and memory-intensive tasks for the past two weeks, but even after I stop them, the system is pathetically slow. A simple operation like opening the terminal by clicking its icon takes 4-5 seconds, and Firefox hangs a lot. I have been scp-ing about 60L (lakh, roughly 6 million) files from my server (which is in the next room) to the local system, and the copy has been running since yesterday morning (30 hours and counting). How can I diagnose what is taking up so much of the available resources that Linux feels worse than Vista? For now I cannot restart the system, as the scp operation is still running :( I checked System Monitor and it shows CPU1 and CPU2 usage fluctuating between 20-40%.

Configuration: 64-bit AMD processor, 2 GB RAM.

I ran top to see the swap availability, and this is what it showed: total swap: 4095992k, used: 198872k, free: 3897120k.
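For reference, the same figures can be pulled non-interactively with the standard procps tools (a generic sketch, not specific to this machine):

```shell
# Overall RAM and swap in kilobytes, the same units top uses:
free -k

# The same numbers straight from the kernel:
grep -E 'MemTotal|MemFree|SwapTotal|SwapFree' /proc/meminfo
```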

Just out of curiosity, how are you scp-ing the files? Are you calling scp once per file, or copying a batch of them with a single scp call (i.e. scp 192.168.0.2:/my/files/* ./)? That may affect your performance as well. And why scp and not rsync over SSH? (I'm not trying to be nit-picky; just trying to completely assess what you're trying to do.)
– Will, Jan 19 '11 at 14:12
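For what it's worth, the difference the comment is getting at can be sketched like this; the host and paths below are hypothetical placeholders, and the network commands are shown commented out:

```shell
# Hypothetical source -- substitute your own host and path.
SRC="192.168.0.2:/my/files/"

# Worst case: one scp call per file means one SSH handshake each,
# which dominates the cost for millions of small files:
#   for f in $(cat filelist); do scp "$SRC$f" ./files/; done

# Better: a single scp call, one SSH session for the whole tree:
#   scp -r "$SRC" ./files/

# Usually best for huge trees: rsync over SSH pipelines the file
# list, skips anything already copied, and can resume if interrupted:
#   rsync -az --partial "$SRC" ./files/
```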

1 Answer
1

There are three main bottlenecks that cause performance degradation, and you can check for issues with each of them using the top command:

Processing (CPU)

When your CPU has too much to process, it prioritises which items should be processed first and how busy the system is allowed to get. This is complicated by multi-processor and frequency-scaling technologies, but it's basically an easy thing to check.

As the processor gets busier the system will slow down in response.
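The CPU case above can be checked non-interactively (a sketch using the standard procps tools):

```shell
# One batch-mode snapshot of top, busiest tasks first:
top -b -n 1 | head -n 15

# Or just the biggest CPU consumers:
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10
```

If nothing near the top of either list is using significant CPU while the machine still feels slow, the bottleneck is elsewhere.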

IO (Input/Output) aka Disk Activity

This is a key issue even for modern machines with plenty of RAM and strong processors. There is only so much bandwidth on the PCI bus and between all the parts of the computer, so copying files from the hard drive to the network slows down disk access and device access for everything else.

Accessing files also creates file caches in memory and uses processing power for the protocol work (and, in scp's case, encryption) needed to shuffle the data to a different place.
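To see whether disk I/O is the bottleneck described here, vmstat (also part of procps) gives a quick read:

```shell
# Five one-second samples. "bi"/"bo" are blocks read/written per
# second, and "wa" is the share of CPU time stalled waiting on I/O.
# High "wa" with modest CPU use points at the disk, not the CPU:
vmstat 1 5
```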

Memory (RAM)

If your computer simply ran out of memory, your programs would halt and/or crash. Instead, the system starts moving application data onto the swap partition. If a very intensive process uses all of your RAM, you will find that the whole of GNOME and the Ubuntu desktop gets saved to disk in swap while the process completes.

Loading another process then takes time, because its data has to be recovered from the swap area, which is orders of magnitude slower than RAM. Even after your process has finished, large chunks of important system data will still sit in swap, degrading performance as they are pulled back into RAM for use.

Obviously, communicating with the disk causes I/O overhead, so swapping is to be avoided if at all possible. This last item is the most likely cause of your issues and, I'm sorry to say, there isn't much that can be done about it. Make sure your processes are really gone using the top command mentioned above; if they are, be patient with your computer as it recovers large chunks of memory from the disk just after such processes finish.
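One thing worth knowing: once the heavy processes really are gone and there is enough free RAM to hold everything currently swapped, the swapped-out data can be pushed back into RAM in one go rather than faulting back lazily. That step needs root, so it is shown commented out; check the numbers first:

```shell
# How much is still swapped out?
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# Forces everything in swap back into RAM, then re-enables swap.
# Only safe when free RAM exceeds the swap currently used:
#   sudo swapoff -a && sudo swapon -a
```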

I did use the top command to retrieve the RAM and swap figures, both available and used. And even though the only processes still running are an rm process and an scp process, the system is very slow. So I guess I have nothing to do but wait. I still wonder, though, what could make the scp process itself so slow. Thanks.
– crazyaboutliv, Jan 18 '11 at 15:16