What are your tips for improving overall system performance on Ubuntu? Inspired by this question, I realized that some default settings on Ubuntu may be rather conservative, and that it's possible to tweak them with little or no risk if you wish to make the system faster.

This is not meant to be application-specific (e.g. making Firefox load pages faster), but system-wide.

Preferably 1 tip per answer, with enough detail for people to implement it.

It would be a good idea to mention how effective your tip is: how much of an improvement did you notice, or even better, measure?
–
Gilles, Aug 13 '10 at 17:47


I have not found any evidence that changing the swappiness has any positive effect. It might give a temporary feeling of a performance increase, but that seems to subside quite fast. I have not seen any concrete evidence, in the form of benchmarks, that would prove the effectiveness of changing the swappiness parameter.
–
txwikinger, Aug 14 '10 at 14:27


I doubt it has any noticeable performance impact. The TTYs use hardly any memory, nor would there be any significant CPU usage.
–
txwikinger, Aug 16 '10 at 14:37

38 Answers

If you are "the average Joe", then just don't do anything. Don't fiddle with programs or settings which you don't understand. Don't follow tips posted on the Internet on how to improve the performance of your system by compiling some software yourself or by installing a self-made kernel.

Some of those tips may indeed give you minor performance improvements, but some of them will also give you a real headache if you change the wrong setting, disable the wrong service, install the wrong driver, etc.

Therefore, just be happy about your nicely running system. And by the way: why would you need those 5 percent performance improvements? They will not let you type your office documents faster or edit your holiday photos in half the time.

And just to be clear: if you are not the average Joe, but a developer/hardcore gamer/... needing every cycle you can get, you are not the target of this comment...

Tweaking settings and compiling your own kernel and software is a great way to learn. I think it should be encouraged, as long as people know that when they break things badly, they may need to re-install their OS.
–
Nerdfest, Aug 21 '10 at 14:09


Only people who really want to understand how the kernel works should consider compiling their own kernel. Most people shouldn't even need to know a "kernel" exists. Stick with a stock kernel and you can take advantage of the regular software updates.
–
Brad Figg, Aug 22 '10 at 20:11


@Brad, ah how the times have changed :) When I started with Linux we had to compile our own kernel since it was before dynamic modules were implemented. I don't miss it though!
–
Thorbjørn Ravn Andersen, Aug 26 '10 at 5:42


@Nerdfest That is a good point; my most important teacher in the Linux world has no doubt been the Gentoo manual... I never really got a working GUI, but I did learn a lot about how a basic Linux install works! And that is practically the same on all the distros.
–
LassePoulsen, Aug 26 '10 at 8:25


I'm playing around with performance tweaks right now precisely for what I'm learning from them. That said, there are two cases where I always want to tweak on Windows, and I have the same attitude with Ubuntu: 1. A system so slow that every little helps. 2. A system so fast that I can't help but wonder what it can be cranked up to! Middling systems I leave alone :) That said, a definite +1
–
Jon Hanna, Jul 30 '12 at 16:51

Disable automatic startup of any services that are not needed (or even remove the package completely).

A lot of packages start up services automatically. These services then use memory and CPU even though they are hardly ever used. In this case it is better to stop those services, or take them out of autostart, and start them only when they are needed.

To remove applications from starting up on 10.04, go to System > Preferences > Startup Applications (this may be slightly different on other versions).

On 12.04, you can select the startup applications by clicking on the Dash icon. Then type "startup" and select "Startup Applications".

And just unmark the apps you don't need. But be sure about it; don't just remove apps you don't know. If you are not sure about one, leave it as it is. A Google search, or a new question here about a specific program, will help.

The shell option is now obsolete. The default value is makefile, and as of 2010-05-14, shell is an alias for makefile.

4. Clean up the apt cache at /var/cache/apt/archives and unnecessary apt sources in /etc/apt/sources.list

sudo apt-get autoclean

5. Install BUM (Boot Up manager)

sudo apt-get install bum

Remove unnecessary applications and services from startup

6. Remove some unnecessary TTYs

sudo vim /etc/default/console-setup

Edit: ACTIVE_CONSOLES="/dev/tty[1-3]"

Note: go to /etc/init/ and change the files for the TTYs that you DO NOT want. Edit them and comment out the lines starting with "start on runlevel". So, in this case, you'll comment out the start line in the "tty4.conf" through "tty6.conf" files.
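The manual edit above can also be scripted. This is a sketch under the assumptions stated in the answer (Upstart-era Ubuntu, keeping tty1-3 and disabling tty4-6); it keeps backups so the change is easy to undo:

```shell
# Comment out the "start on" stanza in tty4.conf .. tty6.conf (run as root).
for n in 4 5 6; do
    f="/etc/init/tty$n.conf"
    [ -f "$f" ] || continue                  # skip quietly if the file is absent
    cp "$f" "$f.bak"                         # keep a backup copy
    sed -i 's/^start on runlevel/#&/' "$f"   # comment the start line
done
```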

7. Install Prelink

sudo apt-get install prelink

sudo vim /etc/default/prelink

Edit: PRELINKING=Yes

sudo /etc/cron.daily/prelink

Actually, Prelink has been useless since Feisty Fawn (because Ubuntu now uses a very effective runtime linker). In addition, it's intrusive: it directly modifies the executables and can ultimately break them. DO NOT do it.

This provides no information as to why we should follow your advice. Some context and benchmarks please.
–
The Pixel Developer, Oct 15 '10 at 2:42


I was trying number 3 and the comments above the line to change listed the valid options and 'shell' was not one of them. Is this answer old? Did the comment not tell me all the options?
–
John, Jun 6 '11 at 18:26


I tried number 3 as well, despite the "shell" option not being mentioned as a valid one. After the edit, booting took twice as long as when concurrency was set to "none", so I went back to the original setting.
–
Wojciech, Sep 1 '11 at 21:09

Sounds trivial, but I found that the default 10-second boot menu timeout in Ubuntu is too long for my taste. Say my screen takes a bit to auto-adjust the resolution: by the time I first see the counter, it already reads 8 seconds.

I would edit the timeout down to 3 seconds, giving me a second to see the boot menu (accounting for the time my screen takes to adjust to the resolution). That's plenty of time, as pressing the arrow keys to select another item stops the counter.
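Assuming GRUB 2 (the default bootloader on recent Ubuntu releases), the timeout lives in /etc/default/grub; a sketch of the relevant line:

```
# /etc/default/grub
GRUB_TIMEOUT=3
```

After saving, run sudo update-grub so the regenerated /boot/grub/grub.cfg picks up the change.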

This might break some email software that relies on this feature, and a few backup tools. But for the casual user this should be OK, as neither Thunderbird nor Evolution would be affected. However, on Ubuntu, switching from relatime (the default) to noatime won't bring much improvement. See lwn.net/Articles/244829 for how relatime works, and you will understand that it already dramatically decreases the number of last-access-time updates.
–
Huygens, Mar 28 '12 at 10:51
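For completeness, noatime (or relatime) goes in the options field of the filesystem's /etc/fstab entry; a sketch, where the UUID is a placeholder for your own root entry:

```
# /etc/fstab
UUID=xxxx-xxxx  /  ext4  errors=remount-ro,noatime  0  1
```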

If you are short of RAM, use zramswap or zram-config from the Ubuntu repos. It's virtual swap that compresses unused RAM contents instead of putting them on disk (which usually freezes the system after you hit the RAM barrier). I experience little to no performance loss with it, instead of the system freezing every time I run out of RAM.

This works only for Natty and up (because you'll need kernel 2.6.37.1 or newer). For older systems you can use compcache, but you'll have to set it up manually.

For those who never hit the RAM limit, it gives some speed boost on HDD systems anyway, but you'd be better off decreasing swappiness to achieve the same effect.

SSD users: most likely you won't experience any speed boost, but zramswap can reduce SSD wear quite a lot.
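Setting it up is just a package install; a sketch, using the package names given above (swapon -s is only for checking the result):

```shell
sudo apt-get install zram-config   # Natty and up; use compcache manually on older releases
# Once the service is running, compressed swap devices should be listed:
swapon -s                          # shows active swap areas, e.g. /dev/zram0
```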

"The default setting in Ubuntu is swappiness=60. Reducing the default value of swappiness will probably improve overall performance for a typical Ubuntu desktop installation. A value of swappiness=10 is recommended, but feel free to experiment. Note: Ubuntu server installations have different performance requirements to desktop systems, and the default value of 60 is likely more suitable."

The FAQ is pretty complete about explaining what swap is, how it is used and how to change it. Recommended reading for anyone thinking of tinkering with swappiness or the size of swap file on disk.
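A sketch of inspecting and changing the value (10 is the recommendation quoted above; the sysctl key is vm.swappiness):

```shell
cat /proc/sys/vm/swappiness                              # current value; 60 by default
sudo sysctl vm.swappiness=10                             # change for the running session only
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf   # make it survive reboots
```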

@Erigami: +1 for mentioning something that actually made a difference to you.
–
Gilles, Aug 13 '10 at 18:04


@DecioLira: No. It pushes the apps I'm not currently using onto disk, meaning that the one I'm currently in has access to more physical memory.
–
Erigami, Aug 13 '10 at 19:40


Do you have any hardcore numbers that show the difference, in which situations, and what kind of difference it makes?
–
txwikinger, Aug 13 '10 at 20:02


@Erigami: I played around with swappiness for some time. And at the beginning it felt like it was faster, but with time it all seemed the same. Some real measurements would really be interesting.
–
txwikinger, Aug 13 '10 at 21:12


Various tests I have done prove that vm.swappiness=100 is better than 10. On a slow machine it will help loads; on a fast one it will make no difference (unless you are paging multi-GB applications into RAM). It is win-win.
–
NightwishFan, Sep 28 '10 at 22:30

If you do this, then make sure to check and/or clean your /tmp directory every so often, otherwise you run the risk of running out of RAM just because some app forgets to clean out its temporary files. I read that this was actually a problem on some Solaris boxes at a point, because the OS would mount /tmp on a ramdisk and eventually it would fill up. A good performance booster if you use it right, though.
–
InkBlend, Oct 24 '12 at 21:34
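The /tmp-on-tmpfs setup being discussed is typically a single /etc/fstab line; a sketch (the size cap is an assumption; pick one that fits your RAM):

```
tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777,size=2G  0  0
```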

Ensure you have used tune2fs to turn on writeback mode BEFORE you edit your fstab file and BEFORE you reboot. I say BEFORE because I rebooted after I altered my fstab but before I turned on writeback mode, and borked my boot. Nothing was lost, but I had to use a live CD to gain access and change my fstab. It's safer to enable this on a non-boot drive to test first.

Massive improvement in speed in boot, shutdown, and day-to-day use.

You can also turn off journal mode, which will give an added boost. For added safety, make sure you have a UPS connected and working, because with these features turned off your data isn't as safe. Having said that, my system doesn't have a UPS, its power has been interrupted at least three times, and I've suffered no data loss; your mileage may vary.
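A hedged sketch of the ordering this answer insists on (the device name is a placeholder; for the root filesystem, do this from a live CD):

```shell
# 1. Turn on writeback as the filesystem's default data mode FIRST:
#      sudo tune2fs -o journal_data_writeback /dev/sdXN
# 2. Only then add data=writeback to that filesystem's line in /etc/fstab:
#      UUID=...  /  ext4  noatime,data=writeback,errors=remount-ro  0  1
# 3. Reboot.
```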

nodiratime is not needed here because noatime implies it
–
Shnatsel, Jul 21 '12 at 10:35


also, data=writeback completely disables journaling, which means that you won't be able to repair your filesystem if something happens to it - e.g. in case of power outage, gpu lockup, etc.
–
Shnatsel, Jul 21 '12 at 10:37

@Thorbjørn, the idea behind this question is to learn a little more about the system and how to tune it, for people who are interested in that. Just buying a new machine/parts is an obvious answer that doesn't teach anything.
–
Decio Lira, Aug 25 '10 at 22:42


Then mention that in the question. Buying more RAM is probably the simplest and most effective way to speed up the system (since Linux uses unused memory as disk cache).
–
Thorbjørn Ravn Andersen, Aug 26 '10 at 5:38

The following is for experts only. As the name implies, it can and will eat your data, even if you are careful.

eatmydata is a drop-in package that will turn off fsync. Fsync is a system operation that ensures your data is written to disk before continuing. Generally you want this, as it makes recovering from power outages and failures easier, faster, and less lossy. It comes at a price, though: anything calling fsync will have to wait its turn in line, rather than simply delivering data to the kernel to write at some later date. And on some, perhaps even many, filesystems, fsync will write out all data, not just the data you're interested in protecting.

There are some specific situations where fsync isn't worth the cost. Imagine you have a server that number crunches a bunch of data. Rather than pointing this at a live database, it might be faster to dump into a consistent local database, install eatmydata to turn off fsync, and let that go. This can still crash and lose data, but since it's not the only copy of anything, you can just restart the process from scratch. Or, for example, Ubuntu's build servers, where all we care about is the final package produced. Or, on the desktop side, if a program (like Firefox) is syncing so much it's slowing the entire system down. Just be prepared to lose all data associated with using this, or face dire consequences.
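In practice, eatmydata is used as a wrapper around a single command rather than system-wide; a sketch (the .deb name is a placeholder):

```shell
sudo apt-get install eatmydata
# Run one command with fsync turned into a no-op (via LD_PRELOAD):
eatmydata dpkg -i some-package.deb
```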

A lot of standard applications use a lot of memory, and often also CPU, while they are running in the background. Web browsers, email clients, etc. are very inefficient in memory usage, and the embedded JavaScript often uses CPU time with no benefit to the user.

Just by running only the applications that are currently in use, the system will be a lot faster. Also, stopping applications is the only way of freeing memory lost to memory leaks.

Starting an application on a fast-running system often takes less time than switching windows on an overloaded and slow system.

Yeah, but that kind of goes without saying.
–
Dmitriy Likhten, Aug 13 '10 at 19:41


@Dmitriy Likhten: How many people have an e-mail client run in the background? We should give them a lightweight applet instead that notifies of new e-mail, and the e-mail client is only opened when the e-mails are read.
–
txwikinger, Aug 13 '10 at 20:04


Well... the problem is that swapping is unfortunately often slower than using no swap but having enough memory available to start the application in its entirety. While swapping seems to work well on servers, on desktops it seems to do more damage than good in my experience. (I also have to say that this is one of the areas where the Linux kernel has deteriorated in the last 5 or so years; swapping used to work a lot better.)
–
txwikinger, Aug 14 '10 at 14:25

Unity tends to be a bit resource-hungry, though I am surprised to hear that you experienced similarly poor performance even under Unity2D. One possible solution would be to play around with other more lightweight Desktop Environments such as Lubuntu (LXDE) or Xubuntu (XFCE). I think you will see a substantial difference in overall responsiveness and performance.

Additionally, you can try going into the Startup Applications manager and unchecking applications and processes that you don't need Ubuntu to automatically start for you at login (e.g. Bluetooth Manager if you don't have bluetooth, UbuntuOne if you don't use it, programs you simply don't use, etc.) Before doing this, first make hidden startup applications visible in the manager:

Come on, Unity is more than capable of running perfectly on a machine with 4 GB of RAM and a dual-core processor. Suggesting LXDE would make sense on a netbook with 1 GB of RAM and an Atom processor. Surely the OP's problem is caused by some misconfiguration or hardware incompatibility, not by the fact that Unity can't run on that hardware.
–
Sergey, Aug 29 '12 at 21:23


You may be completely right about the cause of Christian's problems, and you are certainly right in thinking that Unity should (theoretically) be running perfectly smoothly on his machine. However, neither of these facts in any way negate my statement that Unity is resource-hungry (regardless of machine specs) and that using LXDE or XFCE would undoubtedly yield significant performance improvements. It may not be the ideal solution in this situation (hence my designation of the suggestion as "one possible solution"), but my hope was that it might at least improve his experience.
–
mblasco, Aug 30 '12 at 3:13

If we are talking about getting from BIOS to internet connectivity, I can recommend setting up the network without using NetworkManager. Personally, I've done this because I have a very sluggish DHCP server and NetworkManager doesn't start probing for the network until I've logged in.

Part 1. Set swappiness. This may be as per degusa's answer, but it could be the opposite, and it'll have more effect when it is.

One scenario that some of us are happy to be in is when we have plenty of RAM. Generally, we've a small percentage of it being directly used by the kernel and applications, some (maybe a large amount if you've used other tweaks to boost performance such as mounting /tmp in memory) used for ramfs and tmpfs, and gigs and gigs being used as a disk cache to make our file-access faster.

In this scenario once the total used memory including cache becomes high, and an application needs more RAM, linux has to decide whether to take some cache from the file system, or swap out to the swap partition/file.

Since we've tonnes of RAM, and quite possibly only bothered with swap at all so we could enable hibernation, we want it to lean toward taking some of that copious cache, and hence want a low swappiness. If we don't care about being able to hibernate, we might even find that such a high-RAM machine doesn't need swap at all.

Another scenario is someone with low RAM who is switching between a few heavy applications and spending a reasonable amount of time on each. Imagine perhaps a web dev who spends some time on their IDE, some on a graphics editor, some on their browser of choice, a bit on some other browsers to check on compatibility issues, and maybe 5 minutes every hour on their mail client. They're also likely hitting the same files repeatedly with reads and writes and hence benefiting appreciably from file caching. This person could probably benefit from linux being more eager to swap out the memory used by whichever of those heavy applications they're currently not active on, so swappiness should probably be higher for them.

Not only is the best setting for them likely to be higher than the most common advice, but they're probably going to notice it more than the person who always has gigs to spare anyway, too.

Part 2. Priority & number of partitions.

Each swap partition has a priority, and linux will use that with the highest first. If not set in /etc/fstab, it'll be treated as negative starting with -1 (explicit settings are between 0 and 32767 and so -1 is lower than any explicitly set) and continuing in order in fstab to -2, -3 and so on.

The best setting depends upon where the partitions physically are. If you've only one, then it doesn't matter (but maybe you should have more than one, so read on).

If you've two or more on the same physical drive, then they should have different priorities so that it doesn't try to use two partitions that require seeking between them (does anyone know if this is automatically avoided?). The defaults are fine. It's probably not a good idea to have two swaps on the same drive anyway, but it can happen if you created one and then decided you needed more swap later (perhaps when adding more RAM).

If you've two or more on two or more physical drives that are of about equal speed, then setting them to the same priority will mean linux will use them both at the same time, which offers better performance for reasons analogous to why RAID or simply ensuring that there are frequently used files on both drives will - the work gets split between them.
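As a sketch, equal-priority striping across two drives looks like this in /etc/fstab (device names are placeholders; equal pri= values make the kernel use both in parallel):

```
/dev/sda2  none  swap  sw,pri=5  0  0
/dev/sdb2  none  swap  sw,pri=5  0  0
```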

If you've two or more physical drives of equal speed but have swap only on one, maybe you should change that, for the above reasons.

If you've two or more physical drives, of very different speeds, then generally you want the fastest drive to have a higher priority than the slower, so it's used first. You may not even want to have any swap on the slower, though it might make sense if you e.g. have a small swap on a fast but small drive for fast swap, and a larger swap on the slower drive so you've enough space to hibernate.

If the faster of the two is an SSD, then there are two alternatives with different pros and cons:

Highest-priority, or perhaps only, swap on the SSD, for speed.

Only swap on the non-SSD, to reduce writes on the SSD and hence increase its lifetime.

Number 2 is probably the one to go for if you only really have swap to allow for hibernation and your copious RAM otherwise means you don't really need it (and if you're spending money on an SSD, you should spend it on RAM too), unless perhaps you're a boot-up speed fanatic who wants to resume from hibernation at a speed that'll show off your fancy high-spec rig! Otherwise, the balance is all about speed vs. SSD lifetime.

If you've a drive existing solely for swap so as not to compete with other file I/O, then you presumably are hard-core about performance and already know about this stuff better than me and are only reading this to see if I got anything wrong!

Answer the questions, and from then on apt-fast will act like apt-get in just about every regard except that it downloads packages in parallel. It makes no difference if you are going to install a single application, but lots for larger installs.

In combination with this, my /etc/fstab has:

tmpfs /var/cache/apt/archives tmpfs defaults,noatime,mode=1777 0 0

This has the downside that if the same package is needed again on a different boot, it'll have to be downloaded again, though it may well have been updated in the meantime anyway. The upsides are faster access to the packages and automatic clean-up of unused packages on reboot.

Since I've been re-installing a lot over the course of these experiments, it allowed me to do a comparison. After an installation of 12.04, at the time of writing, about 300 updates are available just after installing, including a kernel update. I ignored Software Updater and made the above changes before running apt-fast update && apt-fast dist-upgrade, and the download part was many times faster (the actual installation takes the same time).

I have an alias of alias apt-get="apt-fast" so I don't even need to change habits (the only differences are different feedback on the download, a confirmation of whether I want to download the packages, and an implied sudo should I forget it, but the commands to trigger anything are the same).

As @mblasco indicated, by going through Startup Applications I could see that the service started automatically, and I disabled it. Now my computer is much faster. I think there are still improvements I can make (I don't feel it's at its top condition yet), but after 20 hours of testing the performance I can say it really runs well.

For example, the parameter kernel.threads-max = 16379 sets the maximum number of concurrent processes to 16,379.

This is smaller than the maximum number of unique PIDs (65,536). Lowering the number of PIDs can improve performance on systems with slow CPUs or little RAM, since it reduces the number of simultaneous tasks. On high-performance computers with dual processors, this value can be large. As an example, my 350 MHz iMac is set to 2,048, my dual-processor 200 MHz PC is set to 1,024, and my 2.8 GHz dual-processor PC is set to 16,379.

Tip: The kernel configures the default number of threads based on the available resources. Installing the same Ubuntu version on different hardware may set a different value. If you need an identical system (for testing, critical deployment, or sensitive compatibility), be sure to explicitly set this value.

There are two ways to adjust the kernel parameters.

First, you can do it on the command line. For example, sudo sysctl -w kernel.threads-max=16000. This change takes effect immediately but is not permanent; if you reboot, this change will be lost.

The other way to make a kernel change is to add the parameter to the /etc/sysctl.conf file. Adding the line kernel.threads-max=16000 will make the change take effect on the next reboot.

Usually when tuning, you first use sysctl -w. If you like the change, then you can add it to /etc/sysctl.conf. Using sysctl -w first allows you to test modifications; in the event that everything breaks, you can always reboot to recover before committing the changes to /etc/sysctl.conf.
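The two-step workflow described above, sketched with the kernel.threads-max example (the value comes from this answer, not a general recommendation):

```shell
sudo sysctl -w kernel.threads-max=16000      # immediate, lost on reboot
sysctl kernel.threads-max                    # verify the running value

echo 'kernel.threads-max=16000' | sudo tee -a /etc/sysctl.conf   # permanent
sudo sysctl -p                               # reload the file without rebooting
```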

How is lowering the total number of threads supposed to make a difference to performance? It's rarely reached on most systems anyway. It could be useful on a server whose main job is to serve http requests, but what evidence do you have that the default setting is not the best?
–
Gilles, Aug 13 '10 at 18:02


There are plenty of people who've found some setting and decided to tweak it, and then posted it to the web with some dodgy explanation of why it would improve performance, without ever checking whether it did make a difference. Sometimes someone bothers to check, and often they discover that the default setting is there for a reason, namely that the original author did test and chose a reasonable default. So my question still stands: can you cite a benchmark that shows that (at least in some circumstances) the default setting is not appropriate?
–
Gilles, Aug 13 '10 at 19:33


@gilles Default settings are default for a reason: they appeal to the most people, and those who don't wish to tweak can go on with their lives. Although if you're among the sticklers (and I am sure you are), then you're more than welcome to benchmark it. Let me know how it goes :)
–
myusuf3, Aug 14 '10 at 2:48