Posted: Wed Mar 06, 2013 11:00 am Post subject: Maintenance - how much time does it take YOU?

Newbie question here: how much time will maintaining a "simple" Gentoo system take, in minutes or hours per week? I'm very close to installing Gentoo, but I'd like to know how much time it really takes.

For the past 5 years I have worked mostly on Linux. Starting on Ubuntu, and later using, for shorter or longer times, Mint, ArchBang, Arch, openSUSE, Fedora, Siduction (= Debian Unstable), and Sabayon. My "home base", so to speak, has always been CrunchBang (= Debian Stable).

I am a normal user, not an IT guy. I'd be using it on an i5 laptop with 8GB RAM. I don't use many programs: some office programs & suites, a few editors, VLC, Inkscape, Gimp, some music programs. I do like to tinker, to optimize, to have goofy window managers. There are lots of reasons why Gentoo is attractive to me (do I have to preach to the choir?); the one thing that might be a show stopper would be *too* long maintenance times.

So, what is a realistic estimate for maintenance time? Thanks in advance, Pieter

Normally, after installation, you can adopt a "once a month" update routine. It is neither too frequent nor so infrequent that you'd have tons of updates piled up. In this case, you would run "emerge -auDv --changed-use world", look at the output, and if it is satisfactory, let it run. This compilation phase doesn't take a lot of your time - if you are not fussy about USE flags and the programs that are installed, then you simply accept whatever update is going to happen. What takes time is merging the configuration files in /etc after this emerge and taking care of any messages from emerge. The former can largely be taken care of by emerging and setting up cfg-update. The latter can be read in an organized fashion using elogv, taking any further actions you need to take. So, the list of commands you would be running is

Code:

emerge -auDv --changed-use world
emerge @preserved-rebuild # if required, OR revdep-rebuild if you are not using portage-2.2
cfg-update -u # update config files
elogv # OR elogviewer to check messages from emerge

The actual amount of time you will spend attending to these commands will be, say 15-30 minutes at the most (I am not counting the compile times - only counting the time you need to pay attention).

Now, the not-normal case: sometimes the upgrade happens to be disruptive, in the sense that some crucial system package gets updated (a recent example is udev) and you need to pay a lot of attention while doing the upgrade. This is a time-consuming upgrade because you will want to make sure that you don't make any silly mistakes, that you follow the directions from emerge, read any news items (eselect news), and/or read the forums. This kind of update can easily take more than an hour. Fortunately, such disruptive upgrades don't happen often nowadays (at most once or twice a year).
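For that disruptive case, the news check itself is quick; a minimal sketch (eselect news ships with recent portage versions):

```shell
# List all Gentoo news items; unread ones are flagged
eselect news list
# Read only the items you haven't seen yet
eselect news read new
```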

That's mostly it. But I'll add that, if you really don't care about USE flags and you are going to blindly accept updates, you'd better stay with Sabayon or Arch. It'll be the same, but without the etc-update/revdep-rebuild hassle, and without the occasional ABI or driver breakage.

Executive summary: given the variables you gave, and ignoring the initial install time (which you will most likely spend only once), weekly updating once you get into a routine will probably take somewhere within 5-30 minutes, rsync process included (which is noticeably slower than apt-get update). Your CPU and RAM will be the fast parts. Gentoo will let that i5 stretch its legs, but at some point you'll run out of things to keep those monster cores busy. You'll 'miss' those other distros having twice as many processes running to do the same job. You'll spend far more time trying to configure your kernel (assuming you're new to that) than actually building it, and once you've invested that time in a manual roll-your-own, revisions take very little additional time. You can always go the genkernel route: Gentoo's way of a generic kernel + initramfs with a million modules, trying to meet all hardware configurations just like the mainstream distros do, and just like my first install did, booting from an external USB hard drive connected to a laptop (Dell BIOSes from 2006 are fickle creatures). Sometimes there will be surprises, between the other weeks of yawning fits. Any surprises should just be viewed as character building, interesting, and adding to your knowledge base.

You want tinkering (which is undefinably time consuming), you'll get all you could possibly want with Gentoo. You will essentially be your own distro on look and feel that you set far more than what you've listed. You definitely get back what you put into it and those that don't put much in and drag heels tend to get a kick in the chops on occasion.

Bigger packages usually have longer release periods. So while they were a bigger deal to build and infrequently rebuild on slower, aging hardware, newer systems like yours will generally just laugh them off.

Others already gave probably better answers, but I can share a bit of my experience at least.

I tend to have it in my morning routine to sync and check for anything to update. As above, this isn't what is usually suggested. You'd more often see people do it once a week, or every other week, or monthly, I guess. I do know there are a bunch of people syncing once or twice a day too, though!

I imagine it's a bit of a double-edged sword. On one hand, you are likely to have fewer packages to update at once than when you do it less often. But on the other, you are more prone to run into possible issues with packages (more of a thing with 'unstable' ~amd64 and ~x86 than with 'stable' amd64 and x86, though).

As for the actual update process... it doesn't really take any time from me, actually. I fire up KDE or e17 (well, e17 doesn't work for me for some reason and I haven't researched it yet, so KDE it is... but I digress), open a terminal and one or a few apps via that, while I also have a tab for syncing and emerging. Simultaneously, I'm doing my other stuff. When things are in sync, I'll check what emerge -pvuND @world has to offer me (the N usually just to see what has changed... one can pretty much always ignore that, but I want to build things I guess, dang it!).

After more or less carefully viewing the menu, I'll get rid of the -p (--pretend) and replace it with a -j (--jobs), and let it roll.
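Concretely, that swap looks like this (the -j value here is just an example; pick whatever your CPU and RAM tolerate):

```shell
emerge -pvuND @world       # --pretend: just print the update "menu"
emerge -vuND -j2 @world    # the real run, building up to 2 packages in parallel
```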

Even with the -j option, I have no issues with doing everything else I do (games, video, music, whatever). I have only a 6-core Phenom II 1090T, with 8GiBs of RAM, and that's probably way more than enough.

Depending on what was updated - say there were KDE things or Xorg stuff - I may or may not restart X after everything is done. I may even do a kernel upgrade right after emerging the new sources and copy things into place, but only actually start using it whenever I feel like rebooting (usually the next morning, unless there's a feature I want to try out).
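The kernel step of that routine might look roughly like this. The paths, version and -j value are examples, and it assumes the /usr/src/linux symlink already points at the new sources and that an installkernel script is set up (otherwise copy arch/x86/boot/bzImage to /boot by hand):

```shell
cd /usr/src/linux
# Carry the previous config over (example path to the old source tree)
cp /usr/src/linux-3.7.9-gentoo-old/.config .
make oldconfig                  # prompts only for options new in this release
make -j4 && make modules_install && make install
```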

So you see, it doesn't really take time at all (I think), unless there is some sort of an issue with a package or a few. Despite the horror stories you might have seen, I have yet to ever break my installation. ~knocks-on-wood~

I started using Linux in late 2010, and I started with Gentoo. Yes, with no prior experience of Linux/Unix at all, other than a short test drive of Ubuntu which didn't yield a thing... I tried hard to break the system, even by mixing a lot from the 'unstable' tree into the 'stable' one. Not so long ago I went fully 'unstable', since mixing stuff isn't really a thing one should do (case-dependent, of course). Even on ~amd64 (that means 'unstable'), I never really have to tinker with things so much as to think it takes a lot of time from me.

Then again, I might actually like tinkering with things and that's why I'm of that mind-set. ^^;

I also have a 10-year-old laptop with a 1.7GHz Intel Pentium CPU that I have been testing things out on, and with that single-core CPU, things obviously take a lot more time to compile. With 512 MiB of RAM, it can also be pretty slow at anything else while compiling something big like, say, gcc-4.7 (for example: net-libs/webkit-gtk-1.8.3-r300 took just under 19 minutes on one versus 2 hours, 49 minutes and 2 seconds on the other). In short: more cores, more awesome!

I feel like I'm rambling from here to forever now, so I'll leave you with this: you can only really find your answer by trying it out for yourself, methinks. ^^

I do hope this helps~
Good luck!

To save time and frustration I would really recommend reading all portage output, both the pretend stuff and, most importantly, the output at the end; doing what it says is a must.

Also, after a sync and a run of emerge -DNupv world, I pick out packages that affect boot and do them first - kernel, openrc, grub etc. Especially the kernel: some packages check whether .config has the right options set and fail if not, so it's best to get it done first (only if you have the symlink change automatically, which I believe is the default).
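That boot-things-first pass could be sketched as follows (the package atoms are examples; -1/--oneshot keeps them out of the world file, since the world update pulls them in anyway):

```shell
# One-off pass over the boot-critical packages before the big run
emerge -1av sys-kernel/gentoo-sources sys-apps/openrc sys-boot/grub
eselect kernel list    # confirm the /usr/src/linux symlink points where you expect
```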

Given your nice new hardware everything should compile pretty quickly; the longest for me is Firefox, and that ain't too bad. Plus, as said above, you can do other things while this happens. The time really goes on config updates and doing what portage says. I can recommend dispatch-conf for config updates; it keeps track of previous changes, and after some use it keeps the number of config edits down some.
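For reference, dispatch-conf is a single interactive command; the key bindings below are from memory, so check its prompt (by default it archives replaced files under /etc/config-archive, which is how it remembers your earlier decisions):

```shell
dispatch-conf    # walks each pending ._cfg0000_* file:
                 #   u = use the new file, z = zap it (keep yours), n = skip for now
```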

To more directly answer your question, I would measure it in minutes per week, not hours. However, if you haven't updated in months it can be a multi-hour process; it depends on what's been changed.

How fast can you type in the update command? Five seconds? Time off after that, your computer will do the work, you can do something else.
In five minutes or so you can check back and type in upgrade command, another five seconds, faster from shell history.
After a minute or so, check the emerge output; this is going to take more time, in particular if you do not like what you see and have to adjust flags. Generally less than a minute. Then let emerge do its job.
After emerge finishes, read its output and update conf files; that may take half a minute.

Personally, I spend an average of about three minutes a day keeping my gentoo installs up to date.

You can update your tree mindlessly, and then you'd do a:
emerge -uaND --with-bdeps=y world (Or similar)
and have a look to make sure all makes sense.
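A common follow-up pair after such a world update, shown here in preview mode so nothing is touched before you've read the lists (revdep-rebuild comes from app-portage/gentoolkit):

```shell
emerge -p --depclean   # preview packages nothing in @world depends on any more
revdep-rebuild -p      # preview rebuilds for libraries with broken reverse deps
```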

Occasionally you'll get a problem(Need to write a line in a file or something) and sort it out pretty quickly, and then on a rare day, you'll get something that won't build properly(Not really much of a problem).

The largest time consumption is actually building everything, especially on my netbook, but with an i5 there are only a handful of packages which you'll be irritated to see pop up when you're updating.

As others have said, you can even just do it once a week/month or so.

It depends on your knowledge and experience.
I have Gentoo at home and at work; I manage both of them from home (when I'm at work I don't have the time, CPU cycles, and RAM to spend on updates). My systems are ~

First problem I see is you don't understand what system maintenance means.

It's the cleaning out of old files - not apps. Performing backups (automated) and adding/removing users.

If you think maintenance means updating software and such, then you're doing it wrong.

The first thing to do is look real hard at the intended use of the system. This also gives you an idea of the possible security holes that need to be addressed - yes, you need to plan the security from the beginning, otherwise it's an always-catching-up effort, a day late and a dollar short. From here, you begin deciding what software to use to fill those needs. Before deciding, check the apps out and look for any GLSAs or other security flaws that have been posted online, along with proposed work-arounds for them.

I don't consider a system stable until at least 90 days of normal use hasn't exposed any issues. Once that point has been reached, I lock things down and don't make any changes for at least 12 to 18 months. Sounds like a long period doesn't it but that's the minimum I can get away with due to regulations (business/governmental). Some of us always have to consider the damn red-tape. Long term stability is important from a user standpoint as they don't like change. Stability rules and it's why so many businesses have yet to upgrade past XP-SP2 even though SP3 was released 7 years ago.

Side note:
Home Depot just upgraded their inventory control software from DOS 5.0 to WFW 3.11 last year. DOS 5 was released almost 30 years ago and they just upgraded from it to WFW 3.11, which was released 25 years ago. That's a long time - a full human generation between upgrades - yet companies do this all the time.

If you keep in mind that changes have a time cost - in learning the new methods/software/what ever, you'll quickly come to the same way of thinking as I have on updates and upgrades. Simply put, unless there's a major improvement or bug fix, I ignore updates with the simple "If it aint broke - Don't Fix It" thinking and it works quite well for me. The only time I'll bother making an exception is if I've been hacked or attacked. Then I'll look at what happened and begin planning a clean install - I do not bother trying to fix a broken door, I replace it.

FastTurtle wrote:

First problem I see is you don't understand what system maintenance means.

The OP stated outright they weren't an 'IT' person (whatever that label means nowadays), administratively or otherwise. The only user implied was them. They may as well have been a Mac, Windows or Ubuntu user asking how time-consuming it'd be to properly use Gentoo for their normal personal desktop use, because that was the gist of the post.

FastTurtle wrote:

It's the cleaning out of old files - not apps. Performing backups (automated) and adding/removing users.

I completely disagree. System maintenance in the way you're implying is far broader in scope.

Furthermore, most old files you're referring to are app-related, as are the system files they depend upon.

Gentoo already provides a plethora of house-cleaning, be it configuration files, stale shared libraries and link dependencies, or redundant/insecure/abandoned/replaced/upgraded/whatever packages being removed, replaced and so on. All of which can cause security and stability issues if not pro-actively maintained. Most stray files are user-created and generally frowned upon, because it's up to YOU to resolve issues caused by breaking outside the box.

Backups are a user-space and non-Gentoo-specific issue, manual or via specialized software, regardless of whether automated or not. Yes, your /home directory is your problem, but that's the case everywhere. I did my share of RAID storage servers backed up to DAT nightly, with end-of-week tapes stored securely offsite, in the 90s. When a bank burned down a few businesses away, the executives I worked for stopped questioning why I went through such perceived hassles of preserving their book of business: data representing thousands of clients that they were fiduciarily responsible for, along with the US federal government. Regardless, most people don't back up, particularly with any regularity, or worse, they unknowingly trust 'cloud' subscription services to do the right thing automagically - and that issue is, again, irrelevant to the original theme of this thread. Worse yet, many of today's newer users couldn't care less about any localized data.

FastTurtle wrote:

If you think maintenance means updating software and such, then you're doing it wrong.

Everyone is entitled to strong opinions now and then. We agree to completely disagree. Software (system/user) is generally updated for two main reasons, yes? Functionality (added/changed) and maintenance (fixes). As in the software development lifecycle. Usually it's those localized custom software solutions (all too often already end-of-life, if anything) stopping up the works. So yes, it damn well does count in that category by mere definition. Chasing bleeding-edge releases and upgrading just for the sake of it are a different thing entirely, not to be confused with maintenance.

I see people continually miss the point that current day Gentoo tends to err more with a strong focus on stability and security, all things considered, while you still have the option to go risk bleeding edge if desired. Yet people complain repeatedly in the forums here for one extreme or another, that it's not new enough with xyz. Or, because they insisted on some particular package/version outside of considered stable or couldn't follow best practices on keeping their system current with required changes, that now things 'break' and cry foul. Meh. If that's really what you want then do grab the latest still supported 'LTS' variant of xyz (including an initial Gentoo install that you just sit on if you insist), let it patch to current and network unplug for a decade. When you come back you can tell us how well the capacitors held, if your input power remained conditionally perfect and clean and if any high speed mechanical devices still manage to spin up. Maybe that yet unknown filesystem bug won't have hit you causing bitrot by that point. That is, of course, if the system will still manage to talk to the outside world which never is a moving target of change.

FastTurtle wrote:

The first thing to do is look real hard at the intended use of the system.

Again, refer to the beginning post. Although I agree on glsa-check -t all tests when one emerge --syncs and just frankly doesn't have the time to administratively iron out how important a pretend emerge view of a world update might affect them. Counting only on a glsa check to catch an active vulnerability in a timely manner is foolish. Many updates resolve CVE issues well before a GLSA has been put out. Secondly, in production environments, time is money is the root of all evils and systems are generally rushed into place and band-aided over time. They're lucky if they have a very well written and followed security scheme policy in place at the onset and it requires fairly frequent oversight and revisions to even remain effective. Triage plans also need to be in place to expect and plan for the worst because it can and does happen. A vast majority continue to suck at this because management simply do not want to devote the infrastructure and expense until real liability pressures have become too much of a concern.
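For anyone following along, the glsa-check pass mentioned above looks like this (glsa-check is in app-portage/gentoolkit; flags as I recall them, so verify against your version):

```shell
glsa-check -t all        # test: list GLSAs your installed packages are affected by
glsa-check -p affected   # pretend: show what fixing the affected ones would merge
```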

FastTurtle wrote:

I don't consider a system stable until at least 90 days of normal use hasn't exposed any issues.

That could be perhaps a reasonable view from a does the system 'function' standpoint. Assuming usage constraints were fully tested (good luck), hardware redundancy in place, etc. Given that many security vulnerabilities are often found well after 90 days and actual resolutions made available often much later after the fact, it's not the best security test. Older code tends to make a lot of sloppy assumptions about input validity, often skipping too many checks. Even if all non-security related bugs in software could be somehow discovered and resolved within a 90 day window, the world would be an amazing place.

FastTurtle wrote:

Once that point has been reached, I lock things down and don't make any changes for at least 12 to 18 months.

No changes at all for at least a year to a year and a half, really? Then you have definitely risked exposure for your clients. Well at least these were read only embedded firmware (because firmware never [sic] has bugs!) proprietary systems instead of Windows clients/servers that want to talk to everyone and everything... right? So you, what, unplug from the outside world?

FastTurtle wrote:

Sounds like a long period doesn't it but that's the minimum I can get away with due to regulations (business/governmental). Some of us always have to consider the damn red-tape. Long term stability is important from a user standpoint as they don't like change. Stability rules and it's why so many businesses have yet to upgrade past XP-SP2 even though SP3 was released 7 years ago.

Businesses and governments generally couldn't care less about their employees objecting to change.

Regulations? Seriously, if you sit on your laurels for a year doing the see no hear no chant citing regulations and you end up being responsible for say Sony's debacle, don't you think you'd be held to the fire right quick? Maintenance falls into the damned if you do and if you don't category. There is no perfect black and white.

I don't doubt having dealt with government issues on a federal, multi-state and local level, along with wacky (in)decisions made by businesses that usually only revolve around not seeing past the bottom line, that perhaps your hands are tied changing anything. But that's usually situated around support for... /drumroll apps (again, often EOL) that they have invested time/money into and require continued uninterrupted functionality from without new investment.

Administrators who didn't roll out SP3 for Windows XP clients usually did so only because of perceived costs, some known hardware conflicts, multi-year rollouts under the lowest 'IT' infrastructure spending in years, and pre-existing band-aids (patches) that they had already RISKED applying on top of SP2 and that were incorporated into SP3. They also, for good reason, hated Vista like everyone else. As far as stability goes, a cleanly slipstreamed SP3 install of XP Pro will function better, more often, on the same hardware than a patched-up SP2 equivalent. This is of course completely ignoring the obvious risks to those same systems, which now often remain critically unpatched since SP2 hit EOL in 2010, on the most vulnerability-patched OS released on the planet.

FastTurtle wrote:

Home Depot just upgraded their inventory control software from DOS 5.0 to WFW 3.11 last year. DOS 5 was released almost 30 years ago and they just upgraded from it to WFW 3.11, that was released 25 years ago. That's a long time - a full human generation between upgrades yet companies do this all the time.

Yeah, along with KMart, Sears, hospitals and the banks. Old-guard regimes are not good examples. Like many large organizations, they're full of band-aid solutions in their often-abused 'IT' departments, with a focus on not affecting profit margins. While you cite that example from said corporation, they've invested over the years in PoS, RMIS, SAS and the behemoth that is SAP ERP/CRM (which should have included your inventory-control example, but who knows with some side accounting departments...); see their blog http://team2homedepot.blogspot.com/. And given this, http://arstechnica.com/gadgets/2008/11/microsoft-puts-windows-3-11-for-workgroups-out-to-pasture/ , I question your example on where and why they would even have obtained large-scale site licensing for WFW 3.11.

FastTurtle wrote:

If you keep in mind that changes have a time cost - in learning the new methods/software/what ever, you'll quickly come to the same way of thinking as I have on updates and upgrades.

Then you must be an absolute joy to troubleshoot for. System administrators are paid to expect, correct and deal with possible changes and troubleshooting events on a daily basis while keeping the illusion of no-change to their users. That's the magic and good ones are never idle. Sure, in an ideal world, nothing would need to change or be fixed once set. There are no bugs. Semantically, everything functions exactly as spec'ed and the spec never changes. Businesses would love that, there'd be no support overhead to pay for. Everything would be magically known, no risk. Yet reality is a slippery slope at best which you either fight against the grain to keep a foothold on, or look the other way while you eventually get overwhelmed, crushed and washed aside with the coming tides. Windows users aren't immune. IBM, Microsoft and Apple aren't immune, corporations and governments large and small haven't been remotely immune. So why do you think you're the exception and believing it's helpfully sound advice for others, particularly with Gentoo?

An attitude of inflexibility is not the answer.

FastTurtle wrote:

Simply put, unless there's a major improvement or bug fix, I ignore updates with the simple "If it aint broke - Don't Fix It" thinking and it works quite well for me.

Does it? How would you know, short of an actual strenuous audit or a noticed point of failure? Have you run Metasploit against your systems? You do so at your peril on a rolling-release distribution such as Gentoo. So let's discuss that...

FastTurtle wrote:

The only time I'll bother making an exception is if I've been hacked or attacked. Then I'll look at what happened and begin planning a clean install - I do not bother trying to fix a broken door, I replace it.

In this day and age, short of a tremendous amount of pro-active work versus the money going against it, if someone or, worse, some group wants to hit you, you won't know - or won't easily know - what happened until it's far too late. It's not like they'll leave a welcome note on your doorstep. And when you do that clean install, how did that really keep your issue from recurring?

Navar: Any response I provide can't be proven/disproven and is outside the scope of the OP's question of "How long does system maintenance take on a daily/weekly/monthly basis?"

As to my business security practices, I'm forced by both Law and Regulation to follow certain practices and they're well outside the Scope of the OP's Question so I'm not going to head there as they're not open to debate unless you're in a position to actually change either the Laws or the Regulations I'm subjected to.

As to comments in regards to system stability: I'm in business to make money, not spend it, just like every other business out there. Thus if it doesn't help the bottom line and isn't forced on me by a law or regulation, it aint gonna happen. Another old saying that applies is "Penny Wise - Pound Foolish", and yes, MY time has a definite cost to the company. I don't work for free and do get overtime when earned.

@navar @fastturtle - Thanks for the insights. For the record, I'm the sole user of my system. At my work (a university) the system admins are also very conservative. Yes, I understand that, but to me that conservatism makes the system unusable. Forcing employees to work with old-fashioned junk has a cost - lower productivity and lower satisfaction. More and more colleagues bring their own devices, because the official ones are stuck in the 90s. So we see BYOD, cloud and shadow IT. Meanwhile the IT dept. loses its relevance and they don't even see it. They are nice guys, I know them, I tell them what is happening, but they still don't see it. And our students don't use the official systems at all unless they are forced to...

For my personal use, stability is not very important. I back up my files, and I have several Linuxes installed on my laptop. So if, say, Arch or Gentoo would not run, I fire up Mint and work from there. The file system should be stable of course; I don't use btrfs, for instance. Security-wise, I don't have the time nor the knowledge to read all warnings. On my Debian Sid system I look at the Debian Weather (yes, a silly name) and when there are not many problems I dist-upgrade (I don't know the Gentoo term yet). And I hope things don't break. If they do, I wait for a few days and then upgrade again. Yes, I am a noob, I am sorry.

Finally, it seems I have the time. So Gentoo here I come - I do love the Forum!

LOL Actually, I found out that I know more about computing (specifically Linux) than many professional IT guys at our university... seems like we have come to the point where *everybody* is doing exciting IT projects BUT the IT dept., who are doing mind-numbing stuff like XP->Win7 migrations. Booooooring: click, next, click, next.

I do my sync and update check with my morning coffee. Running on stable amd64 with a few unstable packages (mostly multimedia apps), most days there are no or very few packages that need updating.

Today was a bit special, as a new version of icu came along; it always breaks several big packages, and revdep-rebuild will chug along for hours. But on average, I'd guess that daily maintenance takes less than ten minutes.

Not much, apart from the usual `emerge -avjuDN world`. Tricky stuff starts when Gentoo maintainers screw something up (happens once a week on average); then it takes about an hour to figure out the problem and push fixed ebuilds to my private overlay.

Kernel version changes mean new options, and often some drivers move to other locations. E.g. my webcam stopped working in Skype; I found out that the UVC driver had moved to a different config location. That also takes some time.
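Hunting down a moved option goes faster with a grep over the Kconfig files; a sketch using the UVC webcam driver's option name as the example:

```shell
cd /usr/src/linux
grep -rn "USB_VIDEO_CLASS" --include=Kconfig drivers/   # find where the option now lives
make oldconfig    # re-asks only options that are new or have moved
```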

My home systems (these days that's 4 machines: 3 clients and 1 server) usually get updated weekly. I start on Saturday or Sunday morning before the rest of the family has awakened. Getting it all started usually takes somewhere around 20 minutes, up to the point where I've pressed "Enter" after "emerge --sync && layman -S && emerge -atuvDN world". That number is kind of an average, because sometimes I see things in that emerge that I don't like, and have to fiddle with USE flags or something.

At that point the maintenance time becomes largely irrelevant to me, because I'm off doing other things while the machines chug along. I visit the controlling console with its 4 xterms several times through the morning, nudging things along, typically spending less than a minute at any one visit. Typically by noon or so the last "revdep-rebuild -p" is done. The server is the slowest machine, and typically the last one done. If one of the clients needs a new kernel it's usually done before the server finishes its package updates. If the server needs a new kernel it goes into the afternoon.
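The kick-off across machines could even be scripted; a rough sketch with hypothetical host names that stops at the pretend stage, so reviewing and answering the real emerge stays manual:

```shell
#!/bin/sh
# Sync each box and log a pretend world update, one background job per host.
# Host names are examples; review the logs before running the real emerge.
for host in client1 client2 client3 server; do
    ssh "$host" 'emerge --sync && layman -S && emerge -puvDN @world' \
        > "/tmp/update-$host.log" 2>&1 &
done
wait   # block until every host's pretend run has finished
```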

So: run time, half a day or so, weekly. Keyboard time, probably 1-2 hours weekly. I could probably automate and reduce the keyboard time, but I haven't bothered.

I run 3 Gentoo machines: one ~amd64 Lenovo Thinkpad T500 as a desktop, one stable amd64 Lenovo Thinkpad T500 at work, and one stable amd64 Atom D525 as a home server. My desktop gets updates every three or four days. That update finishes within 10 minutes, except for KDE upgrades, which take nearly 10 hours. Since I'm switching from KDE to PekWM/tint2 these days, upgrades will be very fast in the future.

My desktop gets updates every three or four days. That update works within 10 minutes except for KDE-Upgrades which takes nearly 10 hours.

10 hours for KDE alone?
I hadn't done updates since March and ran one a couple of days ago. Actually the whole system was rebuilt (570 MB of sources were downloaded and 162 packages were rebuilt/reinstalled) in 2.5 hours, including the new KDE 4.10, Firefox, Amarok, Qt, the kernel and so on.