Linux - Newbie
This Linux forum is for members who are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!

Dealing strictly with hardware: is it safe to leave the PC on all the time, or does it slowly wear the hardware down? I couldn't find any articles on this and was curious whether I should leave it on constantly.

You shouldn't ever need to turn a machine off... if you do, it's badly designed. ("Your mouse has moved, please reboot Windows.") My server's been up for about 3 months, but that's nothing, of course. Cue the "my uptime's bigger than yours" competition.

Looking deeper at the hardware, you have to keep in mind that turning computers on and off sends power spikes through all the devices (hard disks included!), which is obviously not the best way to spare them.

My ultra-generic box that I built back when I didn't know jack about computers runs fine staying on 24/7 for weeks at a time. I've had no hardware problems from heat (and there are only two fans in there: the CPU fan and the power supply fan).

If you have a cheap power supply, cheap or missing fans, or a poorly ventilated case, it could overheat easily. Most computer devices carry a rating called MTBF (mean time between failures): on average a device can run continuously for that many hours before failing, but turning your system on and off shortens this time. It's like a light bulb: left on, it lasts its rated hours, but flick the switch too often and it will probably burn out early.

The question for your OS is whether it will last. I find my Windows machines need a reboot after any extended length of time, but my Linux machines usually get rebooted only when the hardware or kernel needs to be changed out, or when I lose power.
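To put a rough number on that MTBF idea: a common simplified model treats failures as exponentially distributed, so the chance a part is still working after t power-on hours is exp(-t/MTBF). A quick sketch in Python; the MTBF figure here is made up for illustration, not any manufacturer's spec:

```python
import math

def survival_probability(hours_on, mtbf_hours):
    """Chance a part survives hours_on power-on hours, assuming a
    simple exponential failure model (an idealization, not a spec)."""
    return math.exp(-hours_on / mtbf_hours)

mtbf = 500_000  # hypothetical drive rating, in hours
print(survival_probability(24 * 365, mtbf))  # one year running 24/7
print(survival_probability(8 * 365, mtbf))   # one year at 8 hours/day
```

Note this model only counts power-on hours; it ignores the extra stress of power cycling that the posts above describe, which is exactly why the lightbulb analogy cuts both ways.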

Windows desktop OSs measure uptime in hours...
Windows servers measure uptime in days...
Linux/Unix (and Netware too) servers measure uptime in months or years.
I once visited a Netware related site where admins listed the uptimes of their servers and one had been up for over 4 years.
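If you want to join the uptime bragging, the Linux kernel exposes it in /proc/uptime (the first number is seconds since boot). A small sketch, assuming a Linux system:

```python
import os

def read_uptime_days(path="/proc/uptime"):
    """Return system uptime in days, read from Linux's /proc/uptime.
    The file holds two numbers; the first is seconds since boot."""
    with open(path) as f:
        seconds = float(f.read().split()[0])
    return seconds / 86_400  # 86,400 seconds in a day

if os.path.exists("/proc/uptime"):  # guard so this is a no-op on non-Linux boxes
    print(f"up {read_uptime_days():.1f} days")
```

The `uptime` command gives you the same figure (plus load averages) without any code.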

The question "do I flick the switch till it dies, or leave it running till it dies?" is one of those questions, like "what's your favorite {OS/window manager/Netscape theme/food/movie/song}?", that will really be up to you in the end.

HA HA, do you mean loud like the wheel turning or annoying like Pat Sajak?

Getting back to the topic: leave your computer on 24/7 if you want; just make sure you have power management set up to shut down your monitor, or turn it off manually if you aren't going to use it for half an hour or more.

All solid-state electronics suffer from thermal cycling: heating up and cooling down, i.e. expansion and contraction. This is particularly a problem with solder joints, socketed ICs like BIOS chips, and critical connections like DIMM sockets. In the old days of socketed dual-in-line ICs, after a while they would climb right out of their sockets due to thermal cycling.

Electromechanical parts, i.e. disk drives, also suffer from thermal cycling, but have the added stresses of acceleration and deceleration at varying temperatures to worry about. Every time you boot your machine, the poor old hard drive gets spun up cold and the head actuator gets thrashed just as it's warming up. A nightmare! This is why a lot of hard disk failures are first noticed when booting a cold system.

You are better off running the kit a little warm than turning it on and off.

Leaving things running is not the best idea if you intend to keep the system as it is forever.

In the early years of personal computers, designs had weaknesses that tended to fail at the moment you turned the power on.

That is no longer as much of an issue. I've spent 25 years as an electronic design engineering technician, so I speak from the day-to-day experience of telling the design engineers how their designs failed; most of the more modern designs do not fail as often. That said, there are still some designs bad enough to have the problems a better design avoids. Again, that's not as likely, though, since familiarity with the issues involved has taught design engineers what to avoid (if the engineer has listened).

Lastly, consider the MTBF ratings of hardware, especially hard disk drives. Most hard drive manufacturers rate a drive as having a certain failure rate within a certain number of power-on hours; the key concept there is 'power-on hours'. If you shut off the drive overnight, it accumulates power-on hours over a slower time span, so reaching the failure point takes longer in calendar time. IBM hard drives are currently suffering from a ratings trick that boosts the apparent MTBF, but under nonstandard conditions: the Deskstar family is rated for a certain lifetime based on being used only 8 hours per day. Some people bought them without knowing the details and left the drives running 24x7, only to find them failing in a third of the rated lifetime. IBM is currently divesting itself of these disk drives; evidently it is too costly to honor the warranty returns.

Turn the stuff off, it is both morally responsible and cost effective.
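The arithmetic behind that Deskstar story is straightforward: a drive rated on an 8-hours-a-day duty cycle burns through its power-on hours three times faster when left on 24/7, so its rated lifetime shrinks to a third in calendar terms. A back-of-the-envelope sketch (the figures are illustrative, not IBM's actual ratings):

```python
def calendar_years(rated_poweron_hours, hours_per_day):
    """Calendar years until the rated power-on hours are used up."""
    return rated_poweron_hours / (hours_per_day * 365)

# Hypothetical drive whose rating assumed 8 hours/day of use:
rated_hours = 8 * 365 * 5                # i.e. a nominal 5-year lifetime
print(calendar_years(rated_hours, 8))    # used as rated: the full 5 years
print(calendar_years(rated_hours, 24))   # left on 24/7: about a third of that
```

The drive isn't dying faster in power-on hours; you're just spending those hours three times as quickly.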

But... (there is always a but, isn't there?) "most of the more modern designs" have many power-saving features that make power consumption hardly an issue. While you are away from a powered-up system, your monitor suspends, your HDD powers down, your CPU throttles down to nothing, and you aren't spinning devices like CD/DVD/floppy/Zip drives, so the system draws very little power.

Computer technology changes so fast that it's hard to keep something in your PC until it reaches its MTBF; it will likely be obsolete well before then.

<BOFH>
To set up your computer to run 24x7 you need to first initialize the power restrictor gradient to avoid harm from crosstalk of buried phone cables. Find the power switch, not that silly button on the front of the case but the REAL power switch, on the back at the power supply next to the power cord. Turn the computer on and when it is running all the way, use the power switch in the back. Turn it back on and this time use the power switch while it's doing that disk check, this verifies the power gradient. Do this until the system no longer checks the drives after you switch off power and then it will be ready for power saving mode!
</BOFH>