Posted
by
kdawson
on Tuesday October 23, 2007 @01:12PM
from the changing-spots dept.

Last week we discussed some of the security features coming in Leopard. This article goes into more depth on OS X 10.5 security — probably as much technical detail as we're going to get until the folks who know come out from under their NDAs on Friday. The writer argues that Apple's new Time Machine automatic backup should be considered a security feature. "Overall, Mac OS X 10.5 Leopard is perhaps the most significant update in the history of Mac OS X — perhaps in the history of Apple — from a security standpoint. It marks a shift from basing Macintosh security on hard outside walls to building more resiliency and survivability into the core operating system."

Well, Linux and Apple people like seeing Microsoft with security holes. How many articles about Microsoft security problems are tagged "HAHA"? Windows people like seeing Apple and Linux security holes because then they don't feel as bad about choosing Windows. Linux people are not normally too happy to see Apple security holes, because it usually means Linux has a similar problem, and vice versa.

It is basically a case of: if one can say "I am more secure than you," then I win.

I guess it depends on what you mean by "work together". They sort of do work together. They're constantly borrowing ideas from each other. Sometimes the Linux/Mac/Unix people are even using the same code. But do any of them want to hold up their own security improvements while they try to persuade everyone else to adopt the same security practices?

Why doesn't everyone (Apple, Microsoft, Linux/Unix people) work together on security? It's the one thing that everyone benefits from.

Microsoft is free to use any and every security feature ever developed by the open source community. This includes virtually 100% of Linux/BSD's development and the lion's share of OS X's security features as well.

The reason we can't say the same for the Microsoft-to-open-source direction is that, for a lot of the security in Windows, no one has access at all.

Apple locks their software to particular hardware, and locks up their hardware (e.g. the iPhone) and bricks it if an end-user tries to modify it.

...tries to unlock it. Have there been any cases where merely installing third-party software on a machine caused it to be bricked on an update (and, if so, was it demonstrated that the third-party apps were the cause, and were there any cases of an unmodified iPhone being bricked by an update)?

Apple contributes a lot to the open source community. Safari/khtml is perhaps the best example of that, but they open source their kernel (darwin), quicktime streaming server (darwin streaming server), OpenDirectory, bonjour (mDNSresponder) and a number of other tools and software packages. Apple also owns CUPS, though they bought that and didn't develop it in house (it's GPL2).

On top of that Apple regularly credits security researchers and links to their websites in software updates when they report vulnerabilities to Apple. They work with the community, not against it.

You can work with Apple on these open source projects. The fact that you don't, and that you don't know about them in the first place probably means you aren't a programmer, and aren't really serious about contributing to open source. What you really like doing is feeling superior.

It's perhaps most telling that you use the iPhone as an example of why you're upset at Apple's lack of security. You have it all backwards. The issue with the iPhone was that there were security vulnerabilities. The iPhone was cracked with a buffer overflow exploit. Apple fixed the exploit, which broke hacked phones. They did not intentionally brick phones, and instead told people not to update if they had hacked phones. You're probably remembering the whole thing wrong because you were too smug to learn the facts. Hint: fixing buffer overflows is good security, not bad. Apple is under no obligation to preserve a buffer overflow on a product they ship. If you don't want a security hole patched, don't update the product.

Apple hasn't violated the terms of any open source license. They give back to the community. They maintain a number of open source products. You can be mad about the iPhone being locked, but that's a separate issue from security or open source.

Well, a lot of people considered moving from OS 9 to OS X a downgrade. It took until 10.2 for it to have features better than OS 9; before that, a lot of internal things had changed, but it wasn't better, just potentially better. 10.5 may be the OS version with the most improvements to the system, not the most changes to the code base.

It wasn't a lot of people. It was a vocal minority, the same minority which swore up and down that they'd never touch Apple again after the Intel switch and who spend hours debating the tiniest "flaws" in OS X's GUI. In other words, people for whom computers are an obsession or a fetish.

To the rest of us--people for whom computers are tools used to make money--OS X, and the features it brought, were long overdue. The switch was entirely worth it if only for the addition of a modern memory subsystem to an Apple OS. No more cooperative multitasking and having to specify how much memory each application got.

Umm...not entirely. I really like the power of OS X and am quite enthusiastic about the Intel switch. And yet, as an Apple fan from the mid 90s, I can completely recognize that 10.0 was pretty rough when moving from OS 9. Do you remember how slow that felt? OS 9 still feels faster to me than OS X, although I'd never, ever want to use it again.

I mean really... you think the people who even know the term "preemptive multitasking" weren't outnumbered by those who groused about how the new Mac upgrade ran a

I can completely recognize that 10.0 was pretty rough when moving from OS 9.

Old Macs had a flaw (yes, I said it) where holding down the mouse button would freeze the rest of the computer.

Including the network stack.

We noticed this because when the rest of the office would play MP3s from our graphics guy's Mac's shared folder, everyone's audio would randomly and simultaneously drop out. We eventually realized that it happened when he was holding Photoshop's menus open for a long time while he pondered which filter to apply to some image.

Talk about a false dichotomy! Do you really think the two are at all related?

There were people who understood the flaws, but (correctly) thought that moving to OS X should not require giving up good performance (which took years to get back), or UI niceties like the way the classic Finder worked. As to the latter, unfortunately Steve apparently didn't like the old Finder and never allowed the OS X Finder to work the same way. Spatial mode is still broken to this day, the "Show Package Contents" feature is

Talk about a false dichotomy! Do you really think the two are at all related?

Definitely. The old OS model allowed certain shortcuts such as hacks that directly patched the code segments of other programs that were running to change their behavior. The new protected memory model flat-out makes that hackery impossible, so it was up to programs to add explicit support for message passing and other external control systems. There isn't a message passing system in the world that's as fast as just overwriting a destination application's buffers with new data.

That's just one example of why some things are inherently slower if done right. Sometimes it's just not avoidable. That doesn't mean that the new way is inefficient or bad, just different.

I was never into Macs back in the day so I can't comment on old vs. new Finder or spring loaded folders, etc., but I find it telling that the only people who seem to seriously dislike the new Finder are the ones who seriously loved the old one. To everyone else it's pretty spiffy and a reasonably good model of how such things are supposed to work. That is, I'm not at all convinced that the old Finder was actually superior; it's just that people liked it that way, darnit, and anything different is inferior by definition.

None of that has anything to do with multitasking or event loop handling and you know it.

I was never into Macs back in the day so I can't comment on old vs. new Finder or spring loaded folders, etc., but I find it telling that the only people who seem to seriously dislike the new Finder are the ones who seriously loved the old one. To everyone else it's pretty spiffy and a reasonably good model of how such things are supposed to work. That is, I'm not at all convinced that the old Finder was actually superior; it's just that people liked it that way, darnit, and anything different is inferior by definition.

As someone who used the old (oops, "Classic") Mac OS from versions 6-9, while I do think there was a certain level of curmudgeonliness among the people who swore they wouldn't switch, there were very legitimate concerns about the OS X Finder and GUI, which I'm not sure have really been resolved.

Don't get me wrong, I still think OS X is better overall, because of its underlying architecture and a functional CLI, but the Classic Mac GUI had been honed incrementally over almost two decades before Steve just decided to bin the whole thing and reinvent the wheel. It was that interface which made the crappiness of OS 9 worth dealing with, despite the fact that you could hang the whole system by holding down the mouse button, and had to manually allocate memory, and everything else. It was the Mac's saving grace -- perhaps its only saving grace -- throughout the 'lean years' of the platform. And that's why a lot of users just never got over its elimination; it was, for many people, the only reason why they'd stuck around for so long.

There was no real reason to change it when the old codebase was dropped for NeXT's: even if none of the code needed to be kept, the interface guidelines that had evolved as best practices, arrived at by painstaking trial-and-error by generations of Mac programmers, could have been retained. What I think happened is that Steve Jobs wanted more eye candy, and wanted to make the entire desktop reflect the OS's "newness." It was a sales tactic, and although I don't think there's any debate that it worked, it was a pretty huge cost.

OS 9 was an operating system with a great GUI and a terrible backend; OS X had a great backend, but a GUI that was almost unusable at first, and which has only very recently come back on par with the Classic OS circa System 7.5 or so. (They just recently snuck the option-click-to-close-all-Finder-windows trick back in, which I believe originated on the IIgs, and was definitely missing for a while in early OS X versions...)

(Incidentally, the interface schizophrenia isn't limited just to the Mac OS; you also see this behavior in some of the major Apple apps [e.g. iTunes] -- every time there's a whole-number version increase, some part of the interface gets changed, apparently for the sake of changing it. It's as if they realize that some people won't believe that anything is different unless the widgets change, so they scramble everything around periodically, just to keep everyone on their toes.)

OSX was worthless to me (as an audio engineer/sound designer) until they added Core Audio, which made professional audio tools possible. But it took too long. By then, all the cool kids had given up on ProTools and MOTU, and were using SONAR, Gigastudio, and Nuendo on Windows.

A nearly non-existent minority actually thought that Mac OS 9 was better than Mac OS X at first. This minority survived until the release of Mac OS X 10.2. A large majority of Mac OS 9 users migrating to Mac OS X thought that, while pretty, the Aqua UI was slow, bloated, and annoyingly shiny. They also gave most of the organizational features of the Finder a complete fail as well. Gone were spring-loaded folders, pop-up-tray tabs on the desktop, hierarchical menus, the app-switcher menu, and a host of other thin

The switch was entirely worth it if only for the addition of a modern memory subsystem to an Apple OS. No more cooperative multitasking and having to specify how much memory each application got.

Yeah, that and security -- including real multi-user stuff. There were always some users who got stuck on the OS 9 crap. They'd get their knickers in a twist because there was some missing feature like the color "labels". And then there were the OS 9 power-users who had figured out how to do all the insane old Mac

Bullshit. If you had to spend a year using 10.0.x or 9.0.x which would you pick?
I'd pick 9.0.4 every time. And I like OS X. I evaluated every version of it from Developer Preview 1 up until 10.2. I switched to OS X for daily use when 10.2 shipped. Because for day in/day out use of an operating system I have to get work done, not just admire its microkernel or crash protection.

OS 9 was more responsive, yes. But, due to cooperative multitasking, if any program crashed, your entire computer did as well. I did some fairly memory-intensive Photoshop work for a newspaper on an OS 9 Mac that was packed to the gills with RAM, and I'd have an average of two reboots a day. This can be maddening to the point where you'll want to throw the Mac out the window if you just lost an hour's painstaking work to the fucking bomb. Then OS X came about. Systemwide crashes are a rarity, and in my exper

Maybe in the history of Mac OS X, but definitely not the history of Apple itself. I'd say that would be, oh, the shift to Unix.

Myself, I would consider the shift in architecture a greater historical shakeup. It's still amazing to me that Apple has shifted their core processor/architecture setup twice, including an emulation layer (each time) to ease the transition. I had (and still own) a Motorola Mac (SE/30, Moto 68030 CPU) and remember the titanic shift it was migrating to the PowerPC. And, more recently, shifting from the Power/RISC platform to Intel. I think Apple's continued demonstrated ability to shift its underpinnings with damn near nary a disruption is scary impressive. :)

OK, we'll say UN*X instead. For many purposes, being UN*X is good enough - for example, no Linux distribution I know of is UNIX, none having passed the SUS validation suite, but a lot of stuff written for UN*X Just Works.

Reading this made me wonder. What would happen if you had an important file, temporarily dropped it in a public location, then moved it out once the person downloaded it? Then someone goes and runs Time Machine on the public directory and picks up the file that you deleted... Also, will Time Machine pick up different permissions set on a file at different times? Say you made it and tested it as 777, then after you assured it physically works you brought it down to 755. Will it allow you to go back in time and get the 777-permission version of the file?

While I do agree having good backups is an important part of security... perhaps, just perhaps, because it is so easy, there is a security problem with it.

What would happen if you had an important file, temporarily dropped it in a public location, then moved it out once the person downloaded it?

If it is an important file, why would you drop it in a public location in the first place, instead of just transferring it directly to that user or putting it in a password-protected location for them? The scenario you envision is already a security problem, because you're posting private data in public temporarily. I'd argue the right solution is not to do that at all.

Sure, you can argue the correct solution, but my way is the easier solution... Given the choice, most people will go with the easy solution: put it in a public location, turn on file sharing, tell them to go to this address, then turn it off after they got the file, delete the file from that dir, and you are all set. For most cases it will take a while for a hacker or whatever to find the file and get it during the 10 minutes it is public. Of course there are more secure ways of doing this, but the point is how far

You're assuming that Time Machine works over a shared network folder. I very much doubt this will be the case. To my mind, Time Machine looks an awful lot like a pretty wrapper around a snapshot function, similar to that found in modern logical volume managers and SAN products. Sun's ZFS has such a function, and Apple have licensed ZFS for inclusion in Leopard [news.com].

Such a system generally works at the block level (with LVM), though with the filesystem integration ZFS gives it could probably operate more efficiently.

Sure, you can argue the correct solution, but my way is the easier solution... Given the choice, most people will go with the easy solution: put it in a public location, turn on file sharing, tell them to go to this address, then turn it off after they got the file, delete the file from that dir, and you are all set.

Or easier yet you can include it in an IM chat or e-mail, which is what most people do these days and which is no less secure than what you describe.

For most cases it will take a while for a hacker or whatever to find the file and get it during the 10 minutes it is public.

Sure, but you're advocating lousy security instead of real security. Do tell, how is your method "easier" than e-mail or chat file transfers?

If it is an important file, why would you drop it in a public location in the first place...

GUIs are prone to errors, just like consoles. All that has changed is how the error manifests. When your finger slips at the console you get a typo. When your finger slips during a drag you may inadvertently issue a mouse-up and drop the file being moved prematurely, in the wrong folder. It can be a PITA when you were dragging over a bunch of subfolders in a list view.

On the "777" issue, I don't think the backup snapshots are writable in the general sense, so it wouldn't much matter if your backup of a file had writable perms. What you're probably more interested in is a file you initially created as 755 and later changed to 700 (which is basically the same issue as your "accidental publication" concern). The answer is that Time Machine allows you to explicitly ask it to delete all historical copies of a given file, for precisely these kinds of reasons.

Then someone goes and runs time machine on the public directory and picks up the file that you deleted.

Time machine isn't a feature that "someone" can run against your network drives. Time machine allows you, the operator, to use a second hard drive to maintain snapshots of the drives that you're using. Since the snapshots are on a separate drive, there's no risk that someone accessing your system remotely will have access to files that you've removed, or whose permissions you've changed.

The much-needed focus on availability is a real breath of fresh air. If one can recover a previous state (i.e. if it is available), it's a great deal easier to restore integrity. Confidentiality improvements are always welcome, of course, but they'll never be complete, and availability allows us to recover after the fact.

Also, Time Machine is a great forensic tool.

Overall, of course, I'm lauding the article more than 10.5, since I'm unaware of any of these features being truly new to the IT world.

"With Time Machine making it easier to back up for all users, especially individuals not already protected by some corporate backup system, Apple is doing more to improve security than any upgrades to firewalls or Safari ever could."

Although I am a fan of backups, this is really silly. Even if we assume that users have Time Machine turned on, that they have external media on which to back up, that they manage to actually have everything turned on and hooked up to do the automated backup, there's still o

"Code randomization" is a terrible idea. Virus writers will write something that searches around for the right place to patch. Developers will think buffer overflows are now OK, and write worse code. Worst of all, bugs become nonrepeatable and harder to debug. (Great for tech support. Much harder to pin blame on the vendor now.)

"Virus writers will write something that searches around for the right place to patch"

No, they won't be able to do that. At that point, they haven't gained execution yet.
Buffer overflows require you to jump to code which is in a known place in memory (usually libraries), which in turn slingshots you back to the exploit code stored on the stack (or other). Without knowing where to jump to, your malicious code will just sit there in memory, not doing anything.

ASLR works using the dynamic linker. For the vast majority of programs (I can't think of any counter examples off the top of my head), the dynamic linker works transparently to match up in-program function calls with their proper library addresses. If ASLR adds bugs to the implementation, it must be because of a faulty linker, which can be debugged out.

Virus writers will write something that searches around for the right place to patch

It's not quite that simple. Virus writers have a practical limit on how much code they can squish into a buffer overflow (which reduces the effectiveness of a NOP slide). Not only that, protected-memory operating systems will bomb out if you start randomly poking at memory addresses. Since the addresses are randomized, you don't really know where to start looking, which means it becomes a probability game of how many valid addresses the code you're looking for could be at, compared to the total address space.

Developers will think buffer overflows are now OK, and write worse code.

Developers have known about buffer overflows for years, and people still use sprintf over snprintf. I doubt anyone who is doing any serious coding will look at ASLR and say, "Hurray! We can forget about string validation!"

Developers have known about buffer overflows for years, and people still use sprintf over snprintf.

snprintf just trades off potentially writing past the end of the buffer for potentially truncating the output. People should be resizing their buffers as needed -- when is it ever OK to truncate data? -- and stop misusing the 'n' functions.

I can't say for sure that Apple did this, but do note that randomizing it once per computer (e.g. randomize it *while* prebinding) is very nearly as effective as randomizing it every time. It still means someone can't write exploit shellcode that works on all (or even a significant fraction) of machines. This is the approach the Linux prelink tool uses.

- Which class of bugs depends upon the memory layout of your libraries? E.g., what kinds of bugs happen or don't happen depending on that layout?
- Do you have any idea how much less vulnerable you are to an attack when the attacker can't get you in 1 hit? A network-based attack would essentially have to flood you to get the right address, and bandwidth limitations could prevent them from ever doing it (searching through a multi-gigabyte address range a few dozen bytes at a time takes a *long* while when you'r

"Code randomization" is a terrible idea. Virus writers will write something that searches around for the right place to patch.

Brilliant solution. All they have to do, in order to run the code that "searches around," is run some code that searches around for the right place to patch. But to run that, they first have to run some code that searches around for the right place to patch. But in order to run that, they have to--STACK OVERFLOW! User Sloppy DoSed.
Oops, I guess it worked, after all.

I am wondering if some even more basic holes have been filled here. I have been given to understand that one of the problems with OS X is that, in order to make some legacy software such as AppleScript work, Apple had to make a few file settings more open than they should be.

The big example is the one which allows a USB drive with a correctly set up copy of OS X on it to automatically become the boot drive, with full root access to all drives, on a restart. IIRC there's even a company that sells these things pr

The USB thing can be fixed via an Open Firmware password (G5 and below, though I'm sure there's an equivalent for Intel). If you have one in place, holding down the option key on boot will present you with a password screen before the Boot Manager. The only other ways to boot from an external disk, if there is an Open Firmware password, are to use the Startup Disk pane of System Preferences (requires admin password) or to use the bless command in the terminal (requires sudo / root access).

This is why I said "default settings". There are several more things like this. It's stuff that can be fixed by folks who know what they are doing. The whole point behind Macs, though, is that they should not require that level of system knowledge to make them work. This is or has been a known problem on a few Linux distros as well. Still, IMHO it should be fixed. There's a big difference between surreptitiously slipping a flash drive into a slot for a minute or two and taking the lid off a machine. Especially

It is possible to boot a Mac from an external drive (USB or Firewire on Intel Macs, and Firewire drives on PPC Macs) but it is pretty easy to stop that from becoming a problem. Apple have a utility that stops people changing firmware settings including booting from a different drive http://docs.info.apple.com/article.html?artnum=106482 [apple.com]

Application signing, warning dialogs for downloaded files, and the like... these have been Microsoft's first line of defense against cross-zone exploits for a decade now and they have systematically failed. Now Microsoft is using Sandboxing, and that will also fail.

I wish that Apple would decide to photocopy good ideas from Microsoft rather than bad ones. The single set of application bindings for helper applications and URL handlers? That comes from Windows. The idea of giving users the opportunity to open potentially hostile files directly from mail and browser software? That comes from Windows. Open Safe Files? That comes from Windows. Popping up dialogs before automatically doing stupid things, instead of not automatically doing stupid things? That comes from Windows.

The last straw for me was when Safari on OSX warned me that I was downloading an EXE file because it's executable. Not that I was running it. Just that I was downloading it. Holy Mother of Turing!

Re: Vista Previous Versions (Also in 2003 Server)

Some users will find the feature objectionable because it could give the bossman a new way to check up on employees, or perhaps it could be exploited in some nefarious way by some nefarious person. Previous versions of Windows were still susceptible to undelete utilities, of course, but this new functionality makes browsing quite, quite simple.

How freaking stupid can this get? The person who wrote the content at the link you provided knows NOTHING about what they are talking about, confusing terms and not even 'getting' the context of what they are trying to argue. And you post links to technical articles you apparently don't even understand, or you would realize how off track you were. Here, try this... Instead of 'Volume Shadow Copy', introduced in Windows XP/2K, or 'System Restore', introduced in WinME and effectively in Windows XP, go look up 'Previ

Time Machine is a security hole from hell. Just suppose you record some pr0n of yourself using the built-in iSight, then think better of it and delete the files. Now anyone can casually sit at your desktop and retrieve all the compromising files.

Apple just made it easier to recover deleted files, if you're using backups. If you're not using backups, there is no problem. OS X has also long had a "secure delete" option that not only deletes the file, but writes over it with random data multiple times, a la DoD requirements. I'd be willing to bet it also does the same on your Time Machine backups.

Watch the Apple leopard video. I believe in there, they talk briefly about how TM has the option to permanently remove all versions of a file. It should also be mentioned on the TM feature page Apple has on the web site... in any case it's possible.

It's such an obvious feature it's no surprise it's included. This is versioning 101 stuff.

Watch the Apple leopard video. I believe in there, they talk briefly about how TM has the option to permanently remove all versions of a file. It should also be mentioned on the TM feature page Apple has on the web site... in any case it's possible.

It's such an obvious feature it's no surprise it's included. This is versioning 101 stuff.

How does it know? When is a file a version and not a new one? For example, if I have a configuration file for some data-processing program I use, and I edit it in different ways for different runs, is this a version or a different file? Or how about a generic reference letter I go in and change the names in for another use: version or different file? What if I move or copy a file? Are these versions?

That's easy. It tracks the changes to the files. If you create a new file by using "save as", that won't be deleted and neither will its history, but that is obvious because the original file still exists. If you move a file, it is still the same file. If you copy a file, you've made a new file, based upon the old one.

That's easy. It tracks the changes to the files. If you create a new file by using "save as", that won't be deleted and neither will its history, but that is obvious because the original file still exists. If you move a file, it is still the same file. If you copy a file, you've made a new file, based upon the old one.

Okay, try this one on for size. Make a hard link of a file. Now edit one of the hard links and save it (not save-as, just save). Now which one is the copy? From the file system's POV the edited one will be a copy. But from the user's point of view it might be the original, especially if they had no way of knowing the hard link had been made.

For example, since I don't have Time Machine yet, I currently snapshot my home directory by making an image of it populated by hard links. This happens in the backgrou

OS X has also long had a "secure delete" option that not only deletes the file, but writes over it with random data multiple times, a la DoD requirements. I'd be willing to bet it also does the same on your Time Machine backups.

This is just a wrapper around the shred utility in Linux, I would guess. Used with find, shred is pretty cool.

Another poster has addressed the core issues (secure delete, etc.), but one other thing needs to be pointed out: at least anecdotally, I suffer data loss far more often than I have hackers breaking into my system (at least that I know of) or having to deal with the compromise of sensitive information from my hard drive. There is a greater risk for many people in lack of backups vs. outside threats who have sufficient access to the machine to see data we've deleted without bothering to secure delete it or dele

The consequences of a privacy breach are incomparably more grave than those of data loss. You could be put in jail, face a divorce, get fired, or have your reputation permanently tarnished by content leaked on the Internet. Companies will face lawsuits based on intermediate versions of memos that were never actually distributed. Suppose you were writing a letter to an old friend and, in a moment of weakness, added a paragraph on how you still have a crush on her and would like to meet. Later you think better of it

Just exclude your homemade porn folders from the Time Machine backup set. Easy. If you forget to do this, just delete the files on your Time Machine drive; it uses the standard .snapshot-style folder layout. No binary databases or big backup blobs that you can't parse and delete yourself.
If you want encrypted backups, set an encrypted DMG as your Time Machine target. You can even use AES-256 in Leopard.

I'm hoping that this is meant to be sarcastic, though I'm certainly stretching to find it.

Security hole from hell? Okay, if a person has that kind of access to your machine, your files are really already compromised; because unless you frequently leave your Mac out in the open with the root password pasted to it, people will rarely get to the point where they can recover incriminating files. On top of that, you can control what Time Machine does and does not back up.

Wait, but I thought it was bad that Vista did that? How is it that it is okay in OS X but not in Vista?

I'm sure the OS X implementation will be better. But it will be funny to watch the backpedaling that ensues, because the argument was always that the idea itself was inherently flawed: users don't know exactly what the thing they just downloaded does.

It is sad that a site that bills itself as "news for nerds" is inhabited by people who enjoy being ill-informed when discussing these topics. If there's anyone who should read the articles, it's the people here. Instead, everyone would rather contribute to the overall noise level and spout the same opinion that's been repeated fifty times already.

The difference is not so much in the OS itself but in the OS culture and the legacy applications. A LOT of Windows programs are written with the assumption that the user is running with full or almost-full privileges, because that makes life easier for newbie programmers, and that's how things were designed back in the 9x days.

Most MacOS X programs are designed to run with low privileges and only prompt for privilege escalation when it's really really needed.

I still remember, in the late '90s in the Apple advocacy newsgroup, people saying: "Why do I need memory protection and preemptive multitasking? We don't need that..." Then it was "finally" implemented in OS X and it was a great thing. Then I remember them telling me about the greatness of non-Intel processors and how great it was that Apple never went Intel. Then they DID move to Intel and boy, what a great move this was. :-)

There are tradeoffs to everything. Considering processor capabilities and RAM costs in those days, one could argue that the early 80's would have been too soon to put memory protection and pre-emption into a consumer OS. The Amiga did pre-emption by the mid-80's, but for all practical purposes the Mac MultiFinder worked pretty well. And no one did much protected memory in a consumer OS until the mid 90's (although MacOS had the no-execute bit set for data and the no-modify for code pretty early there.)

I didn't say it wasn't. But NT 3.5 ran almost no games of its day and DEFINITELY wasn't a consumer OS. Real multitasking and protected memory were implemented by UNIX a whole lot earlier than 1992, and I didn't include that one either.

Vista gets bashed because they bombard the user with prompts to the extent that people turn off UAC. Similar prompts on OS X happen infrequently and thus function as a useful warning of possibly dangerous behaviour.

Fanboi IS the correct pseudospelling [whirlpool.net.au].

Using fanboy is bad enough, fanboi should be beyond the pale. It's usually a precursor to irrational rants based on an imagined foe (in this case the 'mac fanboi'). At this point I thought you'd lost all credibility.

They will complain about anything. They want OS X to be released for common hardware, not realizing that Apple tried that (with their older OS) and it nearly killed them. And right now they are doing stellar the way they are going. Basically, they are just jealous that Linux isn't as good as OS X is.

No, Apple did NOT try that. The hardware that was released by PowerComputing, UMax, Motorola, Radius, etc. was not generic hardware. It was Apple-designed motherboards. I think in some cases they were even manufactured by Apple and placed in the other company's cases. Apple had deals with those guys that didn't make any damn sense (for Apple). Very different from trying to support "generic" hardware.

"Mac OS X has the "it just works" reputation because of the limited number of hardware configurations on which it runs."

I've heard this for years but I still haven't seen ANY hardware sample where Windows "just works". I'd put more value on the fact that Apple based the core of their OS on a unix-like system not the registry/spaghetti mess that has been windows for the past decade plus. I'm sure that eliminating poorly written drivers from the mix does help prevent some of the problems that plague windows but it's not the whole story by a long shot.

Besides, with that argument, Linux should be even more unstable because very few of its hardware drivers are written by the device manufacturers - many are reverse engineered.

Mod the parent up to 11. :-)
Besides, with that argument, Linux should be even more unstable because very few of its hardware drivers are written by the device manufacturers - many are reverse engineered.
I couldn't have said it better myself!

I've heard this for years but I still haven't seen ANY hardware sample where Windows "just works".

It really depends on what you mean by "just works". The truth is that Windows does suffer from supporting a larger variety of hardware. Specifically, if you have a Windows XP computer that crashes on a regular basis, there's a very good chance that you either have some sort of malware installed or else have some really crappy drivers. Ignoring malware and crappy drivers, Windows XP is actually a pretty stable OS.

Not only does it work together with the hardware, but the software works a little better with other software too. It's a little less frustrating than using software under Windows: a little more stable, a little more intuitive, simpler, and less maintenance. Hardware doesn't have to play a role here.

One day, no matter how large your backup drive is, it will run out of space. And Time Machine has an action plan. It alerts you that it will start deleting previous backups, oldest first. Before it deletes any backup, Time Machine copies files that might be needed to fully restore your disk for every remaining backup. (Moral of the story: The larger the drive, the farther back in time you can back up.)

Expensive proprietary system? o_O Sure, it's infinitely more expensive than your OSS solution (technically), but a $150 price tag for the entirety of Leopard seems like a reasonably good deal to me. I think this is more of an "it's better than what we've got" feature than a "this is a guaranteed fool-proof backup solution". Of course it will start losing files if you push your disk capacity to its limits - but that's true for ANY backup method. If you ran out of CDs and had no means to get more, you'd start overwriting old ones.

If you look at Apple's description [apple.com] of the time machine functionality, it's not possible for it to work the way they claim.

Could you please explain how you think Apple is claiming Time Machine works, and why you think it's not doing that? I ask because I'm not sure what you find objectionable about the page you linked to. In a simple answer to your question, you can use Time Machine to back up to either an external drive or a server. When space runs out, OSX will warn you, and you'll then be given the option of overwriting your old files. That's what Apple has said about running out of space. I would assume that you'd also have the option of adding additional storage (e.g. getting another external hard drive), and keeping your old backups.

It'll be a very sensible solution for 99% of users. (Yes, that statistic was pulled out of thin air. But it's very sensible.)

However, my OSS solution works much better for me than Apple's expensive, proprietary system would work for me.

Ok, that's great. Nobody is stopping you from using that solution, and Unison has been available on OSX for a while now. In fact, I don't see any reason to think you won't be able to use both Unison and Time Machine. So what's the problem?

Does it default to deleting the oldest files first? If so, then that's probably not what you would have liked in many cases, because you probably care more about preserving the 500 KB manuscript of your novel than about preserving the 70 GB video of your kids' soccer games.

Actually, it deletes entire snapshots when it needs the room, so you'll still have your 500 KB novel as well as the video.
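That snapshot-level pruning policy (drop whole snapshots, oldest first, never individual files) can be sketched like this. The timestamp-named snapshot directories and the free-space threshold are assumptions for illustration, not how Time Machine is actually implemented:

```python
import os
import shutil

def prune_snapshots(dest_root: str, min_free_bytes: int) -> list:
    """Delete whole snapshots, oldest first, until enough space is free.

    Each snapshot directory is removed as a unit, so every *remaining*
    snapshot still restores a complete, consistent view of the disk --
    you never lose just the old copy of one file.
    """
    removed = []
    # Snapshot dirs are named by timestamp, so lexical order == age order.
    snaps = sorted(os.listdir(dest_root))
    # Always keep at least the newest snapshot.
    while len(snaps) > 1 and shutil.disk_usage(dest_root).free < min_free_bytes:
        oldest = snaps.pop(0)
        shutil.rmtree(os.path.join(dest_root, oldest))
        removed.append(oldest)
    return removed
```

The design point is exactly the one the poster makes: pruning at the snapshot granularity keeps the small old novel alive as long as *any* snapshot survives, instead of quietly dropping old individual files.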

Time Machine is very similar to rsnapshot, except that it can use spotlight to determine which files have changed, an