What's the best strategy for dealing with a botnet: stopping it before it gets …

And Mac users beware—a new Trojan variant attacks Mac OS systems via social networking sites. If you see a message on a social network like Facebook that says "Is this you in the video?" clicking the item could deliver your computer to a botnet—a network of hijacked machines deployed to steal content and launch distributed denial of service (DDoS) attacks on other sites.

So how do we stop these nefarious campaigns? Shortly after we ran a piece on Japan's national anti-botnet strategy, we had the chance to hear a set of security presentations on botnets. The most comprehensive of these came from Fabian Rothschild and Peter Greko of the HackMiami nonprofit and Tom Murphy from the Bit9 security group. The two outfits laid out different strategies for fighting botnets—data obfuscation (making it harder for botnets to read computer content) and "white lists" (carefully restricting what kinds of apps can be used on an enterprise or institutional network).

The essential difference between the two approaches is instructive. Bit9 focuses (PDF) on fighting bots before they get onto a system, while HackMiami homes in (PDF) on what to do afterward.

After a computer is compromised

Most botmasters, Greko and Rothschild note, aren't technically sophisticated. While Mr. Big may not even know what the word "protocol" means, he can hire others to do the dirty work and focus on collecting and selling stolen data to various illegal markets. The "carders" buy credit card numbers; extortionists want corporate employee logins and administration accounts; spammers want email logins; and creepiest of all, pornographers buy Facebook logins.

The "nightmare" technique for stopping botnets. Don't try this at home, kids!

Tom Murphy observes that a huge amount of compromise takes place via social networks.

"The profile of advanced threats that we see today, is a lot of it is through social media," he explained. "It's trusted relationships. People get e-mails, and it might be something specific, something from Amazon, or something that establishes some trust. People click on links, and it installs software in a trusted directory, that is sometimes overlooked, and the software resides in the form of something that's legitimate."

The question for Greko and Rothschild is how to protect a computer after it has already been penetrated by a Trojan like Zeus or SpyEye. How do you protect sensitive HTML form data streams or keystroke logs—especially those that include password and financial account information?

These developers divide the "obfuscation" techniques they outline into four categories: basic, medium, hard, and "nightmare." The examples they offer "do not prevent identity theft, just makes it harder for identity theft to happen to your customers."

Basic techniques include the deployment of "extraneous post parameters." Typical bots scour their keystroke logs for names and phrases that are easily recognizable. So deploying phony or incongruous HTML form variable values is one way to make bot log data less usable.
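A minimal sketch of that basic technique might look like the following (the field and decoy names here are invented for illustration; nothing in this snippet is taken from the HackMiami slides):

```python
import secrets

# Hypothetical sketch of "extraneous post parameters": the genuine
# credential travels under a bland name, while decoy fields carry
# exactly the phrases a form-grabbing bot scours its logs for.
DECOY_NAMES = ["password", "cardnumber", "ssn", "account_pin"]

def render_login_form(real_field="session_note"):
    # Decoys are filled with random junk; the server simply ignores
    # these parameters when the form is POSTed back.
    decoys = "\n".join(
        f'<input type="hidden" name="{name}" value="{secrets.token_hex(8)}">'
        for name in DECOY_NAMES
    )
    return (
        f'<form method="post" action="/login">\n'
        f'{decoys}\n'
        f'<input type="text" name="user">\n'
        f'<input type="password" name="{real_field}">\n'
        f'</form>'
    )
```

A bot harvesting this form's submission sees a plausible-looking "password" value that is pure noise, while the real password hides under an unremarkable name.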

Harder techniques include Base64 encoding, which turns each 24-bit group of input (three bytes) into four ASCII characters of output. The method is best appreciated visually. Thus a line from Thoreau's Walden like "Our inventions are wont to be pretty toys" becomes, in Base64:

"T3VyIGludmVudGlvbnMgYXJlIHdvbnQgdG8gYmUgcHJldHR5IHRveXM=."
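In Python, the encoding is a one-liner, and, crucially, so is undoing it:

```python
import base64

plaintext = "Our inventions are wont to be pretty toys"

# Each 3-byte group becomes 4 ASCII characters; a trailing "=" pads
# out the final, incomplete group.
encoded = base64.b64encode(plaintext.encode("ascii")).decode("ascii")
print(encoded)
# T3VyIGludmVudGlvbnMgYXJlIHdvbnQgdG8gYmUgcHJldHR5IHRveXM=

# This is encoding, not encryption: anyone can reverse it.
assert base64.b64decode(encoded).decode("ascii") == plaintext
```

That reversibility is the point of the presenters' tiering: Base64 defeats a bot that greps its logs for plain-text phrases, and nothing more.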

Finally, the "nightmare" methods include RC4 encryption, a stream cipher that programmers can implement in JavaScript (we're not going to outline the technique any further than that, because we don't want to give Ars readers nightmares).

Before the bot appears

Bit9's Murphy, on the other hand, describes himself as an evangelist for "application whitelisting."

"You focus on good applications and what you trust versus trying to chase the infinite list of bad software," he explained.

And the list does seem to be infinite. As Verizon's 2010 data breach report noted, 97 percent of the compromised records that the carrier and the United States Secret Service evaluated appeared to have been taken by customized malware—much of it "repackaged" versions of existing botnet code (revised to evade anti-virus detection).

So Bit9 continuously monitors all components of an enterprise or institution's infrastructure and collects performance information.

"And then what we do is we build out a white list," Murphy continued. "And a white list would be trusted sources of software. So ultimately on step three, we built out this recognition that when I see software in my environment, no longer am I going to look at it as if it meets my blacklist criteria, the only way it can run is if it meets my whitelist criteria."

Bit9 leverages a global software registry of approximately eight billion records to help identify and confirm software integrity. So if a bot like Conficker appears and it doesn't meet the institution's "trust criteria," it doesn't run.

"We define the policy," he concluded. "Software comes in from any direction. We basically confirm that it's something you trust. If it meets the criteria for trust, it's allowed execution. If it's not, again, down to botnets, all the way down. This could even be Skype. It could be Instant Messenger. Anything that's not authorized doesn't run in the environment."

There you have it—two very different approaches to the great war against botnets, which show no sign of withdrawing from the field. Plenty of Ars readers work in corporate IT; what general techniques have you found most effective at mitigating botnet infiltration and damage?

Matthew Lasar
Matt writes for Ars Technica about media/technology history, intellectual property, the FCC, and the Internet in general. He teaches United States history and politics at the University of California at Santa Cruz. Email: matthew.lasar@arstechnica.com // Twitter: @matthewlasar

The question of the title in the article seems obvious. Prevention is far better than cure. I didn't read anything in the article that would indicate that addressing botnets after they've infected machines might be better. So, I think I'm missing something here.

MarkHy wrote:

Here's an idea: Switch to Linux. Use a major distro such as Ubuntu. Use the software that's in the repositories, don't add new repositories from untrusted sources, and you will be fairly safe.

Wow. Your resolution to the problem is to switch, enterprise-wide, to Ubuntu? Really? Yes, I'm sure that no large enterprise has ever done a cost analysis of switching to Linux and found that it was not worth it (or simply stupid).

Here's an idea: Switch to Linux. Use a major distro such as Ubuntu. Use the software that's in the repositories, don't add new repositories from untrusted sources, and you will be fairly safe.

This is not and never will be a solution for the people we need to help. The average user can barely deal with changes to Windows, or problems that crop up with same; and you suggest switching them to an OS so mutable that no two installs are the same? The best way to invalidate anything you are about to say, in my opinion, is to say "Switch to Linux."

The answer has always been prevention, and a healthy dose of information. As for everyone those two cannot help, well, we can't help them. Put them in the locked-down-to-uselessness VLAN and get ready for the tears.

I think the "prevention is better than the cure" people are a fair bit off.

Botnets thrive off of "easy" targets. If you can make the data botnets are chasing after inaccessible, it doesn't matter (from a data breach standpoint) whether you're infected or not. Of course, looking at these two methods, there's nothing that's really stopping anyone from using both (other than a lack of resources).

The prevention method has a lot of problems. It puts faith in a central authority that could itself be infected. It adds another barrier to innovation (by requiring trust certification for new software). It's susceptible to falsified certificates and imitation. It encourages poor personal data security practices by telling users they're just automatically protected.

The obfuscation method has plenty of problems itself, but I think it's going to be somewhat more successful in the long run. If botnets are allowed access but the data they acquire is useless, eventually they'll slow down or stop. It's like robbing a bank, getting away, and realizing the bags are stuffed with Monopoly money. Some dedicated criminals will keep trying a few times, but if all you ever manage to steal is junk data, you're not going to keep sticking your neck out for it too many times.

In a business setting, whitelisting is really the only way to go these days to keep machines clean. Nothing else is proving as effective, and switching everyone to Linux or even OS X just isn't feasible.

I think that the whitelist approach could be extremely effective BUT not in the way it is usually employed.

The typical approach I have seen is deployment within a corporate infrastructure. Then the corporate security overlords maintain a list of allowed software for all PCs. Anything not on this list is not allowed. So what do you do if you want to do something like: (a) test a new piece of software you are considering purchasing, (b) install a piece of open source software that hasn't been vetted, or (c) write a piece of software? Any solution that involves 3 different forms and 6 days of delays (except if it's an executive asking, in which case it's approved immediately no matter what it's for) is NOT a healthy solution.

Better would be some combination of the following:

* Automatic sandboxing to run software that is not on the list. Perhaps such software could run on a virtual machine that gets wiped out except for certain files that are exported or imported explicitly. Yes, infection is still possible, but FAR more difficult.

* A shared vetting process. Suppose you enforce that executables (or source code for things like javascript and Python that are distributed as source) have a SHA-1 hash of their contents compared against a list before executing. For security, we don't require that every user verify every application before putting it on the list, we only require that SOME honest and reliable user verify it. So maintain a list on a public server, with various levels of trust and some sort of reputation system for the individuals performing the vetting.
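A toy version of that shared vetting list might look like the following (the reviewer names and threshold are invented, and SHA-256 stands in for the SHA-1 mentioned above, since SHA-1 is no longer a safe integrity check):

```python
import hashlib

# Public registry: digest -> set of reviewers who vetted that exact binary.
REGISTRY = {}

def vouch(reviewer, blob):
    """A reviewer signs off on one exact build of an executable."""
    digest = hashlib.sha256(blob).hexdigest()
    REGISTRY.setdefault(digest, set()).add(reviewer)

def is_trusted(blob, reputations, threshold=2.0):
    """Trust a binary once the combined reputation of its vetters
    clears the threshold; no single reviewer has to be trusted alone."""
    digest = hashlib.sha256(blob).hexdigest()
    vetters = REGISTRY.get(digest, set())
    return sum(reputations.get(r, 0.0) for r in vetters) >= threshold
```

One changed byte in the binary changes the digest, so a trojaned rebuild starts over with zero accumulated trust.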

Does anyone know how well Bit9's software behaves in a development shop? Anyone writing new desktop apps is going to be constantly creating exes that aren't in the signature database, so some form of exception maintenance is going to be needed.

Here's an idea: Switch to Linux. Use a major distro such as Ubuntu. Use the software that's in the repositories, don't add new repositories from untrusted sources, and you will be fairly safe.

This is not and never will be a solution for the people we need to help. The average user can barely deal with changes to Windows, or problems that crop up with same; and you suggest switching them to an OS so mutable that no two installs are the same? The best way to invalidate anything you are about to say, in my opinion, is to say "Switch to Linux."

The answer has always been prevention, and a healthy dose of information. As for everyone those two cannot help, well, we can't help them. Put them in the locked-down-to-uselessness VLAN and get ready for the tears.

Yes, and the world is still flat, and if you travel too far in one direction you will fall off? Linux is really not that difficult to switch to, at a personal level or a corporate one. As for the cost of training? One of the weakest excuses for not doing things. I switched 250 employees over without an IT department. Wasn't really that hard. By the time I sold the company we had an actual IT department, and they are still using Linux. I am a Mac/Windows guy myself, but I had to comment because I hear that from a lot of techs who are just too lazy to switch to something that might provide a safer environment. Yes, I said lazy. If an amateur like me can figure it out, I am stymied that someone with a technical background can't. Honestly, if your preference is something else, then fine. I just get tired of the regurgitated techspeak.

It's got nothing to do with laziness and everything to do with support and a vastly established user base. But if I'm running an IT shop and a client is considering switching to Linux, depending on the situation, I would probably advise against it. Not out of laziness (I get paid by the hour or by the project), but simply because it would be a stupid idea.

Stop thinking that people that disagree with your position are simply lazy. They may very well know something that you don't.

Quote:

The typical approach I have seen is deployment within a corporate infrastructure. Then the corporate security overlords maintain a list of allowed software for all PCs. Anything not on this list is not allowed. So what do you do if you want to do something like: (a) test a new piece of software you are considering purchasing, (b) install a piece of open source software that hasn't been vetted, or (c) write a piece of software? Any solution that involves 3 different forms and 6 days of delays (except if it's an executive asking, in which case it's approved immediately no matter what it's for) is NOT a healthy solution.

FYI, Bit9 is a nightmare for program developers. That executable you keep recompiling? That dynamic library that seems, from the point of view of the whitelist, to appear and disappear with slightly different versions? All huge no-nos from Bit9's point of view. It's the big reason I've switched from Windows to OS X at work. No kidding.

Switching to Ubuntu or your favorite Linux/BSD distro is not in itself a solution to the problem, because these systems are not inherently more secure than Windows. They are less promising targets because of their minuscule market share, but if there is ever any serious uptake in Linux popularity, you'll start to see botnets with Linux desktops in them.

Now, if you ran Linux with a highly restrictive SELinux policy, then you would definitely have a more secure system, but one that's a real pain in the ass to set up in the first place, and a pain in the ass to support, because it's virtually impossible that you won't disallow some action that some significant portion of users actually needs to perform on their computers.

And with this, you also get to have all the fun and expense of an OS migration. It might make sense once in a while, but it's not exactly a cure-all.

1. One thing we did was make the PCs that run company-sanctioned production software whitelisted and locked-down.

2. Realizing people will get their internet and email access no matter how tight #1 is above, we provide wireless access for people who provide their own devices (iPhones, laptops, etc.), including customers. This way personal use and such are segregated from the work PCs and IT isn't involved in fixing personal devices of any kind. Most of the laptops our employees use for this are Macs.

How the hell does that happen? In ~15 years of browsing all kinds of sites I never got software installed by simply clicking on a link. Well, maybe somewhere back with Windows 98 and IE4 it actually happened, but nowadays (the last 10 years) I always get asked what I want to do with the .exe or whatever gets offered to me.

I know not all people spend their time reading tech sites and can't be aware of everything that's going on, but some common sense in the form of "do not run .exe/.js/... files" doesn't seem impossible to me. Also, disabling autorun is the first thing you should do, and many people still don't do it, but that one's mostly on Microsoft (it should never have made it so easy for files to execute from removable storage).

How the hell does that happen? In ~15 years of browsing all kinds of sites I never got software installed by simply clicking on a link. Well, maybe somewhere back with Windows 98 and IE4 it actually happened, but nowadays (the last 10 years) I always get asked what I want to do with the .exe or whatever gets offered to me.

I know not all people spend their time reading tech sites and can't be aware of everything that's going on, but some common sense in the form of "do not run .exe/.js/... files" doesn't seem impossible to me. Also, disabling autorun is the first thing you should do, and many people still don't do it, but that one's mostly on Microsoft (it should never have made it so easy for files to execute from removable storage).

It's definitely possible - I recently fought off a zero-day FakeAV infection that got onto a system with IE6, just because the user browsed to a site with a malicious ad. Try searching Google for "drive-by downloads".

Whitelisting is fine, as long as the party managing the whitelist can be trusted. In today's "return to the walled garden" approach to the internet, I fear the day when third parties somewhere start managing the list. Already happening with mobile app stores, I guess.

It's got nothing to do with laziness and everything to do with support and a vastly established user base. But if I'm running an IT shop and a client is considering switching to Linux, depending on the situation, I would probably advise against it. Not out of laziness (I get paid by the hour or by the project), but simply because it would be a stupid idea.

Stop thinking that people that disagree with your position are simply lazy. They may very well know something that you don't.

... You didn't actually answer any of his arguments. He was saying that training isn't that difficult and that it was laziness to not plan training into switching over to a more secure system. But training is a necessary and vastly under-recognized part of IT. When migrating to even the newest release of a standard application you need to worry about training personnel on its use (Everyone LOVES the Word ribbon!).

But as for your argument that the issue is with a lack of support, I think you're overstating. First, let me introduce you to http://www.ubuntu.com/support/services where you can find plenty of official technical support, and at lower prices than from Microsoft. Plus, if you have any self-motivated IT personnel they can mine the huge amount of information available online to fix problems in-house. And with your statement about the vastly established user base, you're not actually saying anything useful. You're just restating your thesis, that switching is hard because people don't know Linux. qst330 has postulated that the answer to that is sufficient training, and that such training is neither prohibitive nor technically demanding.

And as for the earlier postulate that no two Linux installs are the same, that's a non-starter. You're using a straw-man fallacy wherein you're stating that, because the OS is highly modifiable and allows end users to change whatever they want, IT personnel can't control or teach a system that they have full access to and control over. The truth is that Linux has even better security because of its heritage from UNIX and the user account settings inherent within. Locking someone down to small, non-critical usage is trivially easy.

Now, if you wanted to argue about OO.o versus Word macros or Active Directory, that would be a different thing. But that's not what this discussion is about.

Switching to Ubuntu or your favorite Linux/BSD distro is not in itself a solution to the problem, because these systems are not inherently more secure than Windows. They are less promising targets because of their minuscule market share, but if there is ever any serious uptake in Linux popularity, you'll start to see botnets with Linux desktops in them.

This is such an old and tired argument that I'm sad it still gets so much play. It has been thoroughly debunked many times, but I guess it still needs to be done sometimes. If this is so, then why does Linux dominate server market share the way Windows dominates the desktop? It isn't because a CLI is just so darn pretty. Linux is inherently more secure than Windows, and there really isn't a way to refute that. Linux sits on (last I heard) about 90% of servers because it is a hardened OS with security and limited-access user accounts built in from the ground up. If a malware writer could have whatever he wanted, he wouldn't go after a botnet. He'd go after big-time corporate information he could use or sell for much higher profit. Data mining and zombies are the consolation prize for not being able to get access to that kind of information very often. Most botnet masters don't have the technical ability to seriously challenge a Linux server, and that's why they don't.

As far as SELinux, I agree that while it is more secure, it is also quite onerous to install and manage. After all, the most secure computer at a company could theoretically be running Windows 2000. But it won't be connected to the network. No networked computer is completely secure; it's more a question of where on the slider of risk versus access you want to fall.

... You didn't actually answer any of his arguments. He was saying that training isn't that difficult and that it was laziness to not plan training into switching over to a more secure system. But training is a necessary and vastly under-recognized part of IT. When migrating to even the newest release of a standard application you need to worry about training personnel on its use (Everyone LOVES the Word ribbon!).

Well, the thing is, he made no honest argument in favor of Linux. As mentioned above, he's essentially stating to only use known-clean software. You don't need to spend thousands of dollars retraining users and switching the server and desktop OS just because someone thinks that white-listed software is only applicable to Linux.

Quote:

But as for your argument that the issue is with a lack of support, I think you're overstating. First, let me introduce you to http://www.ubuntu.com/support/services where you can find much official technical support, and at lower prices than from Microsoft. Plus, if you have any self-motivated IT personnel they can mine the huge amount of available information online to fix problems in-house. And your statement about it being a vastly established user base, you're not actually saying anything useful. You're just restating your thesis, saying that switching is hard because people don't know it. qst330 has postulated that the answer to that is sufficient training and that such is neither prohibitive or technically demanding.

I'm saying that switching is unnecessarily difficult and expensive. There's very little that Linux does or can do that Microsoft-based apps and OSes can't (however, the opposite is true; show me Linux's version of Exchange).

Quote:

And as for the earlier postulate that no two Linux installs are the same, that's a non-starter. You're using a straw-man fallacy wherein you're stating that, because the OS is highly modifiable and allows end users to change whatever they want, IT personnel can't control or teach a system that they have full access to and control over. The truth is that Linux has even better security because of its heritage from UNIX and the user account settings inherent within. Locking someone down to small, non-critical usage is trivially easy.

Are you still addressing me? If not, please indicate so by posting the quote that you are referring to.

Quote:

This is such an old and tired argument that I'm sad it still gets so much play. It has been thoroughly debunked many times, but I guess it still needs to be done sometimes. If this is so, then why does Linux dominate server market share the way Windows dominates the desktop? It isn't because a CLI is just so darn pretty. Linux is inherently more secure than Windows, and there really isn't a way to refute that. Linux sits on (last I heard) about 90% of servers because it is a hardened OS with security and limited-access user accounts built in from the ground up. If a malware writer could have whatever he wanted, he wouldn't go after a botnet. He'd go after big-time corporate information he could use or sell for much higher profit. Data mining and zombies are the consolation prize for not being able to get access to that kind of information very often. Most botnet masters don't have the technical ability to seriously challenge a Linux server, and that's why they don't.

You're missing something integral here. Sure, they could go for servers for various corporate espionage/blackmail, et al. Or they could be going for end users' bank accounts, credit cards, and personal data. Also, you appear to be overestimating the motivations and capabilities of botnet creators. It would be far easier to write and execute Bad Things on end users' machines. They could spend plenty of money trying to figure out a way around server security, or they could spend far less by writing an app that just hits the billions of desktops in the world.

Also, keep in mind that large corporations have a lot of pull, as well as the resources to prevent and respond to these kinds of threats. The average home user does not, making them far easier targets. What are criminals looking for? Easy money. If they were willing to go for the hard money, they could just get jobs like us.

You don't need Linux to prevent security breaches, you just need conscientious and knowledgeable IT staff.

Here's a hint: the people posting in this thread are, on average, probably about as smart as you. That means that, like you, they've probably used Linux before and have probably done at least a cursory investigation into what the cost/benefit analysis of a mass switch of operating systems would be. In any environment of decent size (5K+ users), you're probably looking at a 2+ year migration with a couple hundred thousand dollars spent in training and support, for a forecasted net gain of zero productivity.

Security issues aren't so prevalent that you need to switch all your users to Linux or millions of corporations worldwide WOULD ALREADY BE SWITCHING TO LINUX.

MarkHy, yes, those are all very nice, but I'm not seeing anything that has the web functionality of OWA, or the ability to assign permissions on one mailbox to multiple secretaries/admins and allow them to get meeting updates and requests for their bosses.

Don't get me wrong, Linux-based messaging services have come a long way, but they're not Exchange nor do any of them offer the full functionality of such. Don't even get me started on how many modules you have to slap onto the base install to get additional functionality.

I'm not a Windows fanboi and I don't have anything against Linux. But let's not pretend that open source can replace all heavily-used proprietary equivalents.

I work at Denmark's largest health organisation (~10 hospitals, 50k+ employees), and to the best of my knowledge (I'm not in IT) it has only had one serious security breach: a zero-day virus from inside the network affecting one hospital. Desktops have a choice of personal login, with limited access to local storage plus access to network drives and USB drives, or a common login with no storage, USB, or network drives. Laptops are always locked down. All machines run Windows XP with some administration system that distributes software etc. (ZEN-something?). Short of bugs, there is no way to gain admin privileges, and all software must be approved by IT before installing (which they will handle). Guest internet is through an isolated wifi with SSN registration prior to access.

Of course there are exceptions, and getting new software is a lengthy process, but for the vast majority of users, Office, a browser, and a selection of standard clinical software is fine. I don't see why most corporate environments, outside specialized departments, would need more.

Well, the thing is, he made no honest argument in favor of Linux. As mentioned above, he's essentially stating to only use known-clean software. You don't need to spend thousands of dollars retraining users and switching the server and desktop OS just because someone thinks that white-listed software is only applicable to Linux.

But he is in fact arguing for Linux. He said he migrated an office of 250 employees with minimal effort on his part, and how an IT department still manages it that way. And he said that training was minimal also, with little investment on his part. And we can conclude that he was making valid business decisions if he was able to sell the company off to someone else and it's still running.

Quote:

I'm saying that switching is unnecessarily difficult and expensive. There's very little that Linux does or can do that Microsoft-based apps and OSes can't (however, the opposite is true; show me Linux's version of Exchange).

Handled by MarkHy above, with more brevity than I seem to be able to muster.

Quote:

Are you still addressing me? If not, please indicate so by posting the quote that you are referring to.

My apologies, I confused you for the post by Cabal and quoted by qst330. Just in the mood to refute arguments I guess.

Quote:

You're missing something integral here. Sure, they could go for servers for various corporate espionage/blackmail, et al. Or they could be going for end users' bank accounts, credit cards, and personal data. Also, you appear to be overestimating the motivations and capabilities of botnet creators. It would be far easier to write and execute Bad Things on end users' machines. They could spend plenty of money trying to figure out a way around server security, or they could spend far less by writing an app that just hits the billions of desktops in the world.

Also, keep in mind that large corporations have a lot of pull, as well as the resources to prevent and respond to these kinds of threats. The average home user does not, making them far easier targets. What are criminals looking for? Easy money. If they were willing to go for the hard money, they could just get jobs like us.

You don't need Linux to prevent security breaches, you just need conscientious and knowledgeable IT staff.

But therein lies the rub, you're using my point to try to prove yours. My point in this different discussion is that Linux is inherently more secure, and thus less of a good target, than Windows. And so an IT director could reduce his security risk by using a more secure product with a long and celebrated history of being difficult to crack.

What does Occam's razor tell us about attack vectors? If we presuppose, for the sake of argument, that Linux and Windows are equally secure, then let's set up a test theory. To support the world we see today, it would have to be better for a malware writer to write ever-evolving code that aims at a fragmented and rapidly shifting base with multiple high-profile anti-malware repositories, to glean (hopefully) many small-profit intrusions and then band them together into a massive, unwieldy, and obvious botnet, than it would be to go after one centrally located, centrally administered, and lucrative information and access source. That can't be supported.

As far as needing Linux, I wouldn't ever say that. I know there are many knowledgeable and motivated businesses running a secure system using Microsoft products. But here I was arguing for the validity of Linux in a security sense. It is more secure, and using the market-share myth to say different is uninformed at best and juvenile at worst.

MarkHy, yes, those are all very nice, but I'm not seeing anything that has the web functionality of OWA, or the ability to assign permissions on one mailbox to multiple secretaries/admins and allow them to get meeting updates and requests for their bosses.

Don't get me wrong, Linux-based messaging services have come a long way, but they're not Exchange nor do any of them offer the full functionality of such. Don't even get me started on how many modules you have to slap onto the base install to get additional functionality.

I'm not a Windows fanboi and I don't have anything against Linux. But let's not pretend that open source can replace all heavily-used proprietary equivalents.

Well, as has been mentioned, if you're really all hot-and-bothered for webmail you can't do any better than Google. Built-in messaging and calendar abilities, 25GB of storage per user, Outlook interoperability, and 99.9% uptime. Add to that Google Apps for collaboration and communication and you've got a winner in my book. And I'm not sure, but I think they let you back up any data you feel you need to; they have a division called the Data Liberation Front.

Yeah, if the cloud is your thing. Many corporations rightly distrust security that they can't control. Google has a great track record, but that's not going to be a compelling argument when your data has been lost or stolen.

Understandable, but if you're running an insecure system, then your data is more likely to be lost or stolen anyway, with no one to blame but yourself. And most IT deployments get less than three nines of uptime thanks to buggy software; Google would be an upgrade for most shops.

Beyond all nitpicky arguments, when Microsoft can figure out that servers need to be CLI, then they can start to be respected from a security, hardware resource, and uptime standpoint. (Separate issue, I know, but related. And it's a big pet peeve of mine.)

But he is in fact arguing for Linux. He said he migrated an office of 250 employees with minimal effort on his part, and that an IT department still manages it that way. He also said that training was minimal, with little investment on his part. And we can conclude that he was making valid business decisions if he was able to sell the company off to someone else and it's still running.

Again, why would a company want to switch its desktop and server OSes when its current system can do the same thing? And what counts as "little investment"? How about quantifying some of that? Not to knock his business, but we can conclude no such thing. Many companies buy other, smaller companies for the tech they've developed, not because the target was a viable business model to begin with or was run properly.

At any rate, great for those 250 users; what would the expense be for 5,000 users or 100,000 users?

Here's an idea: Switch to Linux. Use a major distro such as Ubuntu. Use the software that's in the repositories, don't add new repositories from untrusted sources, and you will be fairly safe.
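Setting aside the OS war for a moment, the "only run approved software" idea behind white lists (and behind sticking to trusted repositories) is simple enough to sketch. Here's a minimal, hypothetical example in Python; the digest set, helper names, and hash values are made up for illustration, not any vendor's actual implementation:

```python
import hashlib

# Hypothetical whitelist: SHA-256 digests of binaries approved to run on
# this host. (This digest happens to be sha256 of the bytes b"hello".)
APPROVED_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def sha256_of(path):
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_approved(path):
    """True only if the file's digest appears on the whitelist."""
    return sha256_of(path) in APPROVED_HASHES
```

The point of the sketch is that a whitelist denies by default: a trojan dropped into a "trusted directory" still fails the hash check, whereas a blacklist has to recognize the malware first.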

. . .doesn't address the fact that Windows can do the same thing and routinely does on the networks I control.

You're assuming an awful lot of information with very little to extrapolate from.

Quote:

Handled by MarkHy above, with more brevity than I seem to be able to muster.

No, he doesn't. I may very well be missing something. Feel free to post the relevant section(s) of his posts.

Quote:

My apologies, I confused your post with the one by Cabal that qst330 quoted. Just in the mood to refute arguments, I guess.

It would appear that we have something in common today.

Quote:

But therein lies the rub: you're using my point to try to prove yours. My point in this separate discussion is that Linux is inherently more secure, and thus less attractive as a target, than Windows. And so an IT director could reduce his security risk by using a more secure product with a long and celebrated history of being difficult to crack.

So, how does that argument hold up when Linux supporters succeed and have, say, 33% of personal/user desktops? As an open source advocate, I don't imagine that you're a proponent of security through obscurity. I fail to see how Linux is inherently more secure than Windows. The vast installed corporate server base is easily explained by financial reasons.

Quote:

As far as needing Linux, I wouldn't ever say that. I know there are many knowledgeable and motivated businesses running secure systems on Microsoft products. But here I was arguing for the validity of Linux in a security sense. It is more secure, and using the market-share myth to say otherwise is uninformed at best and juvenile at worst.

Good enough. However, I would think a more juvenile argument would be the one in which a corporation switches to Linux with, as SituationSoap mentioned, zero net gain in productivity and, for the purposes of our discussion, no gain in security over Windows.

Please keep in mind that the original argument was MarkHy's original post implying that white-listing approved software was Linux's domain. And there is zero upside to the expense and retraining necessary to do so.

If I'm missing an upside, go ahead and fill me in.

Quote:

Beyond all nitpicky arguments, when Microsoft can figure out that servers need to be CLI, then they can start to be respected from a security, hardware resource, and uptime standpoint. (Separate issue, I know, but related. And it's a big pet peeve of mine.)

You mean the Server Core editions of Windows Server? And all Microsoft servers utilize a CLI; no doubt you've heard of PowerShell. However, I assume you mean CLI-only servers.

I have to agree with Digitali on this one. I've been a Windows system admin, and I am currently a Unix/Linux/network admin. As much as I like several distros of Linux, I have not seen anything to make me believe that it is "inherently more secure" other than the fact that Windows is targeted more often. I think we all know why...

Since Mac OS X is based on UNIX, do you say it's also inherently more secure? Forgive my lack of a reference, but I'm pretty sure the Mac has been hacked more quickly than Windows several times at Pwn2Own.

Furthermore, many organizations use a wide range of applications, many of which are custom developed. The time and effort alone spent porting such applications to Linux would make an organization-wide migration infeasible for most large corporations as well as many smaller ones.

All that being said, the current argument is centered on mitigating bot infestation. Why do you suppose Windows is targeted by bots more often than Linux? Think of all the Linux machines used in corporate environments and what they are used for. Not many end users surf the web on Linux desktops.

On a side note, if I were conducting an interview for an IA job opening and the candidate responded with "Switch to Linux. Use a major distro such as Ubuntu. Use the software that's in the repositories, don't add new repositories from untrusted sources, and you will be fairly safe. "