
ZDOne writes "ZDNet UK is reporting that it will not be known until the Android software development kit comes out on Monday whether the Gphone will be strictly Java-based, but security experts claim that the less smart a phone is, the less vulnerable it is. Android developers should stick to a semi-smartphone platform because the Java sandbox can protect against the normal kinds of attacks, experts claim. The article also discusses some of the pros and cons of open vs. closed source security. 'The debate about the relative security merits of open-source as opposed to proprietary software development has been a very long-running one. Open-source software development has the advantage of many pairs of eyes scrutinizing the code, meaning irregularities can be spotted and ironed out, while updates to plug vulnerabilities can be written and pushed out very quickly. However, one of the disadvantages of open-source development is that anyone can scrutinize the source code to find vulnerabilities and write exploits. The source code in proprietary software, on the other hand, can't be directly viewed, meaning vulnerabilities need to be found through reverse engineering.'"

Experts suggest security-conscious consumers consider the Western Electric 500 [wikipedia.org] for their next smartphone. Lacking Java, JavaScript, ActiveX, and any other type of software, its spartan phone interface makes it virtually immune to any security vulnerabilities, and its innovative "rotary dial" system circumvents attacks possible on touch-tone phones. The casing is constructed of nearly indestructible Bakelite plastic, making it far more durable than the average smartphone. It does however require a service agreement with AT&T.

I know it's meant to be funny, but strangely it's one of the reasons I haven't ditched my land-line to go all wireless. Mobile phones, especially those that try to do everything, aren't particularly good at anything and the more things you cram onto them, the greater their vulnerability profile. My wife just traded her old broken-down phone for a T-Mobile Shadow, and it's not the world's greatest phone (it runs Windows Mobile, but that isn't the root of the problem). The sound quality is horrendous and I haven't tried the MP3 player in it, but I'm not holding out hope.

I don't think we're at the point where phones can handle multiple tasks well, and using one is leaving yourself open to all sorts of mischief.

In March 2006 we got hit by two tornadoes [wikipedia.org] in one night. They went right through my neighborhood; the big tree behind my apartment looked like Godzilla had stomped on it. Half the utility poles were gone (as were a lot of buildings). My power was out for a week, my cable and internet were out for a month, and the landlines were all out as well.

My cell phone worked, however. It also was a very handy flashlight, as there was no power AT ALL anywhere near my apartment and boy, was it dark there at night! It's been years since I've had a landline.

The sound quality of my AT&T Tilt (same manufacturer as the Shadow - HTC) is just fine. I'd say it was great, in fact. What is the signal strength when you get this "awful sound quality"? T-Mobile has the smallest network (read: least coverage) of the four U.S. carriers. That's why they're so dirt cheap - you get what you pay for.

This article is just a pile of FUD. I laugh at the morons who buy antivirus software for Windows Mobile phones, when there is little to no risk of contracting a virus unless

The rotary dial was a pain in the ass, but we never knew that until they invented pushbutton phones. And you had to look up your police/fire/ambulance in the phone book, as there was no 9-1-1 service. Although most people just dialed "O", and when the lady answered (a real live human being; we didn't have voice mail either) you said "MY HOUSE IS ON FIRE" and she'd plug some plug into her switchboard and the fire department would come out.

But the Western Electric 500s were hackable! Some of them had no dials; businesses used the dial-less phones for where they wanted a low level employee, like the teenaged me at the ticket booth at the drive in theater, to be able to answer them but not make outgoing calls.

You could, however, "dial" them by repeatedly hitting the hangup buttons. So I was hacking your "unhackable" phone when I was 16. Actually I was cracking not hacking; I was hacking when I made guitar fuzzboxes out of $10 transistor radios and selling them for $50 each to other teenaged guitar players.
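The hook-switch trick works because pulse ("rotary") dialing encodes each digit as a count of loop interruptions: digit N is N clicks, and 0 is ten clicks. Anything that can open and close the line at roughly ten pulses per second can therefore "dial", dial or no dial. A rough model of the encoding (real timing details omitted):

```python
# Pulse-dialing sketch: each digit is sent as a train of hook
# interruptions ("clicks"). Tapping the hang-up switch at the right
# rhythm produces the same pulse trains, which is why a dial-less
# phone could still place outgoing calls.

def pulses_for_digit(d):
    """Number of loop interruptions that encode one dialed digit."""
    if not (len(d) == 1 and d.isdigit()):
        raise ValueError("only single digits 0-9 can be pulse-dialed")
    return 10 if d == "0" else int(d)

def pulse_train(number):
    """Pulse counts for a whole number, ignoring separators like '-'."""
    return [pulses_for_digit(c) for c in number if c.isdigit()]
```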

-mcgrew

PS- I've almost forgotten this, but in the Metro East St Louis area you could dial Bridge 1300 and a spooky noise came out of the phone. The other kids said it was a ghost; I never had the heart to educate them about the reality.

The word "Bridge" would have been a mnemonic for the first two digits of the number, BR (27), so the full number would be 271300. Apparently AT&T figured it was easier for people to remember a word and a few digits, rather than remember lots of digits. That's why there are letters next to each number. If your phone number was 654-3210, they'd list it as "Olive 43210".

I meant the first three digits might be used for the mnemonic word in some regions, with the remaining four left as digits, while in other regions, they might have only used the first two digits for the mnemonic word and the remaining five left as digits.

The first three digits being the exchange, and the last four being the local part of the number, is definitely standard in the US now, but I don't think it always has been. There was a time when people didn't even have a number. Read this [pipeline.com] and this [norrisc.com].
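The letter-to-digit scheme described above can be sketched quickly (assuming the standard keypad lettering, ABC=2 through WXYZ=9, with only the first few letters of the exchange name being significant):

```python
# Exchange-name mnemonics: "BRidge 1300" encodes 27-1300 because B=2
# and R=7 on the keypad. Only the leading letters of the mnemonic word
# are significant; the rest of the number stays as plain digits.

KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def mnemonic_to_number(word, digits, significant=2):
    """Convert e.g. ('BRidge', '1300') -> '271300' using the first
    `significant` letters of the exchange name."""
    prefix = "".join(LETTER_TO_DIGIT[c.upper()] for c in word[:significant])
    return prefix + digits
```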

Rotary phones were a PITA we never thought about? Then what the hell does that make PHONE NUMBERS?

Can you imagine anybody creating a communication system today where subscribers are addressed by a seven digit number plus a three digit prefix?

We use GotoAssist at work -- highly recommended by the way if you support Windows clients. GotoAssist issues seven digit tickets, which works great; people are so accustomed to seven digit phone numbers that they are ridiculously adept at keeping them in short term memory.

"Make it smart enough to be useful, but not so smart that it starts becoming a liability". That's what they're saying. Actually it's a very fine line to tread, and one that requires very good programming skills to actually accomplish.

Actually... I think it should be: the smarter the user thinks they are, the less secure the phone is. Reminds me of my PC Tech Support days long ago... "My neighbor came over, and he knows a lot about computers, so he started fixing my computer, now it won't start..."

social scientists have long inferred that dumber people are less likely to fall for hustles/social engineering/hacking/etc., because they lack the imagination to consider alternate realities. i've been consulting for a new york firm for about 9 months now. i do a lot of traveling, but i'm in the new york home base office at least 4 times a week. i often misplace my card-key - and the receptionist refuses to buzz me in, EVERY TIME. She's always like, "I'm sorry, I don't know who you are." her policy is to never buzz anyone in.

First: She's always like, "I'm sorry, I don't know who you are." Her policy is to never buzz anyone in. She angered the chairman once over it, who was talked out of firing her precisely because he's in the office like 3 times a year. She won't buzz people in and she's unrepentantly steadfast about it. She's dumb as dirt.

She's not dumb, she's smart.

Second: Simple systems are more likely to be secure than more complex systems in general as they are less prone to component failure.

The Java sandbox is an extremely complex system, with trusted and untrusted code running in the same address space calling the same libraries, with the security managed by code that's also using the same libraries and running in the same address space. I am honestly amazed that it's worked as well as it has.

The multiuser protection in UNIX is an extremely simple system, with untrusted code running in separate address spaces and, traditionally, with the ability to run security applications using no shared libraries at all. It's also proven extremely effective, and it has the advantage that even if flawed code is run those flaws do not automatically provide an escape route from the whole sandbox the way flaws in libraries called from Java do.
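The separate-address-space point can be illustrated with a minimal sketch (Python purely as an illustration; a real UNIX sandbox would also drop privileges, limit resources, and restrict the filesystem):

```python
# Run untrusted code in a child process, so even a hard crash there
# cannot scribble over the parent's memory. A sketch only -- this is
# isolation by address space, not a complete sandbox.
import subprocess
import sys

def run_untrusted(code):
    """Execute `code` in a fresh interpreter process; return its exit status."""
    result = subprocess.run([sys.executable, "-c", code], timeout=10)
    return result.returncode

# A child that dies does not take the parent with it:
status = run_untrusted("raise RuntimeError('untrusted code misbehaved')")
```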

This is not to say that the Java sandbox isn't a useful tool, but rather to say that when analyzing the security of the system as a whole the fact that an application is written in Java should not be given the kind of importance that it seems to be getting here.

Wow, you really think that Unix's sandbox is better than Java's? Unix's, which requires "setuid root" executables everywhere, each ready to have a buffer overflow found and hacked? Or how a dodgy piece of HTML can do a buffer overflow in Firefox and nuke your home directory? As far as I am aware, there have been two hacks of Sun's Java security manager, both fixed quickly. Apart from that, Java applets have been living safely in people's browsers without incident. Java has convinced me that virtual machines are

My point, which you've completely missed, is not that I often run things as root, but that many programs (for example Apache) have to run as root. Therefore any buffer overflow in such a program gets root access. However, Java also offers one major advantage for me. Other than having multiple user accounts, which is a pain, I don't know of any good way of stopping a program getting access to files in my home directory. As I'm the only person on my computer, to be honest, having a program 'go mad' as me is jus

Based on the evidence you've supplied, she's not dumb, just principled. It's entirely possible that this organization has a security policy which requires staff to act this way. That would explain why the chairman found that he couldn't just tell her to do it differently.

With that in mind, consider the possibility that you often misplace your security card as your failing. Instead of blaming someone else because they won't fix your life for you, take a little responsibility.

People who complain about and call others stupid for not bending security policies to accommodate their own sloppiness and convenience have demonstrated a level of maturity consistent with the condescension heaped upon them.

However, one of the disadvantages of open-source development is that anyone can scrutinize the source code to find vulnerabilities and write exploits. The source code in proprietary software, on the other hand, can't be directly viewed, meaning vulnerabilities need to be found through reverse engineering.'"

If I remember right, that closed source thing... hmmm it seems to be working out really well for Microsoft.

If I remember right, that closed source thing... hmmm it seems to be working out really well for Microsoft.

Yep, they're practically eradicated by now. Along with every other closed source company. No? Take the big three - price, functionality and quality - pick any two; then either they can't be far behind in security or their products are a lot better, since they sure don't win on price. And you can't accuse all of them of having the desktop monopoly of our favorite hate object...

I think researchers and experts, when they talk about how exploits are found, fundamentally mistake the issues. No-one reads source to find exploits: that's the hard way to go about it. Closed source has only disadvantages in this regard, especially with fewer hands to fix things. The "many eyes" argument fails as well, though, simply because many eyes do not make for better security. Many hands, on the other... um... hand, make for better response time. Open source code tends to be more agile because it's o

Yeah, look how well this closed-source secure environment played out for Apple's latest gadget. Or the Xbox, PlayStation, and Nintendo consoles. It was supposed to be impossible to install and run unauthorised software.

Closed source happens to be even more hackable in that situation, because here we have a situation where the various pieces of software have to communicate. They have to speak a common language. And the standard used to communicate between devices HAS to be well documented. From the /. entry:

meaning vulnerabilities need to be found through reverse engineering

False. You don't need to actually reverse engineer it. Just get the documentation for the standard being used. Then try every possible corner situation: data

Look up 'fuzzing' in the context of security testing, if you didn't already know the word. It's the shorthand term for testing all those edge cases to see where the code breaks. It's been particularly useful on web browsers, which are so large and complex that complete code audits are painful. There are automated tools in distribution for fuzzing different types of software now, so this type of testing is getting much easier to perform.
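The idea behind fuzzing can be sketched in a few lines: generate semi-random inputs, feed them to the target, and record the ones that make it blow up. The buggy parser below, and all names in it, are invented for illustration:

```python
# Minimal random ("dumb") fuzzer sketch. The target is a deliberately
# buggy toy parser, not any real library.
import random

def toy_parser(data):
    """Toy length-prefixed parser with deliberate edge-case bugs."""
    length = data[0]                                  # IndexError on empty input
    return len(data[1:1 + length]) // (length - 4)    # ZeroDivisionError when length == 4

def fuzz(target, runs=2000, seed=1234):
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            # A crash: record the input and what kind of failure it triggered.
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(toy_parser)
```

Real fuzzers (coverage-guided ones especially) are far smarter about input generation, but the black-box loop is the same: no source code required, only inputs and observed failures.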

"Fuzzing" : Yeah, and you got "Gremlins" for testing PDA application, various memory debuggers (Valgrind, DUMA, dmalloc).I know tools exist.And as I said, where are you more likely to find people mucking around with such tools ?- In a proprietary settings where you have a very small team that is short on time to have at least some running code before the deadline ?- Or on some open source project where everyone is free to play around with the code (because of the definition of the GPL) and where you almost

Well, one of the benefits of fuzzing is that you don't need the source or even a version of the binary with debugging info intact. A customer or third party can fuzz-test a binary distribution just fine. It's how a good deal of the third-party security reports about Internet Explorer and other closed-source applications come about. That's not to say it helps someone fix problems, but finding them in the first place doesn't require source.

Yes, security through obfuscation always works... you'd think people would have learned by now that that simply isn't true. Maybe the obfuscation slows down the attacks, but the real issue is how fast the fix can be had. No matter whether the software is open or closed source, there will be bugs, and therefore potential attacks on it. At least with open-source software anyone can potentially fix the problem, instead of waiting for a company to take potentially very long times to patch it (which is fairly frequent, a

At first I thought this was a repeat of the previous robot article. I guess I really should brush up on the difference between androids and robots.

Anyway, more complex is effectively as safe as less complex, as long as the default options do not immediately provide vulnerabilities. The more complex a device is, the fewer features ID10T users will be able to misconfigure, as it will be too complex for them to move much past the basics such as voice/text messaging.

This is the old telecom industry chant. "Let's put the smarts in the network, they say, where they're out of touch and nobody can even get in to attack them, and have dumb devices out on the edge. Blue boxes are just a rumor."

By all means it should be possible to make dumb phones with Java sandboxes around third party software using Android. Yes, every layer of security is good. But it's not perfect... if you put everything you want to protect inside the sandbox, who cares whether someone breaks out of it or not?

Don't forget, the OS they're basing it on was designed for timesharing use, where it was common for people with very different security requirements to run code together on the same computer. Linux is a relatively young implementation of UNIX, but it's still using the same design that was able to keep some of the world's smartest CS undergrads from getting at the test papers and scores stored on the very same computers as their class accounts in the early '80s.

And some of the biggest vulnerabilities available to attackers on any platform are in application layers, in code doing what it was designed to do, with no individual component violating any constraint that a sandbox would prevent. The biggest problems are not implementation flaws, they're design flaws.

That's why, despite years of warnings from antivirus company experts, we don't have a flood of smartphone viruses... because PalmOS and Pocket PC and the rest don't have multiple internal firewalls like UNIX or Windows NT, but they're also not designed around a model of accepting code from untrusted sources and running it, like Windows is.

Get the application design right, and you're solid. Get it wrong, and you lose... no matter whether the kernel is inviolate or not.

Yes, I totally agree! But try to make any corporate management understand that - no way (yet?). The OS cannot protect you when the application does stupid things. And for me, if you build a stack, it is an application; if you build a driver, it is an application; if you build the authorization server... you get the picture. Unfortunately security is not (yet?) very high on the list, even lower than performance in most cases I have seen. As you say, it is the design! There may be code problems but if the design is good

This is the old telecom industry chant. "Let's put the smarts in the network, they say, where they're out of touch and nobody can even get in to attack them, and have dumb devices out on the edge. Blue boxes are just a rumor."

The desire of the telecom industry to "put the smarts in the network" has nothing to do with security and everything to do with economics. If the "smarts" are in my network, then you have to use my network to use those "smarts". If the "smarts" are in the phone then you can use those "

There is an overwhelming consensus amongst real security professionals that security is achieved through openness, not obscurity and closed source. Just look at the systems that hyper secure organizations like the NSA advocate. Those who continue to rail against open source systems as being insecure because "hackers can look at the source" (yeah but they can't look at my key) seem as out of touch as creationists.

Ah, the new buzzword of the day, "consensus." There is hardly consensus on the superiority of openness in a security model. The scrutiny of many eyes argument is valid, but is arguably countered by a "probing of many eyes" for exploits argument.

And, there are good arguments for security through obscurity -- a concept all too quickly shot down here at Slashdot. For example, leaving a house key inside a fake rock in your garden is arguably more secure than leaving the key under your welcome mat. Another example, in which I have personally experienced the benefits of security through obscurity, is network ports. I used to have an SSH server running on the standard port, 22. Every day, my logs showed numerous login attempts by unknown individuals trying to gain access to my system. Once I moved the server to a different, more _obscure_ port, though, my logs rarely show any connection attempts. Now, is this new port more secure? No. But, because it's further hidden, it does afford _more_ security.

And, as for your final, fanny-pat statement to the "consensus" of the "scientific" world: I'm a creationist, and I'm not out of touch. For me, the incalculably small probability of spontaneous generation of a lifeform able to be nourished by its environment and then able to reproduce is not a large-enough foundation on which to build a scientific consensus.

What you describe is more security through difference than security through obfuscation. The problem with the closed source model is that, inevitably, all of the targets are the same as what the attacker has, so the attacker can study his copy, find vulnerabilities, and then exploit them elsewhere. Being different from the standard will protect from this; obfuscating the attacker's copy will only slow him down slightly.

To be clear, you're talking about abiogenesis, not evolution. Evolution merely describes the natural processes that are known to occur in living organisms here on Earth and doesn't make any claims about how that life got here in the first place. There's not much direct evidence in support of abiogenesis. It's more of a logical argument that life had to come from somewhere, at some point. Even if you accept that God created the Earth and all the life on it, God himself is a living being, so the creation of Earth

You're right -- I was talking about abiogenesis. I never mentioned evolution. But abiogenesis IS a prerequisite to rejecting creationism, and therein lies my point. As for your last sentence, if you include the supernatural in your definition of "living being", then you are once again correct. If, however, you assert that creationists must believe the Creator to be a mortal creation Himself, then you're stuck back at the problem of God's spontaneous generation. In that case, nothing is gained and, as you st

But, abiogenesis IS a prerequisite to rejecting creationism, and therein lies my point.

No, it's not. Just like you don't have to accept a particular cause for the Big Bang to accept that the Big Bang happened and study the development of the universe, you don't have to accept a particular cause for abiogenesis to accept that abiogenesis happened and study evolution.

As the grandparent post said, its fully possible to believe that evolution occurred more or less undisturbed after God provided the initial s

But, abiogenesis IS a prerequisite to rejecting creationism, and therein lies my point.

No it's not. There are other theories. Maybe I don't accept creationism or abiogenesis. I might even say I have no idea how life started without accepting any explanation.

If, however, you assert that creationists must believe the Creator to be a mortal creation Himself

Just to nitpick, it's not necessary for God to be mortal. As long as you consider God to be a form of "life", by whatever definition of that term you choose, then creationism is not a satisfactory explanation for the origin of life. If you stipulate that God is not a form of life but is eternal without beginning or end, then God is in so

"...Again, "non-creationism" != abiogenesis. Regardless, the improbability of abiogenesis doesn't mean it's not true. "When you have eliminated the impossible..." and all that jazz.

Well, ok. Tell me ONE theory for the origin of life that does not either require a supernatural creator, or spontaneous generation from "primordial soup." I'm not aware of any, and intuitively cannot even conceive of another possible explanation. God and abiogenesis are, exhaustively, the two possible theories.

There's an interesting article on Wikipedia called Jainism and non-creationism [wikipedia.org] that you may be interested in. And again "none of the above" is always an option.

Also, if you take a moment to consider the demonstrably infinitesimal probability of abiogenesis, I argue that it IS impossible. It is proven improbable, and has yet to even be proven possible. I submit that it is, in fact, impossible.

First off, until something is proven impossible, it is necessarily possible, just by what it means to be "possible". Secondly, it's quite likely that there is some piece to the puzzle still missing from our understanding of how life formed. The current theories may be wrong, but the logical argument for abiogenesis (that life had to come from somewh

You are still visible, a port scan will show it - it's not *obscure*. If you want *obscure* you should consider port knocking (http://www.portknocking.org/) or such other methods.

I like the portknocking idea. But, you are wrong -- it is obscure. In this case, either exhaustive, manual search or a tool (in this case a port scanner) is required to find the port. By definition, because it is more difficult to find, it is obscure. And, my server logs reflect the effect.

"A port scan is very easy to do and if the port is open it will show. With port knocking, it will be much more difficult to find out because all the ports are closed..."

I whole-heartedly agree that port-knocking would afford an even higher level of security than simply moving the port. In fact, both methods are security through obscurity -- port knocking is just even more so. My original point was that security through obscurity is effectual. It looks like we both agree on that.
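For illustration, the sequence-tracking side of port knocking can be sketched like this (the ports and class name are invented; a real implementation would watch firewall logs or raw packets and then insert an allow rule for the knocking address):

```python
# Port-knocking sketch: the "door" stays closed until a source hits a
# secret sequence of ports within a short time window. This models only
# the sequence-tracking logic, not the packet capture or firewall side.
import time

class KnockGuard:
    def __init__(self, sequence, window=10.0):
        self.sequence = list(sequence)   # the secret knock, e.g. [7000, 8000, 9000]
        self.window = window             # seconds allowed for the full sequence
        self.progress = {}               # source address -> (next index, first knock time)

    def knock(self, source, port, now=None):
        """Record one knock; return True when `source` completes the sequence."""
        now = time.monotonic() if now is None else now
        idx, started = self.progress.get(source, (0, now))
        if now - started > self.window or port != self.sequence[idx]:
            # Wrong port or too slow: reset (this knock may start a new sequence).
            idx, started = (1, now) if port == self.sequence[0] else (0, now)
            self.progress[source] = (idx, started)
            return False
        idx += 1
        if idx == len(self.sequence):
            del self.progress[source]
            return True                  # open the port for this source
        self.progress[source] = (idx, started)
        return False
```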

This is the second article about Google Android today already and we never even discussed the original announcement, just what Ballmer and now ZDNet have to say. But I suppose there will be a long line of articles in the future so maybe it won't matter, just seems odd.

That's foolishness. Open source is far and away a more secure platform than "closed" source. One problem with closed source is that no software is truly closed. So you still have a handful of perhaps underpaid folks who get to see the holes just for themselves. Not to mention the same folks can add their own holes. And still, when holes are found, the closed source companies tend to act like they don't exist, and try to write for themselves contracts that prevent them getting in trouble for said holes. There are just too many problems with security in "closed" source software.

Open source does not have any of these problems. Only problem with open source is if you have one person who is significantly smarter than everyone else looking at the code and can come up with an exploit before anyone else notices. This is a more comfortable position to be in as far as I am concerned.

When you compile a program, roughly 90% of the information in the original code is lost. The variable names, the object names, function names, and all comments are stripped out and replaced with something else. A decompiler can see some code, but not the code, and for large applications, that makes a huge difference.

The debate about the relative security merits of open-source as opposed to proprietary software development has been a very long-running one

Indeed. The principle of open security was first proposed by Auguste Kerckhoffs in 1883.

Any time security depends on the secrecy of some mechanism, that security is perpetually at risk. All these millions of instances of the same vulnerable mechanism, no way to tell in general whether their security has been broken, and -- as you point out -- a certainty that the vulnerable secret cannot be contained.
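This is Kerckhoffs's principle in miniature: put the secrecy in a replaceable key, not in the mechanism. A small sketch of the contrast, using Python's standard `hmac` module:

```python
# The mechanism (HMAC-SHA256) is public and identical everywhere; only
# the key is secret, and a key can be rotated per deployment. A secret
# algorithm, by contrast, is broken everywhere at once the moment one
# copy leaks or is reverse engineered.
import hashlib
import hmac

def tag(key, message):
    """Authenticate a message; reading this code gains an attacker nothing."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, mac):
    return hmac.compare_digest(tag(key, message), mac)
```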

They need to remember the cryptography community and the history of the field. The NSA has made a lot of cryptographic algorithms with some of the most talented mathematicians in their generations. Years later, when they're declassified, the cryptography experts pick them apart and they've found some of the core algorithms were deeply flawed. If the NSA can't keep a closed-source algorithm secure, what makes any private company think they can do it?

People will want to make their phones do special and complex things. To facilitate this, they will write API libraries that other parties will also use because the phone's basic API will not support much.

The result of a non-robust API will be large numbers of object-code libraries being built and installed, varying dependencies and conflicts, and so on. As much as possible, it would be best to maintain the API from a single point. This will also enable a much smoother user experience, since people won't be forced to create their own GUI libraries and the like.

It needs to be complex and it needs to support everything... at least potentially. Ideally, everything except the data and the object code should be provided through the OS and OS supplied libraries. This would best guarantee compatibility and stability. But we know it won't happen that way. We can't even get KDE and GNOME unified. Some "smarter-than-you-and-me" guy will write something that will be rejected by the masters of the API but will be used by a variety of other developers and then it all begins.

And what happens when the OSS community rebels? Recall how XFree86 became stagnant and people rebelled to create X.org? That wasn't a disaster, but what happens when it happens on users' phones? And will there be multiple phone distros? And will AT&T and T-Mobile try to lock them up? And if they "can't" then will they block those phones from being used on their network (in spite of laws to the contrary)?

There won't be a single API that is maintained. Inevitably such a project will eventually fork, because one of the chief maintainers will go crazy because someone deviated from using the correct brace style. As quaint as it sounds, I'm a big fan of static linking when it comes to APIs that are not a part of the base operating system. This is probably because I expect the user to lose each and every related dependency, configuration file, and other random file that my app needs to run. You don't know how nice

Assuming, like many, that for libraries, disk space and bandwidth is close to no concern, just make sure to provide an auto-update feature to your application. (If the device is really constrained then you'll run into problems with that mentality) You get all the benefit of static linking's portability, and for the minor cost of maintaining an online site for distribution, you can update any time any of your libraries get important updates. You could probably even automate the update cycle with a couple scr
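The update-check half of that idea is simple to sketch. The manifest shape and field names below are invented for illustration; a real updater must also verify a signature on whatever it downloads:

```python
# Sketch of an auto-update check: compare the installed version against
# a manifest the app would fetch from its own distribution site.

def parse_version(s):
    """'1.4.10' -> (1, 4, 10), so comparisons are numeric, not lexicographic."""
    return tuple(int(part) for part in s.split("."))

def needs_update(installed, manifest):
    """`manifest` is e.g. {'latest': '1.5.0', 'url': 'https://example.com/app.pkg'}."""
    return parse_version(manifest["latest"]) > parse_version(installed)
```

The tuple comparison matters: as strings, "1.4.9" sorts after "1.4.10", which is exactly the kind of bug a naive updater ships with.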

If I were to rate actors, Rimmer would definitely be at the top... it may not seem like it at first, but look at how many different characters within a character he has played as Rimmer... I would say that he has the toughest acting duties out of all of them.

I like open source projects (mysql and subversion are tops in my book), but I have to take exception with the notion that open source software is great because thousands of people from around the world are looking at and trying to fix the code.
I think this is bull$h!t.
Open source code is coded by a small fraction of its userbase. And each project still has one, or maybe two, people at the top who approve and integrate each real change. It's not this automated machine. When developing any kind of software

Yabut... The beauty of open source is that it lets people like me contribute little dribbles here and there. I've probably touched a couple of dozen projects, typically only contributing a single fix or small feature, even something as small as the ability to daemonize hot-babe.

Now by itself that's not much, and in the context of progress it's minuscule, but it adds a tiny feature. Certainly I'm not a cathedral builder; I'm more the guy who comes in and sweeps up the dust by one door... But with enoug

The source code in proprietary software, on the other hand, can't be directly viewed, meaning vulnerabilities need to be found through reverse engineering.'

This is so wrong it isn't funny. I need to know NOTHING about the internals of a program to exploit it - I only need to find a set of inputs that make it crash in interesting ways. Buffer overflows can be trivially used to redirect a running program to jump to a stack frame supplied as part of the crafted inputs. There are other ways to play the game against binaries without reverse engineering.

[S]ecurity experts claim that the less smart a phone is, the less vulnerable it is.

Next they'll be telling us that "smart" functionality is a buzzword-compliant euphemism for complex code, that complex code is harder to debug than simple code, and that code which is hard to debug often has a lot of, surprise, vulnerabilities. How is this news?

The thing that a lot of people do not understand is that for the most part cell phones are one-time-programmable consumer electronic devices. Once the code is released to manufacturing, that is it. There are no more bugs - just unexpected features.

It matters not who is looking at the code in terms of fixing it. It is not updatable. I suppose it is possible that someone might come up with an updatable phone that was 100% impossible to "brick", but so far I've not seen it. The risks do not outweigh the rewards with that, and the current "experiment" with the iPhone is not proving to be very satisfying. Yes, they have a distribution technique for software updates through iTunes, but how many phones did they lose with the first update?

Treo has a slightly better record, except they do not have a distribution method. You have to download stuff and jump through all kinds of hoops. Perhaps 1 in 10 people update their Treo. I suspect Blackberry isn't much different from that. Also, it is far, far too easy to utterly destroy a Treo with a bad update.

No, I would not count on updates. Too risky and too little penetration. The end result is bugs that get released are features. And they are there to stay.

Huh? The iPhone and the Treo model are identical. The difference is that Apple provides a download manager called iTunes to facilitate the distribution. You still have to jump through hoops to install the update (i.e., click yes to download, click yes to install, click yes to confirm install).

I also suspect they did not lose many phones at all, though, or we would have heard about it in the earnings... in other words the returns/repairs would have hit them (much like the XBox 360 repair/returns hit Microsoft).

The ROM is broken down into parts. Even if you screw up everything in the very large portion you can mess with, there's still enough smarts to respond to USB/ActiveSync from Windows XP and put a new ROM in there. Trust me - I've bricked it! And was very pleased to see how unbrickable it was.