Posted
by
timothy
on Tuesday November 13, 2001 @06:59PM
from the cheapness-is-good dept.

Slashback with more on cheap satellites, the relative speeds of threads under Linux and two strains of Windows, a skeptical response to the idea that crowds of people are retreating to dial-up access, and some tantalizing hints at products killed along with the HP calculator division.

The computing equivalent of Area 51? A short while back HP closed its calculator division. Many thought HP's calculator department was unprofitable. This was not the case. Many thought they were no longer innovating. This was not the case. It turns out that management had to cut 4% of the workforce, and the calculator group was part of the cut.

This article explains more.
It turns out they had designed several Linux-based PDAs, ready for production, that were killed by management. Sound interesting? Go check it out.

The biggest expense was the 12 gross of Estes D engines ...
Satellite Designer writes: "The topic of low cost satellites having been mooted here recently, I thought I'd alert readers to another such project. The HETE-2 satellite recently located a cosmic gamma-ray burst precisely enough that (with a lot of help from friends) an afterglow was detected, identifying its source. HETE-2 cost $26 million, only 1/3 of what a 'small' scientific satellite normally costs.

A lot of commercial 'off the shelf' technology went into HETE. Nothing from Radio Shack, but there are quite a few parts from Digi-Key onboard. You can't save money by using cheap parts (but you *can* save money by using easily obtainable parts), and you can't achieve reliability by using expensive parts (but you *can* help reliability by using the parts best suited for your application). The radical thing about HETE's parts selection was that it considered parts in the application context (as one would do in a normal engineering process), rather than restricting selection to a QPL (qualified parts list) assembled to meet irrelevant requirements.

The real trick to keeping costs down is to do the job with as small a team as possible in the minimum time possible. Rather than employing a large team of specialists, HETE's scientific investigators did much of the engineering and technical work. A small, carefully selected engineering team filled in the knowledge gaps."

Quitting isn't easy, and why bother? dmarsh writes: "This new article from C|Net seems to be a total contradiction to last week's "Dump Broadband, Dig Out Your Modem!" thread's article. I guess the important difference is that this one is backed up by an actual survey by the National Cable and Telecommunications Association."

Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source ;)

Actually, there is little reason to doubt the NCTA data. Although it's called a "survey," their findings are better thought of as sales reports. The NCTA is an industry association, and will certainly try to put a spin on their findings (e.g., "Consumers' strong response to digital cable services, in spite of difficult times, confirms the excellent value of these new services"), but they'll still report accurate sales figures.

The RIAA (boo) would LOVE it if album sales plummeted at the same time that Napster was taking off. Yet, that wasn't the case (although CD singles did suffer a drop last year, the increased sales of full albums were significantly greater), and the RIAA reported the numbers correctly. Although they put their spin on it ("Look at the drop in Singles sales"), they reported valid sales figures.

It looks like the first article [cnet.com] was a guy trying to create a news story when there really wasn't one. Sure, people will switch from high bandwidth to low bandwidth, but if the general trend overshadows it, we end up with a very different story.

The newer story, which is just a rehash of an NCTA press release [ncta.com] says nothing about the people who are installing (or uninstalling) cable modems. It talks about sales trends. This doesn't negate the other story, it just indicates that it isn't all that big a trend.

It's the equivalent of C|Net reporting that I just bought an AMD processor, so Intel had better watch out. Who cares about me? If there are 50,000 people like me, then you notice.

So, yes there are people switching. But no, it doesn't seem to be affecting the industry. Two separate stories, no conflict.

The 825,000 new subscribers brings the total number of U.S. cable modem customers to 6.4 million, about 9.1 percent of the 70 million homes able to receive the service, the National Cable and Telecommunications Association (NCTA) said...

I wonder whether (1) this many people signed up for the service during the period, or (2) this many people finally received their hardware/installation. Everybody knows that the pool of broadband installers is vastly outnumbered by the pool of broadband salespeople. No flamebait here, just wondering if the mass sign-up occurred in 2Q or 3Q...

Also, consider the source of the statistics ("Our research shows that our product is 100% safe...")

My broadband provider started sticking extra fees into my bill earlier this year. It's only $6/month, but it's still lame as hell. I'm revolting by dusting off my ol' 56K USR at home & taking advantage of that T-1 at work. BellSouth can rip off someone else.

So you ditch the connection? Just because they raise the price isn't a good reason to dump it.

Hell, my employer hasn't hired anyone or let anyone go from my group in the last year, so just to cover raises and whatnot our product will cost at least 7% more. If our customers thought like you, we'd be screwed (but so would our competitors).

What you've discovered, even if you don't seem to be aware of it, is that delivering high speed connections isn't as simple as selling, say, lettuce. There is no skill in selling, growing, or shipping lettuce. You simply do it. Companies work very hard at doing it as inexpensively as possible, which makes them large profits. This same mentality has been applied to the cable television industry for years. Get X number of channels into a viewer's home (disregard whether they're good or not) and charge enough to make a profit.

Now hop over to the cable broadband industry. It takes (gasp) skill to implement a WAN/MAN. The technology isn't so simple that you can just pick random parts off a shelf and expect everything to work brilliantly. We should hope that either companies like yours begin to dominate and spread their philosophy of good engineering, or that technology improves to the point that setting up a WAN is as simple as setting up a LAN for a game of Quake.

Wow, I hadn't heard about HP closing its calculator division. It's such a shame; as a (still) proud owner of an HP 48SX, I'm really saddened by this turn of events.

Maybe some slashdotters don't know it, but before the current palm-craze, HP's calculators were *the* portable thing to program for (at least in my university, I remember being amazed that somebody got pacman working on the HP).

To think that a whole division like that, with great products and a great vision was axed just to get the stock price a few bucks up in the short term seems really backwards, but I guess that's what's happening far too often in this period of stock-price-driven management.

I picked up an old 95LX Palmtop a couple of weeks ago at a thrift store ($4). So far it's been a great replacement PDA (my CE 2.0 PDA died a horrible death).
This thing has roughly the power of an XT, and with 512K of on-board memory (shared RAM disk and system memory) and a 512K RAM card it does everything I need from a PDA: notes, phonebook, games, even room for an ebook. Runs FROTZ fine as well. If you can find one of them (or its bigger brothers, the 100LX and 200LX), grab it. Excellent design, and it will run for weeks on 2 AA batteries.

Very sad -- I treasure my HP16C "Computer Scientist" calculator, vintage 1982 and still in daily use. Made in Corvallis, Oregon. Works as well today as the day it came out of the box -- and it was a gift from a friend in Portland.

No family member would steal it because of the reverse Polish logic. Perfect.

I cannot find a single thing in any financial news that states that HP has closed its calculator division.
http://calc.org is down or /.'ed right now, so I can't even read the article to verify whether the story even matches the /. headline.

Unfortunately, the key word in your subject is "was". The HP whose products we loved is no longer around. It's been homogenized, downsized and chipped away by "management teams" like the current one. They have lived up to their titles: they've "managed". The company isn't closed; it has "managed" to survive.
HP was founded by engineers. Engineering is what they knew, and that's how they competed. Today, HP is run by b-schoolers; engineering really isn't their forte. But they know advertising and finance and marketing. So that's what they rely on; that's how they compete. They leave the real innovation to their "partners" (guess who I'm talking about) who promise them success in terms they can understand: market share and intrinsic stock option value. Meanwhile, the company dissolves from the inside into yet another sales staff and yet another brand for the same old Same Old.
The Hewletts and the Packards might stop the Compaq deal, but all the rats together still can't stop their sinking ship from taking them under. It will take great innovation, not great speeches about "innovation". Good luck HP, you're gonna need it.

Question the source? I'm sure my telephone/cable company has been hard at work installing that transponder in the box 25' from my house since January. Every month I call... "Yes sir, it should be any time now..."

The other difference between the two articles is that the latter one is talking about Cable in particular, rather than "broadband" (i.e. both Cable and DSL).

I used to have DSL. When I moved, I tried a Cable Modem instead. I found the quality of my connection was better, and the service technicians were far more knowledgeable. Of course, that reflects more on the individual companies (Verizon for DSL vs. Charter for Cable) than it does on DSL vs. Cable, but considering the number of people I know who gave up on DSL because of technical problems, I wouldn't be surprised if DSL is losing business to Cable.

Here in Pasadena, Cable is cheaper and they can come install it within a day or two of your order. When I got DSL, I had to wait six weeks for the first visit, and it took them quite a few tries to get it working.

It all depends... In my experience (DSL at parent's home, DSL at work, Cable at 3 apartments), DSL lines have generally had much better uptime and more consistent bandwidth. This is not to say that they have never gone down.. they all have. Also, I've had the fastest download speeds on Cable.

My experience is the general case, but other people like yourself have had different results. I think it all comes down to the number of subscribers in your area, and the competency of your provider.

Here in California, Cingular Wireless seems to have the worst service of any cell phone provider. However, I consider GSM (the type of network they use) to be the best network technologically. So why do they have all these problems? It all happened when they made the name-switch from PacBell to Cingular, and I believe the major problem is they have reached capacity. Bad planning? Bad management?

I'm going to have to go with this. The office had Cable for a while. Whereas the upload speed was higher than the DSL line I have at home, the speed varied greatly from time to time (i.e., we would have d/l speeds of 400k on average one moment, and 11k another, with no pattern as to time of day or site d/l'ed from). I've also noticed that the uptime is much better for DSL; the cable modems would go out for days at a time, whereas my DSL may drop off once every couple of weeks for a few minutes. I think it just depends on location.

The trick with Verizon is to never, never sign on for the Verizon ISP service, but to get your own. And insist on a real splitter instead of microfilters, but that goes for any DSL service.

The Verizon PPPoE software for Windows was reported to chew the CPU prodigiously, like you were running SETI@home. That's where a lot of the discontent with Verizon originated. I heard about that when doing research, but by then I had already made other plans. I set Verizon DSL up with a DHCP-providing local ISP instead of Verizon or Earthlink, with a Linux floppy-based firewall. It's been fabulously great (for the people I set it up for) ever since the splitter was installed. Literally like dialtone.
Oh yes, and another thing: if it isn't a DSL-to-ethernet bridge, it was never designed to perform with stability in the first place. Whoever your DSL provider is, they don't necessarily want you to be connected 24/7, so as the technology "matures", meaning as they get past the tech-savvy, nitpicky first adopters, they think: why should they give away hardware that contributes to high uptime when they can buy the cheapest POS USB devices and microfilters instead, saving themselves money and keeping the customers offline a little more?

It might stay around, but it will probably only be as fast and reliable as the United States Postal Service delivers mail or Amtrak delivers people.

Broadband as an industry is here to stay, regardless of what happens to any one company. So in the long run, government broadband is not really more stable than the private broadband industry as a whole.

In a truly competitive market, other companies would come in to fill the gap left by the departing ones. The problem is, the companies that currently dominate broadband come from industries that are used to having government imposed monopoly status: cable and telephone. The monopoly status is starting to go away in the cable industry, but is persisting for telephone, especially in regards to the "final mile."

The first wave of DSL providers had tremendous problems getting the incumbent carriers (ILECs) to give them support when there were line problems. The ILECs didn't want them to succeed because they wanted to offer their own DSL but hadn't managed to get their act together yet. They had no incentive to provide good service and every incentive to provide bad service. Result: bad service. Now that the first wave of DSL providers has gone bankrupt, the ILECs are moving in to dominate DSL. A typical consequence of government interfering in markets.

So what you're really talking about is a government "solution" to a problem that was created by government in the first place. No thanks.

In a truly competitive market, other companies would come in to fill the gap left by the departing ones. The problem is, the companies that currently dominate broadband come from industries that are used to having government imposed monopoly status: cable and telephone. The monopoly status is starting to go away in the cable industry, but is persisting for telephone, especially in regards to the "final mile."

Yeah, and in the best of all possible worlds... Monopoly status is not imposed by the government in the sense that the government forbids competition; by and large what they do is amelioration of the effects of an existing monopoly (price controls, etc.). Government does not impose monopoly status so much as it acknowledges an existing reality. You seem to forget that it was government "interference" that opened telephone lines up to DSL competitors in the first place, but that's inconvenient, so we'll just forget that, right? Of course, the RBOCs' incentive for doing so was access to long-distance markets they couldn't get into after the AT&T breakup. One of the many woes that it introduced to the average consumer was no longer having to hide extra telephones when the repairman came by. Don't forget choosing your long distance carrier.

Cable was deregged under George I. Guess what? Prices went up. Natural gas prices in GA went up when they deregged last year. CA's electricity woes are partially due to a badly-planned dereg, but the consumers still had to take it up the ass. While competition is always good for the competitors (i.e., drive wholesale up by bidding because we're different companies and therefore not a monopoly,) it's not always good for consumers. Rather than parrot armchair libertarianism, maybe you should look at deregulation on a case-by-case basis and support it where it lowers costs to consumers and oppose it where it doesn't. Unless you have a financial stake in a company assraping consumers in the name of the "free market" you really shouldn't have a dog in this fight. If you do have a financial stake in such a company, you should say so up front so there's no confusion. If your interest is strictly ideological I can't see any explanation other than that you favor the concentration of wealth in the hands of a very few people even when that doesn't include you because you somehow find these people more accountable than politicians who can be voted out or recalled.

The first wave of DSL providers had tremendous problems getting the incumbent carriers (ILECs) to give them support when there were line problems. The ILECs didn't want them to succeed because they wanted to offer their own DSL but hadn't managed to get their act together yet. They had no incentive to provide good service and every incentive to provide bad service. Result: bad service

Who's going to provide those incentives to good service? The Tooth Fairy, the Easter Bunny, Santa Claus, or the government? Remember it took legislation just to get the cable companies to answer the phone.

So what you're really talking about is a government "solution" to a problem that was created by government in the first place. No thanks.

In a truly free market you could be bought and ground up for pet food. Never forget that.

at least it should be more stable (i.e., much less chance of bankruptcy) than a lot of these poor companies going out of business.

The government itself may not go out of business, but what will stop them from deciding next year that its broadband services are losing too much money and should be either privatised, discontinued, price increased dramatically, etc?

It's not like those politicians will be saying "sure it loses money but this is way more important than elementary education, so let's subsidise it just a little longer until it starts breaking even". Most governments (well, local governments) have fairly tight purse strings.

this [survey] is backed up by an actual survey by the National Cable and Telecommunications Association.

-Slashback

Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source;)

-Timothy

Well, OK, let's question the source. The National Cable & Telecommunications Association [ncta.com] is "the principal trade association of the cable television industry in the United States". So basically, they're the RIAA of the cable industry. And they just published a survey that says that consumers are subscribing to broadband in mass quantities.

Ok, I question the source. This is like Shell Oil publishing a study that concludes that burning gasoline provides valuable fertilizer for wetlands. Why give PR machines free press?

Many thought HP's calculator department was unprofitable. This was not the case.

If their calculator division was making money, then why on earth was it chosen to be closed down? They should have chosen something that was losing them money. If there were no departments losing money, then they shouldn't have had to cut *any* departments.

Ahh, but you have to remember that being profitable isn't good enough. You have to have double-digit growth in order to keep your stock price going up.

HP has a giant cash cow in the printer business. But printers aren't very buzzword compliant, and don't give analysts anything interesting to talk about. So the money coming in from printers is used to finance whatever projects Carly thinks will give the stock price a boost.

In the last couple years, HP's philosophy has been to concentrate on a few areas. It was the reason that they spun off their test and measurement division as Agilent Technologies. HP currently wants to concentrate on computers and the internet. I guess the calculators did not fit into their vision of a computer and internet world.

Personally, I think they should have given the calculator division to Agilent when it was spun off. It seems to line up with Agilent's mission of making specialized electronic devices.

More information about the very new Mandrake Gaming Edition with The Sims seems to be available here [mandrakesoft.com], and pre-orders seem to be open at MandrakeStore. Just wanted to let you know because I find this stuff extremely _cool_ :-)

The Linux interfaces show the traditional SVR5 semaphores to be the slowest performers while the pthread mutexes are the fastest.

Well, duh. Just look at the man page sections -- semop is in section 2 (system calls) and pthreads are in section 3 (library calls). As a general rule of thumb, system calls will be slower than library calls (a context switch is involved).
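For readers who haven't used both, here is a rough side-by-side sketch (mine, not from the article) of the two styles being compared: a SysV semaphore driven through semop() versus a pthread mutex. Whether the mutex path actually stays out of the kernel depends on the implementation, as the follow-ups below point out. Build with something like gcc -Wall lockdemo.c -lpthread.

    #include <stdio.h>
    #include <pthread.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    union semun { int val; };            /* Linux makes the caller define this */

    int main(void)
    {
        /* SysV semaphore: every lock/unlock is a semop() system call (section 2). */
        int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
        union semun arg = { 1 };
        semctl(semid, 0, SETVAL, arg);               /* count starts at 1 */

        struct sembuf lock   = { 0, -1, 0 };
        struct sembuf unlock = { 0, +1, 0 };
        semop(semid, &lock, 1);                      /* P(): enters the kernel */
        semop(semid, &unlock, 1);                    /* V(): enters the kernel */
        semctl(semid, 0, IPC_RMID);

        /* pthread mutex: a library call (section 3) that may or may not
         * need the kernel, depending on the implementation. */
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        pthread_mutex_lock(&m);
        pthread_mutex_unlock(&m);
        pthread_mutex_destroy(&m);

        puts("locked and unlocked both ways");
        return 0;
    }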

Ahem... but most library calls themselves invoke system calls to get the job done. I doubt pthread semaphores and mutexes are implemented without some help from the system (access to shared memory, putting threads into wait queues, etc.)

Furthermore, any library function that does the same thing as a system function will undoubtedly call the system function (fopen calls open, fork calls clone, etc.).

Perhaps this just reflects that the implementation of IPC in Linux, while complete, is not as fast or optimized as it should be. This is probably because everyone uses sockets, mmaps and stuff to do the same things, all of which are already fast, so nobody bitches enough about it to prompt someone to rework it.

Note that I make this statement purely from an observational standpoint; most application code I see forgoes IPC for other methods. Would somebody care to give an example of some common Linux app that uses IPC heavily?

I don't know about specific implementations of the pthreads API, but there is little reason why most calls can't be done purely in user space. All the threads are already in the same process, so shared memory isn't an issue. Processor instructions such as "test and set" don't typically need supervisor privileges.

Of course, I'm overgeneralizing again and someone will jump on me for that:)
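To make the test-and-set point concrete, here is a toy user-space spinlock built on GCC's atomic builtins. This is a sketch of the idea only, not how any particular pthreads implementation does it; acquire and release never enter the kernel, at the price of busy-waiting.

    #include <stdio.h>

    /* A toy user-space spinlock: no system call on either path. */
    static volatile int lock_word = 0;

    static void spin_lock(volatile int *l)
    {
        /* Atomically set *l to 1 and return the previous value; loop until
         * the previous value was 0, i.e. until we were the one who took it. */
        while (__sync_lock_test_and_set(l, 1))
            ;                      /* busy-wait; a real lock would back off */
    }

    static void spin_unlock(volatile int *l)
    {
        __sync_lock_release(l);    /* atomically store 0 with release semantics */
    }

    int main(void)
    {
        spin_lock(&lock_word);
        puts("in the critical section");
        spin_unlock(&lock_word);
        return 0;
    }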

As I pointed out above (but I realize I may have placed the response in the wrong thread), it would be a good idea to compare critical sections in threads against coroutines [greenend.org.uk], as they both involve transfer of data between different functions without a context change.
Is it really worth using threads when coroutines can be used instead? Why do round-robin scheduling when you can simply have your functions call each other?
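For the curious, here is a minimal coroutine sketch using the ucontext calls (getcontext/makecontext/swapcontext): two functions hand control back and forth with no scheduler and no kernel-level thread switch. Purely illustrative; real coroutine libraries are more careful about stack handling and errors.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, coro_ctx;
    static char coro_stack[64 * 1024];

    /* A "producer" that yields back to main between items; no threads involved. */
    static void producer(void)
    {
        int i;
        for (i = 0; i < 3; i++) {
            printf("producer: item %d\n", i);
            swapcontext(&coro_ctx, &main_ctx);   /* yield to main */
        }
    }

    int main(void)
    {
        int i;

        getcontext(&coro_ctx);
        coro_ctx.uc_stack.ss_sp   = coro_stack;
        coro_ctx.uc_stack.ss_size = sizeof coro_stack;
        coro_ctx.uc_link          = &main_ctx;   /* return here if producer ends */
        makecontext(&coro_ctx, producer, 0);

        for (i = 0; i < 3; i++) {
            swapcontext(&main_ctx, &coro_ctx);   /* resume producer */
            printf("main: consumed item %d\n", i);
        }
        return 0;
    }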

but there is little reason why most calls can't be done purely in user space.

True; the fact they don't under linux is an artifact of LinuxThreads. As Xavier Leroy notes in his
FAQ [inria.fr], a one-to-one (every thread maps to a kernel thread) thread implementation implies that every context switch must be at the kernel level, which is more expensive than a pure user-space context switch. It's the price you pay for simplicity. This is somewhat mitigated by linux's fast context switching.

1. "Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source;) " says Timothy of slashdot, home of slack journalism and one-sided reporting. So, I don't believe a word he says, and I won't question the source.

2. "Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source;) " says Timothy of slashdot, and I believe him. I will question every source. Goto 1.

I wonder what the results would have been if he used the non-portable (non-pthread) interfaces to the sync/threading primitives in linux... because Windows gets an extra boost not having to go through a compatibility API. Are there non-pthread abstractions for mutexes and such? I don't know much about low-level threading stuff in linux beyond clone.

Reason: downloads could hit 400+K/s, uploads could hit 200K/s (not bits, bytes).
After a year, down ~= 200+K/s, upload capped at 128K/s. OK, fine and dandy.
Insult to injury came when the download rate varied (no biggie) but a second cap arrived at 128kbits.
When questioning the provider and calling the corporate office I got "Oh, we meant 128kbits, not Kbytes".

Uh, huh.

The sad part is no one noticed the drop-off in cable revenues at, or shortly after, two things:
Killing off the *.divx groups and 'capping people off at the knees' as far as uploads go.

By capping off uploads and killing off the divx groups @home completely negated the purpose of broadband

Include the caving in to the MPAA w/o so much as a defense of its own customers, much less adhering to the "innocent until proven guilty" theorem.

If DSL could provide a 128Kbyte up/down rate and eliminate the install hassles and provide the service for 20 to 25 bucks a month...I'd jump on that in a heartbeat.
If they had a "you want faster, you pay more" scheme (which @home does not do...WTF?) I'd use it, and I'd *recommend* other cable users do it as well.

I can't tell you how many people I've recommended cable to, because I lost count.
Now I tell them DSL first, cable second if they don't mind "getting less" for the same amount of money.

"once bitten, twice shy"

Ok, in my case it was a nip first then a bite.

Now I am shying away from recommending cable as a first step. The second step is getting away from it entirely, especially if the 'veeceedee' groups start disappearing.

Then a lot of us will have absolutely *NO* reasons for sticking with cable.

Maybe there should be a T-shirt that says:
"I got broadband and all I got was a large pr0n collection..."

Wait. What was the downside again?

I forgot to mention that @home scans your machine daily to make sure you are not running a news server.
Never mind they *don't provide the bandwidth* to run a news server, and more often than not the *scans* will disrupt your downloads!

As for my previous post and your question, I think the "hint" of fraud is just one more example of @home's... what is the word I'm looking for... incompetence, stupidity, (again) fraud, backward-assed-ness?

I'm sure someone else could think of a more eloquent way to put it, but this kind of reverse logic escapes me.

Seriously, look at the heart of what I am saying: you are paying the same, or more, and getting less and less as @home takes more away from you. Is this the way to run a business?

Is this the kind of "e-commerce" we can expect?

This kind of business "hara-kiri" lends new meaning to "e-viscerated", does it not?

(I apologise for avoiding your question as to moderation. It was intentional. I've never moderated and I'm sure there are guidelines.
Heck I got a chuckle out of the moderations of this comment [slashdot.org].
What is even funnier is that I agreed with the totals because it was too far over the top.
Don't get me wrong. Getting karma points is nice, but I prefer to be challenged on my thinking not on how I'm moderated.
That might be another point you missed, perhaps?)

I forgot to mention that @home scans your machine daily to make sure you are not running a news server.

They were pressured [cnet.com] to do this by Usenet administrators. If they had not, their IPs would have been blocked by many usenet servers. The levels of spam from @Home addresses were unacceptable. These scans fixed the problem.

Never mind they *don't provide the bandwidth* to run a news server

Right. They provide fucking insane downstream bandwidth and fairly modest upstream, suitable for clients. I would prefer more upstream, too, but not if it means paying more...which of course it would. Bandwidth costs money. If you haven't noticed, @Home isn't in the best financial shape.

Why would you run a Usenet server anyway? This is a huge resource drain (much more content than you actually read is sent to you), when there are plenty of other usenet servers (for modest fees, or even using the ones @Home provides) or alternatives to Usenet entirely.

... and more often than not the *scans* will disrupt your downloads!

Bullshit. Their scans consist of SYN packets to port nntp (119/tcp). If nothing on your machine is listening there, the connection is simply refused (a TCP reset or an ICMP unreachable) and nothing more will happen. I am an @Home customer and was when they started doing this. I have not experienced any problems due to these scans.

Seriously, look at the heart of what I am saying: you are paying the same, or more, and getting less and less as @home takes more away from you. Is this the way to run a business?

Given their terrible financial situation [nytimes.com], they must do this or go broke. In that case, they would charge you nothing and provide no service. You have that option now. Take it if you like.

My complaint with @Home is that their support is absolutely terrible. When I call about service interruptions, I'm put on hold for way too long before talking to someone who does not have a clue. I'd much rather see them pour money into fixing this problem than into a little more upstream bandwidth.

Why would you run a Usenet server anyway? This is a huge resource drain (much more content than you actually read is sent to you), when there are plenty of other usenet servers

For leaf sites, it's usually more efficient to run a caching news server. This only downloads the articles you read, but caches them, plus XOVERs and other repetitive stuff, which reduces the bandwidth required by a large amount.

When you steal something, you deprive the owner of that thing. Watching a movie in a time and place YOU decide on instead of the MPAA is not taking anything from them.
Instantly, a thousand of you are now saying "But you're depriving them of income they would otherwise have." To that I say NO! I am not keeping anyone from seeing a movie at a theater.

By capping off uploads and killing off the divx groups @home completely negated the purpose of broadband

Subscribe to external news sources - that will probably set you back about $10/mo. Sure, that's ANOTHER $10 a month out of your pocket. But if you're feeling squirrelly, consider what that costs the provider.

The traffic used to have a set cost as defined by upkeep of the internal network - call it "internal cost". Now the same traffic has that internal cost as well as the cost associated with increased traffic from the upstream provider. It's possible that the cost of this external traffic is less than the cost of providing better usenet service. It's also very possible this same traffic now has a considerably higher cost.

The article was about IPC (inter-process communication). Win critical sections do not provide inter-process facilities. In fact, they don't necessarily even work efficiently on SMP systems either. 'nuff said?

It was about synchronisation - semaphores, mutexes, and critical sections were all compared. For synchronisation between threads within a process, it looks like critical sections may be faster; however, it will of course depend on how much contention there is and, as you say, whether the system has multiple processors. Because of that, I don't think these benchmarks are very useful.

The point about critical-section efficiency on MP systems is pretty key. I think that calls for a multithreaded, multiprocessing retest...

But in any case, if your application is spending a significant amount of time grinding on waits for mutex constructs (i.e., any of the primitives discussed), you're having a bad day -- it means your threads are spending a lot of time in critical sections, and are going to spend a lot of time waiting for each other.

There are schools of concurrent design where you typically have threads blocked on a mutex waiting to move forward, but I don't think those are particularly high-performance models in the first place. Better to stick to the old dictum: "minimize the critical section", both in length and in frequency.

Actually, critical sections are fast only if there is low contention for them. As soon as threads start contending for them, performance goes out the window. They also don't scale well with the number of threads, and they exhibit horrible performance degradations if the priority of the contending threads is not at a maximum.
There is a great summary of the issues at
http://world.std.com/~jmhart/csmutx.htm [google.com].
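For those who haven't touched the Win32 side, here is roughly what the API under discussion looks like (a minimal sketch; the uncontended Enter/Leave path is a user-mode interlocked operation, and it only falls back to waiting on a kernel object when threads actually collide, which is why contention hurts so much).

    #include <windows.h>
    #include <stdio.h>

    static CRITICAL_SECTION cs;
    static long counter = 0;

    static DWORD WINAPI worker(LPVOID arg)
    {
        int i;
        (void)arg;
        for (i = 0; i < 100000; i++) {
            EnterCriticalSection(&cs);   /* user-mode fast path when uncontended */
            counter++;                   /* keep the protected region short */
            LeaveCriticalSection(&cs);
        }
        return 0;
    }

    int main(void)
    {
        HANDLE t[2];
        int i;

        InitializeCriticalSection(&cs);
        for (i = 0; i < 2; i++)
            t[i] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        WaitForMultipleObjects(2, t, TRUE, INFINITE);
        for (i = 0; i < 2; i++)
            CloseHandle(t[i]);

        printf("counter = %ld\n", counter);   /* 200000 if the lock works */
        DeleteCriticalSection(&cs);
        return 0;
    }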

I've had Roadrunner access via Time Warner cable for over two years now, and despite various problems, unless they triple the rates I'll never unsubscribe. And so far as I know, the net number of broadband users is still going up on an exponential curve. But I can understand the reasons for the earlier statistics...

The exact determination is that "more people than ever are leaving broadband". Not that the ranks are shrinking, but that a greater number of people are terminating accounts. Obviously, as you increase your customer base, if the same percentage of people unhook every month due to dissatisfaction or because they can't afford it any more, then of course the gross number will increase.

The reason why pthreads 'look pretty good' speed-wise is that the pthread library provides user-level threads as opposed to kernel-level threads. User-level threads have their own scheduler and are much quicker to swap out--less data to save than during a kernel thread context switch. Meanwhile, pthread semaphores (and condition variables) should also be faster depending on the user-to-kernel thread mapping scheme (Windows 2000 maps each user thread to a kernel thread, for example; I think Linux uses a many-to-many mapping). This'll be reflected in how fast threads go through their critical sections, because they may have to wait a shorter or longer time to get access to them.

I've had the misfortune to have done some work on Windows NT, and the question that I could not answer from skimming the article was, "Were the installs of Windows uniprocessor or multiprocessor?"

In Windows, the critical section code will become a single bit test-and-set instruction on a uniprocessor system (which, being a single machine instruction, is very fast), but a much more complicated operation on a multiprocessor build.

Under Linux, you don't have to explicitly compile your program for multiprocessor support, so I would guess that Linux is using a more SMP-friendly implementation of a mutex than a uniprocessor build of Windows.

If HP is going to stop making calculators, what will people start using? Sure, there is some great math and engineering software out there like Matlab and Mathematica, but sometimes you just want to add up a couple of numbers. I still would rather use my 48GX for that even if I'm sitting in front of a computer - it has a far better interface for punching in numbers and accessing math functions. And the 48GX fits into a (big) pocket like no laptop ever could.

Does anyone else make high quality calculators? Or are there any good math programs for PDAs?

What I'm wondering is how the synchronization primitives SCALE with number of threads. Really, who uses synchronization for *single-threaded* applications? I'd like to see graphs over thread count and see how operating systems handle higher contention over shared resources. In this test, no blocking was going on whatsoever, because it was just one thread locking and unlocking.
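Something like the following (my own sketch, not the article's code) would at least exercise contention: spin up N threads that all hammer the same pthread mutex and watch how wall-clock time grows with N. Build with something like gcc -O2 bench.c -lpthread.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ITERS 1000000L

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static long shared;

    static void *hammer(void *arg)
    {
        long i;
        (void)arg;
        for (i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&m);     /* all threads contend on the same lock */
            shared++;
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int n = (argc > 1) ? atoi(argv[1]) : 4;   /* thread count from argv */
        pthread_t *t = malloc(n * sizeof *t);
        struct timespec start, end;
        double secs;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < n; i++)
            pthread_create(&t[i], NULL, hammer, NULL);
        for (i = 0; i < n; i++)
            pthread_join(t[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        secs = (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d threads, %ld total increments, %.3f s\n", n, shared, secs);
        free(t);
        return 0;
    }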

I heard that people aren't flocking like sheep to buy Windows XP, which is good news if it is true.

It might be good news, but not for alternative OSs. It simply means that M$ has saturated the market with their previous versions of Windows, and there aren't any compelling reasons to change. Anybody who was going to switch from Win98 just switched to Win2K or ME, and isn't about to run out and buy XP. That said, they ain't buying Linux either.

1. XP has a different (though open and standardised) bootloader removing the task from BIOS. Dualbooting is more difficult under XP.

Bullshit. XP uses the exact same bootloader as NT4, 2K, and even WinME (well, sure, with some minor cosmetic changes and performance enhancements, but for all intents and purposes it's the same loader written way back in 93/94-ish for NT4). As well, it's never been hard to dual-boot with the NT-Loader. There are two mini-howtos on LinuxDoc [linuxdoc.org] that outline two different ways of dual-booting Linux with NT (using LILO):

Using Lilo, either start linux, or start NT/ME/2K/XP, thus bringing you to the NT-Loader screen (in 2K, ME, and XP (and possibly NT4, though it's been a long time since I've played with that), if NT-Loader only has one entry, you won't get a menu and it'll just directly load that one entry)

Using NT-Loader, either start NT/ME/2K/XP, or start linux, thus bringing up the LILO prompt

Both methods work, and I have used both in the past. Interestingly enough, NT-Loader is flexible enough that it can work with pretty much any OS. I've personally used it to dual-boot BeOS 4.5 and Windows 2000, in the past, and never had any problems.

2. The usual obscuring of office formats. While RTF is usable, the new version bloats to about ten times the size when containing images - nice one, M$.

First off, what does this have to do with Windows XP? You've obviously confused Windows XP with Office XP. Second, this is not new, and it's unlikely it'll change (although with Microsoft moving more and more towards XML, don't be surprised if you start seeing XML-based Word documents that can thus be easily parsed by anything that understands XML).

Given all the above, I still don't see how these are anti-open source. Hell, even WPA isn't "anti-open source". It's anti-piracy, sure, but I don't see how it has anything to do with open source at all.

I saw XP at a Fry's and was not impressed. It contains more graphics and junk, which means that it needs yet more powerful computers than before to accomplish the same tasks.

You saw Windows XP at Fry's? I'm assuming you mean you saw a demo computer running XP, and not that you merely saw the box sitting on a shelf. By your logic, I could say "I saw Linux at my friend's house and was not impressed. It was nothing but text and stuff."

I shouldn't have to tell you that the interface isn't the OS. If everyone judged Linux by its interface and nothing else (which, unfortunately, is often the case), people would have an absurdly skewed view of Linux. Think about how many different window managers and themes there are for Linux. Just because one of them looks like shit doesn't mean the underlying OS kernel sucks.

The same holds true for Windows. Sure, the interface may be full of goofy alpha blending and unnecessary menu fade-ins and mouse pointer shadows and other things, but when you replace explorer.exe with a third-party shell (or merely disable the extra eye candy via the Control Panel), all that stuff goes away and you're left with what is without a doubt the most stable version of Windows I've ever seen.

The same holds true for Windows. Sure, the interface may be full of goofy alpha blending and unnecessary menu fade-ins and mouse pointer shadows and other things, but when you replace explorer.exe with a third-party shell (or merely disable the extra eye candy via the Control Panel), all that stuff goes away and you're left with what is without a doubt the most stable version of Windows I've ever seen.

The stablest Windows version isn't saying a whole heck of a lot. An analogous quote would be something like "the new Twinkie XP is the healthiest Twinkie Hostess has ever made".

So after paying for 3.1, 3.11, Win95, Win98, Win2K, WinME (forget WinNT for the moment because it was never marketed for home consumer use), we finally have a Windows product that might actually be stable enough to be worth its cost... now if I could only trade in all those old MS licenses for all the MS OSes that I have kicking around for a stable Windows product. MS calls it a new OS; I call it a sorely needed basic upgrade... too bad I have to pay through the nose once again for basic functionality I should have had a decade ago.

As far as interface != Windows XP: show me a major Windows application that can fully function from the command line. Show me a useful scriptable terminal shell environment that comes with Windows XP. The interface IS MS Windows. You might be able to graft on a less functional 3rd-party wm/file manager other than Explorer, but what you are paying for when you buy XP is the interface and all the time and effort spent getting the bells and whistles (and MSN ads...don't forget those) in place. If you were paying for the effort MS put into stability from OS release to release, each version of Windows would have a fair price of about $2...and the upgrade to XP would be a free patch, like the virus patches are. I've never really understood that; poor stability leads to data loss just like viruses do...but MS doesn't hand out free stability upgrades, they sell them as new OS releases. I shouldn't have to keep paying for promised stability. Paying for new features is one thing...paying for basic features I should have had when I bought the OS is extortion...but that's okay, pretty soon we will all be paying a monthly fee to get access to our Windows system thanks to .NET....so we will never have to "buy" an MS OS again, ever.

As far as interface != Windows XP: show me a major Windows application that can fully function from the command line. Show me a useful scriptable terminal shell environment that comes with Windows XP. The interface IS MS Windows.

How about this -- install vim (yes, there's a native win32 port, not just via cygwin) and Visual C++. Now, using only cmd.exe, you can code your entire application and build it. The VC compiler doesn't require the GUI. Oh, sure, you get the GUI when you install it, but that doesn't mean you have to use it. You can build by hand or write a makefile, just like with gcc. Alternatively, you can configure nearly everything either through commandline tools (try "net help" from cmd.exe, "ipconfig/?", "route/?", and so on) or via the Windows Scripting Host (wscript.exe if you want gui stuff from your script, cscript.exe if you don't). Hell, you can even install software solely from the commandline (look up "msiexec" in the Help and Support Center), given that the software is provided as an msi (Microsoft Installer package, which most new applications are using, and is required to get certified for the XP Logo program).

As far as "major windows applications" running from the command line, I'm going to ignore this as flamebait. Windows is generally used as a GUI environment (and when it's not, it's because it's being used as a server, where you shouldn't be firing up stuff like Word anyway), and so major applications (Word, Excel, IE, Photoshop, whatever) are obviously gui-oriented.

If you need to use those remotely, Terminal Services (now called Remote Desktop in XP) is very nice, and is even better in XP -- 32bpp color depth, tweakable options to help performance, optional audio over the network, full backwards compatibility in both the client and the server so you can connect to win2k or nt4 terminal servers, or connect to XP from NT4 or 2K, and more. You can use TS for remote administration as well, or you can set up the included telnet server, or you can install a third-party ssh server. The first option gives you the most control over the system as you have both console and gui to work with, but the latter two give you nearly as much flexibility even just through the commandline.
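As a trivial illustration of that cmd.exe build workflow (assuming cl.exe is on your PATH, e.g. after running vcvars32.bat from your VC install):

    /* hello.c -- built and run entirely from cmd.exe, no IDE:
     *
     *   C:\src> cl /nologo hello.c
     *   C:\src> hello
     *   hello from the command line
     */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from the command line\n");
        return 0;
    }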

If you were paying for the effort MS put into stability from OS release to release, each version of Windows would have a fair price of about $2...and the upgrade to XP would be a free patch, like the virus patches are. I've never really understood that; poor stability leads to data loss just like viruses do...but MS doesn't hand out free stability upgrades, they sell them as new OS releases. I shouldn't have to keep paying for promised stability. Paying for new features is one thing...paying for basic features I should have had when I bought the OS is extortion...but that's okay, pretty soon we will all be paying a monthly fee to get access to our Windows system thanks to .NET....so we will never have to "buy" an MS OS again, ever.

You've obviously never looked at Windows Update [microsoft.com]. Microsoft does a pretty good job of offering critical updates, not-so-critical updates, minor Windows updates, new versions for things like Messenger, and even some drivers. As far as "paying for patches", maybe so. But historically, all the important features from win98 that could be patched back into 95 without significant changes were made available. Same for 98 -> 98SE, and even 98SE->ME. Granted, there's no way you can just patch 98SE and end up with ME, but any critical updates and such were always offered for the older systems (well, maybe not 95, since it was declared obsolete as of 98se, and 98 and 98se were declared obsolete as of ME, but mainly that just means you won't be able to buy them in the store any more -- they will still be supported with critical updates). As far as the path from WinME to WinXP, there's no way you can make a patch to upgrade between the two. That's like saying you can just get a patch to upgrade from DOS to Linux. Not going to happen. WinME was still Win9x. WinXP is based on 2K, which in turn was based on NT. Completely different kernel, completely different driver architecture, no more legacy 16-bit code, etc.

And just as a note on the whole .NET thing you brought up -- it's very likely that at least initially (and probably for the next 5+ years after), both subscription and stand-alone packages will be offered. In other words, you can pay $99 for your XP->2004 (or whatever) upgrade and be done, or you can pay $30/year to get 2004, and then 2005, and then 2006, and... Maybe not a great idea for businesses that need to standardize on a platform, but do you really believe Microsoft hasn't thought of this? Just as with XP's anti-piracy activation measures, where site licenses for larger companies (I believe, any package of 5+ licenses) do not require activation, standalone licenses would be offered on any software that also has a subscription license (Office.NET, Windows.NET, whatever).

True enough...and this is the point. Windows IS generally used as a GUI environment in the home consumer market. The original post in this thread was talking about the GUI interface being shoddy, and the reply I flamebaited was trying to make the point that the interface XP has isn't all that important...the underlying OS kernel is much, much better and that is the important thing to MS....and I disagree. Let me also say that I didn't think clearly through the last post...I was trying to avoid the server arguments, hence why I said ignore NT...though my comments about a useful shell environment were totally wrong-headed...because commandline tools really are a server argument and that just garbled my main thrust...which is that XP as a consumer desktop solution is all about the interface....and that's what MS has put the time and effort into developing in XP...the things geared toward the consumer desktop market (including the product activation). My point about the commandline and shell interfaces was that in the consumer desktop market these are not important factors...

MS is putting the big development dollars into the spit and polish of XP...you as a home desktop PC owner are not paying for the promise of stability...you are paying for the features...and MS knows this. The people who paid for the stability were the companies that shelled out big bucks for NT support in the good old days, and MS is finally giving the home consumer a taste of a stable system in W2K and now XP.

You've obviously never looked at Windows Update [microsoft.com]. Microsoft does a pretty good job of offering critical updates

Are you honestly telling me that I can get enough Windows updates for my Win98 systems to bring the stability up to the point where it matches XP? I'm not talking about new feature-rich Explorer updates or Messenger updates...I'm talking about basic stability issues, which I think are as critically important to keeping data intact as updates to prevent viruses and internet exploits. I don't expect any release of any software to be perfect...but I don't think it unreasonable to expect the purchase of a product to give me access to continued updates that help prevent system crashes or system lockups. MS wants to release XP chock full of new kernel and new extra features and abilities; fine, that's great...but to drop support for the older OSes which still have glaring stability problems and force people to buy into a new OS yet again...with new hardware yet again...seems a tad disrespectful.
Good thing the EULA washes MS clean of any responsibility to make a best effort to ensure the product actually works as claimed before you even open the software box. I'm not asking for a path from 95 to XP...I don't want XP's features; I want a computer I bought 4 years ago that met the specs of Win98 to reach a decent level of stability...I don't think I should have to buy a whole new OS with a whole new hardware spec to finally get to the point where the OS can claim to be stable and can last a week without rebooting...hence why I run BSD and Linux on the older boxes now...I can be confident that updates affecting stability will be made available for the older architecture. I have no problem paying for productivity updates (new features, new tools), but I have a big problem being told I have to buy a stability update, when the product I bought should have been stable to begin with.

I very much agree with this. Part of my definition of an operating system is that it is stable. Windows 98 is not stable. Therefore, it cannot be truly called an operating system.

I should not have to pay for junk, especially when it is deliberate junk. If Win XP is stable, then it should be a free upgrade to all those who paid for Windows 95 and 98 and ME, and suffered enormously from the shortcomings that were deliberately left there to try to get us to pay more.

Alternatively, you can configure nearly everything either through commandline tools (try "net help" from cmd.exe, "ipconfig/?", "route/?", and so on)

Ok, here's a big problem we face at my job site with hundreds of student accounts that must be reset every two months (when the next batch come in).

Reset a range of user accounts (xxx100-xxx600) to a specified default password WITH the flag marked to force a password change on initial login. Do that from the command line so it could be batched. I've STFTN. I've STFKB. If you can figure out how to do it, I'll grovel at your feet.

Ok, here's a big problem we face at my job site with hundreds of student accounts that must be reset every two months (when the next batch come in).

Reset a range of user accounts (xxx100-xxx600) to a specified default password WITH the flag marked to force a password change on initial login. Do that from the command line so it could be batched. I've STFTN. I've STFKB. If you can figure out how to do it, I'll grovel at your feet.

It's really fun doing this one-by-one in the GUI.

Just as you would script this in Unix, you can script this in Windows. Obviously, the scripts you write will be different. NT != UNIX and UNIX != NT, so that should be expected. Have a look at this link [microsoft.com] for more information (you'll probably need to use IE to get to that link, but since we're talking about Windows here, that shouldn't be an issue). That link discusses the IADsUser scriptable interface for ADSI in Windows. Based on the connect string you use, you can change a local user account, an NT4 domain user account, or an Active Directory user account. Figuring out what properties you need to change for your problem and what glue you need to write to loop through all indicated users is left as an exercise for you.
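If you'd rather skip ADSI scripting entirely, here is a rough alternative sketch in plain C against the old Net API (hedged: from memory, not tested against your domain; the xxx100-xxx600 account names are the ones from the question, and the default password here is made up). NetUserSetInfo at level 1003 resets the password, and flipping USER_INFO_3's password_expired field forces a change at next logon.

    /* Sketch: reset accounts xxx100..xxx600 to a default password and flag
     * "must change password at next logon", via the Net API instead of ADSI.
     * Build (roughly): cl reset.c netapi32.lib user32.lib
     * The first argument to the NetUser* calls is the server/DC name,
     * or NULL for the local machine. */
    #include <windows.h>
    #include <lm.h>
    #include <stdio.h>

    int main(void)
    {
        int n;
        for (n = 100; n <= 600; n++) {
            WCHAR user[32];
            USER_INFO_1003 pw;
            LPBYTE buf = NULL;

            wsprintfW(user, L"xxx%d", n);

            /* 1. Set the default password (info level 1003). */
            pw.usri1003_password = L"Chang3Me!";
            if (NetUserSetInfo(NULL, user, 1003, (LPBYTE)&pw, NULL) != NERR_Success) {
                fwprintf(stderr, L"password reset failed for %ls\n", user);
                continue;
            }

            /* 2. Force a password change at next logon via USER_INFO_3. */
            if (NetUserGetInfo(NULL, user, 3, &buf) == NERR_Success) {
                USER_INFO_3 *ui = (USER_INFO_3 *)buf;
                ui->usri3_password_expired = TRUE;
                NetUserSetInfo(NULL, user, 3, buf, NULL);
                NetApiBufferFree(buf);
            }
        }
        return 0;
    }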

Yeah, how annoying, a product that wasn't perfect in version 1.0, and had to be improved over several years of development. Boy, those fellows at Microsoft just have to do things different, I guess. Heh, those folks working on MacOS seem to have the same attitude. OS-X!? Shoulda been perfect at OS-1! Sheesh, what is the OS world coming to?

forget WinNT for the moment because it was never marketed for home consumer use

Then I assume you are forgetting Linux too, for the purposes of this discussion. (Of course you are, since Linux wasn't perfect on its first release either.)

"I saw XP benchmarks at Tom's Hardware and was not impressed. Damned if I know why, but it gets 25-50% lower 3D framerates at the same games with the same (ATI & NVidia) hardware."

Granted, if it's really as stable as Microsoft promises this time (and about half of the Windows 2000 users I know didn't have any stability problems), then that may be worth it. I get similarly curtailed framerates in Linux by making the same tradeoff, and I think it's worth it... but I'd like to know how many game players who went out to buy XP were making a conscious decision for stability over speed.

You saw Windows XP at Fry's? I'm assuming you mean you saw a demo computer running XP, and not that you merely saw the box sitting on a shelf. By your logic, I could say "I saw Linux at my friend's house and was not impressed. It was nothing but text and stuff."

Well, I have XP on my laptop, because that's what it came with. Is it particularly stable? Well, that's open to debate. No blue screens (so far), but lots of dialog boxes of the form "something really bad happened, but I'll see what I can do". I needed to reboot a few times because it got into weird states. Seems more like they kludged around problems rather than fixing them. In terms of UI and configuration, XP is slightly worse than previous consumer versions of Windows, although it looks a bit slicker. Networking was actually pretty tricky to get working. And, of course, its APIs are as mediocre as always.

But the biggest problem with XP is its rampant commercialism. Windows, other Microsoft applications, and third party applications constantly bug you for personal information, registration information, etc. And who knows what information it's sending out behind my back. And I already spent about $100 on third party utilities.

Altogether, XP is something I could do without: it runs on applications I want to run, and the software I need to run on it is not particularly high quality. The only reason I have it is because Microsoft has managed to monopolize the market so much that there are applications you simply have to use in business and that only run on Windows. Yuck.

Uptime is a measure of how long a system has been running without a reboot. Uptime generally requires stability, assuming the machine in question is actually doing something. But I could boot up a fresh, clean install of Windows 95 and (after patching the 49.7 day registry uptime counter bug) let it sit in a corner doing nothing, and the damn thing would probably keep running till the next Ice Age.
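(For the record, that 49.7-day figure lines up with a 32-bit millisecond tick counter wrapping around; quick arithmetic as a sketch:)

    #include <stdio.h>

    int main(void)
    {
        /* A GetTickCount()-style counter keeps milliseconds in 32 bits. */
        double days = 4294967296.0 / (1000.0 * 60 * 60 * 24);
        printf("32-bit millisecond counter wraps after %.2f days\n", days);
        return 0;   /* prints ~49.71 */
    }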

Stability, on the other hand, is a measure of many things. Mostly it is a measure of how well an operating system responds to instability in software. Linux is incredibly good at this; when a program on Linux crashes or has a problem, the OS steps over it and keeps right on going. Windows has been notoriously bad at this, until Windows 2000 and XP.

Now, if you re-read my message, you'll notice that nowhere did I claim that I thought Windows XP was more stable than Linux. I merely claimed that it was more stable than previous versions of Windows. Furthermore, since Windows XP, as you said, has been out for about a month now, it would be impossible (and incredibly stupid) to rate its stability by comparing the uptime of a Windows system with that of a Linux system.

To illustrate my point (that uptime does not always equal stability), back when uptimes.net was running full force, I achieved an uptime of about 155 days from a beta version of Windows 2000 running on a Pentium 166 with 64 megs of RAM, serving up lots of dynamic webpages at wonko.com [wonko.com]. In the end, I had to turn the machine off because I moved.

Now, the only reason I achieved that incredible uptime with a beta OS running on inferior hardware was that it wasn't doing a whole lot. It was just running IIS and MSSQL Server, and that was about it. Now, if I had been serving Slashdot off that box, it probably wouldn't have lasted a week. Thus, we see that uptime != stability.

I like how Windows software is described as "two strains," like a virus

Funny, but an obvious jab at the notion of GPL as "viral".

I heard that people aren't flocking like sheep to buy Windows XP

Sorry, but untrue. Windows XP sold 300,000 direct-to-consumer copies before the launch date; furthermore, OEM deals are very strong. The Register has a piece about Bill Gates "fibbing" about sales, but even the uber-skeptical Register can't deny that it is selling very briskly. The story is here. [theregister.co.uk]

It contains more graphics and junk, which means that it needs yet more powerful computers than before to accomplish the same tasks.

Yes and no. Windows XP is faster at some things and slower at others than Windows 2000. More code doesn't necessarily mean slower performance; more code often means more optimizations for special cases. Windows XP boots significantly faster, according to both MS and most anecdotal reports. Windows XP has a higher overall level of overhead, but most of that is "eye candy" which can be turned off to suit people with lower-end machines. Most PCs purchased after the dawn of the PII can run Windows XP. All that is really needed is 64MB or more of RAM and a good gig or two of disk space. I very happily tested Windows XP RC2 on a P166 with 96MB of RAM. It did great for basic apps like web browsing, email, and word processing.

Most folks I ask say they use their computers for email, casual web browsing, word processing and to run one or two other programs (usually custom medical or electronics programs, as most of my friends work in these fields).

The numbers don't lie. Most people use their computers for (a) web browsing, (b) electronic mail, (c) instant messaging and (d) digital music. The other things you mentioned are of course specific to one group of people. But most estimates show that at least 50% of the people in the US use computers just for those four things (and in many cases only one or two of them at that).

And the same folks complain that Windows is too expensive and quite frankly sucks, but they can't do anything about it because they have no choice

Sorry to say, but they are wrong. Windows is very moderately priced; MS really is the "bargain basement" of the commercial software world. Go call Sun or IBM up and ask them how much their operating environments run. Those $3k-$15k workstations really get you down, especially when comparable systems will run you $500-$1500 in the Windows/AMD/Intel world (granted, Sun and others have low-end Unix workstations, but for the most part they still cost more than their Intel/AMD/Windows counterparts and are typically much less useful in a broad sense of the word).

Face it, no matter how many alternatives there are out there, there is no choice until developers start moving to a better system.

Terribly sorry, but you are again wrong. The alternatives exist. In a general sense there is nothing you can do on Windows that you can't do in Linux, or another UNIX-esque OS. Video editing? No problem. Web design? No problem. Business suites? Linux has got it. Truly, Linux can do just about anything Windows can. Specific software packages of course don't always have a parallel, but compatibility is a feature of the software package: evaluate and choose those that offer the portability you desire. If you can't do that, chances are strong that you can run your application under Wine (I know, Wine Is Not an Emulator), DOSEMU, Bochs, Plex86 or VMware.
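To make the Wine point concrete, here's a minimal sketch (Python used purely as a launcher; "legacyapp.exe" is a hypothetical stand-in for whatever Windows-only program you're stuck with, and it assumes wine is installed and on your PATH):

    import subprocess

    # Launch a Win32 binary under Wine. "legacyapp.exe" is a made-up name --
    # substitute whatever Windows-only application you actually need to keep running.
    subprocess.run(["wine", "legacyapp.exe"], check=True)

If the app misbehaves under Wine, the heavier options mentioned above (Bochs, Plex86, VMware) trade speed for a full emulated or virtualized PC instead.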

It's difficult to make that move, but it's happening slowly but surely.

So in fact there is choice. You just said there wasn't. I am confused by your statements.

I've blown Windows 98 (the latest one I have and only because it came preinstalled) off my hard drives on five computers and installed FreeBSD and various Linux distros.

Again, contradictions. You made the switch. Therefore choice and alternatives exist? How can this be? I am confused.

I've helped some of my friends get started with alternatives and once it works, they love it. That's the only way to fight the Windows virus. Oh well...

Again, that's three contradictions to your "no choice" statement. Please explain to me further the meaning of "no choice".

>Go call Sun or IBM up and ask them how much their
>operating environments run

Last time I looked [sun.com], Solaris cost absolutely nothing. You can download ISO images of the latest release from Sun, burn them yourself, and run it without any license fees, etc, at all on any Sun box with less than eight CPUs, no matter what you're using the machine for (business or personal). If you want a development environment, you can get the Forte compiler suite and a 30-day license (which can be renewed indefinitely) as a free download, or you can get all the GNU goodies at Sunfreeware [sunfreeware.com]. When it comes to applications, the StarOffice suite is also a free download. All you have to pay for is the machine itself, electricity to run it, and an Internet connection for the downloads.

Well, I did look, and the OS is indeed free (well, mostly: not free the way Linux is free, but free as in binaries, which is about what MS means when it says "free").

The 30-day license thing, you have to admit, is majorly hokey. However, all in all, I think my point holds: you are talking major money from IBM or Sun (or other vendors, of course: HP, to name one) for a UNIX workstation. Sun's cheapest Solaris workstation, designed to compete with Intel desktops, is $1000 with no monitor and 128MB of RAM. To take it to a reasonable level of performance you are looking at $2,450 to $3,950, still with no monitor. Those are the Blade series (again, the lowest end). A quick look at the Ultra series workstations reinforces what I originally thought: lots of money for UNIX workstations.

But thank you, I was unaware that Sun is now basically giving Solaris away for free.

Where do you get $2,450 to $3,950? You must be looking at Sun's RAM prices, which are outrageous. 512MB modules for the Blade 1000 are $58.49 right now from Crucial, which means you can get 2GB of RAM for the machine for roughly $235. For disks, you can use normal IDE drives, or add a commonly available PCI SCSI card (Symbios 8751SP) for around $50 and then use SCSI devices.

I've got a SunBlade 1000 (their UltraSPARC-III based "big daddy" workstation) on my desk here at home, and a Blade 100 (with an Expert3D-Lite, an additional $1K graphics option) at work. For day-to-day use (windowmaker, Mozilla, SSH, etc.) they're "just about" the same for the tasks I do all the time, despite the 400MHz difference in CPU speed and 256KB of L2 cache versus 8MB.

Sun has been giving Solaris away for free for a little over two years now, AFAIK.

Yes, I was looking at the hardware according to Sun. You are so right about it being very pricey: the RAM and HDDs they suggest as options put machines way into the upper range of what people spend on x86 hardware.

The Register has a piece about Bill Gates "fibbing" about sales, but even the uber-skeptical Register can't deny that it is selling very briskly. The story is here. [theregister.co.uk]

You should work on your reading comprehension, friend. What that article is implying, and not surprisingly so, is that MS is dropping FUD about XP sales... that they are changing their tune for no good reason when they MUST assuredly know exactly how many they've sold, through each channel, and EXACTLY what the numbers are... the quotes from M$ have been contradictory and vague... more likely a sign that XP isn't doing as well as the marketroids would like the public to believe.

I agree. I find that a nicely tuned install of Linux is pretty high-performing, especially with a hardware-accelerated video subsystem.

About your uptime, though: first off, WinXP isn't a server OS. It is specifically tuned not to run IIS, mail, and other services; it is not, nor should it be construed to be, a server OS. But still, your point holds. Linux makes a nice server.

More than anything, Windows users want a few things: they want their machines to boot up quickly, they want them to be easy to use, and they want them to make it through the business day without crashing. That's it, really. That's all people really want from a desktop OS. I think both Win2k and WinXP meet that goal.

The only thing is that they had initially hoped to have found rather more than one by now!

Some of their problems have been related to the fact that their team is very small. So, it is possible to make things too cheap.

HETE's operations team is indeed too small. HETE-2 was ready for launch in January 2000 (it was integrated with the rocket!), but after the Mars lander failure NASA got cold feet and ordered it shipped back to MIT for additional testing. HETE-2's operations were also funded below the HETE project's minimum estimate of operations costs. Since people without long-term support need to find new jobs, this combination meant that several people left for new employment either before launch (having already lined up new jobs before the delay) or shortly afterward. While a reduction in the team's size post-launch was intended, what happened was too drastic.
This definitely made it harder.

It would have been fine if they had no time constraints, but it seems that spending most of the first year essentially in a kind of engineering mode is a bad thing.

Part of the trouble is that HETE needed to be well calibrated before it could generate useful results. A mission like Chandra could do a lot of interesting stuff (especially pretty pictures) before its calibration was finished. Astrometric calibration takes time, however.

In any case, this extra time has not added much to the overall mission cost.

Is there any way that the community (esp. NASA) could have helped bring things online sooner?

A launch in January 2000, when HETE was ready, would have helped. Adequate funding of the operations phase would have helped.

Patience also would have helped, I think. The HETE team, NASA, and the community were all impatient for results. This meant that there was an emphasis on working through the inevitable operational problems rather than taking the time to fix them. A team that is too small cannot do both in parallel. Once some of the more time-consuming problems had been fixed, positive feedback set in: operations became less labor-intensive, which meant more time was available to fix problems.

OK... you're not the only one who has been generally happy with their cable service. I've been on Roadrunner since just before they put the 2Mb/s cap on downloads. Yeah, sure, my download speeds dropped when they put the cap on, but they've been pretty stable since then. This may have more to do with the city neighborhood I live in (lower-middle-income families, with probably not a lot of other people using cable modems), but the reliability has been good. I rarely see outages in service.

Funny thing is the difference I see in download speeds between the Linux and Windows computers I have on my network. I run everything through a Linux NAT box. Nothing out of the ordinary -- a P200 with a pair of plain old 10/100 PCI NICs -- but I can regularly pull 250K bytes/sec through it if I go somewhere like www.kernel.org that has screaming fast servers.

However, when I run the same download on any one of the Windows computers behind the firewall, all of which have faster processors than my main Linux box, the best they can squeak out is something like 50K bytes/s. Same site, same file, same firewall, similar NICs, and I get about 1/10 the effective download speed.