It's the reason I moved from Maxtor to Seagate (aside from the fact that the failure rate I was getting on Maxtor drives was nearing 100% within 3 years). Now I'm playing around more with WD, Hitachi and others, since I got a 1.5TB Seagate drive that kept having lockup issues due to bad firmware. I've questioned their quality control ever since and have actively avoided Seagate, without regret.

I can agree and disagree with you to some extent. I disagree to the extent that I myself use CID spoofing with Skype.

Why? I use my cell phone for business all the time, and about 95% of the time I'm using it for business, I'm at my computer. I will generally opt to use Skype at 2.4 cents per minute instead of my cell phone at stupid long distance rates, but I want to make sure that clients know it is me calling. I have Skype spoof my cell phone's caller ID so that when I call someone, whether by phone or by Skype, they know it is me, and I get away with significantly cheaper long distance rates, especially when I'm roaming outside the country.
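To put the savings in perspective, a back-of-the-envelope comparison: the 2.4 cents/minute Skype rate is from above, while the roaming rate is a made-up placeholder, since actual roaming rates vary by carrier and plan.

```python
# Rough cost comparison: Skype long distance vs. cell roaming.
# The $0.024/min Skype rate is quoted above; the $1.45/min roaming
# rate is a hypothetical example, not any carrier's actual rate.
SKYPE_RATE = 0.024   # USD per minute
ROAMING_RATE = 1.45  # USD per minute (hypothetical)

def call_cost(minutes, rate_per_min):
    """Cost of a call billed at a flat per-minute rate."""
    return minutes * rate_per_min

minutes = 30
print(f"{minutes} min via Skype:      ${call_cost(minutes, SKYPE_RATE):.2f}")
print(f"{minutes} min while roaming:  ${call_cost(minutes, ROAMING_RATE):.2f}")
```

Even at modest roaming rates the gap compounds quickly over a month of business calls.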

What I am kind of hoping is that the updates fix some bugs in the drivers. I've had a number of occasions where the existing drivers caused things like BSODs and crashing. Most notably, I had a full hardware crash while using the webcam within Skype. There is some room for improvement in the drivers; I just hope this update addresses it.

This has been the argument that I've seen to justify getting a GeForce over a Quadro in CGI. A few points:

1) The memory system on the Tesla/Quadro is much more rigorously tested and held to a much higher standard of quality than the GeForce's. There is research evidence to support this, and I have plenty of anecdotal evidence of my own as well. NVIDIA doesn't give a crap about the memory in a GeForce, because a miscolored pixel for 1/60th of a second doesn't matter. A soft or hard error in GPU memory during a scientific calculation, on the other hand, can be catastrophic. This is also a reason NVIDIA is adding ECC memory to the Tesla C2050 and C2070 GPUs.

2) Some GeForce GPUs will have major threading errors after a few minutes of hard running. I've experienced this with a dgemm torture test on a Tesla and 2 GeForce GTX 285 GPUs in a single system. Give the test about 5 minutes on all 3 GPUs simultaneously and the GeForce GPUs will crash out at nearly the same time. The Tesla will continue the test until completion (which can take about a day or so).
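The torture test described above is essentially repeated large double-precision matrix multiplies with result checking. Here's a minimal CPU-side sketch of the idea using NumPy as a stand-in (the actual GPU test ran dgemm via CUBLAS; the matrix size and duration here are arbitrary):

```python
import time
import numpy as np

def dgemm_torture(n=512, seconds=5.0, tol=1e-9):
    """Repeatedly run a double-precision matrix multiply and compare
    against a reference result; any mismatch indicates a silent
    memory or compute error. NumPy stands in for CUBLAS dgemm here."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    reference = a @ b          # first run is the reference answer
    iterations, errors = 0, 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        c = a @ b
        if not np.allclose(c, reference, atol=tol):
            errors += 1        # soft error: same inputs, different output
        iterations += 1
    return iterations, errors

iters, errs = dgemm_torture(n=256, seconds=1.0)
print(f"{iters} iterations, {errs} mismatches")
```

On a healthy part, `errors` stays at zero no matter how long it runs; the GeForce failures I saw were crashes and mismatches after sustained load, not at the start.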

3) Bandwidth starvation means the cards are getting less host-to-GPU bandwidth than they were designed for. On this FASTRA machine, only a few slots are full x16 Gen2, and each of those ends up being shared across 2 GPUs, making it effectively x8 Gen2 per GPU. The other slots are even worse: an x8 Gen2 link coming in that has to be shared between a pair of GPUs. Technically, you could run a Tesla GPU in an x1 Gen1 slot if you had the right adapter, but the time it takes to transfer data from host memory to GPU memory may end up negating any performance benefit you'd see from using the GPU, unless your algorithms are almost completely compute bound.
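To put rough numbers on the starvation, here's a back-of-the-envelope sketch using the commonly quoted per-lane figures after 8b/10b encoding overhead (about 250 MB/s per lane for Gen1, 500 MB/s for Gen2); the 1 GB transfer size is just an example:

```python
# Effective PCIe bandwidth per GPU and the resulting transfer times.
# Per-lane rates are the usual post-8b/10b figures:
GEN1_PER_LANE = 0.25  # GB/s per lane, PCIe Gen1
GEN2_PER_LANE = 0.5   # GB/s per lane, PCIe Gen2

def link_bandwidth(lanes, per_lane, gpus_sharing=1):
    """Host-to-GPU bandwidth each GPU sees on a (possibly shared) link."""
    return lanes * per_lane / gpus_sharing

def transfer_time(gigabytes, bandwidth):
    """Seconds to move a buffer of the given size over the link."""
    return gigabytes / bandwidth

# x16 Gen2 shared by two GPUs -> effectively x8 Gen2 each
shared_x16 = link_bandwidth(16, GEN2_PER_LANE, gpus_sharing=2)
# worst case mentioned above: a single x1 Gen1 link
x1_gen1 = link_bandwidth(1, GEN1_PER_LANE)

print(f"x16 Gen2 / 2 GPUs: {shared_x16:.1f} GB/s each, "
      f"1 GB in {transfer_time(1.0, shared_x16):.2f} s")
print(f"x1 Gen1:           {x1_gen1:.2f} GB/s, "
      f"1 GB in {transfer_time(1.0, x1_gen1):.1f} s")
```

A 16x difference in transfer time is why only heavily compute-bound kernels can tolerate the x1 Gen1 case.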

A couple of years ago, I had a compute rig using 6 Tesla C870 GPUs, and even that setup was starting to get bandwidth starved, as all GPUs shared a single x8 Gen1 link aggregated out to 6 x4 Gen1 slots (using adapters). I had to increase the output data frame size on an MD simulation in order to have all cards performing equally; with smaller frame sizes, the first 4 GPUs were finishing their computations before the last 2 GPUs got their data.

It is kind of unfair to generalize commercial clusters vs. homebrew in that manner. Many institutions that purchase commercial clusters from HP/Dell/SGI/etc. opt out of InfiniBand or 10GbE. The logic behind it: when the vendor says that for $100,000 they can upgrade to IB, the purchaser goes back and says, "For $100,000, I'll just get more cluster nodes instead." This is probably a big reason that gigabit Ethernet accounts for 52% of the Top500 list of supercomputers.

If this is achieved for a personal aircraft, I'd be very much on board with this. My only beef is the addition of things like parachutes and air bags. I don't really care too much for those features, as I might be able to get TKS de-icing systems installed for similar weight for those IFR flights in the great white north. Or if I don't have a TKS system, maybe a little extra payload capacity so I can actually fit 4 passengers and fuel without going over gross weight.

Just because I can appreciate and judge great cuisine doesn't mean I can make it, yet the feedback these judges provide is the cornerstone of a chef's continuous improvement. People who use and judge interfaces in the field are usually a great resource for finding ways to improve them. If it is a hit to the interface designer's ego that some interface element isn't where the users would like it to be, suck it up. Make it better; always improve it.

I like the comparison you make here, but in reality it is even worse. For the most part, a $150,000 plane can barely take my family (2 adults, 2 small children) and some luggage with a full load of fuel (legally). I also burn about 22 gallons of fuel on a 206 nm flight.

- Support for ics calendar files in Mail.app
- More Bluetooth functions, like sending contacts to other phones via Bluetooth, or being able to interact with other peripherals
- Support for unlocking, like many other sane GSM phones. I want to use my AT&T SIM card in the US, but I'm locked to my Rogers SIM in Canada.

There are a number of schemes I ended up using in naming systems at my workplace. There really wasn't any rhyme or reason to how I named our machines; I just went with what sounded cool, but it also seemed that I had a tendency to have at least 2 system names related.

For example, Excalibur and Dragoon. Genesis and Revelation. Those, I guess, were the only two system pairs that were somewhat related. One system was named Severn simply because I recalled a Red Hat distro being named Severn and thought it sounded cool. Another, now dead, system was named Velocity because not only did it sound neat, it was also a reflection of the type of acoustic work it was designed to perform.

Excalibur was, I think, the only system I had a real reason to name as I did. Being one of the coolest and most sought-after swords of legend, it only seemed fitting to give the name to the most powerful workstation in the office (which at the time had dual Opteron 250s). When I ended up getting my PowerMac Quad G5, the next step was simply to call it Titan. That name ended up living on as one of the product names for my HPC startup company.

Just because the aeronautical information is public doesn't necessarily mean it costs nothing to deliver. Jeppesen is one of the primary providers of aviation databases, and its data is used in pretty much all Garmin handheld and panel-mount GPS units. I can't say that it justifies the cost as much as FAA certification does, but the database updates alone cost money every 56 days, should you choose to update them that frequently.

I don't necessarily fly that often, but I have a $1200 handheld GPS that costs about $50 to update the Jeppesen aviation database, an additional $50 to update the obstacle database, and $150 to update the terrain database (which doesn't need updating anywhere near as often as the others). For IFR GPS units, I think it is about $150 to update the Jeppesen database.
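For a sense of how those prices stack up over a year: the 56-day revision cycle works out to roughly 6.5 updates per year. This sketch assumes you pay the handheld prices quoted above on every cycle (terrain is left out since it's updated far less often):

```python
# Rough annual cost if you bought every 56-day revision, using the
# handheld prices quoted above. Terrain updates are rarer, so they
# are excluded from the per-cycle total.
CYCLE_DAYS = 56
cycles_per_year = 365 / CYCLE_DAYS   # about 6.5 revisions a year

aviation_db = 50   # USD per update (Jeppesen aviation database)
obstacles   = 50   # USD per update (obstacle database)

annual = (aviation_db + obstacles) * cycles_per_year
print(f"{cycles_per_year:.1f} cycles/year -> about ${annual:.0f}/year "
      "for aviation + obstacle data alone")
```

In practice most VFR pilots update far less often, which is why the databases are sold per revision rather than as a forced subscription.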