3/28/2005

Many Processors in One

Dual-core processors and 64-bit CPUs are grabbing headlines, but make way for the multiprocessor chip. Designed by IBM, Sony, and Toshiba, the new Cell chip contains a main PowerPC-based core microprocessor and up to eight additional processors.
The new CPU, which the companies claim can offer up to ten times the performance of Intel and AMD chips, has a pushpin-size 90-nanometer design. It uses 234 million transistors and can run at speeds of over 4 GHz. "It could be the fastest mainstream processor available when they introduce it next year," says Microprocessor Report editor-in-chief Kevin Krewell.
Sony is building the Cell chip into its next-generation PlayStation. IBM is hoping to see the Cell in its Linux servers. And Toshiba, with its big investment in HDTV and digital video recorders, is hoping to use the Cell processor in televisions and consumer-electronics products.
Indeed, say analysts, the eight 128-bit processors, called synergistic processing elements (SPEs), make the Cell ideal for managing high-bandwidth video. "The SPE units can process things like MPEG-2 encoding and decoding, as well as HDTV and audio," explains Krewell.
There's even speculation that Apple could port its OS to the Cell. "The only thing you probably won't see running on it," says Krewell, "is Windows."

3/26/2005

The Source of Google's Power

Much is being written about Gmail, Google's new free webmail system. There's something deeper to learn about Google from this product than the initial reaction to its features, however. Ignore for a moment the observations about Google leapfrogging its competitors with more user value and a new feature or two. Or Google diversifying away from search into other applications; it has been doing that for a while.
No, the story is about seemingly incremental features that are actually massively expensive for others to match, and the platform that Google is building which makes it cheaper and easier for them to develop and run web-scale applications than anyone else.

Let's make some guesses about how one might build a Gmail. Hotmail has 60 million users. Gmail's design should be comparable, and should scale to 100 million users, though it will only have to support a couple of million in the first year. The most obvious challenge is storage. You can't lose people's email, and you don't ever want to be down, so data has to be replicated. RAID is no good: when a disk fails, a human needs to replace the bad disk, or there is risk of data loss if more disks fail. One imagines the old ENIAC technician running up and down the aisles of Google's data center with a shopping cart full of spare disk drives instead of vacuum tubes. RAID also requires more expensive hardware -- at least the hot-swap drive trays. And RAID doesn't handle high availability at the server level anyway.

No. Google has 100,000 servers. [nytimes] If a server/disk dies, they leave it dead in the rack, to be reclaimed/replaced later. Hardware failures need to be instantly routed around by software.

Google has built their own distributed, fault-tolerant, petabyte filesystem, the Google Filesystem. This is ideal for the job. Say GFS replicates user email in three places; if a disk or a server dies, GFS can automatically make a new copy from one of the remaining two. Compress the email for a 3:1 storage win, then store each user's email in three locations, and the raw storage need is approximately equivalent to the user's mail size.
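The arithmetic behind that claim is worth making explicit. A minimal sketch, using only the 3:1 compression and three-way replication figures from the paragraph above (the function name and units are mine, not Google's):

```python
# Back-of-the-envelope storage math: compress mail 3:1, then store
# three replicated copies of the compressed data.

def raw_storage_needed(mailbox_gb, replicas=3, compression_ratio=3.0):
    """Raw disk consumed per user, in gigabytes."""
    compressed = mailbox_gb / compression_ratio
    return compressed * replicas

# A 1 GB mailbox compressed 3:1 and stored in three places consumes
# about 1 GB of raw disk -- the replication comes out roughly "free".
print(raw_storage_needed(1.0))
```

The compression ratio and replica count cancel at these particular values, which is what makes the "raw storage roughly equals mail size" observation work.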

The Gmail servers wouldn't be top-heavy with lots of disk. They need the CPU for indexing and page view serving anyway. No fancy RAID card or hot-swap trays, just 1-2 disks per 1U server.

It's straightforward to spreadsheet out the economics of the service, taking into account average storage per user, cost of the servers, and monetization per user per year. Google apparently puts the operational cost of storage at $2 per gigabyte.
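That spreadsheet might look something like the following sketch. Only the $2-per-gigabyte operational cost comes from the text; the revenue and mailbox-size numbers are invented placeholders:

```python
# Toy per-user economics model for a free webmail service.

def annual_margin_per_user(storage_gb, cost_per_gb=2.0, revenue_per_user=10.0):
    """Yearly monetization minus yearly operational storage cost."""
    return revenue_per_user - storage_gb * cost_per_gb

# With 1 GB stored at $2/GB and a guessed $10/user/year in ad revenue,
# each user nets $8/year before other costs.
print(annual_margin_per_user(1.0))
```

The point of the model isn't the exact numbers; it's that a lower cost_per_gb lets you offer far more storage per user than a competitor before the margin goes negative.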

Here's an anecdote to illustrate how far Google's cultural approach to hardware cost departs from the norm, and what that means as a component of their competitive advantage.

We had engineers that could imagine algorithms that would give marginally better search results, but if the algorithm was 10 times slower than the current code, ops would have to add 10X the number of machines to the datacenter. If you've already got $20 million invested in a modest collection of Suns, going 10X to run some fancier code is not an option.

Any sane ops person would rather go with a fancy $5000 server than a bare $500 motherboard plus disks sitting exposed on a tray. But that's a 10X difference in the cost of a CPU cycle. And this frees up the algorithm designers to invent better stuff. Without cheap CPU cycles, the coders won't even consider algorithms that the Google guys are deploying. They're just too expensive to run.

Google doesn't deploy bare motherboards on exposed trays anymore; they're on at least the fourth iteration of their cheap hardware platform. Google now has an institutional competence building and maintaining servers that cost a lot less than the servers everyone else is using. And they do it with fewer people.

Think of the little internal factory they must have to deploy servers, and the level of automation needed to run that many boxes. Either network boot or a production line to pre-install disk images. Servers that self-configure on boot to determine their network config and load the latest rev of the software they'll be running. Normal datacenter ops practices don't scale to what Google has.

Competitive Advantage

Google is a company that has built a single very large, custom computer. It's running their own cluster operating system. They make their big computer even bigger and faster each month, while lowering the cost of CPU cycles. It's looking more like a general purpose platform than a cluster optimized for a single application.

This computer is running the world's top search engine, a social networking service, a shopping price comparison engine, a new email service, and a local search/yellow pages engine. What will they do next with the world's biggest computer and most advanced operating system?

3/20/2005

SnailMail 2.0

Any eighth grader who has finished Introductory Geometry can tell you that the shortest distance between two points is a line, but any postal worker who has hauled a mailbag along a 10-kilometer route can tell you that figuring out the shortest distance between 400 or more addresses is nearly impossible. Software aimed at doing just that recently made its commercial debut, in Denmark, with the hope of shortening mail delivery times and slashing postal-service costs.
The software, developed by Paris-based company Eurobios, takes a novel approach to what is known as the “traveling-salesman problem,” which has stymied mathematicians for decades. The central challenge: adding a single new address multiplies the number of possible paths by the total number of addresses, so calculating an ideal route quickly becomes untenably time-consuming. (At present, using a standard PC to compare every possible route spanning just 100 addresses would take years.) Computer scientists have developed various programs that solve the traveling-salesman problem for limited research purposes. But according to Dave Cliff, a complexity expert at Hewlett-Packard’s Bristol laboratories in England, the vast scale of postal systems meant that “until recently it wasn’t worth looking at computer methods, because the processing power wasn’t there.”
Indeed, a single regional mail-sorting area can be responsible for some 30,000 postal addresses—a number that would have hitherto defeated calculation, explains Cliff. Eurobios’s software copes with the challenge in part by reducing the number of possible routes using heuristics, or rules of thumb, to rule out the impractical options. For example, unless a street is very long, the system makes the assumption that mail going to all addresses on one side of the street will be delivered in one trip rather than multiple trips. Then, says Eurobios’s Vince Darley, who created the program, the software employs an iterative technique to optimize the routes. It starts off with a random set of routes and then makes a series of changes to them. By evaluating the outcome after each change and keeping those changes that shorten the route, while rejecting most of those that do not, the system quickly converges on a near-optimum solution.
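The iterative technique Darley describes is, in spirit, a local-search loop: propose a small change to the route, and keep it only if the route gets shorter. Here is a minimal sketch using random 2-opt segment reversals as the change move (my choice; the article doesn't say what changes Eurobios's system actually makes):

```python
import math
import random

def route_length(route, points):
    """Total length of a closed tour visiting the points in order."""
    return sum(math.dist(points[route[i]], points[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def improve(points, iterations=20000, seed=0):
    """Start from a random route and keep changes that shorten it."""
    rng = random.Random(seed)
    route = list(range(len(points)))
    rng.shuffle(route)
    best = route_length(route, points)
    for _ in range(iterations):
        i, j = sorted(rng.sample(range(len(route)), 2))
        # 2-opt move: reverse the segment between two random positions.
        candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
        length = route_length(candidate, points)
        if length < best:  # keep only changes that shorten the route
            route, best = candidate, length
    return route, best

addresses = [(random.random(), random.random()) for _ in range(40)]
tour, dist = improve(addresses)
print(f"{len(tour)} stops, tour length {dist:.2f}")
```

A production system would combine a loop like this with the street-level heuristics described above, so the search runs over a pruned set of plausible routes rather than all of them.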
In February, Post Danmark, the Danish postal service, began using the Eurobios software to determine the shortest routes for postal workers on the Danish island of Fyn. In trials, Eurobios’s system has shown it can reduce the time it takes postal workers to deliver the mail each day by up to 10 percent. At the same time, the software cuts the distance that the delivery people travel by as much as 20 percent. That might not sound like much, but a typical European postal organization has between 10,000 and 50,000 delivery people, says Darley, which is one of the reasons that so-called last-mile distribution accounts for as much as 70 percent of postal systems’ total expenses. Emptying each worker’s mailbag just a few minutes faster could translate into millions of euros in annual savings for even one country, Darley says.
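To see how "a few minutes faster" per mailbag reaches millions of euros, here is a rough scaling of the article's figures; the wage and workday numbers are invented for illustration only:

```python
# Rough annual-savings arithmetic for a national postal service.

WORKERS = 30_000        # mid-range of the 10,000-50,000 quoted above
MINUTES_SAVED = 25      # roughly 10% of a ~4-hour delivery round
WORKDAYS = 250          # assumed delivery days per year
EURO_PER_MINUTE = 0.30  # assumed fully loaded labor cost per minute

annual_savings = WORKERS * MINUTES_SAVED * WORKDAYS * EURO_PER_MINUTE
print(f"~{annual_savings / 1e6:.0f} million euros per year")
```

Even with conservative assumptions, the headcount multiplier makes small per-route gains add up quickly, which is why last-mile delivery dominates postal costs.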

Yahoo tests blend of blogging

Yahoo Inc. is preparing to introduce a new service that blends several of its Web site's popular features with two of the Internet's fastest growing activities -- blogging and social networking.

The hybrid service, called "Yahoo 360," won't be available until March 29, but the Sunnyvale-based company decided to announce the product late Tuesday after details were leaked to The Associated Press and other news outlets. Yahoo is testing the service with a small group of employees, some of whom have been working on the project since last year, when the product was operating under the code name "Mingle."

The service is designed to enable Yahoo's 165 million registered users to pull content from the Web site's discussion groups, online photo albums and review section to plug into their own Web logs, or blogs, the Internet shorthand used to describe online personal journals.

Yahoo also is making it easier for the service's users to connect with others who share common interests and friends -- a practice known as social networking. Participants can either choose to open their blogs to the entire world or restrict access to people invited through e-mail.

"We heard from people that they have a strong desire to stay close to the people who are important to them, but at the same time they didn't want to feel like they were exposing themselves online," said Julie Herendeen, Yahoo's vice president of network products.

The service represents Yahoo's effort to tap into the popularity of blogs and social networking sites. Expanding into social networking and blogging marks another significant step in Yahoo's push to make its Web site even more essential to the personal and professional pursuits of its users.

The service is also meant to encourage Yahoo's most frequent visitors to create and share more content, a process the company hopes will attract even more people to its site. If it can increase its audience's size and give visitors more reasons to stick around longer, Yahoo would become an even more attractive marketing vehicle for advertisers.

When it becomes available later this month, Yahoo 360 initially will be restricted to users invited by the company. Those early participants will then be able to invite others.

3/13/2005

UtilityGeek

A great site with a vast collection of diagnostic tools and utilities for your
PC. The tools and utilities are divided into more than 25 categories, and you can
also discuss your problems with other users on the site's forum.

The site also publishes the latest technology articles, which can be accessed
directly from the main page.

Tools range from system tweakers and registry utilities to benchmarking, BIOS and

3/09/2005

Plug-In Enables PDF Search

The Google Desktop utility went live Monday, after a beta cycle of about six months. For the Acrobat faithful who had wondered whether the 1.0 release would include support for PDF, the answer is no—but ScanSoft has brought to market a beta plug-in called OmniPage Search Indexer that not only supports PDFs containing text, but can also OCR and index image-based PDFs with scanned text and return the results on a locally served Google-style page.

"We're very pleased to be one of the first developers to work with Google and their new API to enable this," said Robert Weideman, senior vice president of marketing and product strategy for ScanSoft's Productivity Applications Division.

Weideman added that OmniPage Search Indexer also handles other image file formats such as BMP, MAX and TIFF. "We see this as an important event [for ScanSoft] and one we're evaluating, should we decide to support OmniPage Search Indexer with other desktop search products."

That's likely to happen, Weideman said, because there's a need. Most companies that offer desktop search utilities—like Google, Yahoo!, Ask Jeeves and Microsoft Corp.'s MSN—live in the Internet search space, where there is little call for image-based document search, as most companies don't post faxes and scans of paper documents to the Web. So search vendors don't develop tools to search them.

3/06/2005

WinFS on XP

Microsoft may not be willing to talk file-system futures, but it is working to back-port its WinFS file-system technology to Windows XP, the same way that it is doing with its Windows presentation and communications subsystems, according to company officials. The acknowledgement is significant, given that Microsoft has been reticent to offer any details on WinFS since the company decided in August to cut the WinFS information storage and retrieval feature from both the client and server versions of Longhorn.

A year ago, WinFS was slated to be one of the four main pillars of Longhorn.
The other three, which are still set to be part of the next-gen Windows release, are the "Avalon" presentation subsystem; the "Indigo" communications subsystem; and the "Fundamentals" technologies that will improve Windows performance, security and reliability.

3/02/2005

Intel back with a bang

The last couple of months have belonged to AMD (Advanced Micro Devices) and its innovations. AMD took the initiative with desktop processors capable of running 64-bit applications, and later it practically dominated Intel in the speed game with processors faster than the fastest Intel parts available. AMD has also taken the initiative in multicore processors, demonstrating a working dual-core Athlon 64 that it plans to bring to the retail market in the second half of this year.

However, Intel seems to have returned to the main scene. The company kicks off its developer conference in San Francisco this week. Critics say the conference will see the sleeping giant wake up, with announcements and releases that could put it back on the front pages of newspapers and magazines around the world.

Reports say that Intel executives will release details and launch products for laptops, media-center PCs, consumer electronics, and computers with more than one processor core. Some of these announcements could prove quite a challenge for AMD, which has seen its share price take a hit in recent days.

With its latest announcements, Intel has practically caught up with every recent AMD product release. In addition, AMD is in some trouble of its own with its bottom line: it recently reported a surprise $30 million loss, which market analysts attribute to poor sales of an AMD memory product used in digital cameras and other consumer devices.