The same day, another veteran chip designer at ISSCC told EE Times he knows at least two or three circuit designers Google has hired over the last year or so.

A key engineer behind the Hewlett-Packard Moonshot server moved to Google six months ago. Partha Ranganathan "is currently at Google designing their next-generation systems," according to his website. At HP he worked on systems that can accommodate a wide range of Xeon, Atom, and ARM processors.

Google is exploring whether or not to design its own ARM server chip, according to a Bloomberg report from December based on a single source. The search giant has not made a decision on the project yet, the report said, noting a job posting for a hardware engineer updated in December.

More evidence of Google's intentions can be found in its online job postings.

On January 28, Google updated an opening for an "ASIC top level design engineer." The person is responsible for "creation and delivery of top-level RTL for ASIC and SOC projects."

A separate posting updated January 20 listed an opening for a CAD engineer to "lead the overall IC, ASIC, and/or Chip CAD platforms for multiple design projects." The engineer will support custom EDA flows and install third-party IP blocks and design kits.

The job postings suggest Google may still be fairly early in establishing a deep and broad semiconductor design capability. The postings all include the following boilerplate text, making it clear the chip design efforts are for Google's datacenter systems:

Our computational challenges are so big, complex and unique we can't just purchase off-the-shelf hardware, we've got to make it ourselves. Your team designs and builds the hardware, software and networking technologies that power all of Google's services. You develop from the lowest levels of circuit design to large system design and see those systems all the way through to high volume manufacturing.

Google should be careful about getting into chip design, as it's not their forte. They're good at software, and they should think twice before getting this deep into the hardware business. In the hardware business, the losses can be huge.

If you look at high end blades in some big database systems, they're combining off-the-shelf multi-core x86 chips with FPGA-based hardware to accelerate distributed database processing. Google is bound to have problems that can be solved in a similar fashion.

But they're also really big, and the costs of FPGAs (parts, power, and speed limits) may be suggesting that custom silicon is the answer. And that kind of thing gets even more interesting if you build it into your CPUs, saving power, eliminating any bus or communications bottlenecks between the two processor areas, etc.

Or maybe it's just plain old ARM chips they can buy from multiple sources.

If you consider all the things Google is investing in besides their bread & butter search & data centers, there are lots of reasons for them to have an in-house IC design organization. I agree, this small team is just the tip of the iceberg.

It's those deep learning algorithms that could take Google into the future with AI and all that comes with it. They already use a broad range of algorithms for their searches. Now with DeepMind and with parallel processing cores, possibly running on silicon that they themselves make, the sky is the limit.

@rick I don't think you can accelerate MapReduce itself much, only the operations it performs, which change by application.
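To illustrate the point (a toy sketch, not anything Google has described): the MapReduce framework itself is fixed plumbing that is the same for every job, while the map and reduce functions are the application-specific parts that an accelerator would actually have to target.

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Generic MapReduce skeleton. This framework code never changes
    between jobs, so there is little here for custom hardware to speed up."""
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):       # application-specific work
            intermediate[key].append(value)
    return {key: reduce_fn(key, values)         # application-specific work
            for key, values in intermediate.items()}

# Word count: wc_map/wc_reduce are the pieces that change per application,
# and they are the only part an ASIC or FPGA could meaningfully accelerate.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(key, values):
    return sum(values)

counts = map_reduce(["to be or not to be"], wc_map, wc_reduce)
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```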

Google uses a variety of algorithms for search. Some of those are machine learning and especially deep learning algorithms, which are quite new and are something of a breakthrough in artificial intelligence. For those, ASICs could become cheaper and lower power than GPUs, see [1]. Those same algorithms are also useful/critical in Glass, robots, phones, and other places that require AI.
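As a rough illustration of why deep learning suits fixed-function silicon (my toy example, not any real design): inference is dominated by one extremely regular operation, the multiply-accumulate, repeated across every neuron of every layer, which is exactly the kind of workload an ASIC can hard-wire.

```python
def dense_layer(inputs, weights, biases):
    """One fully connected layer. Nearly all the runtime is spent in the
    multiply-accumulate inner loop, the operation an ASIC would hard-wire."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        acc = bias
        for x, w in zip(inputs, neuron_weights):
            acc += x * w                    # the MAC that dominates the work
        outputs.append(max(acc, 0.0))       # ReLU activation
    return outputs

# Two neurons over three inputs
y = dense_layer([1.0, 2.0, 3.0],
                [[0.1, 0.2, 0.3], [-1.0, 0.0, 1.0]],
                [0.0, 0.5])
# y is approximately [1.4, 2.5]
```

A GPU runs this loop on programmable cores; an ASIC can lay it out as a fixed array of MAC units, which is where the cost and power advantage would come from.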

Another option is cache server (memcached) acceleration. Recently FPGAs have shown great promise there, and an ASIC could do better.
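For context (a toy sketch in software, with a made-up `ToyCache` class): a memcached-style server's hot path is little more than a hash-table probe plus network I/O, which is simple and regular enough to be a good candidate for hardware offload.

```python
class ToyCache:
    """Minimal memcached-style key-value store. The entire hot path is a
    hash-table probe, which is why FPGA/ASIC offload looks attractive."""
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)   # the lookup an accelerator would serve

cache = ToyCache()
cache.set("user:42", b"profile-bytes")
hit = cache.get("user:42")            # b"profile-bytes"
miss = cache.get("user:99")           # None
```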

There are also other search algorithms that could benefit, but accelerating those through hardware is quite an old idea (and one could use an FPGA/GPU), so we should ask: why now?

Right, it could serve many masters there. And like others have suggested, Google's group might best be used to set up the architectures (HW, FW, SW) and then partner with providers to implement their visions. Some of those could be proofs of concept, others released as open source, others kept as proprietary, though I think the last segment would be small. The more Google can get their ideas used by the world (thus building scale and driving down costs), the more ads they can sell into the world.