Hadoop & Business Intelligence

The group at Yahoo! that I came from was using Hadoop for data analytics and data warehousing. We had something like 100,000 web servers across the world, and once we collected data from across all these servers, we dumped it into Hadoop, which became the place where we stored all of the data, instead of traditional network storage.

Our reasoning for doing that was a matter of economics, given the quantity of hardware. Hadoop let us scalably process that data, clean it up, and normalize it so we could pass it along to the systems that needed it.
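That "clean it up and normalize it" step is typically a mapper job. In Hadoop Streaming, a mapper is just a program that reads raw records from stdin and writes cleaned ones to stdout. Here is a minimal sketch; the log-line format is hypothetical, not Yahoo!'s actual one:

```python
import sys

def normalize(line):
    """Parse one hypothetical web-server log line of the form
    'host timestamp path status bytes' and return a clean
    tab-separated record, or None if the line is malformed."""
    parts = line.split()
    if len(parts) < 5:
        return None  # drop malformed lines instead of crashing the job
    host, ts, path, status, size = parts[:5]
    return "\t".join([host.lower(), ts, path, status, size])

if __name__ == "__main__":
    # Used as a Hadoop Streaming mapper: raw lines in, normalized lines out.
    for raw in sys.stdin:
        rec = normalize(raw.strip())
        if rec:
            print(rec)
```

Downstream systems then consume the normalized, tab-separated records instead of parsing dozens of raw log formats themselves.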

Hadoop is getting very wide adoption in the data warehousing and business intelligence domains. One of the biggest uses within Yahoo! right now is dealing with all of the log information from servers. Analyzing that information allows for better spam filtering, ad targeting, content targeting, A/B testing for new features, et cetera.

It’s not web-specific. For example, everybody does data warehousing, and we see very strong adoption there.

Separate from that, your example of oil companies is a very good one, as is the financial sector. Right now, we do have a couple of very large financial institutions working with us on these exact problems, taking huge amounts of data from domains like credit card processing and building predictive models for fraud that enable better decisions, for example, about whether to block or allow a given transaction.

In the stock market, Hadoop is being used to do simulations that help predict option pricing and related problems. That’s another very healthy market that we’ve seen growth in.
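Those simulations are a natural fit for Hadoop because they are embarrassingly parallel: each worker prices an independent batch of random paths and the results are averaged. As a single-machine sketch (the model and parameters here are illustrative, not what any particular firm runs), pricing a European call under geometric Brownian motion looks like:

```python
import math
import random

def mc_call_price(s0, strike, rate, vol, t, n_paths, seed=42):
    """Estimate a European call price by Monte Carlo simulation.
    Each path is independent, so n_paths can be split across many
    MapReduce workers and the partial sums combined in a reducer."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol * vol) * t
    diffusion = vol * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)          # one standard normal draw
        st = s0 * math.exp(drift + diffusion * z)  # terminal price
        total += max(st - strike, 0.0)   # call payoff
    return math.exp(-rate * t) * total / n_paths   # discounted mean
```

With s0=100, strike=100, rate=0.05, vol=0.2, t=1, the estimate converges toward the Black-Scholes value of about 10.45 as n_paths grows.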

So Yahoo!, the biggest contributor to and adopter of Hadoop, has used it to solve problems ranging from data analytics and data warehousing (log processing, gene sequence mapping, which is basically a fuzzy string matching problem) to business intelligence domains such as finance and the stock market.
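For a taste of why gene sequence mapping is a fuzzy string matching problem: the core computation is usually some variant of edit distance between a query sequence and a reference. A minimal sketch of the classic Levenshtein dynamic program (real sequence aligners use much more elaborate algorithms, but the idea is the same):

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b, computed
    row by row so memory stays O(len(b))."""
    prev = list(range(len(b) + 1))        # distance from "" to b[:j]
    for i, ca in enumerate(a, 1):
        curr = [i]                        # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1   # 0 if characters match
            curr.append(min(prev[j] + 1,      # delete from a
                            curr[j - 1] + 1,  # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]
```

Because each query sequence can be matched independently against the reference, millions of such comparisons parallelize cleanly across a Hadoop cluster.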

Rumor has it that a bank in Singapore invested millions of dollars to build a computing and prediction system from scratch using Haskell, a statically typed functional programming language, to guarantee scaling and performance.

I wonder why the bank did not take a look at a Distributed File System (DFS) plus MapReduce (Hadoop is an open-source implementation of both): a massively scalable approach on commodity hardware that has been used successfully at the biggest IT firms in the world (Google, Yahoo!, Facebook, just to name a few) … or maybe they are just re-implementing DFS+MapReduce themselves 😀
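For readers who have not met the model, MapReduce itself is conceptually tiny. A toy single-process sketch of the three phases (map, shuffle, reduce) applied to word counting; Hadoop's contribution is distributing exactly these phases across thousands of machines on top of a DFS:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Emit (key, value) pairs from one input record."""
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    """Combine all values seen for one key."""
    return (key, sum(values))

def mapreduce(records, mapper, reducer):
    """Toy in-memory MapReduce: map every record, group pairs by key
    (the 'shuffle'), then reduce each group. A real framework runs
    mappers and reducers on different cluster nodes."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)
    return dict(reducer(k, vs) for k, vs in sorted(groups.items()))
```

For example, `mapreduce(["the cat", "the dog"], map_phase, reduce_phase)` yields counts with `"the"` mapped to 2.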