Why Big Data Needs InfiniBand to Continue Evolving
HPCwire, April 1, 2013
http://www.hpcwire.com/2013/04/01/why_big_data_needs_infiniband_to_continue_evolving/
Increasingly, it’s a Big Data world we live in. Just in case you’ve been living under a rock and need proof of that, a major retailer can use an unimaginable number of data points to predict the pregnancy of a teenage girl outside Minneapolis before she gets a chance to tell her family (http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/). That’s just one example, but there are countless others showing that mining huge data volumes can uncover nuggets of actionable insight (although sometimes the results freak people out).

We’re still at the dawn of this Big Data era and as the market is showing, one-size-fits-all data processing is no longer adequate. To take the next step in this evolution, specialized Big Data software can improve not only by using cloud computing, but also by utilizing specialized networking infrastructure, InfiniBand, from the supercomputing community. Before understanding why, though, you need to understand the history of how we got to this Big Data world in the first place.

How Did We Get Here? The Birth of the Relational Database

1970 isn’t just the year of the Unix Epoch; it’s also the year the granddaddy of all Relational Database (RDB) papers was written. IBM researcher E. F. Codd published “A Relational Model of Data for Large Shared Data Banks” in Communications of the ACM in June of that year, and it became the defining work on data layouts for decades. Codd’s model would be refined over the next 40 years, but what he proposed evolved into a generic toolset for structuring and manipulating data, used for everything from managing bank assets to storing food recipes.

This general-purpose data analysis software also ran exceptionally well on general-purpose computing hardware. The two got along great, actually, since all you really needed was a disk big enough to handle the structured data and enough CPU and RAM to perform the queries. In fact, some hardware manufacturers such as Hewlett-Packard would give away database software when you purchased the hardware to run it on. For the Enterprise especially, the Relational Database was the killer app of the data center hardware business.

At this point, everybody was happily solving problems and making money. Then something happened that changed everything and completely disrupted this ecosystem forever. It was called Google.

Then Google Happened

During the Nixon Administration, copying the entire Internet was not a difficult problem, given its diminutive size. That was no longer true by the late 1990s, when the first wave of search engines like Lycos and AltaVista had supposedly solved the problem of finding information online. Shortly thereafter, Google happened and disrupted not only the online search industry but data processing as a whole.

It turns out that if you can keep a copy of the modern Internet at all times, you can do some amazing things in determining relevance and, therefore, return better search results. However, you can’t use a traditional RDB to tackle that problem, for several reasons. First of all, to solve this problem you need to store a lot of data. So much, in fact, that it becomes impractical to rely solely on vertical scaling by adding more disk/CPU/RAM to a single system, and an RDB does not scale horizontally very well: adding more machines to an RDB does not improve its query performance or its capacity to store more data. That disk/CPU/RAM marriage has been around for 40 years, and it’s not easy to break apart.

Further, as the data set in an RDB grows, query speed generally degrades. For a financial services company querying trends in stock prices, that may be acceptable, since it only costs the time of a handful of analysts who can do something else while the processing runs. But for an Internet search company trying to deliver sub-3-second responses to millions of customers simultaneously, that just won’t fly.

Finally, given the large data volumes and the query speeds required for Internet search, data redundancy is a given, because the data is needed at all times. The simple master-slave replication model employed by most RDB deployments over the last four decades is far less bulletproof than what you need when you are trying to keep a constantly updated copy of the entire Internet. One big mirror simply won’t cut it.

Distributed File Systems and Map/Reduce Change Everything

If Codd’s seminal RDB paper had grandchildren, they would be a pair of papers released by Google describing how it conquered its data problem. Published in 2003, “The Google File System” by Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung described how a new way of storing data across many, many machines provided a mechanism for handling huge volumes far more economically than a traditional RDB.

The follow-up paper from 2004, “MapReduce: Simplified Data Processing on Large Clusters” by Jeffrey Dean and Ghemawat, further revealed that Google performs queries across its large, distributed data set by breaking a problem into smaller parts, sending those parts to nodes across the distributed system (the Map step), and finally assembling the results of those smaller computations into a whole (the Reduce step).
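To make the Map and Reduce steps concrete, here is a minimal sketch of the programming model in Python. It is illustrative only: the word-count problem, the sample documents, and the function names are assumptions for the example, not Google’s implementation, and a real framework would distribute the map and reduce calls across many machines.

```python
from collections import defaultdict
from itertools import chain

# Map step: each node turns its chunk of the input into (key, value) pairs.
def map_phase(document):
    return [(word, 1) for word in document.split()]

# Shuffle: group all intermediate values by key before reducing.
def shuffle(mapped_chunks):
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapped_chunks):
        groups[key].append(value)
    return groups

# Reduce step: fold each key's values into a single result.
def reduce_phase(key, values):
    return key, sum(values)

# Two "nodes" simulated with a plain list; a real cluster would spread
# these documents across many machines and run map_phase in parallel.
documents = ["big data needs infiniband", "big data needs speed"]
mapped = [map_phase(doc) for doc in documents]
result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 2, 'needs': 2, 'infiniband': 1, 'speed': 1}
```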

Together, these two papers created a data processing renaissance. While RDBs still have their place, they are no longer the single solution to all problems in the data processing world. For problems involving large data volumes in particular, solutions derived from these two papers have emerged over the past decade to give developers and architects far more choice than they had in the RDB exclusive world that existed previously.

Hadoop Democratizes Big Data; Now Where Are You Going to Run It?

The next logical step in this evolution in an era of Open Source programming was for somebody to take the theories laid out in these Google papers and transform them into a reality that everyone could use. This is precisely what Doug Cutting and Michael J. Cafarella did, and they called the result Hadoop. With Hadoop, anyone now had the software to tackle huge data volumes and perform sophisticated queries. What not everybody could afford, however, was the hardware to run it on.

Enter cloud computing, specifically Infrastructure as a Service (IaaS). Largely pioneered by Amazon with its Amazon Web Services offering, IaaS lets anyone lease the hundreds or even thousands of compute nodes needed to run big Hadoop jobs instead of purchasing the physical machines outright. Combine that with orchestration software from the likes of Opscode or Puppet Labs and you can automate the creation of the virtualized hardware, the installation and configuration of the Hadoop software, and the loading of large data volumes, minimizing the cost of performing these queries, as the sketch below illustrates.
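As a rough illustration of that provisioning step, and not of the Chef or Puppet tooling mentioned above, here is a hypothetical sketch that uses the boto3 AWS SDK to lease a batch of worker nodes and hand each one a bootstrap script. The AMI ID, instance type, node count, and script contents are placeholder assumptions.

```python
import boto3

# Hypothetical bootstrap script; a real one would install Java and Hadoop
# and pull its configuration from whatever orchestration tool you use.
BOOTSTRAP = """#!/bin/bash
apt-get update && apt-get install -y default-jdk
# ... download, unpack, and configure Hadoop here ...
"""

ec2 = boto3.client("ec2", region_name="us-east-1")

# Lease 100 worker nodes instead of buying physical machines.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="m5.xlarge",         # placeholder instance type
    MinCount=100,
    MaxCount=100,
    UserData=BOOTSTRAP,               # runs on each node's first boot
)

node_ids = [inst["InstanceId"] for inst in response["Instances"]]
print(f"Launched {len(node_ids)} Hadoop worker nodes")
```

When the job finishes, the same API can terminate the nodes, which is what keeps the cost of a one-off query down.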

Again, everybody is happily solving problems and making money. But we aren’t done. There’s another step to this evolution, and it’s happening now.

InfiniBand: Making Hadoop Faster and More Economical

Processing Hadoop and other Big Data queries on IaaS produces results, but slowly: the combination is praised for the answers it can find, not for the speed at which it finds them. The data processing revolution was sparked by software approaches different from those pioneered in the 1970s. Better-performing Hadoop clusters, with all the network traffic their Map and Reduce steps produce, can be had by taking a similar approach with the network infrastructure.

Ethernet, the most widely used network infrastructure technology today, has followed a path similar to that of RDBs. Invented in the 1970s and commercialized around 1980, Ethernet strings computers together in a hierarchical structure of subnets. It is so common that, like RDBs 10 years ago, most people don’t think they have a choice of anything different.

The performance problem with Ethernet comes from that basic structure. With hierarchies of subnets connected by routers, network packets typically have exactly one path they can traverse between any two points on the network. You can fatten the pipe along that path, but fundamentally you still have just the one path.

Born in the supercomputing community at the turn of the 21st century, InfiniBand instead uses a switched fabric, essentially a grid, that provides multiple paths for network packets to traverse between two points. Smart routing that knows which parts of the grid are currently busy, akin to the traffic reporting found in smartphone map apps, keeps the flow across the system close to optimal. A typical Ethernet-based network runs at 1 Gigabit per second (Gb/s), and a fast one runs at 10 Gb/s. A dual-channel InfiniBand network runs at 80 Gb/s, making it a great complement to the Map/Reduce steps on a Hadoop cluster.
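That bandwidth gap translates directly into shuffle time. A back-of-the-envelope sketch in Python, where the 1 TB of intermediate Map output is an illustrative assumption and protocol overhead and congestion are ignored:

```python
# Rough, illustrative arithmetic: time to move 1 TB of intermediate
# Map output across the cluster over each type of network link.
SHUFFLE_BYTES = 1 * 10**12  # 1 TB, an assumed shuffle size

links_gbps = {
    "1 GbE": 1,
    "10 GbE": 10,
    "Dual QDR InfiniBand": 80,
}

for name, gbps in links_gbps.items():
    seconds = (SHUFFLE_BYTES * 8) / (gbps * 10**9)
    print(f"{name:>20}: {seconds / 60:6.1f} minutes")

# Approximate output:
#                1 GbE:  133.3 minutes
#               10 GbE:   13.3 minutes
#  Dual QDR InfiniBand:    1.7 minutes
```

Less time spent waiting on the network means shorter IaaS leases, which is where the economics come in.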

We’ve seen how a software revolution that moved us past the exclusive use of RDBs has enabled data mining that was previously unimaginable. Open Source and cloud computing have made Big Data approachable to a wider audience. Better speed, meaning shorter query times and less time spent leasing IaaS capacity, is achievable using public cloud providers that offer InfiniBand. This is the next step in the data processing revolution, and the next generation of cloud computing services (also known as Cloud Computing 2.0) brings InfiniBand to the public cloud. ProfitBricks is the first provider to offer supercomputing-like performance in the public cloud at an affordable price. Data is becoming democratized, and now High Performance Computing is as well.

This week, not one but two groups of IT heavyweights launched with plans to expand the scope of the Internet while protecting the free flow of ideas it provides.

First up is the Internet Infrastructure Coalition (i2Coalition), which began as a protest against the highly controversial Stop Online Piracy Act (SOPA) and Protect IP Act (PIPA). The group has since matured into a formal organization whose aim is to give infrastructure providers a voice in public policy. i2 already has a healthy initial roster of 42 founding members, including Rackspace, SoftLayer, ProfitBricks and Tucows.

Christian Dawson, chief operating officer at member company ServInt and i2 board chair, discusses the impetus for the group, highlighting the importance of an open Internet, in this short introductory video:

“One fascinating quality of the Internet is that it’s decentralized. It is not one *thing*. Nobody owns it. Moreover, for how many people use it, whose lives are intimately linked to it, few understand what it is, how it works, or even who builds it.

…

“We in the Internet Infrastructure industry know how the Internet works because we are the ones building it every day. So it makes sense for us to come together to fight for it.”

Not only do these companies understand how the Internet works, they operate an industry that generates billions of dollars. In an official statement, i2 referred to a Tier1 Research study, which estimated that Internet infrastructures created $46 billion in direct and indirect revenue back in 2010.

Wasting no time, the i2 Coalition has already identified 13 areas that fall within the scope of its public policy mission. The group’s first priority is to promote a growing Internet infrastructure while keeping the interests of its members intact. Other areas include leaving political partisanship at the door, enabling individuals to exercise freedom of speech and protecting their privacy.

On the whole, the organization appears to be primarily focused on end-user protections and the continued success of infrastructure providers. But some of these policies may butt up against the actions of non-member organizations, like, say, Google. While Google will certainly benefit from public advocacy for further infrastructure development, it has landed itself in hot water for capturing data without user permission. Earlier this month, the search company was fined $22.5 million by the FTC for bypassing security features in Apple’s Safari web browser to track users’ surfing activity.

While at first glance, non-participation from companies like Google and Facebook could suggest a lack of alignment with the coalition goals, it was soon made clear that these vendors had simply joined a different, albeit similar, group, announced two days later.

In what seems like an unlikely coincidence, a second organization also announced its formation last week as “the unified voice of the Internet economy in Washington.” This one, named the Internet Association, includes several high-profile companies that were curiously missing from the Internet Infrastructure Coalition, namely Google, Yahoo, Facebook, eBay and Amazon. The complete roster of 14 companies is led by President and CEO Michael Beckerman.

Like the i2 Coalition, the Internet Association’s mission is to represent “…the interests of America’s leading Internet companies and their global community of users.” It plans to accomplish these tasks by shaping public policy.

A short video, which incorporates clips from YouTube’s popular collection of human and animal antics, delivers a concise explanation of the association’s policy platform.

According to the association’s official launch announcement, its policy platform consists of three planks.

Protecting Internet Freedom

The Internet Association believes that the borderless nature of the World Wide Web has enabled innovation and entrepreneurship that would otherwise have been impossible. In its bid to protect the free flow of information across the Internet, the organization opposes government censorship and regulations that aim to inhibit free expression. That freedom can take the form of silly cat videos or the political activism enabled by social media.

Despite these stated goals, Google recently blocked access to an anti-Islamic movie, which sparked a number of deadly protests in Muslim countries. While the decision was most likely made to reduce the potential for further violence, it went against the grain of the alliance’s first policy plank.

Fostering Innovation and Economic Growth

Reiterating its focus on reducing regulatory constraints, the Internet Association believes that minimal barriers to building a Web-based business help foster a growing economy. This means that members of the association want governments, businesses and individuals to have their choice of Internet providers and platforms.

Empowering Users

As technologies that enable and leverage the Internet have evolved, end users have received increased flexibility with the services they access. For example, platform developer Techila built an Android application that allowed a user to spin up cloud compute resources through a mobile device.

Both associations believe the best way to keep empowering such innovations is to minimize any potential government interference, be it mandates or regulations.

Membership in the two groups is not mutually exclusive, although currently Rackspace is the only player with dual citizenship. Rackspace senior vice president and general counsel Alan Schoenbaum shared via email that his company’s “success is very much rooted in a free, safe Internet with no barriers to innovation.”

“We have joined both the Internet Infrastructure Coalition (i2Coalition) and the Internet Association because we support an Internet that unleashes unprecedented entrepreneurship, creativity and innovation and we see value in influencing policies to fuel economic growth,” his message continued.

Ultimately, these fledgling trade organizations stand against any legislation that could reduce the speed limit on the information superhighway, and by extension threaten the public cloud market. While SOPA and PIPA provided the launch pad for action, there are multiple markets at stake, including cloud, mobile, search and social media. It will be interesting to see what type of power these groups wield against similar legislation that emerges down the road.

IaaS provider ProfitBricks is hoping to raise some eyebrows with recent benchmark results. The Berlin-based company commissioned Cloud Spectator, an IaaS benchmarking service, to compare its performance against Amazon and Rackspace. Given the outcomes, it appears that ProfitBricks is positioning itself as a viable service provider for HPC workloads.

According to the performance report, the company believes that infrastructure performance is primarily determined by processor, storage and interconnect capabilities. ProfitBricks’ US and German datacenters follow this mantra by running AMD Bulldozer servers with an InfiniBand backbone. The interconnect design is rather impressive, with servers running dual QDR cards supporting up to 80 Gbit/s connectivity.

“We deconstructed cloud infrastructure down to its most basic elements and discovered there was a far better way to deliver the service… Our platform allows ProfitBricks to offer unprecedented services at prices others cannot touch because of legacy design and built-in costs,” said Achim Weiss, ProfitBricks chief executive officer, in an official statement.

To determine each IaaS provider’s capabilities, Cloud Spectator ran a number of open-source benchmark tests, including UnixBench, DBENCH, Iperf and Apache compilation. The study took place over a two-day period in July and gauged server, CPU, storage and local network performance.

UnixBench

This test gauges the overall performance of a system. Scores can be affected by both hardware and software, including the OS, compiler and libraries. When running UnixBench, ProfitBricks posted an average score of 1,567 vs. 1,031 and 948 for AWS and Rackspace, respectively.

Apache Compilation

The Apache compilation test records how much time is required to build an Apache HTTP server. ProfitBricks was the only contender to complete the task in less than 1 minute (56 seconds). Amazon created the server in 61 seconds and Rackspace finished in 68 seconds.

DBENCH

DBENCH is an application that performs a file system stress test. The program reports the number of concurrent clients a file system can handle before experiencing heavy latency. Again, ProfitBricks came in ahead at 643. Rackspace and Amazon were far behind, scoring 164 and 77.

Iperf

Iperf, an internal network performance tool, tests a network’s throughput using TCP and UDP streams. Cloud Spectator ran these tests between two servers in the same datacenter. As expected, ProfitBricks took a commanding lead here, thanks to its InfiniBand infrastructure, scoring 5,754 vs. 397 and 289 for Rackspace and Amazon. A minimal sketch of how such a measurement can be driven appears below.
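For readers who want to reproduce the network throughput measurement on their own instances, here is a minimal sketch that drives the standard iperf client from Python. The server address is a placeholder, one machine must already be running `iperf -s`, and actual numbers will vary by provider.

```python
import subprocess

SERVER_HOST = "10.0.0.10"  # placeholder: a second server in the same
                           # datacenter running `iperf -s` in server mode

client_args = [
    "iperf",
    "-c", SERVER_HOST,  # connect to the listening server
    "-t", "30",         # measure for 30 seconds
    "-P", "4",          # four parallel TCP streams
]

# iperf prints a summary that includes the measured throughput in bits/sec.
result = subprocess.run(client_args, capture_output=True, text=True, check=True)
print(result.stdout)
```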

The company has also just announced real-time vertical scaling of CPU cores and RAM, with no reboot required. Until now, customers have had to preselect server sizes and add new servers as needed, but the new approach enables the selection of server instances from 1 CPU core to 48 cores, and from 1 GB of RAM to 196 GB of RAM, on the fly. All CPU cores are dedicated.

Given ProfitBricks’ hardware and dedication to low-latency, high-throughput interconnects, it’s somewhat surprising that GPU computing or bare metal services are not being offered. If those features are made available in the future, and are offered at the right price point, the IaaS provider will have a comprehensive product set for the HPC community.