A few weeks back, I was on a panel of media pundits asked to make predictions about what might happen in the technology business in 2011. One prominent member of our group, seeking to stake out a bold, contrarian position, asserted that 2011 would be the year that “cloud computing” would be revealed to be a big nothing, an over-hyped concept that in the end is more about marketing than actual technology.

He’s so, so wrong.

For starters, the cloud is not quite as mystifying as you might think at first glance. All of the red-hot consumer Internet sites now dominating the conversation about the Web are built on public cloud infrastructure. Facebook, Zynga, Twitter – they all rely on public clouds from Amazon.com (AMZN) and others to host their data. As Crosby notes, the public cloud is expanding at an astonishing rate: he says it will quadruple in size over the next two years. The phenomenon is driven by a host of factors, not least the proliferation of tablets and smartphones that depend on the cloud to store data. And as Crosby says, consumers have largely come to grips with the security and privacy trade-offs that come with the cool apps the cloud enables.

A substantial building block for the future success of the cloud was put in place yesterday as Amazon released Software Development Kits (SDKs) for Google’s Android and Apple’s iOS. These will allow easy access to the Amazon Web Services (AWS) platform.

If that sounds confusing, here’s the short version: Amazon has just made it a piece of cake for even modest programmers to make phone and tablet apps that can use its cloud services, such as Simple Storage Service (S3).

Excerpt: Most applications, data or IT assets within an organisation can be moved to the Cloud today with minimal effort. This two-part series will help you build an enterprise application migration strategy for your organisation. Its step-by-step, phase-driven approach shares high-level insights on how to identify ideal projects for migration, build the necessary support within the organisation, and migrate applications with confidence.

This phase enables you to build a business case for moving to the Cloud. It includes examining the costs, security and compliance, and identifies the gaps between your current traditional architecture and next-generation Cloud architecture.

Excerpt: CloudFront can be used to distribute website images, videos, media files or software downloads, from edge servers located in the U.S., Europe and Asia, according to Amazon. Users are automatically routed to the nearest edge location to improve performance, the company said.

The SLA specifies that if the availability of a user’s content drops below 99.9 percent in any given month, the user can apply for a credit equal to 10 percent of the monthly bill, according to a blog post. And if availability drops below 99 percent, users can apply for a 25 percent discount.
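The credit tiers described above reduce to a simple calculation. This is a sketch of the arithmetic implied by the article, not Amazon's actual billing logic; the thresholds (99.9 and 99 percent) and credit percentages (10 and 25 percent) are taken from the text.

```python
def sla_credit(availability_pct: float, monthly_bill: float) -> float:
    """Return the service credit implied by the CloudFront SLA tiers
    described in the article: 25% back below 99% availability,
    10% back below 99.9%, nothing otherwise."""
    if availability_pct < 99.0:
        return monthly_bill * 25 / 100
    if availability_pct < 99.9:
        return monthly_bill * 10 / 100
    return 0.0

# A month at 99.5% availability on a $400 bill earns a $40 credit.
print(sla_credit(99.5, 400.0))   # 40.0
print(sla_credit(98.0, 400.0))   # 100.0
print(sla_credit(99.95, 400.0))  # 0.0
```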

Excerpt: Jeff Barr, Amazon Web Services evangelist, explains the new service: “You can now break your larger objects into chunks and upload a number of chunks in parallel. If the upload of a chunk fails, you can simply restart it. You’ll be able to improve your overall upload speed by taking advantage of parallelism. In situations where your application is receiving (or generating) a stream of data of indeterminate length, you can initiate the upload before you have all of the data.”

Amazon S3 presents the data as a single object when all parts of the object get uploaded. Also, if transmission of any part fails, users can re-transmit that part without affecting other parts.
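The mechanics described above can be sketched in a few lines. This is a toy, single-process model of multipart upload, not the real S3 API (the class and method names here are illustrative, not the AWS SDK's); it shows the three ideas in the excerpt: split into numbered parts, retry a failed part independently, then reassemble everything into one object.

```python
CHUNK_SIZE = 5  # bytes, tiny for illustration; S3's real minimum part size is 5 MB

def split_into_parts(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (part_number, chunk) pairs, numbered from 1 as S3 does."""
    for i in range(0, len(data), chunk_size):
        yield (i // chunk_size + 1, data[i:i + chunk_size])

class FakeMultipartUpload:
    """Stands in for the server side: stores parts keyed by part number."""
    def __init__(self):
        self.parts = {}

    def upload_part(self, part_number: int, chunk: bytes):
        # Each part is independent; re-calling with the same number
        # simply replaces the earlier (failed) attempt.
        self.parts[part_number] = chunk

    def complete(self) -> bytes:
        # S3 presents the parts as a single object once all are uploaded.
        return b"".join(self.parts[n] for n in sorted(self.parts))

data = b"a stream of data of indeterminate length"
upload = FakeMultipartUpload()
for number, chunk in split_into_parts(data):
    upload.upload_part(number, chunk)

# Pretend part 2 failed mid-transfer: retry just that one part,
# without touching any of the others.
upload.upload_part(2, data[CHUNK_SIZE:2 * CHUNK_SIZE])

assert upload.complete() == data
print("reassembled object matches original")
```

In the real service the parts would be uploaded in parallel over the network, which is where the speed gain comes from.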

“Amazon has come up with a very interesting proposition for those considering deploying a cloud-based solution that will be hard for small to medium size organizations as well as state and local governments to pass up – try it for free for a year. [...] this might be just the enticement that some need to try out cloud computing.”

A "micro" Linux server in the Elastic Compute Cloud will be free for a year as an inducement for prospects to try their first cloud experience.
Like an apartment building offering a month of free rent, Amazon Web Services is looking for EC2 tenants by offering a year of free use to those just getting started in the cloud.
Beginning Nov. 1, new customers will be able to run one micro instance in the Elastic Compute Cloud for a year at no charge. The virtual server can be combined with free use of Amazon S3 permanent storage and EC2’s Elastic Block Store, which supplies disk space for running systems. Elastic Load Balancing and AWS data transfer will also be thrown in at no charge. AWS calls the package its “free usage tier.”

"Facebook, Amazon and Zynga will invest in a fund to help entrepreneurs develop applications and services for a new era of the social web." "There's going to be an opportunity over the next five years or so to pick any industry and rethink it in a social way," said Facebook's Mark Zuckerberg. "We think that every industry is going to be fundamentally re-thought and designed around people."
That was a view backed by KPCB partner John Doerr, best known for investing in Amazon, Google and Netscape. "These social networks are going to go from a half a billion people to billions of people connected on the planet and so represents an extraordinarily exciting time on the internet," he told BBC News. "Think of it as a quarter-billion-dollar party. The third great wave of the internet is mobile and social together. It's going to be tectonic."
Amazon will help businesses get access to the company's web services platform for one year, as well as provide business and technical support.

This spring, a startup called Makara came out of stealth mode with a beta of a product called Cloud Application Platform, which as the name suggests allowed companies to set up private platform clouds like Microsoft Azure, Google App Engine, or Engine Yard internally on their own iron. Today, Makara is doing something that will perhaps be more interesting, which is layering atop Amazon’s EC2 infrastructure cloud to turn it into a platform cloud.

Article Excerpt: Now that Amazon’s EC2 compute cloud can be driven through the Cloud Application Platform, the monitoring and management tools that customers already use to manage private clouds running atop Eucalyptus (KVM), Ubuntu Enterprise Cloud (KVM), Red Hat KVM, and VMware vSphere hypervisors and cloud frameworks can now be used to manage EC2 images as well.

In fact, says Isaac Roth, co-founder and chief executive officer at Makara, the whole point of the Cloud Application Platform is to make the differences between public and private clouds moot and to abstract both up one layer, so companies worry only about the services their applications need to run – a Web server, a database, and so forth – and leave it to the Makara system to manage how the underlying infrastructure is scaled up and down.

Black Duck uncovered open source project information by analyzing its own KnowledgeBase of open source projects, an approach that also let it identify which cloud environments those projects target. Black Duck found Amazon to be the leading cloud environment, followed, in order, by Microsoft’s Azure, Google App Engine and Force.com.

Article Excerpt: “When we began the analysis we expected to find projects focused on security, privacy and management,” said Peter Vescuso, Black Duck’s executive vice president, in a statement.

“The variety of projects that self-identify as ‘cloud’ is much broader in scope, reflecting the sophistication of the cloud application ecosystem and confirming the importance of OS cloud software developed to support the needs of enterprise IT,” said Vescuso.

Black Duck noted that the private and public cloud projects collected in its searches include such well-known names as Hadoop, Eucalyptus, Hyperic, Deltacloud, OpenStack and OpenECP.

Amazon Web Services says its latest cluster computing service, which it announced Tuesday, can provide the same results as custom-built infrastructures for high-performance applications at organizations that don’t want to build their own.

The service, called Cluster Compute Instances for Amazon’s EC2 cloud computing infrastructure, can deliver up to 10 times the network throughput of current instance types on its EC2 service depending on the usage pattern, the company said.

Amazon has tested Cluster Compute Instances with the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. The lab found its HPC applications ran 8.5 times faster than on previous Amazon EC2 instance types, according to an Amazon news release.

A new option for Amazon Web Services has arrived: the raw computing power of supercomputing clusters now widely used in research circles.

The service, called Cluster Compute, is a variation of one of the earliest services Amazon offered, EC2, or Elastic Compute Cloud. Compared with the standard EC2, Cluster Compute offers more processing power and faster network connections among the cluster’s computing nodes for better communications, Amazon said Tuesday. The service retains the same general philosophy, though: customers pay as they go, with more usage incurring more fees.

How fast is it? An 880-node cluster reached 41.82 teraflops (trillions of floating-point operations per second) on the Linpack mathematical speed test. By contrast, the 145th-fastest machine on the most recent “Top500” list of the fastest supercomputers reached a sustained speed of 41.88 teraflops.

The service is sold in units much smaller than the cluster used in Amazon’s test: a single instance is a server with two quad-core Intel X5570 Nehalem Xeon processors. Each such instance costs $1.60 per hour to use. Alternatively, with an upfront payment of $4,290 for a one-year term or $6,590 for a three-year term, the per-hour fee drops to 56 cents.
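The two pricing models quoted above imply a simple break-even point. A sketch of the arithmetic, using the on-demand rate ($1.60/hour) and the one-year reserved option ($4,290 up front, then $0.56/hour) from the paragraph:

```python
ON_DEMAND = 1.60           # $/hour, pay-as-you-go
RESERVED_UPFRONT = 4290.0  # $ one-year reservation fee
RESERVED_HOURLY = 0.56     # $/hour once reserved

def on_demand_cost(hours: float) -> float:
    return ON_DEMAND * hours

def reserved_cost(hours: float) -> float:
    return RESERVED_UPFRONT + RESERVED_HOURLY * hours

# Break-even: 4290 + 0.56h = 1.60h  ->  h = 4290 / 1.04
break_even = RESERVED_UPFRONT / (ON_DEMAND - RESERVED_HOURLY)
print(round(break_even))  # 4125 hours, about 172 days of continuous use
```

So the one-year reservation pays for itself only if the instance runs for more than roughly 4,125 hours in the year; for shorter or bursty workloads, pay-as-you-go stays cheaper.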

“Businesses and researchers have long been utilizing Amazon EC2 to run highly parallel workloads ranging from genomics sequence analysis and automotive design to financial modeling,” said Peter De Santis, general manager of Amazon EC2, in a statement. “At the same time, these customers have told us that many of their largest, most complex workloads required additional network performance. Cluster Compute Instances provide network latency and bandwidth that previously could only be obtained with expensive, capital intensive, custom-built compute clusters. For perspective, in our last pre-production test run, we saw an 880 server sub-cluster achieve a network rate of 40.62 TFlops – we’re excited that Amazon EC2 customers now have access to this type of HPC performance with the low per-hour pricing, elasticity, and functionality they have come to expect from Amazon EC2.”

Cluster Compute Instances complement other AWS offerings designed to make large-scale computing easier and more cost effective, Amazon officials said. For example, Public Data Sets on AWS provide a repository of useful public data sets that can be easily accessed from Amazon EC2, allowing fast, cost-effective data analysis by researchers and businesses, Amazon said in its press release. These large data sets are hosted on AWS at no charge to the community. Additionally, the Amazon Elastic MapReduce service enables low-friction, cost effective implementation of the Hadoop framework on Amazon EC2. Hadoop is a popular tool for analyzing very large data sets in a highly parallel environment, and Amazon EC2 provides the scale-out environment to run Hadoop clusters of all sizes.
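The MapReduce model that Hadoop (and Elastic MapReduce) implements can be illustrated in a few lines. This is a single-process toy word count, not Hadoop itself; the map, shuffle, and reduce stages below mirror what a real cluster performs in parallel across many nodes.

```python
from collections import defaultdict

def map_phase(document: str):
    """Map: emit a (word, 1) pair for every word, as a Hadoop mapper would."""
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    """Shuffle: group values by key - what Hadoop does between map and reduce."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the cloud is elastic", "the cloud scales"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"], counts["cloud"])  # 2 2
```

On a real cluster each document (or block of a file) would be mapped on a different node, which is what makes the approach scale to very large data sets.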

Amazon today said it would now also offer high-performance computing (HPC) through Cluster Compute Instances, which use more powerful processors and clustered nodes to help get around some of the latency issues associated with distributed computing. HPC has long taken advantage of many nodes working in tandem, but has typically connected those processors with InfiniBand or other low-latency networks. However, as HPC becomes more popular and the next generation of faster supercomputers grows more difficult to build, the cloud may offer some hope.

The rise of cloud computing owes a bit to HPC, but for the most part demanding workloads couldn’t take advantage of the cost benefits and flexibility of the cloud because of latency within the cloud and the cost and time of moving terabytes of data back and forth between a lab and the cloud. Amazon has addressed the data problem by letting customers ship disks through the postal service and by acting as a free repository for some types of scientifically valuable information.

The offering, dubbed Cluster Compute Instances for Amazon EC2, is designed for intensive computational workloads like parallel processes used by the likes of Lawrence Berkeley National Laboratory for research.

Amazon Web Services’ (AWS) cluster computing move will either be an interesting showpiece for a limited market or end up democratizing supercomputing. In the meantime, Lawrence Berkeley National Laboratory was an early tester of AWS’ Cluster Compute Instances. AWS’ HPC service features pay-as-you-go pricing and the ability to scale up or down on demand. Cluster Compute Instances operate the same way as Amazon’s Elastic Compute Cloud.

Article Excerpts: “We wanted to expand further and the most logical option was a partnership with Amazon,” says Hughes. “The partnership allowed us to focus on our core skill – retailing. Cloud computing offers us the ability to scale our operation quickly and economically as required in response to trading conditions.”

Cult bohemian fashion store Anthropologie defied a global recession to open its first London store in October. And while the 11,000 sq ft space in Regent Street represented the brand’s first strategic move into the highly competitive European fashion market, the website anthropologie.eu had to make a similar impact online.

Several major names do rely on the cloud for their e-commerce operations. US retail giant Target, which recorded more than 23.2m users in April this year, is one of the biggest corporate clients for Amazon’s Enterprise Services, which provides order fulfilment for the company, and it has been working with Target since 2002. In the UK, the e-commerce platforms of high-street favourites Mothercare and Marks & Spencer are also powered by Amazon.

The 1000 Genomes Project, an international public-private consortium to build the most detailed map of human genetic variation to date, has completed three pilot projects and announced the deposition of the final resulting data in freely available public databases for use by the research community.

Article Excerpt: Launched in 2008, the 1000 Genomes Project first conducted three pilot studies to test multiple strategies to produce a catalogue of genetic variants that are present in one per cent or greater frequency in the different populations chosen for study (European, African and East Asian). Disease researchers will use the catalogue, which is being developed over the next two years, to study the contribution of genetic variation to illness. In addition to distributing the results on the Project’s own web sites, the pilot data set is available via the Amazon Web Services (AWS) computing cloud to enable anyone to access this unprecedentedly large data set, even if they do not have capacity to download it locally.

“The federal government has moved Recovery.gov, the Web site people can use to track spending under last year’s $787 billion economic stimulus package, to Amazon’s Elastic Compute Cloud infrastructure-as-a-service platform, the Recovery Accountability and Transparency Board announced Thursday.

The move marks a milestone for the Obama administration’s cloud computing initiative. Federal CIO Vivek Kundra said in a conference call with reporters it is the first government-wide system to move to a cloud computing infrastructure. It’s also the first federal government production system to run on Amazon EC2, Kundra said.

Cloud computing has been one of Kundra’s top priorities since becoming federal CIO in March 2009. In next year’s IT budget requests, for example, federal agencies will have to discuss whether they’ve considered cloud computing as an alternative to investing in on-premises IT systems.”

“The current private cloud model that requires companies to invest heavily in virtualization and maintaining their own data centers is not a “real cloud” model, according to an Amazon executive.

Andy Jassy, senior vice president of Amazon Web Services (AWS), argued that the current model of private clouds does not provide the capital expenditure (capex) savings and scalability of a true cloud offering. AWS is a subsidiary of Amazon.com.

“Companies usually are not able to provision accurately the amount of data center capacity that they require, and this problem recurs when they create their own internal cloud infrastructure,” said Jassy, who was in town Thursday for the Infocomm Development Authority (IDA) Distinguished Infocomm Speaker Series. He noted that companies would still have to dedicate a proportion of their IT resources, such as software engineers, to maintain the data centers instead of freeing them up to create products that will differentiate their companies from the competition.

This runs counter to the definition of cloud computing, which Jassy said comprises five attributes: converting capex to variable operational costs; a pay-as-you-go payment model; truly elastic, scalable data center capacity; fast product time-to-market; and a reduced focus on hardware management. These attributes are in accordance with Gartner’s definition of cloud computing, he said.”

“SURGING demand for Amazon’s cloud services has forced the internet giant to open a regional office and data centre in Singapore to service the Asia Pacific region.”

“Many local companies and developers already use its services in the US and Europe, including electronics design software provider Altium and graphics design house 99designs.com.”

“Altium chief information officer Alan Perkins said the AWS facility in Singapore would allow the company to launch global services and fully take advantage of opportunities in China. He said there were cost-savings benefits to using the Amazon model. ‘Switching all our customer-facing content delivery from Akamai to Amazon S3 and Amazon CloudFront has resulted in us paying less than one-quarter of what we used to for the same level of service,’ Mr Perkins said.”

“With Microsoft and IBM now offering rival services, Amazon says its own efforts could one day surpass retailing revenues. To widen its lead in the cloud computing market, Amazon.com is working to make its Web services compelling to more customers as computing giants such as Microsoft (MSFT) and IBM (IBM) attack the same territory.”

“On Apr. 28, Amazon (AMZN) announced the opening of a new data center in Singapore to give customers in Asia, India, and Australia speedier access to its Amazon Web Services business. That business lets companies buy from Amazon, on an hourly basis, computing power that’s delivered over the Internet. The Singapore data center will target Asian customers and Western companies that have many users in the region.”

“Beyond small startups, Amazon counts larger organizations including Eli Lilly (LLY), Pfizer (PFE), NASA, Adobe Systems (ADBE), and Netflix (NFLX) as cloud computing customers. On Mar. 23, it released a software development kit so Java developers could more easily write applications that take advantage of its computing services. An Apr. 15 deal with NetSuite (N) lets customers of the online accounting software firm store their data on Amazon’s computers.”

Streaming Media Magazine has named Amazon CloudFront in its 2010 Editors’ Picks – an annual list of the most innovative, most important, and just plain coolest stuff in online video. “Amazon CloudFront: Debuting in November 2008, Amazon’s entry into the CDN market quickly became a major player…. As Dan Rayburn wrote on his Business of Video blog, ‘Amazon will be in the driver’s seat to own the market for small and medium sized content owners who need simple delivery at a great price.’”

“This year, Netflix made what looked like a peculiar choice: the DVD-by-mail company decided that over the next two years, it would move most of its Web technology — customer movie queues, search tools and the like — over to the computer servers of one of its chief rivals, Amazon.com.” “Amazon, like Netflix, wants to deliver movies to people’s homes over the Internet. But the online retailer, based in Seattle, has lately gained traction with a considerably more ambitious effort: the business of renting other companies the remote use of its technology infrastructure so they can run their computer operations. In the parlance of technophiles, they would operate ‘in the cloud.’”

“Despite being among the first to successfully and profitably implement cloud computing solutions, AWS officials said the company still has to constantly deal with questions about the reliability, security, cost, elasticity and other features of the cloud. In short, there are myths about cloud computing that persist despite increased industry adoption and thousands of successful cloud deployments. However, in an exclusive interview with eWEEK at Amazon’s headquarters in Seattle, Adam Selipsky, vice president of AWS, set out to shoot down some of the myths of the cloud. Specifically, Selipsky debunked five cloud myths.”