May 19, 2016:

With OpenStack in tow you’ll go far — be it your house, your bank, your city or your car.

Just look at all of the exciting places we’re going:

From the phone in your pocket

The telecom industry is undergoing a massive shift, away from hundreds of proprietary devices in thousands of central offices accumulated over decades, to a much more efficient and flexible approach built on software plus commodity hardware. While some carriers like AT&T have already begun routing traffic from their 4G networks over OpenStack-powered clouds to millions of cellphone users, the major wave of adoption is coming with the move to 5G, including plans from AT&T, Telefonica, SK Telecom, and Verizon.

We are on the cusp of a revolution that will completely re-imagine what it means to provide services in the trillion dollar telecom industry, with billions of connected devices riding on OpenStack-powered infrastructure in just a few years.

To the living room socket

The titans of TV like Comcast, DirecTV, and Time Warner Cable all rely on OpenStack to bring the latest entertainment to our homes efficiently, and innovators like DigitalFilm Tree are producing that content faster than ever thanks to cloud-based production workflows.

Your car, too, will get smart

Speaking of going places, back here on earth many of the world’s top automakers, such as BMW and the Volkswagen Group, which includes Audi, Lamborghini, and even Bentley, are designing the future of transportation using OpenStack and big data. The hottest trends to watch in the auto world are zero-emission electric cars and self-driving cars. As with the “smart city” efforts described below, a proliferation of sensors plus connectivity calls for distributed systems to bring it all together, creating a huge opportunity for OpenStack.

And your bank will take part

Money moves faster than ever, with digital payments from startups and established players alike competing for consumer attention. Against this backdrop of enormous market change, banks must meet an increasingly rigid set of regulatory rules, not to mention growing security threats. To empower their developers to innovate while staying diligent on regs and security, financial leaders like PayPal, FICO, TD Bank, American Express, and Visa are adopting OpenStack.

Your city must keep the pace

Powering the world’s cities is a complex task and here OpenStack is again driving automation, this time in the energy sector. State Grid Corporation, the world’s largest electric utility, serves over 120 million customers in China while relying on OpenStack in production.

Looking to the future, cities will be transformed by the proliferation of fast networks combined with cheap sensors. Unlocking the power of this mix are distributed systems, including OpenStack, to process, store, and move data. Case in point: tcpcloud in Prague is helping introduce “smart city” technology by utilizing inexpensive Raspberry Pis embedded in street poles, backed by a distributed system based on Kubernetes and OpenStack. These systems give city planners insight into traffic flows of both pedestrians and cars, and even measure air quality. By routing not just packets but people, cities are literally load balancing their way to lower congestion and pollution.

From inner to outer space

The greatest medical breakthroughs of the next decade will come from analyzing massive data sets, thanks to the proliferation of distributed systems that put supercomputer power into the hands of every scientist. And OpenStack has a huge role to play empowering researchers all over the globe: from Melbourne to Madrid, Chicago to Chennai, or Berkeley to Beijing, everywhere you look you’ll find OpenStack.

To explore this world, I recently visited the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, where I toured a facility that houses one of the top 10 supercomputers in the world, code-named “Stampede.”

But what really got me excited about the future was the sight of two large OpenStack clusters: one called Chameleon, and the newest addition, Jetstream, which together put the power of more than 1,000 nodes and more than 15,000 cores into the hands of scientists at 350 universities. In fact, the Chameleon cloud was recently used in a class at the University of Arizona by students looking to discover exoplanets. Perhaps the next Neil deGrasse Tyson is out there using OpenStack to find a planet to explore for NASA’s Jet Propulsion Laboratory.

Where should we go next?

Mark Collier is OpenStack co-founder, and currently the OpenStack Foundation COO. This article was first published in Superuser Magazine, distributed at the Austin Summit.

May 9, 2016:

THE 451 TAKE OpenStack mindshare continues to grow for enterprises interested in deploying cloud-native applications in greenfield private cloud environments. However, its appeal is limited for legacy applications and enterprises sold on hyperscale multi-tenant cloud providers like AWS and Azure. There are several marquee enterprises with OpenStack as the central component of cloud transformations, but many are still leery of the perceived complexity of configuring, deploying and maintaining OpenStack-based architectures. Over the last few releases, processes for installation and upgrades, tooling, and API standardization across projects have improved as operators have become more vocal during the requirements phase. Community membership continues to grow on a global basis, and the commercial ecosystem supporting OpenStack shows a similar geographic trend.

… Horizontal scaling of Nova is much improved, based on input from CERN and Rackspace. CERN, an early OpenStack adopter, demonstrated the open source platform’s ability to scale: it now has 165,000 cores running OpenStack. However, Walmart, PayPal and eBay are operating even larger OpenStack environments.

May 18, 2015:

Walmart’s Cloud Journey by Amandeep Singh Juneja,
Sr. Director, Cloud Engineering and Operations, WalmartLabs: an introduction to the world’s largest retailer and its journey to build a large private cloud.

Amandeep Singh Juneja is Senior Director for Cloud Operations and Engineering at WalmartLabs. In his current role, Amandeep is responsible for the build-out of the elastic cloud used by various Walmart e-commerce properties. Prior to joining WalmartLabs, Amandeep held various leadership roles at HP, WebOS (Palm) and eBay.

May 19, 2015:

Subbu is the Chief Engineer of cloud at eBay Inc. His team builds and operates a multi-tenant, geographically distributed, OpenStack-based private cloud. This cloud now serves 100% of PayPal web and mid-tier workloads, significant parts of eBay front-end and services, and thousands of users for their dev/test activities.

May 18, 2015:

Graeme cut his teeth in the financial services consulting industry by designing and developing real-time Trading, Risk and Clearing applications. He then joined NatWest Markets and J.P. Morgan in executive level roles within the Equity Derivatives business lines.

Graeme then moved to a Silicon Valley Startup to expand his skillset as V.P. of Engineering at Application Networks. His responsibility extended to Strategy, Innovation, Product Development, Release Management and Support to some of the biggest names in the Financial Services Sector.

For the last 10 years, he has held Divisional CIO roles at Citigroup and Deutsche Bank, both of which saw him responsible for Credit, Securitized and Emerging Market businesses.

Graeme moved back to a V.P. of Engineering role at TD Bank Group several years ago. He currently oversees all Infrastructure Innovation — everything from Mobile and Desktop to Database, Middleware and Cloud. His focus is on the transformational: software development techniques, infrastructure design patterns, and DevOps processes.

North American retail banking outfit TD Bank is using OpenStack among a range of other open source cloud technologies to help catalyse cultural change as it looks to reduce costs and technology redundancy, explained TD Bank group vice president of engineering Graeme Peacock.

TD Bank is one of Canada’s largest retail banks, having divested many of its investment banking divisions over the past ten years while buying up smaller American retail banks in a bid to offer cross-border banking services.

Peacock, who was speaking at the OpenStack Summit in Vancouver this week, said TD Bank is in the midst of a massive transition in how it procures, deploys and consumes technology. The bank aims to have about 80 per cent of its 4,000-application estate moved over to the cloud over the next five years.

“If they can’t build it on cloud they need to get my permission to obtain a physical server. Which is pretty hard to get,” he said.

But the company’s legacy of acquisition over the past decade has shaped the evolution of both the technology and systems in place at the bank as well as the IT culture and the way those systems and technologies are managed.

“Growing from acquisition means we’ve developed a very project-based culture, and you’re making a lot of transactional decisions within those projects. There are consequences to growing through acquisition – TD is very vendor-centric,” he explained.

“There are a lot of vendors here and I’m fairly certain we’ve bought at least one of everything you’ve ever made. That’s led to the landscape that we’ve had, which has lots of customisation. It’s very expensive and there is little reuse.”

Peacock said much of what the bank wants to do is fairly straightforward: moving off highly customised, expensive equipment and services and onto more open, standardised commodity platforms. OpenStack is but one infrastructure-centric tool helping the bank deliver on that goal (it’s using it to stand up an internal private cloud). But to reach its goals, the company also has to deal with other legacies of its recent string of acquisitions, including development teams that are still quite siloed.

In order to standardise and reduce the number of services the firm’s developers use, the bank created an engineering centre in Manhattan and selected a team of engineers and developers (currently numbering 30, expected to reach roughly 50 by the end of the year) spread between Toronto and New York City, all focused on helping it embrace a cloud-first, slimmed-down application landscape.

The centre and the central engineering team work with other development teams and infrastructure specialists across the bank, collecting feedback through fortnightly Q&As and feeding that back into the solutions being developed and the platforms being procured. Solving developer team fragmentation will ultimately help the bank move forward on this new path sustainably, he explained.

“When your developer community is so siloed you don’t end up adopting standards… you end up with 27 versions of Softcat. Which we have, by the way,” he said.

“This is a big undertaking, and one that has to be continuous. Business lines also have to move with us to decompose those applications and help deliver against those commitments,” he added.

While OpenStack may have been conceived as an open source multi-tenant IaaS, its future success will mainly come from hosted and on-premises private cloud deployments. Yes, there are many pockets of success with regional or vertical-focused public clouds based on OpenStack, but none with the scale of AWS or the growth of Microsoft Azure. Hewlett Packard Enterprise shuttered its OpenStack Helion-based public cloud, and Rackspace shifted engineering resources away from its own public cloud. Rackspace, the service provider with the largest share of OpenStack-related revenue, says its private cloud is growing in the ‘high double digits.’ Currently, 56% of OpenStack’s service-provider revenue total is public cloud-based, but we expect private cloud will account for a larger portion over the next few years.

October 21, 2015:

Over the past several years, HP has built its strategy on the idea that a hybrid infrastructure is the future of enterprise IT. In doing so, we have committed to helping our customers seamlessly manage their business across traditional IT and private, managed or public cloud environments, allowing them to optimize their infrastructure for each application’s unique requirements.

The market for hybrid infrastructure is evolving quickly. Today, our customers are consistently telling us that in order to meet their full spectrum of needs, they want a hybrid combination of efficiently managed traditional IT and private cloud, as well as access to SaaS applications and public cloud capabilities for certain workloads. In addition, they are pushing for delivery of these solutions faster than ever before.

With these customer needs in mind, we have made the decision to double-down on our private and managed cloud capabilities. For cloud-enabling software and solutions, we will continue to innovate and invest in our HP Helion OpenStack® platform. HP Helion OpenStack® has seen strong customer adoption and now runs our industry-leading private cloud solution, HP Helion CloudSystem, which continues to deliver strong double-digit revenue growth and win enterprise customers. On the cloud services side, we will focus our resources on our Managed and Virtual Private Cloud offerings. These offerings will continue to expand, and we will have some very exciting announcements on these fronts in the coming weeks.

Public cloud is also an important part of our customers’ hybrid cloud strategy, and our customers are telling us that the lines between all the different cloud manifestations are blurring. Customers tell us that they want the ability to bring together multiple cloud environments under a flexible and enterprise-grade hybrid cloud model. In order to deliver on this demand with best-of-breed public cloud offerings, we will move to a strategic, multiple partner-based model for public cloud capabilities, as a component of how we deliver these hybrid cloud solutions to enterprise customers.

Therefore, we will sunset our HP Helion Public Cloud offering on January 31, 2016. As we have before, we will help our customers design, build and run the best cloud environments suited to their needs – based on their workloads and their business and industry requirements.

To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments. To enable this flexibility, we are helping customers build cloud-portable applications based on HP Helion OpenStack® and the HP Helion Development Platform. In Europe, we are leading the Cloud28+ initiative that is bringing together commercial and public sector IT vendors and EU regulators to develop common cloud service offerings across 28 different countries.

For customers who want access to existing large-scale public cloud providers, we have already added greater support for Amazon Web Services as part of our hybrid delivery with HP Helion Eucalyptus, and we have worked with Microsoft to support Office 365 and Azure. We also support our PaaS customers wherever they want to run our Cloud Foundry platform – in their own private clouds, in our managed cloud, or in a large-scale public cloud such as AWS or Azure.

All of these are key elements in helping our customers transform into a hybrid, multi-cloud IT world. We will continue to innovate and grow in our areas of strength, we will continue to help our partners and to help develop the broader open cloud ecosystem, and we will continue to listen to our customers to understand how we can help them with their entire end-to-end IT strategies.

December 1, 2015:

London, U.K. – December 1, 2015 – Today at Hewlett Packard Enterprise Discover, HPE and Microsoft Corp. announced new innovation in Hybrid Cloud computing through Microsoft Azure, HPE infrastructure and services, and new program offerings. The extended partnership appoints Microsoft Azure as a preferred public cloud partner for HPE customers while HPE will serve as a preferred partner in providing infrastructure and services for Microsoft’s hybrid cloud offerings.

“Hewlett Packard Enterprise is committed to helping businesses transform to hybrid cloud environments in order to drive growth and value,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise. “Public cloud services, like those Azure provides, are an important aspect of a hybrid cloud strategy and Microsoft Azure blends perfectly with HPE solutions to deliver what our customers need most.”

The partnering companies will collaborate across engineering and services to integrate innovative compute platforms that help customers optimize their IT environment, leverage new consumption models and accelerate their business further, faster.

“Our mission to empower every organization on the planet is a driving force behind our broad partnership with Hewlett Packard Enterprise that spans Microsoft Azure, Office 365 and Windows 10,” said Satya Nadella, CEO, Microsoft. “We are now extending our longstanding partnership by blending the power of Azure with HPE’s leading infrastructure, support and services to make the cloud more accessible to enterprises around the globe.”

Product Integration and Collaboration

HPE and Microsoft are introducing the first hyper-converged system with true hybrid cloud capabilities, the HPE Hyper-Converged 250 for Microsoft Cloud Platform System Standard. Bringing together industry-leading HPE ProLiant technology and Microsoft Azure innovation, the jointly engineered solution brings Azure services to customers’ datacenters, empowering users to choose where and how they want to leverage the cloud. An Azure management portal enables business users to self-deploy Windows and Linux workloads, while ensuring IT has central oversight. Azure services provide reliable backup and disaster recovery, and with HPE OneView for Microsoft System Center, customers get an integrated management experience across all system components. HPE offers hardware and software support, installation and startup services to customers to speed deployment to just a matter of hours, lower risk and decrease total cost of ownership. The CS 250 is available to order today.

As part of the expanded partnership, HPE will enable Azure consumption and services on every HPE server, which allows customers to rapidly realize the benefits of hybrid cloud.

Extended Support and Services to Simplify Cloud

HPE and Microsoft will create HPE Azure Centers of Excellence in Palo Alto, Calif. and Houston, Texas, to ensure customers have a seamless hybrid cloud experience when leveraging Azure across HPE infrastructure, software and services. Through the work at these centers, both companies will invest in continuing advancements in Hybrid IT and Composable Infrastructure.

Because Azure is a preferred provider of public cloud for HPE customers, HPE also plans to certify an additional 5,000 Azure Cloud Architects through its Global Services Practice. This will extend its Enterprise Services offerings to bring customers an open, agile hybrid cloud with improved security that integrates with Azure.

Partner Program Collaboration

Microsoft will join the HPE Composable Infrastructure Partner Program to accelerate innovation for the next-generation infrastructure and advance the automation and integration of Microsoft System Center and HPE OneView orchestration tools with today’s infrastructure.

As of the Mitaka release, two new Gold Members were added to the OpenStack Foundation: UnitedStack and EasyStack, both from China. Other service providers and vendors shared their customer momentum and product updates with 451 Research during the summit. Among the highlights are:

AT&T has cobbled together a DevOps team from 67 different organizations, in order to transform into a software company. 

All of GoDaddy’s new servers are going into its OpenStack environment. It is also using the Ironic (bare metal) project and exploring containers on OpenStack. 

SwiftStack built a commercial product with an AWS-like consumption model using the Swift (object storage) project. It now has over 60 customers, including eBay, PayPal, Burton Snowboards and Ancestry.com. 

OVH is based in France and operates a predominantly pan-European public cloud. It added Nova compute in 2014, and currently has 75PB of Swift storage.

Unitas Global says OpenStack-related enterprise engagements are a large part of its 100% Y/Y growth. While it does not contribute code, it is helping to develop operational efficiencies and working with Canonical to deploy ‘vanilla’ OpenStack using Juju charms. Tableau Software is a client. 

DreamHost is operating an OpenStack public cloud, DreamCompute, and is a supporter of the Astara (network orchestration) project. It claims 2,000 customers for DreamCompute and 10,000 customers for its object storage product. 

Platform9 is a unique OpenStack-as-a-SaaS startup with 20 paying customers. Clients bring their own hardware, and the software provides the management functions and takes care of patching and upgrades.

AppFormix is a software startup focused on cloud operators and application developers that has formed a licensing agreement with Rackspace. Its analytics and capacity-planning dashboard software will now be deployed on Rackspace’s OpenStack private cloud. The software also works with Azure and AWS. 

Tesora is leveraging the Trove project to offer DBaaS. The vendor built a plug-in for Mirantis’ Fuel installer. The collaboration claims to make commercial, open source relational and NoSQL databases easier for administrators to deploy.

April 25, 2016:

OpenStack + AT&T Innovation = AT&T Integrated Cloud.

AT&T’s network has experienced enormous growth in traffic in the last several years and the trend continues unabated. Our software defined network initiative addresses the escalating traffic demands and brings greater agility and velocity to delivering features to end customers. The underlying fabric of this software defined network is AT&T Integrated Cloud (AIC).

Sorabh Saxena, AT&T’s SVP of Software Development & Engineering, will share several use cases that will highlight a multi-dimensional strategy for delivering an enterprise & service provider scale cloud. The use cases will illustrate OpenStack as the foundational element of AIC, AT&T solutions that complement it, and how it’s integrated with the larger AT&T ecosystem.

As the Senior Vice President of Software Development and Engineering at AT&T, Sorabh Saxena is leading AT&T’s transformation to a software-based company. Towards that goal, he is leading the development of platforms that include AT&T’s Integrated Cloud (AIC), API, Data, and Business Functions. Additionally, he manages delivery and production support of AT&T’s software defined network.

Sorabh and his organization are also responsible for technology solutions and architecture for all IT projects, AT&T Operation Support Systems and software driven business transformation programs that are positioning AT&T to be a digital first, integrated communications company with a best in class cost structure. Sorabh is also championing a cultural shift with a focus on workforce development and software & technology skills development.

Through Sorabh and his team’s efforts associated with AIC, AT&T is implementing an industry leading, highly complex and massively scaled OpenStack cloud. He is an advocate of OpenStack and his organization contributes content to the community that represents the needs of large enterprises and communication services providers.

AUSTIN, Texas — The OpenStack Austin Summit kicked off day one by awarding the Superuser Award to AT&T.

NTT, winners of the Tokyo edition, passed the baton onstage to the crew from AT&T.

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond. They have almost too many OpenStack accomplishments to list — read their full application here.

Sorabh Saxena gives a snapshot of AT&T’s OpenStack projects during the keynote.

The OpenStack Foundation launched the Superuser Awards to recognize, support and celebrate teams of end-users and operators that use OpenStack to meaningfully improve their businesses while contributing back to the community.

The legacy telecom is in the top 20 percent for upstream contributions with plans to increase this significantly in 2016.

It’s time for the community to determine the winner of the Superuser Award to be presented at the OpenStack Austin Summit. Based on the nominations received, the Superuser Editorial Advisory Board conducted the first round of judging and narrowed the pool to four finalists.

Now, it’s your turn.

The team from AT&T is one of the four finalists. Review the nomination criteria below, check out the other nominees and cast your vote before the deadline, Friday, April 8 at 11:59 p.m. Pacific Daylight Time. Voting is limited to one ballot per person.

How has OpenStack transformed your business?

AT&T is a legacy telco which is transforming itself by adopting virtual infrastructure and a software defined networking focus in order to compete in the market and create value for customers in the next five years and beyond.

Virtualization and virtual network functions (VNFs) are of critical importance to the Telecom industry to address growth and agility. AT&T’s Domain 2.0 Industry Whitepaper released in 2013 outlines the need as well as direction.

AT&T chose OpenStack as the core foundation of their cloud and virtualization strategy

OpenStack has reinforced AT&T’s open source strategy and strengthened our dedication to the community as we actively promote and invest resources in OpenStack

AT&T is committing staff and resources to drive the vision and innovation in the OpenStack and OPNFV communities to help drive OpenStack as the default cloud orchestrator for the Telecom industry

AT&T as a founding member of the ETSI ISG network functions virtualization (NFV) helped drive OpenStack as the cloud orchestrator in the NFV platform framework. OpenStack was positioned as the VIM – Virtual Infrastructure Manager. This accelerated the convergence of the Telco industry onto OpenStack.

OpenStack serves as a critical foundation for AT&T’s software-defined networking (SDN) and NFV future and we take pride in the following:

AT&T has deployed 70+ OpenStack (Juno- and Kilo-based) clouds globally, which are currently operational. Of the 70+ clouds, 57 are production application and network clouds.

AT&T plans 90% growth, going to 100+ production application and network clouds by the end of 2016.

AT&T connects more than 14 million wireless customers via virtualized networks, with significant subscriber cut-over planned again in 2016

AT&T controls 5.7% of our network resources (29 Telco production grade VNFs) with OpenStack, with plans to reach 30% by the end of 2016 and 75% by 2020.

AT&T trained more than 100 staff in OpenStack in 2015

AT&T plans to expand its community team of 50+ employees in 2016.

As the chosen cloud platform, OpenStack enabled AT&T in the following SDN- and NFV-related initiatives:

Our recently announced 5G field trials in Austin

Re-launch of unlimited data to mobility customers

Launch of AT&T Collaborate, a next-generation communication tool for enterprises

Provisioning of a Network on Demand platform to more than 500 enterprise customers

Connected Car and MVNO (Mobile Virtual Network Operator)

Mobile Call Recording

Internally we are virtualizing our control services such as DNS, NAT, NTP, DHCP, RADIUS, firewalls, load balancers, and probes for fault and performance management.

Since 2012, AT&T has developed all of our significant new applications in a cloud native fashion hosted on OpenStack. We also architected OpenStack to support legacy apps.

AT&T’s OpenStack clouds currently host over 15,000 VMs worldwide, with further significant growth expected in 2016-17

AT&T’s OpenStack integrated Orchestration framework has resulted in a 75% reduction in turnaround time for requests for virtual resources

AT&T plans to move 80% of its legacy IT into the OpenStack-based virtualized cloud environment in the coming years

A uniform set of APIs exposed by OpenStack allows AT&T business units to leverage a “develop-once-run-everywhere” set of tools.

OpenStack also supports AT&T’s strategy of adopting best-of-breed solutions at five-nines reliability for:

NFV

Internet-scale storage service

SDN

Putting all AT&T’s workloads on one common platform

Deployment Automation: OpenStack modules have enabled AT&T to cost-effectively manage the OpenStack configuration in an automated, holistic fashion.
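The “develop-once-run-everywhere” idea mentioned above can be sketched concretely. The helper below is hypothetical (as are the image, flavor, and network IDs), but it illustrates the point: because every OpenStack cloud exposes the same Compute API, a single server specification can be built once and submitted unchanged to any region, for example via the openstacksdk Python library:

```python
# Sketch of "develop once, run everywhere" against OpenStack's uniform APIs.
# The helper name and all IDs below are hypothetical illustrations.

def build_server_spec(name, image_id, flavor_id, network_id):
    """Build a cloud-agnostic server specification.

    Because every OpenStack cloud exposes the same Compute API, the
    same spec works unchanged against any region or provider.
    """
    return {
        "name": name,
        "image_id": image_id,
        "flavor_id": flavor_id,
        "networks": [{"uuid": network_id}],
    }

spec = build_server_spec("web-01", "img-1234", "m1.small", "net-5678")

# With real clouds configured in clouds.yaml, the same spec could be
# submitted to any of them via openstacksdk (not executed here):
#
#   import openstack
#   for cloud in ("region-a", "region-b"):   # hypothetical cloud names
#       conn = openstack.connect(cloud=cloud)
#       conn.compute.create_server(**spec)
```

The portable part is the spec itself; only the connection target changes per cloud, which is what lets one toolchain serve every business unit.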

Using OpenStack Heat, AT&T pushed rolling updates and incremental changes across 70+ OpenStack clouds. Doing it manually would have taken many more people and a much longer schedule.
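To make the rolling-update idea concrete, here is a toy sketch of the canary-then-waves batching pattern such a push typically follows. The function and site names are hypothetical simplifications, not AT&T’s actual tooling:

```python
# Toy sketch of a canary-first rolling update across many clouds.
# Function and site names are hypothetical, not AT&T's real tooling.

def rolling_batches(clouds, canary_size=1, batch_size=10):
    """Yield update batches: a small canary wave first, then larger waves.

    A change is applied to each batch in turn, and operators verify
    health before moving on, limiting the blast radius of a bad update.
    """
    canary, rest = clouds[:canary_size], clouds[canary_size:]
    if canary:
        yield canary
    for i in range(0, len(rest), batch_size):
        yield rest[i:i + batch_size]

clouds = [f"aic-site-{n:02d}" for n in range(1, 71)]  # 70 sites
batches = list(rolling_batches(clouds))
# One canary site, then seven waves of up to 10 sites each
```

Each batch would correspond to a `heat stack-update` (or equivalent) run against that wave of clouds, with health checks gating progression to the next wave.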

Using OpenStack Fuel as a pivotal component in its cloud deployments, AT&T accelerates the otherwise time-consuming, complex, and error-prone process of deploying, testing, and maintaining various configuration flavors of OpenStack at scale. AT&T was a major contributor to the Fuel 7.0 and Fuel 8.0 requirements.

OpenStack has been a pivotal driver of AT&T’s overall culture shift. AT&T as an organization is in the midst of a massive culture shift from a legacy telco to a company where new skills, techniques and solutions are embraced.

OpenStack has been a key driver of this transformation in the following ways:

AT&T is now building 50 percent of all software on open source technologies

Allowing for the adoption of a DevOps model that creates a more unified team working towards a better end product

Development transitioned from waterfall to cloud-native CI/CD methodologies

Developers continue to support OpenStack and make their applications cloud-native whenever possible.

How has the organization participated in or contributed to the OpenStack community?

AT&T was the first U.S. telecom service provider to sign up for and adopt the then early stage NASA-spawned OpenStack cloud initiative, back in 2011.

AT&T has been an active OpenStack contributor since the Bexar release.

AT&T has been a Platinum Member of the OpenStack Foundation since its origins in 2012 after helping to create its bylaws.

Toby Ford, AVP of AT&T Cloud Technology, has provided vision, technology leadership, and innovation to the OpenStack ecosystem as an OpenStack Foundation board member since late 2012.

AT&T is a founding member of the ETSI NFV ISG and OPNFV.

AT&T has invested in building an OpenStack upstream contribution team with 25 current employees and a target for 50+ employees by the end of 2016.

During the early years of OpenStack, AT&T brought many important use-cases to the community. AT&T worked towards solving those use-cases by leveraging various OpenStack modules, in turn encouraging other enterprises to have confidence in the young ecosystem.

AT&T drove a number of Telco-grade blueprint contributions to past releases of OpenStack.

AT&T is proud to drive OpenStack adoption by sharing knowledge back to the OpenStack community in the form of these summit sessions at the upcoming Austin summit:

Telco Cloud Requirements: What VNFs Are Asking For

Using a Service VM as an IPv6 vRouter

Service Function Chaining

Technology Analysis Perspective

Deploying Lots of Teeny Tiny Telco Clouds

Everything You Ever Wanted to Know about OpenStack At Scale

Valet: Holistic Data Center Optimization for OpenStack

Gluon: An Enabler for NFV

Among the Cloud: Open Source NFV + SDN Deployment

AT&T: Driving Enterprise Workloads on KVM and vCenter using OpenStack as the Unified Control Plane

Striving for High-Performance NFV Grid on OpenStack. Why you, and every OpenStack community member should be excited about it

OpenStack at Carrier Scale

AT&T is the “first to market” with deployment of OpenStack-supported, carrier-grade Virtual Network Functions. We provide the community with integral data, information, and first-hand knowledge of the trials and tribulations experienced deploying NFV technology.

AT&T ranks in the top 20 percent of all companies in terms of upstream contribution (code, documentation, blueprints), with plans to increase this significantly in 2016.

Commits: 1200+

Lines of Code: 116,566

Change Requests: 618

Patch Sets: 1490

Draft Blueprints: 76

Completed Blueprints: 30

Filed Bugs: 350

Resolved Bugs: 250

What is the scale of the OpenStack deployment?

AT&T’s OpenStack-based AIC is deployed at 70+ sites across the world. Of those, 57 are production app and network clouds.

AT&T plans 90% growth, going to 100+ production app and network clouds by end of 2016.

AT&T connects more than 14 million of its 134.5 million wireless customers via virtualized networks, with significant subscriber cutover planned again in 2016.

AT&T runs 5.7% of its network resources (29 telco production-grade VNFs) on OpenStack, with a goal of reaching the high 80s (in percent) by the end of 2016.

Production workloads also include AT&T’s Connected Car, Network on Demand, and AT&T Collaborate among many more.

How is this team innovating with OpenStack?

AT&T and AT&T Labs are leveraging OpenStack to innovate with Containers and NFV technology.

Containers are a key part of AT&T’s Cloud Native Architecture. AT&T chairs the Open Container Initiative (OCI) to drive standardization around container formats.

AT&T is leading the effort to improve Nova and Neutron’s interface to SDN controllers.

Margaret Chiosi, an early design collaborator on Neutron and ETSI NFV, now serves as President of OPNFV. AT&T is utilizing its position with OPNFV to help shape the future of OpenStack and NFV. OpenStack has enabled AT&T to innovate extensively.

The following recent unique workloads would not be possible without the SDN and NFV capabilities that OpenStack enables:

* Our recent announcements of 5G field trials in Austin
* Re-launch of unlimited data for mobility customers
* Launch of AT&T Collaborate
* Network on Demand platform serving more than 500 enterprise customers
* Connected Car and MVNO (Mobile Virtual Network Operator)
* Mobile Call Recording

New services from AT&T Entertainment Group (DirecTV) that will use OpenStack-based cloud infrastructure in the coming years:

* NFL Sunday Ticket with up to 8 simultaneous games
* DirecTV streaming service without need for a satellite dish

In summary – the innovation with OpenStack is not just our unique workloads, but also supporting them together under the same framework, management systems, development/test and CI/CD pipelines, and deployment automation toolsets.

Swisscom has one of the largest in-production industry standard Platform as a Service built on OpenStack. Their offering is focused on providing an enterprise-grade PaaS environment to customers worldwide and with various delivery models based on Cloud Foundry and OpenStack. Swisscom embarked early on the OpenStack journey to deploy their app cloud partnering with Red Hat, Cloud Foundry, and PLUMgrid. With services such as MongoDB, MariaDB, RabbitMQ, ELK, and an object storage, the PaaS cloud offers what developers need to get started right away. Join this panel for take-away lessons on Swisscom’s journey, the technologies, partnerships, and developers who are building apps everyday on Swisscom’s OpenStack cloud.

Their offering focuses on providing an enterprise-grade PaaS environment to customers worldwide, with various delivery models based on Cloud Foundry and OpenStack. Swisscom, Switzerland’s leading telecom provider, embarked early on the OpenStack journey to deploy its app cloud, partnering with Red Hat, Cloud Foundry and PLUMgrid.

Superuser interviewed Marcel Härry, chief architect, PaaS, at Swisscom and member of the Technical Advisory Board of the Cloud Foundry Foundation, to find out more.

How are you using OpenStack?

OpenStack has allowed us to rapidly develop and deploy our Cloud Foundry-based PaaS offering, as well as to rapidly develop new features within SDN and containers. OpenStack is the true enabler for rapid development and delivery.

For example: within half a year of the initial design and setup, we had already delivered two production instances of our PaaS offering, built on multiple OpenStack installations at different sites. Today we are running multiple production deployments for high-profile customers, who further develop their SaaS offerings using our platform. Additionally, we provide the infrastructure for numerous lab and development instances. These environments allow us to harden and stabilize new features, maintaining a rapid pace of innovation while still ensuring a solid environment.

We are running numerous OpenStack stacks, all limited – by design – to a single region, and single availability zone. Their size ranges from a handful of compute nodes, to multiple dozens of compute nodes, scaled based on the needs of the specific workloads. Our intention is not to build overly large deployments, but rather to build multiple smaller stacks, hosting workloads that can be migrated between environments. These stacks are hosting thousands of VMs, which in turn are hosting tens of thousands of containers to run production applications or service instances for our customers.

What kinds of applications or workloads are you currently running on OpenStack?

We’ve been using OpenStack for almost three years now as our infrastructure orchestrator. Swisscom built its Elastic Cloud on top of OpenStack. On top of this we run Swisscom’s Application Cloud, or PaaS, built on Cloud Foundry with PLUMgrid as the SDN layer. Together, the company’s clouds deliver IaaS to IT architects, SaaS to end users and PaaS to app developers among other services and applications. We mainly run our PaaS/Cloud Foundry environment on OpenStack as well as the correlated managed services (i.e. a kind of DBaaS, Message Service aaS etc.) which are running themselves in Docker containers.

What challenges have you faced in your organization regarding OpenStack, and how did you overcome them?

The learning curve for OpenStack is pretty steep. When we started three years ago, almost no reference architectures were available, especially none meeting enterprise-grade requirements such as dual-site operation, high availability (HA) capabilities on various levels, and so forth. In addition, we went directly to the SDN and SDS levels of implementation, which was a big but, at the end of the day, very successful step.

What were your major milestones?

Swisscom’s go-live for its first beta environment was in spring 2014, the go-live for internal development use at Swisscom was in spring 2015, and the go-live for its public Cloud Foundry environment, fully hosted on OpenStack, was in fall 2015. The go-live for enterprise-grade and business-critical workloads on top of our stack, from various multinational companies in verticals like finance and industry, is spring 2016, and Swisscom recently announced Swiss Re as one of its first large enterprise cloud customers.

What have been the biggest benefits to your organization as a result of using OpenStack?

Pluggability and multi-vendor interoperability (for instance with SDN like PLUMgrid or SDS like ScaleIO) to avoid vendor lock-in and create a seamless system. OpenStack enabled Swisscom to experiment with deployments using a DevOps model and environment to deploy and develop applications faster. It simplified the move from PoC to production environments and enabled us to easily scale out services using a distributed, cluster-based architecture.

What advice do you have for companies considering a move to OpenStack?

It’s hard in the beginning, but it’s really worth it. Be wise when you select your partners and vendors; this will help you be online in a very short amount of time. Think about driving your internal organization towards a DevOps model to be ready for the first deployments, as well as enabling your firm to change deployment models (e.g. going cloud-native) for your workloads when needed.

How do you participate in the community?

This year’s Austin event was our second OpenStack Summit where we provided insights into our deployment and architecture, contributing back to the community in terms of best practices as well as real-world production use-cases. Furthermore, we directly contribute patches and improvements to various OpenStack projects. Some of these patches have already been accepted, while a few are in the pipeline to be further polished for publishing. Additionally, we work very closely with our vendors – Red Hat, EMC, ClusterHQ/Flocker, PLUMgrid, as well as the Cloud Foundry Foundation – to further improve their integration and stability within the OpenStack project. For example, we worked closely with Flocker on their Cinder-based driver to orchestrate persistence among containers. Furthermore, we have provided many bug reports through our vendors and worked together with them on fixes, which have then made their way back into the OpenStack community.

What’s next?

We have a perfect solution for non-persistent container workloads for our customers. We are constantly evolving this product and are working especially hard to meet the enterprise and finance verticals’ requirements when it comes to infrastructure orchestration with OpenStack.

Härry spoke about OpenStack in production at the recent Austin Summit, along with Pere Monclus of PLUMgrid, Chip Childers of the Cloud Foundry Foundation, Chris Wright of Red Hat and analyst Rosalyn Roseboro.

BEIJING, May 10, 2016 /PRNewswire/ — In 2015, the Chinese IT superpower Lenovo chose EasyStack to build an OpenStack-based enterprise cloud platform to carry out their “Internet Strategy”. In six months, this platform has evolved into an enterprise-level OpenStack production environment of over 3000 cores with data growth peaking at 10TB/day. It is expected that by the end of 2016, 20% of the IT system will be migrated onto the Cloud.

OpenStack is the foundation for the cloud and has arguably matured in overseas markets. In China, OpenStack practices worth noting often come from the relatively new category of Internet companies. Though it has long been marketed as “enterprise-ready,” traditional industries still tend to hold back from OpenStack. This article aims to turn that perception around by presenting an OpenStack practice from the Chinese IT superpower Lenovo, detailing its journey of transformation, in both technology and business, to a private cloud built on OpenStack. Although OpenStack will still largely be a carrier for Internet businesses, Lenovo plans to migrate 20% of its IT system onto the cloud before the end of 2016, a much-applauded step forward.

Be it the traditional PC or the cellphone, technology is evolving fast amidst the move towards mobile and social networking, and the competition is fierce. In response to rapidly changing market dynamics, the Lenovo Group moved from a product-oriented to a user-oriented strategy, one that can only be supported by an agile, flexible, and scalable enterprise-level cloud platform capable of rapid iteration. After thorough consideration and careful evaluation, Lenovo chose OpenStack as the basis for the enterprise cloud platform carrying out this “Internet Strategy.” After six months in practice, the platform has evolved into an enterprise-level OpenStack production environment of over 3,000 cores with data growth peaking at 10 TB/day. It is expected that 20% of the IT system will be migrated onto the cloud by the end of 2016.

Transformation and Picking the Right Cloud

In the past, internal IT at Lenovo had always been channel- and key-client-oriented, with a traditional architecture consisting of IBM Power, AIX, PowerVM, DB2 and, more recently, VMware virtualization. In the move towards becoming an Internet company, such traditional architecture was far from able to support the user and business volume brought by the B2C model. Cost-wise, Lenovo’s large-scale deployments of commercial solutions were reliable but complex to scale and extremely expensive.

This traditional IT architecture was also inadequate in terms of operational efficiency, security, and compliance, and unable to support Lenovo’s transition towards eCommerce and mobile business. In 2015, Lenovo’s IT entered a stage of infrastructure revamp, needing a cloud computing platform to support new businesses.

To find the right makeup for the cloud platform, Lenovo performed meticulous analyses and comparisons of mainstream x86 virtualization technologies, private cloud platforms, and public cloud platforms. After evaluating stability, usability, openness, and ecosystem vitality and comprehensiveness, Lenovo deemed OpenStack cloud platform technology able to fulfill its enterprise needs and decided to use OpenStack as the infrastructural cloud platform supporting its constant business innovation.

Disaster recovery plans for virtual machines, cloud hard drives, and databases were considered early in the OpenStack architectural design to ensure prompt switchover when needed to maintain business availability.

To ensure high availability and improve the cloud platform’s system efficiency, Lenovo designed a hyper-converged physical architecture: powerful, well-configured servers combine compute, storage, and network in a single box, and OpenStack integrates them into a single resource pool, placing compute nodes and storage nodes on the same physical node.

At the hardware layer, two-way X3650 servers and four-way ThinkServer RQ940 servers form the backbone. Each node has five SSDs and 12 SAS hard drives making up the storage module. The SSDs act not only as a storage cache but also as a high-performance storage resource pool; virtual machines access the distributed storage to achieve high availability.

Lenovo had to resolve a number of problems and overcome numerous hurdles to elevate OpenStack to the enterprise-level.

Compute

Here, Lenovo employed high-density virtual machine deployment. At the base is KVM virtualization technology, optimized in multiple ways to maximize physical server performance, isolating CPU, memory, and other hardware resources under the converged compute-storage architecture. The outcome is the ability to run over 50 VMs smoothly and efficiently on every two-way compute node.

In a cloud environment, high availability is encouraged through software solutions rather than hardware. Yet some traditional applications still depend on a single host server. For such applications, which cannot achieve high availability on their own, Lenovo used compute HA technology: faults are detected through various methods, and virtual machines on a failed physical machine are migrated to other available physical machines when needed. The entire process is automated, minimizing business disruptions caused by physical machine breakdowns.
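The compute HA process described above, detecting a failed hypervisor and evacuating its VMs to healthy hosts with spare capacity, can be sketched as a simple planning loop. This is a hypothetical illustration, not Lenovo's actual code; the host and VM records are stand-ins for what a real deployment would pull from Nova and its fault-detection agents.

```python
def pick_target(vm_ram, hosts, failed):
    """Choose a healthy host with enough free RAM for the VM."""
    candidates = [h for h in hosts if h["name"] != failed and h["alive"]]
    candidates.sort(key=lambda h: h["free_ram"], reverse=True)
    for host in candidates:
        if host["free_ram"] >= vm_ram:
            return host
    return None

def evacuate(failed, vms, hosts):
    """Plan the migration of every VM off the failed host."""
    plan = []
    for vm in vms:
        if vm["host"] != failed:
            continue
        target = pick_target(vm["ram"], hosts, failed)
        if target is None:
            plan.append((vm["name"], None))      # no capacity left: flag it
            continue
        target["free_ram"] -= vm["ram"]          # reserve capacity on the target
        plan.append((vm["name"], target["name"]))
    return plan

hosts = [
    {"name": "node1", "alive": False, "free_ram": 0},
    {"name": "node2", "alive": True, "free_ram": 64},
    {"name": "node3", "alive": True, "free_ram": 32},
]
vms = [
    {"name": "vm-a", "host": "node1", "ram": 16},
    {"name": "vm-b", "host": "node1", "ram": 48},
]
print(evacuate("node1", vms, hosts))  # both VMs land on node2
```

In production the detection side is the hard part (distinguishing a dead host from a network partition); the planning step itself is as simple as shown.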

Network

Network Isolation

Different NICs, switches, or VLANs isolate the various networks (stand-alone OpenStack management, virtual production, storage, public, and PXE networks), avoiding interference, increasing overall bandwidth, and enabling finer network control.

Multi-Public Network

Achieve network agility through multiple public networks to better manage security policies. Examples include the public networks from China Unicom and China Telecom, as well as the office network.

Network Model and Optimization

The VLAN network model integrates well with the traditional data center network; its packet processing is then optimized to improve throughput, bringing virtual machine bandwidth closer to that of the physical network.

Dual-NIC Bonding and Multi-Switch

Achieve high availability of the physical network by bonding dual NICs connected to different switches.

Network Node HA

Achieve public network load balancing, high availability, and high performance through multiple network nodes. At the router level, an active/standby methodology provides HA, ensured by independent network router monitoring services.

Storage

The Lenovo OpenStack cloud platform uses Ceph as the unified storage backend: Glance images, Nova virtual machine system disks, and Cinder volumes are all backed by Ceph RBD. By modifying OpenStack code to use Ceph’s copy-on-write cloning, virtual machines can be deployed within seconds.
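Copy-on-write cloning is what makes second-scale deployment possible: a clone shares the parent image's blocks and records only the blocks written after cloning, so "copying" a multi-gigabyte image is effectively free. The toy model below (illustrative only, not Ceph's implementation) shows the mechanism:

```python
# Toy model of copy-on-write image cloning: the clone stores only deltas
# and reads everything else from the shared, immutable parent image.

class Image:
    def __init__(self, blocks):
        self.blocks = blocks            # block index -> data

class Clone:
    def __init__(self, parent):
        self.parent = parent            # shared, never copied
        self.delta = {}                 # only blocks written since the clone

    def read(self, i):
        return self.delta.get(i, self.parent.blocks.get(i))

    def write(self, i, data):
        self.delta[i] = data            # copy-on-write: O(1) per write

base = Image({0: "bootloader", 1: "kernel", 2: "rootfs"})
vm_disk = Clone(base)                   # "deploying" the VM copies no data
vm_disk.write(2, "rootfs+app")          # VM-local change lands in the delta

print(vm_disk.read(0))                  # served from the parent
print(vm_disk.read(2))                  # served from the delta
```

In Ceph terms, the parent is a protected RBD snapshot of the Glance image, and each Nova disk is an RBD clone of that snapshot.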

With Ceph as the unified storage backend, its performance is undoubtedly a key factor in whether an enterprise’s critical applications can be virtualized and moved to the cloud. In a hyper-converged deployment architecture where compute and storage run alongside each other, storage optimization must not only maximize storage performance but also ensure isolation between storage and compute resources to maintain system stability. Lenovo conducted bottom-up, layer-by-layer optimization of the IO stack:

Using SSDs as the Ceph OSD journal improves overall cluster IO performance, meeting the performance demands of critical businesses (for example, the eCommerce system’s databases) while balancing performance against cost. SSDs offer low power consumption, fast response, high IOPS, and high throughput. The Ceph journal is accessed by multiple threads; replacing mechanical hard drives with SSDs fully exploits their random-access speed and high IO throughput. Appropriately tuning the IO scheduling strategy to suit SSDs further lowers overall IO latency.

Purposeful Planning

Plan the number of Ceph OSDs on each hyper-converged node according to the server’s virtual machine density, and pre-allocate CPU and memory resources. Tools such as cgroups and taskset can be used to isolate the resources of QEMU-KVM and Ceph OSD processes.
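The pre-allocation step can be sketched as a core-partitioning plan: reserve a fixed set of cores for the OSD daemons and leave the rest to the guests. This is a hypothetical sketch (core counts and the `taskset` invocations are illustrative, not Lenovo's configuration):

```python
# Partition a node's CPU cores between Ceph OSD daemons and QEMU-KVM guests,
# producing the CPU lists one would pass to `taskset -c`.

def partition_cores(total_cores, num_osds, cores_per_osd):
    reserved = num_osds * cores_per_osd
    osd_cores = list(range(reserved))
    vm_cores = list(range(reserved, total_cores))
    # One contiguous slice of cores per OSD daemon.
    osd_sets = [osd_cores[i * cores_per_osd:(i + 1) * cores_per_osd]
                for i in range(num_osds)]
    return osd_sets, vm_cores

def cpu_list(cores):
    return ",".join(str(c) for c in cores)   # e.g. "0,1" for `taskset -c 0,1`

osd_sets, vm_cores = partition_cores(total_cores=32, num_osds=4, cores_per_osd=2)
for n, cores in enumerate(osd_sets):
    print(f"taskset -c {cpu_list(cores)} ceph-osd --id {n}")
print(f"cores left for VMs: {cpu_list(vm_cores)}")
```

The same split can be enforced more robustly with cgroup `cpuset` controllers, which also pin memory on NUMA systems; `taskset` is shown here only because the text names it.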

Parameter Tuning

Ceph performance can be effectively improved by tuning parameters such as the FileStore queue settings and the number of OSD op threads. Additional tuning can be done by iteratively testing to find the most suitable parameters for the current hardware environment.
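The iterative testing mentioned above is essentially a grid search: benchmark each candidate combination and keep the best. The sketch below is illustrative; the scoring function is a stand-in for a real benchmark run (e.g. `rados bench` or fio against the cluster), and the parameter names are examples, not a recommended tuning set.

```python
import itertools

def run_benchmark(params):
    # Stand-in scoring function; a real run would measure cluster IOPS
    # with these values written to ceph.conf and the OSDs restarted.
    return params["osd_op_threads"] * 100 - params["filestore_queue_max_ops"] * 0.1

def tune(grid):
    """Try every combination in the grid and return the best one."""
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = run_benchmark(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {
    "osd_op_threads": [2, 4, 8],
    "filestore_queue_max_ops": [50, 500, 5000],
}
print(tune(grid))
```

With more than a handful of parameters the grid explodes combinatorially; in practice one sweeps one or two parameters at a time, holding the rest at defaults.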

By employing dedicated low-latency fiber-optic links, data can be stored simultaneously in local backup centers and asynchronously in long-distance centers, maximizing data security.

AD Integration

In addition, Lenovo has integrated its own business needs into the OpenStack enterprise cloud platform. As a mega company with tens of thousands of employees, it needs Active Directory (AD) for authorization so that staff accounts do not have to be set up individually. Through customized development by its partner, Lenovo successfully integrated AD into its OpenStack enterprise cloud platform.

Overall Outcomes

Lenovo’s transformation towards being “Internet-driven” was able to begin after the build-up of this OpenStack enterprise cloud platform. eCommerce, big data and analytics, IM, online mobile phone support, and other Internet-based businesses are all supported by this cloud platform. Judging from the team’s feedback, the Lenovo OpenStack enterprise cloud platform is functioning as expected.

In building this OpenStack-based enterprise cloud platform, Lenovo chose EasyStack, the leading Chinese OpenStack company, to provide professional implementation and consulting services, helping to build the initial platform and fostering a number of OpenStack experts. For Lenovo, community compatibility and continuous upgrades, as well as experience delivering services at the enterprise level, were the main factors when choosing an OpenStack business partner.

When an open-source database written in Java, running primarily in production on Linux, becomes THE solution for Microsoft’s cloud platform (i.e., Azure) in the fully distributed, highly secure, “always on” transactional database space, we should take special note. Such is the case with DataStax:

July 15, 2015: Building the intelligent cloud. Scott Guthrie’s keynote at the Microsoft Worldwide Partner Conference 2015; the DataStax-related segment is only 7 minutes long.

SCOTT GUTHRIE, EVP of Microsoft Cloud and Enterprise: What I’d like to do is invite three different partners now on stage, one an ISV, one an SI, and one a managed service provider to talk about how they’re taking advantage of our cloud offerings to accelerate their businesses and make their customers even more successful.

First, and I think, you know, being able to take advantage of all of these different capabilities that we now offer.

Now, the first partner I want to bring on stage is DataStax. DataStax delivers an enterprise-grade NoSQL offering based on Apache Cassandra. And they enable customers to build solutions that can scale across literally thousands of servers, which is perfect for a hyper-scale cloud environment.

And one of the customers that they’re working with is First American, who are deploying a solution on Microsoft Azure to provide richer insurance and settlement services to their customers.

What I’d like to do is invite Billy Bosworth, the CEO of DataStax, on stage to join me to talk about the partnership that we’ve had and some of the great solutions that we’re building together. Here’s Billy. (Applause.)

SCOTT GUTHRIE: So tell us a little bit about DataStax and the technology you guys build.

BILLY BOSWORTH: Sure. At DataStax, we deliver Apache Cassandra in a database platform that is really purpose-built for the new performance and availability demands that are being generated by today’s Web, mobile and IOT applications.

Now, that probably sounds like a lot of other database vendors out there as well. But, Scott, we have something that’s really different and really important to us and our customers, and that’s the notion of being always on. And when you talk about “always on” and transactional databases, things can get pretty complicated pretty fast, as you well know.

The reason for that is in an always-on world, the datacenter itself becomes a single point of failure. And that means you have to build an architecture that is going to be comprehensive and include multiple datacenters. That’s tough enough with almost any other piece of the software stack. But for transactional databases, that is really problematic.

Fortunately, we have a masterless architecture in Apache Cassandra that allows us to have DataStax enterprise scale in a single datacenter or across multiple datacenters, and yet at the same time remain operationally simple. So that’s really the core of what we do.

SCOTT GUTHRIE: Is the always-on angle the key differentiator in terms of the customer fit with Azure?

BILLY BOSWORTH: So if you think about deployment to multiple datacenters, especially and including Azure, it creates an immediate benefit. Going back to your hybrid clouds comment, we see a lot of our customers that begin their journey on premises. So they take their local datacenter, they install DataStax Enterprise, it’s an active database up and running. And then they extend that database into Azure.

Now, when I say that, I don’t mean they do so for disaster recovery or failover, it is active everywhere. So it is taking full read-write requests on premises and in Azure at the same time.

So if you lose connectivity to your physical datacenter, then the Azure active nodes simply take over. And that’s great, and that solves the always-on problem.

But that’s not the only thing that Azure helps to solve. Our applications, because of their nature, tend to drive incredibly high throughput. So for us, hundreds of millions or even tens and hundreds of billions of transactions a day is actually quite common.

You guys are pretty good, Scott, but I don’t think you’ve changed the laws of physics yet. And so the way that you get that kind of throughput with unbelievable performance demands, because our customers demand millisecond and microsecond response times, is you push the data closer to the end points. You geographically distribute it.

Now, what our customers are realizing is they can try and build 19 datacenters across the world, which I’m sure was really cheap and easy to do, or they can just look at what you’ve already done and turn to a partnership like ours to say, “Help us understand how we do this with Azure.”

So not only do you get the always-on benefit, which is critical, but there’s also a very important performance element to this type of architecture as well.

SCOTT GUTHRIE: Can you tell us a little bit about the work you did with First American on Azure?

BILLY BOSWORTH: Yeah. First American is a leading name in the title insurance and settlement services businesses. In fact, they manage more titles on more properties than anybody in the world.

Every title comes with an associated set of metadata. And that metadata becomes very important in the new way that they want to do business because each element of that needs to be transacted, searched, and done in real-time analysis to provide better information back to the customer in real time.

And so for that on the database side, because of the type of data and because of the scale, they needed something like DataStax Enterprise, which we’ve delivered. But they didn’t want to fight all those battles of the architecture that we discussed on their own, and that’s where they turned to our partnership to incorporate Microsoft Azure as the infrastructure with DataStax Enterprise running on top.

And this is one of many engagements that you know we have going on in the field that are really, really exciting and indicative of the way customers are thinking about transforming their business.

SCOTT GUTHRIE: So what’s it like working with Microsoft as a partner?

BILLY BOSWORTH: I tell you, it’s unbelievable. Or, maybe put differently, highly improbable that you and I are on stage together. I want you guys to think about this. Here’s the type of company we are. We’re an open-source database written in Java that runs primarily in production on Linux.

Now, Scott, Microsoft has a couple of pretty good databases, of which I’m very familiar from my past, and open source and Java and Linux haven’t always been synonymous with Microsoft, right?

So I would say the odds of us being on stage were almost none. But over the past year or two, the way that you guys have opened up your aperture to include technologies like ours — and I don’t just say “include.” His team has embraced us in a way that is truly incredible. For a company the size of Microsoft to make us feel the way we do is just remarkable given the fact that none of our technologies have been something that Microsoft has traditionally said is part of their family.

So I want to thank you and your team for all the work you’ve done. It’s been a great experience, but we are architecting systems that are going to drive businesses for the coming decades. And that is super exciting to have a partner like you engaged with us.

SCOTT GUTHRIE: Fantastic. Well, thank you so much for joining us on stage.

BILLY BOSWORTH: Thanks, Scott. (Applause.)
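The masterless, active-everywhere architecture Bosworth describes rests on consistent hashing: every node owns ranges of a token ring, and any of a key's replicas, in any datacenter, can serve reads and writes, so no datacenter is a single point of failure. A minimal sketch of that idea (illustrative only, not DataStax or Cassandra code; node names and virtual-node count are made up):

```python
import bisect
import hashlib

def token(s):
    """Deterministically map a string onto the hash ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=8):
        # Each node claims several virtual positions ("vnodes") on the ring.
        self.ring = sorted((token(f"{n}:{v}"), n)
                           for n in nodes for v in range(vnodes))
        self.tokens = [t for t, _ in self.ring]

    def replicas(self, key, rf=3):
        """Walk clockwise from the key's token, taking rf distinct nodes."""
        start = bisect.bisect(self.tokens, token(key))
        out = []
        for i in range(len(self.ring)):
            node = self.ring[(start + i) % len(self.ring)][1]
            if node not in out:
                out.append(node)
            if len(out) == rf:
                break
        return out

nodes = ["onprem-1", "onprem-2", "azure-1", "azure-2"]
ring = Ring(nodes)
print(ring.replicas("title:12345"))  # any of these nodes can take the write
```

Cassandra's actual NetworkTopologyStrategy additionally spreads the rf replicas across datacenters and racks, which is what lets the Azure nodes take over transparently when the on-premises datacenter loses connectivity.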

The typical data framework capabilities of DataStax are best understood via the following webinar, which presents Apache Spark as part of the complete data platform solution:
– Apache Cassandra is the leading distributed database in use at thousands of sites with the world’s most demanding scalability and availability requirements.
– Apache Spark is a distributed data analytics computing framework that has gained a lot of traction in processing large amounts of data in an efficient and user-friendly manner.
– The joining of both provides a powerful combination of real-time data collection with analytics.
After a brief overview of Cassandra and Spark, (Cassandra till 16:39, Spark till 19:25) this class will dive into various aspects of the integration (from 19:26).
August 19, 2015: Big Data Analytics with Cassandra and Spark by Brian Hess, Senior Product Manager of Analytics, DataStax

SANTA CLARA, CA – September 23, 2015 – (Cassandra Summit 2015) DataStax, the company that delivers Apache Cassandra™ to the enterprise, today announced a strategic collaboration with Microsoft to deliver Internet of Things (IoT), Web and mobile applications in public, private or hybrid cloud environments. With DataStax Enterprise (DSE), a leading fully-distributed database platform, available on Azure, Microsoft’s cloud computing platform, enterprises can quickly build high-performance applications that can massively scale and remain operationally simple across public and private clouds, with ease and at lightning speed.

PERSPECTIVES ON THE NEWS

“At Microsoft we’re focused on enabling customers to run their businesses more productively and successfully,” said Scott Guthrie, Executive Vice President, Cloud and Enterprise, Microsoft. “As more organizations build their critical business applications in the cloud, DataStax has proved to be a natural Azure partner through their ability to enable enterprises to build solutions that can scale across thousands of servers which is necessary in today’s hyper-scale cloud environment.”

“We are witnessing an increased adoption of DataStax Enterprise deployments in hybrid cloud environments, so closely aligning with Microsoft benefits any organization looking to quickly and easily build high-performance IoT, Web and mobile apps,” said Billy Bosworth, CEO, DataStax. “Working with a world-class organization like Microsoft has been an incredible experience and we look forward to continuing to work together to meet the needs of enterprises looking to successfully transition their business to the cloud.”

“As a leader in providing information and insight in critical areas that shape today’s business landscape, we knew it was critical to transform our back-end business processes to address scale and flexibility,” said Graham Lammers, Director, IHS. “With DataStax Enterprise on Azure we are now able to create a next-generation big data application to support the decision-making process of our customers across the globe.”

BUILD SIMPLE, SCALABLE AND ALWAYS-ON APPS AT RECORD SPEED

To address the ever-increasing demands of modern businesses transitioning from on-premise to hybrid cloud environments, the DataStax Enterprise on Azure on-demand cloud database solution provides enterprises with development- and production-ready Bring Your Own License (BYOL) DSE clusters that can be launched in minutes on the Microsoft Azure Marketplace using Azure Resource Manager (ARM) templates. This enables the building of high-performance IoT, Web and mobile applications that can predictably scale across global Azure data centers with ease and at remarkable speed. Additional benefits include:

Hybrid Deployment: Easily move DSE workloads between data centers, service providers and Azure, and build hybrid applications that leverage resources across all three.

Continuous Availability: DSE’s peer-to-peer architecture offers no single point of failure. DSE also provides maximum flexibility to distribute data where it’s needed most by replicating data across multiple data centers, the cloud and mixed cloud/on-premise environments.

MICROSOFT ENTERPRISE CLOUD ALLIANCE & FAST START PROGRAM

DataStax also announced it has joined Microsoft’s Enterprise Cloud Alliance, a collaboration that reinforces DataStax’s commitment to providing the best set of on-premises, hosted and public cloud database solutions in the industry. The goal of the Enterprise Cloud Alliance partner program is to create, nurture and grow a strong partner ecosystem across a broad set of enterprise cloud products. Through this alliance, DataStax and Microsoft are working together to create enhanced enterprise-grade offerings for the Azure Marketplace that reduce the complexities of deployment and provisioning through automated ARM scripting capabilities.

Additionally, as a member of Microsoft Azure’s Fast Start program, created to help users quickly deploy new cloud workloads, DataStax users receive immediate access to the DataStax Enterprise Sandbox on Azure for a hands-on experience testing DSE capabilities on Azure. The DataStax Enterprise Sandbox on Azure can be found here.

Cassandra Summit 2015, the world’s largest gathering of Cassandra users, is taking place this week, and Microsoft Cloud and Enterprise Executive Vice President Scott Guthrie, DataStax CEO Billy Bosworth, and Apache Cassandra Project Chair and DataStax Co-founder and CTO Jonathan Ellis will deliver the conference keynote at 10 a.m. PT on Wednesday, September 23. The keynote can be viewed at DataStax.com.

ABOUT DATASTAX

DataStax delivers Apache Cassandra™ in a database platform purpose-built for the performance and availability demands of IoT, Web and mobile applications. This gives enterprises a secure, always-on database technology that remains operationally simple when scaling in a single datacenter or across multiple datacenters and clouds.

With more than 500 customers in over 50 countries, DataStax is the database technology of choice for the world’s most innovative companies, such as Netflix, Safeway, ING, Adobe, Intuit and eBay. Based in Santa Clara, Calif., DataStax is backed by industry-leading investors including Comcast Ventures, Crosslink Capital, Lightspeed Venture Partners, Kleiner Perkins Caufield & Byers, Meritech Capital, Premji Invest and Scale Venture Partners. For more information, visit DataStax.com or follow us @DataStax.

DataStax is a California-based database management company. It offers an enterprise-grade NoSQL database, built on Apache Cassandra, that seamlessly and securely integrates real-time data. Databases built on Apache Cassandra offer more flexibility than traditional databases. Even in the case of calamities like floods and earthquakes, data remains available because it is replicated across multiple data centers. Cassandra is open-source software.

The Cassandra database was originally developed at Facebook (FB) to handle its enormous volumes of data, building on distributed-systems technology published by Amazon (AMZN) and Google (GOOGL). Oracle’s MySQL (ORCL), Microsoft’s SQL Server (MSFT), and IBM’s DB2 (IBM) are the traditional databases in the market.

DataStax raised $106 million in September 2014 to expand its database operations. MongoDB Inc. and Couchbase Inc.—both open-source NoSQL database developers—raised $231 million and $115 million, respectively, in 2014. According to Market Research Media, a consultancy firm, spending on NoSQL technology in 2013 was less than $1 billion. It’s expected to reach $3.4 billion by 2020. This explains why this segment is attracting such huge investments.

Oracle’s dominance in the database market is uncertain

Oracle claims it’s the market leader in the relational database market, with a revenue share of 48.3%. In 2013, it launched Oracle Database 12c. According to Oracle, “Oracle Database 12c introduces a new multitenant architecture that simplifies the process of consolidating databases onto the cloud; enabling customers to manage many databases as one — without changing their applications.”

In July 2013, DataStax announced that dozens of companies had migrated from Oracle databases to DataStax databases. Customers cited scalability, disaster avoidance, and cost savings as reasons for switching. The rising popularity of DataStax databases jeopardizes Oracle’s dominant position in the database market.

Cassandra Summit is in high gear this week in Santa Clara, CA, representing the largest NoSQL event of its kind! This is the largest Cassandra Summit to date. With more than 7,000 attendees (both onsite and virtual), this is the first time the Summit is a three-day event, with over 135 speaking sessions. This is also the first time DataStax will debut a formalized Apache Cassandra™ training and certification program, in conjunction with O’Reilly Media. All incredibly exciting milestones!

We are excited to share another milestone. Yesterday, we announced our formal strategic collaboration with Microsoft. Dedicated DataStax and Microsoft teams have been collaborating closely behind the scenes for more than a year on product integration, QA testing, platform optimization, automated provisioning, characterization of DataStax Enterprise (DSE) on Azure, and more, to ensure product validation and a great customer experience for users of DataStax Enterprise on the Azure cloud. There is strong coordination across the two organizations – very close executive, field, and technical alignment – all critical components for a strong partnership.

This partnership is driven and shaped by our joint customers. Our customers oftentimes begin their journey with on-premises deployments of our database technology and then have a requirement to move to the cloud – Microsoft is a fantastic partner to help provide the flexibility of a true hybrid environment along with the ability to migrate to and scale applications in the cloud. Additionally, Microsoft has significant breadth in its data centers – customers can deploy in numerous Azure data centers around the globe, in order to be ‘closer’ to their end users. This is highly complementary to DataStax Enterprise software: as a peer-to-peer distributed database, our customers need to be close to their end users with their always-on, always-available enterprise applications.

To highlight a couple of joint customers and use cases, we have First American Title and IHS, Inc. First American is a leading provider of title insurance and settlement services with revenue over $5B. They ingest and store the largest number (billions) of real estate property records in the industry. Accessing, searching and analyzing large datasets to get relevant details quickly is the new way they want to do business – to provide better information back to their customers in real time and allow end users to easily search through the property records online. They chose DSE and Azure because of the large data requirements and the need to continue to scale the application.

A second great customer and use case is IHS, Inc., a $2B-revenue company that provides information and analysis to support the decision-making process of businesses and governments. This is a transformational project for IHS as they build out an ‘internet age’ parts catalog – a next-generation big data application using non-relational NoSQL technology – and they want to deploy in the cloud to bring the application to market faster.

As you can see, we are enabling enterprises to engage their customers like never before with always-on, highly available and distributed applications. Stay tuned for more as we move forward together in the coming months!

When Microsoft says that it is embracing Linux as a peer to Windows, it is not kidding. The company has created its own Linux distribution for switches used to build the Azure cloud, and it has embraced Spark in-memory processing and Cassandra as its data store for its first major open source big data project – in this case to help improve the quality of its Office365 user experience. And now, Microsoft is embracing Cassandra, the NoSQL data store originally created by Facebook when it could no longer scale the MySQL relational database to suit its needs, on the Azure public cloud.

Billy Bosworth, CEO at DataStax, the entity that took over steering development of and providing commercial support for Cassandra, tells The Next Platform that the deal with Microsoft has a number of facets, all of which should help boost adoption of the enterprise-grade version of Cassandra. But the key one is that the Global 2000 customers that DataStax wants to sell support and services to are already quite familiar with Windows Server in their datacenters and are looking to burst out to the Azure cloud on a global scale.

“We are seeing a rapidly increasing number of our customers who need hybrid cloud, keeping pieces of our DataStax Enterprise on premises in their own datacenters, and they also want to take pieces of that same live transactional data – not replication, but live data – and run it in the Azure cloud as well,” says Bosworth. “They have some unique capabilities, and one of the major requirements of customers is that even if they use cloud infrastructure, it still has to be distributed by the cloud provider. They can’t just run Cassandra in one availability zone in one region. They have to span data across the globe, and Microsoft has done a tremendous job of investing in its datacenters.”

With the Microsoft agreement, DataStax is now running its wares on the three big clouds, with Amazon Web Services and Google Compute Engine already certified to run the production-grade Cassandra. And interestingly enough, Microsoft is supporting the DataStax implementation of Cassandra on top of Linux, not Windows. Bosworth says that while Cassandra can be run on Windows servers, DataStax does not recommend putting DataStax Enterprise (DSE), the commercial release, on Windows. (It does have a few customers who do, nonetheless, and it supports them.) Bosworth adds that DataStax and the Cassandra community have been “working diligently” for the past year to get a Windows port of DSE completed and that there has been “zero pressure” for the Microsoft Azure team to run DSE on anything other than Linux.

It is important to make the distinction between running Cassandra and other elements of DSE on Windows and having optimized drivers for Cassandra for the .NET programming environment for Windows.

“All we are really talking about is the ability to run the back-end Cassandra on Linux or Windows, and to the developer, it is irrelevant what that back end is running on,” explains Bosworth. “This takes away some of that friction, and what we find is that on the back end, we just don’t find religious conviction about whether it should run on Windows or Linux, and this is different from five years ago. We sell mostly to enterprises, and we have not had one customer raise their hand and say they can’t use DSE because it does not run on Windows.”

What is more important is the ability to seamlessly put Cassandra on public clouds and spread transactional data around for performance and resiliency reasons – the same reasons that Facebook created Cassandra in the first place.

What Is In The Stack, Who Uses It, And How

The DataStax Enterprise distribution does not just include the Apache Cassandra data store, but has an integrated search engine that is API compatible with the open source Solr search engine and in-memory extensions that can speed up data accesses by anywhere from 30X to 100X compared to server clusters using flash SSDs or disk drives. The Cassandra data store can be used to underpin Hadoop, allowing it to be queried by MapReduce, Hive, Pig, and Mahout, and it can also underpin Spark and Spark Streaming as their data store if customers decide not to go with the Hadoop Distributed File System that is commonly packaged with a Hadoop distribution.

It is hard to say for sure how many organizations are running Cassandra today, but Bosworth reckons that it is on the order of tens of thousands worldwide, based on a number of factors. DataStax does not do any tracking of its DataStax Community edition because it wants a “frictionless download” like many open source projects have. (Developers don’t want software companies to see what tools they are playing with, even though they might love open source code.) DataStax provides free training for Cassandra, however, where it does keep track, and developers are consuming over 10,000 units of this training per month, so that probably indicates that the Cassandra installed base (including tests, prototypes, and production) is in the five figures.

DataStax itself has over 500 paying customers – now including Microsoft after its partner tried to build its own Spark-Cassandra cluster using open source code and decided that the supported versions were better thanks to the extra goodies that DataStax puts into its distro. DataStax has 30 of the Fortune 100 using its distribution of Cassandra in one form or another, and it is always for transactional, rather than batch analytic, jobs and in most cases also for distributed data stores that make use of the “eventual consistency” features of Cassandra to replicate data across multiple clusters. The company has another 600 firms participating in its startup program, which gives young companies freebie support on the DSE distro until they hit a certain size and can afford to start kicking some cash into the kitty.

The largest installation of Cassandra is running at Apple, which as we previously reported has over 75,000 nodes, with clusters ranging in size from hundreds to over 1,000 nodes and with a total capacity in the petabytes range. Netflix, which used to employ the open source Cassandra, switched to DSE last May and had over 80 clusters with more than 2,500 nodes supporting various aspects of its video distribution business. In both cases, Cassandra is very likely housing user session state data as well as feeding product or play lists and recommendations or doing faceted search for their online customers.

We are always intrigued to learn how customers are actually deploying tools such as Cassandra in production and how they scale it. Bosworth says that it is not uncommon to run a prototype project on as few as ten nodes, and when the project goes into production, to see it grow to dozens to hundreds of nodes. The midrange DSE clusters range from maybe 500 to 1,000 nodes and there are some that get well over 1,000 nodes for large-scale workloads like those running at Apple.

Unlike Hadoop, Cassandra does not generally run on disk-heavy nodes. Remember, the system was designed to support hot transactional data, not to become a lake with a mix of warm and cold data sifted in batch mode, as is still done with MapReduce running atop Hadoop.

The typical node configuration has changed as Cassandra has evolved and improved, says Robin Schumacher, vice president of products at DataStax. But before getting into feeds and speeds, Schumacher offered this advice. “There are two golden rules for Cassandra. First, get your data model right, and second, get your storage system right. If you get those two things right, you can do a lot wrong with your configuration or your hardware and Cassandra will still treat you right. Whenever we have to dive in and help someone out, it is because they have just moved over a relational data model or they have hooked their servers up to a NAS or a SAN or something like that, which is absolutely not recommended.”
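
Schumacher’s first golden rule, getting the data model right, usually means modeling around queries rather than porting a normalized relational schema. A hedged sketch of the shift (the table and column names below are our invention for illustration, not a schema from the article):

```python
# Illustrative only: a relational model normalizes and joins at read
# time; a Cassandra model denormalizes around a single query. The CQL
# below is a hypothetical example, not DataStax's or any customer's.
relational = """
SELECT o.id, o.total FROM orders o
JOIN users u ON u.id = o.user_id
WHERE u.id = ? ORDER BY o.created_at DESC;
"""

cassandra = """
CREATE TABLE orders_by_user (
    user_id    uuid,
    created_at timeuuid,
    order_id   uuid,
    total      decimal,
    PRIMARY KEY ((user_id), created_at)  -- partition key, then clustering key
) WITH CLUSTERING ORDER BY (created_at DESC);
"""

# The partition key routes the whole read to one replica set, and the
# clustering key pre-sorts rows on disk, so the query needs no join.
print("JOIN" in relational, "JOIN" in cassandra)  # -> True False
```

Moving a relational model over unchanged, the failure mode Schumacher describes, typically means scattering one logical read across many partitions instead of serving it from one.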

Only four years ago, because of the limitations in Cassandra (which like Hadoop and many other analytics tools is coded in Java), the rule of thumb was to put no more than 512 GB of disk capacity onto a single node. (It is hard to imagine such small disk capacities these days, with 8 TB and 10 TB disks.) The typical Cassandra node has two processors, with somewhere between 12 and 24 cores, and has between 64 GB and 128 GB of main memory. Customers who want the best performance tend to go with flash SSDs, although you can do all-disk setups, too.

Fast forward to today, and Cassandra can make use of a server node with maybe 5 TB of capacity for a mix of reads and writes, and if you have a write intensive application, then you can push that up to 20 TB. (DataStax has done this in its labs, says Schumacher, without any performance degradation.) Pushing the capacity up is important because it helps reduce server node count for a given amount of storage, which cuts hardware and software licensing and support costs. Incidentally, only a quarter of DSE customers surveyed said they were using spinning disks, but disk drives are fine for certain kinds of log data. SSDs are used for most transactional data, but the bits that are most latency sensitive should use DSE to store data on PCI-Express flash cards, which have lower latency.
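
The node-count savings are simple arithmetic. A quick sketch (our numbers: the 100 TB dataset is hypothetical, while the per-node densities are the figures quoted above):

```python
# Back-of-envelope math (ours, not DataStax's): how per-node storage
# density translates into cluster size for a fixed dataset, which is
# what drives the hardware and licensing savings described above.
import math

def nodes_needed(dataset_tb, per_node_tb):
    """Minimum node count to hold the dataset at the given density."""
    return math.ceil(dataset_tb / per_node_tb)

DATASET_TB = 100  # hypothetical 100 TB of transactional data

for per_node_tb in (0.5, 5, 20):  # old 512 GB rule vs. today's 5-20 TB
    print(per_node_tb, "TB/node ->", nodes_needed(DATASET_TB, per_node_tb), "nodes")
    # 200, 20 and 5 nodes respectively
```

A 10x to 40x reduction in node count for the same data is why the capacity increase matters even though per-node hardware got beefier.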

Schumacher says that in most cases, the commercial-grade DSE Cassandra is used for a Web or mobile application, and a DSE cluster is not set up for hosting multiple applications; rather, companies have a different cluster for each use case. (As you can see is the case with Apple and Netflix.) Most DSE shops make use of the eventual consistency replication features of Cassandra to span multiple datacenters with their data stores, spanning anywhere from eight to twelve datacenters with their transactional data.
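
Spanning that many datacenters is workable because Cassandra-style consistency is tunable per request. A small sketch of the quorum arithmetic (ours; the datacenter names and replication factors are hypothetical): a cluster-wide QUORUM needs a majority of all replicas, while a LOCAL_QUORUM needs a majority only in the requesting datacenter, which is what keeps multi-datacenter deployments fast:

```python
# Illustrative only: Cassandra-style quorum arithmetic for tunable
# consistency across datacenters (names and replication factors made up).

def quorum(replicas):
    """Majority of a replica set: floor(n / 2) + 1."""
    return replicas // 2 + 1

# Hypothetical keyspace replicated 3 ways in each of 3 datacenters.
rf_per_dc = {"dc-east": 3, "dc-west": 3, "dc-europe": 3}
total_replicas = sum(rf_per_dc.values())  # 9 replicas cluster-wide

print(quorum(total_replicas))        # QUORUM: 5 of 9 replicas must ack
print(quorum(rf_per_dc["dc-east"]))  # LOCAL_QUORUM: 2 of 3, local only
```

Writes accepted at LOCAL_QUORUM propagate to the other datacenters asynchronously, which is the “eventual consistency” replication the article refers to.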

Here’s where it gets interesting, and why Microsoft is relevant to DataStax. Only about 30 percent of the DSE installations are running on premises. The remaining 70 percent are running on public clouds. About half of DSE customers are running on Amazon Web Services, with the remaining 20 percent split more or less evenly between Google Compute Engine and Microsoft Azure. If DataStax wants to grow its business, the easiest way to do that is to grow along with AWS, Compute Engine, and Azure.

So Microsoft and DataStax are sharing their roadmaps and coordinating development of their respective wares, and will be doing product validation, benchmarking, and optimization. The two will be working on demand generation and marketing together, too, and aligning their compensation to sell DSE on top of Azure and, eventually, on top of Windows Server for those who want to run it on premises.

In addition to announcing the Microsoft partnership at the Cassandra Summit this week, DataStax is also releasing its DSE 4.8 stack, which includes certification for Cassandra to be used as the back end for the new Spark 1.4 in-memory analytics tool. DSE Search gets a performance boost for live indexing, and running DSE instances inside of Docker containers has been improved. The stack also includes Titan 1.0, the graph database overlay for Cassandra, HBase, and BerkeleyDB that DataStax got through its acquisition of Aurelius back in February. DataStax is also previewing Cassandra 3.0, which will include support for JSON documents, role-based access control, and a lot of little tweaks that will make the storage more efficient, DataStax says. It is expected to ship later this year.

Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group

As executive vice president of the Microsoft Cloud and Enterprise group, Scott Guthrie is responsible for the company’s cloud infrastructure, server, database, management and development tools businesses. His engineering team builds Microsoft Azure, Windows Server, SQL Server, Active Directory, System Center, Visual Studio and .NET. Prior to leading the Cloud and Enterprise group, Guthrie helped lead Microsoft Azure, Microsoft’s public cloud platform. Since joining the company in 1997, he has made critical contributions to many of Microsoft’s key cloud, server and development technologies and was one of the original founders of the .NET project. Guthrie graduated with a bachelor’s degree in computer science from Duke University. He lives in Seattle with his wife and two children.

Source: Microsoft

Well, I don’t know if I’d say there’s been a big change from that perspective. I mean, I think obviously we’ve been saying for a while this mobile-first, cloud-first…”devices and services” is maybe another way to put it. That’s been our focus as a company even before Satya became CEO. From a strategic perspective, I think we very much have been focused on cloud now for a couple of years. I wouldn’t say this now means, “Oh, now we’re serious about cloud.” I think we’ve been serious about cloud for quite a while.

… I think there’s certainly a first mover advantage that they’ve been able to benefit from. … In terms of where we’re at today, we’ve got about 57% of the Fortune 500 that are now deployed on Microsoft Azure. … Ultimately the way we think we do that [gain on the current leader] is by having a unique set of offerings and a unique point of view that is differentiated.

On the uniqueness of the Microsoft offering:

One is, we’re focused on and delivering a hyper-scale cloud platform with our Azure service that’s deployed around the world. …

… that geographic footprint, as well as the economies of scale that you get when you install and have that much capacity, puts you in a unique position from an economic and from a customer capability perspective …

Where I think we differentiate then, versus the other two, is around two characteristics. One is enterprise grade and the focus on delivering something that’s not only hyper-scale from an economic and from a geographic reach perspective but really enterprise-grade from a capability, support, and overall services perspective. …

The other thing that we have that’s fairly unique is a very large on-premises footprint with our existing server software and with our private cloud capabilities. …

July 22 (Bloomberg) — When Microsoft CEO Satya Nadella defined the future of his company in a memo to his 127,100 employees, he singled out the struggling Surface tablet as key to a future built around the cloud and productivity. Microsoft assembled an elite team of designers, engineers, and programmers to spend years holed up in Redmond, Washington to come up with a tablet to take on Apple, Samsung, and Amazon. Bloomberg’s Cory Johnson got an inside look at the Surface labs.

July 23 (Bloomberg) — Microsoft’s motion-detecting camera was thought to be a game changer for the video gaming world when it was launched in 2010. While appetite for it has since decreased, Microsoft sees the technology as vital in its broader offering as it explores other sectors like 3D mapping and live surgery. (Source: Bloomberg)

In this video, Pier 1 Imports discusses how it is using Microsoft Cloud technologies such as Azure Machine Learning to predict which product a customer might want to purchase next, helping to build a better relationship with customers. Learn more: http://www.azure.com/ml

http://cnet.co/1nOygqh Microsoft made a direct comparison between the Surface Pro 3 and the 13-inch MacBook Air, so CNET put the two head to head in its Prizefight series to settle the score.

Thank you. And, Amy, one quick question — or I guess Amy or Satya. We saw a significant acceleration this quarter in cloud revenue year-over-year. What's driving the acceleration: is this Office for the iPad, is this Azure? And how long do you think we can keep this going?

Mark, I will take it, and if Satya wants to add, obviously, he should do that. In general, I wouldn’t point to one product area. It was across Office 365, Azure and even CRM Online. I think there are some important dynamics you could point to, particularly in Office 365; over the course of the year, we saw an acceleration in moving the product down the market, into what we would call the mid-market and even small business. That’s a particular place I would tie back to some of the things Satya mentioned in the answer to your first question.

Improvements to analytics, improvements to understanding the usage scenarios, improving the product in real time, understanding trials, ease of use, ease of sign-up — all of these things afford us the ability to go to different categories, different geos and different segments. In addition, when we initially moved many of our customers to Office 365, they came on one workload; what we’ve increasingly seen is our ability to add more workloads and sell the entirety of the suite through that process. I also mentioned our increased ability in Azure to sell some of these higher-value services. So while I can only speak broadly about all of them, I would generally point to the strength of completing our product suite, our ability to enter new segments and our ability to sell new workloads.

The only thing I would add is that it’s the combination of our SaaS offerings like Dynamics and Office 365, a public cloud offering in Azure, but also our private and hybrid cloud infrastructure, which also benefits because the cloud runs on our servers. So it’s that combination which makes us both unique and reinforcing. And the best example is what we are doing with Azure Active Directory: the fact that somebody gets on-boarded to Office 365 means that tenant information is in Azure AD, and the fact that the tenant information is in Azure AD is what makes EMS, our Enterprise Mobility Suite, more attractive to a customer managing iOS, Android or Windows devices. That network effect is really now helping us a lot across all of our cloud efforts.

Excellent, thank you for the question, and a very nice quarter. First, to talk a little bit about the growth strategy of Nokia: you look to cut expenses pretty aggressively there, but this — particularly smartphones — is a very competitive marketplace. Can you tell us a little bit about the strategy for how you actually start to gain share with Lumia on a going-forward basis? And maybe give us an idea of what levels of share or unit volumes you are going to need to hit to get to that breakeven in FY16?

Let me start, and Amy, you can add. Overall, we are very focused on thinking about mobility share across the entire Windows family. I already talked in my remarks about how mobility for us goes beyond devices, but for this specific question I would say that we want to think about mobility as more than just one form factor of a mobile device, because I think that’s where the ultimate prize is.

But that said, even on a year-over-year basis we have seen increased volume for Lumia. It’s coming at the low end, in the entry smartphone market, and we are pleased with it. In many markets we now have over 10% share, and that is something we need to track country-by-country. And the key place where we are going to differentiate is looking at the productivity scenarios, or the digital work and life scenarios, that we can light up on our phone in unique ways.

When I can take my Office Lens app, use the camera on the phone to take a picture of anything, and have it automatically OCR-recognized and put into OneNote in searchable fashion, that’s a unique scenario. What we have done with Surface and PPI shows us that there is a lot more we can do with phones by thinking broadly about productivity. So this is not just about Word or Excel on your phone; it is about thinking about Cortana and Office Lens and those kinds of scenarios in compelling ways. And that’s what, at the end of the day, is going to drive our differentiation and higher-end Lumia phones.

And Keith, to answer your specific question regarding FY16: I think we’ve made the difficult choices to get the cost base to a place where we can deliver on the exact scenario Satya has outlined, and we do assume that we continue to grow our units through the year and into 2016 in order to get to breakeven.

Thanks. I’m wondering if you could talk about Office for a moment. I’m curious whether you think we’ve seen the worst for Office here with the consumer fall-off. With Office 365 growing and margins expanding — if you can look through the dynamics and give us a sense, do you think you have actually turned the corner there, and may we be seeing the worst in terms of Office growth and margins?

Rick, let me just start qualitatively in terms of how I view Office, the category, and how it relates to productivity broadly, and then I’ll have Amy speak specifically to margins and what we are seeing in terms of Office renewals, which I’m assuming is the question. First of all, I believe the category that Office is in, which is productivity broadly for people, groups as well as organizations, is something that we are investing in significantly and seeing significant growth in.

On one end you have new things that we are doing like Cortana. This is for individuals on new form factors like the phone, where it’s not about any one application but an intelligent agent that knows everything about my calendar, everything about my life, and tries to help me with my everyday tasks.

On the other end, there’s something like Delve, a completely new tool that takes enterprise search and makes it more like the Facebook news feed: it has a graph of all my artifacts, all my people, all my groups, and uses that graph to give me relevant information and discovery. Same thing with Power Q&A and Power BI, which are part of Office 365. So we have a pretty expansive view of how we look at Office and what it can do. So that’s the growth strategy, and now specifically on Office renewals.

And I would say in general, let me make two comments. In terms of Office on the consumer side, between what we sold on-prem as well as Home and Personal, we feel quite good, with attach continuing to grow and the value prop increasing. So I think that addresses the consumer portion.

On the commercial portion, we actually saw Office grow this quarter, as you said; I think that reflects the broader definition of the Office value prop that Satya spoke to, and we continued to see Office renewed in our enterprise agreements. So in general, I feel like we’re in a growth phase for that franchise.

Hi, thanks. Satya, I wanted to ask you about two statements that you made, one around responsibly making the market for Windows Phone, just kind of following on Keith’s question here. It’s a really competitive market, and it feels like ultimately you need to be a very, very meaningful share player in that market to have value for developers to leverage the universal apps that you’re talking about in the presentations you’ve given, and build in and so forth.

And I’m trying to understand how you can do both of those things at once. In terms of responsibly making the market for Windows Phone, it feels difficult given that your nearest competitors there are doing things that you might argue are irresponsible in terms of making their market, given that they monetize it in different ways.

Yes. One of the beauties of universal Windows apps is that they aggregate, for the first time for us, all of our Windows volume. The fact that even an app that runs with a mouse and keyboard on the desktop can be in the store, and that you can have the same app run in a touch-first, mobile-first way, gives developers the entire volume of Windows, which is 300-plus million units, as opposed to just our 4% share of mobile in the U.S. or 10% in some countries.

So that’s really why we are actively making sure that universal Windows apps are available and developers are taking advantage of them; we have great tooling. That’s the way we are going to be able to create the broadest opportunity, to your very point about developers getting an ROI for building for Windows. And that’s how I think we will do it in a responsible way.

Great. Thank you so much for your time. I wanted to ask a question about your comments, Satya, about combining the next version of Windows into one for all devices. You've got different SKU segmentations right now: you've got enterprise, you've got consumer, you've got the free offering for devices less than 9 inches that you mentioned earlier and recently announced. When you come out with this one version for all devices, how do you see this changing the go-to-market, and also the traditional SKU segmentation and pricing that we've seen in the past?

Yes. My statement, Heather, was more to do with the engineering approach. The reality is that we did not actually have one Windows; we had multiple Windows operating systems inside of Microsoft. We had one for phone, one for tablets and PCs, one for Xbox, one even for embedded. So we had many, many of these efforts. Now we have one team with a layered architecture that enables us to bring developers that collective opportunity with one store, one commerce system, one discoverability mechanism. It also allows us to scale the UI across all screen sizes; it allows us to create this notion of universal Windows apps and be coherent there.

So that's more what I was referencing. Our SKU strategy will remain by segment: we will have multiple SKUs for enterprises, we will have them for OEMs, we will have them for end users. We will be disclosing and talking about our SKUs as we get further along, but my statement was more to do with how we are bringing teams together to approach Windows as one ecosystem, very differently than we ourselves have done in the past.

Hi, good afternoon. Satya, you made some comments about harmonizing some of the different products across consumer and enterprise, and I was curious what your approach is to viewing your different hardware offerings, both in phone and with Surface, how your go-to-market may change around that, and also, since you decided to make the operating system for sub-9-inch devices free, how you see the value proposition and your ability to monetize that user base evolving over time?

Yes. The statement I made about bringing together our productivity applications across work and life is really to reflect the notion of dual use, because when I think about productivity, it doesn't separate out what I use as a tool for communication with my family from what I use to collaborate at work. That's why having one team that thinks about outlook.com as well as Exchange helps us think about that dual use. Same thing with files and OneDrive and OneDrive for Business, because we want the software to be smart about separating out the state, caring about IT control and data protection, while I as an end user get to have the experiences that I want. That's how we are thinking about harmonizing those digital life and work experiences.

On the hardware side, we will continue to build hardware that fits with these experiences. If I understand your question right, which is how we will differentiate our first-party hardware: we will build first-party hardware that creates categories, and a good example is what we have done with Surface Pro 3. In other places we have really changed the Windows business model to encourage a plethora of OEMs to build great hardware, and we are seeing that this holiday season; I think you will see a lot of value notebooks, you will see clamshells. So we will have the full price range of hardware offerings enabled by this new Windows business model.

And I think the last part was how we will monetize. Of course, we will again have a combination: we will have our OEM monetization, and some of these new business models are about monetizing on the back end with Bing integration as well as our services attach. That's the reason, fundamentally, why we have these zero-priced Windows SKUs today.

Day two of the Microsoft Build developer conference in San Francisco wrapped up with the company announcing 44 new services. Most of those are based on Microsoft Azure, its cloud computing platform that manages applications across data centers. CCTV's Mark Niu reports from San Francisco.

Mark Russinovich is a Technical Fellow in the Windows Azure Group at Microsoft working on Microsoft's cloud platform. He is a widely recognized expert in operating systems, distributed systems, and cybersecurity. In this keynote from #ChefConf 2014, he gives an overview of Microsoft Azure and a demonstration of the integration between Azure and Chef.

Then here is a fast talk and Q&A on Azure with Scott Guthrie after his keynote presentation at BUILD 2014: Cloud Cover Live – Ask the Gu! [jlongo62 YouTube channel, published on April 21, 2014]

With Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group

2. Microsoft Azure Momentum on the Market

The day began with Scott Guthrie, Executive Vice President, Microsoft Cloud and Enterprise group, touting Microsoft's progress with Azure over the last 18 months, since:

… we talked about our new strategy with Azure and our new approach, a strategy that enables me to use both infrastructure as a service and platform as a service capabilities together, a strategy that enables developers to use the best of the Windows ecosystem and the best of the Linux ecosystem together, and one that delivers unparalleled developer productivity and enables you to build great applications and services that work with every device …

Last year … shipped more than 300 significant new features and releases

… we’ve also been hard at work expanding the footprint of Azure around the world. The green circles you see on the slide here represent Azure regions, which are clusters of datacenters close together, and where you can go ahead and run your application code. Just last week, we opened two new regions, one in Shanghai and one in Beijing. Today, we’re the only global, major cloud provider that operates in mainland China. And by the end of the year, we’ll have more than 16 public regions available around the world, enabling you to run your applications closer to your customers than ever before.

More than 57 percent of the Fortune 500 companies are now deployed on Azure.

Customers run more than 250,000 public-facing websites on Azure, and we now host more than 1 million SQL databases on Azure.

More than 20 trillion objects are now stored in the Azure storage system. We have more than 300 million users, many of them — most of them, actually, enterprise users, registered with Azure Active Directory, and we process now more than 13 billion authentications per week.

We have now more than 1 million developers registered with our Visual Studio Online service, which is a new service we launched just last November.

Let’s go beyond the big numbers, though, and look at some of the great experiences that have recently launched and are using the full power of Azure and the cloud.

“Titanfall” was one of the most eagerly anticipated games of the year, and had a very successful launch a few weeks ago. “Titanfall” delivers an unparalleled multiplayer gaming experience, powered using Azure.

Let’s see a video of it in action, and hear what the developers who built it have to say.

Developers from Respawn Studios and Xbox discuss how cloud computing helps take Titanfall to the next level.

One of the key bets the developers of "Titanfall" made was to run all game sessions in the cloud. In fact, you can't play the game without the cloud, and that bet really paid off.

As you heard in the video, it enables much, much richer gaming experiences. Much richer AI experiences. And the ability to tune and adapt the game as more users use it.

To give you a taste of the scale, "Titanfall" had more than 100,000 virtual machines deployed and running on Azure on launch day, which is an unparalleled size for a game launch, and the reviews of the game have been absolutely phenomenal.

Another amazing experience that recently launched and was powered using Azure was the Sochi Olympics delivered by NBC Sports.

NBC used Azure to stream all of the games both live and on demand to both Web and mobile devices. This was the first large-scale live event that was delivered entirely in the cloud with all of the streaming and encoding happening using Azure.

Traditionally, with live encoding, you typically run in an on-premises environment because it’s so latency dependent. With the Sochi Olympics, Azure enabled NBC to not only live encode in the cloud, but also do it across multiple Azure regions to deliver high-availability redundancy.

More than 100 million people watched the online experience, and more than 2.1 million viewers watched concurrently during the U.S. versus Canada men's hockey match alone, a new world record for online HD streaming.

… RICK CORDELLA [Senior Vice President and General Manager of NBC Sports Digital]: The company bets about $1 billion on the Olympics each time it goes off. And we have 17 days to recoup that investment. Needless to say, there is no safety net when it comes to putting this content out there for America to enjoy. We need to make sure that content is out there, that it's quality, that our advertisers and advertisements are being delivered to it. There really is no going back if something goes wrong. …

The following post is from Scott Guthrie, Executive Vice President, Cloud and Enterprise Group, Microsoft.

On Thursday at Build in San Francisco, we took an important step by unveiling a first-of-its-kind cloud environment within Microsoft Azure that provides a fully integrated cloud experience, bringing together cross-platform technologies, services and tools that enable developers and businesses to innovate with enterprise-grade scalability at startup speed. Announced today, our new Microsoft Azure Preview [Management] Portal is an important step forward in delivering our promise of the cloud without complexity.

When cloud computing was born, it was hailed as the solution that developers and business had been waiting for – the promise of a quick and easy way to get more from your business-critical apps without the hassle and cost of infrastructure. But as the industry transitions toward mobile-first, cloud-first business models and scenarios, the promise of “quick and easy” is now at stake. There’s no question that developing for a world that is both mobile-first and cloud-first is complicated. Developers are managing thousands of virtual machines, cobbling together management and automation solutions, and working in unfamiliar environments just to make their apps work in the cloud – driving down productivity as a result.

Many cloud vendors tout the ease and cost savings of the cloud, but they leave customers without the tools or capabilities to navigate the complex realities of cloud computing. That’s why today we are continuing down a path of rapid innovation. In addition to our groundbreaking new Microsoft Azure Preview [Management] Portal, we announced several enhancements our customers need to fully tap into the power of the cloud. These include:

In addition, the company announced several new milestones in Visual Studio Online and .NET that give developers access to the most complete platform and tools for building in the cloud. Thursday’s announcements are part of Microsoft’s broader vision to erase the boundaries of cloud development and operational management for customers.

“Developing for a mobile-first, cloud-first world is complicated, and Microsoft is working to simplify this world without sacrificing speed, choice, cost or quality,” said Scott Guthrie, executive vice president at Microsoft. “Imagine a world where infrastructure and platform services blend together in one seamless experience, so developers and IT professionals no longer have to work in disparate environments in the cloud. Microsoft has been rapidly innovating to solve this problem, and we have taken a big step toward that vision today.”

One simplified cloud experience

The new Microsoft Azure Preview [Management] Portal provides a fully integrated experience that will enable customers to develop and manage an application in one place, using the platform and tools of their choice. The new portal combines all the components of a cloud application into a single development and management experience. New components include the following:

Simplified Resource Management. Rather than managing standalone resources such as Microsoft Azure Web Sites, Visual Studio Projects or databases, customers can now create, manage and analyze their entire application as a single resource group in a unified, customized experience, greatly reducing complexity while enabling scale. Today, the new Azure Manager is also being released through the latest Azure SDK for customers to automate their deployment and management from any client or device.

Integrated billing. A new integrated billing experience enables developers and IT pros to take control of their costs and optimize their resources for maximum business advantage.

Gallery. A rich gallery of applications and services from Microsoft and the open source community, this integrated marketplace of free and paid services enables customers to leverage the ecosystem to be more agile and productive.

Visual Studio Online. Microsoft announced key enhancements through the Microsoft Azure Preview [Management] Portal, available Thursday. This includes Team Projects supporting greater agility for application lifecycle management and the lightweight editor code-named “Monaco” for modifying and committing Web project code changes without leaving Azure. Also included is Application Insights, an analytics solution that collects telemetry data such as availability, performance and usage information to track an application’s health. Visual Studio integration enables developers to surface this data from new applications with a single click.

…

Building an open cloud ecosystem

Showcasing Microsoft’s commitment to choice and flexibility, the company announced new open source partnerships with Chef and Puppet Labs to run configuration management technologies in Azure Virtual Machines. Using these community-driven technologies, customers will now be able to more easily deploy and configure in the cloud. In addition, today Microsoft announced the release of Java Applications to Microsoft Azure Web Sites, giving Microsoft even broader support for Web applications.

… Bill Staples then came on stage to show off the new Azure [management] portal design and features. Bill walked through a number of the new innovations in the portal, such as improved UX, app insights, "blade" views [the "blade" term is used for the dropdown that allows a drilldown], etc. A screen shot of the new portal is shown below.

Bill also walked through the comprehensive analytics (such as compute and billing) that are now available on the portal. He also walked through “Application Insights,” which is a great way to instrument your code in both the portal and in your code with easy-to-use, pre-defined code snippets. He completed his demo walkthrough by showing the Azure [management] portal as a “NOC” [Network Operations Center] view on a big-screen TV.

BILL STAPLES at [1:43:39]: Now, to conclude the operations part of this demo, I wanted to show you an experience for how the new Azure Portal works on a different device. You’ve seen it on the desktop, but it works equally well on a tablet device, that is really touch friendly. Check it out on your Surface or your iPad, it works great on both devices.

But we’re thinking as well if you’ve got a big-screen TV or a projector lying around your team room, you might want to think about putting the Microsoft Azure portal as your own personal NOC.

In this case, I've asked the Office developer team if we could have access to their live site log. So they made me promise not to hit the stop button or the delete button, which I promised.

[1:44:24] This is actually the Office developer site log. And you can see it's got almost 10 million hits already today, running on Azure Web Sites. So very high traffic.

They’ve customized it to show off the browser usage on their website. Imagine we’re in a team Scrum with the Office developer guys and we check out, you know, how is the website doing? We’ve got some interesting trends here.

In fact, there was a spike of sessions it looks like going on about a week ago. And page views, that’s kind of a small part. It would be nice to know which page it was that spiked a week ago. Let’s go ahead and customize that.

This screen is kind of special because it has a touch screen. So let's make that automatically expand there. Now we see a bigger view. Wow, that was a really big spike last week. What page was that? We can click into it. We get the full navigation experience, same as on the desktop, as well as, oh, look at that. There's a really popular blog post that happened about a week ago. What was that? Something about announcing Office on the iPad you love. Makes sense, huh? So we can see the Azure Portal in action here as the Office developer team might imagine it. [1:45:44]

The last thing I want to show is the Azure Gallery.

We populated the gallery with all of the first-party Microsoft Azure services, as well as the [services from] great partners that we’ve worked with so far in creating this gallery.

And what you’re seeing right here is just the beginning. We’ve got the core set of DevOps experiences built out, as well as websites, SQL, and MySQL support. But over the coming months, we’ll be integrating all of the developer and IT services in Microsoft as well as the partner services into this experience.

Let me just conclude by reminding us what we've seen. We've seen a first-of-its-kind experience from Microsoft that fuses our world-class developer services together with Azure to provide an amazing DevOps experience where you can enjoy the entire lifecycle of development, deployment, operations, gathering analytics, and iterating right here in one experience.

We’ve seen an application-centric experience that brings together all the dev platform and infrastructure services you know and love into one common shell. And we’ve seen a new application model that you can describe declaratively. And through the command line or programmatically, build out services in the cloud with tremendous ease. [1:47:12]

Today, at Build, we unveiled a new Azure [Management] Portal experience we are building. I want to give you some insights into the work that the VS Online team is doing to help with it. I'm not on the Azure team and am no expert on how they'd like to describe it to the world, so please take any comments I make here about the new Azure portal as my perspective on it and not necessarily an official one.

Bill Staples first presented to me, almost a year ago, an idea of creating a new portal experience for Azure designed to be an optimal experience for DevOps. It would provide everything a DevOps team needs to do modern cloud-based development: capabilities to provision dev and test resources; development and collaboration capabilities; build, release and deployment capabilities; application telemetry and management capabilities; and more. Pretty quickly it became clear to me that if we could do it, it would be awesome: an incredibly productive and easy way for devs to do soup-to-nuts app development.

What we demoed today (and made available via http://portal.azure.com) is the first incarnation of that. My team (the VS Online team) has worked very hard over the past many months with the Azure team to build the beginnings of the experience we hope to bring to you. It's very early and it's nowhere near done, but it's definitely something we'd love to start getting some feedback on.

For now, it’s limited to Azure websites, SQL databases and a subset of the VS Online capabilities. If you are a VS Online/TFS user, think of this as a companion to Visual Studio, Visual Studio Online and all of the tools you are used to. When you create a team project in the Azure portal, it’s a VS Online Team Project like any other and is accessible from the Azure portal, the VS Online web UI, Visual Studio, Eclipse and all the other ways your Visual Studio Online assets are available. For now, though, there are a few limitations – which we are working hard to address. We are in the middle of adding Azure Active Directory support to Visual Studio Online and, for a variety of reasons, chose to limit the new portal to only work with VS Online accounts linked to Azure Active Directory.

The best way to ensure this is just to create a new Team Project and a new VS Online account from within the new Azure portal. You will need to be logged in to the Azure portal with an identity known to your Azure Active Directory tenant, and to add new users, rather than adding them directly in Visual Studio Online, you will add them through Azure Active Directory. One of the ramifications of this, for now, is that you can't use an existing VS Online account in the new portal; you must create a new one. Clearly that's a big limitation and one we are working hard to remove. We will enable you to link existing VS Online accounts to Active Directory; we just don't have it yet. Stay tuned.

Brian Keller talks with Jonah Sterling and Vishal Joshi about the new Microsoft Azure portal preview. This Preview portal is a big step forward in the journey toward integrated DevOps tools, technologies, and cloud services. See how you can deliver and scale business-ready apps for every platform more easily and rapidly, using what you already know and whatever toolset you like most.

4. New Azure features: IaaS, web, mobile and data announcements

[IaaS] First up, let’s look at some of the improvements we’re making with our infrastructure features and some of the great things we’re enabling with virtual machines.

Azure enables you to run both Windows and Linux virtual machines in the cloud. You can run them as stand-alone servers, or join them together in a virtual network, including one that you can optionally bridge to an on-premises networking environment.

This week, we're making it even easier for developers to create and manage virtual machines directly in Visual Studio, without having to leave the IDE: you can now create, destroy, manage and debug any number of VMs in the cloud. (Applause.)

Prior to today, it was possible to create reusable VM image templates, but you had to write scripts and manually attach things like storage drives to them. Today, we're releasing support that makes it super-easy to capture images that can contain any number of storage drives. Once you have such an image, you can very easily create any number of VM instances from it, really fast and really easy. (Applause.)

Starting today, you can also now easily configure VM images using popular frameworks like Puppet, Chef, and our own PowerShell DSC tools. These tools enable you to avoid having to create and manage lots of separate VM images. Instead, you can define common settings and functionality using modules that can cut across every type of VM you use.

You can also create modules that define role-specific behavior, and all these modules can be checked into source control and they can also then be deployed to a Puppet Master or Chef server.

And one of the things we’re doing this week is making it incredibly easy within Azure to basically spin up a server farm and be able to automatically deploy, provision and manage all of these machines using these popular tools.
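The module idea described above can be sketched in a few lines. This is a hypothetical illustration of the concept behind Puppet/Chef-style configuration management, not any real tool's API: shared configuration is defined once as reusable modules, and role-specific modules layer on top, so you never maintain a separate golden image per VM type.

```python
# Hypothetical sketch of configuration "modules" (names are invented):
# a base module applies to every VM, role modules add role-specific settings.

def base_module(vm):
    # Common settings shared by every VM in the farm.
    vm["settings"].add("monitoring-agent")
    vm["settings"].add("security-baseline")

def web_role_module(vm):
    # Role-specific behavior, kept in its own module (and in source control).
    vm["settings"].add("web-server")

def provision_vm(role_modules):
    vm = {"settings": set()}
    for module in [base_module] + list(role_modules):
        module(vm)  # each module contributes its settings
    return vm

web_vm = provision_vm([web_role_module])
db_vm = provision_vm([])

assert "web-server" in web_vm["settings"]
assert "web-server" not in db_vm["settings"]
assert "monitoring-agent" in db_vm["settings"]  # base module cuts across all VMs
```

Because the modules, not the images, carry the configuration, spinning up a whole server farm is just running `provision_vm` once per machine with the right role list.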

…

We're also excited to announce the general availability of our auto-scale service, as well as a bunch of great virtual networking capabilities, including point-to-site VPN support going GA, new dynamic routing, subnet migration, and static internal IP addresses. And we think the combination of this really gives you a very flexible environment, as you saw, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud.

So we think infrastructure as a service is super-flexible, and it really kind of enables you to manage your environments however you want.

We also, though, provide prebuilt services and runtime environments that you can use to assemble your applications as well, and we call these platform as a service [PaaS] capabilities.

One of the benefits of these prebuilt services is that they enable you to focus on your application and not have to worry about the infrastructure underneath it.

We handle patching, load balancing, high availability and auto scale for you. And this enables you to work faster and do more.

What I want to do is just spend a little bit of time talking through some of these platform as a service capabilities, so we’re going to start talking about our Web functionality here today.

[Web] One of the most popular PaaS services that we now have on Windows Azure is something we call the Azure Web Sites service. This enables you to very easily deploy Web applications written in a variety of different languages and host them in the cloud. We support .NET, Node.js, PHP, Python, and we're excited this week to also announce that we're adding Java language support as well.

This enables you as a developer to push any of these types of applications into our runtime environment in Azure and host them for any number of users in the cloud.

A couple of the great features we have with Azure Web Sites include auto-scale capability. What this means is you can start off running your application, for example, in a single VM. As load increases, we can then automatically scale up multiple VMs for you without you having to write any script or take any action yourself. And if you get a lot of load, we can scale up even more.

You can configure how many VMs you maximally want to use, as well as what the burn-down rate is. This is great because it enables you not only to handle large traffic spikes and make sure that your apps are always responsive; the nice thing about auto-scale is that when the traffic drops off, or maybe during the night when it's a little bit less, we can automatically scale down the number of machines that you need, which means that you end up saving money and not having to pay as much.
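The scale-out/scale-in decision just described can be sketched as a small policy function. This is an illustrative simulation (the function name and parameters are invented, not the Azure auto-scale API): size the VM pool to the load, never exceeding the configured maximum, and shrink it again when traffic drops.

```python
# Hypothetical auto-scale policy: pick a VM count from the current load,
# a per-VM capacity target, and a configured ceiling.

def autoscale(total_load, capacity_per_vm, max_vms, min_vms=1):
    desired = -(-total_load // capacity_per_vm)  # ceiling division
    return max(min_vms, min(desired, max_vms))

assert autoscale(450, 100, max_vms=10) == 5    # load rises: scale out automatically
assert autoscale(80, 100, max_vms=10) == 1     # overnight lull: scale in, save money
assert autoscale(5000, 100, max_vms=10) == 10  # never exceed the configured maximum
```

The same shape of rule, evaluated continuously against live metrics, is what lets the platform absorb a spike without any operator action and then release the extra machines afterward.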

One of the really cool features that we've recently introduced with websites is something we call our staging support. This solves a pretty common problem with any Web app today, which is that there's always someone hitting it. How do you stage the deployments of new code that you roll out so that the site is never in an intermediate state, and so that you can deploy with confidence at any point in the day?

What staging support inside Azure enables is for you to create a new staging version of your Web app with a private URL that you can access and use to test. This allows you to deploy your application to the staging environment, get it ready, and test it out before you finally send users to it; then you can push one button, or send a single command called swap, and we'll rotate the incoming traffic from the old production site to the new staged version.

What’s nice is we still keep your old version around. So if you discover once you go live you still have a bug that you missed, you can always swap back to the previous state. Again, this allows you to deploy with a lot of confidence and make sure that your users are always seeing a consistent experience when they hit your app.
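The swap behavior described above can be modeled as an atomic exchange of two deployment slots. A minimal sketch, with invented names (this is not the Azure API): swapping promotes staging to production in one step, and because the old version is kept in the staging slot, swapping again is an instant rollback.

```python
# Hypothetical two-slot site: "production" receives user traffic,
# "staging" holds the version under test behind a private URL.

class Site:
    def __init__(self, production, staging):
        self.slots = {"production": production, "staging": staging}

    def swap(self):
        # Atomically rotate traffic: staging becomes production and vice versa.
        self.slots["production"], self.slots["staging"] = (
            self.slots["staging"], self.slots["production"])

site = Site(production="v1", staging="v2")
site.swap()
assert site.slots["production"] == "v2"  # new code is live
assert site.slots["staging"] == "v1"     # old version kept around

site.swap()                              # bug found after going live: swap back
assert site.slots["production"] == "v1"  # instant rollback to the previous state
```

Users always hit a complete version of the app; there is never an intermediate state where half the files are new and half are old.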

Another cool feature that we've recently introduced is something we call Web Jobs. This enables you to run background tasks outside of the HTTP request/response cycle. So if something takes a while to run, this is a great way to offload that work so that you're not stalling your request-response thread pool.

A common scenario we see is that when someone submits something to the website, and you want to process it in the background, you can simply drop an item into a queue or into a storage account, respond back to the user, and then, with one of these Web Jobs, very easily run background code that pulls that queue message and processes it in an offline way.

And what’s nice about Web jobs is you can run them now in the same virtual machines that host your websites. What that means is you don’t have to spin up your own separate set of virtual machines, and again, enables you to save money and provides a really nice management experience for it.
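The queue-offload pattern just described can be sketched end to end. This is an illustrative simulation using an in-memory queue in place of an Azure storage queue, with invented function names: the request handler enqueues work and returns immediately, and a background job drains the queue later.

```python
# Hypothetical web-request / background-job pair around a shared queue.
from queue import Queue

work_queue = Queue()
processed = []

def handle_request(payload):
    # The HTTP handler just enqueues the work and responds right away,
    # so the request thread is never stalled by slow processing.
    work_queue.put(payload)
    return "202 Accepted"

def web_job():
    # The background job (running on the same VMs that host the site)
    # drains the queue and does the slow work offline.
    while not work_queue.empty():
        processed.append(work_queue.get().upper())

assert handle_request("resize image") == "202 Accepted"
assert handle_request("send email") == "202 Accepted"
web_job()
assert processed == ["RESIZE IMAGE", "SEND EMAIL"]
```

The user sees a fast response in every case; the actual processing happens whenever the job next polls the queue.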

The last cool feature that we've recently introduced is something we call Traffic Manager support. With Traffic Manager, you can take advantage of the fact that Azure runs around the world, and you can spin up multiple instances of your website in multiple different regions around the world with Azure.

What you can then do is use Traffic Manager to have a single DNS entry that you map to the different instances around the world. Traffic Manager gives you a really nice way to automatically, for example, route all your North American users to one of the North American instances of your app, while people in Europe are routed to the European instance. That gives you better performance, response and latency.

Traffic Manager is also smart enough so that if you ever have an issue with one of the instances of your app, it can automatically remove it from those rotations and send users to one of the other active apps within the system. So this gives you also a nice way you can fail over in the event of an outage.

And the great thing about Traffic Manager, now, is you can use it not just for virtual machines and cloud services, but we’ve also now enabled it to work fully with websites.
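The two behaviors described above, geographic routing and health-based failover, can be sketched as one resolution function. This is an illustrative model with invented endpoint names, not the Traffic Manager API: a single DNS name resolves to a nearby healthy endpoint, and unhealthy endpoints drop out of rotation automatically.

```python
# Hypothetical endpoint table: one website instance per Azure region.
endpoints = {
    "us-east":      {"region": "North America", "healthy": True},
    "north-europe": {"region": "Europe",        "healthy": True},
}

def resolve(user_region):
    # Prefer a healthy endpoint in the user's own region (low latency);
    # otherwise fail over to any healthy endpoint elsewhere.
    local = [name for name, ep in endpoints.items()
             if ep["region"] == user_region and ep["healthy"]]
    healthy = [name for name, ep in endpoints.items() if ep["healthy"]]
    return (local or healthy)[0]

assert resolve("Europe") == "north-europe"    # normal case: routed locally
endpoints["north-europe"]["healthy"] = False  # simulate a regional outage
assert resolve("Europe") == "us-east"         # automatic failover out of rotation
```

The real service makes this decision at DNS resolution time, so clients keep using one hostname while the answer behind it changes with endpoint health.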

…

[From BUILD Day 2: Keynote Summary by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014] Scott then invited Mads Kristensen on stage to walk through a few of the features that Scott discussed at a higher level. Specifically, he walked through the new ASP.NET templates, emphasizing the creation of the DB layer, and then showed PowerShell integration to manage your web site. He then showed Angular integration with Azure Web Sites, emphasizing easy and dynamic ways to update your site, with deep browser and Visual Studio integration (Browser Link) showing that updates made in the browser show up in the code in Visual Studio. Very cool!!

He also showed how you can manage staging and production sites by using the "swap" functionality built into the Azure Web Sites service, Web Jobs to show how you can run background jobs, and Traffic Manager functionality to ensure your customers have the best-performing web site in their regions.

So as Mads showed, there are a lot of great features that we’re kind of unveiling this week. A lot of great announcements that go with it.

These include the general availability release of auto-scale support for websites, as well as the general availability release of our new Traffic Manager support for websites as well. As you saw there, we also have Web Job support, and one of the things that we didn’t get to demo which is also very cool is backup support so that automatically we can have both your content as well as your databases backed up when you run them in our Websites environment as well.

Lots of great improvements are also coming from an offer perspective. One thing a lot of people have asked us for with Web Sites is the ability not only to use SSL, but to use SSL without having to pay for it. So one of the cool things we're adding with Web Sites, going live today, is that we're including one IP-address-based SSL certificate and five SNI-based SSL certificates at no additional cost with every Web Site instance. (Applause.)

Throughout the event here, you're also going to hear a bunch of great sessions on some of the improvements we're making to ASP.NET. From a Web framework perspective, we've got the general availability release of ASP.NET MVC 5.1, Web API 2.1, Identity 2.0, as well as Web Pages 3.1. So a lot of great, new features to take advantage of.

As you saw Mads demo, a lot of great features inside Visual Studio including the ability every time you create an ASP.NET project now to automatically create an Azure Website as part of that flow. Remember, every Azure customer gets 10 free Azure Websites that you can use forever. So even if you’re not an MSDN customer, you can take advantage of that feature in order to set up a Web environment literally every time you create a new project. So pretty exciting stuff.

So that was one example of some of the PaaS capabilities that we have inside Azure.

[Mobile] I’m going to move now into the mobile space and talk about some of the great improvements that we’re making there as well.

One of the great things about Azure is the fact that it makes it really easy for you to build back ends for your mobile applications and devices. And one of the cool things you can do now is you can develop those back ends with both .NET as well as Node.js, and you can use Visual Studio or any other text editor on any other operating system to actually deploy those applications into Azure.

And once they’re deployed, we make it really easy for you to go ahead and connect them to any type of device out there in the world.

Now, some of the great things you can do with this is take advantage of some of the features that we have, which provide very flexible data handling. So we have built-in support for Azure storage, as well as our SQL database, which is our PaaS database offering for relational databases, as well as take advantage of things like MongoDB and other popular NoSQL solutions.

We support the ability not only to reply to messages that come to us, but also to push messages to devices as well. One of the cool features that Mobile Services can take advantage of, and it's also available as a stand-alone feature, is something we call notification hubs. This basically allows you to send a single message to a notification hub and then broadcast it to all of the devices that are registered to it.
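The hub pattern described here can be sketched in a few lines. This is a toy model only; the class and method names are hypothetical and do not come from the actual Notification Hubs SDK:

```python
class NotificationHub:
    """Toy model of the hub pattern: one send call fans out to every
    registered device. Names here are illustrative, not the Azure SDK."""

    def __init__(self):
        self._registrations = []  # device callbacks registered with the hub

    def register(self, device_callback):
        self._registrations.append(device_callback)

    def broadcast(self, message):
        # A single send to the hub is delivered to all registered devices.
        for deliver in self._registrations:
            deliver(message)
        return len(self._registrations)

# Usage: two "devices" register, one broadcast reaches both.
received = []
hub = NotificationHub()
hub.register(lambda m: received.append(("phone", m)))
hub.register(lambda m: received.append(("tablet", m)))
delivered = hub.broadcast("Build 2014 keynote starting")
```

The point of the pattern is that the sender addresses the hub once; the hub, not the application, tracks the registered devices.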

We also support with Mobile Services a variety of flexible authentication options. So when we first launched mobile services, we added support for things like Facebook login, Google ID, Twitter ID, as well as Microsoft Accounts.

One of the things we’re excited to demo here today is Active Directory support as well. So this enables you to build new applications that you can target, for example, your employees or partners, to enable them to sign in using the same enterprise credentials that they use in an on-premises Active Directory environment.

What’s great is we’re using standard OAuth tokens as part of that. So once you authenticate, you can take that token, you can use it to also provide authorization access to your own custom back-end logic or data stores that you host inside Azure.

We’re also making it really easy so that you can also take that same token and you can use it to access Office 365 APIs and be able to integrate that user’s data as well as functionality inside your application as well.

The beauty about all of this is it works with any device. So whether it’s a Windows device or an iOS device or an Android device, you can go ahead and take advantage of this capability.

…

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]] Yavor Georgiev then came on stage to walk through a Mobile Services demo. He showed off a new Mobile Services Visual Studio template, test pages with API docs, local and remote debugging capabilities, and a LOB app that enables Facilities departments to manage service requests—this showed off a lot of the core ASP.NET/MVC features along with a quick publish service to your Mobile Services service in Azure. Through this app, he showed how to use Active Directory to build the app—which prompts you to log into the app with your corp/AD credentials to use the app. He then showed how the app integrates with SharePoint/O365 such that the request leverages the SharePoint REST APIs to publish a doc to a Facilities doc repository. He also showed how you can re-use the core code through Xamarin to repurpose the code for iOS.

The app is shown here native in Visual Studio.

This app view is the cross-platform build using Xamarin.

Kudos to Yavor! This was an awesome demo that showcases how far Mobile Services has come in a short period of time—love the extensibility and the cross-platform capabilities. Very nice!

One of the things that Yavor kind of showed there is just how easy it is now to build enterprise-grade mobile applications using Azure and Visual Studio.

And one of the key lynchpins, in terms of the technology that really makes this possible, is our Azure Active Directory service. This basically provides an Active Directory in the cloud that you can use to authenticate any device. What makes it powerful is the fact that you can synchronize it with your existing on-premises Active Directory. And we support both sync options, including back to Windows Server 2003 instances, so it doesn't even require a relatively new Windows Server; it works with anything you've got.

We also support a federated option as well if you want to use ADFS. Once you set that environment up, all your users are available to be authenticated in the cloud, and what's great is we ship SDKs that work with all different types of devices and enable you to integrate authentication into those applications. And you don't even have to have your back end hosted on Azure; you can take advantage of this capability to enable single sign-on with any enterprise credential.

And what’s great is once you get that token, that same token can then be used to program against Office 365 APIs as well as the other services across Microsoft. So this provides a really great opportunity not only for building enterprise line-of-business apps, but also for ISVs that want to be able to build SaaS solutions as well as mobile device apps that integrate and target enterprise customers as well.

…

[From BUILD Day 2: Keynote Summary [by Steve Fox [MSFT] on MSDN Blogs, April 3, 2014]] Scott then invited Grant Peterson from DocuSign on stage to discuss how they are using Azure; Grant demoed AD integration with DocuSign's iOS app. Nice!

This is really huge for those of you building apps that are cross-platform but have big investments in AD and also provides you as developers a way to reach enterprise audiences.

So I think one of the things that’s pretty cool about that scenario is both the opportunity it offers every developer that wants to reach an enterprise audience. The great thing is all of those 300 million users that are in Azure Active Directory today and the millions of enterprises that have already federated with it are now available for you to build both mobile and Web applications against and be able to offer to them an enterprise-grade solution to all of your ISV-based applications.

That really kind of changes one of the biggest concerns that people end up having with enterprise apps with SaaS into a real asset where you can make it super-easy for them to go ahead and integrate and be able to do it from any device.

And one of the things you might have noticed there in the code that Grant showed was that it was actually all done on the client using Objective-C, and that’s because we have a new Azure Active Directory iOS SDK as well as an Android SDK in addition to our Windows SDK. And so you can use and integrate with Azure Active Directory from any device, any language, any tool.

Here’s a quick summary of some of the great mobile announcements that we’re making today. Yavor showed we now have .NET backend support, single sign-on with Active Directory.

One of the features we didn't get a chance to show, but that you can learn more about in the breakout talks, is offline data sync. We also now have built into Mobile Services the ability to sync data and handle disconnected states. And then, obviously, the Visual Studio and remote debugging capabilities as well.

We’ve got not only the Azure SDKs for Azure Active Directory, but we also now have Office 365 API integration. We’re also really excited to announce the general availability or our Azure AD Premium release. This provides enterprises management capabilities that they can actually also use and integrate with your applications, and enables IT to also feel like they can trust the applications and the SaaS solutions that their users are using.

And then we have a bunch of great improvements with notification hubs including Kindle support as well as Visual Studio integration.

So a lot of great features. You can learn about all of them in the breakout talks this week.

So we’ve talked about Web, we’ve talked about mobile when we talk about PaaS.

[Data] I want to switch gears now and talk a little bit about data, which is pretty fundamental and integral to building any type of application.

And with Azure, we support a variety of rich ways to handle data ranging from unstructured, semistructured, to relational. One of the most popular services you heard me talk about at the beginning of the talk is our SQL database story. We’ve got over a million SQL databases now hosted on Azure. And it’s a really easy way for you to spin up a database, and better yet, it’s a way that we then manage for you. So we do handle things like high availability and patching.

You don’t have to worry about that. Instead, you can focus on your application and really be productive.

We’ve got a whole bunch of great SQL improvements that we’re excited to announce this week. I’m going to walk through a couple of them real quickly.

One of them is we’re increasing the database size that we support with SQL databases. Previously, we only supported up to 150 gigs. We’re excited to announce that we’re increasing that to support 500 gigabytes going forward. And we’re also delivering a new 99.95 percent SLA as part of that. So this now enables you to run even bigger applications and be able to do it with high confidence in the cloud. (Applause.)

Another cool feature we’re adding is something we call Self-Service Restore. I don’t know if you ever worked on a database application where you’ve written code like this, hit go, and then suddenly had a very bad feeling because you realized you omitted the where clause and you just deleted your entire table. (Laughter.)

And sometimes you can go and hopefully you have backups. This is usually the point at which you discover that you don't have backups.
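The mistake being described is easy to reproduce. Here is a small demonstration using Python's built-in sqlite3 module (standing in for SQL Database) of what a DELETE without its WHERE clause actually does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "shipped"), (3, "open")])

# Intended statement: DELETE FROM orders WHERE status = 'shipped'
# What the omitted WHERE clause actually does: delete every row.
conn.execute("DELETE FROM orders")
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
# remaining is now 0: the entire table is gone, and only a backup can help.
```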

And one of the things that we built in as part of the Self-Service Restore feature is automatic backups for you. And we actually let you literally roll back the clock, and you can choose what time of the day you want to roll it back to. We save up to I think 31 days of backups. And you can basically rehydrate a new database based on whatever time of the day you wanted to actually restore from. And then, hopefully, your life ends up being a lot better than it started out.

This is just a built-in feature. You don’t have to turn it on. It’s just sort of built in, something you can take advantage of. (Applause.)
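The restore behavior described above amounts to picking the latest automatic backup taken at or before the requested time, within the retention window. A minimal sketch of that selection logic, assuming the 31-day window mentioned in the talk (function and variable names are illustrative, not the actual service API):

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=31)  # retention window described in the talk

def pick_restore_point(backups, restore_to, now):
    """Return the latest backup taken at or before restore_to.
    backups: list of datetime timestamps of automatic backups."""
    if now - restore_to > RETENTION:
        raise ValueError("requested time is outside the retention window")
    candidates = [b for b in backups if b <= restore_to]
    if not candidates:
        raise ValueError("no backup exists before the requested time")
    return max(candidates)

# Usage: backups taken 1, 6, 12, and 24 hours ago; roll back 3 hours.
now = datetime(2014, 4, 3, 12, 0)
backups = [now - timedelta(hours=h) for h in (1, 6, 12, 24)]
point = pick_restore_point(backups, now - timedelta(hours=3), now)
# point is the backup from 6 hours ago: the latest one at or before the target.
```

The real service rehydrates a new database from that point rather than overwriting the existing one, so the damaged database stays available for inspection.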

Another great feature that we’re building in is something we call active geo-replication. What this lets you do now is you can actually go ahead and run SQL databases in multiple Azure regions around the world. And you can set it up to automatically replicate your databases for you.

And this is basically an asynchronous replication. You can basically have your primary in read-write mode, and then you can have your secondary, and you can have multiple secondaries, in read-only mode. So you can still actually be accessing the data in read-only mode elsewhere.

In the event that you have a catastrophic issue in, say, one region, say a natural disaster hits, you can go ahead and you can initiate the failover automatically to one of your secondary regions. This basically allows you to continue moving on without having to worry about data loss and gives you kind of a really nice, high-availability solution that you can take advantage of.
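The topology just described, one read-write primary with read-only secondaries and a failover that promotes a secondary, can be modeled in a few lines. This is an illustrative sketch only, not the actual SQL Database API:

```python
class ReplicatedDatabase:
    """Toy model of active geo-replication: one read-write primary,
    asynchronously replicated read-only secondaries, manual failover."""

    def __init__(self, primary_region, secondary_regions):
        self.primary = primary_region
        self.secondaries = list(secondary_regions)

    def writable(self, region):
        return region == self.primary  # only the primary accepts writes

    def readable(self, region):
        return region == self.primary or region in self.secondaries

    def failover(self, new_primary):
        # Promote a secondary; the old primary rejoins as a secondary.
        if new_primary not in self.secondaries:
            raise ValueError("can only fail over to a configured secondary")
        self.secondaries.remove(new_primary)
        self.secondaries.append(self.primary)
        self.primary = new_primary

# Usage: a disaster in the primary region triggers promotion of the secondary.
db = ReplicatedDatabase("North Europe", ["West Europe"])
db.failover("West Europe")
```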

One of the things that’s nice about Azure’s regions is we try to make sure we have multiple regions in each geography. So, for example, we have two regions that are at least 500 miles away in Europe, and in North America, and similarly with Australia, Japan and China. And what that means is that you know if you do need to fail over, your data is never leaving the geo-political area that it’s based in. And if you’re hosted in Europe, you don’t have to worry about your data ever leaving Europe, similarly for the other geo-political entities that are out there.

So this gives you a way now with high confidence that you can store your data and know that you can fail over at any point in time.

In addition to some of these improvements with SQL databases, we also have a host of great improvements coming with HDInsight, which is our big data analytics engine. This runs a standard Hadoop instance as a managed service, so we do all the patching and management for you.

We’re excited to announce the GA of Hadoop 2.2 support. We also have now .NET 4.5 installed and APIs available so you can now write your MapReduce jobs using .NET 4.5.

We’re also adding audit and operation history support, a bunch of great improvements with Hive, and we’re now Yarn-enabling the cluster so you can actually run more software on it as well.

And we’re also excited to announce a bunch of improvements in the storage space, including the general availability of our read-access geo-redundant storage option.

So we’ve kind of done a whole bunch of kind of deep dives into a whole bunch of the Azure features.

… With the April updates to Microsoft Azure, Azure Web Sites offers a new pricing tier called Basic. The Basic pricing tier is designated for production sites, supporting smaller sites, as well as development and testing scenarios. … Which pricing tier is right for me? … The new pricing tier is a great benefit to many customers, offering some high-end features at a reasonable cost. We hope this new offering will enable a better deployment for all of you.

Microsoft is launching support for Java-based web sites on Azure Web Sites. This capability is intended to satisfy many common Java scenarios combined with the manageability and easy scaling options from Azure Web Sites. …The addition of Java is available immediately on all tiers for no additional cost. It offers new possibilities to host your pre-existing Java web applications. New Java web site development on Azure is easy using the Java Azure SDK which provides integration with Azure services.

With the latest release of Azure Web Sites and the new Azure Portal Preview we are introducing a new concept: Web Hosting Plans. A Web Hosting Plan (WHP) allows you to group and scale sites independently within a subscription.…

The load balancing services can be accessed by specifying input endpoints on your services either via the Microsoft Azure Portal or via the service model of your application. Once a hosted service with one or more input endpoints is deployed in Microsoft Azure, it automatically configures the load balancing services offered by Microsoft Azure platform. To get the benefit of resiliency / redundancy of your services, you need to have at least two virtual machines serving the same endpoint.
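The endpoint behavior described here is essentially round-robin distribution of incoming traffic across the virtual machines serving the endpoint, with at least two instances required for redundancy. A toy sketch of that idea (class and names are hypothetical, not the Azure service model):

```python
import itertools

class Endpoint:
    """Toy model of an input endpoint: traffic arriving on one public
    port is spread round-robin across the VMs serving it."""

    def __init__(self, port, vm_instances):
        if len(vm_instances) < 2:
            # Resiliency requires at least two VMs behind the endpoint.
            raise ValueError("need at least two instances for redundancy")
        self.port = port
        self._rotation = itertools.cycle(vm_instances)

    def route(self):
        # Each request goes to the next VM in the rotation.
        return next(self._rotation)

# Usage: two VMs behind port 80; four requests alternate between them.
lb = Endpoint(80, ["vm-0", "vm-1"])
targets = [lb.route() for _ in range(4)]
```

With a single VM the constructor refuses to build the endpoint, mirroring the point that one instance gives you no redundancy to fail over to.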

The web marches on, and so does Visual Studio and ASP.NET, with a renewed commitment to making a great IDE for web developers of all kinds. Join Scott & Scott for this dive into VS2013 Update 2 and beyond. We’ll see new features in ASP.NET, new ideas in front end web development, as well as a peek into ASP.NET’s future.

New tiers improve customer experience and provide more business continuity options

To better serve your needs for more flexibility, Microsoft Azure SQL Database is adding new service tiers, Basic and Standard, to work alongside its Premium tier, which is currently in preview. Together these service tiers will help you more easily support the needs of database workloads and application patterns built on Microsoft Azure. … Previews for all three tiers are available today.

The Basic, Standard, and Premium tiers are designed to deliver more predictable performance for light-weight to heavy-weight transactional application demands. Additionally, the new tiers offer a spectrum of business continuity features, a stronger uptime SLA at 99.95%, and larger database sizes up to 500 GB for less cost. The new tiers will also help remove costly workarounds and offer an improved billing experience for you.

Azure HDInsight now supports Hadoop 2.2 with HDInsight cluster version 3.0 and takes full advantage of this platform to provide a range of significant benefits to customers. These include, most notably:

Microsoft Avro Library: …

YARN: A new, general-purpose, distributed, application management framework that has replaced the classic Apache Hadoop MapReduce framework for processing data in Hadoop clusters. It effectively serves as the Hadoop operating system, and takes Hadoop from a single-use data platform for batch processing to a multi-use platform that enables batch, interactive, online and stream processing. This new management framework improves scalability and cluster utilization according to criteria such as capacity guarantees, fairness, and service-level agreements.

High Availability: …

Hive performance: Order of magnitude improvements to Hive query response times (up to 40x) and to data compression (up to 80%) using the Optimized Row Columnar (ORC) format.

Willing to change, that was the message new Microsoft CEO Satya Nadella was pushing as the firm released third quarter earnings.

Microsoft beat Wall Street analysts' expectations, driving the company's stock price up 3 percent on Thursday after earnings were released. Growth came from the company's Surface tablet sales and commercial business sector, according to Norman Young, Senior Equity Analyst at Morningstar. Results were also aided by a less severe decline in the PC industry.

Young believes the company has already demonstrated continued growth for the fourth quarter and remains optimistic about the company’s new direction.

Nadella is shifting the traditionally PC-focused company towards more mobile and cloud-based technology. On the quarterly call with Wall Street he said, “What you can expect of Microsoft is courage in the face of reality; we will approach our future with a challenger mindset; we will be bold in our innovation.” Analysts are excited about the company’s future trajectory as he continues to push Microsoft’s business into the mobile and cloud computing world.

The company’s stock has increased 8 percent since Nadella assumed the role of CEO in February.

“This quarter’s results demonstrate the strength of our business, as well as the opportunities we see in a mobile-first, cloud-first world. We are making good progress in our consumer services like Bing and Office 365 Home, and our commercial customers continue to embrace our cloud solutions. Both position us well for long-term growth,” said Satya Nadella, chief executive officer at Microsoft. “We are focused on executing rapidly and delivering bold, innovative products that people love to use.”

From the prepared comments: “This quarter we continued our rapid cadence of innovation and announced a range of new services and features in three key areas – data, cloud, and mobility. SQL Server 2014 helps improve overall performance, and with Power BI, provides an end-to-end solution from data to analytics. Microsoft Azure preview portal provides a fully integrated cloud experience. The Enterprise Mobility Suite provides IT with a comprehensive cloud solution to support bring-your-own-device scenarios. These offerings help businesses convert big data into ambient intelligence, developers more efficiently build and run cloud solutions, and IT manage enterprise mobility with ease.”

Satya Nadella – Chief Executive Officer:

As I have told our employees, our industry does not respect tradition, it only respects innovation. This applies to us and everyone else. When I think about our industry over the next 5, 10 years, I see a world where computing is more ubiquitous and all experiences are powered by ambient intelligence. Silicon, hardware systems and software will co-evolve together and give birth to a variety of new form factors. Nearly everything we do will become more digitized, our interactions with other people, with machines and between machines. The ability to reason over and draw insights from everything that’s been digitized will improve the fidelity of our daily experiences and interactions. This is the mobile-first and cloud-first world. It’s a rich canvas for innovation and a great growth opportunity for Microsoft across all our customer segments.

To thrive we will continue to zero in on the things customers really value and Microsoft can uniquely deliver. We want to build products that people love to use. And as a result, you will see us increasingly focus on usage as the leading indicator of long-term success.

…

advancing Office, Windows and our data platform

continue to invest in our cloud capabilities including Office 365 and Azure in the fast growing SaaS and cloud platform markets

ensuring that our cloud services are available across all device platforms that people use

delivering a cloud for everyone on every device

have bold plans to move Windows forward:
– investing and innovating in every dimension from form-factor to software experiences to price
– Windows platform is unique in how it brings together consistent end user experiences across small to large screens, the broadest platform opportunity for developers, and control and assurance for IT
– enhance our device capabilities with the addition of Nokia’s talented people and their depth in mobile technologies

our vision is about going boldly into this mobile-first, cloud-first world

…

So this mobile-first cloud-first thing is a pretty deep thing for us. When we say mobile-first, in fact what we mean by that is mobility first. We think about users and their experiences spanning a variety of devices. So it’s not about any one form factor that may have some share position today, but as we look to the future, what are the set of experiences across devices, some ours and some not ours that we can power through experiences that we can create uniquely. …

… When you think about mobility first, that means you need to have really deep understanding of all the mobile scenarios for everything from how communications happen, how meetings occur. And those require us to build new capability. We will do some of this organically, some of it inorganically.

A good example of this is what we have done with Nokia. So we will – obviously we are looking forward to that team joining us building on the capability and then execution, even in the last three weeks or so we have announced a bunch of things where we talked about this one cloud for everyone and every device. We talked about how our data platform is going to enable this data culture, which is in fact fundamentally changing how Microsoft itself works.

We also talked about what it means to think about Windows, especially with the launch of this universal Windows application model. How different it is now to think about Windows as one family, which was not true before, but now we have a very different way to think about it.

…

[Re: Microsoft transition to more of a subscription business]

The way I look at it … we are well on our way to making that transition, in terms of moving from pure licenses to long-term contracts as well as a subscription business model. So when you talk about Platform-as-a-Service: if you look at our commercial cloud, it’s made up of the platform itself, which is Azure. We also have a SaaS business in Office 365.

Now, one of the things that we want to make sure we look at is each of the constituent parts, because the margin profile on each one of these things is going to be different. The infrastructure elements in particular are going to have different economics than some of the per-user applications in a SaaS model have. It’s the blending of all of that that matters, and the growth of that matters to us the most in this time, where I think there are just a couple of us really playing in this market. I mean, this is gold-rush time, in some sense, of being able to capitalize on the opportunity.

And when it comes to that, we have some of the best, the broadest SaaS solutions and the broadest platform solution, and that combination of assets doesn’t come often. So what we are very focused on is how we make sure we get our customers aggressively into this, having them use our services and be successful with them. And then there will be a blended set of margins across even just our cloud. What matters to me in the long run is the magnitude of profit we generate, given that a lot of categories are going to be merged as this transition happens. And we have to be able to actively participate in it and drive profit growth.

… to me, the Office 365 growth is in fact driving our enterprise infrastructure growth, which is driving Azure growth, and that cycle to me is most exciting. That’s one of the reasons why I want to keep indexing on the usage of all of this, and the growth numbers you see are a reflection of that.

[Background from him in the call:]

Office 365: I am really, really excited about what’s happening there. To me, this is the core engine that’s driving a lot of our cloud adoption, and you see it in the numbers; Amy will talk more about the numbers. But one of the fundamental things it’s also doing is serving as both a SaaS application and an architecture for enterprises. One of the most salient things we announced when we talked about the cloud for everyone and every device, alongside Office 365 now having iPad apps, was the Enterprise Mobility Suite, which is perhaps the most strategic thing we announced that day. We now have a consistent and deep platform for identity management (which, by the way, gets bootstrapped every time Office 365 users sign up), device management, and data protection, which is really what every enterprise customer needs in a mobile-first world, a world where you have SaaS application adoption and you have BYOD, or bring your own devices, happening.

…

[Re #1: about the new world in terms of more usage and more software driven rather than device driven, and the reengagement with the developer community in that world]

Developers are very, very important to us. If you’re in the platform business, which we are on both the device side and the cloud side, developers and their ability to create new value props and new applications on those platforms is sort of the lifeblood itself. I would say a couple of things. … On the cloud side, in fact, one of the most strategic APIs is the Office API. If you think about building an application for iOS, and you want single sign-on for any enterprise application, it’s the Azure AD single sign-on. That’s one of the things that we showed at Build: how to take advantage of list data in SharePoint, contact information in Exchange, and Azure Active Directory information for log-on. Those are very, very powerful APIs and unique to us. And they expand the opportunity for developers to reach into the enterprises. And then of course Azure is a full platform, which is very attractive to developers. So that gives you a flavor for how important developers are and what your opportunities are.

[Re: how you could potentially make what has been traditionally a unit model with Windows OEM revenue into something potentially more recurring in nature?]

… the thing I would add is this transition from one time let’s say licenses or device purchases to what is a recurring stream. You see that in a variety of different ways. You have back end subscriptions, in our case, there will be Office 365, there is advertising, there is the app store itself. So these are all things that attach to a device. And so we are definitely going to look to make sure that the value prop that we put together is going to be holistic in its nature and the monetization itself will be holistic and it will increase with the usage of the device across these services. And so that’s the approach we will take.

From the prepared comments: “Zero dollar licensing for sub 9-inch devices helps grow share and creates new opportunities to deliver our services, with minimal short term revenue impact”

[Re: the recent decision to offer Windows for free for sub 9-inch devices and its impact of Microsoft share in that arena, about Windows pricing in general, the kind of play in different market segmentations, and how Windows pricing is evolving]

Overall, the way I want us to look at Windows going forward is what does it mean to have the broadest device family and ecosystem? Because at the end of the day it’s about the users and developer opportunity we create for the entirety of the family. That’s going to define the health of the ecosystem. So, to me, it matters that we approach the various segments that we now participate with Windows, because that’s what has happened. Fundamentally, we participated in the PC market. Now we are in a market that’s much bigger than the PC market. We continue to have healthy share, healthy pricing and in fact growth as we mentioned in the enterprise adoption of Windows.

And that’s we plan to in fact add more value, more management, more security, especially as things are changing in those segments. Given BYOD and software security issues, we want to be able to reinforce that core value, but then when it comes to new opportunities from wearables to internet of things, we want to be able to participate on all of this with our Windows offering, with our tools around it. And we want to be able to price by category. And that’s effectively what we did. We looked at what it makes – made sense for us to do on tablets and phones below 9 inches and we felt that the price there needed to be changed. We have monetization vehicles on the back end for those. And that’s how we are going to approach each one of these opportunities, because in a world of ubiquitous computing, we want Windows to be ubiquitous. That doesn’t mean its one price, one business model for all of that. And it’s actually a market expansion opportunity and that’s the way we are going to go execute on it.

From the prepared comments: “Our universal app development platform is a big step towards enabling developers to engage users across PCs, tablets, and phones with a common set of APIs”

[Re #2: about the new world in terms of more usage and more software driven rather than device driven, and the reengagement with the developer community in that world]

Developers are very, very important to us. If you’re in the platform business, which we are on both the device side and the cloud side, developers and their ability to create new value props and new applications on those platforms is sort of the lifeblood itself. I would say a couple of things.

One is that the announcements we made at Build on the device side are really breakthrough work for us: we’re the only device platform today that has this notion of building universal apps, with fantastic tooling around them. That means you can target multiple of our devices and have common code across all of them. And this notion of a Windows universal application helping developers leverage their core code asset across this expanded opportunity is huge. There was one user experience change that Terry Myerson talked about at Build, which expands the ability for anyone who puts an application in the Windows Store to now be discovered across even the billion-plus PC installed base. So that’s, I think, a fantastic opportunity for developers, and we are doing everything to make that opportunity clear and recruit developers to do more with Windows. In that context, we will also support cross-platform development. One of the things that we have done is the relationship with Unity. We have tooling that allows you to have a core library that’s portable, so you can bring your code asset. In fact, we are the only client platform that has the abstractions available for the different languages and so on.

“SQL Server revenue grew more than 15%, and continued to outpace the data platform market; we continue to gain share in mission critical workloads”

“Windows Server Premium and System Center revenue showed continued strength from increased virtualization share and demand for hybrid infrastructure”

[Re: the factors that have enabled Microsoft to continue growing its server business well above its peers, and whether that roughly 10% growth is sustainable over fiscal 2015]

It’s a pretty exciting change that’s happening. Obviously that part of the business has been performing very well for a while now, but quite frankly it’s fundamentally changing. One of the questions I often get asked is: hey, how did Windows Server and the hypervisor underneath it become so good so soon? You’ve been at it for a long time, but something seems to have fundamentally changed. I mean, we’ve grown a lot of share recently, the product is more capable than it ever was, and the rate of change is different, for one reason alone: we use it to run Azure. So the fact that we use our servers to run our cloud makes our servers more competitive for other people to build their own clouds.

So it’s the same trend that’s accelerating us on both sides. The other thing that’s happening is that when we sell our server products, they are for the most part just not isolated anymore. They come with automatic cloud tiering. SQL Server is a great example. We just launched a new version of SQL Server, which is by far the best release of SQL in terms of its features, like its exploitation of in-memory. It’s the first product in the database world that has in-memory for all three database workloads: OLTP, data warehousing, and BI. But more importantly, it automatically gives you high availability, which means a lot to every CIO and every enterprise deployment, by actually tiering to the cloud.

So those kinds of feature innovations, which are pretty boundaryless for us, are breakthrough work. It’s not something that a traditional competitor of ours can do if they’re not even a first-class producer of a public cloud service. So I think we’re in a very unique place: our ability to deliver this hybrid value proposition and be in a position where we not only run a cloud service at scale but also provide the infrastructure underneath it, as server products, to others. That’s what’s driving the growth. The shape of that growth will change over time, but I feel very, very bullish about our ability to continue this innovation.

Introducing Common XAML UI – In today’s Build keynote we heard that Microsoft is finally starting the reconciliation process with the introduction of Common XAML UI. Based on the WinRT API, the Common XAML UI framework will allow the same UI code to be shared across phones, tablets, desktop computers, and eventually Xbox One. … Common XAML and Universal Apps are available in all versions of Visual Studio 2013 Update 2. Apr 02, 2014

A WPF Q&A – A panel of 9 Microsoft desktop developers was available during a lunchtime Q&A. This session was not filmed, but we were able to record some of the WPF questions and Microsoft’s answers. … Microsoft is looking into offering the same kind of functionality for XAML that we currently see on web sites via Browser Link. Partial functionality is available via Snoop or XAML Spy. Touch and desktop applications came up, and again the panel mentioned the possibility of offering Common XAML for the desktop. … Apr 03, 2014

A Q&A with the XAML Performance Leadership Team – This panel discussion mostly covers XAML, but there are still some thoughts on its relationship to WPF and the desktop in general. … Microsoft intends to continue copying features from WPF into XAML, but in a measured fashion. The features they choose to move are based largely on developer feedback, especially around pain points. Apr 03, 2014

Future-Proofing Desktop Applications for Hardware Enhancements – Though CPUs aren’t getting any faster, other hardware capabilities are rapidly increasing. This is most evident in high-DPI displays and the way they shrink legacy applications to the point of illegibility. So, for perhaps the first time since the ’90s, future-proofing for better monitors is becoming vital. With Windows 8, scaling was available up to 150%. Windows 8.1 bumped that up to 200%, and the soon-to-be-released Windows 8.1 Update will further push that to 250%. But that’s only for high-end machines; mainstream machines are only expected to need 150% scaling in 2015. … Microsoft’s suggested workaround for developers without such hardware is to leverage Remote Desktop. By setting the scaling to values from 100% to 500%, developers can see how their application behaves without a high-DPI monitor. It isn’t an ideal experience, as the screen is zoomed in to potentially absurd levels. A whitepaper on how to enable this will be published sometime next month. … We mentioned that Kinect might need to be supported. How that support will happen for common applications is not yet known. They may eventually fold it into the pointer API, but as it stands you have to use the Kinect for Windows SDK directly. Apr 02, 2014
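To make these scale percentages concrete, here is a small sketch of the arithmetic: at a given scale factor, a DPI-unaware application effectively sees a proportionally smaller logical canvas. The 3200×1800 panel resolution below is an assumption for illustration, not a figure from the session:

```python
# Illustrative arithmetic only: how a DPI scale factor shrinks the
# logical resolution seen by a scaling-unaware application.
# The 3200x1800 panel size is an assumed example.

def logical_resolution(physical_w, physical_h, scale_percent):
    """Logical (effective) resolution at a given scaling percentage."""
    factor = scale_percent / 100
    return round(physical_w / factor), round(physical_h / factor)

for scale in (100, 150, 200, 250):
    w, h = logical_resolution(3200, 1800, scale)
    print(f"{scale:>3}% scaling on a 3200x1800 panel -> {w}x{h} logical")
```

For example, at the 250% maximum mentioned for the Windows 8.1 Update, a 3200×1800 panel presents only a 1280×720 logical surface, which is why legacy content either scales up cleanly or renders illegibly small.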

Scott Hunter from Microsoft stops by our booth at Build to talk Azure and the web with Mehul Harry from Developer Express. You can see his Day 1 session here: http://channel9.msdn.com/Events/Build/2014/3-602 “The web marches on, and so does Visual Studio and ASP.NET, with a renewed commitment to making a great IDE for web developers of all kinds. Join Scott & Scott for this dive into VS2013 Update 2 and beyond. We’ll see new features in ASP.NET, new ideas in front end web development, as well as a peek into ASP.NET’s future.”

Mehul Harry: Web Program Manager (since November 2006)
Scott Hunter: now Principal Program Manager Lead on the Azure Application and Platform Team, focusing on .NET development on the server; this includes working on ASP.NET, MVC, Web API, Web Pages, SignalR, Entity Framework, Visual Studio web tooling, NuGet, and the Azure SDKs. Previously Senior Program Manager Lead on the ASP.NET team (for 7 years).