Press release

Amazon Web Services Announces Series of New Database Capabilities

Significant new features for Amazon Aurora and Amazon DynamoDB; brand new purpose-built databases for time series data and ledger systems of record

Amazon Aurora Global Database enables customers to update a database in a single region and automatically and quickly replicate to other AWS Regions for even higher availability, disaster recovery, and lower latency

Amazon DynamoDB On-Demand, a flexible new capacity option for the fully managed key-value database service, enables customers to instantly scale to thousands of requests per second, with no capacity planning required, saving customers time and money

Amazon Timestream, a fast, scalable, and fully managed time series database for IoT and operational applications, helps customers process trillions of time series events per day 1,000 times faster and at 1/10th the cost of relational databases

Amazon Quantum Ledger Database (QLDB) provides a high-performance, immutable, cryptographically verifiable ledger for applications where multiple parties work with a centralized, trusted authority to maintain a complete, verifiable record of transactions

SEATTLE--(BUSINESS WIRE)--Nov. 28, 2018-- Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ:AMZN), announced significant new Amazon Aurora and Amazon DynamoDB capabilities along with two new purpose-built databases. The new Amazon Aurora Global Database gives customers the ability to update a database in a single region and have it automatically replicated to other AWS Regions for higher availability and disaster recovery. Amazon DynamoDB’s new On-Demand feature automatically manages read/write capacity, removing the need for capacity planning and letting customers pay only for the read/write requests they consume. The launch of DynamoDB Transactions enables developers to build transactions with guarantees for multi-item updates, making it easier to avoid conflicts and errors when developing highly scalable, business-critical applications. AWS also announced two new purpose-built database services: Amazon Timestream, a fast, scalable, and fully managed time series database for IoT and operational applications, and Amazon Quantum Ledger Database (QLDB), a highly scalable, immutable, and cryptographically verifiable ledger. To get started with Amazon Aurora or Amazon DynamoDB, visit https://aws.amazon.com/rds/aurora or https://aws.amazon.com/dynamodb/; to learn more about Amazon Timestream or Amazon Quantum Ledger Database, visit https://aws.amazon.com/timestream or https://aws.amazon.com/qldb.

“Hundreds of thousands of customers have embraced AWS’s built-for-the-cloud database services because they perform and scale better, are more cost-effective, can be easily combined with other AWS services, and offer freedom from restrictive, overpriced, and clunky old-guard database offerings,” said Raju Gulabani, Vice President, Databases, Analytics, and Machine Learning, AWS. “Today’s announcements make it even easier for AWS customers to scale and operate cloud databases around the world. Whether it is helping to ensure critical workloads remain fully available even when disaster strikes, instantly scaling workloads to Internet scale, maintaining application data consistency, or building new applications for emerging use cases like time series data or ledger systems of record, we are giving customers the features and purpose-built databases they need to support their most mission-critical workloads at lower cost, with better operational performance and reduced complexity.”

Amazon Aurora MySQL now supports Global Database (available today)

Amazon Aurora, the fastest-growing service in AWS history, is a MySQL and PostgreSQL-compatible relational database built for the cloud and used by tens of thousands of customers around the world. Amazon Aurora Global Database allows customers to update a database in a single AWS Region and automatically replicate it across multiple AWS Regions globally, typically in less than a second. This allows customers to maintain read-only copies of their database for fast data access in local regions by globally distributed applications, or to use a remote region as a backup option in case they need to recover their database quickly for cross-region disaster recovery scenarios.
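The workflow above (promote a cluster in one region, attach read-only secondaries elsewhere) can be sketched as the request parameters passed to the RDS API. This is a minimal sketch assuming boto3: the cluster identifiers, account ID, and regions are hypothetical, and each dict would be passed to an RDS client, e.g. `rds.create_global_cluster(**create_global_cluster_params)`.

```python
# Sketch of promoting an existing Aurora MySQL cluster into a Global Database
# and attaching a read-only cluster in a second region. All identifiers are
# hypothetical examples.

# Step 1: in the primary region, wrap the existing cluster in a global cluster.
create_global_cluster_params = {
    "GlobalClusterIdentifier": "orders-global",  # hypothetical global cluster name
    "SourceDBClusterIdentifier": (
        "arn:aws:rds:us-east-1:123456789012:cluster:orders-primary"
    ),
}

# Step 2: with an RDS client in the secondary region (e.g. eu-west-1),
# create a cluster that joins the global cluster as a read-only replica.
create_secondary_cluster_params = {
    "DBClusterIdentifier": "orders-secondary",
    "Engine": "aurora",                          # Aurora MySQL
    "GlobalClusterIdentifier": "orders-global",  # attaches to the global cluster
}
```

Writes go to the primary cluster and replicate to the secondary region, which serves low-latency reads and can be promoted for disaster recovery.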

Intuit, maker of TurboTax, QuickBooks, Mint and Turbo, provides financial management solutions to approximately 50 million consumers, self-employed and small businesses around the world. “Intuit recently migrated their commerce platform to Amazon Aurora MySQL to support increasing global demand. All direct purchases of Intuit’s software will go through Intuit’s Commerce Platform running on Aurora, with TurboTax already live to meet traffic demands during tax season,” said Krishna Vaishnav, Engineering Manager, e-commerce and cloud platform engineering at Intuit. “A large portion of our workload involves low latency, read-only access to data. An example is pricing information, which is infrequently updated but needs to be readily available for reads from coast to coast. Aurora Global Database, with sub-second global replication, enables us to address this business requirement without performance or latency constraints. As a financial services company, we also care deeply about business continuity even in the face of large-scale events. Aurora Global Database allows us to maintain a strong disaster recovery posture by distributing data across AWS regions with failover typically taking under a minute to complete.”

Amazon DynamoDB On-Demand and Amazon DynamoDB Transactions (available today)

Amazon DynamoDB is a fully managed key-value database service that offers reliable performance at any scale. More than a hundred thousand AWS customers use Amazon DynamoDB to deliver consistent, single-digit millisecond latency for some of the world’s largest applications. Many of these customers run large-scale applications that receive irregular and unpredictable data access requests, or have new applications for which the usage pattern is unknown. These customers often face a database capacity planning dilemma: choose between over-provisioning capacity upfront and paying for resources they will not use, or under-provisioning resources and risking performance problems and a poor user experience.

For applications with unpredictable, infrequent, or spiky usage where capacity planning is difficult, Amazon DynamoDB On-Demand removes the need for capacity planning by automatically managing read/write capacity, and customers simply pay per request for what they actually use. Amazon DynamoDB On-Demand delivers the same single-digit millisecond latency, high availability, and security that customers have come to expect from Amazon DynamoDB.
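Concretely, on-demand mode is selected at table creation time by specifying a billing mode instead of provisioned read/write capacity units. A minimal sketch of the parameters (table and attribute names are hypothetical), as they would be passed to boto3’s `dynamodb.create_table(**on_demand_table_params)`:

```python
# Sketch of a DynamoDB table created in on-demand mode. The
# "PAY_PER_REQUEST" billing mode replaces the ProvisionedThroughput
# setting, so no read/write capacity units are declared up front.

on_demand_table_params = {
    "TableName": "GameSessions",          # hypothetical table name
    "KeySchema": [
        {"AttributeName": "SessionId", "KeyType": "HASH"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "SessionId", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",     # on-demand: no capacity planning
}

# Note what is absent: there is no ProvisionedThroughput entry at all.
assert "ProvisionedThroughput" not in on_demand_table_params
```

The same table can later be switched between on-demand and provisioned billing without downtime.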

Amazon DynamoDB powers some of the world’s largest-scale applications, running globally. Sometimes, developers building those applications need support for transactions and have to write custom error-handling code that can be complex, error prone, and time consuming. Amazon DynamoDB Transactions enables developers to build transactions with full atomicity, consistency, isolation, and durability (ACID) guarantees for multi-item updates into their DynamoDB applications, without having to write complex client-side logic to manage conflicts and errors, and without compromising on scale and performance.
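A multi-item ACID update of the kind described above can be sketched as a single `TransactWriteItems` request: here, atomically debiting one account and crediting another, so either both writes succeed or neither does. Table names, keys, and amounts are hypothetical; the dict would be passed to boto3’s `dynamodb.transact_write_items(**transfer_request)`.

```python
# Sketch of a DynamoDB transactional write: two updates that commit or
# fail together, with a condition guarding against overdrafts.

transfer_request = {
    "TransactItems": [
        {
            "Update": {
                "TableName": "Accounts",                  # hypothetical table
                "Key": {"AccountId": {"S": "alice"}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                # If this condition fails, the entire transaction is
                # cancelled and neither balance changes.
                "ConditionExpression": "Balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "Accounts",
                "Key": {"AccountId": {"S": "bob"}},
                "UpdateExpression": "SET Balance = Balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "100"}},
            }
        },
    ],
}
```

Without transactions, the same transfer would require client-side logic to detect a partial failure and compensate for it.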

“At Amazon.com, our business-critical e-commerce platform relies heavily on Amazon DynamoDB for consistent low-latency performance regardless of workload volume, even during peak shopping events,” said Dave Treadwell, VP of eCommerce Foundation at Amazon.com. "Although we are experienced DynamoDB users, it can still be difficult to forecast our future throughput needs, especially for new applications and infrequent workloads. Previously, we would often overprovision throughput capacity just to be on the safe side. We received early access to DynamoDB on-demand and our testing showed that it eliminates the need to make these capacity decisions. DynamoDB on-demand does for nonrelational databases what Amazon S3 did for object storage. We simply create a table and start making requests. There is no provisioning or capacity planning (DynamoDB manages that for us), and we pay only for what we use in terms of storage and read and write requests that our applications perform."

Amazon Timestream: a fast, scalable, and fully managed time series database (available in preview)

Developers are building IoT and operational applications that need to collect, synthesize, and derive insights from enormous amounts of data that changes over time (known as time series data). Common examples include DevOps data that measures change in infrastructure metrics over time, IoT sensor data that measures changes in sensor readings over time, and clickstream data that captures how a user navigates a website over time.

This type of time series data is generated from multiple sources in extremely high volumes and needs to be collected in near-real time, in a cost-optimized and highly scalable manner, and customers need a way to store and analyze all this data efficiently. To do this today, customers use either their existing relational databases or existing commercial time series databases. Neither option is attractive, because neither was built from the ground up as a time series database at the scale needed in the cloud.

Relational databases have rigid schemas that must be pre-defined and are inflexible when an application needs to track new attributes. They require multiple tables and indexes, which lead to complex and inefficient queries as the data grows over time. In addition, they lack the required time series analytical functions, such as smoothing, approximation, and interpolation. Existing open source and commercial time series databases, meanwhile, are difficult to scale, do not support data retention policies, and require developers to integrate them with separate ingestion, streaming/batching, and visualization software.

To address these challenges, AWS is introducing Amazon Timestream, a purpose-built, fully managed time series database service for collecting, storing, and processing time series data. Amazon Timestream processes trillions of events per day at one-tenth the cost of relational databases, with up to one thousand times faster query performance than a general purpose relational database. Amazon Timestream makes it possible to get single-digit millisecond responsiveness when analyzing time series data from IoT and operational applications. Analytics functions in Amazon Timestream provide smoothing, approximation, and interpolation to help customers identify trends and patterns in real-time data. And, Amazon Timestream is serverless, so it automatically scales up or down to adjust capacity and performance, and customers only pay for what they use.
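The ingestion side of a Timestream workload can be sketched as a batch of timestamped records, each tagged with dimensions identifying its source. This is a minimal sketch under assumed names (database, table, device IDs are hypothetical); the dict mirrors the record shape accepted by the Timestream write API’s `write_records` call.

```python
import time

# Sketch of a Timestream record batch for an IoT temperature sensor.
# Each record carries identifying dimensions, a named measure, and a
# timestamp; Timestream stores and indexes the series by time.

now_ms = str(int(time.time() * 1000))  # epoch time in milliseconds

sensor_batch = {
    "DatabaseName": "iot_fleet",       # hypothetical database name
    "TableName": "temperature",        # hypothetical table name
    "Records": [
        {
            "Dimensions": [
                {"Name": "device_id", "Value": "sensor-042"},
                {"Name": "site", "Value": "factory-east"},
            ],
            "MeasureName": "temp_celsius",
            "MeasureValue": "21.7",
            "MeasureValueType": "DOUBLE",
            "Time": now_ms,
            "TimeUnit": "MILLISECONDS",
        },
    ],
}
```

Because the service is serverless, the same write path scales from one sensor to millions without capacity configuration, and the stored series can then be queried with the built-in smoothing, approximation, and interpolation functions.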

Capital One is a diversified bank that offers a broad array of financial products and services to consumers, small businesses and commercial clients. “We need a fast and scalable time series database solution to ingest and analyze data quickly,” said Sunjay Pandey, Vice President, Capital One. “Amazon Timestream as a purpose-built time series database will give us the capacity to process this data cost-effectively.”

Edmunds.com is a car-shopping website that offers detailed, constantly updated information about vehicles to 20 million monthly visitors. “At Edmunds, we manage millions of metrics emitted by our IT infrastructure every day,” said Stephen Felisan, Chief Information Officer, Edmunds.com. “Amazon Timestream as a purpose-built time series database provides powerful built-in functions for interpolation and approximation that will help us analyze this time series data quickly without writing complex code.”

Amazon QLDB: A high performance, immutable, and cryptographically verifiable ledger database service (available in preview)

Amazon QLDB is a new class of database that provides a transparent, immutable, and cryptographically verifiable ledger that customers can use to build applications that act as a system of record, where multiple parties are transacting with a centralized, trusted entity. Amazon QLDB removes the need to build complex audit functionality into a relational database or rely on the ledger capabilities of a blockchain framework. Amazon QLDB uses an immutable transactional log, known as a journal, which tracks each and every application data change and maintains a complete and verifiable history of changes over time. All transactions must comply with atomicity, consistency, isolation, and durability (ACID) requirements to be logged in the journal, which cannot be deleted or modified. All changes are cryptographically chained and verifiable in a history that customers can analyze using familiar SQL queries. Amazon QLDB is serverless, so customers don’t have to provision capacity or configure read and write limits. They simply create a ledger and define tables, and Amazon QLDB automatically scales to support application demands; customers pay only for the reads, writes, and storage they use. And, unlike the ledgers in common blockchain frameworks, Amazon QLDB doesn’t require distributed consensus, so in the same amount of time it can execute two to three times as many transactions as common blockchain frameworks.
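The “create a ledger, define tables, query the full history” flow above can be sketched in two pieces: the control-plane parameters for creating a ledger, and a PartiQL query over the journal’s built-in `history()` function. Ledger, table, and document values are hypothetical; the parameters would go to a QLDB client’s `create_ledger` call, and the query would run through a QLDB session.

```python
# Sketch of creating a QLDB ledger and querying document history.

create_ledger_params = {
    "Name": "vehicle-registry",        # hypothetical ledger name
    "PermissionsMode": "ALLOW_ALL",
    # The journal itself is immutable; deletion protection guards the
    # ledger resource against accidental removal.
    "DeletionProtection": True,
}

# Every revision of a document remains queryable from the journal's
# history, using familiar SQL-style syntax (table/VIN are hypothetical):
history_query = (
    "SELECT h.data.VIN, h.metadata.version, h.metadata.txTime "
    "FROM history(VehicleRegistrations) AS h "
    "WHERE h.data.VIN = 'KM8SRDHF6EU074761'"
)
```

The query returns each revision of the document along with metadata such as its version and transaction time, giving the complete, verifiable record of changes that the journal maintains.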

About Amazon Web Services

For over 12 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud platform. AWS offers over 125 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 57 Availability Zones (AZs) within 19 geographic regions around the world, spanning the US, Australia, Brazil, Canada, China, France, Germany, India, Ireland, Japan, Korea, Singapore, and the UK. AWS services are trusted by millions of active customers around the world—including the fastest-growing startups, largest enterprises, and leading government agencies—to power their infrastructure, make them more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.