
Azure Data Lake Analytics is the first cloud serverless job-based analytics service where you can easily develop and run massively parallel data transformation and processing programs in U-SQL, R, Python, and .Net over petabytes of data. With no infrastructure to manage, you can process data on demand, scale instantly, and only pay per job.


US government entities are eligible to purchase Azure Government services from a licensing solution provider with no upfront financial commitment, or directly through a pay-as-you-go online subscription.

Important—The price in R$ is merely a reference; this is an international transaction and the final price is subject to exchange rates and the inclusion of IOF taxes. An eNF will not be issued.

Azure Germany is available to existing customers and partners doing business in the European Union (EU), the European Free Trade Association (EFTA), and the United Kingdom (UK). It provides data residency in Germany with additional levels of control and data protection. You can also sign up for a free Azure trial.

Support & SLA

FAQ

An Azure Data Lake Analytics Unit, or AU, is a unit of computation made available to your U-SQL job. Each AU gives your job access to a set of underlying resources like CPU and memory. Learn more about an AU

When you create a job, you must allocate AUs for it to run. A job passes through four major phases: preparation, queuing, execution, and finalization; it enters execution once the allocated AUs become available. You are billed for the allocated AUs for the duration of the job's execution and finalization phases. Learn more about an AU

You should carefully allocate the number of AUs that fits your job's requirements. Increasing the number of AUs makes more compute resources available to your job; however, it does not increase the job's inherent parallelism. Depending on your job's characteristics (e.g., how parallelizable it is and how much data it processes), your jobs may run faster with more AUs, or you may allocate more AUs than the job can use. Azure Data Lake Tools for Visual Studio provides several tools that can help you diagnose the performance of your U-SQL jobs and estimate the optimal number of AUs. Learn more about saving money and controlling costs

Price is determined by the number of AUs and job length. Let’s assume two cases:

Case 1: A job takes three hours to complete with 10 AUs, so the price is calculated as 3*10=30 AU-hours. If the job can take advantage of 20 AUs and runs twice as fast, the price would be 1.5*20=30 AU-hours. In this case the price is the same, but latency is improved.

Case 2: A job takes five hours to complete with 10 AUs, so the price is calculated as 5*10=50 AU-hours. If the job takes four hours to complete when using 20 AUs, the price would be 4*20=80 AU-hours. In this case, the total cost increased 60%, with your job finishing one hour sooner.
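The AU-hour arithmetic in the two cases above can be sketched with a small helper (an illustrative function; `au_hours` is not part of any service API):

```python
def au_hours(aus: int, hours: float) -> float:
    """Billable AU-hours: allocated AUs multiplied by job duration in hours."""
    return aus * hours

# Case 1: doubling AUs halves the runtime -> same cost, lower latency.
case1_before = au_hours(10, 3.0)   # 30.0 AU-hours
case1_after = au_hours(20, 1.5)    # 30.0 AU-hours

# Case 2: doubling AUs only shaves one hour -> 60% higher cost.
case2_before = au_hours(10, 5.0)   # 50.0 AU-hours
case2_after = au_hours(20, 4.0)    # 80.0 AU-hours
```

The comparison makes the trade-off explicit: more AUs only pay off when the job's runtime shrinks roughly in proportion.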

Azure Data Lake Storage Gen1 transactions are incurred any time you read or write data to the service. Every time a user, an application, or another Azure service reads or writes data up to 4 MB in size, it's billed as one transaction. For example, if one write operation puts 128 KB of data into Data Lake Storage Gen1, it's billed as one transaction. Transactions are billed in increments of up to 4 MB, so if an item is larger than 4 MB, it will be billed in multiple increments. For example, if one read operation gets 9 MB of data from Data Lake Storage Gen1, it's billed as three transactions (4 MB + 4 MB + 1 MB).
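The 4 MB increment rule amounts to a ceiling division, as in this sketch (function and constant names are illustrative):

```python
import math

FOUR_MB = 4 * 1024 * 1024  # size of one billing increment, in bytes

def billed_transactions(size_bytes: int) -> int:
    """Number of transactions billed for one operation of the given size."""
    return max(1, math.ceil(size_bytes / FOUR_MB))

# A 128 KB write is one transaction; a 9 MB read is three (4 + 4 + 1 MB).
assert billed_transactions(128 * 1024) == 1
assert billed_transactions(9 * 1024 * 1024) == 3
```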

Let's see how transactions appear on your bill based on read operations. For this, assume a scenario where your application runs a Data Lake Analytics job for four hours per day, while reading 1,000 items per second when the job is running, each item being less than 4 MB. In the above scenario, Data Lake Storage Gen1 will charge for read transactions for Data Lake Analytics reading data from Data Lake Storage Gen1. You will be charged the following:

Item                                       | Usage Volume Per Month                                          | Rate Per Month             | Monthly Cost
Read transactions from Data Lake Analytics | 1,000 items/second × 3,600 seconds/hour × 4 hours/day × 31 days | $- per 10,000 transactions | $-
Total transactions cost                    |                                                                 |                            | $-
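Under the scenario above (each item is under 4 MB, so each read is one transaction), the monthly volume works out as follows. This is a sketch; the per-10,000-transaction rate is a hypothetical placeholder, since the actual rate depends on your region and currency:

```python
# items/sec * seconds/hour * hours/day * days/month
transactions = 1_000 * 3_600 * 4 * 31

rate_per_10k = 0.004  # hypothetical $ per 10,000 transactions
monthly_cost = transactions / 10_000 * rate_per_10k

print(transactions)  # 446,400,000 read transactions per month
```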

Price is determined by the number of AUs you reserve for the month.

A billing cycle is aligned to the calendar month. It therefore always starts on the 1st day of the month and ends on the last day of the month.

When you commit for the first time to a package, we will pro-rate the monthly price and AU-hours to the days left within that month. For example, if you commit to a 1,000 AU-hour package and there are 10 days left within that month, you will immediately get 334 AU-hours (1,000 AU-hours / 30 days × 10 days left) at a price of $- ($- / 31 days × 10 days left). We pro-rate by 30 days for the AU-hours in a package and by 31 days for the price to make sure that the pro-ration is always in your favor.
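The pro-ration example can be checked numerically. This is a sketch; rounding the AU-hours up to a whole number is an assumption inferred from the 334 figure above, and the monthly price is a placeholder:

```python
import math

package_au_hours = 1_000
days_left = 10

# AU-hours are pro-rated over 30 days (rounded up, in the customer's favor).
prorated_hours = math.ceil(package_au_hours / 30 * days_left)

# The price is pro-rated over 31 days (hypothetical monthly price).
monthly_price = 100.0
prorated_price = monthly_price / 31 * days_left

print(prorated_hours)  # 334
```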

Units in a package reset on the 1st day of the month. For example, if you commit to 100 AU-hours and have 20 AU-hours left by the end of the month, your package resets to 100 AU-hours the next day. There is no roll-over for unused AU-hours.

You can choose a new package at any time. The change takes effect on the first day of the next calendar month. This means that if, during a month, you have a 100 AU-hour package and decide to commit to a 500 AU-hour package, the change will apply on the 1st day of the next calendar month. For the current calendar month, you will remain on the 100 AU-hour package.

We use "seconds" as the unit of measure for the consumption of your commitment package.

Once your package is consumed, you will be charged at the overage consumption rate.

Consumption is determined by the number of AUs and job length. Job length is influenced by the number of AUs assigned to the job as well as the characteristics of the job, such as data size and computation complexity.

Case 1: You committed to 100 AU-hours and submit a job that takes 2 hours and 30 minutes to complete with 1 AU, so the consumption is calculated as 2.5*1=2.5 AU-hours. You will have 97.5 AU-hours left in your commitment.

Case 2: You committed to 100 AU-hours and have only 1 AU-hour left. You submit a job that takes 2 hours to complete with 2 AUs, so the consumption is calculated as 2*2=4 AU-hours. You will use your remaining AU-hour and be charged for 3 additional AU-hours at the overage rate (1.5*3 = $-).
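The bookkeeping in these two cases can be sketched as a small helper (illustrative names; the overage rate of 1.5 is taken from the example above and stands in for the actual regional rate):

```python
def consume(remaining: float, aus: int, hours: float, overage_rate: float):
    """Deduct a job's AU-hours from the package; bill any excess as overage.

    Returns (AU-hours left in the package, overage charge).
    """
    used = aus * hours
    from_package = min(remaining, used)
    overage_hours = used - from_package
    return remaining - from_package, overage_hours * overage_rate

# Case 1: 2.5 AU-hours against a 100 AU-hour package -> no overage.
left, charge = consume(100, 1, 2.5, overage_rate=1.5)
assert left == 97.5 and charge == 0

# Case 2: 4 AU-hours against 1 remaining AU-hour -> 3 AU-hours of overage.
left, charge = consume(1, 2, 2.0, overage_rate=1.5)
assert left == 0 and charge == 4.5
```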

Azure Data Lake Analytics allows you to read and write data from Azure Data Lake Storage Gen1, Azure Blob Storage, and Azure SQL Database. The use of these services by Azure Data Lake Analytics can incur standard charges from them (e.g., transactions, outbound data transfers, etc.). Please refer to the pricing pages for those services for more details.

Job Cancellation: Cancellation is always the result of a deliberate customer action or an administrator-defined policy. The ADLA service does not autonomously cancel jobs, except when a vertex reaches its execution time limit of 5 hours (there is no time limit for a job as a whole, only for a single vertex). When a job is cancelled, you are billed for the duration the job was running.

Job Failure: Job failures are the result of either a user error or, occasionally, an ADLA service error. The error code from a failed job indicates which. If the error code contains "USER", the failure was the result of a user error, and the service bills you for the duration the job was running. However, if the error code contains "SYSTEM", the failure was the result of an ADLA service error and you will not be billed for the job.
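The billing rule for cancelled and failed jobs can be summarized as follows (a sketch; the outcome strings are illustrative, while the USER/SYSTEM error-code convention is described above):

```python
def is_billed(outcome: str, error_code: str = "") -> bool:
    """A job is billed unless it failed due to an ADLA service error."""
    if outcome in ("succeeded", "cancelled"):
        return True  # billed for the duration the job was running
    if outcome == "failed":
        return "USER" in error_code  # user errors billed, SYSTEM errors not
    raise ValueError(f"unknown outcome: {outcome}")

assert is_billed("cancelled")
assert is_billed("failed", "USER_ERROR_INVALID_SCRIPT")
assert not is_billed("failed", "SYSTEM_INTERNAL_ERROR")
```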