Job Graphs

The job graphs are as follows, showing significantly more map-reduce steps for the Left, Right and Inner join approach.

LIR Method

CFO Method

Comparison

Looking at the job graphs we can see that there is a very clear difference in the physical execution. As an aside, Azure Data Lake Analytics has some really good job graph analysis tools. The first one we’ll look at is a side-by-side comparison:

I’m not going to give commentary on the numbers. It should be fairly clear that the CFO (MergeFullOuter) physical job plan is significantly more efficient on all comparative measures.

Vertexes are not all independent; they have dependencies based on the logic, so vertexes are organised into dependency stages. The maximum degree of parallelism I allocated is 10 AUs.

I picked 10 deliberately because I already knew from previous runs that it’s the maximum this particular job can make use of; increasing it any further would just waste AUs. Remember though, we pay for the reservation not for the use, so just because it’s using 10 it may not be the most efficient… It depends on the business value of your job and the cost! Because elastic MPP compute is available at a fixed cost and it scales relatively linearly, it just becomes a question of:

how much data?

when do we want it to finish?

how much will it cost?

Using the AU analysis we can see the following for the LIR and CFO respectively:

This is a great tool for analyzing compute. Again we can see the CFO is more efficient, needing fewer AUs for the majority of the job and a shorter compute time. Note that for both jobs we only make use of the full 10 AUs for a relatively small proportion of the compute, which is why the Azure AU analyzer is recommending 3 AUs for LIR and 2 for CFO. Interestingly, if I drag the AU bar down using the visual to achieve approximately the same compute, I can see that LIR will need 8 and CFO will need 6. Essentially CFO requires fewer reserved AUs for a shorter compute time, hence a lower cost:

LIR $ 0.67

CFO $ 0.33

Note the cost doesn’t change much within 10 AUs because as the reservation comes down the compute time goes up; it’s relatively linear.

Conclusion

In short, write good U-SQL!

Lazy transformation is absolutely what we want from a platform that processes big data. What do I mean by this?

If you’ve worked with SQL Server for a career then you’ll know that SQL Server compiles its SQL into query plans. The optimizer that creates the plan decides how best to execute the query based on what it knows about the data, indexes, etc. It shouldn’t matter how we write the code; if the optimizer is good then it should arrive at the same query plan… In the case of SQL Server, reality is more complicated than it appears, and the code and indexes you create do matter.

We need lazy transformation even more in a big data processing platform, simply because of the scale and costs involved. The Spark platform is really good at this: there are a variety of options for coding transformations, but the end result… the query plan that eventually gets executed and reads the data… will be the same.

With Data Lake it really does seem to matter how we write the queries. Returning to a table variable multiple times shouldn’t really matter; I would expect the platform to take the code and create a physically optimized map-reduce job as if it had been coded in a single operation. It really does seem to matter, however, how we go about the job using different join and set operators. We cannot just be lazy about the code we write and trust the engine optimizer to create the best physical plan. Fortunately the optimization tools are pretty good and very easy to use.

In a previous blog I covered how I set up standalone Spark 2.3 on an Azure-provisioned CentOS 7.4 VM. This is the build I’m using to experiment with and learn Spark data applications and architectures. A benefit of using an Azure VM is that I can rip it down, rebuild it or clone it. When I do this I don’t want to lose my data every time and have to recover it and put it back in place. Having my data in a data lake in an Azure blob storage container is ideal, since I can kill and recycle my compute VMs and my data just stays persisted in the cloud. This blog covers how I can mount my blob storage container to my CentOS 7.4 VM.

Note this is for standalone only and for the convenience of learning and experimentation. A multi-node Spark cluster would need further consideration and configuration to achieve distributed compute over Azure blob storage.

A final note: I’m learning Linux and Spark myself, and a lot of this stuff is already on the webz, albeit in several different places and sometimes poorly explained. Hopefully this provides a relatively layman’s end-to-end write-up with the missing bits filled in that I found myself asking about.

Temporary Path

Blobfuse requires a temporary path. This is where it caches files locally, aiming to provide the performance of local native storage. This location obviously has to be big enough to accommodate the data we want to use on our standalone Spark build. What better drive to use for this than the local temporary SSD storage that you get with an Azure Linux VM? Running the following we can see a summary of our attached physical storage:

df -lh

Here we can see that /dev/sdb1 has 63GB available, which is plenty for me right now. It’s mounted on /mnt/resource, so we’ll create a temp directory there. Obviously substitute your own username when assigning permissions.
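For example, on my build that looks something like this (blobfusetmp is just the directory name we’ll point blobfuse at later):

# create the blobfuse cache directory on the temporary SSD and hand it to our user
sudo mkdir -p /mnt/resource/blobfusetmp
sudo chown shaunryan /mnt/resource/blobfusetmp   # substitute your own username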

Create an Azure Blob Storage Account

After creating the storage account we need to create a container: click on Blobs and create a container.

Once the container is created, click on it and upload some data. I’m using the companion data files for the book Spark: The Definitive Guide, which can be found here.
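I did this through the portal, but for reference the same thing can be scripted with the Azure CLI. A rough sketch with placeholder names (supply your own storage account name and an access key, or az login first):

# create the container and bulk-upload the sample files from a local folder
az storage container create --account-name mystorageaccount --name datalake
az storage blob upload-batch --account-name mystorageaccount --destination datalake --source ./definitive-guide-data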

Now that the storage account, container and data are set up, we need to note down the following details so that we can configure the connection details for blobfuse:

Storage Account Name

Access Key 1 or 2 (doesn’t matter)

Container Name – we already created this; I called it datalake

These can be obtained by clicking on Access Keys in the storage account.

Configure Blob Storage Access Credentials

Blobfuse takes a parameter which is a path to a file holding the Azure storage credentials, so we need to create this file. I created it in my home directory (i.e. /home/shaunryan or ~) for convenience. Because of its content it should be adequately secured on a shared machine, so store it wherever you want but note the path.

cd ~
touch ~/fuse_connection.cfg    # no sudo here - a root-owned file would make the chmod below fail
chmod 700 fuse_connection.cfg

We need the following Azure storage details for the storage container that we want to mount using blobfuse:

account name

access key

container name

The Create an Azure Blob Storage Account section above shows where these details can be found.
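Once you have them, edit fuse_connection.cfg and add them as three key/value lines. With placeholder values it looks something like this:

accountName mystorageaccount
accountKey myStorageAccountAccessKey==
containerName datalake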

Mount the Drive

So now all that’s left to do is mount the drive. We need somewhere to mount it, so create a directory of your liking. I’m using a sub-directory of a data folder in my home directory, since I might mount more than one storage container and it’s just for me (~/data/datalake).

mkdir -p ~/data/datalake    # -p creates the parent data directory too

We also need the path to our temp location (/mnt/resource/blobfusetmp) and the path to our fuse_connection.cfg file that holds the connection details (just fuse_connection.cfg because I created this at ~).
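Putting those together, the mount command on my build looks something like this (run from ~ so the relative config path resolves; the -o cache timeout options are the optional tuning values from the blobfuse docs):

blobfuse ~/data/datalake --tmp-path=/mnt/resource/blobfusetmp \
  --config-file=fuse_connection.cfg \
  -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120

# quick check that the container contents are now visible
ls ~/data/datalake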

So now when we list the files in this directory we should see all the files that are in the storage account, and we can load them into the Spark console. See below, where I have all the data files available to work through the Spark: The Definitive Guide book; I copied them from GitHub into my Azure storage account, which is now attached to my VM.

Automate in Bash Profile

So it’s all up and working… until we reboot the machine, at which point the drive is unmounted and our temp location is potentially (we should assume it will be) deleted.

To remedy this we can automate the temporary file creation and blobfuse storage mount in the bash profile.

That way I can totally forget all this stuff and just be happy that it works; and when it doesn’t I’ll be back here reading what I wrote.

Nano the bash profile to edit it.

sudo nano ~/.bash_profile

Add the following to the end of the profile and ctrl+x to exit and y to save.
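On my build the addition looks something like the following; a minimal sketch assuming the temp path, mount point and username used above, and that the user can sudo without being prompted for a password:

# recreate the blobfuse cache directory if the temporary drive has been recycled
if [ ! -d /mnt/resource/blobfusetmp ]; then
    sudo mkdir -p /mnt/resource/blobfusetmp
    sudo chown shaunryan /mnt/resource/blobfusetmp   # substitute your own username
fi

# mount the blob container; this fails harmlessly if it is already mounted
blobfuse ~/data/datalake --tmp-path=/mnt/resource/blobfusetmp \
    --config-file=$HOME/fuse_connection.cfg \
    -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120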

Now when we ssh in it should mount automatically. Below shows a login after a reboot and a login after an exit. The mount after the exit will fail because it’s already mounted, which is fine. Note that the temporary storage already existed here, but it may not always; I issued a reboot, so in all likelihood the VM wasn’t down long enough for the temporary drive to be recycled. It was, however, destroyed when I shut down the VM last night and powered it up this morning.