VIRTUALIZATION IS LIFE!


Anthony Spiteri is a Global Technologist, vExpert, VCIX-NV and VCAP-DCV working in the Product Strategy team at Veeam. He currently focuses on Veeam's Service Provider products and partners. He previously held Architectural Lead roles at some of Australia's leading Cloud Providers. His specialities include: vCloud Suite specializing in vCloud Director and vSphere; Network..

When it comes to connecting remote sites, branch offices or extending on-premises networks to the cloud, the level of complexity has traditionally been high. Networking has always been the most complex part of any IT platform, and there has always been a significant cost associated with connecting sites… both from a hardware and a software point of view. There are also the man hours required to ensure things are set up correctly and continue to work. On top of that, security and performance are important factors in any networking solution.
Simplifying Networking with Veeam

At VeeamON in 2017, we announced the release candidate for Veeam Powered Network (Veeam PN), which in combination with our Restore to Azure functionality created a new solution to ease the complexities of extending an on-premises network to an Azure network, ensuring connectivity during restoration scenarios. In December of that year, Veeam PN became generally available as a FREE solution.

What Veeam PN does well is present a simple and intuitive web-based user interface for the setup and configuration of site-to-site and point-to-site VPNs. Moving beyond the intended use case, Veeam PN became popular in the IT enthusiast and home lab worlds as a simple and reliable way to remain connected while on the road, or to mesh together with ease networks that were spread across disparate platforms.

By utilizing OpenVPN under the hood and automating and orchestrating the setup of site-to-site and point-to-site networks, we leveraged a mature open source tool that offered a level of reliability and performance that suited most use cases. However, we didn’t want to stop there, and looked at ways to continue enhancing Veeam PN to make it more useful for IT organizations while increasing underlying performance to maximize potential use cases.

Introducing Veeam Powered Network v2 featuring WireGuard

With the release of Veeam PN v2, we have enhanced what is possible for site-to-site connectivity by incorporating WireGuard into the solution (replacing OpenVPN for site-to-site) as well as enhancing usability. We also added the ability to better connect to remote devices with the support of DNS for site-to-site connectivity.

WireGuard has replaced OpenVPN for site-to-site connectivity in Veeam PN v2 due to its rise in the open source world as a new standard in VPN technologies, one that offers a higher degree of security through enhanced cryptography and operates more efficiently, leading to increased performance and security. It achieves this by working in-kernel and by using far fewer lines of code (roughly 4,000 compared to 600,000 in OpenVPN), and it offers greater reliability when connecting hundreds of sites… therefore increasing scalability.

For a deeper look at why we chose WireGuard… have a read of my official veeam.com blog. The story is very compelling!

Increased Security and Performance

By incorporating WireGuard into Veeam PN we have further simplified the already simple WireGuard setup, allowing users of Veeam PN to consume it for site-to-site connectivity even faster via the Veeam PN Web Console. Security is always a concern with any VPN, and WireGuard takes a more streamlined approach to security by relying on crypto versioning to deal with cryptographic attacks… in a nutshell, it is easier to move through versions of cryptographic primitives than to rely on client-server negotiation of cipher types and key lengths.
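To illustrate just how little configuration WireGuard itself needs, a minimal site-to-site peer definition looks something like the following (keys, addresses and endpoint are placeholders for illustration… Veeam PN generates and manages all of this for you behind the Web Console):

```ini
[Interface]
# This site's tunnel identity and tunnel address
PrivateKey = <site-A-private-key>
Address    = 10.10.10.1/24
ListenPort = 51820

[Peer]
# The remote site and the networks reachable through it
PublicKey           = <site-B-public-key>
Endpoint            = site-b.example.com:51820
AllowedIPs          = 192.168.2.0/24
PersistentKeepalive = 25
```

There is no cipher or key-length negotiation to configure… the peers simply exchange public keys, which is a big part of why the setup can be automated so easily.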

Because of this streamlined approach to encryption, in addition to the efficiency of the code, WireGuard can outperform OpenVPN, meaning that Veeam PN can sustain significantly higher throughput (testing has shown performance increases of 5x to 20x depending on CPU configuration), which opens up the use cases to far more than just basic remote office or home lab use. Veeam PN can now be considered as a way to connect multiple sites together and sustain transfers of hundreds of Mb/s, which is perfect for data protection and disaster recovery scenarios.

Other Enhancements

The addition of WireGuard is easily the biggest enhancement over Veeam PN v1, however there are a number of other enhancements listed below:

DNS forwarding and configuring to resolve FQDNs in connected sites.

New deployment process report.

Microsoft Azure integration enhancements.

Easy manual product deployment.

Conclusion

Once again, the premise of Veeam PN is to offer Veeam customers a free tool that simplifies the traditionally complex process around the configuration, creation and management of site-to-site and point-to-site VPN networks. The addition of WireGuard as the site-to-site VPN platform will allow Veeam PN to go beyond the initial basic use cases and become an option for more business-critical applications due to the enhancements that WireGuard offers.

VeeamON is happening next week and the final push towards the event is in full swing. I can tell you that this year’s event is going to be slightly different for those that have attended VeeamON in the past… however that is a good thing! This is going to be my fourth VeeamON, and my third being involved with the preparation of elements of the event. Having been behind the scenes, and knowing what our customers and partners are in for in terms of content and event activities… I can’t wait for things to kick off in Miami.

This year we have 60+ breakout sessions with a number of high profile speakers coming over to help deliver those sessions. We also have significant keynote speakers for the main stage sessions on each of the event days. One of the biggest differences this year is that we will have a dedicated Technical Mainstage Keynote happening on Tuesday afternoon which will feature myself and other members of the Veeam Product Strategy and Product Management teams showing live demos of the latest Veeam technology and a look at what’s coming in our next major release.

Top Session Picks:

I’ve gone through all the breakouts and picked out my top sessions that you should consider attending…as usual there is a cloud slant to most of them, but there are also some core technology sessions that are not to be missed. The Veeam Product Strategy team are well represented in the session list so it’s also worth looking to attend talks from Rick Vanover, Michael Cade, Niels Engelen, David Hill, Kirsten Stoner, Dave Russell, Jason Buffington, Jeff Reichard and Danny Allan.

When I was a boy, I started following the Essendon Australian Rules Football club… I was drawn to their colours and I was also drawn to the fact they had just completed back to back premierships. Since then, I have been engaged in running battles with my father, family and friends… all of whom support different AFL sides. I chose my tribe early on in life and that has resulted in battle lines being drawn ever since.

People, by nature, are tribal creatures… most of us strive to belong to groups that carry similar values, shared beliefs and also, the most primal desires of all… the feeling of belonging, security and safety. People form tribes… they always have… they always will. We all fight for our tribes and for what we believe in. Whether it be Coke or Pepsi, Burger King or McDonalds, Nike or Reebok, Apple or Samsung… the list goes on!

Work Tribes:

When it comes to work, tribalism becomes even more apparent. Even within work places we see tribes form between departments and even within the same groups… each tribe with their own agenda…their own political motives… but ultimately each person in their respective tribes wants to see that tribe succeed.

Stage Three: Tribal members are selfish at this stage. They are in it for themselves, and they are extremely averse to collaboration. Their attitude is “I’m great . . . and you’re not.”

Each stage has its own description, but ultimately when it comes to work tribes, we are very good at taking that attitude of "I am great and you are not." Your software sucks… mine is better. We outperform your storage array… and so on.

Vendor Wars, FUD, Trolling and the Notion of Can’t we all Get Along?

Anyone who operates in and around IT vendors knows of instances where things have been posted on social media that escalate to popcorn worthy viewing. Trolling is also something that happens quite often, and I will be the first to admit that I have been involved at times and also witnessed petulant behaviour that has a lot to do with protecting one’s tribe.

We all walk a fine line when it comes to supporting our tribes… and for those who are passionate by nature, the line can sometimes be easily crossed. I have observed those who claim to be non-tribal, less passionate, and see themselves as neutral observers when it comes to trolling, arguments or FUD throwing. These are the people who will ironically join the argument while standing on their soapboxes and shout… “Why can’t we all get along!”… themselves showing Stage 1 or 2 tribal characteristics.

When it comes to defending our tribes… the tribes that put food on the table for our families… the tribes that help us achieve a sense of belonging and accomplishment in life… the tribes who we currently root for 100%… it should not be a surprise to anyone that competitive behaviour exists. There are always lines that are crossed, but that is one hundred percent due to the belief in our own tribes and the desire for them to survive and prosper.

I’m not excusing any behavior. I’m not condoning some of the stuff I have seen, or been a part of… but what I am trying to say is that as long as people exist, we will form tribes… it’s a very reptilian instinct that makes us want to defend our patches.

I know this is controversial to some… and that some people don’t like or condone the behaviour that we see sometimes, but the reality of the world in which we live… especially in the IT vendor space… is that tribes will be at war… and people will do what they need to do to win. It’s not always desirable and sometimes the level of FUD is mind blowing. However, the one thing to remember… and the irony that is obviously apparent in the world of IT… is that people change tribes often… people who were once your enemy are now your tribe members… this is something that needs consideration, as we are always ultimately accountable for our actions.

At the end of the day, it is almost impossible for everyone to play nice…We are… and always have been tribal!

At the recent Cloud Field Day 5 (CFD#5) I presented a deep dive on the Veeam Cloud Tier which was released as a feature extension of our Scale Out Backup Repository (SOBR) in Update 4 of Veeam Backup & Replication. Since we went GA we have been able to track the success of this feature by looking at Public Cloud Object Storage consumption by Veeam customers using the feature. As of last week Veeam customers have been offloading petabytes of backup data into Azure Blob and Amazon S3…not counting the data being offloaded to other Object Storage repositories.

During the Cloud Field Day 5 presentation, Michael Cade talked about the portability of Veeam’s data format and how we do not lock our customers into any specific hardware or format that requires a specific underlying file system. We offer complete flexibility and agnosticism as to where your data is stored, and the same is true when talking about which Object Storage platform to choose for the offloading of data with the Cloud Tier.

I had a need recently to setup a Capacity Tier extent that was backed by an Object Storage Repository on Azure Blob. I wanted to use the same backup data that I had in an existing Amazon S3 backed Capacity Tier while still keeping things clean in my Backup & Replication console…luckily we have built in a way to migrate to a new Object Storage Repository, taking advantage of the innovative tech we have built into the Cloud Tier.

Cloud Tier Data Migration:

During the offload process, data is tiered from the Performance Tier to the Capacity Tier, effectively dehydrating the VBK files of all backup data and leaving only the metadata with an index that points to where the data blocks have been offloaded in the Object Storage.

This process can also be reversed and the VBK file can be rehydrated. The ability to bring the data back from Capacity Tier to the Performance Tier means that if there was ever a requirement to evacuate or migrate away from a particular Object Storage Provider, the ability to do so is built into Backup & Replication.
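The dehydrate/rehydrate cycle can be modelled with a toy example (purely illustrative… the real VBK and index formats are Veeam internals, and the names below are made up for the sketch):

```python
# Toy model of Cloud Tier dehydration/rehydration. Dehydration offloads data
# blocks to object storage and leaves an index behind in the local file;
# rehydration pulls blocks back, reusing any matching local blocks first
# (the "Intelligent Block Recovery" idea) to save on egress.

def dehydrate(local_blocks, object_store):
    """Offload every block; return the metadata index left in the VBK."""
    index = {}
    for block_id, data in list(local_blocks.items()):
        object_store[block_id] = data                    # offload the block
        index[block_id] = f"object-store://{block_id}"   # remember where it went
        del local_blocks[block_id]                       # keep metadata only
    return index

def rehydrate(index, object_store, local_cache):
    """Rebuild the local blocks, downloading only what isn't already local."""
    restored, downloaded = {}, 0
    for block_id in index:
        if block_id in local_cache:                # matching block found locally
            restored[block_id] = local_cache[block_id]
        else:                                      # otherwise pay for the download
            restored[block_id] = object_store[block_id]
            downloaded += 1
    return restored, downloaded
```

In this model, rehydrating with a warm local cache downloads only the blocks that have no local match, which is exactly the behaviour the Download job exhibits later in this post.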

In this small example, as you can see below, the SOBR was configured with a Capacity Tier backed by Amazon S3 and using about 15GB of Object Storage.

The first step is to download the data back from the Object Storage and rehydrate the VBK files on the Performance Tier extents.

There are two ways to achieve the rehydration or download operation.

Via the Backup & Replication Console

Via a PowerShell Cmdlet

Rehydration via the Console:

From the Home menu under Backups, right click on the job name and select Backup Properties. From here there is a list of the files contained within the job and also the objects that they contain. Depending on where the data is stored (remembering that the data blocks are only ever in one location: the Performance Tier or the Capacity Tier), the icon against the file name will be slightly different, with offloaded files represented with a cloud.

Right clicking on any of these files will give you the option to copy the data back to the Performance Tier. You have the choice to copy back the backup file, or the backup file and all its dependencies.

Once this is selected, a SOBR Download job is kicked off and the data is moved back to the Performance Tier. It’s important to note that our Intelligent Block Recovery will come into play here and look at the local data blocks to see if any match what is trying to be downloaded from the Object Storage… if so it will copy them from the Performance Tier, saving on egress charges and also speeding up the process.

In the image above you can see the Download job working, having downloaded only 95.5MB from Object Storage with 15.1GB copied from the Performance Tier… meaning that the data blocks which were, for the most part, already local were able to be used for the rehydration.

The one caveat to this method is that you can’t select bulk files or multiple backup jobs so the process to rehydrate everything from the Capacity Tier can be tedious.

Rehydration via PowerShell:

To solve that problem we can use PowerShell to call the Start-VBRDownloadBackupFile cmdlet to do the bulk of the work for us. Below are the steps I used to get the backup job details, feed them through to a variable that contains all the file names, and then kick off the Download job.
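As a rough sketch, the sequence looks something like the following (the job name is a placeholder, and the exact cmdlets for enumerating backup files vary by Backup & Replication version, so verify against Get-Help in your own environment):

```powershell
# Grab the backup, collect its files, then kick off a bulk SOBR Download
# (rehydration) job. "Tenant-Backup-Job" is a placeholder name; check the
# cmdlet parameters with Get-Help Start-VBRDownloadBackupFile -Full.
$backup = Get-VBRBackup -Name "Tenant-Backup-Job"
$files  = $backup | Get-VBRBackupFile
Start-VBRDownloadBackupFile -BackupFile $files -RunAsync
```

Because the cmdlet accepts a collection of backup files, this gets around the one-file-at-a-time limitation of the console method.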

No matter which way the Download job is initiated, we can see the progress from the Backup & Replication Console under the Jobs section.

And looking at the Disk and Network sections of Windows Resource Monitor we can see connections to Amazon S3 pulling the required blocks of data down.

Once the Download job has been completed and all VBKs have been rehydrated, the next step is to change the configuration of the SOBR Capacity Tier to point at the Object Storage Repository backed by Azure Blob.

The final step is to initiate an offload to the new Capacity Tier via an Offload Job… this can be triggered via the console or via PowerShell (as shown in the last command of the PowerShell code above), and because we already have a set of data that satisfies the conditions for offload (sealed chains and backups outside the operational restore window), the data will be dehydrated once again… but this time up to Azure Blob.

The used space shown below in the Azure Blob Object Storage matches the used space initially in Amazon S3.

All recovery operations show Restore Points on the Performance Tier and on the Capacity Tier as dictated by the operational restore window policy.

Conclusion:

As mentioned in the intro, the ability for Veeam customers to have control of their data is an important principle revolving around data portability. With the Cloud Tier we have extended that by allowing you to choose the Object Storage Repository of your choice for cloud-based storage of Veeam backup data… but we have also given you the option to pull that data out and shift it when and where desired. Migrating data between AWS, Azure or any platform is easily achieved and can be done without too much hassle.

Last week I wrote an opinion piece on Infrastructure as Code vs RESTful APIs. In a nutshell, I talked about how leveraging IaC instead of trying to code against APIs directly can be more palatable for IT professionals as it acts as a middle man interpreter between yourself and the infrastructure endpoints. IaC can be considered a black box that does the complicated lifting for you without having to deal with APIs directly.

As a follow up to that post I wanted to show an example of the differences between using direct APIs versus using an IaC tool like Terraform. Not surprisingly, the example below features vCloud Director… but I think it speaks volumes to the message I was trying to get across in the introduction post.

The Terraform Provider has been developed using Python and Go. It uses a client-server model under the hood, where the client has been written using Go and the server has been written using Python. The core reason to use two different languages is to create a bridge between Terraform and the Pyvcloud API. Pyvcloud is the SDK developed by VMware and provides a medium to talk to vCloud Director. Terraform uses Go to communicate, whereas Pyvcloud has been written in Python 3.

The above explanation as to how this provider does its thing highlights my previous points around IaC tools in general. The abstraction of the infrastructure endpoint is easy to see… and in the below examples you will see its benefit for those who don’t have the inclination to hit the APIs directly.

The assumption for both examples is that we are starting without any configured Firewall or NAT rules on the NSX Edge Services Gateway. Both methods connect as tenants of the vCD infrastructure and authenticate with Organization-level access.

The end result will be:

Allow HTTP, HTTPS and ICMP access to a VM living in a vDC

External IP is 82.221.98.109

Internal IP of VM is 172.17.0.240

VM Subnet is 172.17.0.0/24

Configure DNAT rules to allow HTTP and HTTPS

Configure SNAT rule to allow outbound from the VM subnet

Configuring Firewall and NAT Rules with RESTful API:

Firstly, to understand what vCD API operations need to be hit, we need to be familiar with the API Documentation. This will cover initial authentication as either a SYSTEM or Organizational admin and then what calls need to be made to get information relating to the current configuration and schema. Further to this, we need to also be familiar with the NSX API for vCD Documentation which covers how to interact with the network specific API operations possible from the vCD API endpoint.

We are going to be using Postman to execute against the vCD API. Postman is great because you can save your call history and reuse it at a later date. You can also save variables into Workspaces and insert specific code to assist with things like authentication.

The first step is to authenticate against the API and get a session authorization key that you can feed back into subsequent requests. This authorization key will only last a finite amount of time and will need to be regenerated.
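The same session request can be sketched with curl (the hostname, org name, credentials and API version below are assumptions for illustration… adjust them for your own endpoint):

```shell
# Authenticate as an Organization admin; vCD credentials take the form
# user@org. The session token comes back in the x-vcloud-authorization
# response header and is passed as a header on every subsequent request.
curl -k -X POST \
     -u 'admin@Tenant-Org:MyPassword' \
     -H 'Accept: application/*+xml;version=30.0' \
     -D - -o /dev/null \
     https://vcd.example.com/api/sessions
```

The `-D -` flag dumps the response headers so you can copy the `x-vcloud-authorization` value out for the follow-up requests.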

Because we are using a decent RESTful API client like Postman, there is a better way to programmatically authenticate using a bearer access token, as described in Tom Fojta’s post here, when talking to the vCD API.

Once that is done we are authenticated as a vCD Organizational Admin and we can now query the NSX Edge Services Gateway (ESG) Settings for Firewall and NAT rules. I’ll walk through configuring a NAT rule for the purpose of the example, but the same method will be used to configure the Firewall as well.

Below we are querying the existing NAT rules using a GET request against the NSX ESG. What we are returned is an empty config in XML.

What needs to be done is to turn that request into a POST and craft an XML payload into the Body of the request so that we can configure the NAT rules as desired.
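As an illustration, the Body payload for the NAT rules might look something like the XML below. The element names follow the NSX-v style API surfaced through the NSX API for vCD, so treat this as a sketch and confirm the exact schema against the documentation:

```xml
<natRules>
  <!-- DNAT: forward inbound HTTP to the VM -->
  <natRule>
    <action>dnat</action>
    <originalAddress>82.221.98.109</originalAddress>
    <translatedAddress>172.17.0.240</translatedAddress>
    <protocol>tcp</protocol>
    <originalPort>80</originalPort>
    <translatedPort>80</translatedPort>
  </natRule>
  <!-- SNAT: allow the VM subnet outbound via the external IP -->
  <natRule>
    <action>snat</action>
    <originalAddress>172.17.0.0/24</originalAddress>
    <translatedAddress>82.221.98.109</translatedAddress>
  </natRule>
</natRules>
```

A similar payload with HTTPS (port 443) would be added for the second DNAT rule.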

Redoing the GET request will now show that the NAT rules have been created.

And will be shown in the vCD Tenant UI

From here we can update, append, reset or delete the NAT rules as per the API documentation. Each one of those actions will require a new call to the API and the same process followed as above.

Configuring Firewall and NAT Rules with Terraform:

For a primer on the vCloud Director Terraform Provider, read this post and also head over to Luca’s post on Terraform with vCD. As with the RESTful API example above, I will use Terraform IaC to configure the same tenant NSX Edge Gateway’s Firewall and NAT rules. What will become clear using Terraform for this is that it is a lot more efficient and elegant than going at it directly against the APIs.

Initially we need to set up the required configuration items in order for the Terraform Provider to talk to the vCD API endpoint. To do this we need to create a number of Terraform files that declare the variables required to connect to the vCD Organization, and then configure the terraform.tfvars file that contains the specific values.

We also create a provider .tf file to specifically call out the required Terraform provider and set the main variables.

We contain all this in a single folder (seen in the left pane above) for organization and portability… these folders can be referenced as Terraform modules if desired in more complex, reusable plans.

We then create two more .tf files for the Firewall and NAT rules. The format is dictated by the provider pages, which give examples. We can make things more portable by incorporating some of the variables we declared elsewhere in the code, as shown below for the Edge Gateway name and destination IP address.
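To give a feel for what those .tf files contain, here is a hypothetical sketch using the dnat/snat/firewall resources from the vCD provider (resource and argument names should be verified against the provider documentation for the version you have installed):

```hcl
# DNAT: forward inbound HTTP and HTTPS to the VM
resource "vcd_dnat" "web_http" {
  edge_gateway = var.edge_gateway
  external_ip  = "82.221.98.109"
  port         = 80
  internal_ip  = "172.17.0.240"
}

resource "vcd_dnat" "web_https" {
  edge_gateway = var.edge_gateway
  external_ip  = "82.221.98.109"
  port         = 443
  internal_ip  = "172.17.0.240"
}

# SNAT: allow the VM subnet outbound via the external IP
resource "vcd_snat" "outbound" {
  edge_gateway = var.edge_gateway
  external_ip  = "82.221.98.109"
  internal_ip  = "172.17.0.0/24"
}

# Firewall: allow web and ICMP traffic to the external IP
resource "vcd_firewall_rules" "web" {
  edge_gateway   = var.edge_gateway
  default_action = "drop"

  rule {
    description      = "allow-web"
    policy           = "allow"
    protocol         = "tcp"
    destination_port = "any"
    destination_ip   = "82.221.98.109"
    source_port      = "any"
    source_ip        = "any"
  }

  rule {
    description      = "allow-icmp"
    policy           = "allow"
    protocol         = "icmp"
    destination_port = "any"
    destination_ip   = "82.221.98.109"
    source_port      = "any"
    source_ip        = "any"
  }
}
```

Notice how `var.edge_gateway` and the IP values could just as easily come from terraform.tfvars, which is what makes the plan portable between tenants.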

Once the initial configuration work is done, all that’s required in order to apply the configuration is to initialize the Terraform Provider, make sure that the Terraform Plan is as expected… and then apply the plan against the Tenant’s Organization.

Configuring vCloud Director NSX Edge with Terraform - YouTube

As the video shows… in less than a minute we have the NSX Firewall and NAT rules configured. More importantly, we now have a desired state which can be modified at any time by simple additions or subtractions to the Terraform code.

Wrapping it up:

From looking at both examples, it’s clear that both methods of configuration do the trick, and it really depends on what sort of IT professional you are as to which method is more suited to your day to day. For those working as automation engineers, working with APIs directly and/or integrating them into provisioning engines or applications is going to be your preferred method. For those that want to be able to deploy, configure and manage their own infrastructure in a more consumable way, using a Terraform provider is probably a better way.

The great thing about Terraform in my eyes is the fact that you have declared the state that you want configured, and once that has been actioned, you can easily check that state and modify it by changing the configuration items in the .tf files and reapplying the plan. For me it’s a much more efficient way to programmatically configure vCD than doing the same configuration directly against the API.

Ohhh… and don’t forget… you are still allowed to use the UI as well… there is no shame in that!

While I was a little late to the game in understanding the power of Infrastructure as Code, I’ve spent a lot of the last twelve months working with Terraform specifically to help deploy and manage various types of my lab and cloud based infrastructure. Appreciating how IaC can fundamentally change the way in which you deploy and configure infrastructure, workloads and applications is not an easy thing to grasp…there can be a steep learning curve and lots of tools to choose from.

In terms of a definition as to what is IaC:

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources. The definitions may be in a version control system. It can use either scripts or declarative definitions, rather than manual processes, but the term is more often used to promote declarative approaches.

As represented above, there are many tools in the IaC space and everyone will gravitate towards their own favourite. The post where I borrowed that graphic from actually does a great job of talking about the differences and also why Terraform has become my standout for IT admins and why HashiCorp is on the up. I love how the article talks about the main differences between each one, and specifically the part around the Procedural vs Declarative comparison, where it states that the declarative approach is where “you write code that specifies your desired end state, and the IaC tool itself is responsible for figuring out how to achieve that state.”
You Don’t Need to Know APIs to Survive!:

The statement above is fairly controversial… especially for those that have been preaching about IT professionals having to code in order to remain viable. A lot of that mindshare is centred around the API and DevOps worlds… but not everyone needs to be a DevOps engineer! IT is all about trying to solve problems and achieve outcomes… it doesn’t matter how you solve it… as long as the problem is solved or the outcome attained. Being as efficient as possible is also important when achieving that outcome.

My background prior to working with IaC tools like Terraform was working with and actioning outcomes directly against RESTful APIs. I spent a lot of time specifically with the vCloud Director and NSX APIs in order to help productise services in my last two roles, so I feel like I know my way around a cURL command or Postman window. Let me point out that there is nothing wrong with having knowledge of APIs, and that it is important for IT professionals to understand the fundamentals of APIs and how they are accessed and used for programmatic management of infrastructure and for creating applications.

I’m also not understating the skill involved in being able to understand and manipulate APIs directly, and in being able to take those resources and create automated provisioning or actual applications that interact directly with APIs and create an outcome of their own. Remember though that everyone’s skill set and level is different, and no one should feel any less an IT practitioner if they can’t code at a perceived higher level.

How IaC Tools Bridge the Gap:

In my VMUG UserCon session last month in Melbourne and Sydney I went through the Veeam SDDC Deployment Toolkit that was built with various IaC tooling (Terraform and Chef) as well as PowerShell, PowerCLI and some Bash Scripting. Ultimately putting all that together got us to a point where we could declaratively deploy a fully configured Veeam Backup & Replication server and fully configure it ready for action on any vSphere platform.

That aside, the other main point of the session was taking the audience through a very quick Terraform 101 introduction and demo. In both cities, I asked the crowd how much time they spent working with APIs to do “stuff” on their infrastructure… in both cities there was almost no one that raised their hands. After I went through the basic Terraform demo where I provisioned and then modified a VM from scratch I asked the audience if something like this would help them in their day to day roles… in both cities almost everyone put their hands up.

Therein lies the power of IaC tools like Terraform. I described it to the audience as a way to code without having to know the APIs directly. Terraform Providers act as the middle man or interpreter between yourself and the infrastructure endpoints. Consider it a black box that does the complicated lifting for you… this is the essence of Infrastructure as Code!

There are some that may disagree with me (and that’s fine), but I believe that for the majority of IT professionals who haven’t yet transitioned away from “traditional” infrastructure management, configuration and deployment, looking at IaC tools like Terraform can help you not only survive… but also thrive!

A couple of weeks ago, Veeam Backup for Office 365 version 3.0 (build 3.0.0.422) went GA. This new version builds on the 2.0 release, which offered support for SharePoint and OneDrive as well as enhanced self-service capabilities for Service Providers. Version 3.0 is more about performance and scalability, as well as adding some highly requested features from our customers and partners.

Version 2.0 was released last July and was focused on expanding the feature set to include OneDrive and SharePoint. We also continued to enhance the automation capability of the platform through a RESTful API service, allowing our Cloud & Service Providers to tap into the APIs to create scalable and efficient service offerings. In version 3.0, there is also an extended set of PowerShell cmdlets that have been enhanced from version 2.0.

What’s New in 3.0:

Understanding how best to deal with backing up SaaS based services, where a lot of what happens is outside of the control of the backup vendor, there were some challenges around performance with backing up and restoring SharePoint and OneDrive in version 2.0. With the release of version 3.0 we have managed to increase the performance of SharePoint and OneDrive incremental backups to up to 30 times what was previously seen in 2.0. We have also added support for multi-factor authentication, which was a big ask from our customers and partners.

Other key enhancements for me were the optimisations around the repository databases that improve space efficiency, and the auto-scaling of repository databases that enables easier storage management for larger environments by overcoming the ESE file size limit of 64 TB. When the limit is reached, a new database is created automatically in the repository, removing the need for manual intervention.

Apart from the headline new features and enhancements there are also a number of additional ones that have been implemented into Backup for Microsoft Office 365 3.0.

Backup flexibility for SharePoint Online. Personal sites within organisations can now be excluded from or included in a backup in bulk.

Flexible protection of services within your Office 365 organization, including exclusive service accounts for Exchange Online and SharePoint Online.

Built-in Office 365 storage and licensing reports.

Snapshot-based retention which extends the available retention types.

Extended search options in the backup job wizard that make it possible to search for objects by name, email alias and office location.

On-demand backup jobs to create backup jobs without a schedule and run them upon request.

The ability to rename existing organizations to keep a cleaner view on multiple tenant organizations presented in the console.

For another look at what’s new, Niels Engelen goes through his top new features in detail here and for service providers out there, it’s worth looking at his Self Service Portal which has also been updated to support 3.0.

Architecture and Components:

There hasn’t been much of a change to the overall architecture of VBO and like all things Veeam, you have the ability to go down an all in one design, or scale out depending on sizing requirements. Everything is handled from the main VBO server and the components are configured/provisioned from here.

Proxies are the workhorses of VBO and can be scaled out depending on the size of the environment being backed up. Again, this could be Office 365 or on-premises Exchange or SharePoint instances.

Repositories must be configured on Windows-formatted volumes as we use the ESE (Jet) database format to store the data. The repositories can be mapped one-to-one to tenants, or have a many-to-one relationship.

Installation Notes:

You can download the latest version of Veeam Backup for Microsoft Office 365 from this location. The download contains three installers that cover the VBO platform and two new versions of the Explorers. Explorer for Microsoft OneDrive for Business is contained within the Explorer for Microsoft SharePoint package and installed automatically.

3.0.0.422.msi for Veeam Backup for Microsoft Office 365

9.6.5.422.msi for Veeam Explorer for Microsoft Exchange

9.6.5.422.msi for Veeam Explorer for Microsoft SharePoint

To finish off, it’s important to read the release notes here, as there are a number of known issues relating to specific situations and configurations.

Backup for Office 365 has been a huge success for Veeam with a growing realisation that SaaS based services require an availability strategy. The continuity of data on SaaS platforms like Office 365 is not guaranteed and it’s critical that a backup strategy is put into place.

Last week I had the pleasure of presenting at Cloud Field Day 5 (a Tech Field Day event). Joined by Michael Cade and David Hill, we took the delegates through Veeam’s cloud vision by showcasing current products and features in the Veeam platform, including specific technology that both leverages and protects Public Cloud workloads and services. We also touched on where Veeam is at in terms of market success and dug into how Veeam enables Service Providers to build services off our Cloud Connect technology.

First off, I would like to thank Stephen Foskett and the guys at Gestalt IT for putting together the event. Believe me, there is a lot that goes on behind the scenes and it is impressive how the team are able to set up, tear down and set up again in different venues while handling the delegates themselves. Also to all the delegates, it was extremely valuable being able to not only present to the group, but also have a chance to talk shop at the official reception dinner…some great thought-provoking conversations were had and I look forward to seeing where your IT journey takes you all next!

Getting back to the recap, I’ve pasted in the YouTube links to the Veeam sessions below. Michael Cade has a great recap here, where he gives his overview on what was presented and some thoughts about the event.

We tried to focus on core features relating to cloud and then show a relatable live demo to reinforce the slide decks. No smoke and mirrors when the Veeam Product Strategy Team is doing demos… they are always live!

For those that might not have been up to speed with what Veeam has done over the past couple of years, it’s a great opportunity to learn about how we have spent a number of years innovating in the Data Protection space, while also looking at the progress we have made in recent times in transitioning to a true software-defined, hardware-agnostic platform that offers customers absolute choice. We like to say that Veeam was born in the virtual world…but is evolving in the Cloud!

Veeam Company Introduction - YouTube

Veeam Portability & Cloud Mobility - YouTube

Veeam Cloud Tier - YouTube

Veeam Availability for AWS - YouTube

Veeam for Cloud and Service Providers - YouTube

Summary:

Once again, being part of Cloud Field Day 5 was a fantastic experience, and the team executed the event well. In terms of what Veeam set out to achieve, Michael, David and myself were happy with what we were able to present and demo, and with the level of questions being asked by the delegates. We are looking forward to attending Tech Field Day 20 later in the year, where as well as continuing to show what Veeam can do today…we can take a look at where we are going in future releases!

Yesterday at Cloud Field Day 5, I presented a deep dive on our Cloud Tier feature that was released as a feature of Scale Out Backup Repository (SOBR) in Veeam Backup & Replication Update 4. The session went through an overview of its value proposition as well as a deep dive into how we tier backup data into Object Storage repositories via the Capacity Tier Extent of a SOBR. I also covered the space-saving and cost-saving efficiencies we have built into the feature, as well as looking at the full suite of recoverability options still available with data sitting in an Object Storage Repository.

This included a live demo of a situation where a local Backup infrastructure had been lost and what the steps would be to leverage the Cloud Tier to bring that data back at a recovery site.

Quick Overview of Offload Job and VBK Dehydration:

Once a Capacity Tier Extent has been configured, the SOBR Offload Job is enabled. This job is responsible for validating what data is marked to move from the Performance Tier to the Capacity Tier based on two conditions:

The Policy defining the Operational Restore Window

Whether the backup data is part of a sealed backup chain

The first condition is all about setting a policy on how many days you want to keep data locally on the SOBR Performance Tier, which effectively becomes your landing zone. This is often dictated by customer requirements and can now be used to design a more efficient approach to local storage, with the understanding that the majority of older data will be tiered to Object Storage.

The second is around the sealing of backup chains which means they are no longer under transformation. This is explained in this Veeam Help Document and I also go through it in the CFD#5 session video here.

Once those conditions are met, the job starts to dehydrate the local backup files and offload the data into Object Storage leaving a dehydrated shell with only the metadata.
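The two conditions and the dehydration step above can be sketched as follows. This is purely a conceptual model; the field names, the window value and the functions are assumptions of mine, not Veeam's internals:

```python
from datetime import datetime, timedelta

# Illustrative policy value - the operational restore window is configurable.
OPERATIONAL_RESTORE_WINDOW_DAYS = 14

def eligible_for_offload(backup_file, now):
    """A backup file is offloaded only if BOTH conditions hold:
    it falls outside the operational restore window, AND it belongs
    to a sealed (no longer transforming) backup chain."""
    window_start = now - timedelta(days=OPERATIONAL_RESTORE_WINDOW_DAYS)
    return backup_file["created"] < window_start and backup_file["chain_sealed"]

def dehydrate(backup_file, object_storage):
    """Move the data blocks to object storage, leaving a local
    shell that contains only the metadata."""
    object_storage[backup_file["name"]] = backup_file.pop("blocks")
    backup_file["dehydrated"] = True

now = datetime(2019, 4, 1)
vbk = {"name": "job1.vbk", "created": datetime(2019, 3, 1),
       "chain_sealed": True, "blocks": [b"data"] * 4}
store = {}
if eligible_for_offload(vbk, now):
    dehydrate(vbk, store)
```

The key design point the sketch captures is that the local file is never deleted, only emptied of data blocks, which is what keeps all the recovery options working.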

The importance of this process is that, because we leave the shell locally with all the metadata contained, we are still able to perform every Veeam recovery option, including Instant VM Recovery and Restore to Azure or AWS.

Resiliency and Disaster Recovery with Cloud Tier:

Looking at the above image of the offload process, you can see that the metadata is replicated to the Object Storage, as is the Archive Index, which keeps track of which blocks are mapped to which backup file. In fact, for every extent we keep a resilient copy of the Archive Index, meaning that if an extent is lost, there is still a reference.
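Conceptually, the Archive Index is a block-to-backup-file map kept per extent, with a copy held alongside the data in object storage. The structure below is my own simplified model, not Veeam's actual format:

```python
# Hypothetical model of the per-extent Archive Index described above.
# Each extent tracks which blocks belong to which backup file.
archive_index = {
    "extent-A": {"block-001": "job1.vbk", "block-002": "job1.vbk"},
    "extent-B": {"block-003": "job2.vbk"},
}

# A resilient copy of every extent's index is replicated to object storage.
object_storage_index = {extent: dict(blocks) for extent, blocks in archive_index.items()}

# If an extent is lost, its block-to-file mapping can still be
# recovered from the copy in object storage.
del archive_index["extent-A"]
recovered = object_storage_index["extent-A"]
```

This is why losing a local extent does not mean losing the ability to work out which offloaded blocks belong to which backup file.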

This is relevant because it gives us disaster recovery options in the case of the loss of a whole backup site or the loss of an extent. During the synchronization, we download the backup files with metadata located in the object storage repository to the extents and rebuild the data locally before making it available in the backup console.
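A rough sketch of that synchronization flow, again with all names being my own illustration rather than Veeam's implementation:

```python
# Illustrative sketch of the DR synchronization described above.
def synchronize(object_storage, local_extents):
    """After losing the local backup infrastructure, pull the
    metadata-only backup files back down to fresh extents and
    register each one so it surfaces as an imported backup."""
    imported = []
    for name, metadata in object_storage.items():
        local_extents[name] = metadata   # rebuild the shell locally
        imported.append(name)            # register for the console's imported view
    return imported

cloud = {"job1.vbk": {"meta": "chain metadata"},
         "job2.vbk": {"meta": "chain metadata"}}
local = {}
imported_jobs = synchronize(cloud, local)
```

The takeaway is that the object storage copy of the metadata is sufficient to rebuild a usable local view of the backups at a recovery site.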

After the synchronization is complete, all the backups located in object storage become available as imported jobs and are displayed under Backups > Imported in the inventory pane. But what better way to see this in action than a live demo…Below, I have pasted in the Cloud Field Day video, which will start at the point where I show the demo. If the auto-start doesn’t kick in correctly, the demo starts at the 31:30 mark.