
If your organisation has placed workloads both on-premises and in the cloud, forming a "hybrid cloud" model, you probably need on-premises storage that meets the requirements of hybrid workloads. EMC's Unity hybrid flash storage series may be the answer to your business-critical problem. This unified storage array is designed for organisations from the midmarket to the enterprise, and covers the broadest range of workloads, both SAN and NAS. EMC Unity has been designed around workloads, rather than being a tin sitting in your data centre consuming power and cooling while you call it a SAN, which is what a traditional tin-based SAN solution amounted to.

Previously I wrote an article about Dell Compellent and received an overwhelming response from Compellent users. I have been asked on many occasions what other options we have besides Compellent storage.

To answer the question: I would choose from EMC Unity Hybrid Storage, Nimble or NetApp storage, subject to an in-depth analysis of the workloads, case studies and business requirements. But again, this is a "subject to x, y, z" question. Tin-based storage does not always fulfil modern business requirements; personally, I would rather use Azure or AWS than procure any tin and pay for power, cooling and racks.

EMC Unity

The Unity midrange storage platform, with flash and rich data services based on dense SSD technology, helps provide outstanding TCO. Unity provides intelligent insight into SAN health with CloudIQ, which delivers cloud-based proactive monitoring and predictive analytics. Additionally, ongoing operation is simple, with proactive assistance and automated remote support.

What I like about Unity is the Unity software, most notably CloudIQ, AppSync and the Cloud Tiering Appliance. Unity's capabilities include point-in-time snapshots, local and remote data replication, built-in encryption, and deep integration with the VMware, Microsoft application, Hyper-V, Azure Blob, AWS S3 and OpenStack ecosystems. Unity provides automated tiering and flash caching, so the most active data is served from flash.

Management

Unity provides a very user-friendly GUI management interface. After installing and powering on the purpose-built Dell EMC Unity system for the first time, the operating environment boots. The interfaces are well defined, with areas of interest highlighted, such as drive faults and network link failures. Within Unisphere there are several support options, including Unisphere Online Help and the Support page, where FAQs, videos, white papers, chat sessions and more are available.

Provisioning Storage

EMC Unity offers both block and file provisioning in the same enclosure. Disk drives are provisioned into pools that can host both block and file data. Connectivity is offered for both block and file access: block over iSCSI and Fibre Channel, and file over NAS protocols. You can provision LUNs, Consistency Groups, Thin Clones, VMware datastores (VMFS), and VMware Virtual Volumes.

Fast VP

FAST VP (Fully Automated Storage Tiering for Virtual Pools) is a very smart solution for dynamically matching storage placement with changes in the frequency of data access. FAST VP segregates disk drives into three tiers:

Extreme Performance Tier – SSD

Performance tier – SAS

Capacity Tier – NL-SAS

FAST VP Policies – FAST VP is an automated feature, but it provides controls to set up user-defined tiering policies to ensure the best performance for various environments. FAST VP uses an algorithm to make data relocation decisions based on the activity level of each slice. The available policies are:

Highest Available Tier

Auto-Tier

Start High then Auto-Tier

Lowest Available Tier

No Data Movement
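The policies above can be illustrated with a small sketch. This is a simplified model of how a tiering policy might map a slice's activity level to a tier; it is not EMC's actual relocation algorithm, and the thresholds are assumptions for illustration only.

```python
# Illustrative model of FAST VP tiering policies. Tier and policy names
# follow the article; the relocation logic and thresholds are assumptions.

TIERS = ["Extreme Performance (SSD)", "Performance (SAS)", "Capacity (NL-SAS)"]

def place_slice(policy, activity_rank, current_tier="Performance (SAS)"):
    """Pick a tier for a slice. activity_rank: 0.0 (cold) .. 1.0 (hot)."""
    if policy == "No Data Movement":
        return current_tier            # slice stays where it is
    if policy == "Highest Available Tier":
        return TIERS[0]
    if policy == "Lowest Available Tier":
        return TIERS[-1]
    # "Auto-Tier" and "Start High then Auto-Tier" relocate by activity level
    if activity_rank > 0.66:
        return TIERS[0]
    if activity_rank > 0.33:
        return TIERS[1]
    return TIERS[2]

print(place_slice("Auto-Tier", 0.9))   # hot slice lands on SSD
print(place_slice("Auto-Tier", 0.1))   # cold slice lands on NL-SAS
```

The point of the sketch is simply that only the auto-tiering policies consult activity levels; the other three pin data regardless of how hot it is.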

Cloud Tiering Appliance (CTA)

If you are an organisation with a hybrid cloud and you would like to move data from on-premises storage to Azure or AWS S3, then the Cloud Tiering Appliance (CTA) is the best solution for you: it moves data to the cloud based on user-configured policies. The reverse is also true, meaning you can return your data to on-premises storage using the same appliance.

Why do you need this appliance? If you run out of storage or need to free up space, you can do so on the fly without capital expenditure. This ability optimises primary storage usage, dramatically improves storage efficiency, shortens the time required to back up data, and reduces the overall TCO of primary storage. It also reduces your data centre footprint. You can move both file and block data to Azure or AWS S3 using CTA.
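A user-configured tiering policy of the kind CTA applies typically boils down to an age test. The sketch below is a hypothetical policy check, not CTA's actual implementation; the field names and the 90-day threshold are assumptions for illustration.

```python
# Hypothetical CTA-style policy: files not accessed for more than
# min_age_days are candidates for tiering to cloud object storage.
from datetime import datetime, timedelta

def should_tier(last_access, min_age_days=90, now=None):
    """Return True if the file is cold enough to move to the cloud."""
    now = now or datetime.now()
    return (now - last_access) > timedelta(days=min_age_days)

now = datetime(2017, 1, 1)
print(should_tier(datetime(2016, 1, 1), now=now))    # True: cold, tier out
print(should_tier(datetime(2016, 12, 20), now=now))  # False: still active
```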

Your priority is your workload. You must protect workloads and simplify their management. AppSync empowers you to satisfy copy demand for data repurposing, operational recovery and disaster recovery.

AppSync simplifies, orchestrates, and automates the process of generating and consuming copies of production data. You can integrate AppSync with Oracle, Microsoft SQL Server, and Microsoft Exchange for application-consistent copy management. AppSync provides a single user interface and delivers VM-consistent copies of datastores and individual VM recovery for VMware environments.

Storage Analytics provides dashboard-based visual tools with deep visibility into EMC infrastructure. Actionable capacity and performance analysis helps you troubleshoot, identify, and act on issues fast.

Encryption

EMC Unity lets you encrypt user data as it is written to the back-end drives and decrypt it as it is read back. Because encryption and decryption are handled by dedicated hardware on the SAS interface, there is minimal performance impact on the Unity system. The system also supports external key management through the Key Management Interoperability Protocol (KMIP).

This article provides a tour of the configuration steps required to integrate an EMC Data Domain system with Veeam Availability Suite 9, as well as the benefits of using EMC DD Boost with a backup application.

Data Domain Boost (DD Boost) software provides advanced integration with backup and enterprise applications for increased performance and ease of use. DD Boost distributes parts of the deduplication process to the backup server or application clients, enabling client-side deduplication for faster, more efficient backup and recovery. All Data Domain systems can be configured as storage destinations for leading backup and archiving applications using NFS, CIFS, Boost, or VTL protocols.

The following applications work with a Data Domain system using the DD Boost interface: EMC Avamar, EMC NetWorker, Oracle RMAN, Quest vRanger, Symantec Veritas NetBackup (NBU), Veeam and Backup Exec. In this example, we will be using Veeam Availability Suite version 9.

Data Domain Systems for Service Provider

Data Domain Secure Multitenancy (SMT) is the simultaneous hosting by a service provider of more than one consumer (tenant) or workload (applications, Exchange, standard VMs, structured data, unstructured data, Citrix VMs).

SMT provides the ability to securely isolate many users and workloads in a shared infrastructure, so that the activities of one Tenant are not apparent or visible to the other Tenants. A Tenant is a consumer (business unit, department, or customer) who maintains a persistent presence in a hosted environment.

Basic Configuration requirements are:

Enable SMT in the DD System

Role Based Access Control in DD Systems

Tenant Self-Service in the DD Systems

A Tenant is created on the DD Management Center and/or DD system.

A Tenant Unit is created on a DD system for the Tenant.

One or more MTrees are created to meet the storage requirements for the Tenant’s various types of backups.

The newly created MTrees are added to the Tenant Unit.

Backup applications are configured to send each backup to its configured Tenant Unit MTree.

The Data Domain system advertises itself to the backup server over one or more paths, physically or virtually connected. The design of the entire system depends on Data Domain sizing: how you connect the Data Domain to the backup server(s), how many backup jobs will run, backup size, deduplication, data retention, and the frequency of data restores. A typical backup solution should include the following environment:

Backup server with 2 initiator HBA ports (A and B)

Data Domain System has 2 FC target endpoints (C and D)

Fibre Channel Fabric zoning is configured such that both initiator HBA ports can access both FC target endpoints

Data Domain system is configured with a SCSI target access group containing:

Both FC target endpoints on the Data Domain System

Dual fabric for failover and availability

Multiple physical and logical Ethernet connections for availability and failover

Examples of Sizing

To size the configuration, calculate the maximum simultaneous connections to the Data Domain Fibre Channel (DFC) system from all backup servers. The DFC device count (D) is the number of devices to be advertised to the initiators of the backup server(s). Let's say we have one backup server and a single Data Domain system, and the backup server runs 100 backup jobs.

DFC device count: D = (2 × S) / 128, rounded up

J = backup jobs = 1 backup server × 100 backup jobs = 100

C = number of DD systems = 1

S = J × C = 100 × 1 = 100

D = (2 × 100) / 128 = 1.56, rounded up to 2

Therefore, all DFC groups on the Data Domain system must be configured with 2 devices.
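The sizing arithmetic above can be wrapped in a small calculator. This is just the article's formula restated in code; the function name and parameters are my own.

```python
# DFC device-count sizing: D = ceil(2 * S / 128), where S is the number of
# simultaneous connections (backup servers x jobs per server x DD systems).
import math

def dfc_device_count(backup_servers, jobs_per_server, dd_systems):
    s = backup_servers * jobs_per_server * dd_systems
    return math.ceil(2 * s / 128)

# Worked example from the article: 1 backup server, 100 jobs, 1 DD system
print(dfc_device_count(1, 100, 1))  # 2 * 100 / 128 = 1.56 -> rounded up to 2
```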

Step 1: Preparing the DD System

Step 2: Managing system licenses

Select Administration > Licenses, then click Add Licenses.

On the License window, type or paste the license keys. Type each key on its own line, or separate the keys with a space or comma (DD System Manager automatically places each key on a new line).

Click Add. The added licenses display in the Added license list.

OR

In System Manager, select Protocols > DD Boost > Settings. If the status indicates that DD Boost is not licensed, click Add License and enter a valid license in the Add License Key dialog box.

Step 3: Setting up the CIFS protocol

In the DD System Manager navigation pane, click Protocols > CIFS.

In the CIFS Status area, click Enable.

Step 4: Removing anonymous logon

Select Protocols > CIFS > Configuration.

In the Options area, click Configure Options.

To restrict anonymous connections, click the checkbox of the Enable option in the Restrict Anonymous Connections area.

In the Log Level area, click the drop-down list and select level 1.

In the Server Signing area, select Enabled to enable server signing.

Step 5: Specifying DD Boost user names

The following user will be used to connect to DD Boost from the backup software.

Select Protocols > DD Boost.

Select Add, above the Users with DD Boost Access list.

The Add User dialog appears. To select an existing user, select the user name in the drop-down list. EMC recommends selecting a user name with management role privileges set to none.

To create and select a new user, select Create a new Local User, enter the password twice in the appropriate fields, and click Add.

Step 6: Enabling DD Boost

Select Protocols > DD Boost > Settings.

Click Enable in the DD Boost Status area.

Select an existing user name from the menu then complete the wizard.

Step 7: Creating a storage unit

Select Protocols > DD Boost > Storage Units.

Click Create. The Create Storage Unit dialog box is displayed.

Enter the storage unit name in the Name box, e.g. DailyRepository1.

Select an existing username that will have access to this storage unit. EMC recommends that you select a username with management role privileges set to none. The user must be configured in the backup application to connect to the Data Domain system.

To set storage space restrictions that prevent a storage unit from consuming excess space, enter a soft limit quota, a hard limit quota, or both.
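The soft/hard quota distinction can be summed up in a few lines. This is a conceptual sketch of the usual semantics (soft limit alerts, hard limit blocks writes), not Data Domain's internal logic; the function and thresholds are my own for illustration.

```python
# Conceptual soft/hard quota behaviour for a storage unit: crossing the soft
# limit raises an alert while writes continue; crossing the hard limit
# rejects further writes. Threshold semantics are an assumption.

def check_quota(used_gib, soft_gib=None, hard_gib=None):
    if hard_gib is not None and used_gib >= hard_gib:
        return "reject"   # hard limit reached: no further writes
    if soft_gib is not None and used_gib >= soft_gib:
        return "alert"    # soft limit reached: warn, writes continue
    return "ok"

print(check_quota(500, soft_gib=800, hard_gib=1000))   # ok
print(check_quota(850, soft_gib=800, hard_gib=1000))   # alert
print(check_quota(1000, soft_gib=800, hard_gib=1000))  # reject
```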

Generate an advanced certificate from Active Directory Certificate Services and install it into Data Domain DD Boost. You must install the same certificate into the backup servers so that the Data Domain system and its client (the backup server) can talk to each other over an encrypted channel.

Start DD System Manager on the system to which you want to add a host certificate.

Select Protocols > DD Boost > More Tasks > Manage Certificates….

In the Host Certificate area, click Add.

To add a host certificate enclosed in a .p12 file, select I want to upload the certificate as a .p12 file, and type the password in the Password box.

Click Browse and select the host certificate file to upload to the system.

Click Add.

To add a host certificate enclosed in a .pem file, select I want to upload the public key as a .pem file and use a generated private key, then click Browse and select the host certificate file to upload to the system.

Install the DD Boost API/plug-in (if necessary, based on the application).

Step 10: Configuring storage for DD Extended Retention (optional)

Before you proceed with Extended Retention, you must add the required license on the DD system.

Select Hardware > Storage tab.

In the Overview tab, select Configure Storage. In the Configure Storage tab, select the storage to be added from the Available Storage list.

Select the appropriate tier configuration (Active or Retention) from the menu.

Select the checkbox for the Shelf to be added.

Click the Add to Tier button. Click OK to add the storage.

Step 11: Configuring a Veeam backup repository

To create an EMC Data Domain Boost-enabled backup repository, navigate to the Backup Infrastructure section of the user interface, select Backup Repositories, and right-click to select Add Backup Repository.

The next step is to select the repository type: De-duplicating storage appliance. Type the name of the DD system, choose the Fibre Channel or Ethernet option, add credentials to connect to the DD system, and select the gateway used to reach it. To connect the Veeam backup server to the DD system over Fibre Channel, you must place the DD system and the Veeam backup server in the same SAN zone, and you must also enable FC on the DD system. To connect over Ethernet, the Veeam backup server and the DD system must be in the same VLAN; for a multi-VLAN setup you must allow unrestricted communication between the VLANs.

On the next screen, select the storage unit of the DD system to be used by the Veeam server as the repository, and leave the concurrent connections at the default.

Active full – the financial and health sectors prefer to keep a monthly full backup of data and retain it for a certain period for corporate compliance, satisfying external auditors' requirements to keep data off-site for a period of time.

Synthetic full – a standard practice is to use synthetic fulls at all times, reducing storage cost and the recovery time objective for any organisation.

For most environments, Veeam recommends synthetic full backups when leveraging EMC Data Domain Boost. This reduces stress on the primary storage for the vSphere and Hyper-V VMs, and Boost-enabled synthesising is very fast.

For a Backup Copy job using GFS retention (Monthly, Weekly, Quarterly and/or Annual restore points), the gateway server must be closest to the Data Domain server, since the Backup Copy job frequently involves an offsite transfer. When the Data Domain server is designated in the repository setup, ensure that consideration is given to the gateway server if it is being used off site.

The backup job timeout value must be higher than 30 minutes so that the job can be retried if it fails for any reason.

DD System Options:

A virtual synthetic full backup is the combination of the last full (synthetic or full) backup and all subsequent incremental backups. Virtual synthetics are enabled by default.

Synthetic full backups are faster when Data Domain Boost is enabled for a repository.

DD Boost dramatically reduces backup transformation time compared with running the same transformation without DD Boost.

Once the first job has placed the bulk of the blocks of the vSphere or Hyper-V VM on the DD Boost storage unit, subsequent jobs only need to transfer metadata and any changed blocks. This can be a significant improvement over the active full backup process when a fast source storage resource is in place.
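The idea of a virtual synthetic full can be shown with a tiny model: the new full is synthesised from the last full plus the subsequent incrementals, without re-reading the VM. Backups are modelled here as dicts of block data; this is illustration only, not DD OS internals.

```python
# Minimal model of a virtual synthetic full: merge the last full backup with
# every subsequent incremental, newest changes winning.

def synthesize_full(last_full, incrementals):
    full = dict(last_full)
    for inc in incrementals:   # apply increments oldest -> newest
        full.update(inc)       # changed blocks overwrite older versions
    return full

last_full = {1: "a0", 2: "b0", 3: "c0"}
incs = [{2: "b1"}, {3: "c2", 4: "d2"}]
print(synthesize_full(last_full, incs))  # {1: 'a0', 2: 'b1', 3: 'c2', 4: 'd2'}
```

On a deduplicating target the merge is mostly pointer manipulation rather than data movement, which is why Boost-enabled synthesis is fast.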

Taking VMware snapshots and Hyper-V checkpoints can impose a serious workload on VM performance, and it can take considerable effort by sysadmins to overcome this technical challenge and meet the required service level agreement. Most Veeam users run their backup and replication jobs after hours to limit the impact on the production environment, but this can't be your only backup solution. What if the storage itself goes down, or gets corrupted? Even with storage-based replication, you need to take your data out of the single fault domain. This is why many customers prefer to additionally make true backups stored on different storage. Never store production data and backups on the same storage.

Source: Veeam

Now you can take advantage of storage snapshots. Veeam has worked with storage vendors such as EMC and NetApp to integrate with production storage, leveraging storage snapshot functionality to reduce the impact of snapshot/checkpoint removal during backup and replication.

Supported Storage

EMC VNX/VNXe

NetApp FAS

NetApp FlexArray (V-Series)

NetApp Data ONTAP Edge VSA

HP 3PAR StoreServ

HP StoreVirtual

HP StoreVirtual VSA

IBM N series

Unsupported Storage

Dell Compellent

NOTE: My own experience with HP StoreVirtual and HP 3PAR has been awful. I had to remove HP StoreVirtual from a production store and introduce other Fibre Channel storage to cope with the workload. Even though Veeam has tested its snapshot mechanism with HP, I would recommend avoiding HP StoreVirtual if you have a high-IO workload.

Benefits

Veeam suggests that you can achieve lower RPOs and RTOs with Backup from Storage Snapshots and Veeam Explorer for Storage Snapshots.

Veeam and EMC together allow you to:

Minimize impact on production VMs

Rapidly create backups from EMC VNX or VNXe storage snapshots up to 20 times faster than the competition

Easily recover individual items in two minutes or less, without staging or intermediate steps

As a result of integrating Veeam with EMC, you can back up 20 times faster and restore faster using Veeam Explorer. Hence users can achieve much lower RPOs (recovery point objectives) and RTOs (recovery time objectives) with minimal impact on production VMs.

How it works

Veeam Backup & Replication works with EMC and NetApp storage, along with VMware, to create backups and replicas from storage snapshots in the following way.

Source: Veeam

The backup and replication job:

Analyzes which VMs in the job have disks on supported storage.

Triggers a vSphere snapshot for all VMs located on the same storage volume. (As a part of a vSphere snapshot, Veeam’s application-aware processing of each VM is performed normally.)

Triggers a snapshot of said storage volume once all VM snapshots have been created.

Retrieves the CBT information for the VM snapshots created in step 2.

Immediately triggers the removal of the vSphere snapshots on the production VMs.

Mounts the storage snapshot to one of the backup proxies connected to the storage fabric.

Reads new and changed virtual disk data blocks directly from the storage snapshot and transports them to the backup repository or replica VM.

Triggers the removal of the storage snapshot once all VMs have been backed up.
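The steps above can be sketched as an orchestration outline. The step actions here are stubs that simply record what would happen in order; all names are my own and none of this is Veeam's actual code.

```python
# Outline of the backup-from-storage-snapshot job: steps 1-8 from the list
# above, with each action recorded to a log instead of executed for real.

def backup_from_storage_snapshot(vms, volume):
    log = []
    eligible = [vm for vm in vms if vm["volume"] == volume]     # step 1
    for vm in eligible:
        log.append(f"vsphere snapshot: {vm['name']}")           # step 2
    log.append(f"storage snapshot: {volume}")                   # step 3
    for vm in eligible:
        log.append(f"read CBT: {vm['name']}")                   # step 4
        log.append(f"remove vsphere snapshot: {vm['name']}")    # step 5
    log.append(f"mount {volume} snapshot on proxy")             # step 6
    for vm in eligible:
        log.append(f"transport changed blocks: {vm['name']}")   # step 7
    log.append(f"remove storage snapshot: {volume}")            # step 8
    return log

for line in backup_from_storage_snapshot(
        [{"name": "vm1", "volume": "LUN1"}], "LUN1"):
    print(line)
```

Note the ordering that makes the technique work: the vSphere snapshots are removed (step 5) as soon as the storage snapshot exists, long before the data is actually read in step 7.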

VMs run from snapshots for the shortest possible time (subject to the storage array; EMC works better), while jobs obtain data from the VM snapshot files preserved in the storage snapshot. As a result, VM snapshots do not get a chance to grow large and can be committed very quickly, without overloading production storage with the extended merge procedure that occurs with classic techniques for backing up from VM snapshots.

Integration with EMC storage will bring great benefit to customers who want to take advantage of their storage arrays. Veeam Availability Suite v9 provides the chance to reduce IO on your storage array and bring your SLAs under control.

For most SMB customers, the nodes of the cluster that reside at their primary data centre provide access to the clustered service or application, with failover occurring only between clustered nodes. For an enterprise customer, however, failure of a business-critical application is not an option. In this case, disaster recovery and high availability are bound together, so that when both/all nodes at the primary site are lost, the nodes at the secondary site begin providing service automatically, or with minimal intervention.

The maximum availability of any service or application depends on how you design the platform that hosts it. It is important to follow best practices in compute, network and storage infrastructure to maximise uptime and maintain SLAs.

The following diagram shows a multi-site failover cluster that uses four nodes and supports a clustered service or application.

The following rack diagram shows identical compute, storage and networking infrastructure in both sites.

Physical Infrastructure

Primary and Secondary sites are connected via 2x10Gbps dark fibre

Storage-vendor-specific replication software, such as EMC RecoverPoint

Storage must have redundant storage processor

There must be redundant Switches for networking and storage

Each server must be connected to redundant switches with redundant NIC for each purpose

Since I am talking about a highly available and redundant systems design, this sort of design must consist of replicated or clustered storage presented to the multi-site Hyper-V cluster nodes. Replication of data between sites is very important in a multi-site cluster, and is accomplished in different ways by different hardware vendors. You will achieve higher performance through hardware or block-level replication than through software replication. You should contact your storage vendor to come up with a solution that provides replicated or clustered storage.

A multi-site cluster running Windows Server 2008 can contain nodes that are in different subnets; however, as a best practice, you should configure the Hyper-V cluster in the same subnet. Your applications and services can reside in separate subnets. To avoid any conflict, you should use a dark fibre connection or an MPLS network between the sites that allows VLANs.

Quorum selection: since you will be configuring a Node and File Share Majority cluster, you will have the option to place the quorum files on a shared folder. Where do you place this shared folder? Since we are talking about a fully redundant and highly available Hyper-V cluster, we have several options for placing the quorum shared folder.
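The quorum arithmetic behind Node and File Share Majority is worth making explicit. In the four-node design above, each node has one vote and the file share witness has one, giving five votes in total; the cluster keeps running while more than half of the votes are online. The sketch below restates that rule; the vote counts match the four-node example, and the function itself is my own illustration.

```python
# Node and File Share Majority for the four-node multi-site cluster above:
# 4 node votes + 1 file share witness vote = 5 total. The cluster retains
# quorum while a strict majority of votes is online.

def has_quorum(online_votes, total_votes=5):
    return online_votes > total_votes // 2

# Primary site (2 nodes) lost; witness + 2 secondary nodes online: 3 of 5
print(has_quorum(3))  # True: the secondary site keeps the cluster running
# Witness and one whole site lost: 2 of 5
print(has_quorum(2))  # False: the cluster loses quorum
```

This is why the placement of the witness share matters: if it sits in the primary site, losing that site takes three of the five votes with it.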