Sharing Thoughts Around Cloud Computing Solutions.


Over the last few weeks, I spent a good amount of time testing Azure Site Recovery (ASR) capabilities for performing migrations. A few things came to my attention during testing. Although Microsoft has done its best to document the entire process, we, as human beings, tend to miss critical parts, and some things are simply not mentioned clearly.

I am going to jot down the points that will help in planning, testing, and using ASR for migration.

The configuration server should match the recommended sizing. You can run it with a slightly smaller configuration, but be cognizant of the fact that you may not get good performance.

Save the Recovery Vault credentials in a secure, known place on the configuration server.

If you use a configuration below the recommended sizing, the compatibility assessment run while installing the Configuration Server application (Unified Agent) will report warnings and errors. Warnings can be reviewed and ignored.

You may (most probably will) require a machine restart. Please do that.

The configuration server can be set up either with or without a proxy, so security can be handled according to enterprise rules.

Very important: straight after the Microsoft Azure Site Recovery Configuration Server installation wizard completes, you should do two things:

Add the account for the source servers you want to migrate. The key here is that it should be either a domain admin account with the right to install applications on the source server, or a local admin account of the target server.

Thus, if you have multiple servers in the environment and they are not domain-joined, you may end up adding many accounts. Use friendly names that can be identified easily later.

After adding the accounts, go to the next tab, 'Vault Registration', and browse to the same vault registration credentials that were used during installation. This might sound confusing and may not be found in the documentation, but during my testing I found that it has an impact. Even if you skip this step, your configuration server will still be visible in ASR, but discovery and the Mobility Service push install may not work as expected. It is better to take this precaution than to rectify the problem later, which is of course difficult.

If you are migrating physical servers, you need to add them manually by IP address. The configuration server does not do auto-discovery, unlike VMware or Hyper-V migration.

The source servers should have the expected firewall rules enabled. Follow the Microsoft documentation for details; basically, WMI and File/Print Sharing must be enabled on the source protected VM. Failing to do so, you will be able neither to push-install the Mobility Service nor to migrate the server.
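Before attempting the push install, a quick reachability check from the configuration server can save troubleshooting time. This is a hedged sketch, not part of ASR itself; the port list (135 for WMI/RPC, 445 for SMB file sharing) is an assumption you should verify against Microsoft's current port requirements:

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed ports for the Mobility Service push install; confirm against
# the Microsoft documentation before relying on this list.
PORTS = {135: "WMI/RPC endpoint mapper", 445: "SMB (File/Print Sharing)"}

def check_source_server(host):
    """Print the open/blocked status of each required port on a source server."""
    for port, description in PORTS.items():
        status = "open" if check_port(host, port) else "BLOCKED"
        print(f"{host}:{port} ({description}) -> {status}")
```

Running `check_source_server("10.0.0.12")` against each source server before kicking off the push install gives an early warning if a firewall rule is missing.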

Since ASR gives you the option to perform a test failover as a pre-check before migration, always use the "Test Failover" option. This helps in checking whether the application on the target destination server will perform the way it behaves on the source server.

The recommendation is to have a test network subnet and run the test failover on it. Keep production separate.

If the application you are migrating has dependencies on a different environment, another source server, or an intranet application, it is recommended to have your network layer ready before initiating the migration process. Set up hybrid networking using either Site-to-Site VPN or ExpressRoute, and create separate virtual network subnets for testing and production. Testing should be performed only on the testing subnet.

That is all for today. I will be collating more information about scenarios we can cover with ASR.

Stop, look, then cross: simple rules even for crossing roads. Then why do we simply decide to go to the cloud and start working without even going through basic prerequisite checks?

Cloud is attractive, but not every application, architecture, or server is the right fit for a cloud-based environment (remember, IaaS and PaaS work differently). Every cloud vendor has a slightly different approach to hybrid connectivity and hosting platform offerings. An enterprise application running on Windows Server 2003 may not be fit for lift-and-shift (simply picking it up and moving it to a cloud VM). You need to check the usage pattern and the needs of the application first, then explore its dependencies on other applications and processes. It helps if you have enterprise architecture documentation (which is generally rare, though many claim to have it), or you can use an automated tool (which is generally preferred, as it is backed by real facts and brings much more value-added information to the table).

Server migration is not as straightforward as traditional application migrations, such as Office 365 migrations from the respective workloads running in different environments. In my opinion, planning plays a more vital role than the migration itself. For migration, every vendor has offerings or is working in that direction. Some examples: Azure has many options, like migration using PowerShell (converting VHDX to VHD), the MVMC tool for VMDK, ASR, and even Backup. AWS recently launched its migration service, and Google is tying up with migration vendors. But the bigger question remains the same: where to start?

Similarly, database migration poses another threat of a failed move. Azure has interesting offerings for DB as a Service, namely Azure SQL Database and DocumentDB for NoSQL, which are not only cost-effective but also managed DB engines (no more worries about maintaining uptime, patching, or availability); or you can run the latest DB edition on an Azure VM. But the question remains the same: what is the usage of the current DB, what is the need, what is the architecture, what is the application's usage pattern for this DB, can I really get all the features if I move to a managed DB service, or is my current DB good enough to move to the latest DB edition running on an Azure VM?

If an enterprise does not do proper due diligence, expect it to fail badly.

Use an automated tool rather than a manual approach. As we all know, a manual approach is prone to human error and based on opinions, while an automated approach is based on facts and nearly error-free. A few of the solutions available in the market:

Azure VM Readiness Assessment Tool: it analyzes your current on-premises physical or virtual environment and provides design-level recommendations if any changes are required. Step-by-step guidance on using it is available here.

MAP Toolkit for Windows Azure Platform: this has been Microsoft's flagship assessment product for a long time, be it core IO workloads, server consolidation projects, DB migration, Office 365, or even a move to Azure. It is constantly enhanced, so make sure you always use the latest version. Follow this guide to perform the assessment. During installation or assessment, choose the environment you want to work on.

Database Assessment: the database is altogether a different animal in the enterprise IT environment, and the most important one. It always needs separate planning, it is the most complex part, and there are so many options in the market from different vendors (managed and unmanaged DB engines). The good part is that Microsoft has an interesting offering, the Database Migration Assistant (DMA v3.0). What I like about this tool is that it assesses the current database against a target environment, which can be either SQL Server (latest edition) running on a VM or Azure SQL Database. The icing on the cake is that it can even migrate the database to the destination environment. What else do you need, when it can do the same job for source environments running on Oracle, IBM DB2, MySQL, etc.? Explore this quick video.

Web Application Migration: Azure App Service is a very interesting offering from Microsoft Azure for web application hosting. It takes away the availability, patching, and management work. But what if my application and DB run on-premises, on an outdated (EOL) version? How do I move ahead? No worries: just use this tool as per the guidance. It will not only assess the environment and give you recommendations, but also help you transfer it to Azure App Service with the required DB engine, all seamlessly. Best part: if your environment runs on Linux, it can handle that too. Follow this guide and learn about it.

Apart from all those, there are also third-party offerings such as BitTitan's HealthCheck for Azure, which is an exciting, granular, automated assessment. It addresses ROI/TCO and migration in a single go without much manual work, which I believe is more important than the migration itself. Migration can be done either way, but where to start from is the most critical question.

In the next post I will talk about a few of those assessment tools in detail, as well as an easy way of moving to the cloud.

Keep in mind that knowing your environment using such tools will only help you plan a successful transition to the cloud. As a service provider organization, it should be a must-do activity before you claim any project. Don't just rely on discussions and build the scope of work on those. Your estimation should be based on facts; otherwise it will be a bad experience for the customer as well as a loss-making deal for you.

As a service provider, if you want to be a true cloud service provider and make money from happy customers, following P2M2 is essential.

Cloud is getting traction. Customers want to get rid of existing on-premises servers, as they see cloud as the way forward to manage uptime, upgrades, and elasticity, and to reduce cost. But hold on: uptime. Yes, there are SLAs for uptime, but does that mean the customer has nothing to do? Every OEM and every one of their services carries its own uptime commitment, backed by a respective SLA. Designing a highly available infrastructure architecture depends entirely on the customer's architects.

Reading through the SLA documents beforehand is highly recommended. Specifically, Microsoft Azure has made a great effort to make it quite easy for customers to review the SLA for every service; all Azure component SLAs are available here. Then, doing simple mathematics can help set the right expectations, as below.

Azure Virtual Machines give 99.9% uptime (VM connectivity) if you have Premium Storage as the data disk; this becomes 99.95% if you have an Availability Set configured, for any scenario, i.e., any type of disk and any type of OS. Similar SLAs are available for the other components, so spend some time and read through them, as there are lots of ifs and buts.

What are the '9's? Two, three, or more nines matter to us only if we know how many hours or minutes systems would not be available. Below is a simple view of the hours and minutes available/unavailable across a year, month, week, and day.
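The arithmetic behind those numbers is straightforward. As a simple illustration (not an official SLA calculator), the permitted downtime for any uptime percentage can be computed like this:

```python
# Hours in each period; "month" uses the common 730-hour convention
# (365 * 24 / 12), an assumption that vendors may define differently.
PERIOD_HOURS = {"year": 365 * 24, "month": 730, "week": 7 * 24, "day": 24}

def allowed_downtime_minutes(uptime_pct, period="year"):
    """Minutes of downtime permitted by an uptime percentage over a period."""
    return PERIOD_HOURS[period] * 60 * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.95, 99.99):
    minutes = allowed_downtime_minutes(sla, "month")
    print(f"{sla}% uptime -> {minutes:.1f} minutes of downtime per month")
```

For example, 99.9% works out to roughly 43.8 minutes of downtime per month, while 99.95% halves that to about 21.9 minutes: those extra decimals matter.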

The simple example below may clarify things further with respect to infrastructure design on Azure. Leveraging concepts such as Availability Sets, regional replication using hybrid connectivity, DB replication mechanisms, and solutions like an external load balancer (with rules configured) can help reduce the risk of downtime as much as possible. This is a simple representation of an architecture design on Azure; it does not cover every detail, which is not the objective of this blog post.

The example above consists of a simple application with tiers for Web, DB, and Authentication. The infrastructure is designed across two regions: (1) Region A and (2) Region B.

Scenario 1: If we use a single-region HA approach, then we carry the following amount of risk:

Scenario 2: If we use a single-region HA approach backed by a geo-redundant site (non-HA), then we carry the following amount of risk:

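The risk in each scenario can be estimated with basic probability: tiers that all must be up multiply their availabilities together, while redundant regional copies multiply their failure probabilities. A hedged sketch with made-up SLA figures for illustration only:

```python
def series(*availabilities):
    """Availability of chained tiers that all must be up (e.g. web -> auth -> DB)."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities):
    """Availability when any one redundant copy being up is enough."""
    failure = 1.0
    for a in availabilities:
        failure *= (1 - a)
    return 1 - failure

# Scenario 1: single region, three tiers in series (illustrative 99.95% each)
single_region = series(0.9995, 0.9995, 0.9995)

# Scenario 2: the same stack duplicated in a second region
two_regions = parallel(single_region, single_region)

print(f"Single region: {single_region:.6f}")
print(f"Two regions:   {two_regions:.8f}")
```

Note how chaining tiers always lowers the combined availability below the weakest component, while adding a second region pushes it sharply upward; that is the whole argument for multi-region design in one calculation.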

Therefore, if you are really thinking of moving a mission-critical application where a single minute of lost transactions has a million-dollar impact, think of a highly available architecture design that does not depend on a single geography but leverages every possible component of a true hyper-scale cloud like Azure.

Today, I am starting this blog. I will be taking it in a specific direction, which you will see over the coming few months. It may not be hard-core technical deep dives [although I will try to cover as much as I can :-)], but I will certainly try to bridge the gap between technology innovation and business needs in the easiest possible way. For example: most developers working on Azure are aware of the deadly combination of Visual Studio and Azure App Service. They are familiar with how to code, push to VS Team Services or Git or wherever, and keep working on the go (oh yes, this sounds like DevOps). But I will discuss what we miss out on, such as: if I am going to leverage Azure App Service, which small things play a big part and add value to the business?

Seeing the emergence of cloud computing in the recent past, the industry has reached a stage where, if companies don't adopt it, they probably can't survive, reduce cost, innovate, or compete against their competitors. But the biggest question remains: is this really a new trend? I would say these things have existed for a long time, maybe under different terminologies like hosters, colocation providers, etc. Subscription-based hosted services have existed as a business model since the birth of information technology. Sometimes they were offered by large corporations in the form of managed data centers; sometimes small and mid-size players offered a portion of such services for a monthly fee, for things like email or websites. And why not mention even telecom providers: they are a kind of service provider offering us connectivity, which is the result of applications in their data centers, backed by hardware and network channels spread across geographies. So what has changed now? Basically, today's cloud has made these services a "commodity".

Hosted and colocated servers were already there, so what has cloud given us? To me, what is most attractive is "variety", "geo-presence", and "providers". Elasticity, cost, and the rest are expected.

Variety played a big part in the success of cloud. Earlier we were stuck with Windows, Linux, or Unix, and on top of that, provisioning things on demand was not straightforward: hardware procurement –> cabling –> provision the OS –> provision settings and other things to make it available for the application –> then install the application, and finally GO LIVE. There was no model such as automated managed infrastructure, which is today's 'PaaS'. Today, with just a few clicks or scripts, you get your choice of server with the required settings on the same network and the other required apps installed. If you are familiar with Microsoft Desired State Configuration (DSC), it has taken things to the next level: we can not only define the sequence of infrastructure provisioning but also enable or disable specific settings as things get done.

Geo-presence is what makes the most sense for businesses. Today, small companies and start-ups run some of the biggest cloud infrastructures across the globe. Business continuity and disaster recovery have never been like this before. Cloud can help us go global in minutes, using global load balancers with different load-balancing rules, and reduce latency by simply configuring a CDN (Content Delivery Network) or caching on the go.

The market is getting crowded with vendors: AWS, Azure, GCE, SoftLayer, etc. Therefore, as prospective customers, we have a variety of options, and each vendor comes with its own unique proposition. Smarter companies are those that don't lock themselves into a single vendor; a smarter architect designs a vendor-agnostic architecture so it doesn't remain dependent on one provider, thus reducing risk. At the same time, you can choose a vendor based on your organization's strengths. For example: if we are a strong development organization with .NET skills, we can use Azure PaaS, focusing less on infrastructure and sysadmin work and relying more on PaaS.

This is the starting point. The next post will be around Azure and where to start from. Stay tuned.