For example, I’ve made loads of great friends, gotten to speak at a number of events, and even had the chance to become a Tech Field Day delegate, traveling to the US several times to visit a load of tech companies and startups, whilst learning from some seriously clever people. What I’ve put into the community I have easily received back tenfold, and I am massively grateful to be part of it.

Do it! Do it now!…

If you have the time to put into it, I highly recommend taking the opportunity to share your knowledge and hopefully become part of the community.

Here are a few examples from my entry this year, which will hopefully give you some ideas of the kinds of things you could do too!

Started a new podcast with some other VMUG members and vExperts around homelabbing and tech news (http://opentechcast.com)

Posted a blog a week (in the past I have aimed to do one a month and built it up over time)

The entry bar to becoming a vExpert is not massively high; you certainly don’t have to do all of the above, or even any of the above! That said, if you are not currently a vExpert, and you can achieve just a couple of these kinds of items, you could be well on the way to becoming one too!

DR in IT can mean many different things to different people. To a number of people I have spoken to in the past, it’s simply HA protection against the failure of a physical host (yikes!). To [most] others, it’s typically protection against the failure of a data centre. As we discovered this week, to AWS customers a DR plan can mean needing to protect yourself against a failure impacting an entire cloud region!

But how much is your business willing to pay for peace of mind?

When I say pay, I don’t just mean monetarily; I also mean in terms of technical flexibility and agility.

What are you protecting against?

What if you need to ensure that in a full region outage you will still have service? In the case of AWS, a great many customers are comfortable that the Availability Zone concept provides sufficient protection for their businesses without the need for inter-region replication, and this is perfectly valid in many cases. If you can live with the potential for a few hours’ downtime in the unlikely event of a full region outage, then the cost and complexity of extending beyond one region may be too much.
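For those who do decide the extra protection is worth the cost, here is a minimal sketch of one common building block of an AWS multi-region DR plan: S3 cross-region replication. The bucket names and role ARN are hypothetical placeholders, and this only builds the configuration dictionary; applying it (shown in the trailing comment) requires real credentials, existing versioned buckets in two regions, and a suitably permissioned IAM role.

```python
# Sketch: building an S3 cross-region replication configuration,
# one piece of a multi-region DR strategy on AWS.
# All ARNs and bucket names below are hypothetical placeholders.

def build_replication_config(role_arn: str, dest_bucket_arn: str) -> dict:
    """Return a configuration suitable for s3.put_bucket_replication().

    Versioning must already be enabled on both the source and
    destination buckets, and the role must grant S3 permission
    to replicate objects on your behalf.
    """
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "dr-replicate-all",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate every object
                "Destination": {"Bucket": dest_bucket_arn},
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::123456789012:role/s3-replication-role",
    "arn:aws:s3:::my-dr-bucket-eu-west-1",
)

# With real credentials and existing buckets, this would be applied via:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_replication(
#       Bucket="my-primary-bucket-us-east-1",
#       ReplicationConfiguration=config,
#   )
print(config["Rules"][0]["Status"])  # Enabled
```

Note that replication of this kind is asynchronous, so it protects against losing access to a region rather than guaranteeing zero data loss; that trade-off is exactly the kind of comfort-level decision discussed above.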

That said, as we saw from the failure of some AWS capabilities this week, if we take DR in the cloud to its most extreme, some organisations may wish to protect their business not only against a DC or region outage, but even against a global incident at a cloud provider!

This isn’t just technical protection either (for example against a software bug which hits multiple regions); what if a cloud provider goes under due to a financial issue? Even big businesses can disappear overnight (just ask anyone who used to work for Barings Bank, Enron, Lehman Brothers, or even 2e2!).

Ok, it’s true that the likelihood of your cloud provider going under is pretty teeny tiny, but just how paranoid are your board or investors?

Ultimate Cloud DR or Ultimate Paranoia?

For the ultimate in paranoia, some companies consider protecting themselves against the ultimate outage, by replicating between multiple clouds. In doing so, however, they must stick to using the lowest common denominator between clouds to avoid incompatibility, or indeed any potential for the dreaded “lock-in”.

At that point, they have lost the ability to take advantage of one of the key benefits of going to cloud: getting rid of the “undifferentiated heavy lifting”, as Simon Elisha always calls it. They end up less agile, less flexible, and potentially spend their time on things which fail to add value to the business.

What is best for YOUR business?

These are all the kinds of considerations which the person responsible for an organisation’s IT DR strategy needs to consider, and it is up to each business to individually decide where they draw the line in terms of comfort level vs budget vs “lock-in” and features.

I don’t think anyone has the right answer to this problem today, but perhaps one possible solution is this:

No cloud is going to be 100% perfect for every single workload, so why not use this fact to our advantage? Within reason, it is possible to spread workloads across two or more public clouds based on whichever is best suited to those individual workloads. Adopting a multi-cloud strategy which meets business objectives and technical dependencies, without going crazy on the complexity front, is a definite possibility in this day and age!

(Ok, perhaps even replicating a few data sources between them, for the uber-critical stuff, as a plan of last resort!)

The result is potentially a collection of smaller fault domains (aka blast radii!), making the business more resilient to significant outages from the major cloud players, as only some parts of their infrastructure and a subset of applications are then impacted, whilst still being able to take full advantage of the differentiating features of each of the key cloud platforms.

Of course, this is not going to work for everyone. Plenty of organisations struggle to find talent to build out capability internally on one cloud, never mind maintaining the broad range of skills required to utilise many clouds, but that’s where service providers can help, both in terms of expertise and support.

They simply take that level of management and consulting a little further up the stack, whilst enabling the business to get on with the more exciting and value added elements on top. Then it becomes the service provider’s issue to make sure they are fully staffed and certified on your clouds of choice.

*** Full Disclosure *** I work for a global service provider who does manage multiple public clouds, and I’m lucky enough to have a role where I get to design solutions across many types of infrastructure, so I am obviously a bit biased in this regard. That doesn’t make the approach any less valid! 🙂

The Tekhead Take

Whatever your thoughts on the approach above are, it’s key to understand what the requirements are for an individual organisation, and where their comfort levels lie.

An all-singing, all-dancing, multi-cloud, hybrid globule of agnostic cloudy goodness is probably a step too far for most organisations, but perhaps a failover physical host in another office isn’t quite enough either…

Study Materials

This is always the go-to document for almost any current industry certification, and should be used as your primary guide for resources and areas to study. In the case of the AWS Exam Blueprint, they actually direct you to specific white papers to review as well as the content areas to study.

As with the CSA and CDA courses, the quality of the production (especially as Ryan and co are a small startup) is excellent. Remember that as Ryan says, they are focussed on teaching you the knowledge to pass the exam, not teaching you everything in AWS. There is no substitute for labbing and working with AWS day to day to become an expert, but you can certainly pass the exam based on this course! This might then help you get your first AWS job and gain the experience you need to be a real Cloud Guru! 🙂

The course has around nine hours of content, but I would say it took me 15-20 hours in total between all of the lab work, coming up with my own scenarios to practise configuring different elements, completing the quizzes, and researching any areas where I got a quiz answer wrong or wasn’t sure of the reason for a specific answer. Ryan also speaks quite slowly and very clearly, so watching at 1.5x speed or above can help you get through the videos covering areas you already know well. Remember to slow it back down for new content, of course!

A number of colleagues and I completed both of the three-day architecting courses (standard and advanced) in a rather intense, but very informative, five-day week! This was an awesome course and really helped me gain breadth and depth of knowledge, but I would not say it was critical to passing the SysOps Administrator exam itself.

White Papers

The AWS white papers are a great source of information, though they can be a bit dry at times. I highly recommend using them to augment any particular areas you feel less confident in. See my AWS CSA guide for a list of which to read.

Other Articles and Resources – The AWS documentation site is an absolute goldmine of information, and most of the articles are well written and easy to consume. Significantly more so than some of the best known “kb” and documentation sites in the industry IMHO. The following is a list of some of the articles I dipped in and out of while researching for the exam as well as my AWS Tips and Gotchas blog series:

I say this about every single exam I have ever taken: lab it, lab it, lab it! It is a million times easier to answer a question based on something you have actually done yourself! Don’t just learn the theory; spend a bit of time doing it in practice and you will reap the benefits in both the exam and real life!

The information below covers my experience of the AWS Certified SysOps Administrator Associate (CSOA) exam from Amazon. Following this, I will post a list of my study materials, so keep checking back for updates or check out my Index of AWS Posts.

I have also posted a number of tips, tricks, and gotchas over the past few months, and I recently recorded a podcast with Scott Lowe on the subject of learning AWS. If you are new to AWS, I highly recommend you check it out!

AWS Certified SysOps Associate (CSOA) Exam Experience

Almost everything I read in the run-up to taking the AWS Certified SysOps Administrator exam suggested that it was going to be really hard and a step up from the Solution Architect exam. My personal experience when I took it in December 2016 was that it was really on a par with the SA exam, and the reputation was perhaps a tad overblown.

This could be for one of several reasons: I’ve been doing AWS designs for several months now at work; this is the third AWS exam I’ve taken; or the exam has become marginally easier now that they seem to have removed some of the older EC2 Classic questions and brought it a little more up to date. I suspect the last of these is the most likely reason, but with some benefit from the other two!

Based on my experience of the exams so far, I would definitely recommend approaching them in this order: Solution Architect first, to give you a thorough grounding in all of the products; then Developer, as it is fairly easy and simply broadens your knowledge; and finally SysOps.

As I have previously mentioned, AWS seem to structure their exams with some general questions across their portfolio, then specific technologies taking precedence in each. The SysOps exam seemed to me to be about two-thirds Solution Architect content again, with the remaining third having more of a focus on CloudWatch and CloudFormation. If you have already passed the SA exam, you should have no issues with that content, though the remaining questions I had were a touch trickier, as they were fairly in-depth and specific.

The exam itself is the same length as the other associate exams, at 80 minutes and 55 questions. Again, AWS (as is their way) do some odd things, like not giving you a passing grade requirement, but it’s generally safe to assume that if you score 70% or more in the Certified SysOps Administrator Associate exam, you will pass. The Kryterion exam environment is frankly pretty dated, but I already wrote about that in the CSA guide here, so I won’t repeat myself! Suffice to say, read the other article for a detailed overview.

Best of luck, and if you found this article useful, please leave a comment below! 🙂

Want to Learn More?

Part 2 of this article, the AWS Certified SysOps Administrator Associate exam study guide and materials can be found here: