
It’s occurred to me fairly recently that there’s sufficient confusion about what the CD in CICD stands for to warrant some simple explanation. I’m not even certain that people generally understand the CI part either. I’ve noticed on a few occasions that developers tend to say, “A CICD pipeline is an automated way of delivering code into production.” I feel this is often interpreted as, “you commit code over here in your repository, and it automagically pops up in production over there.”

“What on earth could possibly go wrong with that?” the system operations team might ask sarcastically, turning distinctly green then pale at the thought of developers having ultimate control over production releases. I’ve also noticed non-developers ask, “Is it Continuous Delivery, or Continuous Deployment?” Is there even a difference? It seems a number of people use Delivery and Deployment interchangeably without really understanding what each of them actually is. Isn’t this just typical developer double-speak for exactly the same thing?

To provide some historical context to this explanation, it’s useful to understand software development life cycles before Virtualisation and Cloud. Right up until the mid-2000s, software feature releases were typically very slow. Products might have gone four years before they got an update, and the savvy never installed a “dot-one” (.1) release, let alone a “dot-zero” (.0). Frequently these releases were buggy and unreliable. It often wasn’t until a “one-dot-two” (1.2) release that people started getting any confidence. Even then, that only ensured catastrophic bugs were eliminated. Other glitchy behaviours often still existed but didn’t contribute to a total loss of work.
“You do back up your work every half hour, right?” was the catch cry long before Cloud, autosave and versioning were as widespread as they are today.

A significant change that virtualisation helped bring about was the ability to more cheaply run a non-production staging environment: a place to test out new changes before releasing them into production. Cloud, via infrastructure as code, makes non-production environments even faster, easier and cheaper to provision for testing purposes. The key to an effective staging environment is that it’s as production-like as possible. This helps avoid embarrassing and costly “roll-backs” when code changes don’t behave in production as expected because there was too much variation between prod and non-prod.

Running in parallel to these infrastructure changes was the development and rapid adoption of software version control systems. In the bad old days, files were versioned either by commenting out old lines of code and introducing new lines, or by renaming old files on production servers and introducing new files in their place. It was less than ideal and alarmingly widespread. “When was the last backup done?” wasn’t something you wanted to hear in a development team. It often meant somebody had overwritten something they shouldn’t have.

Version control in the form of SVN, and later Git and other alternatives, allowed a new copy of a code file to be added to a repository while the old version was kept completely intact. What’s more, the developer could record in the commit message what had been changed.
This led to the practice of Continuous Integration (CI), where developers could collaborate more rapidly by sharing small code changes via the version control repository, and by doing so minimise the impact each code change had on others. Everyone was effectively working on the same code base, rather than having separate copies that diverged more and more widely the longer each developer worked on their private copy of the code.

This brings us then to Continuous Delivery, which is the automatic build and test of committed code changes with the aim of having production-ready features available for release. Getting to Continuous Delivery after the successful implementation of Continuous Integration is relatively straightforward using AWS services. By using CodePipeline to automate the test and build of code commits, most of the tooling required is readily available.
AWS services like Elastic Beanstalk make it incredibly simple to replicate production application stacks into non-production environments for testing. AWS OpsWorks and CloudFormation can greatly simplify the reproduction of more complex application stacks for production-like staging.

Many organisations get to Continuous Delivery and don’t adopt Continuous Deployment. They either use a manual authorisation step to deploy changes into production or a semi-automated approach to delivering code into production. Continuous Deployment, then, is the automatic deployment of production-ready code into production with no manual interventions. If changes fail the build and test processes, they are rejected and sent back to the development team for revision. If, however, all changes pass the tests, they are automatically deployed into production; this is Continuous Deployment.

The fundamental key to all this working well is small, frequent changes. The historic issue with large, complex changes over an extended period of time was that the root cause of any particular issue was extremely difficult to pin down. This made people reluctant to release changes unless absolutely necessary. The collaboration made possible by Continuous Integration ensures everyone is working on a single code base. This prevents the cumulative errors common when everyone is working in isolation on big changes.

So there is an important distinction between Continuous Delivery and Continuous Deployment. The latter can be arrived at incrementally: after successfully adopting CI, get continuous delivery and testing to a robust enough point that well-vetted, small feature changes can be continuously deployed into production.

Consegna have significant experience in helping organisations adopt CICD successfully. If you’d like to find out more information, email hello@consegna.cloud.

Experience is a hard teacher because you get the test first, and the lesson afterwards.

I’ve always felt the best lessons are the ones learnt yourself, but to be honest, sometimes I would be more than happy to learn some lessons from others first. I hope the following can help you before you embark on your lift-and-shift migration journey.

Beware the incumbent

“Ok, so he won’t shake my hand or even look me in the eye. Oh no, this is not a good sign.” These were my initial observations when I first met the representative of one of our clients’ Managed Service Provider (MSP). Little did I know how challenging, yet important, this relationship was to become.

This is how I saw it: all of a sudden, after years of giving your client pretty average service, the MSP sees you, this threat on their radar. Sometimes the current MSP is also getting mixed messages from the client. What’s wrong? Why the change? What does it mean for them?

I found it best to get the existing MSP on side early. If it’s an exit, then an exit strategy is needed between the client and the MSP. The best results happen when the MSP is engaged and ideally a Project Manager is put in place to assist the client with that exit strategy.

Most importantly, tell your MSP to heed the wise words of Stephen Orban. “Stop fighting gravity. The cloud is here, the benefits to your clients are transformational, and these companies need your help to take full advantage of what the cloud offers them. Eventually, if you don’t help them, they’ll find someone who will.”

Partner with your client

“Do cloud with your client, not to them.” Your client is going to have a certain way they work and are comfortable with. Your client will also have a number of Subject Matter Experts (SMEs), and in order to bring these SMEs on the journey too, having someone from your team on-site full time, paired up with an SME to learn from them, can be invaluable.

There will be things they know that you don’t. A lot, actually. I found it best to get your client involved and, more importantly, to get their input and buy-in. The outcome will be much better, as will your ability to overcome challenges when they come up.

Lay a good foundation

We spent a significant amount of time working with our client to understand what we were in for. We created an extensive list of every server (and there were hundreds) in the Managed Service Provider’s data centre and then put strategies in place to migrate groups of servers.

We also set up our own version of an AWS Landing Zone as a foundational building block so best practices were set up for account management and security in AWS.

It’s important to lay this foundation and do some good analysis up front. Things will change along the way but a good period of discovery at the start of a project is essential.

But, don’t over analyse!

Do you need a plan? Absolutely. There are a number of good reasons why you need a plan: it sets a guideline and aids communication within the team and outside it. But I think you can spend too much time planning and not enough time doing.

We started with a high-level plan covering groups of servers supporting different services for our client. We estimated some rough timelines and then got into it. We learnt a lot along the way and adapted our plan to show value to our client.

Pivot

Mike Tyson once said, “Everybody has a plan until they get punched in the mouth.”

When things go wrong you need to adapt and change. When we started migrating one particular set of servers out of the incumbent data centre, we discovered their network was slow and things ground to a halt during the migration. So, like being punched in the mouth, you take the hit and focus on a different approach. We did get back to those servers and got them into the cloud, but we didn’t let them derail our plans.

Challenge the status quo

When I started working with a client migration project recently, the team had just finished migrating one of the key databases up into AWS, but the backup process was failing, as the backup window was no longer large enough.

After digging a little deeper, it was found that the backup process itself was very slow and cumbersome, but it had been working (mostly) for years, so ‘why change, right?’! The solution we put in place was to switch to a more lightweight process, which completed in a fraction of the time.

What’s my role, what’s your role?

It’s a really good idea to get an understanding of what everyone’s role is when working with multiple partners. We found that borrowing a useful idea from ITIL and creating a RACI matrix (https://en.it-processmaps.com/products/itil-raci-matrix.html) was a really good way to communicate who was responsible for what during the migration, and also for the support of services after the migration.

Although with most servers we used a “Lift and Shift” (“Rehosting”) approach, in a number of cases we were also “Refactoring”, “Re-architecting” and “Retiring” where this made sense.

Go “Agile”

In short, “Agile” can mean a lot of different things to different people. It can also depend on the maturity of your client and their previous experiences.

We borrowed ideas from the Kanban methodology such as using sticky notes and tools like Trello to visualise the servers being migrated and to help us limit tasks in progress to make the team more productive.

We found we could take a lot of helpful parts from Agile methodologies like Scrum, including stand-ups, which allowed daily communication within the team.

And finally but probably most important – Manage Up and Around!

My old boss once told me “perception is reality” and it has always stuck.

It’s critical that senior stakeholders are kept well informed of progress in a concise manner and that project governance is put in place. This way key stakeholders from around the business can assist when help is needed and are involved in the process.

So, how does this work in an Agile world? Communications are key. You can still run your project using an Agile methodology but it’s still important to provide reporting on risks, timelines and financials to senior stakeholders. This reporting, along with regular updates with governance meetings reinforcing these written reports, will mean your client will be kept in the loop and the project on track.

Consegna and AWS are proud to be sponsoring the NZTA Hackathon again this year. The event will be held the weekend of 21 – 23 September.

Last year’s event, Save One More Life, was a huge success and the winner’s concept has been used to help shape legislation in order to support its adoption nationally.
The information session held in the Auckland NZTA Innovation space last night provided great insight into this year’s event, which focuses on accessible transport options; in other words, making transport more accessible to everyone, especially those without access to their own car, the disabled, and others in the community who are isolated due to limited transportation options.

The importance of diversity among the teams was a strong theme during the evening. For this event in particular, diverse teams are going to have an edge, as Luke Krieg, Senior Manager of Innovation at NZTA, pointed out, “Data can only tell you so much about a situation. It’s not until you talk to people, that the real insights appear – and what the data alone doesn’t reveal also becomes evident.”

Jane Strange, CX Improvement Lead at NZTA, illustrated this point nicely with a bell curve showing the relationship between users at each extreme of the transport accessibility spectrum.

Those on the right of the curve, with high income, urban locations, and proximity to and choice of transport options, invariably define transport policy for those on the left: people with low income, located in suburban or rural areas, who are typically more isolated and have fewer transport options.

Luke also stressed how much more successful diverse teams participating in Hackathons usually are. As these are time-boxed events that require a broad spectrum of skills, technology in and of itself often doesn’t win out. Diverse skills are essential to a winning team.

At Consegna, we like AWS and their services, which are covered by a solid bench of documentation, blog posts and best practices. Because it is easy to find open-source, production-ready code on GitHub, it is straightforward to deploy new applications quickly and at scale. However, moving too fast can sometimes lead to painful problems over time!

Deploying the AWS Serverless Developer Portal from GitHub straight to production works perfectly fine. Nevertheless, hardcoded values within the templates make it complicated to deploy multiple similar environments within the same AWS account. Introducing some parameterisation is usually the way to solve that problem, but that leaves the production stack out of alignment with the staging environments, which is, of course, not a best practice.

This blog post describes the solution we implemented to solve the challenge of migrating Cognito users from one pool to another at scale. The extra step of migrating the API keys associated with those users is also covered.

The Technology Stack

The deployed stack involves AWS serverless technologies such as Amazon API Gateway, AWS Lambda, and Amazon Cognito. It is assumed in this blog post that you are familiar with these AWS services, but we encourage you to check out the AWS documentation or to contact Consegna for more details.

The Challenge

The main challenge is to migrate Cognito users and their API keys at scale without any downtime or requiring any password resets from the end users.

The official AWS documentation describes two ways of migrating users from one user pool to another:

1. Migrate users when they sign in to Amazon Cognito for the first time, using a user migration Lambda trigger. With this approach, users can continue using their existing passwords and will not have to reset them after the migration to your user pool.

2. Migrate users in bulk by uploading a CSV file containing the user profile attributes of all users. With this approach, users will be required to reset their passwords.

We discarded the second option as we did not want our users to “pay” for this backend migration. So we used the following AWS blog article as a starting point, while keeping in mind that it does not cover the entire migration we needed to implement. Indeed, by default, an API key is created for every user registering on the portal. The key is stored in API Gateway and is named based on the user’s CognitoIdentityId attribute, which is specific to each user within a particular Cognito user pool.

The Solution

The Migration Flow

The version of our application currently deployed in production does not support the “Forgot my password” flow, so we did not implement it in our migration flow (but we should and will).

When a user registers, they must submit a verification code to get access to their API key. In the very unlikely situation where a user has registered against the current production environment without confirming their email address, the user will be migrated automatically, with automatic confirmation of their email address by the migration microservice. Based on the number of users and the low probability of this particular scenario, we considered it an acceptable risk. However, it might be different for your application.

The Prerequisites

In order to successfully implement the migration microservice, you first need to grant some IAM permissions and to modify the Cognito user pool configuration.

You must grant your migration Lambda function the following permissions (feel free to restrict them to specific Cognito pools using resource-level ARNs in the IAM policy).
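The exact permission list was not reproduced here, but based on the Cognito admin calls used later in this post, a starting policy might look like the following. This is an assumption, not the original policy; narrow Resource down to your specific pool ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "cognito-idp:AdminGetUser",
      "cognito-idp:AdminCreateUser",
      "cognito-idp:AdminInitiateAuth",
      "cognito-idp:AdminRespondToAuthChallenge",
      "cognito-identity:GetId"
    ],
    "Resource": "*"
  }]
}
```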

On both Cognito pools (the one you are migrating from and the one you are migrating to), enable the Admin Authentication Flow (ADMIN_NO_SRP_AUTH) to allow server-based authentication by the Lambda function executing the migration. You can do this via the Management Console or the AWS CLI.
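Assuming illustrative pool and app-client IDs, the CLI invocation might look like this. Note that --explicit-auth-flows replaces the client’s existing list of flows, so include any flows you already rely on:

```shell
# Repeat once per pool (old and new); IDs below are placeholders.
aws cognito-idp update-user-pool-client \
  --user-pool-id ap-southeast-2_EXAMPLE \
  --client-id 1example23456789abcdef \
  --explicit-auth-flows ADMIN_NO_SRP_AUTH
```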

The Implementation (in JS)

At the Application Layer

To allow a smooth migration for our users, the OnFailure branch of the login method should call our migration microservice instead of returning the original error back to the user. An unauthenticated API Gateway client is initialised to call the migrate_user method on our API Gateway. The result returned by the backend is straightforward: RETRY indicates a successful migration, so the application must log the user in again automatically; otherwise it must handle the authentication error (user does not exist, username or password incorrect, and so on).
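A minimal sketch of that flow, with the Cognito login and the API Gateway call injected as plain functions. The names here are illustrative, not the production code:

```javascript
// `authenticate` is the normal Cognito login call; `migrateUser` POSTs to
// the unauthenticated migrate_user method on API Gateway.
async function loginWithMigration(username, password, authenticate, migrateUser) {
  try {
    return await authenticate(username, password);
  } catch (err) {
    // On failure, ask the backend to migrate the user instead of
    // surfacing the original error straight away.
    const result = await migrateUser(username, password);
    if (result.status === "RETRY") {
      // Migration succeeded: log the user in again automatically.
      return authenticate(username, password);
    }
    // Migration not possible: handle the original authentication error.
    throw err;
  }
}
```

If the migration returns anything other than RETRY, the original authentication error is surfaced exactly as before.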

The Migration microservice

API Gateway is used in conjunction with Cognito to authenticate callers, but a few methods, such as our migrate_user, must remain unauthenticated. The migrate_user POST method is therefore configured on our API Gateway with no authorizer attached.

1 – Extract parameters from the body

All the data required for the migration has been passed by the application to our function via the request, so we just extract it. Of course, do not log the password, or it will appear in clear text in the execution logs of your Lambda.

Note: you might wish to inject the Cognito pool information directly into the Lambda via environment variables instead of passing it in the body of the request.
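A sketch of the extraction step; the field names and the shape of the incoming request are assumptions, not the exact production handler:

```javascript
// Pull the migration parameters out of the request body.
function extractMigrationParams(req) {
  const body = JSON.parse(req.body);
  const params = {
    username: body.username,
    password: body.password,   // never log this value
    oldPoolId: body.oldPoolId, // could come from env vars instead
    newPoolId: body.newPoolId,
  };
  // Log everything except the password so it can never appear in clear
  // text in the Lambda execution logs.
  console.log(JSON.stringify({ ...params, password: "[REDACTED]" }));
  return params;
}
```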

2 – Check if migration is required

A migration is indicated as required only if the user does not already exist in the new pool. However, be aware that this function does not verify the existence of the user in the old pool (that check is made during step 3).
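The check might be sketched as follows; the SDK client is injected so the decision logic can be exercised with stubs, and this is a sketch rather than the exact production function:

```javascript
// `cognito` is an AWS SDK CognitoIdentityServiceProvider client.
async function migrationRequired(cognito, newPoolId, username) {
  try {
    await cognito.adminGetUser({ UserPoolId: newPoolId, Username: username }).promise();
    return false; // already present in the new pool: nothing to do
  } catch (err) {
    if (err.code === "UserNotFoundException") {
      return true; // absent from the new pool, so migrate
    }
    throw err; // any other error is unexpected
  }
}
```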

3 – Resolve the CognitoIdentityId of the user within the old pool

Authenticate the user against the old pool using adminInitiateAuth, and get their CognitoIdentityId via the getId method. This is required for the migration of the user’s API key. Of course, if the user cannot be authenticated against the old pool, they cannot be migrated, so the function returns the error straight away.
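A sketch of that resolution step, assuming an identity pool is associated with the old user pool; the client and config names are illustrative:

```javascript
// `idp` is a CognitoIdentityServiceProvider client; `identity` is a
// CognitoIdentity client. ADMIN_NO_SRP_AUTH must already be enabled on
// the old pool's app client.
async function resolveOldIdentityId(idp, identity, cfg, username, password) {
  const auth = await idp.adminInitiateAuth({
    AuthFlow: "ADMIN_NO_SRP_AUTH",
    UserPoolId: cfg.oldPoolId,
    ClientId: cfg.oldClientId,
    AuthParameters: { USERNAME: username, PASSWORD: password },
  }).promise();
  // Exchange the ID token for the user's CognitoIdentityId, which is
  // what names their API key in API Gateway.
  const login = `cognito-idp.${cfg.region}.amazonaws.com/${cfg.oldPoolId}`;
  const id = await identity.getId({
    IdentityPoolId: cfg.oldIdentityPoolId,
    Logins: { [login]: auth.AuthenticationResult.IdToken },
  }).promise();
  return id.IdentityId;
}
```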

5 – Migrate user from old to new pool

Our user is now ready to be migrated! So let’s use the admin features of Cognito (adminCreateUser, adminInitiateAuth, and adminRespondToAuthChallenge) to create the user, authenticate them, and set their password.
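A sketch of that sequence, under the assumption that the user’s existing password is supplied as the temporary password and then confirmed via the NEW_PASSWORD_REQUIRED challenge so the user never has to reset it. Attribute list and config names are assumptions, and error handling is omitted:

```javascript
async function migrateUserToNewPool(cognito, cfg, username, password, email) {
  // Create the user in the new pool; SUPPRESS avoids an invite email.
  await cognito.adminCreateUser({
    UserPoolId: cfg.newPoolId,
    Username: username,
    TemporaryPassword: password,
    MessageAction: "SUPPRESS",
    UserAttributes: [
      { Name: "email", Value: email },
      { Name: "email_verified", Value: "true" },
    ],
  }).promise();
  // Authenticate: a freshly created user is challenged to set a new password.
  const auth = await cognito.adminInitiateAuth({
    AuthFlow: "ADMIN_NO_SRP_AUTH",
    UserPoolId: cfg.newPoolId,
    ClientId: cfg.newClientId,
    AuthParameters: { USERNAME: username, PASSWORD: password },
  }).promise();
  // Answer the challenge with the same password, making it permanent.
  await cognito.adminRespondToAuthChallenge({
    UserPoolId: cfg.newPoolId,
    ClientId: cfg.newClientId,
    ChallengeName: "NEW_PASSWORD_REQUIRED",
    ChallengeResponses: { USERNAME: username, NEW_PASSWORD: password },
    Session: auth.Session,
  }).promise();
  return "RETRY";
}
```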

8 – Migration complete, so return RETRY to indicate success

The migration is now complete, so return the RETRY status, indicating to the application that the user must be logged in again automatically.

Conclusion

By leveraging AWS serverless technologies, we were able to fully handle the migration of our client’s application users at the backend level. The customer was happy with this solution as it avoided asking users to reset their passwords, and it realigned production with staging.

It’s implementing solutions like this that helps set Consegna apart from other cloud consultancies — we are a true technology partner and care deeply about getting outcomes for customers that align with their business goals, not just looking after our bottom line.

How many times have you walked into your garage and taken stock of all the things you haven’t used in years? Those bikes that you bought for you and your partner that you haven’t used since the summer of ‘09, the fishing rods, the mitre saw, the boat (if you’re lucky), and the list goes on and on. Imagine if you didn’t have to pay for them all up front – and better yet, imagine if you could stop paying for them the moment you stopped using them!

Amazingly, that is the world we live in with the public cloud. If you’re not using something, then you shouldn’t be paying for it – and if you are, then you need to ask yourself some hard questions. The problem we’re seeing in customer-land is twofold:

Technical staff are too far removed from whoever pays the bills, and

It’s easier than ever to start new resources that cost money

Technical staff don’t care about the bill

Many technical staff who provision resources and use services on AWS have no idea what those services cost and have never seen an invoice or the billing dashboard. They don’t pay the bills, so why would they worry about what it costs?

Working with technical staff and raising awareness around the consequences of their choices in the public cloud goes a long way to arresting the free-fall into an unmanageable hosting bill. By bringing the technical staff along on the optimisation journey, you’re enabling them to align themselves with business goals and feel the choices they make are contributing in a positive way.

It’s so easy to create new resources

One of the biggest strengths of the public cloud is how easy it is to provision resources or enable services; however, this appears to be one of its weaknesses as well. It’s because of this ease of use that, time and time again, we see serious account sprawl: unused, underutilised and over-sized resources litter the landscape, nobody knows how much Project A costs compared to Project B, and there isn’t a clear plan to remediate the wastage and disarray.

Getting a handle on your hosting costs is an important step to take early on. Implementing a solid strategy to a) avoid common cost-related mistakes and b) identify and report on project costs is crucial to being successful in your cloud journey.

Success stories

Consegna has recently engaged two medium-to-large customers and challenged them to review the usage of their existing AWS services and resources with a view to decreasing their monthly cloud hosting fees. By working with Consegna as an AWS partner and focusing on the following areas, one customer decreased their annual bill by NZD$500,000 and the other by NZD$100,000. By carefully analysing the same areas of your own cloud footprint, you should also be able to significantly reduce your digital waste.

Right-sizing and right-typing

Right-sizing your resources is generally the first step you’ll take in your optimisation strategy. This is because you can make other optimisation decisions that are directly related to the size of your existing resources, and if they aren’t the right size to begin with then those decisions will be made in error.

Right-typing can also help reduce costs if capacity you’re relying on in one area of your existing resource type can be found in a more suitable resource type. It’s important to have a good idea of what each workload does in the cloud, and to make your decisions based on this instead of having a one-size-fits-all approach.

Compute

Right-sizing compute can be challenging if you don’t have appropriate monitoring in place. When making right-sizing decisions there are a few key metrics to consider, but the main two are CPU and RAM. Because of the shared responsibility model that AWS adheres to, it doesn’t have access to RAM metrics on your instances out of the box, so to get a view of this you need to use third-party software.

Consegna has developed a cross-platform custom RAM metric collector that ships to CloudWatch and has configured a third-party integration to allow CloudCheckr to consume the metrics to provide utilisation recommendations. Leveraging the two key metrics, CPU and RAM, allows for very accurate recommendations and deep savings.
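Consegna’s collector itself isn’t reproduced here, but the core idea can be sketched: parse the host’s memory figures (on Linux, from /proc/meminfo) into a utilisation percentage, which would then be published as a custom metric. This is an illustrative sketch, not the actual collector:

```javascript
// Compute RAM utilisation (%) from the text of /proc/meminfo.
function ramUtilisationPercent(meminfoText) {
  const kv = {};
  for (const line of meminfoText.split("\n")) {
    const m = line.match(/^(\w+):\s+(\d+)\s*kB/);
    if (m) kv[m[1]] = Number(m[2]);
  }
  // MemAvailable accounts for reclaimable caches, unlike MemFree.
  return 100 * (1 - kv.MemAvailable / kv.MemTotal);
}
```

The resulting percentage would be shipped to CloudWatch with putMetricData on a schedule, where CloudCheckr (or CloudWatch itself) can consume it for utilisation recommendations.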

Storage

Storage is an area that gets overlooked regularly which can be a costly mistake. It’s important to analyse the type of data you’re storing, how and how often you’re accessing it, where it’s being stored and how important it is to you. AWS provides a myriad of storage options and without careful consideration of each, you can miss out on substantial decreases of your bill.

Database

Right-sizing your database is just as important as right-sizing your compute – for the same reasons there are plenty of savings to be had here as well.

Right-typing your database can also be an interesting option to look at. Traditional relational databases appear to be becoming less and less popular as serverless technologies like DynamoDB grow – but it’s important to define your use case and provision resources appropriately.

It’s also worth noting that AWS have recently introduced serverless technologies to their RDS offering which is an exciting new prospect for optimisation aficionados.

Instance run schedules

Taking advantage of not paying for resources when they’re not running can make a huge difference to your bill, especially if you have non-production workloads that don’t need to be running 24/7. Implementing a day / night schedule can reduce your bill by 50% for your dev / test workloads.
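The arithmetic behind that claim is simple enough to sketch:

```javascript
// Percentage saved by a run schedule versus running 24/7.
function weeklySavingsPercent(hoursPerDay, daysPerWeek) {
  const scheduledHours = hoursPerDay * daysPerWeek;
  return 100 * (1 - scheduledHours / (24 * 7));
}
// A 12-hour day/night schedule, 7 days a week, halves the bill (50%).
// Stopping dev/test overnight AND at weekends (12h, 5 days) saves ~64.3%.
```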

Consegna takes this concept to the next level by deploying a portal for non-technical users to control when the instances they deal with day-to-day are running or stopped. By pushing this responsibility out to the end users, instances that would have been running 12 hours a day on a rigid schedule now only run for as long as they’re needed – an hour or two usually – supercharging the savings.

Identify and terminate unused and idle resources

If you’re not using something, then you should ask yourself whether you really need it running, or whether you could convert it to an on-demand model.

This seems like an obvious one, but the challenge can actually be around identification – there are plenty of places resources can hide in AWS so being vigilant and using the help of third party software can be key to aid you in this process.

Review object storage policies

Because object storage in AWS (S3) is so affordable, it’s easy to just ignore it and assume there aren’t many optimisations to be made in this area. This can be a costly oversight, as not only is the type of storage you’re using important, but so is how frequently you need to access the data.

Lifecycle policies on your object storage are a great way to automate rolling infrequently used data into cold storage, and can be key low-hanging fruit to nab early on in your optimisation journey.
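As an illustration, a lifecycle configuration along these lines (the prefix and day thresholds are assumptions) transitions objects to Infrequent Access after 30 days and to Glacier after 90:

```json
{
  "Rules": [{
    "ID": "roll-to-cold-storage",
    "Status": "Enabled",
    "Filter": { "Prefix": "logs/" },
    "Transitions": [
      { "Days": 30, "StorageClass": "STANDARD_IA" },
      { "Days": 90, "StorageClass": "GLACIER" }
    ]
  }]
}
```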

Right-type pricing tiers

AWS offers a robust range of pricing tiers for a number of their services and by identifying and leveraging the correct tiers for your usage patterns, you can make some substantial savings. In particular you should be considering Reserved Instances for your production resources that you know are going to be around forever, and potentially Spot Instances for your dev / test workloads that you don’t care so much about.

Of course, there are other pricing tiers in other services that are worth considering.

Going Cloud Native

AWS offers many platform-as-a-service offerings which take care of a lot of the day to day operational management that is so time consuming. Using these offerings as a default instead of managing your own infrastructure can provide some not so tangible optimisation benefits.

Your operations staff won’t be bogged down with patching and keeping the lights on – they’ll be freed up to innovate and explore the new and exciting technologies that AWS are constantly developing and releasing to the public for consumption.

Consegna consistently works with its technology and business partners to bake this optimisation process into all cloud activities. By thinking of ways to optimise and be efficient first, both hosting related savings and operational savings are achieved proactively as opposed to reactively.

Recently we discovered that a customer’s website was being attacked in what is best described as a “slow DoS”. The attacker was running a script that scraped each page of the site to find possible PDF files to download, then was initiating many downloads of each file.

Because the site was fronted by a Content Delivery Network (CDN), the site itself was fine and experienced no increase in load or service disruption, but it did cause a large spike in bandwidth usage between the CDN and the clients. The increase in bandwidth was significant enough to increase the monthly charge from around NZ$1,500 to over NZ$5,000. Every time the customer banned the IP address that was sending all the requests, a new IP would appear to replace it. It seems the point of the attack was to waste bandwidth and cost our customer money — and it was succeeding.

The site itself was hosted in AWS on an EC2 instance; however, the CDN service the site was using was a third party — Fastly. After some investigation, it seemed that Fastly didn’t have any automated mitigation features that would stop this attack. Knowing that AWS Web Application Firewall (WAF) has built-in rate-based rules, we decided to investigate whether we could migrate the CDN to CloudFront and make use of these rules.

All we needed to do was create a CloudFront distribution with the same behaviour as the Fastly one, then point the DNS records to CloudFront — easy, right? Fastly has a neat feature that allows you to redirect at the edge, which was being used to redirect the apex domain to the www subdomain. If we were to replicate this behaviour in CloudFront we would need some extra help, but first we needed to make sure we could make the required DNS changes.

To point a domain managed by Route 53 at CloudFront is easy: you can just set an ALIAS record on the apex domain and a CNAME on the www subdomain. However, this customer’s DNS was managed by a third-party provider who they were committed to sticking with (that’s a blog post for another day). The third-party provider did not support ALIAS or ANAME records and insisted that apex domains could only have A records — that meant we could only use IP addresses!

Because CloudFront has so many edge locations (108 at the time of writing), it wasn’t practical to get a list of all of them and set 108 A records — plus this would require activating the “static IP” feature of CloudFront which gives you a dedicated IP for each edge location, which costs around NZ$1,000 a month.

And to top all that off, whatever solution we decided to use would only be in place for 2 months as the site was being migrated to a fully managed service. We needed a solution that would be quick and easy to implement — AWS to the rescue!

So, we had three choices:

1. Stay with Fastly and figure out how to ban the bad actors.

2. Move to CloudFront and figure out the redirect (bearing in mind we only had A records to work with).

3. Do nothing and incur the NZ$5,000 cost each month — high risk if the move to a managed service ended up being delayed. We decided this wasn't really an option.

We considered spinning up a reverse proxy and pointing the apex domain at it to redirect to the www subdomain (remember, we couldn’t use an S3 bucket because we could only set A records) but decided against this approach because we’d need to make the reverse proxy scalable given we’d be introducing it in front of the CDN during an ongoing DoS attack. Even though the current attack was slow, it could have easily been changed into something more serious.

We decided to stay with Fastly and figure out how to automatically ban IP addresses that were making too many requests. Aside from the DNS limitation, one of the main drivers for this decision was the current rate of the DoS — it was so slow that it fell below the minimum rate-based rule configuration that AWS WAF allows (2,000 requests in 5 minutes). We needed to write our own rate-based rules anyway, so using CloudFront and WAF wouldn't have solved our problems straight away.

Thankfully, Fastly had an API that we could hit with a list of bad IPs — so all we needed to figure out was how to:

1. Get access to the Fastly logs,

2. Parse the logs and count the number of requests,

3. Auto-ban the bad IPs.
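As a rough illustration of the parse-and-count step, here's a minimal sketch in Python. The threshold value and the assumption that each log line starts with the client IP are ours for illustration; a real implementation would match the configured Fastly log format.

```python
from collections import Counter

# Hypothetical threshold: flag any IP making more than this many
# requests in the window covered by the log batch.
BAN_THRESHOLD = 500

def find_bad_ips(log_lines, threshold=BAN_THRESHOLD):
    """Count requests per client IP and return the set exceeding the threshold.

    Assumes each log line starts with the client IP, as in common
    web/CDN log formats; adjust the parsing for your actual format.
    """
    counts = Counter()
    for line in log_lines:
        ip = line.split()[0]  # first whitespace-separated field
        counts[ip] += 1
    return {ip for ip, n in counts.items() if n > threshold}
```

Because the attack was slower than WAF's minimum rate, the threshold here can be tuned as low as the traffic profile requires.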

Because Fastly allows log shipping to S3 buckets, we configured it to ship to our AWS account in a log format that could be easily consumed by Athena, and wrote a set of AWS Lambda functions that:

1. Queried the Fastly logs S3 bucket using Athena,

2. Inspected the results and banned bad actors by hitting the Fastly API, maintaining state in DynamoDB,

3. Built a report of bad IPs and ISPs and generated a complaint email.
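The ban step can be sketched roughly as follows. The state-diffing helper is the testable core (in the real deployment that state lived in DynamoDB); the API call uses a placeholder URL and payload, not the real Fastly endpoint shape, so treat it as an assumption-laden outline only.

```python
import json
import urllib.request

def diff_new_bans(bad_ips, already_banned):
    """Return only the IPs not yet banned.

    In the deployed solution, the already-banned set was maintained in
    DynamoDB; a plain set stands in for it here.
    """
    return set(bad_ips) - set(already_banned)

def ban_ips(new_ips, api_token, service_id):
    """Push each newly flagged IP to the CDN's block list.

    NOTE: the URL and JSON body below are placeholders for illustration,
    not the actual Fastly API; consult the provider's documentation.
    """
    for ip in sorted(new_ips):
        req = urllib.request.Request(
            f"https://api.example.com/service/{service_id}/blocklist",
            data=json.dumps({"ip": ip}).encode(),
            headers={"Fastly-Key": api_token,
                     "Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # fire the ban request
```

Keeping the diff separate from the API call means each Lambda invocation only bans IPs it hasn't seen before, rather than re-banning the whole list on every run.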

The deployed solution looked something like this:

By leveraging S3, Athena, Lambda and DynamoDB, we were able to deploy a fully serverless rate-based auto-banner for bad actors with a very short turnaround. The customer was happy with this solution: it avoided the NZ$5,000-per-month cost, avoided changing the existing brittle DNS setup, and provided some valuable exposure to how powerful serverless technology on AWS is.

It's implementing solutions like this that sets Consegna apart from other cloud consultancies — we are a true technology partner and care deeply about getting outcomes for customers that align with their business goals, not just looking after our bottom line.

At Consegna we pride ourselves on our experience and knowledge. Our recently appointed National Client Manager has a knack for knowing virtually everybody in Wellington. This can be amusing if you're with him, as a 10-minute walk down Lambton Quay can take half an hour. We now tend to leave gaps that long between meetings. One of the questions he asks people is one we probably all ask: "how's business?" The answer is always the same: "busy!"

Now sometimes that can just be a standard answer, but if it's true and we're all so busy, then what are we busy doing? We would like to think we're doing super innovative work, but how much time do we actually dedicate to innovation? Most of the time we're busy just trying to complete projects. Google famously scrapped its "20% time" dedicated to innovation because managers were judged on the productivity of their teams and were concerned about falling behind on projects when only 80% of capacity was available. Yet the flipside was that "20% time" formed the genesis of Gmail, AdSense and Google Talk.

So how can we continue to create when we're all so busy? We have to be innovative about being innovative. One solution organisations have been working with is the hackathon: a short period of time, usually around 48 hours, in which an organisation stops what it's doing and works on exciting new things that would traditionally be out of scope or "nice to haves".

Consegna was selected as the University of Auckland's cloud enablement partner in 2017 and was recently involved with the University's cloud centre of excellence team, who wanted to promote and facilitate innovation within their internal teams. One answer was to host a hackathon with the University's internal teams. In total there were 17 teams of two to six people, tasked with trying to solve operational issues within the University — and there were a number of reasons we jumped at the chance to help.

First and foremost we liked the idea of two days being used to build, break and create. We were there as the AWS experts to help get the boat moving, but deep down we’re all engineers and developers who like to roll up our sleeves and get on the tools.

Secondly, after watching larger companies like Facebook explain why they use hackathons, the idea resonated with the team at Consegna: "Prototypes don't need to be polished, they just need to work at the beginning". Facebook Chat came out of a 2007 hackathon, evolved into Web Chat, then Mobile Chat, and then became Facebook Messenger as we know it today. The end result of a hackathon doesn't need to be pretty, but the potential can know no bounds. Consegna was able to help Auckland University build something whose success was judged not on immediate outcomes, but on its potential to do amazing things.

Hackathons can also be incredibly motivating for staff. For those of you familiar with Daniel Pink's book "Drive", you'll know money doesn't always motivate staff the way we traditionally thought it did. Autonomy, Mastery and Purpose are key parts of Pink's thinking, and also the key to a successful hackathon. The business is not telling you how to do something and, in a lot of cases, the guidelines are fluid enough that it isn't even telling you what to build. The guiding question of a successful hackathon can be succinctly put: "what do you think?"

We applaud Auckland University for inviting us along so their staff could pick our brains and learn new things. Want to know how to program Alexa? No problem! How can I collect, process and analyse streaming data in real time? Talk to one of our lead DevOps Architects; they'll introduce you to Kinesis. There was as much learning and teaching going on at the hackathon as there was building of some amazing applications.

It's important to be aware of the potential pitfalls of hosting a hackathon and to be mindful you don't fall into the traps. You want to make sure you get the most out of your hackathon, that everybody enjoys it, and that there are plenty of takeaways, because otherwise, bluntly, what's the point?

Hackathons as a general rule don't allow for access to customers. If you're dedicating just 48 hours to solving a problem, how can you have understanding of and empathy for customers when they're not around? If they're not there to give feedback so you can iterate, can you even build it? Auckland University got around that problem by largely building for themselves; they were the customer and could interview each other for feedback. If you think a hackathon will provide a solution for customer-facing applications, consider either making customers available for further enhancements outside the hackathon or choosing a different innovation approach altogether.

Earlier I mentioned how Facebook uses hackathons to inspire innovation, but there is one downside: theirs are unpaid and held on weekends. Employees can feel obligated to participate to please the hierarchy, which amounts to forcing innovation out of staff. Generally, this is not going to go well. The driving factors Pink mentions are far less prevalent when staff are working in their own time, usually without pay, to build something the business owns. From what I've seen, Facebook puts on some amazing food for its hackathons, but so what? Value your staff, value the work they do, and don't force them to do it on their own time. Auckland University's hackathon was scheduled over two working days, in work time, and made sure staff felt valued for building what was ultimately a tool for the University.

Over the two days, the teams presented their projects and there were some amazing outcomes. MiA, the winner, put a QR code on a big screen so students could snapshot it and register their attendance at each lecture, with an eye on Amazon Rekognition as the next iteration, using image recognition to measure attendance. Generally speaking, there's not a whole lot wrong with how the University currently measures attendance with good old-fashioned pen and paper, but how accurate is it when humans are involved, and how cool is it for an AWS product to take the whole problem away?

In all, it was an amazing event to be a part of and we must thank Auckland University for inviting us. Also thanks to AWS for providing a wealth of credits so there was no cost barrier to building and creating.

If you're thinking of hosting your own hackathon, I wish you well. It's a great way to take a break from being busy and focus on creating new tools with unbounded potential. It will get your staff scratching a learning and creative itch they've been aching to get at.

Most importantly, always take time and make strategies to keep innovating. Don’t be too busy.

As the newly appointed ICT Manager for a large government agency, Jeff was keen to make his mark quickly and decisively in his new role. Looking at the IT spend over the last three years, he could see that in spite of the market shifting considerably, the agency had been paying exorbitant amounts for application hosting. The market had trended downwards with pressure from large Cloud providers like AWS. Looking at their hosting arrangements more closely, Jeff could not only see their costs remained largely unchanged but also, service reliability had been steadily declining. This project looked like an ideal candidate to reduce cost, increase service levels, and make the mark he wanted.

After looking at the current environment carefully, an in-progress rebuild of the agency's primary public website appeared to be a good choice. The site was in development using AWS services. The development team had chosen AWS for development due to its low cost, well within their budget. Far more compelling to them was the speed with which the developers could provision and utilise the necessary resources. What would typically take internal IT weeks to provide could be accomplished inside a day using AWS management tools and company best practice.
Using managed services such as AWS Elastic Beanstalk, the development team not only had ready access to a low-cost development environment they could provision on demand; they could also run parallel application stacks for testing. That just wasn't possible using their existing infrastructure, which was difficult to access and manage. AWS services allowed new configurations to be tested quickly at a fraction of the cost of traditional infrastructure: cents per hour, versus a few hundred dollars a month.

With the application completed, launch day and migration commenced. Fortunately, Jeff had identified that the team needed an AWS partner to assist with the migration. This led to the realisation that a scalable architecture was required to support the fluctuating demand on the website. With the right design and the right AWS partner, the site was migrated and delivered a 75% saving on the former hosting costs. With the automated scaling and monitoring AWS services provided as part of the production environment, downtime dropped to less than 1% over the first few months of operation, improving even more over time. The site had gone from two to three outages per month on the old hosting, due to network and other issues, to no unscheduled outages from one month to the next.

At this point, one might think this would be the end of the story. The primary production site was on AWS at a fraction of the former cost. Service levels were higher than they had ever been. The new problem beginning to emerge was cost control and governance of the growing number of environments.

With the success of this project, Jeff's teams started moving more and more projects to AWS. As more teams in the organisation adopted this approach, keeping an eye on resource usage became more challenging. Managing which teams and individuals had access to which resources was also emerging as a challenge. The situation Jeff found himself in after an initial easy win is quite common. Many companies that discovered server virtualisation during the mid-2000s learned that technology on its own can create new challenges nobody had anticipated.

The simple answer to why Cloud is relevant to your business today is the agility it provides and the shift of CapEx to OpEx, not to mention the tangible cost savings you can make. What's important for ICT managers to understand, however, is the importance of a structured approach to Cloud adoption. Not every workload is going to be a suitable candidate for migration. Ongoing success requires the implementation of a Cloud Adoption Framework (CAF) and the establishment of a Cloud Centre of Excellence (CCoE), neither of which need be as daunting as they sound. The CAF examines your current environment and assesses which workloads would work in the Cloud. It covers six perspectives: Business, People, Governance, Platform, Security, and Operations. In so doing, it ensures thought is given to each of these within the context of Cloud. What training do your people need, for example, when a particular application gets migrated to the Cloud? How would their roles change? What operational support would they need?

A CCoE should be seen as the thought leadership and delivery enablement team that will help plan and execute on the CAF. It usually consists of SMEs from each principal area the CAF examines. By choosing an AWS Consulting Partner like Consegna, who understand this pragmatic, structured approach to cloud adoption and digital transformation, ongoing long-term success starts from a solid foundation.

The discoveries Jeff made during his journey to Cloud are being made by ICT managers on a near-daily basis. An increasing number of Jeff's peers understand that Cloud is a timely and necessary step to reduce cost and increase agility. Those with that extra slice of understanding and knowledge are working with partners like Consegna to do Digital Transformation the smart way, putting CAF and CCoE at the forefront of their journey and seeing great success in doing so.

An all too familiar scenario playing out in numerous organisations today is a rapid push onto the Cloud. Typically this entails a technical division getting approval to migrate a workload onto the Cloud, or to establish a new project on AWS EC2 instances.

Fast forward a few months and the organisation's cloud adoption seems to be progressing fairly well. More and more teams and applications are running on AWS, but it's around this time that cost and management start spiralling out of control. A quick survey of the organisation's AWS account reveals users, instances and various other resources running unchecked "on the cloud". The typical issues many organisations have wrestled with in their traditional infrastructure space have been directly translated onto the Cloud. Now the financial controller and other executives are starting to question the whole Cloud strategy. What exactly has gone wrong here?

Whether you’re prototyping a small proof of concept (POC) or looking to migrate your entire organisation into the Cloud, if you fail to adhere to a Cloud Adoption Framework (CAF) and form a Cloud Centre of Excellence (CCoE), it won’t be a case of if the wheels fall off the wagon, but when.

Think of the CAF as a structured roadmap that outlines how and when you’re going to migrate onto the Cloud, and the CCoE as the project leadership team who will be able to execute the CAF in a manner that will guarantee results.

The CAF looks at Cloud Adoption from six different perspectives. Business, People, Governance, Platform, Security and Operations.

It ensures that no matter what project you have in mind, the broader implications of Cloud adoption are highlighted and worked through. If you've ever been privy to the disaster an ad-hoc, technology-led adoption results in, you'll truly appreciate how critical the CAF is to successful Cloud transformation.

Consegna specialises in running CAF workshops that set organisations up for success with end-to-end Cloud transformation, from both a Business and a Technical perspective. For a coffee and a chat about CAF and how it can ensure the success of your Cloud migration and digital transformation, contact sales@consegna.cloud.

When it comes to digital transformation and cloud adoption, a trusted advisor with the right experience is a crucial component to your success. How do you evaluate your options?

Case studies, such as the one recently published by Consegna.Cloud, highlight the challenges faced and the solutions provided, giving key decision makers essential insights into the scope and scale of other projects in relation to their own.

A key point of interest in a case study is the complexity of the project. For example, in the QV case study Consegna recently published, the existing systems were almost twenty years old: a mature system with a lot of legacy components, and thus a high level of complexity.

Case studies also highlight the key benefits a project delivered. In the case of QV, it was over 50% savings on infrastructure costs alone. Look for these key metrics as you read through the case study.

Such artefacts are useful in establishing the evidence-based capabilities that Consegna, a trusted Cloud advisor, possesses. Feel free to contact the Consegna team to see how your project compares to the QV project. Simply email sales@consegna.cloud to start the conversation.