Blurring the line between software development and operations

Tag Archives: cloud infrastructure

If you have ever had to handle a big marketing campaign on a website or a promotion on your SaaS product, you probably know that sizing your servers for the anticipated load is notoriously hard and costly. Typically, you need beefy hardware to power a number of servers, plus a load balancer to split the traffic evenly across them, so that the system can handle millions of views within a narrow window of a few hours.
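The even split a load balancer performs is classically done with round-robin scheduling. Here is a minimal sketch in Python of that idea; the server names are hypothetical and a real balancer would of course also handle health checks and connection draining:

```python
from itertools import cycle

# Hypothetical pool of application servers sitting behind the load balancer.
SERVERS = ["app-1", "app-2", "app-3"]

def round_robin(servers):
    """Yield servers in turn so traffic is split evenly across the pool."""
    return cycle(servers)

rr = round_robin(SERVERS)
# Dispatch six requests: each server receives exactly two of them.
assignments = [next(rr) for _ in range(6)]
```

Round-robin is the simplest policy; weighted or least-connections scheduling becomes useful once the servers are not identical.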

If you read my post on the OperatingDev blog or the Precursor Games case study on the TriNimbus site, you know that we recently worked with Precursor Games to design and implement a scalable deployment architecture for their crowdfunding campaign. We chose to deploy the system on AWS, and our aim was to ensure the servers could handle up to a million page views in the first few days. Continue reading →
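It is worth noting how such a target translates into request rates: a million page views spread over a few days is a modest average, but campaign traffic is spiky, so the peak is what you size for. A rough back-of-the-envelope sketch (the 10x peak factor is an assumption for illustration, not a measured figure from the campaign):

```python
def required_capacity(page_views, days, peak_factor=10):
    """Estimate average and peak requests per second for a campaign.

    peak_factor is an assumed ratio of peak to average traffic;
    a real sizing exercise would use projected traffic curves instead.
    """
    avg_rps = page_views / (days * 24 * 3600)
    return avg_rps, avg_rps * peak_factor

avg, peak = required_capacity(1_000_000, days=3)
# Roughly 3.9 requests/s on average, ~39/s at the assumed peak.
```

The gap between the average and the peak is exactly why cloud auto-scaling is attractive for campaigns: you pay for peak capacity only for the hours you need it.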

A while back I wrote that crowdfunding can distract startups from working on their products. While I still believe that, one of my clients has shown me that the two can be put together to create a community of people who passionately care about your product and invest their time and ideas (beyond the $10 or $50 or whatever amount they contribute) in helping you deliver an outstanding experience.

“We are a small team,” said Otto, the CEO of a promising startup in the social media space, “and we let development deploy changes directly to production.”

“We don’t have much custom development,” said Pam, the IT manager at a small manufacturing shop in the cosmetics industry, “and we mostly do integrations with our ERP. There is no need for a staging environment.”

“We hire the best developers and our code is fully unit tested,” said Rupmeet, who manages a team of seven developers working on an analytics product in the cloud, “so we don’t have problems deploying directly to production.”

So far in the Disaster Recovery series I have discussed the importance of DR planning for building resilient organizations regardless of size and described the steps needed for building a basic DR plan using the cloud. This article will complete the series by looking at the process of recovery and the thoughts that go into proactive planning and recovery actions.

The most critical part of DR, and the hardest to implement, is the procedure for rebuilding your infrastructure in a different location should a disaster hit. As you can imagine, the cost of maintaining a secondary infrastructure in a traditional data centre is prohibitive enough that many companies never even start thinking about DR.

The previous article in the Disaster Recovery series talked about the importance of implementing a proper DR plan for every organization, including startups and small businesses, as a way of building resilience into the organization at the technology level.

Let’s look at some of the ways you can implement a basic DR plan that leverages the cloud for cost effectiveness. The example below uses AWS as the cloud provider, but any cloud infrastructure should work as long as it provides the features discussed below.
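The first building block of any such plan is simply getting a dated backup archive off the primary machine. A minimal sketch of that step follows; the paths are placeholders, and the "offsite" destination is just a local directory for illustration, standing in for remote storage such as an S3 bucket in another region:

```python
import shutil
import tarfile
import tempfile
from datetime import date
from pathlib import Path

def backup_offsite(data_dir: str, offsite_dir: str) -> Path:
    """Create a dated tar.gz archive of data_dir and copy it offsite.

    Here 'offsite' is another directory for illustration only; in a real
    DR plan it would be remote storage in a different geographic location.
    """
    archive = Path(tempfile.gettempdir()) / f"backup-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_dir, arcname=Path(data_dir).name)
    dest = Path(offsite_dir)
    dest.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(archive, dest))
```

Scheduling this daily and verifying that the archive can actually be restored are the unglamorous parts that make the difference when a disaster does hit.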

If you’re a startup or a small business, you’re probably thinking that disaster planning and recovery processes (DR, as IT folks usually call them) are for the big guys. If you’re currently taking a risk and running your systems with no redundancy or reasonable recovery plan, you’re not alone.

Many companies have neither the experience nor the budget to implement a proper DR strategy beyond a simple database or file backup – one that often doesn’t even leave the premises where the main servers run, and is thus vulnerable to the same problems the overall system is exposed to.

If you are building a business that will survive long enough to see its products used by many customers, you need to add DR to your toolkit of good business practices. It will pay for itself the next time an investor knocks on your door, even if a disaster never hits.

It is the year 2016 and Amazon is getting ready for its fifth re:Invent conference. The conference has already attracted a lot of attention – after all, one of the keynotes will be broadcast from aboard MV Ushuaia as Amazon prepares to launch its newest data centre, built next door to Palmer Station on Anvers Island in the Antarctic Peninsula region. With this, they will have data centres on every continent on our planet! (Some say they are building one under water in the Pacific too, but these are only rumors for now.)

Interest in the conference has been growing since the first one in 2012, and this year Amazon decided to hold simultaneous events in several locations around the world. The audiences will follow the broadcast of the keynote from Antarctica using the latest video streaming service the AWS team plans to officially announce at the conference – AWS Cineplex – which is seriously threatening Netflix as the main provider of online movie streaming.

In last week’s article I argued that most software code sits at what I called maturity levels 1-3, and only teams that practice coding as a deliberate discipline reach level 4. At this level they control every aspect of their deployment: configuring the application, managing shared libraries and third-party dependencies, deploying or migrating the database, and so on. They also take advantage of scripted builds, continuous integration and automated testing, which are prerequisites for the successful adoption of agile and lean methodologies.
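Controlling application configuration at this level usually means the deployment environment, not the code, decides the settings, so the same build artifact runs unchanged in staging and production. A minimal sketch of that pattern (the variable names and defaults are hypothetical):

```python
import os

# Built-in defaults, used when the deployment environment does not override them.
# These names are illustrative, not from any particular application.
DEFAULTS = {"DB_HOST": "localhost", "DB_PORT": "5432", "LOG_LEVEL": "INFO"}

def load_config(environ=os.environ):
    """Merge deployment-time environment variables over built-in defaults."""
    return {key: environ.get(key, default) for key, default in DEFAULTS.items()}

# A production-like environment overrides only what differs from the defaults.
config = load_config({"DB_HOST": "db.internal", "LOG_LEVEL": "DEBUG"})
```

Keeping configuration out of the code is also what makes the scripted, repeatable deployments mentioned above possible in the first place.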

But there are two more levels (and maybe more can be achieved through innovation) that push the limits of what is possible with code. Arguably, these levels are only possible thanks to advances in virtualization and cloud technologies, and they may be out of reach for certain types of software products that have to deal with traditional infrastructure, or where the immaturity of the available tools makes it hard to go beyond level 4. (Those projects are possibly limited more by tradition and mindset than by tools, though.)

I’d like to propose a new paradigm to describe the impact of the cloud and related services on our world. I think we’re unconsciously changing our lives to adjust to a world in which many things are delivered to us as a service – that is, in bits and bytes according to our needs, schedule and budget. Unfortunately, we’re still not aware of its implications, or whether we should be wary of the changes or welcome them. It seems to me we’re developing a hunger for more and more services, and we’re slowly adopting service-oriented lives.

It used to be that in order to get something done we had to spend considerable time obtaining and learning the tools needed for the task, prepare an environment to do the work in, and then hopefully have enough time left to complete the task itself. But then some folks realized they could start delivering the tools and means to us, and we ended up with broadcasting services like TV and radio, pizza or newspaper delivery, and so on. We still had to turn on the TV and choose what to watch, or open the pizza box and eat, but we no longer had to put in the preparation time required by, say, going to the cinema or a restaurant.

A friend who got interested in concepts like DevOps and Infrastructure as Code after I introduced them to him recently forwarded me a PC Magazine article about something called Software Defined Networking (SDN). Being passionate about technologies that virtualize hardware infrastructure and let you manage your resources programmatically, I was immediately intrigued and decided to dig up more information. After all, as the map on the right indicates, the IT landscape is very complex, and if Dev & Ops can be put to work together through a software-driven approach to typical infrastructure needs, everyone wins.
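Part of the appeal of defining networks in software is that the topology becomes data you can validate before anything is provisioned. As a small illustration of that idea (not of any particular SDN product), here is a sketch using Python's ipaddress module to catch overlapping subnets in a made-up layout:

```python
import ipaddress
from itertools import combinations

# A hypothetical network layout expressed as data rather than cabling.
SUBNETS = {
    "web": "10.0.1.0/24",
    "app": "10.0.2.0/24",
    "db":  "10.0.3.0/24",
}

def find_overlaps(subnets):
    """Return pairs of subnet names whose CIDR ranges overlap; the kind of
    sanity check a software-defined network can run before deploying."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}
    return [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
            if na.overlaps(nb)]

overlaps = find_overlaps(SUBNETS)  # an empty list means the layout is clean
```

Catching a mistake like this in a code review, rather than after a misrouted packet in production, is a good example of why a software-driven approach benefits both Dev and Ops.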