Blurring the line between software development and operations

Tag Archives: startups

If you have ever had to handle a big marketing campaign on a website or a promotion on your SaaS product, you probably know that sizing your servers for the anticipated load is notoriously hard and costly. Typically, you need beefy hardware to power a number of servers, plus a load balancer to split the traffic evenly between them, so that the system can handle millions of views in a short window, sometimes just a few hours.
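The sizing exercise behind that statement can be sketched as a back-of-envelope calculation. All of the numbers and the function below are hypothetical illustrations, not measurements from any real campaign:

```python
# Back-of-envelope capacity planning for a short, high-traffic campaign.
# Every number here is a hypothetical assumption, not a measurement.

def servers_needed(total_views, window_hours, peak_factor, rps_per_server):
    """Estimate how many servers must sit behind the load balancer.

    peak_factor accounts for traffic not being spread evenly over the
    window; rps_per_server is how many requests one server sustains.
    """
    avg_rps = total_views / (window_hours * 3600)
    peak_rps = avg_rps * peak_factor
    # Round up: a fraction of a server is still a whole server.
    return -(-peak_rps // rps_per_server)  # ceiling division

# Example: 2 million views over 4 hours, 3x peak, 150 req/s per server.
print(int(servers_needed(2_000_000, 4, 3, 150)))  # 3
```

The point of a calculation like this is less the exact number and more that the peak factor, not the average, drives the hardware bill, which is exactly what makes campaign sizing so costly to get wrong.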

If you read my post on the OperatingDev blog or the Precursor Games case study on the TriNimbus site, you know that we recently worked with Precursor Games to design and implement a scalable deployment architecture for their crowdfunding campaign. We chose to deploy the system on AWS, and our aim was to ensure the servers could handle up to a million page views in the first few days.

A while back I wrote that crowdfunding can distract startups from working on their products. While I still believe that, one of my clients has shown me that the two can be combined to create a community of people who passionately care about your product and invest their time and ideas (beyond the $10 or $50 or whatever amount they contribute) in helping you deliver an outstanding experience.

It seems that the barrier to starting a new business to build software products is getting lower by the year.

On one hand, this is fuelled by the availability of many mature open source frameworks and tools, along with the proliferation of services running in the cloud. These tools and frameworks make it possible to spend very little money to start building fairly complex products. (Why, today it is even possible to write and compile code in the cloud.)

Openness to ideas, collaboration, communication. These are some of the things one of the agile teams I am coaching values and wants to keep doing. The team values them as much as tracking issues in GitHub, regularly refactoring their application framework throughout the sprints, or fixing bad coding practices.

So far in the Disaster Recovery series I have discussed the importance of DR planning for building resilient organizations regardless of size and described the steps needed for building a basic DR plan using the cloud. This article will complete the series by looking at the process of recovery and the thinking that goes into proactive planning and recovery actions.

The most critical part of DR, and the hardest to implement, is the procedure to rebuild your infrastructure in a different location should a disaster hit. As you can imagine, the cost of a secondary infrastructure in a traditional data centre is so prohibitive that many companies never even start thinking about DR.

The previous article in the Disaster Recovery series talked about the importance of implementing a proper DR plan for every organization, including startups and small businesses, as a way of building resilience into the organization at the technology level.

Let’s look at some of the ways you can implement a basic DR plan, leveraging the cloud for cost effectiveness. The example below uses AWS as the cloud provider, but any cloud infrastructure should work, provided it offers the features discussed below.
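The foundation of any such plan is a backup that actually leaves the premises. Here is a minimal sketch of the first half of that step in Python: creating a timestamped archive of a data directory. The paths and file names are hypothetical, and the off-premises copy (for example, uploading the archive to an S3 bucket in a different region) is left as a comment since it requires cloud credentials:

```python
import tarfile
import tempfile
import time
from pathlib import Path

def snapshot_data_dir(data_dir: str, backup_dir: str) -> Path:
    """Create a timestamped tarball of data_dir inside backup_dir.

    In a real DR plan this archive would then be copied off-premises,
    e.g. uploaded to object storage in a different region, so that a
    local disaster cannot destroy both the system and its backups.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(backup_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_dir, arcname=Path(data_dir).name)
    # Next step (not shown): upload `archive` to another region,
    # e.g. with boto3's s3.upload_file(str(archive), bucket, key).
    return archive

# Demo with temporary directories standing in for real paths.
with tempfile.TemporaryDirectory() as data, tempfile.TemporaryDirectory() as dest:
    (Path(data) / "app.db").write_text("customer records")
    archive = snapshot_data_dir(data, dest)
    print(archive.exists())  # True
```

The key design point is the second half: until the archive lands in a different region, this is just a backup, not DR.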

If you’re a startup or a small business, you probably think that disaster planning and recovery processes (DR, as IT usually calls them) are for the big guys. If you’re currently taking a risk and running your systems with no redundancy or reasonable recovery plan, you’re not alone.

Many companies have neither the experience nor the budget to implement a proper DR strategy beyond a simple database or file backup, which often never even leaves the premises where the main servers run and is thus vulnerable to the same problems the overall system is exposed to.

If you are building a business that will survive long enough to see its products used by many customers, you need to put DR into your toolkit of good business practices. It will pay for itself the next time an investor knocks on your door, even if a disaster never hits.

I am often asked to give advice on the most important processes a software team needs to implement for managing their code. The discussion usually moves from version control, to branching and merging strategies, unit testing, code reviews, static code analysis, build automation, continuous integration, test automation, code coverage, stress testing, etc. But one aspect I find just as important as all of the above is the scope your code controls. That is, how many things sit outside your codeline, and how you can ensure your code is complete.

When you think of code, what comes to mind first? The lines that define your application features, I presume. But those features don’t come to life in a vacuum, solely from your code, unless you’re coding The Matrix, that is.

I recently learned that a local software development company decided to implement Agile “properly” and in the process let go of everyone with any process knowledge or experience. “Dev will do it themselves” was the message these people got at the door.

(If anyone is in need of good people with experience in product or project management and delivery, customer engagement, and similar, email me at kima@operatingdev.com and I will put you in touch with a few I know very well.)

While recently advising a client on how to properly implement agile methodologies like Scrum, I was talking to their team about the importance of automation and mentioned that automation starts with standardization. What I meant was that to reduce the cost of automation, they need to standardize the platforms and technologies they use for development and production. The reaction from their people got me thinking. Their response was: ‘Are you saying we should ask everyone to use only one programming language? We are a small company, and we have been trying to attract talent by letting people choose the technology they think will best help them do their work.’