You may find the following useful if you have trouble running effective daily scrum meetings with a large, geographically distributed team. You might find that the meeting takes more than 10 minutes, feels awkward, surfaces no new understanding, and results in no one doing anything differently. Some people may complain that you're not Agile enough and need a smaller, co-located team, while others may have decided that Agile is simply ineffective.

When acquiring data for the data warehouse from source systems, it can be useful to make a clear distinction between the time at which an event occurred and the time at which the event was recorded by the source system. In the simplest case, the source system records the event at the moment it occurs, and the anomalies described below do not happen. But when there is a delay between the actual time of the event and the time the record of the event reaches the source system, there is a trap that needs to be avoided.
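The trap, in short: if incremental extracts filter on the event's own timestamp, a late-arriving record falls behind the extraction watermark and is silently skipped. Below is a minimal sketch of an extract keyed on the recording time instead; the table and column names (source_events, occurred_at, recorded_at) and the DB-API connection are illustrative assumptions, not taken from any particular source system.

```python
from datetime import datetime

def extract_new_events(conn, last_watermark: datetime):
    """Pull rows the source system has recorded since the last extract.

    Filtering on recorded_at (when the source wrote the row) rather than
    occurred_at (when the event happened) means a late-arriving record is
    still picked up: its recorded_at falls after the watermark even though
    its occurred_at may fall well before it.
    """
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT event_id, occurred_at, recorded_at, payload
            FROM source_events
            WHERE recorded_at > %s
            ORDER BY recorded_at
            """,
            (last_watermark,),
        )
        return cur.fetchall()
```

The new watermark is then the maximum recorded_at seen in the batch, so nothing recorded between extracts can slip through, regardless of how late the underlying events arrive.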

On Tuesday I participated in an online panel on the subject of Continuous Improvement, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps.

This article about the recent S3 slowdown and recovery notes that AWS originally pursued the wrong root cause. There's always a risk of this happening. We discuss the benefits of being able to revert changes here.

We created a mechanism we called "The Federator" for making data processed on one Redshift cluster available on other Redshift clusters. This post follows on from the introduction in part 1 and describes how we solved the challenge of dealing with large data volumes.
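For context only, and not the Federator's actual design (the post covers that): a common way at the time to move a table between Redshift clusters was an UNLOAD to S3 from the producing cluster followed by a COPY on the consuming one, both of which run in parallel across slices and so cope reasonably with large volumes. A rough sketch, with the connections, S3 prefix, and IAM role all assumed:

```python
def replicate_table(src_conn, dst_conn, table: str, s3_prefix: str, iam_role: str):
    """Copy one table between Redshift clusters via S3 (illustrative only)."""
    with src_conn.cursor() as cur:
        # UNLOAD writes one compressed file per slice in parallel,
        # which keeps large tables manageable on the producing side.
        cur.execute(
            f"UNLOAD ('SELECT * FROM {table}') "
            f"TO '{s3_prefix}/{table}/' "
            f"IAM_ROLE '{iam_role}' GZIP ALLOWOVERWRITE"
        )
    with dst_conn.cursor() as cur:
        # COPY loads the slice files in parallel on the consuming cluster.
        cur.execute(
            f"COPY {table} FROM '{s3_prefix}/{table}/' "
            f"IAM_ROLE '{iam_role}' GZIP"
        )
    dst_conn.commit()
```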

This is an overview of the principles we followed when migrating an enterprise data warehouse to cloud infrastructure on AWS Redshift. We added automated deployments and automated testing to the more traditional set of data warehousing principles.