As a follow-up to my post on "Continuously controlled integration for Agile development", here's a Google Tech Talk video on using such practices to scale on a massive code base. Here's a snapshot of what's involved:

Even at this size, Google still runs the build from a single monolithic source tree with various programming languages intermingled. Here's an excerpt from the talk's introduction:

At Google, due to the rate of code in flux and increasing number of automated tests, this approach does not scale. Each product is developed and released from 'head' relying on automated tests verifying the product behavior. Release frequency varies from multiple times per day to once every few weeks, depending on the product team.

With such a huge, fast-moving codebase, it is possible for teams to get stuck spending a lot of time just keeping their build 'green' by analyzing hundreds if not thousands of changes that were incorporated into the latest test run to determine which one broke the build. A continuous integration system should help by providing the exact change at which a test started failing, instead of a range of suspect changes or doing a lengthy binary-search for the offending change. To find the exact change that broke a test, the system could run every test at every change, but that would be very expensive.

To solve this problem, Google built a continuous integration system that uses fine-grained dependency analysis to determine all the tests a change transitively affects and then runs only those tests for every change.
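The core idea of that test-selection system can be sketched in a few lines. This is a minimal illustration, not Google's actual implementation: it assumes a hypothetical module-level reverse dependency graph (in a real build system this would come from declared build targets) and walks it breadth-first to find every test a change transitively affects.

```python
from collections import defaultdict, deque

# Hypothetical reverse dependency graph: module -> modules that depend on it.
# In a real system this would be derived from declared build targets.
reverse_deps = defaultdict(set)

def add_dependency(module, depends_on):
    """Record that `module` depends on `depends_on`."""
    reverse_deps[depends_on].add(module)

def affected_tests(changed_modules, test_modules):
    """Walk the reverse dependency graph breadth-first to find every
    test module transitively affected by the changed modules."""
    seen = set(changed_modules)
    queue = deque(changed_modules)
    while queue:
        module = queue.popleft()
        for dependent in reverse_deps[module]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen & set(test_modules)
```

With this, a change to `lib_a` triggers only the tests that (directly or indirectly) depend on `lib_a`, rather than the whole test suite — which is exactly how the system pins a failure to the single change that introduced it.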

For those who want to learn more, watch the video below:

Read this great article from The New York Times by one of my favorite business writers, Clayton Christensen. It was written just days before the presidential election in which Obama emerged the eventual victor, and it discusses the dilemma facing our capitalist system, which has been drawing only lethargic interest of late. As he states:

Capitalists seem almost uninterested in capitalism, even as entrepreneurs eager to start companies find that they can’t get financing. Businesses and investors sound like the Ancient Mariner, who complained of “Water, water everywhere — nor any drop to drink.”

It’s a paradox, and at its nexus is what I’ll call the Doctrine of New Finance, which is taught with increasingly religious zeal by economists, and at times even by business professors like me who have failed to challenge it. This doctrine embraces measures of profitability that guide capitalists away from investments that can create real economic growth.

Christensen goes on to describe three models of innovation that investors and executives would typically finance with their capital:

Empowering Innovations - Examples of these include the Ford Model T or Sony's transistor radio. These kinds of innovations "create jobs, because they require more and more people who can build, distribute, sell and service these products. Empowering investments also use capital — to expand capacity and to finance receivables and inventory."

Sustaining Innovations - This is where old products are replaced by new ones. The Toyota Prius is used as an example, but as Christensen states, they "replace yesterday’s products with today’s products and create few jobs. They keep our economy vibrant — and, in dollars, they account for the most innovation. But they have a neutral effect on economic activity and on capital."

Efficiency Innovations - These are used exclusively to reduce the cost of making and distributing existing products and services. It is very similar to the "Lean" method, but "such innovations almost always reduce the net number of jobs, because they streamline processes. But they also preserve many of the remaining jobs — because without them entire companies and industries would disappear in competition against companies abroad that have innovated more efficiently."

His argument, and I think a very valid one, is that the world has focused almost exclusively on efficiency innovations since the start of the recent Great Recession. But such innovations "also emancipate capital. Without them, much of an economy’s capital is held captive on balance sheets, with no way to redeploy it as fuel for new, empowering innovations. For example, Toyota’s just-in-time production system is an efficiency innovation, letting manufacturers operate with much less capital invested in inventory."

The discussion goes on to describe how these innovation movements typically cycle through a normal economic period of growth and recession, creating an almost organic system that regulates itself like a homeostatic organism. But the scariest part of his article is the description of the current economic downturn and how it fits into a pattern of increasingly prolonged recoveries:

In the last three recoveries, however, America’s economic engine has emitted sounds we’d never heard before. The 1990 recovery took 15 months, not the typical six, to reach the prerecession peaks of economic performance. After the 2001 recession, it took 39 months to get out of the valley. And now our machine has been grinding for 60 months, trying to hit its prerecession levels — and it’s not clear whether, when or how we’re going to get there. The economic machine is out of balance and losing its horsepower. But why?

The answer is that efficiency innovations are liberating capital, and in the United States this capital is being reinvested into still more efficiency innovations. In contrast, America is generating many fewer empowering innovations than in the past. We need to reset the balance between empowering and efficiency innovations.

So this prolonged downturn has created a perpetual cycle of feeding efficiency innovations into more and more efficiency innovations. I can relate to this in real terms: the projects I've been involved with have had "efficiency" as their goal, front and center, with very little sustaining innovation and practically none of the empowering innovations Christensen outlines.

So the implication is that this is not only a capitalist's dilemma, but a dilemma for the Agile project leader as well. Lean is a great tool at the Agile project leader's disposal, but it is not the only one. Agile is at heart a method tailor-made for empowering innovations, since it allows one to test and iterate on new, innovative products and services, the very ones that will create the jobs of tomorrow.

Christensen's framework is a brilliant way to view your pipeline of projects and to ensure they have the right mix: one that not only creates efficiencies and sustains existing products and services, but also empowers growth, propelling your organization's portfolio of products and services as well as your and your team's personal and professional growth.

Some food for thought!

For Agile software development, I'm a big advocate of XP's practice of continuous integration. This is the practice where a developer's check-in triggers an automated system to compile and build the code into the system. One of my early projects was to implement an automated build system for a .Net/C# platform using open source software such as CruiseControl, working in conjunction with a custom makefile that was triggered on check-in to compile and build the code. These builds got deployed to three environments: development, staging, and production.

If a build broke due to a compiler error, or integration tests did not pass (unit tests were done at the function level), the developer would receive a warning by email that he or she had to fix the error. This only affected the development environment. It wasn’t until we reached the end of our iteration, after all the code was checked in and the integration tests passed, that the build would be deployed to the staging environment, where system and UAT testing got done. Once this was all complete, it would go to the “production” environment. I have production in quotation marks as this was really a pre-production or “prototyping” environment.
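The check-in pipeline described above can be sketched as a small driver script. This is a hedged illustration of the flow, not the original system: the `notify` callback stands in for the email warning we actually sent, and the build and test commands are placeholders for whatever the makefile invoked.

```python
import subprocess

def run_step(command):
    """Run one build step; return (succeeded, combined output)."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def ci_pipeline(build_cmd, test_cmd, notify):
    """Minimal check-in pipeline: compile, then run integration tests.
    On any failure, notify the developer and stop; only a green build
    would go on to the development environment."""
    ok, output = run_step(build_cmd)
    if not ok:
        notify("Build broken (compiler error):\n" + output)
        return False
    ok, output = run_step(test_cmd)
    if not ok:
        notify("Integration tests failed:\n" + output)
        return False
    return True
```

In the real setup, `notify` was an email to the developer who checked in, and a `True` result was what allowed the build to land in the development environment.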

So it was with interest that I read this post from the Plastic SCM website about the difference between a “continuous” and “controlled” integration:

In a regular continuous integration scenario, all developers perform integrations and solve merge conflicts in code, which is perfectly acceptable on small, well-trained teams. Even agile teams can be affected by personnel rotation, though, or just new members joining, and it’s usually not a good idea to have a brand new developer mixing code he doesn’t yet understand.

In controlled integration a new role shows up: the integrator. The integrator is usually a seasoned team member who is familiar with the bulk of the code, the version control system, and the build and release process. The most important feature of the integrator is not that she knows all the code, which is not even necessary, but that she takes responsibility for the integration process. The integrator’s primary goal is creating a new stable baseline to serve as the base for development during the next iteration.
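The integrator's loop described in that quote can be sketched abstractly. This is my own illustrative sketch, not Plastic SCM's mechanism: `merge` and `run_tests` are stand-ins for the version control and build systems the integrator actually drives, and the goal is the one the quote names — a new stable baseline for the next iteration.

```python
def controlled_integration(baseline, task_branches, merge, run_tests):
    """Sketch of the integrator's role: merge each finished task branch
    into a candidate baseline, keep a branch out if it breaks the tests,
    and return the new stable baseline plus the rejected branches."""
    candidate = baseline
    rejected = []
    for branch in task_branches:
        merged = merge(candidate, branch)
        if run_tests(merged):
            candidate = merged          # branch accepted into the baseline
        else:
            rejected.append(branch)     # sent back to its developer
    return candidate, rejected
```

The key design point matches the quote: developers never merge into the baseline directly; one accountable person (or process) gates every merge behind the tests.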

I have to agree with this, and I think it is an important distinction to make. For the automated build system I was in charge of, I played that role of integrator, tasked with setting the baseline for the next iteration's development.

The illustration above shows an example of a mixture of continuous and controlled processes. For anyone deploying an Agile software development project with continuous integration, I recommend looking at this model and making sure you and your team recognize the difference and know where to adopt one process or the other.


You knew it would happen, and so it has: with Kanban's growing popularity in the Agile community in recent years, someone was bound to merge Scrum with Kanban and call it something cute: Scrumban! (Scrum + Kanban). Before you roll your eyes and shrug your shoulders at "yet another Agile method", it may be something worth looking at.

Here's a slide from Yuval Yeret who was apparently one of the first few people to formally propose this:

Since there seem to be many synergies between Scrum and Kanban techniques, why not combine the best elements of both into one?

Kanban already has a well-known Lean way of visually presenting the flow of work, so use it to visualize backlog items and user stories

Integrate daily stand-up meetings with Kanban board reviews to move the flow of the project more effectively

Use visual WIP limits to help prioritize backlog items and user stories and efficiently achieve incremental progress
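The combination above can be made concrete with a toy board. This is a minimal sketch of the Scrumban idea, with illustrative column names and limits that neither method prescribes: Scrum-style stories flow across Kanban-style columns, and the WIP limit refuses a pull when a column is full.

```python
class ScrumbanBoard:
    """Toy board mixing Scrum backlog items with Kanban WIP limits.
    Column names and limits here are illustrative, not prescribed."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                  # e.g. {"doing": 2}
        self.columns = {name: [] for name in wip_limits}

    def pull(self, story, column):
        """Pull a story into a column only if its WIP limit allows it."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False     # limit reached: finish current work instead
        self.columns[column].append(story)
        return True

    def move(self, story, src, dst):
        """Move a story between columns, respecting the destination limit."""
        if story in self.columns[src] and self.pull(story, dst):
            self.columns[src].remove(story)
            return True
        return False
```

For example, with `ScrumbanBoard({"todo": 99, "doing": 2, "done": 99})`, a third pull into "doing" is refused until a story there moves to "done" — which is exactly the behavior behind the WIP-limit challenge discussed below.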

Some challenges you will face and need to address before moving forward:

Unless you and your teams familiarize yourselves well with both techniques, you may waste more time context switching between which Scrum or Kanban method to follow. Worse, you may confuse the team and Product Owner with inappropriate use of terminology, creating self-induced impediments

Kanban places a limit on WIP, which in Scrum means placing a ceiling on the number of backlog items in progress at any given time, which can cause the flow to slow and/or stop.

This is not too big a deal, but it goes back to the previous challenge: a Scrum-centered team will get slowed down wondering what to do when the limit is reached

Anyway, this is yet another inevitable evolutionary iteration of Agile methods and techniques that you may want to familiarize yourself with and try out if it suits your project's goals and requirements.

Yet another adoption of Agile outside of software development presents itself with this article on how a retail outlet called "Oddyssea", based in Half Moon Bay, CA, used Agile development principles to deploy its retail operations:

On July 14, 2012, Oddyssea, a first-of-its-kind retailer, opened in Half Moon Bay, Calif. Equal parts science, nature, games, magic and furnishings, with a generous dash of whimsy, Oddyssea thematically is dedicated to exploring, creating and discovering. What’s most interesting about the Oddyssea retail experience is it was conceptualized, designed, implemented and continues to operate using Agile. The Agile software engineering model. But Agile is unrelated to the store’s point of sale, inventory management or financial systems. Rather, it’s completely focused on defining the retail experience... to build a retail operation that was flexible and open to customers, collaborative with his target market and, most importantly, change ready.

Leveraging the principles of Agile from software development, they customized the methods for:

Quicker conceptualization of new retailing ideas

Design implementation based on those ideas

Development of a working prototype

Testing and tweaking of the prototype based on customer feedback

Creating a culture of continuous improvements by making each iteration better than the last and delighting their customers

I recommend reading the article if you are interested in seeing how Agile is crossing the boundaries of software development into the development of retail outlets!