Tutum is a platform to build, run and manage your Docker containers. After playing with it briefly some time ago, I decided to take a more serious look at it this time. This article describes my first impressions of the platform, looking at it specifically from a continuous delivery perspective.

The web interface

The first thing you notice is the clean and simple web interface. There are two main sections: services and nodes. The services view lists the services or containers you have deployed, with status information and two buttons: one to stop (or start) the container and one to terminate it, which means to throw it away.

You can drill down to a specific service for more detailed information. The detail page shows the containers, a slider to scale up and down, endpoints, logging, some metrics for monitoring and more.

The second view is a list of nodes: the VMs on which containers can be deployed. Again there are two simple buttons, one to start/stop and one to terminate the node. For each node it displays useful information about the current status, where it runs, and how many containers are deployed on it.

The node page also allows you to drill down to get more information on a specific node. The screenshot below shows some metrics in fancy graphs for a node, which can potentially be used to impress your boss.

Creating a new node

You'll need a node to deploy containers on. In the node view you see two big green buttons. One states: "Launch new node cluster". This brings up a form listing four popular providers: Amazon, Digital Ocean, Microsoft Azure and Softlayer. If you have linked your account(s) in the settings, you can select that provider from a dropdown box. It only takes a few clicks to get a node up and running. In fact you create a node cluster, which allows you to easily scale up or down by adding or removing nodes from the cluster.

You also have the option to "Bring your own node". This allows you to add your own Ubuntu Linux systems as nodes to Tutum. You need to install an agent on your system and open up a firewall port to make your node available to Tutum. Again, very easy and straightforward.

Creating a new service

Once you have created a node, you will probably want to do something with it. Tutum provides jumpstart images with popular types of services for storage, caching, queuing and more, offering for example MongoDB, Elasticsearch or Tomcat. Using a wizard, it takes only four steps to get a particular service up and running.

Besides the jumpstart images that Tutum provides, you can also search public repositories for your image of choice. Eventually you will want your own images running your homegrown software. You can store these in a Tutum private registry: either pull them from Docker Hub or upload your local images directly to Tutum.

Automating

We all know real (wo)men (and automated processes) don't use GUIs. Tutum provides a nice and extensive command line interface for both Linux and Mac. I installed it using brew on my MBP, and seconds later I was logged in and doing all kinds of cool stuff from the command line.

The CLI actually makes REST calls under the hood, so you can skip the CLI altogether and talk HTTP directly to the REST API, or, if it pleases you, use the Python API to create scripts that are actually maintainable. You can pretty much automate all management of your nodes, containers, and services through the API, which is a must-have in this era of continuous everything.
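To give a feel for talking to the REST API directly, here is a minimal Python sketch that lists your deployed services. The base URL and the ApiKey authorization scheme are my assumptions from memory, so check the Tutum API documentation before relying on them:

```python
import json
import urllib.request

# Hypothetical base URL -- verify against the Tutum API docs.
TUTUM_API = "https://dashboard.tutum.co/api/v1"

def auth_headers(username, api_key):
    """Authorization headers for a Tutum REST call (ApiKey scheme assumed)."""
    return {
        "Authorization": "ApiKey {}:{}".format(username, api_key),
        "Accept": "application/json",
    }

def list_services(username, api_key):
    """Fetch the list of deployed services -- the same data the web UI's service view shows."""
    req = urllib.request.Request(TUTUM_API + "/service/",
                                 headers=auth_headers(username, api_key))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

From here it is a small step to scripts that stop, scale or redeploy services as part of a pipeline.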

A simple deployment example

So let's say we've built a new version of our software on our build server. Now we want to get this software deployed to do some integration testing, or, if you're feeling lucky, just drop it straight into production.

Build the Docker image:

tutum build -t test/myimage .

Upload the image to the Tutum registry:

tutum image push <image_id>

Create the service:

tutum service create <image_id>

Run it on a node:

tutum service run -p <port> -n <name> <image_id>

That's it. Of course there are lots of options to play with, for example the deployment strategy, memory settings, auto starting and so on. But the steps above are enough to get your image built, deployed and running. Most of the time I spent was waiting for my image to upload over the flaky-but-expensive hotel wifi.
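In a build pipeline you would script these steps rather than type them. A minimal sketch in Python that shells out to the tutum CLI (the image, port and service names are placeholders; it assumes the CLI is installed and you are logged in, and uses the image tag throughout rather than an image id):

```python
import subprocess

def deployment_plan(image, port, name):
    """The build/push/run steps from the example above, as argument lists."""
    return [
        ["tutum", "build", "-t", image, "."],
        ["tutum", "image", "push", image],
        ["tutum", "service", "run", "-p", str(port), "-n", name, image],
    ]

def deploy(image, port, name, dry_run=False):
    """Execute each step in order, stopping on the first failure."""
    for cmd in deployment_plan(image, port, name):
        if dry_run:
            print(" ".join(cmd))  # show what would run, without touching Tutum
        else:
            subprocess.run(cmd, check=True)

# Example: deploy("test/myimage", 8080, "myservice", dry_run=True)
```

Hook a call like this into your build server's post-build step and every green build can go straight to a node.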

Conclusion for now

Tutum is clean, simple and just works. I'm impressed with the ease and speed with which you can get your containers up and running. It takes only minutes to get from zero to running using the jumpstart services, or even your own containers. Although they still call it beta, everything I did just worked, without the need to read through lots of complex documentation. The web interface is self-explanatory, and the REST API or CLI provides everything you need to integrate Tutum into your build pipeline, so you can get your new features into production with automation speed.

I'm wondering how challenging management through the web interface would become at a scale of hundreds of nodes and even more containers. You'd need a meta-overview or aggregate view of some kind. But then again, you have a very nice API to build one yourself.

This week's Software Process and Measurement Cast is a magazine feature with three columns. This week we have columns from Gene Hughson (Form Follows Function), completing a three-column arc on microservices. In Jo Ann Sweeney's new Explaining Change column, Jo Ann tackles the concept of communication channels. The SPaMCAST essay this week is on Agile coaching. Coaches help teams and projects deliver the most value; however, many times organizations eschew coaches or conflate management and coaching. This week we will have an external coach versus management death match!

Contest

We are having a contest! Anthony has offered a copy of his great new book to a randomly selected SPaMCAST listener, ANYWHERE IN THE WORLD. Enter between February 22nd and March 7th. The winner will be announced on March 8th. If you want a copy of Agile Project Management you have two options: send your name and email address to spamcastinfo@gmail.com (I will act as the broker and notify the winner, at which point we can deal with other types of addresses), OR you can buy a copy. Remember, buying a copy through the Software Process and Measurement Cast helps support the podcast.

Can you tell a friend about the podcast? This week Julie Davis introduced two of her co-workers to the podcast and then emailed us at spamcastinfo@gmail.com. Welcome, Joe and Cindy! Pictures of you and your friends listening to the podcast would be great. If your friends don't know how to subscribe or listen to a podcast, show them how you listen to the Software Process and Measurement Cast and subscribe them! Remember to send us the name of the person you subscribed (and a picture) and I will give both you and the horde you have converted to listeners a call out on the show.

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don't have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.


Next SPaMCast

In the next Software Process and Measurement Cast we will feature our interview with Shirly Ronen-Harel. We began by talking about the book she co-authored (or is co-authoring) The Coaching Booster, which is 80% complete on LeanPub. We branched out into other topics including coaching, lean, Agile and using lean and Agile in startups. This was an incredibly content-rich podcast. Have your notepad ready when you listen because Shirly provides ideas and advice that can change how you work!

International Conference on Software Quality and Test Management
Washington D.C., May 31 - June 5, 2015
Wednesday, June 3, 2015
http://qualitymanagementconference.com/

I will be presenting a new and improved version of “The Impact of Cognitive Biases on Test and Project Teams.”


Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions; however, performance is falling even further behind and fear has become a central feature of the corporate culture. If the book stopped here it would be a brief tragedy, but Chapter 4 begins the path toward the redemption of Alex Rogo and the ideas that are the bedrock of lean.

New Characters

Jonah – advisor

Lou – the plant's chief accountant

Chapter 4:

In Chapter 3, Alex was at a company meeting called to communicate the depth of the problems the division was having and to search for answers. The meeting was not holding Alex's attention. He found a cigar in his jacket and flashed back to a chance meeting in an airport lounge. While smoking a cigar, Alex recognizes and strikes up a conversation with a professor from grad school. The discussion turns to the problems at the plant: even though they have pursued changes that have yielded great efficiencies, the problems still exist and perhaps are getting worse. Alex proudly shows Jonah a chart that "proves" a 36% improvement in efficiency from using robots and automation. Jonah asks one very simple question: was profit up 36% too? Alex struggles and answers that "it is not that simple." In fact the number of people in the plant did not go down, inventory did not go down, and not one additional widget had been shipped. This interaction foreshadows one of the key ideas that The Goal presents to the reader: we are measuring the wrong thing! When we measure the wrong thing we send the wrong message and we get the wrong results. Chapter 4 closes with Jonah and Rogo talking about the real meaning of productivity. Productivity is defined as accomplishing something in terms of a goal. Without knowing the goal, measuring productivity and efficiency is meaningless.

Chapter 5:

In Chapter 5 we snap back to the all-day meeting to discuss the division's performance (Chapter 3). Alex continues to ruminate on Jonah's comments. Alex leaves the meeting under the pretext of a problem back at the plant. As he drives back he begins to reflect on "the goal" in the definition of productivity identified in Chapter 4. Alex decides that he will not have time to think at the plant due to the day-to-day demands (also known as the tyranny of the urgent but not important; see Habit 3 of Stephen Covey), and therefore heads to a favorite pizzeria for pizza and beer. Goldratt and Cox use Alex's inner dialog to show why most of the current internal goals and measures Alex is being asked to pursue miss the point. The bottom line is that the plant's goal is to make money. If it does not make money, the rest does not matter. While The Goal is set in a manufacturing plant, the point is that if any group or department does not materially impact the real goal of an organization, it should not exist.

Chapter 6:

Chapter 6 begins with a search for the overall measures that contribute to (or predict) whether the plant is meeting the goal of profitability. One of the first questions Alex poses to himself is whether he can assume that making people work and making money are the same thing. This sounds like a funny question; however, I often see managers and leaders mistake being busy for delivering value. Alex and Lou brainstorm a set of three metrics that impact the goal: net profit, ROI and cash flow. In this conversation Alex tells Lou the truth about the state of the division and the potential closure of the plant. The three metrics sound right; however, Alex does not see the immediate connection between the measures and day-to-day operations. The chapter ends with Alex asking the third-shift supervisor how his activities impact net profit, ROI and cash flow. He simply gets the deer-in-the-headlights look.

Chapters 4 – 6 shift the focus from steps in the process to the process as a whole. Organizations have an ultimate goal. In this case the ultimate goal of the plant is to make money. The goal is not quality, efficiency or even employment, because in the long run, if the plant doesn't deliver a product that can be sold, it won't exist. Whether an organization is for-profit or non-profit, if it doesn't attain its ultimate goal it won't exist.

Without a roadmap and a value focus it is easier to perceive that the current "project" might be the last one for a while, therefore you need to ask for the moon.

It is often more difficult to take a product focus for applications that will be used internally than for an application that will be used by or sold to an external customer. Part of the issue seems to be the distance of an application from the ultimate end of the value chain, and therefore from revenue. The further away from revenue, the harder it is to view the user of the software as a customer. Therefore, providing support for tools that enable or support non-customer-facing work is often viewed as less critical than customer-facing applications or tools. The difficulty in considering internal software as a product is less an artifact of any real difference between internal- and external-facing applications than of perspective. Differences in perspective are typically built on minor differences in organization and market attributes. These differences include:

Ability to switch – Internal "customers" are often hostages to the services provided by internal IT organizations, at least in the short run. While that sounds strong, internal customers often do not have the option to shift providers if they don't like the service or quality they receive. In the long run, switching can and often does occur, either through outsourcing, formation of shadow IT groups in the business, or changes in IT leadership. Less flexibility in the short run can often lead to a lack of discipline when it comes to defining product roadmaps or defining the true value any specific feature or function might deliver. Without a roadmap, a form of fatalism can set in, in which users always ask for more than they need at the moment but usually accept what they are offered (after a lot of noisy conversation).

Internal politics – The value of work that is sold to or used by external customers is usually easier to measure. Functionality either solves a need and generates revenue or increases customer satisfaction. Developing a value for work to be consumed internally is rarely that cut and dried. Priorities are often defined by considerations that don't reflect the true quantitative value of the work. Priorities often reflect the requestor's (or requestor's group's) positional power. In my first job, the head of accounting's requests always floated to the top of the list, even though we were a garment manufacturer with a sales focus. Prioritization by factors that don't relate to value makes it difficult to develop roadmaps or plan releases for applications that don't have the same level of political clout. When you hear the saying "the squeaky wheel gets the grease," it often means that the organization has a project rather than a product focus.

Talking with Customers – Another difference between internal applications and external products that impacts whether an application is viewed as a product is who needs to have input into its direction. Products require discussion not only with internal stakeholders, but also with external customers. Internal applications supported by individual projects only require discussion with internal stakeholders. The lack of a perceived impact outside of the company's boundaries makes it difficult to generate the motivation to get involvement across the IT/business boundary. For example, it is often harder to identify and get product owner involvement to support planning for work to be used internally. Agile techniques are often a tool to remove the barriers between IT and internal business groups. However, it is easier to generate the involvement needed to facilitate developing plans, roadmaps and communication when revenue is involved, which tends to yield a project perspective (short term) rather than a product perspective.

Perceived differences between work done for internal and external use tend to drive internal customers into a more transactional mode. Without a roadmap and a value focus it is easier to perceive that the current "project" might be the last one for a while, therefore you need to ask for the moon.

The concepts of project and product provide two alternatives that might lead readers to believe that one perspective is more important than the other. You need both sets of behaviors generated by the project and product perspectives. How these behaviors are incorporated into roles on teams is not as straightforward as designating one role to represent project concerns and another to represent product concerns, and never the twain shall meet. The two roles do not have to be separate people. Agile spreads the project-centric behaviors across the entire team, and even the product owner typically absorbs some of the project-centric activities. However, other than at a philosophical level, the team is not typically charged with performing the product-centric activities. Agile techniques spread project behaviors across the team, while product-driven behaviors remain more concentrated.

Project-centric behaviors are focused on the delivery of the tactical plan, while the product owner focuses more on the vision of the long-term future, i.e. the product roadmap. Even though the product owner has a distinct interest in the tactical (what is to be accomplished in a sprint or release), the team has a more focused interest in day-to-day activities. The team must plan, monitor and adjust the day-to-day activities needed to meet their commitments during the sprint (commitments in Agile are by definition tactical). The product owner can contribute; however, they typically do not have the technical acumen to deliver functional software. Without a product view, though, day-to-day project considerations will typically trump long-term considerations. In a mature Agile environment, the product view interacts with the project view to generate an equilibrium between long- and short-term perspectives.

Project and product focuses require different measurements. The project focus on delivery and short-term goals generates a need to understand, pursue and measure delivery efficiency. Efficiency is a measure of transformation: how much of a set of raw materials is needed to create an output. Efficiently producing any output is only valuable IF what is being produced is what is needed and can actually be delivered. Interestingly, most software is a step toward a different product that is bought or used. Because the software being developed or enhanced is a step along a path, the value assigned often does not represent the ultimate impact to the organization (see our Re-Read Saturday series on The Goal for more on this topic). The product owner, as the steward of the product perspective, owns the definition and measurement of value. He or she needs to take the big-picture view of what the market needs AND what the market will pay for. What the market will pay for is just as important for an internal product as for an external one. In order to understand the value a product delivers, the product owner must ask whether the result of a sprint or release positively impacts ROI, profit and cash flow. Efficiency is a mechanism to determine whether a team is making the most out of their "raw material," but it does not provide feedback on whether what is being produced is the right thing, or whether the functionality delivered yields value to the organization.

In general the product owner will be the champion for the product perspective; however, every team member needs to have an understanding of how the future should unfold and the value they are being asked to deliver. The team will need to temper the product vision based on the constraints of the day-to-day environment. Both the project and product perspectives are needed to maximize value. Putting either perspective ahead of the other for any length of time will create an imbalance that will reduce team effectiveness.

Well that’s an easy question, I thought, the first time it was asked of me in a Certified ScrumMaster class. “The difference is …,” I began to reply and realized it wasn’t actually such an easy difference after all.

I’d been using the two terms, “user story” and “task” in my classes for years, and they seemed pretty distinct in my head. User stories were on the product backlog and tasks were identified during sprint planning and became part of the sprint backlog.

That was fine but wasn’t very helpful—it was like saying “salt is what goes in a salt shaker and pepper is what goes in a pepper grinder.” Sure, stories go on the product backlog and tasks go on a sprint backlog. But what is the essential difference between the two?

I paused for a second the first time I was asked that question, and realized I did know what the difference was. A story is something that is generally worked on by more than one person, and a task is generally worked on by just one person.

Let’s see if that works …

A user story is typically functionality that will be visible to end users. Developing it will usually involve a programmer and tester, perhaps a user interface designer or analyst, perhaps a database designer, or others.

It would be very rare for a user story to be fully developed by a single person. (And when that does happen, that person is filling several of those roles.)

A task, on the other hand, is typically something like code this, design that, create test data for such-and-such, automate that, and so on. These tend to be things done by one person.

You could argue that some of them are or should be done via pairing, but I think that’s just a nuance to my distinction between user story and task. Pairing is really two brains sharing a single pair of hands while doing one type of work. That’s still different from the multiple types of work that occur on a typical story.

I have, however, used a couple of wiggly terms like saying tasks are typically done by one person. Here’s why I wiggled: Some tasks are meetings—for example, have a design review between three team members—and I will still consider that a task rather than a user story.

So, perhaps the better distinction is that stories contain multiple types of work (e.g., programming, testing, database design, user interface design, analysis, etc.) while tasks are restricted to a single type of work.

This week's Software Process and Measurement Cast features our interview with Anthony Mersino, author of Emotional Intelligence for Project Managers and the newly published Agile Project Management. Anthony and I talked about Agile, coaching and organizational change. It is a wide-ranging interview that will help any leader raise the bar! We also talked about his new venture: Vitality Chicago.

We are having a contest! Anthony has offered a copy of his great new book to a randomly selected SPaMCAST listener, ANYWHERE IN THE WORLD. Enter between February 22nd and March 7th. The winner will be announced on March 8th. If you want a copy of Agile Project Management you have two options: send your name and email address to spamcastinfo@gmail.com (I will act as the broker and notify the winner, at which point we can deal with other types of addresses), OR you can buy a copy. Remember, buying a copy through the Software Process and Measurement Cast helps support the podcast.

Anthony's bio:
Anthony C. Mersino, PMP, PMI-ACP, CSP is an Agile Transformation Coach and IT Program Manager with more than 28 years of experience. He has delivered large-scale business solutions to clients that include Abbot Labs, IBM, Unisys, NORC, and Wolters Kluwer, and provided Agile Coaching for The Carlyle Group, Northern Trust, Bank of America, and Highland Solutions.

Anthony is the author of Agile Project Management, and Emotional Intelligence for Project Managers. He is also the founder of Vitality Chicago, an Agile transformation consulting firm focused on helping teams THRIVE and organizations TRANSFORM.

Can you tell a friend about the podcast? Even better, show them how you listen to the Software Process and Measurement Cast and subscribe them! Send me the name of the person you subscribed and I will give both you and the horde you have converted to listeners a call out on the show.

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don't have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.

In the next Software Process and Measurement Cast we will feature another magazine feature. Next week's podcast includes columns from Gene Hughson, discussing microservices, Jo Ann Sweeney's Explaining Change, and our essay on Agile coaching. Coaches help teams and projects deliver the most value; however, many times organizations eschew coaches or conflate management and coaching. Both actions rob teams and organizations of energy and value. We discuss why next week.

Contact information:

Email:

Anthony@ProjectAdvisorsGroup.com

AMERSINO@VITALITYCHICAGO.COM

Websites:

http://projectadvisorsgroup.com/about.html

http://www.vitalitychicago.com/


Today we begin the re-read of The Goal. If you don't have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version

Eliyahu M. Goldratt and Jeff Cox wrote The Goal: A Process of Ongoing Improvement (published in 1984). The Goal is framed as a business novel. In general, a novel presents a story through a set of actions and events to facilitate a plot. A business novel uses the plot, interactions between characters and events to develop and illustrate concepts or processes that are important to the author. Bottom line, a business novel uses metaphors rather than drawn-out scholarly exposition to make its point. The Goal uses the story of Alex Rogo, plant manager, to illustrate the Theory of Constraints and how the wrong measurement focus can harm an organization.

I am using the 30th anniversary edition of The Goal for this re-read. This version of the book includes two forewords and 40 chapters.

The two forewords to The Goal expose Goldratt's philosophical approach. For example, in the forewords, science is defined as a method to develop or expose a "minimum set of assumptions that can be explained through straightforward derivation, the existence of many phenomena of nature." Science provides an approach to develop an understanding of why something is occurring and then to be able to test against that understanding. We deduce based on observation and measurement to develop a hypothesis and then continue to compare what we see against the hypothesis. Good science is the foundation of effective process improvement. Good process improvement is simply a requirement for survival in today's dynamic business environment.

The characters introduced in chapters 1 – 4:

Alex Rogo – the protagonist, manufacturing plant manager

Bill Peach – command and control division vice-president

Fran – Alex’s secretary

Bob Donovan – Production Manager

Julie Rogo – Alex Rogo’s wife

Chapter 1:

In the first chapter we are immediately introduced to Alex Rogo, plant manager, and his boss, Bill Peach. Our protagonist is immediately thrown into a crisis revolving around a late shipment that his boss has arrived, unannounced, at the plant to expedite. Bill Peach begins by interfering with plant operations, which leads to a critical mechanic quitting and to a broken and potentially sabotaged machine. Remember back to Kotter's eight-stage model for significant change in his seminal book, Leading Change (the last book featured in our Re-Read Saturday feature). The first step in the model was to establish a sense of urgency. Goldratt and Cox use chapter one to establish the proverbial burning platform. The plant is losing money, orders are shipping late and Peach has delivered an ultimatum that unless the plant is turned around, it will be closed.

Chapter 2:

The immediate crisis is surmounted; the order is completed and shipped. The plant focused on getting a single order done and shipped. Bob Donovan noted that everyone pulled together, behavior that the Agile community would call "swarming." A thread running through the chapter is that the plant has aggressively pursued cost savings and increased efficiency. This thread foreshadows a recognition that measuring the cost savings or efficiency improvement of any individual step might not provide the results the organization expects. Rogo reflects at one point that he has the best people and the best technology, therefore he must be a poor manager.

Chapter 3:

This chapter develops the corporate culture by exposing the fixation on efficiency and cost control as the basis for measurement and comparison. The whole division is on the chopping block and an endemic atmosphere of fear has taken hold. For example, Rogo's and Peach's relationship, which in the past was marked by camaraderie, now reflects the fear and animosity that have been generated. Fear hinders the collaboration and innovation that will be needed to save both the plant and the division. W. Edwards Deming, in his famous 14 principles, explicitly stated "drive out fear, so that everyone may work effectively for the company." My interpretation of chapter 3 is that fear, and the tools that generate fear, will need to be addressed for the division to survive.

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions; however, performance is falling even further behind and fear has become a central feature of the corporate culture.

Docker has been around for more than a year already, and there are a lot of container platforms popping up. In this series of blogposts I will explore these platforms and share some insights. This blogpost is about StackEngine.

TL;DR: StackEngine is (for now) just a nice frontend to the Docker binary. Nothing...

The concepts of product and project are common perspectives in software development organizations. A simple definition for each is that a product is the thing that is delivered – software, an app or an interface. A project reflects the activities needed to develop the product or a feature of the product. Products often have roadmaps that define the path they will follow as they evolve. I was recently shown a roadmap for an appraisal tool a colleague markets that showed a number of new features planned for later this year and areas that would be addressed in the next few years. The map became less precise the further the time horizon was pushed out. Projects, releases and sprints are typically significantly more granular, with specific plans for the work currently being developed. Different perspectives generate several different behaviors.

Roadmap versus plan: The time-boxed nature of a project or a sprint (both have a stated beginning and end) tends to generate a focus on planning and executing specific activities and tasks. For example, in Scrum sprint planning, teams accept and commit to the user stories they will deliver. There is often a many-to-one relationship between stories and features that would be recognized by end-users or customers. Product planning tends to focus on the features and architectures that meet the needs of the user community. Projects foster short-term rather than long-term focus. Short-term focus can lead to architectural trade-offs or technical shortcuts to meet specific dates that will have negative implications in the future. The product owner is often the bridge between the project and product perspectives, acting as an arbiter. The product owner helps the team make decisions that could have long-term implications and provides the whole team with an understanding of the roadmap. Teams without (or with limited) access to a product owner and product roadmap can only focus on the time horizon they know.

Needs versus Constraints: Projects are often described as the interaction between the triple constraints of time, budget and scope. Sprints are no different: cadence – time; fixed team size – budget; and committed stories – scope. There is always a natural tension between the business/product owner and the development team. In organizations with a project perspective, product owners and other business stakeholders typically have a rational economic incentive to pressure teams to commit to more than can reasonably be accomplished in any specific project. Who knows when the next project will be funded? This behavior is often illustrated when the business indicates that ALL requirements it has identified are critical, or when concepts like a minimum viable product are met with hostility. Other examples of this behavior can be seen in organizations that adopt pseudo-Agile. In pseudo-Agile, backlogs are created and an overall due date is generated for all the stories before a team even understands its capacity to deliver. Shortcuts, technical debt and lower customer satisfaction are often the results of this type of perspective. Organizations with a product perspective generally understand that a project or release will follow the current project, reducing the need to get as large a bite at the apple as possible (having tried this as a child, I can tell you the choking risk is real).

Measuring Efficiency/Cost versus Revenue: Organizations with a product perspective tend to take a wider view of what needs to be measured. Books such as The Goal (by Goldratt and Cox) make a passionate argument for the measurement of overall revenue. The thought is that any process change or any system enhancement needs to be focused on optimizing the big picture rather than over-optimizing steps that don't translate to the goals of the organization. Focusing on delivering projects more efficiently, which is the classic IT measurement, does not make sense if what is being done does not translate to delivering value. Measuring the impact of a product roadmap (e.g. revenue, sales, ROI) leads organizations to a product view of work, which lays stories and features out as a portfolio of work.

These dichotomies represent how differences in project and product perspectives generate different behaviors. Both perspectives are important, depending on the role a person is playing in an organization. For example, a sprint team must have a project perspective so it can commit to work within a time box. That same team needs a product view when making the day-to-day trade-offs that all teams face, or technical debt may overtake its ability to deliver. Product owners are often the bridge between the project and product perspectives; however, the best teams understand and leverage both.

Scala has a lot of different options for handling and reporting errors, which can make it hard to decide which one is best suited for your situation. In Scala and other functional programming languages it is common to make the errors that can occur explicit in the function's signature (i.e. return type), in contrast with the common practice in other programming languages where either special values are used (-1 for a failed lookup, anyone?) or an exception is thrown.

Let's go through the main options you have as a Scala developer and see when to use what!

Option
A special type of error that can occur is the absence of some value. For example when looking up a value in a database or a List you can use the find method. When implementing this in Java the common solution (at least until Java 7) would be to return null when a value cannot be found or to throw some version of the NotFound exception. In Scala you will typically use the Option[T] type, returning Some(value) when the value is found and None when the value is absent.

So instead of having to look at the Javadoc or Scaladoc you only need to look at the type of the function to know how a missing value is represented. Moreover you don't need to litter your code with null checks or try/catch blocks.

Another use case is in parsing input data: user input, JSON, XML etc. Instead of throwing an exception for invalid input you simply return a None to indicate parsing failed. The disadvantage of using Option for this situation is that you hide the type of error from the user of your function which, depending on the use-case, may or may not be a problem. If that information is important, keep on reading the next sections.
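As an illustration, here is a minimal sketch of the validators used in the snippet below; the names (validateName, validateAge, Person) and the validation rules are assumptions for this example:

```scala
// Hypothetical helpers; the rules are made up for illustration.
case class Person(name: String, age: Int)

def validateName(input: String): Option[String] = {
  val trimmed = input.trim
  if (trimmed.nonEmpty) Some(trimmed) else None
}

def validateAge(input: String): Option[Int] =
  scala.util.Try(input.trim.toInt).toOption.filter(age => age >= 0 && age <= 130)
```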

// Use a default value
validateName(inputName).getOrElse("Default name")
// Apply some other function to the result
validateName(inputName).map(_.toUpperCase)
// Combine with other validations, short-circuiting on the first error
// returning a new Option[Person]
for {
  name <- validateName(inputName)
  age <- validateAge(inputAge)
} yield Person(name, age)

Either
Option is nice to indicate failure, but if you need to provide more information about the failure, Option is not powerful enough. In that case Either[L,R] can be used. It has two implementations, Left and Right. Both can wrap a custom type, respectively type L and type R. By convention Right is "right", so it contains the successful result and Left contains the error. Rewriting the validateName method to return an error message would give:
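A sketch of what that could look like (the error message is an assumption); note that before Scala 2.12 Either was unbiased, so you had to pick a projection before mapping:

```scala
// Left carries the error message, Right the validated value.
def validateName(input: String): Either[String, String] = {
  val trimmed = input.trim
  if (trimmed.nonEmpty) Right(trimmed)
  else Left("Name must not be empty")
}

// Selecting the right projection before applying map:
val upper = validateName("Jane").right.map(_.toUpperCase)
```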

This right projection is kind of clumsy and can lead to convoluted compiler error messages in for expressions. See for example the excellent and detailed discussion of the Either type in The Neophyte's Guide to Scala, Part 7. Due to these issues several alternative implementations of a kind of Either have been created; the best known are the \/ type in Scalaz and the Or type in Scalactic. Both avoid the projection issues of the Scala Either and, at the same time, add additional functionality for aggregating multiple validation errors into a single result type.

Try

Try[T] is similar to Either. It also has two cases: Success[T] for the successful case, wrapping the result value of type T, and Failure for the failure case, which can only contain a Throwable. The main difference thus is that the failure can only be of type Throwable. You can use it instead of a try/catch block to postpone exception handling. Another way to look at it is to consider it as Scala's version of checked exceptions.
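To see the difference in the signatures, compare a throwing variant with a Try-based one (the function names are made up for this example):

```scala
import scala.util.{Try, Success, Failure}

// Callers only learn about the NumberFormatException from documentation.
def parseIntUnsafe(s: String): Int = s.toInt

// The possibility of failure is explicit in the return type.
def parseIntSafe(s: String): Try[Int] = Try(s.toInt)
```

A caller of parseIntSafe has to pattern match on Success or Failure (or use combinators like getOrElse), so the failure case cannot be silently ignored.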

The first function needs documentation describing that an exception can be thrown. The second function describes in its signature what can be expected and requires the user of the function to take the failure case into account. Try is typically used when exceptions need to be propagated; if the exception itself is not needed, prefer one of the other options discussed.

Note that Try is not needed when working with Futures! Futures combine asynchronous processing with the Exception handling capabilities of Try! See also Try is free in the Future.

Exceptions
Since Scala runs on the JVM all low-level error handling is still based on exceptions. In Scala you rarely see usage of exceptions and they are typically only used as a last resort. More common is to convert them to any of the types mentioned above. Also note that, contrary to Java, all exceptions in Scala are unchecked. Throwing an exception will break your functional composition and probably result in unexpected behaviour for the caller of your function. So it should be reserved as a method of last resort, for when the other options don't make sense.
If you are on the receiving end of the exceptions you need to catch them. In Scala syntax:
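A minimal sketch, using NonFatal so only recoverable exceptions are handled:

```scala
import scala.util.control.NonFatal

// "not-a-number".toInt throws a NumberFormatException, which NonFatal
// matches, so we recover with a default value.
val parsed: Int =
  try {
    "not-a-number".toInt
  } catch {
    case NonFatal(e) => 0  // recover from non-fatal exceptions only
  }
```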

What is often done wrong in Scala is that all Throwables are caught, including the Java system errors. You should never catch Errors, because they indicate a critical system error like OutOfMemoryError. So never do this:
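A sketch of the anti-pattern (the function is made up for illustration):

```scala
// BAD: this catch-all also traps fatal JVM errors such as
// OutOfMemoryError and StackOverflowError.
def parseOrZero(s: String): Int =
  try s.toInt
  catch {
    case _: Throwable => 0  // swallows fatal Errors too -- don't do this
  }
```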

Finally remember you can always convert an exception into a Try as discussed in the previous section.

TL;DR

Option[T], use it when a value can be absent or some validation can fail and you don't care about the exact cause. Typically in data retrieval and validation logic.

Either[L,R], similar use case as Option but when you do need to provide some information about the error.

Try[T], use when something Exceptional can happen that you cannot handle in the function. This, in general, excludes validation logic and data retrieval failures but can be used to report unexpected failures.

Exceptions, use only as a last resort. When catching exceptions use the facility methods Scala provides and never catch { case _ => }; instead use catch { case NonFatal(_) => }.

One final piece of advice: read through the Scaladoc for all the types discussed here. There are plenty of useful combinators available that are worth using.

A product is something that is constructed for sale or for trade for value. In the software world that product is often software code or a service to interface users to software. Typically a project or set of projects is required to build and maintain an IT product. If we simplify and combine the two concepts we could define a product as what is delivered and a project as the vehicle to deliver the product. The idea of a product and a project are related, but different concepts. There are several differences in common attributes:

Agile pushes organizations to take more of a product than a project perspective; however, arguably parts of both can be found as the product evolves. A sprint (or even a release) will include a subset of the features that are included on a product backlog. The sprint or release is a representation of the project perspective. As time progresses, the product backlog evolves as customer or user needs change (the product perspective). In the long run the product perspective drives the direction of the organization. For example, a friend who owns a small firm that delivers software services maintains a single product backlog. In classic fashion the items near the top of the backlog have a higher priority and are more granular. The backlog includes some ideas, new services and features that won't be addressed for one or more years. The owner acts as the product owner and at a high level sets the priorities with input from her staff once a quarter, based on progress, budget and market forces. The Scrum master and team are focused on delivering value during every sprint, while the product owner in this case is focused on building greater business capabilities.

IT in general, and software development specifically, have historically viewed work as a series of projects, sometimes interlocked into larger programs. A project has a beginning and an end and delivers some agreed-upon scope. When a project is complete a team moves on to the next job. A simple and rational behavior for a product owner who might not know when the next project impacting his product might occur would be to ask for the moon and to pitch a fit when it isn't delivered. Because the product owner and the team are taking a project perspective, it is impossible to count on work continuing, forcing an all-or-nothing attitude. That attitude puts pressure on a team to accept more requirements than it can deliver, leading to an increased possibility of disappointment, lower quality and failure. Having either a product or project perspective will drive how everyone involved in delivering functionality interacts and behaves.

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

Having a “definition of done” has become a near-standard thing for Scrum teams. The definition of done (often called a “DoD”) establishes what must be true of each product backlog item for that item to be done.

A typical DoD would be something similar to:

The code is well written. (That is, we’re happy with it and don’t feel like it immediately needs to be rewritten.)

The code is checked in. (Kind of an “of course” statement, but still worth calling out.)

The feature the code implements has been documented in any end-user documentation such as manuals or help systems.

Many teams will improve their Definition of Done over time. For example, a team using the example above might not be able to do so much automated testing when first starting out. But, hopefully, they would add that to their definition of done over time.

All this is sufficient for the vast majority of teams. But I’ve worked on a few projects whose teams benefitted from having multiple definitions of done. A team takes a product backlog item to definition of done Level 1 in a first sprint, to definition of done Level 2 in a subsequent sprint, and so on.

I am most definitely not saying they code something in a first sprint and test it in a second sprint. “Done” still means tested, but it may mean tested to different, but appropriate, levels. Let’s look at an example.

An Example from a Game Studio

One thing I’ve really enjoyed in working with game studios is that they understand that not all work will make it into the finished game. Sometimes, for example, a game team experiments with a new character trying to make the character fun. If they can’t, the character isn’t added to the game.

So it would be extremely wasteful for a game team to have a definition of done requiring all art to be perfect, all audio to be recorded, and refresh rates to be high when they are merely trying to decide if a new character is fun. The team should do just enough to answer that question.

In a number of game studios, this has led to a four-level definition of done:

Done, Level 1 (D1) means the new feature works and decisions can be made. For animation, this was often “the character is animated in a white room.” It’s “shippable” to friendly users (often internal) who can comment on whether the new functionality meets its objective.

D2: The thing is integrated into the game and users can play it / interact with it.

D3: The feature is truly shippable. It’s good enough to include in a major public release. The team may not want to release it yet—they may first want to improve the frame rate, add some polygons, brighten colors, and so on. But the feature could be shipped with this feature in this state if necessary.

D4: The feature is tuned, polished, and everyone loves it. There’s nothing the team would change. A typical public release will include a mix of D4 and D3 items. There will always be areas the team wants to go back to and further improve. But, time intrudes and they ship the product. So D3 is totally shippable. You’re not embarrassed by D3 and only your hardest core users will notice the ways it could be better. D4 rocks.

Are Multiple Definitions of Done Right for You?

Very likely not. Most teams do quite well with a single definition of done. But the ideas above extend beyond just game development. I’ve used the same approach in a variety of other application domains, notably hardware development. In that case, the teams involved were developing dozens of new gadgets for an integrated suite of home automation products.

They used these definitions:

D1: The new hardware works on a test bench in the office.

D2: The new hardware is integrated with the other products in the suite.

D3: The new hardware is installed and running in at least one model house used for this type of beta testing.

D4: The product is fully ready for sale (e.g., it meets all requirements for UL approval).

Within this company, there were dozens of components in development at all times, and some components could be found at each level of doneness. For example, a product to raise and lower window shades could be in testing at the model home, while a newer component to open and close doors had just been started and was only working on a test bench of one developer.

Most projects will never need this. If you do think it’s appropriate for you, before trying it, really be sure you’re not using the technique as an excuse to skip things like testing.

Each level should exist as a way of making decisions about the product. A good test of that is to see if some features are dropped at each level. It is a good sign, for example, that sometimes a feature reaches a certain doneness level, and the product owner decides the feature is no longer wanted due to perhaps its cost or delivery time.

At my current client, we have a large AngularJS application that is configured to show a full-page error whenever one of the $http requests ends up in error. This is implemented with an error interceptor, as you would expect. However, we're also using some calculation-intense resources that happen to time out once in a while. This combination is tricky: a user triggers a resource request when navigating to a certain page, navigates to a second page and suddenly ends up with an error message, as the request from the first page triggered a timeout error. This is a particularly unpleasant side effect that I'm going to address in a generic way in this post.

There are of course multiple solutions to this problem. We could create a more resilient implementation in the backend that will not time out, but accepts retries. We could change the full-page error into something less 'in your face' (but you would still get some out-of-place error notification). For this post I'm going to fix it using a different approach: cancel any running requests when a user switches to a different location (the route part of the URL). This makes sense; your browser does the same when navigating from one page to another, so why not mimic this behaviour in your Angular app?

I've created a pretty verbose implementation to explain how to do this. At the end of this post, you'll find a link to the code as a packaged bower component that can be dropped in any Angular 1.2+ app.

To cancel a running request, Angular does not offer that many options. Under the hood, there are some places where you can hook into, but that won't be necessary. If we look at the $http usage documentation, the timeout property is mentioned and it accepts a promise to abort the underlying call. Perfect! If we set a promise on all created requests, and abort these at once when the user navigates to another page, we're (probably) all set.
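A sketch of what such an interceptor could look like (the module name, interceptor name and wiring here are assumptions based on this post; the packaged component may differ):

```javascript
// Hypothetical interceptor: attach a cancellation promise to each request.
angular.module('app').factory('cancelOnRouteChangeInterceptor', function (HttpPendingRequestsService) {
  return {
    request: function (config) {
      // Respect an explicitly set timeout and the opt-out flag.
      if (!config.timeout && !config.noCancelOnRouteChange) {
        config.timeout = HttpPendingRequestsService.newTimeout();
      }
      return config;
    }
  };
}).config(function ($httpProvider) {
  $httpProvider.interceptors.push('cancelOnRouteChangeInterceptor');
});
```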

The interceptor will not overwrite the timeout property when it is explicitly set. Also, if the noCancelOnRouteChange option is set to true, the request won't be cancelled. For better separation of concerns, I've created a new service (the HttpPendingRequestsService) that hands out new timeout promises and stores references to them.
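Stripped of Angular specifics (native Promise stands in for $q here, purely for illustration), the bookkeeping inside such a service might look like:

```javascript
// Minimal sketch of the pending-requests bookkeeping; not the actual
// HttpPendingRequestsService implementation.
function createPendingRequests() {
  var pending = [];  // { promise, resolve } pairs for outstanding requests

  return {
    // Hand out a promise to be used as the $http timeout value.
    newTimeout: function () {
      var entry = {};
      entry.promise = new Promise(function (resolve) { entry.resolve = resolve; });
      pending.push(entry);
      return entry.promise;
    },
    // Resolve every stored promise, aborting the associated requests.
    cancelAll: function () {
      pending.forEach(function (entry) {
        // Mark the promise so a responseError handler can tell a global
        // cancellation apart from a real failure.
        entry.promise.isGloballyCancelled = true;
        entry.resolve();
      });
      pending = [];
    }
  };
}
```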

So, this service creates new timeout promises that are stored in an array. When the cancelAll function is called, all timeout promises are resolved (thus aborting all requests that were configured with the promise) and the array is cleared. By setting the isGloballyCancelled property on the promise object, a response promise method can check whether it was cancelled or another exception has occurred. I'll come back to that one in a minute.

Now we hook up the interceptor and call the cancelAll function at a sensible moment. There are several events triggered on the root scope that are good hook candidates. Eventually I settled for $locationChangeSuccess. It is only fired when the location change is a success (hence the name) and not cancelled by any other event listener.

When writing tests for this setup, I found that the $locationChangeSuccess event is triggered at the start of each test, even though the location did not change yet. To circumvent this situation, the function does a simple difference check.

Another problem popped up during testing. When the request is cancelled, Angular creates an empty error response, which in our case still triggers the full-page error. We need to catch and handle those error responses. We can simply add a responseError function in our existing interceptor. And remember the special isGloballyCancelled property we set on the promise? That's the way to distinguish between cancelled and other responses.

The responseError function must return a promise that normally re-throws the response as rejected. However, that's not what we want: neither a success nor a failure callback should be called. We simply return a never-resolving promise for all cancelled requests to get the behaviour we want.

That's all there is to it! To make it easy to reuse this functionality in your Angular application, I've packaged this module as a bower component that is fully tested. You can check the module out on this GitHub repo.

Once upon a time I was asked to help out a software product company. The management briefing went something like this: "We need you to increase productivity. The guys in development seem to be unable to ship anything! And if they do ship something it's only a fraction of what we expected."

And so the story begins. Now there are many ways we can improve a team's outcome and its output (the first matters more), but it always starts with observing what they do today and trying to figure out why.

It turns out that requests from the business were treated like a good wine, and were allowed to "age" in the oak barrel that was called Jira. Not so much to add flavour in the form of details, requirements, designs, non-functional requirements or acceptance criteria, but mainly to see if the priority of the request would remain stable over a period of time.

In the days that followed I participated in the "Change Control Board" and saw what he meant. Management would change priorities on the fly and make swift decisions on requirements that would take weeks to implement. To stay in wine-making terms, wine was poured in and out of the barrels at such a rate that it bore more resemblance to a blender than to the art of wine making.

Though management was happy to learn I had unearthed the root cause of their problem, they were less pleased to learn that they themselves were responsible. The Agile world created the Product Owner role for this, and it turned out that this is a hat that can only be worn by a single person.

Once we funnelled all the requests through a single person, responsible both for the success of the product and for the development, we saw a big change. Not only did the business get a reliable sparring partner, but the development team had a single voice when it came to setting priorities. Once the team started finishing what it started, we began shipping at regular intervals, with features that we all had committed to.

Of course it did not take away the dynamics of the business, but it allowed us to deliver, and become reliable in how and when we responded to change. Perhaps not the most aged wine, but enough to delight our customers and learn what we should put in our barrel for the next round.

This week's Software Process and Measurement Cast is our magazine with three features. We begin with Jo Ann Sweeney's Explaining Change column. In this column Jo Ann tackles the concepts of messages and themes. I consider this the core of communication. Visit Jo Ann's website at http://www.sweeneycomms.com and let her know what you think of her column.

The middle segment is our essay on commitment. The making and keeping of commitments are core components of both professional behavior and Agile. The simple definition of a commitment is a promise to perform. Whether Agile or Waterfall, commitments are used to manage software projects. Commitments drive the behavior of individuals, teams and organizations. Commitments are powerful!

We wrap this week's podcast up with a new column from the Software Sensei, Kim Pries. In this installment Kim discusses software HALT testing. HALT stands for highly accelerated life test. The goal is to find defects, faults and things that go bump in the night in hours or days rather than waiting for weeks, months or years. Whether you are testing software, hardware or some combination, this is a concept you need to have in your portfolio.

Call to action!

Can you tell a friend about the podcast? Even better, show them how you listen to the Software Process and Measurement Cast and subscribe them! Send me the name of the person you subscribed and I will give both you and the horde you have converted to listeners a call out on the show.

Re-Read Saturday News

The next book in our Re-Read Saturday feature will be Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement. Originally published in 1984, it has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. On February 21st we will begin the re-read on the Software Process and Measurement Blog.

Note: If you don't have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.

In the next Software Process and Measurement Cast we will feature our interview with Anthony Mersino, author of Emotional Intelligence for Project Managers and the newly published Agile Project Management. Anthony and I talked about Agile, coaching and organizational change. A wide-ranging interview that will help any leader raise the bar!