On Agile Leadership

Agile leadership is different from traditional project management. Self-organizing teams, flat hierarchies, fast response times and frequent changes require a different style of working with people, and also new techniques.

In this post I want to talk about company culture. Every company has a culture. Some are outstanding, some are perhaps close to criminal, but I guess most are just mediocre. How do you know where on this spectrum between star and rubbish your company’s culture is located? I think a good way of telling is whether you are given the opportunity and the freedom to improve the way you work. Are you working on projects that lead to a significant improvement of the processes you use and, as a result, significantly improved benefits for your customers?

Let’s look at a made-up example. A team is working on a product that is a combination of hardware, firmware, software and services. The product has not been introduced to the market yet. However, you know that you need to finish the project by the end of the month. “Finish” does not mean that all the features you thought were must-haves are complete. Instead it means a cross-functional effort between product management, hardware engineers, software developers, marketing, sales and probably a whole raft of others. In that collaboration the team will understand that it may not be able to ship every single feature, but it will also understand the priority of each feature, as each feature has a different business value to the company and to the customer. Let’s assume that in this example a demo in the first week of the next month is highly likely to lead to a significant order.

With the wrong culture the team would do its 8-to-5 job and then go home. No creativity would be invested and you may even see the occasional finger-pointing. Progress would be slow. With a different culture the team would band together working towards that objective. A sense of excitement would be visible within the team. The team would collaborate, think beyond the boundaries of their “official” roles (e.g. developer) and be very creative in finding simpler and faster solutions that would allow them to stitch together a product that is good enough for the purpose. They would throw in extra hours without needing to be asked or instructed. The team would do whatever it takes to achieve the objective. They would know that they can make decisions, and they would get the tools and the support to get the job done.

The leadership in these two cases is quite different. The former very likely reeks of micromanagement and bureaucracy. Maybe getting new hardware and software is close to impossible and always requires a lengthy ordering and procurement process. The leadership in the latter is most likely more hands-off: management by exception or management by objective. If the team needs a new tool, hardware or software, the decision can be made within hours if not minutes rather than weeks or months. The leadership would embody trust towards the individuals in the team, willing to take the risk that things can go wrong when you push the envelope.

How do you know that you are in the right culture? First of all you need to decide for yourself what company culture you want to work in. Then ask yourself a few very simple questions that I believe are indicative of the actual culture in your company. Questions include: Am I working on a project that challenges me, the team and the approach we take? Is the project leading to improvements of the product, the processes we use, learning for myself? How do we interact with customers: do we have direct interaction, or are there “men in the middle”? How long does it take to get new tools like hardware or software? Is the duration measured in hours and days rather than weeks or months? Do we see new tools arriving in the team on a regular basis, e.g. at least once per month? What is the attrition rate in the team? Are people replaced immediately? How easy is it to replace people?

I’m sure there are plenty of other indicators that help. The important thing is that you do not judge your leadership by what they say but by what they do.

I just entered a new post on an internal Yammer network. Writing the post took some time as it was one of the longer ones. When I tried to post it, Yammer told me that it (Yammer) had been updated and needed to reload.

Result: The post was gone.

It appears as if Yammer is somewhat behind technologically. It should not lose content. Gmail saves drafts regularly, and I have never lost an email that I hadn’t yet sent.

Recommendation: Use a text editor to create the post. Once you are good to go, reload Yammer, copy the text into the browser window, and then post. This avoids the duplicated effort of writing the post twice.

Last week I wrote about “Eating Your Own Dog Food” as a mentor at an incubator. We have taken the next step and created an initial business model.

There are different options for creating and representing business models. At the eCentre we use business model canvases. This idea originates from an initiative that resulted in a book titled “Business Model Generation”. Alexander Osterwalder and Yves Pigneur wrote it with hundreds of contributors. Online tools are available from several places but a good source is the site for the book.

In the meantime some variations exist. As with other topics there are defenders of the “only truth” who believe that changing the original idea destroys it. That type of discussion reminds me a little of the debate about which programming language is best. I think it is important that a tool works for you. And if it doesn’t, try something else.

In our case we decided to try out a variation called Lean Canvas. Some of its elements differ from the original canvas created by Osterwalder et al. I want to mention only the differences that are most interesting for our case: Problem, Existing Alternatives and Solution.

As recommended we created the initial canvas in just about 20 minutes. In the past week we then had a more detailed discussion about the canvas and refined a few items.

The basic idea is that everything you put into the canvas is a hypothesis about some aspect of your business model. You then validate each of these hypotheses. Each hypothesis is either upheld or turns out to be wrong. How do we find out?

As part of the customer discovery activity, Steve Blank suggests that none of the facts you need exist within the building (see “Four Steps to the Epiphany”). You have to get out of the office and talk to people. Who would you talk to? Your canvas helps you, as you will have identified target “Customer Segments”. That’s who you talk to.

So, in our case we have an initial business model canvas. Now we need to validate each hypothesis in it. We are prepared to revisit the canvas as often as needed based on the feedback we get. We have started to generate a list of people who we want to speak with. This will take a few weeks.

Novopay is a payroll system for school and school support staff in New Zealand. Its development started in 2005, was planned to cost 30 million NZ dollars and to take 2 years. As of 1 February 2013 staff were owed an estimated 12 million NZ dollars due to errors in the software. As of this writing it is still possible that the system may be switched off.

While I can’t speak as an expert on this particular project, the newspaper articles reveal some interesting facts. For example, the first emergency meeting was apparently conducted only after 5 years; note that this project was planned to take 2 years. The system was signed off based on the recommendations of four advisers, one of them PwC. Then it went live despite almost 6,000 payslip errors.

One actual result of the live system was a payslip sent to a caretaker who was awarded a 102 million dollar holiday pay packet. Unfortunately for him, he won’t be able to keep the overpayment.

The deputy education secretary reportedly said in April 2012 that “there was a need to run a total of 270 test scripts”. Frankly, I’m hoping that they had more than 270 test scripts and I hope they were all automated.

More details can be found in articles here and here. I’m mentioning this project because quite obviously something went terribly wrong. Is there something we can learn from this? Can we do better? Are we doing better?

I’m a mentor at the eCentre in Auckland, New Zealand. The eCentre is an incubator that follows the concepts of the “Lean Startup” as described by Eric Ries who in turn was a student of Steve Blank, author of “The Four Steps to the Epiphany”.

For this blog I am going to describe the journey of one of the startups. The intention is to demonstrate agile principles at work.

The startup’s name is AgileCore. As of this writing it doesn’t even have a proper web site; only the domain has been reserved. AgileCore’s idea is to provide technologies for cloud-based, scalable web applications that can be developed very quickly and then released many times each day.

Where does the idea come from? There were two independent events that coincided a couple of months ago.

One event was that while mentoring startups I discovered that their minimum viable product (MVP), some call it a prototype, contained functionality that wasn’t core to their idea. Quite a few of those startups had similar requirements. Examples include sign-up, profile management, social network integration, support for mobile devices and more. At some point I started to assemble the same off-the-shelf technologies for some of them, and I felt as if I was duplicating work instead of having something that they could share.

At around the same time a group of entrepreneurs had a discussion between themselves about how they would be building their first prototype (MVP). They found that they had certain requirements in common and came up with a list similar to the above.

So there was a problem and there was a solution, two important ingredients for a new business idea. As a consequence we started an initiative which we called AgileCore.

Of course we are at the very beginning of our journey and the path forward will be long and tough. It is possible that AgileCore joins all the other failed startups. That would be a success, too, as a failure is nothing else than the discovery of a path that didn’t work (see also Thomas A. Edison and his over one thousand attempts to create a working light bulb).

On the other hand, for AgileCore we intend to follow the principles we teach in our Sprint program. So we will be eating our own dog food, if you like. And that is what I want to blog about once in a while.

A few days ago a friend described to me how their company was introducing a new development process. To go with the flow they decided to introduce agile methodologies at the same time. To make sure that everyone worked on the same basis they are calling their new process “Standard Agile Process”.

I think that is a contradiction in itself. Either it is standard or it is agile, but not both. Let me explain.

In my view, at the very core of agile methodologies is the ability to adapt to the environment. Many factors can influence the decision, including people, experience, customer base, technologies, product, target market and time zone differences. Since 1999 I haven’t found two teams who used the same agile approach, not even within the same company.

Given sufficient autonomy and authority a team will adapt as well as they can. Is it reasonable to expect that any two teams (regardless of being in the same organization or not) start their journey at the same time, progress at the same speed, learn the same things at the same time, have the same set of experiences and end up making the same choices? Evolution is never the same if the factors influencing it are different or if the timeline is different.

It’s even more complicated, or easier, depending on your viewpoint. One of the teams I’m working with is using multiple processes, all based on agile principles, and no two are the same. Factors that have influenced their choices are type of work, product and customer. And each of the processes (at least three) is evolving at a different speed following a different set of changes. Changes are applied as the team identifies the need for them, if needed multiple times a day. Occasionally a process stays the same for a few weeks.

So when you see or hear the term “Standard Agile Process”: If you are impacted by it, keep the above in mind. Don’t switch off your brain. Think for yourself. Speak up.

Or just have a good laugh in particular if your leadership team allows you and your team to be truly agile. You may or may not believe in Darwinism. But history suggests that those who adapt faster and better not only have a better chance of survival. They also tend to have a better life, i.e. are more successful and have more fun.

I just spoke to a friend who had applied a few weeks ago for a leadership role requiring experience with agile development methodologies. She submitted a resume and also spoke to the recruiter. In the next step she was then asked to do some online tests that would check for her personality and intelligence.

Based on the test results she was told that she was no longer among the remaining candidates. From what I know of her, the results she has delivered in her work have always been outstanding. Her experience with agile methodologies is fantastic.

After I heard this story I was wondering: If you want to fill a leadership role in software engineering which of the following criteria is a better indicator for a good fit:

Personality tests without even speaking to the candidate?

Concrete results over many years in various roles and various industries supported by data such as shipped product, improved quality, more features, shorter release cycles, increased customer base, lower development costs?

Of course, I am not an expert in personality tests, so I am probably ignorant of their value. In my own recruiting process I have never used personality tests; instead I talk to the candidate myself and have several of my team members talk to the candidate as well. This approach has always worked.

Agile approaches are very much about adapting and using short increments when dealing with uncertainty. Starting a new company comes with a lot of uncertainty. Agile principles can help reduce risk and improve the odds of success.

Eric Ries is the inventor/initiator of the “Lean Startup” movement. His concept is to incrementally develop your customers and your business. By running experiments, some people also call them spikes, you learn a lot about the business idea you are working on. By using small increments you avoid spending months or years building a product or offering a service that in the end is fantastic but nobody is willing to spend money on. Eric Ries helps you de-risk your business idea.

A complementary technique is customer development, a term coined by Steve Blank. While product development is important (your product can also be a service), it is equally important to develop your customer base. In his book “Four Steps to the Epiphany” he describes this process in detail. His book is a workbook, so be prepared to do homework.

Why am I mentioning these two? The work of both of them is based on agile principles. Therefore, if you are considering testing one of your business ideas, the work of these two authors should be part of your preparation. And neither is limited to new companies. If you are tasked with building a brand-new product in your company, you are basically running a start-up. The only difference is that you run it in the context of an existing company, which can be beneficial (e.g. financial backing) or a hindrance (e.g. bureaucracy).

If you find yourself in a project where you need to deliver and maintain a large number of customized versions of an otherwise standard product, you may want to consider designing your processes in support of mass customization.

There are a number of prerequisites that you’ll need to have in place. For one, you need a base product that you want to customize. That base product needs a mechanism that allows customizing it. For example, you could design it so it supports plug-ins.

Next you need a fully automated process for building the base product plus all plug-ins. Ideally you have a continuous integration solution and an extensive automated and virtualized test environment. The latter allows automatic instantiation of different target environments and testing of various product configurations in those environments.

One option for mass customization is then to create a custom package for each individual customer, e.g. by creating an installer containing the base product and the plug-ins for just that customer. While the installer might be a good option for a shrink-wrapped product, continuous deployment in a hosted environment will typically benefit from a different approach. For example, instead of packaging different installers, you might have a different deployment for each customer. Only the plug-ins for that particular customer would be deployed in their environment.

You can take this one step further, for example by providing a web site where your customers can select the base product and the plug-ins they want. If they are self-hosted, they would receive the custom installer. If they are hosted, their deployment in the hosting environment would be maintained accordingly. This web site could include integration with payment systems or with your internal accounting system, e.g. to check whether a maintenance payment was received.

Alternatively, instead of having custom installers or custom deployments, the availability of plug-ins can be controlled through licensing in the deployed product. All plug-ins are installed, but only the licensed ones are loaded and available.

Of course there are a number of other factors that must be considered, e.g. how to design the process and the product so they can be upgraded without downtime. Once you have all this in place, though, you will enjoy a scalable solution that allows mass customization.
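The licensing variant can be sketched in a few lines. This is a minimal illustration, not a real licensing system; the plug-in names and the idea of a simple list of licensed names are made up for the example:

```python
# Minimal sketch of license-controlled plug-in activation: all plug-ins
# are installed, but only the licensed ones are loaded. Plug-in names
# and the license representation are hypothetical.

def active_plugins(installed, licensed):
    """Return the plug-ins to load: those both installed and licensed."""
    return sorted(set(installed) & set(licensed))

installed = ["reporting", "sso", "payroll-export", "audit-log"]
licensed = ["reporting", "audit-log"]

print(active_plugins(installed, licensed))  # ['audit-log', 'reporting']
```

In a real product the licensed list would come from something like a signed license file and each plug-in would be loaded dynamically, but the gating logic can stay this simple.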

In a few previous posts (here, here and here) I described how feature branches can be used to reduce project risk. As you move along there are further ways to simplify the use of feature branches.

For example, I mentioned release management branches. Release management branches can be used for the final Quality Assurance (QA) work when preparing a release. Having a separate branch for this has the benefit that trunk (or master or main) doesn’t have to be locked down until the release day. The release management branch is controlled: all changes accepted into it are carefully reviewed and specific to the preparation of the release. The rest of the team, that is everybody not immediately involved in release management, can continue to commit new work into trunk.

That comes at a price, though. While you work on the release management branch, trunk still requires work, e.g. merging new features, or you neglect trunk until the release is out. Even better is reducing or even eliminating the need for a release management branch altogether.

As a first step you could reduce the length of release management branches, e.g. from a week to a couple of days. Or you go all the way and remove the release management branch altogether. This is feasible if you release out of trunk and have tight controls over what goes into trunk (“no junk in the trunk”). For example, you could give your QA person the authority to decide when a branch is merged back into trunk. The equivalent in Git would be a pull request, where the QA person decides which pull requests to accept into master, and when. A review on the branch could be a prerequisite; for example, QA could verify whether enough automated tests were written by trying to break the system.

By giving responsibility for trunk to QA and by minimizing the required release management tasks, releases can easily be delivered out of trunk (or master). With tight controls, trunk/master is no longer a “moving target”. Once you have eliminated the release management branch, overall branch management and release management have been simplified. A new release can be made from every single successful build out of trunk/master.

In my previous post I wrote about the benefits of using feature branches for quality assurance. As with all tools, feature branches don’t come with benefits only. There are unwanted side effects. Fortunately there are ways to minimize or eliminate them.

One such area is the creation of the branches. There are various factors to look at. For one, your version control system (VCS) should make it easy to create branches. For some systems this is a no-brainer; for other VCSs some more work may be required. Most of the teams I work with use Subversion or Git, and branching is not an issue.

When creating a feature branch, the VCS is only one of the tools that is affected. You also need to consider the client side of the equation. For example, how easy is it to switch between branches? How often do you need to switch between branches? What does the support in the VCS client look like? Do you need integration of your VCS client into the integrated development environment (IDE) your team is using? By choosing an appropriate VCS introducing feature branches becomes much easier.

Apart from the VCS, other systems your development team is using may be affected as well when branches are created, whether they are feature branches or others. For example, you may be using a continuous integration (CI) system such as TeamCity, Jenkins or CruiseControl.NET. Once the new branch has been created you want to make sure it is picked up by the CI system and automatically built. Therefore, each time you create a branch you typically want to set up a new build configuration as well.

You may have other systems that may need to reflect that a new branch has been created. For example a bug tracking system may offer the branch name as the affected version when entering a bug. Or you may use a tool to plan and track progress for a particular feature. Again this would then be set up accordingly when a new branch is created.

As these activities have to be done each time a feature branch is created, they are an obvious candidate for automation, in particular if your team works on dozens or hundreds of features each year. With appropriate APIs in the affected systems, automation is not an overly complex task. It is just a matter of putting the time in.

In closing I’d like to give you a specific example from one of the teams I am working with. Creating feature branches is completely automated in this case, and the time required is only as long as it takes to type in the name of the feature branch. The remainder is automated. This includes the creation of the feature branch in the VCS, setting up the new build configuration in the CI system, creating the feature as a project in the project tracking system, adding a configuration to the test bed controller and setting up automated merging. Taken together it typically takes about 10 seconds and saves hundreds of hours of development time per year.
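The automation just described can be sketched as a small script. The step descriptions below are placeholders (a dry run); a real implementation would call the APIs of your actual VCS, CI server, project tracker and test bed controller, which differ from system to system:

```python
# Sketch of automated feature-branch setup. Each step is represented
# as a description only; a real script would invoke the VCS, the CI
# server's API, the project tracker, and so on. The "feature/" naming
# convention is an assumption for the example.

def branch_setup_steps(feature):
    """Return the setup steps for a new feature branch, in order."""
    branch = f"feature/{feature}"
    return [
        f"vcs: create branch {branch}",
        f"ci: add build configuration for {branch}",
        f"tracker: create project '{feature}'",
        f"testbed: add configuration for {branch}",
        f"vcs: enable automated merging for {branch}",
    ]

for step in branch_setup_steps("fast-login"):
    print(step)
```

With each step wired to the corresponding API, typing the feature name really is all the manual work that remains.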

In one of the next posts I’ll describe techniques that help make merging feature branches into the main development branch easier.

An almost “classic” way of managing branches is to have a main branch into which all development work is committed. When a release is prepared a release branch is created. Quality assurance (QA), including testing and bug fixing, is done on that release branch. Any code changes are usually propagated to the main branch.

When introducing feature branches the question is whether you would still do all the quality assurance work on that release branch. You can, but in my experience there are better options available to you.

With the “classic” setup you also often see teams putting a lot of quality assurance work into the release branch. This tends to significantly increase the time from when you create the branch to when you release the software. It can also lead to spikes in the QA workload. For example, if you have planned 4 weeks for release management (QA plus other items required for release) and you are on a quarterly release cycle, every third month typically has a higher workload for the team taking care of the release.

Therefore it might make sense to look for better ways to level the release management work. Feature branches give you additional options.

Since, content-wise, a feature branch is your main development branch plus only one feature, some quality assurance efforts can take place on the feature branch and focus on that particular feature without having to worry about changes that may be in progress on other branches. Also, QA efforts can start as soon as the first story has been completed on the feature branch. In this case “QA efforts” doesn’t mean that quality is tested into the product. Instead it means that certain tests, e.g. platform, installation and upgrade tests, can be run very early in the project. Equally you can start with usability and performance testing very early, too. Quality assurance may also include feedback sessions with customers. By using prototype versions from the feature branch, those sessions gain focus as well.

The general idea is to move as many work items as possible out of the release management process and from the release branch into the feature branches. With this approach the time between creating a release branch and shipping the software can be shortened dramatically, down to the range of hours or days. Since more options exist for quality assurance work, this also reduces or avoids altogether spikes in the QA workload of the development team.

There is one other item that requires consideration. Despite all efforts in the feature branch, you cannot guarantee that the system is not broken when the branch is merged back into the main development branch. You will want to have some integration tests in place to speed up that verification process.

In summary, you are distributing the release management work across three places. First, you put as many of these items into the feature branches as you can. Second, you need integration testing for when the feature branches are merged. And third, you keep in the release branch only the unavoidable release management tasks that you cannot reasonably do anywhere else.

In a future post I will discuss another technique for what you can do to reduce the effort and risk for merging a feature branch back into the main development branch.

Software development projects are subject to a number of risks. One of these risks is schedule risk. By this I mean that you may not be able to ship on time because of some nasty discovery while executing the project.

Of course you don’t know what you don’t know. You cannot foresee what you will discover as you work through the project plan (or backlog). No matter how much effort you put in planning your project, breaking down the stories, even running some spikes, you will find that you cannot totally eliminate surprises that add to your workload.

Although we cannot totally eliminate that risk there are techniques that help with mitigating it. Firstly you can build in an allowance for discovery into your plans. Some people would call it a buffer.

A different option is to break your deliverable into multiple features. Often you will observe that most of the features are completed within the deadline while a small number may overrun. You will run into a problem, though, if you are using a single branch in the version control system (VCS) for your work. Your only option will be to deliver late once all features are complete.

Therefore a different approach is using feature branches. For each feature that you have scheduled you create a separate branch. Once the feature is complete you merge it back into your main branch, e.g. ‘trunk’ in Subversion. As you approach the deadline, e.g. the release date, you now have options. You can either ship with just the features that are complete or you wait until some or all of the remaining features are complete as well.

Of course this changes the scope of the release. However, if the features are worked on in priority order, you may be able to ship the more important features even earlier, or at least on time. This is sometimes a viable alternative to not shipping at all.
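The release decision at the deadline can be illustrated with a toy example. The feature names, priorities and statuses are invented; the point is simply that with one branch per feature you can ship the completed subset in priority order:

```python
# Toy illustration: at the release deadline, ship only the feature
# branches that are complete, ordered by priority. Names are made up.

features = [
    {"name": "export",  "priority": 1, "complete": True},
    {"name": "sso",     "priority": 2, "complete": False},  # overran the deadline
    {"name": "reports", "priority": 3, "complete": True},
]

def shippable(features):
    """Return the names of completed features in priority order."""
    done = [f for f in features if f["complete"]]
    return [f["name"] for f in sorted(done, key=lambda f: f["priority"])]

print(shippable(features))  # ['export', 'reports']
```

With a single shared branch there would be no such choice: the release would wait for the slowest feature.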

In the teams that I have worked with this approach is working quite well. There are a few more aspects to this but I’ll cover these in a future post.

Not too long ago I had a conversation with a friend in which we discussed productivity in software engineering. In particular we got hung up on the question of how to measure it.

It’s not that nobody has ever tried to measure it. I have probably 30 to 40 books on metrics alone. So far, however, I haven’t found a method that meets my requirements. Let me explain.

Let’s assume you have a team that is working frantically and has a high productivity. Let’s further assume that you had a metric that could measure that productivity reliably. At some point the team ships the new system, not only on time but with record-breaking productivity, maybe even ahead of time.

Enter the customer or even better the actual user. They sit in front of your brand new system. The initial comment: “This is not what I wanted. I don’t need a system for making hotel reservations I need a system that helps me control my production plan.” (OK, not a real case but I have to protect the innocent and I think it makes the point.)

In this scenario, what good is the best productivity on the planet if as a result you deliver the wrong features? Wouldn’t this indicate that in the end it is the customer who judges whether they get enough value for their money? And wouldn’t that be a good metric for productivity? In this case, because the wrong system was delivered, the “productivity” in terms of features with business value was zero. At least from this customer’s perspective.

In the discussion with my friend we really got stuck at some point. We had already agreed that counting lines of code (LOC) doesn’t help, nor does counting the number of classes, methods, function points, statements, screens, etc. Equally, counting hours doesn’t work.

My friend and I agreed on all of this. We couldn’t find a metric that we believed would work. Then I asked him how he measures productivity, and his answer was: “gut feeling”. I’m not quite sure what to make of that answer.

As for me, I have decided that if several customers tell me that they believe they get good value for their money, e.g. for their maintenance fees, then I take that as a sign of good productivity: productivity measured as business value perceived by the customer. Does that mean we now lean back and have a good time? No. We still use a continuous improvement process within our team to find even better ways of working and even more waste we can eliminate. And we continue to have a good time on top of the hard work!

Again and again I see examples of why it pays off to make small commits. By that I mean something very practical, namely a commit to a version control system.

As I work through a story I don’t implement the entire story in one go. Instead I look for even smaller steps, ideally using tests to drive the development.

When I say small, I mean committing every few minutes rather than a couple of times per day.

If something doesn’t work correctly, I have only a very small number of places to look for what is wrong. Most issues I find by just looking at the code changes. This works even better when I work with a programming partner.

As I find most issues just by looking at the source code, I only rarely need a debugger, let alone have to step through vast amounts of code. This, too, is a time saver.

And in case I stuff up the code completely, I don’t lose much work, maybe just a few minutes: I throw away all my changes and start again from the last commit. I continue from a known position.

The principle at work is small increments. This also applies to roadmaps, budget spreadsheets and other items. Just give it a try!

Somewhere I read that task switching is one of the biggest time killers in your daily work. For example, if you have your team working on many different items in parallel and you expect them to devote at least some time to each item each day, task switching can become a huge waste of time. Apparently it takes the human brain about 15 to 30 minutes to become completely immersed in a new topic.
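To put that figure in perspective, here is a rough back-of-the-envelope calculation. The switch count and the re-immersion time are assumptions for illustration, not measurements:

```python
# Rough illustration of the cost of task switching.
reimmersion_minutes = 20   # assumed mid-point of the 15-30 minute range
switches_per_day = 6       # hypothetical number of context switches
workday_minutes = 8 * 60   # an 8-hour day

lost = reimmersion_minutes * switches_per_day
print(f"{lost} minutes lost per day "
      f"({lost / workday_minutes:.0%} of an 8-hour day)")
```

Even with these modest assumptions, a quarter of the working day evaporates before any actual work gets done.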

This certainly does not apply to somebody selling movie tickets. No disrespect, selling movie tickets is important. I love watching movies!

Switching between tasks has a bigger impact on knowledge workers, and certainly on people developing software.

About two months ago I assessed the work distribution in my team. I wanted to find out whether task switching was an issue and, if so, what could be changed to reduce it.

In our particular case we found that almost all members of our team were switching between fixing bugs and working on improvements. At the same time everybody was also expected to answer the phone and mails, participate in forum discussions and provide second level support. Bottom line: A lot of task switching went on.

So what did we do about this? We split our team into two teams and assigned each a subset of the above activities. At the moment we are still experimenting with which activities should be assigned to which team.

A little more than one month has passed since we implemented the change. The initial observations are encouraging. One team has been assigned the items that we believe can be best planned using iterations as time boxes. The other team is working on the items that are better managed using a kanban system. Both teams are now in a much better position in terms of reducing task switching. Transparency has significantly increased as we are now using planning and tracking tools that are better suited to the type of tasks assigned to each of the teams.

I’d like to encourage you as an agile leader to go and look for yourself and assess how much task switching is going on in your team. Chances are you’ll find an easy way to improve the performance of your team.

In my last blog I wrote about smartphone junkies. I’ve discovered another species recently: “Email junkies”. Let me explain.

Last week I got a phone call. The person at the other end asked me a question, and when I said that I didn’t know what he was talking about, he asked whether I hadn’t seen the email he had sent 20 minutes earlier.

I don’t know about you, but I’m not sure I see value in spending my day staring at my email inbox. Sure, I love hearing from people, but if something is really important they can reach me via text or give me a phone call right away.

In my team I typically give the advice to check email only once in the morning and once in the afternoon. Or check it three times a day if you must.

At all other times it’s probably best to close the email client completely. Then it won’t even show those little pop-ups in the lower right corner of your screen, which are another popular distraction.

If you want to focus on more important things, then make a wise choice as to when and how often you check your email. It can be a big time saver.

Equally if you send me an email don’t feel offended if I don’t read it and respond to it within a few minutes!

Ever been in a meeting when a smartphone goes off? Sound familiar? It's amazing what smartphones can do these days, in particular how you can stay connected all the time, be it email, SMS, Twitter, Facebook, you name it. Cleverly used, these devices can support collaboration and keep feedback loops short.

By now, some rules are common sense. Switch at least the sound off. In some cases it might be useful to switch off vibration as well, as it can still distract if the smartphone is just sitting on the table and then starts hopping around. And even if both are switched off, some people are so addicted to checking every five minutes whether there is a new message that it can become a distraction even in short meetings like a daily scrum.

So, in my experience the best option is not bringing these devices to meetings in the first place. A different option could be a tray close to the door where people could then just put their phones during the meeting and pick them up again afterwards.

Bottom line: It's not the phone that is smart. It's the person using it cleverly!

I just added another entry to the “Interesting Links” section. There are quite a few sites about agile approaches, in particular for software development. Kelly Waters has put a lot of effort into her site - “All About Agile” - over the last few years, and I find the material and links to further information very valuable. Have a look and I’m sure you will find nuggets, too.

An important capability of an agile team - in fact of any software development team - is estimating. Certainly you can break down any body of work into tiny bits, then estimate each tiny bit and total up the numbers to get an overall estimate of the entire body of work. But how small should you break it down?

When you use stories - or you may call them tasks or something similar - you can use a story as the unit that gets estimated. Initially it will be hard on the team. For example, how would you estimate the size of a story if the estimators have different roles such as developer, business analyst, tester, user interface expert, performance engineer, etc.? How can a developer assess what amount of effort is required by the user interface expert or the tester?

In reality they don't need to. With the initial set of stories all you need to do is agree on some relative sizing. These sizings will in all likelihood be completely off anyway. That fact of life should make this first step more relaxed.

After the iteration is complete you look at how much you completed. Let's say you use NUTS as the unit for relative size. (NUTS = nebulous units of time; credit goes to Darren Rowley, from whom I learned this one.) Then you can look at the completed stories at the end of the iteration and see whether the initial estimate was correct. Was story 'xyz' really twice the size of story 'abc'? It doesn't have to be scientifically perfect. All that matters is that you give it your best shot and record the actuals.

By recording the initial estimates and the actuals you are already on your way to improving your estimates. Please keep in mind that generally estimates are provided by a cross-functional team rather than by individuals. And ideally the estimates are provided by the team that will eventually do the work.

By default you should sign up only for stories that fit within an iteration. If they are too large, break them into smaller pieces.
At times, however, it can happen that a story is not complete at the end of the iteration and carries over into the next. One example could be that some capacity was left over towards the end of the iteration and work on an additional story was started.

If a story is incomplete at the end of the iteration (for whatever reason!) then the team should assess whether the size of the story is still good or whether it needs to be updated (either way!). If the estimate is changed then you should record the updated estimate as well. Why? The only reason to record the updated estimate is to allow for proper capacity planning in the new iteration. You need to know the updated/current estimate and how much is left, so that the team doesn't over-commit but signs up for only as much work as they think they can accomplish.

So in effect, you are recording three numbers: the initial estimate, the updated estimate (history of this is not required), and the actual figure once the story is complete. The comparison of initial estimate and actual number allows you to measure how - as a team - you become better at estimating. The updated estimate is important for understanding how much work your team signed up for in a particular iteration.

If you like, you can use a simple spreadsheet for recording these numbers. Make sure you add some dates for further analysis, e.g. how was the quality of estimates in quarter one compared to quarter two? A team is getting good at estimates if you use a mix of about 10 to 20 stories and the delta between the total of initial[...]
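As an illustration, the three numbers could be recorded in something as simple as the following sketch. The story names and NUTS values here are made up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoryEstimate:
    name: str
    initial: int                   # initial estimate in NUTS
    updated: Optional[int] = None  # revised estimate if the story carried over
    actual: Optional[int] = None   # recorded once the story is complete

    @property
    def current(self) -> int:
        """Estimate to use for capacity planning in the next iteration."""
        return self.updated if self.updated is not None else self.initial

def estimation_accuracy(stories):
    """Delta between total initial estimates and total actuals,
    counting completed stories only."""
    done = [s for s in stories if s.actual is not None]
    return sum(s.actual for s in done) - sum(s.initial for s in done)

stories = [
    StoryEstimate("abc", initial=3, actual=4),
    StoryEstimate("xyz", initial=6, actual=5),
    StoryEstimate("carry-over", initial=5, updated=8),  # incomplete, re-estimated
]
print(estimation_accuracy(stories))  # 9 actual vs 9 initial -> 0
```

A spreadsheet works just as well, of course; the point is simply that all three numbers are kept, and that the carry-over story plans with its updated size, not its initial one.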

Sometimes I'm asked what to test, in particular when I explain that testing the happy day scenarios is not sufficient. For example in web applications I'd certainly expect that everything that has a link to a different page actually brings you to that other page.

Another example would be any kind of control for entering data, e.g. text boxes, radio buttons, drop down lists, check boxes, and more. Let's take a text box for a product number. The valid range might be a positive number that has 6 digits. In that case you would also want to test whether you can enter less or more digits. The system should have a defined behavior. Then try entering spaces, e.g. 2 digits, then a space, then 3 more digits. Test whether you can enter nothing. Test what happens if you enter a mix of digits and characters. I'm sure you can think of more depending on the system you are working on.
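As a sketch of what such checks might look like, here is a hypothetical validation rule for the 6-digit product number, exercised against the boundary and malformed cases described above. The rule itself is an assumption made for illustration:

```python
import re

def is_valid_product_number(text: str) -> bool:
    """Hypothetical rule from the example: exactly 6 digits."""
    return re.fullmatch(r"[0-9]{6}", text) is not None

# Happy path plus the edge cases from the text:
assert is_valid_product_number("246337")        # exactly 6 digits
assert not is_valid_product_number("12345")     # too few digits
assert not is_valid_product_number("1234567")   # too many digits
assert not is_valid_product_number("12 345")    # embedded space
assert not is_valid_product_number("")          # nothing entered
assert not is_valid_product_number("12a456")    # mix of digits and letters
```

Each assertion corresponds to one of the cases worth testing; whatever the system's defined behavior is for the invalid inputs, there should be a test pinning it down.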

One less obvious case is routes. A route allows you to enter a link that the system can interpret and translate in a specific way into a URL. Routes allow certain items to be bookmarked. For example, you may want to support a URL such as "http://nirvana.org/Product/246337/View" (of course your domain name would be different). The concept here is that you have the domain class name first ("Product"), the specific instance id next ("246337"), and the method ("View") last. In essence the route is then: "http://nirvana.org/Product/{productId}/View". Depending on the technology you use to implement this route, somewhere you will have to extract the product id and create a URL to the page that can handle this request.

The point I want to make is this: A route like this needs to be treated like a method. In essence it is similar to a method, and hence there are quite a few test cases. Some examples of tests you should consider:

No product id: "http://nirvana.org/Product//View"

Non-numeric product id: "http://nirvana.org/Product/foo/View"

Negative product id: "http://nirvana.org/Product/-123456/View"

Product id too short: "http://nirvana.org/Product/12345/View"

Product id too long: "http://nirvana.org/Product/1234567/View"

And this is just a selection for this very basic example. I'm sure you can think of more tests. The point is, sometimes things like this are easily overlooked and as a result your system may contain defects that you are not aware of. In the case of a web application it means that if you allow people to bookmark certain pages, be aware that people not only can but will enter invalid URLs! Be prepared!
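One way to treat a route like a method is to put its matching logic behind a function and test that function directly. The sketch below uses a plain regular expression as a stand-in rather than any particular web framework, and carries over the hypothetical 6-digit rule from the product-number example:

```python
import re

# Stand-in for "http://nirvana.org/Product/{productId}/View"
ROUTE = re.compile(r"^/Product/(?P<product_id>[0-9]{6})/View$")

def match_product_route(path: str):
    """Return the product id if the path matches the route, else None."""
    m = ROUTE.match(path)
    return m.group("product_id") if m else None

assert match_product_route("/Product/246337/View") == "246337"  # happy path
assert match_product_route("/Product//View") is None            # no id
assert match_product_route("/Product/foo/View") is None         # non-numeric
assert match_product_route("/Product/-123456/View") is None     # negative
assert match_product_route("/Product/12345/View") is None       # too short
assert match_product_route("/Product/1234567/View") is None     # too long
```

Whether your framework uses regular expressions, route tables, or annotations, the same list of invalid paths should appear somewhere in your test suite.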

This time I'm writing about an item that is admittedly very specific to software development. More than once when I spoke to members of a development team I was told "yes, we have an automated test suite". And yet, further along in the conversation it turned out that despite a significant test suite the resulting quality wasn't where all the effort put into creating those tests indicated it should be. And in all these cases, when we then took a closer look at the tests themselves, it turned out that at least one key element was missing.

That begs the question: What makes up a good test? What key characteristics should a good test have?

Setup, Execute, Validate

The first key ingredient is that a test consists of three parts: The first part sets up the data that is needed for the test. This could be restoring particular database content, it can be setting up a few objects in your programming language, it can be launching a particular user interface (e.g. a browser), and many more. The second part is the actual execution of the test. This means you invoke functionality that modifies data. In the final third part you validate whether you have the expected outcome, e.g. the actual data is equal to the expected data.

Occasionally I've found, though, that people forget about the third step. I don't have data but suspect that this happens when people come from a background where executing a piece of code without crashing is almost a success. Think early to mid '90s of the last century. C and C++ were still very dominant in the PC industry. Exceptions in the midst of running a program were nothing completely out of the ordinary. (Maybe you are the only one who never experienced them?) However, we can do better. Just because it doesn't crash with a nasty null pointer exception doesn't mean it performs as expected. Therefore, at the end of a test, always validate the outcome!
The typical tools for that are the various assertions that come as part of test tools.

Repeatable

Not strictly a requirement, but there are quite a few scenarios where running the same test more than once reveals, and thereafter prevents, certain bootstrapping-type issues. Assume your service implementation does some sort of housekeeping upon startup. The first time you invoke an operation on the service everything is still fine. But then, perhaps, as you repeat the same test (or set of tests) using operations on that service, things go off track. Maybe connections are not properly closed. Maybe the service cannot handle more than 10 open connections at a time (rightly or wrongly). By repeating the same test over and over again, chances increase that you discover a hidden issue and resolve it before your product is shipped.

Random Order

Tests should not depend on each other. A test should not require a different test to run first. If they do, changes to one test may trigger further changes to other tests in the suite, thus making changes more expensive and time consuming. You don't want to lose time. You want to be fast.

For example, let's assume you are working on a system that has Project as a concept and the name of the project becomes a unique identifier for each project. If all tests use the same project name, then each test would have to check during setup whether the project already exists. If it doesn't, it would create it. The alternative would be to use a generated name in each test, such as a string with the value "ProjectName" + Rando[...]

A question that I've been asked several times is where to start with improving your software engineering practices. Based on my experience, my answer typically is "It depends." Let me explain.

In one of my previous roles the release cycle was very long. This was not because we didn't want to release more often. It was simply because the system was very large and there were only a few customers, and the upgrades would typically require significant manual intervention anyway. On the other hand, quality was a real issue, and starting the improvements in that direction was a good choice: we introduced a set of new tools, new designs, new technologies, new equipment, and new processes. It was very focused on the inside of the development team.

In my current role we moved from one major release per year to monthly release cycles earlier this year. And although there were some concerns with it, we were able to mitigate the impact in such a way that our clients can choose their own upgrade cycle. Whenever there is a feature of interest in a new release they can upgrade from their current version to the latest version in a one-step process. There is no longer any need to upgrade to the versions in between. So, while we still offer new features each month, no client is forced to upgrade on a monthly basis. There are, though, some clients who do, and each month there are clients who upgrade to that monthly release.

With this background it has become clear that just moving to monthly releases wasn't enough. Instead we combined it with significant efforts to simplify the upgrade process for our clients. And the feedback we get from clients in different geographies is very positive. Therefore we will continue improving the upgrade process so that it becomes even easier for our clients to move to later and even better versions of our software.
With this ever-improving upgrade process in place we have established a delivery mechanism that allows us to ship new features faster to the marketplace. New features are picked up sooner and we receive feedback and suggestions faster as well. Our clients benefit from earlier availability of new features that in turn allow them to run their business more efficiently.

In the second example I focused on the delivery process first - short release cycles combined with easier upgrades - while in the first example I focused on an improved engineering environment first. For the second example, think of this: What if you had the perfect product but it were a nightmare to upgrade? It would be very difficult to get improved versions installed at clients' sites. If on the other hand you have a very simple upgrade process (and hence delivery process) you can roll out product improvements much faster.

There is no hard and fast rule for what to do in each scenario. The key learning is that you need to identify what the determining factors are in your team's environment. Then create options and see how they would address the biggest challenges in your situation. Start in one place, and start small. Then observe and take the next step. And make sure you line up your team members: in particular, their creativity and innovation are key elements to selecting the right starting point and to successfully moving on from there.[...]