Archive for April 2013

Many of us struggle with and even dread meetings. Amazon lists almost 70,000 books on how to have more effective business meetings. There are innumerable approaches to having or hosting good meetings. There are consultants and coaches to assist with meetings. What’s going on?

Be sure to check out the poll results at the end of this post on "the line outside your manager's door," where we had a very high level of participation. Also take this week's poll here: https://www.surveymonkey.com/s/NKK2TCZ.

Software was supposed to make meetings better, but it just seemed to make them more frustrating, as the first 15 minutes of every meeting involve connecting, echoing, or adjusting camera angles. Who wants to make more efficient something you don't even want to do in the first place?

Based on the survey from a previous post, readers of this blog spend approximately 3 hours per day in meetings!

The type of meetings that can benefit the most from taking a step back are those internal meetings that are about informing or seeking approval. We’re in a new era where information is flowing all around us and not only are things changing rapidly, but we all can see and understand the changes happening because of the information around us.

We are seeking to be continuously productive. Meetings that cause us to stop everything, snapshot a state of the world and hope that snapshot is still relevant by the time the meeting process completes, or at the extreme cause us to move forward on known "bad data" all need to be a thing of the past. This need for continuous productivity will cause us to seek out a different approach to meetings (among other things).

Accountability

Meetings are fundamentally about accountability. Managers want to have meetings so they are comfortable with what is going on and feel informed for their managers. Teams want to have meetings to gain approval for initiatives or budgets. There are of course many other types of meetings, but the important ones involve approval and management up the chain.

Meetings with your peers are about collaboration. Collaboration requires a shared context and shared goals. Meetings to get to this point are a key part of “middle integration” and are much less about accountability and much more about walking in each other’s shoes. The most important tools in these meetings are openness and honesty, since the foundation of any collaboration requires those. From those it is easy to build accountability.

Getting back to approval: it is, statistically speaking, no surprise that meetings are more often than not awful.

Consider this two by two as a generalization. In this scenario, we break up the point of a meeting between informing management and gaining approval. One can see how things can unravel quickly. Despite everyone being fully informed, meetings have an element of prisoner’s dilemma when it comes to accountability.

In the best case (upper right) management got what they wanted and the team got approval. We can assume that if the team executes then management will be supportive and accountability is clear.

Contrast this with the lower left where the meeting didn’t move things forward. For whatever reasons, management was not informed. This is the sort of meeting that usually starts off debating the assumptions of the work or the “non-goals” and is usually characterized by being “in the weeds”. In this case accountability is clearly shifted, 100%, to the team and usually a scramble results to recalibrate and rework.

The other cases are the "coin toss". Teams can go into meetings fully prepared but, for whatever reason (context they did not know about or inputs they weren't aware of), fail to move forward. Or the team can move forward, but with a weird feeling that things aren't right. In these cases, accountability shifts squarely to the team and management is left wondering what went wrong.

Of course these are broad generalizations. In the real world most meetings have some characteristics across this 2×2 because situations are more nuanced.

It is critical to know before you go into a meeting how you will recalibrate and establish accountability based on these potential outcomes. A significant part of a meeting is knowing how to manage any of the typical outcomes.

Tips

Since Amazon is filled with so many books about having effective meetings, we can all assume the problem still exists and there are no magic answers. Still, there are some things we can all do.

Context. Do you fully understand the other party’s context, before asking something of them? Spending time in a meeting both asking for something and learning about what the other party might be thinking is going to be a challenge. Push yourself and the team to really know the goals and constraints before you go into the meeting.

Success. Do you really know what success is supposed to look like? Often in preparation leading up to the meeting the focus turns from the goal to the tactics, which might be ok but also might lose sight of the big picture. Be sure that you're defining success in a way that everyone agrees is measurable and useful in the context of the goals.

Details. Are you really buttoned up on the kinds of details your manager cares about? If you know your manager cares about the budget, or specific parts of the budget, or likes to measure things in a certain way then “ride the horse in the direction it is going” and prepare that way. You want to try to use the meeting time for things you can’t anticipate.

Brevity. Are you really being concise enough in describing what you're there to decide and talk about? By definition you and the team know way more about what is going on than folks up the chain, but you don't have the time to transfer all that knowledge. Be sure to focus on what matters.

These are in a sense the basic approaches to meetings. The most important tip might be to ask yourself if the meeting can be “avoided” in the first place. Meetings are expected to produce results. Even meetings to prepare for meetings are expected to move things forward. That’s reality.

Today’s environment is one where things are changing very quickly, information is flowing in real-time, and with tools from big data to smartphones, stopping the real work from happening (or keeping it from starting) can only put you further behind your competition or your own team’s goals.

The real question for your whole team is how accountability can be established so that everyone can be accountable and keep moving without having to take time out to stop. The only thing that is certain is that if you’re not moving you can’t be going forward.

Three Questions Poll

Thanks to everyone who took the recent survey on the blog entitled “A line to see someone is not cool, but is blocking progress.” Here are some top level points:

We had over 300 responses from around the world: 191 people managers and 134 non-people managers

On average, we all spend about three and a half hours per day in meetings of one kind or another

On average, about one third of our team’s work requires our approval, feedback or decision

Generally, we think highly of our teams! On a scale of 1 (strongly disagree) to 5 (strongly agree) we said our teams:

Operate with rhythm/flow: 3.6

Have high morale: 3.6

Work quickly: 3.6

It is worth noting that when considering managers separately from non-managers, managers rated their teams over half a point higher on all three attributes vs. non-managers.

The purpose of this survey was to evaluate the hypothesis that a line outside the manager’s door blocks team progress. To do so, we can test the effect the “line” variables have (more time in meetings, greater % requiring manager approval) on the “progress” variables (Flow, Morale and Quickness).

Surprisingly, the hours we spend in meetings had little impact on these positive outcomes, for both non-managers and managers, as well as the group as a whole. It should be noted that for some (e.g., sales, medicine), most of a day is consumed by meeting with others, which might skew the results.

Among non-managers, there was little correlation between meetings/approval process and reported flow, morale or quickness.

Among managers, there was a slight but statistically significant impact of the percentage of their team's work requiring their approval on their own sense of team flow, morale, and quickness.
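The kind of test described above can be sketched with a simple Pearson correlation between a "line" variable and a "progress" variable. The numbers below are illustrative, not the actual survey data:

```python
# Sketch of the correlation test described above: does the share of work
# requiring manager approval track the manager's reported sense of team flow?
# The data points below are hypothetical stand-ins for survey responses.

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, computed from first principles
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical responses: % of the team's work requiring approval, paired
# with that manager's 1-5 rating of team flow
approval_pct = [10, 20, 30, 40, 50, 60, 70]
flow_rating = [4.5, 4.2, 4.0, 3.8, 3.5, 3.4, 3.0]

r = pearson(approval_pct, flow_rating)
print(f"r = {r:.2f}")  # prints: r = -0.99
```

A strongly negative r on real responses would support the hypothesis that the line at the manager's door decreases perceived progress; the survey found a slighter but still significant effect.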

Bottom Line: Though additional study would be required to fully understand this complex topic, there is evidence that the line at the manager’s door can decrease progress, if only as perceived by the manager.


Today was “Launch Day” for the startups in Harvard’s first year MBA program. Many of the products and services created are available on the web to try out or to order (though several are specific to the Boston area). The process of creating the company from scratch, with a limited budget, on a tight deadline, in a collaborative team environment is super cool.

This post introduces you to over a dozen new companies. Check them out!

Background

The Harvard MBA program enrolls over 1800 students, which means there are about 900 students in each year. The whole year is divided up into sections and you spend your academic year within your section (the startups in this post are all from Section C, which I was lucky enough to work with, led by Prof. Jan Hammond). There are about 90 students in a section (and 6 students in each startup team). As a first year student in Harvard’s MBA program (RC—required curriculum) you take a series of required courses generally taught using the traditional case method. Starting last year, the RC introduced the FIELD program, Field Immersion Experiences for Leadership Development, a full year program which emphasizes learning by experience.

For the spring semester, FIELD3 focuses on creating a new company from scratch (FIELD1 is about leadership, FIELD2 is about global topics). Imagine you are given a small cash budget, a fixed team size, and a fixed schedule including specific milestones for investment, viability and launch! That’s what FIELD3 is all about.

The calendar is roughly:

Two weeks to develop a product concept

Funding simulation ("stock market") which gives some teams the opportunity to raise more capital while others must make do with less, and thus pivot their ideas

About 8 weeks to fully develop the idea, go to market strategy, prototype or actual product, and basically to show that the product can be made

Launch day – this is where we are today! On this day your product or service is ready to be used by people. The stock market is opened for trading and based on the launch readiness and pitches, the value of companies goes up or down and some companies do not make it past this stage.

About 3 weeks to actually sell the product or service and get ready for…

IPO day!

Of course this is an academic exercise, but it is also a very serious one, as it brings much of the classroom learning into focus for a real-world trial. The products and services are real and really meant to be used by people outside the student community. That's why you'll be able to try them out below.

Launch Day

Today was launch day. Each company (there are 15 companies in a section) has 10 minutes to pitch its idea to the section that will buy/sell shares in those 15 companies (pitches are done pairwise, so Section C acts as the investors for another section).

In the pitch, the investors see:

Demonstration and/or Product samples

Business fundamentals

Competitive advantage

Demand generation approach

Below you can see a pitch by one of the companies that created a packaged product that enables children to customize a pair of plain sneakers. Here you see the market testing summarized along with the countdown clock for the pitch.

All of the startups used tools of modern product development. Eric Ries, author of The Lean Startup, is an Entrepreneur in Residence at Harvard, so the ideas of "MVP", measuring the right things, and even the pivot are all front and center for the founders. Because of the markets and the feedback loop, a number of companies in Section C went through substantial pivots.

The companies also make use of all the platform tools available today to quickly create new companies: PayPal, WePay, Shopify, AWS, and more. Businesses requiring components source them from local manufacturers or online marketplaces such as Alibaba. Plus the companies make use of local services such as TaskRabbit or Harvard Student Agencies in order to bootstrap any labor that might be required. Apps target widely used mobile platforms. Facebook, Twitter, and Google were used for sourcing early testers and for demand generation/awareness via tools such as SEO and keyword buys in addition to branding sites.

Also critical is that all companies adhere to local and state laws for anything that involves safety, privacy, and more. HBS has a code of conduct and a separate set of rules around how the companies can interact with the University Community. So yes, this is like real life!!

Each pitch must also leave time for investor questions. These can be pointed and often return back to the previous rounds of investment. The investors are not given unlimited funds and are also keeping “score” trying to maximize their own return. This is a full financial market simulation.

Here’s the ticker before the market opened based on the closing prices of the last round:

The Companies

Here’s a chance to try out a few of the companies and see the work – keep in mind this is work done in the past 10 weeks or so and almost all coding was done via outsourcing! Personally, I could not be more impressed with the progress and the ability for the companies to navigate the tricky waters of both developing a product and a business, all while learning and doing all their other class work!

View The Rental (ticker:VIEW, http://www.viewtherental.com/). View the Rental provides objective information about apartments and houses for rent in Boston or Cambridge via remote video chat for renters who are unable to view their rental in person. This product uses Skype to establish a live walk-through of the exact apartment you might be renting.

RescueMe (ticker:RSCU, http://rescuememedical.com/, also available at local retailers). Traveling for Spring Break? Don’t leave without RescueMe, the all-in-one travel meds (and essentials) pack!

LaunchPad (ticker:PREP, http://getlaunchpad.net). Looking for a job or an internship? LaunchPad gives you the answers you need to stand out in your job search. LaunchPad sets up a customized 1:1 conversation between a student looking for a job and someone in that field who can offer advice and feedback on the industry or approach to finding a position.

easybiodata (ticker:BIO, http://www.easybiodata.com). Targeted at singles of Indian background, easybiodata is a solution to matrimonial search. Creating, Sharing and Managing your Biodata has never been easier with easybiodata.com whether you are the parent or extended family member helping the matrimonial search. EasyBiodata.com helps you spend less time sending emails so you can spend more time finding the perfect candidate. (Note: showing the global nature of the typical HBS team, this team was made up of students from 6 different countries: USA, Japan, Haiti, Slovak Republic, Kenya, and India).

Dinner Rally (ticker: RLLY, http://www.dinnerrally.com/). The best food from Harvard Square delivered straight to your door. Dinner Rally delivers food that is not normally available for delivery, at a very affordable price.

HuddleUp (ticker:HDDL, http://huddleevents.com). HuddleUp helps fans find the best place to watch their upcoming sports games at local bars. We know it can be tough, especially for fans of out-of-town teams, to find places to watch their games AND other fans to watch with.

Sepono (ticker: SPNP, http://sepono.co/). Sepono delivers on-demand nail and salon service booking. Tapping into over 1400 spas in the Boston area, Sepono makes it easy to find an appointment and obtain service.

SitCrawlWalk (ticker: BABY, http://www.sitcrawlwalk.com/). SitCrawlWalk helps parents discover the best products for their little ones at each stage of the baby's life. Featuring reviews, curated and clutter-free product offerings, and unbiased research, SitCrawlWalk is a unique shopping approach tapping into the market for "social shopping" and affiliate sales.

PaintSteps (ticker:PNTS, http://www.paintsteps.com/). A creative shoe painting kit that lets your children's creativity flourish and keeps children occupied in a fun activity for hours. It includes a pair of children's white canvas shoes, safe acrylic paint, palette and brushes, as well as an educational inspiration book. The inspiration book is designed by a professional illustrator and allows children to practice coloring on paper before painting the shoes.

stARTworks (ticker:ARTS, http://startworks.myshopify.com). stARTworks brings the beautiful artwork of blossoming student and local artists to your doorstep! Here, you can view and purchase existing pieces, or create custom art from your own pictures and photos. Browse our artist pages or "What We Do" section to get started today! stARTworks is a socially conscious company that supports local and student artists.

PrepChef (ticker:CHEF, http://myprepchef.net). Simple, delicious recipes delivered right to your door! Free delivery includes portioned ingredients delivered to your door and simple step-by-step instructions. Teach yourself how to cook, host a dinner party, or just enjoy an evening of a self-cooked meal.

Party In A Box (ticker:PRTY, http://boxyourparty.com). One click theme party solution for those who love to party, like experimenting with new party themes and want great supplies and decorations delivered to their door. Feeling nostalgic for Backstreet Boys, slap bracelets and the Fresh Prince? Check out our 1990s box. Missing Ferris Bueller, track suits and bad hair? Our 1980s box has you covered. Your friends at Party-in-a-Box are also here for you on those special, once-in-a-year events: St. Patrick’s Day, Cinco de Mayo, Independence Day, the Kentucky Derby, etc.

Phew…those are just the companies from Section C. There are 9 other RC sections as well. You can see that many of these company ideas came about because of the unique problems faced by students leaving the workforce, relocating, traveling, meeting new people, living the life of students, and so on. Mother necessity is alive and well in FIELD3.

The stock market is still open and folks are still settling on their investments.

The next step will be the IPOs. But you can try these out now (note, some require you to be in Boston). And who knows, some of these might be the seeds of future companies as students continue to evolve the businesses.


It might seem cool if there is a line outside someone’s door (or an inbox full of follow-ups in Outlook or a multi-week wait to “get on the schedule”). “Boy that person is really important” is what folks might say. In reality this bottleneck is a roadblock to progress and a sign of a team in need of change.

Most of the time we see managers with a line outside a door, but it can also be key leaders on a team of all sorts. Here are some tips to get out of the way and stop the gridlock.

Be sure to take the poll at the end of this post: http://www.surveymonkey.com/s/QXR9WLZ. Feel free to use the comments to share your experience with a bottleneck on your team; there are folks out there probably experiencing something similar who would benefit from your perspective. At the end of this post are the results from "Career: Journey or Destination," which show some very interesting trends.

Why is there a line?

Managers or org leaders are busy. But so are the members of the team that work for the manager or depend on that leader. Unfortunately the way things go, too many folks end up as a bottleneck in getting things done. It might be a sign of importance or genuine workload, but it can also be a sign of a structural challenge. What are some of the reasons for a line?

Approval. A manager asks to approve work before it can move forward.

Feedback. Members of the team are awaiting feedback on proposed work.

Decision. A leader is the decision maker in a situation.

On the face of it, each of these sounds like the role of a manager (or leader; we'll use the terms interchangeably in this post). The dictionary definition of a manager even supports this: "a person who has control or direction of an institution, business, etc., or of a part, division, or phase of it". The operative notion is being in charge.

There are several problems with this approach:

Demotivating. If a job involves creativity (artistic, design, creation, problem solving, or a million other ways of being creative) then people who do those jobs well don't generally do their best work under control. At an extreme, highly creative people are notorious for not wanting to be directed. The close cousin of demotivating is disempowering, and very quickly creative people on the team lose the motivation to do great work and seek to get by with merely good work.

Scale. A manager that operates a team as an “extension” of him/herself is not highly scalable. The line out the door represents the scale problem—it is trying to squeeze 64 bits through a 32 bit gate. There’s simply more work than can be done. The manager is overworked trying to do the work of the whole team, which is not sustainable.

Slow. A manager who inserts him/herself into the middle of the flow of work causes that flow to slow down. The reaction time of the whole team no longer represents the capability of the team, but is limited by the ability of one person. Most folks end up frustrated by the roadblock, waiting for approval only to have the work ultimately approved as initially presented.

Tactical. Those who operate in the middle of the work like this often justify their style as "adding strategic context". This is often the exact opposite of what happens, as the person is too busy to breathe, take a step back, or think long term because of the line out the door!

There are many justifications for why managers see these downsides as worth the risk. Managers feel like they have the experience to do better, know more, or maybe the team is new, understaffed, and so on. These are juicy rationalizations. Like parents doing homework and school projects for their kids, the short term seems reasonable but the long term becomes problematic.

Accountability

Beyond gridlock, the deep, long term problem created by a line outside a manager’s door is the transferal of accountability that takes place. Once the manager is in the middle of approving, providing feedback, or deciding then the very best case is that the manager is accountable for the outcome. Wait, you say that’s always the case, right?

A manager should be accountable when things don’t go well and stand up to claim the work of the team that wasn’t what it needed to be. When things go well, the manager should fade away and the team should shine. This isn’t some ideal. This is just the basics of teamwork and what needs to happen. That goes beyond management and is leadership.

But when a manager is in the middle of everything, members of the team have a tough time feeling a sense of pride of ownership. The further the results are from ideal, the less likely individuals feel responsible. It is simply too easy to point to places where each person surrendered accountability to management. And unfortunately, this opens up the potential for the worst form of dysfunction, which is a manager in the middle of everything stepping back and still assigning accountability to the team when things don't go well: politics.

Ultimately, any healthy team is about everyone feeling an equal sense of accountability for the group's work and full accountability for their own work. The role of the manager is to create a team and workflow that enables everyone to contribute and grow.

Rhythm of the team

The most important thing a manager can do to create a workflow for the team is to foster a continuous rhythm of work on the team. The world of modern products and service means things are in a state of change and adaptation all the time. Stores roll over promotions constantly. Web sites are always being programmed. Social networks provide a constant dialog to contribute to and respond to. Product feedback is available all the time. The team that is standing on a line is not just missing all the action, but is playing a losing strategy.

In his famous book Flow: The Psychology of Optimal Experience, Mihaly Csikszentmihalyi talks about how important it is to be engaged in self-controlled, goal-related, meaningful actions. When you're doing that you are in a flow and things are much better ("happier") for everyone.

A flow on a business team or product team is about working towards a shared goal and doing so without the starts and stops that interrupt the flow. As a manager there are two simple things you can do:

Never schedule your full day. As a rule of thumb, you should never schedule more than 50% of your day in structured meetings and other required activities. This leaves the rest of your day for "work": your own work as a contributor (being a manager does not mean you stop having concrete deliverables!) and for keeping things from being blocked by you. If you have time during the day you can interact with the team in an ad hoc manner, participate before things reach a bottleneck, and, most importantly, listen and learn. This is the number one crisis-prevention tool at your disposal. The more time you have available, the more you can provide feedback when the time is right for action. For example, you can comment on a plan while it is still a draft, casually and verbally, rather than having the team "present" a draft in a meeting for you to react to, or send you an attachment that forms another line in your inbox, usually too late for substantial feedback anyway.

Stop approving and deciding. As heretical as this sounds, as an experiment a manager is encouraged to spend a month pushing back on the team when they ask for approval or a decision. Instead just ask them to decide. Ask them what would go wrong if they decided. Ask them if they are prepared for the implications of a decision either way. Ask them if they are comfortable owning and “defending” a decision (knowing you as the manager will still be supporting them anyway).

As a member of the team waiting in line, there's an option for you too. Instead of asking for approval (or the other side of the coin, acting now and worrying later), take the time to frame your choice in a clear and confident manner. Don't be defensive, aggressive, or shift accountability, but simply say "Here's what I'm suggesting as a course of action and what we're prepared to deal with as the risk…" No choice is free of risk. The risky path is simply not being prepared for what could potentially go wrong.

The optimal team is one that is moving forward all the time and operating with a flow and rhythm. A line outside the door of a manager is a sign of a dysfunctional team. It isn’t hard to break the cycle. Give it a shot.

Thanks to everyone who responded to our last survey on the “Defining your career path: journey or destination” post. We had an amazing response, with over 800 responses from around the world. Here are a few of the highlights:

On average (mean), people have spent around 13 years in their career

In those years, people have held 5.5 jobs or roles; or about 2 years per job/role

And about 8% more sought to be "breadth leaders" vs. "field experts" (42% vs. 34%)

On average, we’re pretty satisfied with our careers: 3.7 on a 5-point scale

In this survey we had a nice “response variable” to consider: career satisfaction. If we agree that this is a goal we share, we can consider how the other “explanatory variables” contribute to overall career satisfaction:

Those that claimed to be more “experience oriented” tended to have a higher level of career satisfaction vs. those that were more “goal oriented”; those that reported being “very satisfied” with their careers were >3x more likely to be “experience oriented”

Those with longer careers tended to be more satisfied: both “career years” and “number of jobs” provided a fractional lift in the 5-point career satisfaction scale

Pursuing a goal of “organizational leader” tended to provide more lift than “domain expert”

And pursuing experiences as a "field expert" tended to provide more lift to satisfaction than experiences as a "breadth leader" (though more consider themselves to be the latter)

None of the models built in analyzing this data did a great job of explaining all of the variance in your responses; we are all different and find satisfaction in our careers in different ways

Bottom Line: There is no “silver bullet” which guarantees our career satisfaction; people are different and their satisfaction is driven by various factors, at different career stages. That said, as leaders, we generally tend to find satisfaction based on our experiences with other people (as org leaders, experts in our field, more time in our careers/more roles over time) over the specific goals or attained knowledge we encounter through our journey.


In a previous post, the topic of surviving legacy code was discussed. Browsers (or rendering engines within browsers) represent an interesting case of the mission-critical code described in that post. A few folks noticed yesterday that Google has started a new rendering engine based on the WebKit project ("This was not an easy decision," according to the announcement).

Relative to moving legacy code forward this raises some interesting product development challenges. This blog focuses on product development and the tradeoffs that invariably arise, and definitely not about being critical or analyzing choices made by others, as there are many other places to gain those perspectives. It is worth looking at actions through the lens of the product development discipline.

In this specific case there is an existing code base, legacy code, and a desire to move the code base forward. Expressed in the announcement, however briefly, is the architectural challenge faced by maintaining the multi-process architecture. Relative to the taxonomy from the previous post, this is a clear case of the challenges of moving an architecture forward. The challenge is pretty cut and dry.

The approach taken looks very much like a break in the evolution of the code base, a "fork" as described by some. Also at work are efforts after forking to delete unused code, which is another technique for managing legacy code described previously. These are perfectly reasonable ways to move a code base forward, but they also come with some challenges worth discussing.

What the fork?

(OK, I couldn’t resist that, or the title of this post).

Forking a code base is not just something one can do in the open source world, though there is somewhat of a special meaning there. It is a general practice applicable to any code base. In fact, robust source code control systems are deliberate in supporting forks because that is how one experiments on a code base, evolves it asynchronously, or just maintains distinct versions of the code.

A fork can be a temporary state, or sometimes called a branch when there are several and the intent to be temporary is clear. This is what one does to experiment on an alternate implementation or experiment on a new feature. After the experiment the changes are merged back in (or not) and the branch is closed off. Evolution of the code base moves forward as a singular effort.
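The temporary-branch workflow above can be sketched concretely. The script below drives git from Python to show the cycle: experiment on a branch, merge back, close the branch off. The repository, file, and branch names are illustrative, and it assumes git is on the PATH:

```python
# Illustrative sketch of a temporary fork ("branch"): experiment in
# isolation, then merge back so evolution continues as a singular effort.
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Supply an identity inline so commits work in a clean environment
    return subprocess.run(
        ["git", "-c", "user.email=dev@example.com", "-c", "user.name=Dev",
         *args],
        cwd=cwd, check=True, capture_output=True, text=True)

repo = tempfile.mkdtemp()
git("init", cwd=repo)

# The mainline: the existing (legacy) code base
path = os.path.join(repo, "engine.txt")
with open(path, "w") as f:
    f.write("renderer v1\n")
git("add", ".", cwd=repo)
git("commit", "-m", "mainline", cwd=repo)

# Temporary fork: try an alternate implementation in isolation
git("checkout", "-b", "experiment", cwd=repo)
with open(path, "a") as f:
    f.write("experimental multi-process support\n")
git("commit", "-am", "try multi-process architecture", cwd=repo)

# The experiment worked out: merge the changes back into the mainline
# and close off the branch
git("checkout", "-", cwd=repo)
git("merge", "experiment", cwd=repo)
git("branch", "-d", "experiment", cwd=repo)

print(open(path).read())
```

Had the experiment failed, the last three commands would instead be a checkout of the mainline and a `git branch -D experiment`, discarding the branch without merging.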

A fork can also be permanent. This is where one can either reap significant benefits or introduce significant challenges, or both, in evolving the code. One can imagine forks that look like one of these two:

In the first case, the two paths stay in parallel. That’s an interesting approach. It is essentially saying that the code will do the same thing, but differently. One would use this approach to maintain two variations of the same product with different teams working on them. The differences between the two forks are known and planned. There’s a routine process for sharing changes as each of the branches evolves. In many ways, one could view the current state of WebKit this way, since at no point is there a definitive version in use by every party. You might just call this type of fork a parallel evolution.

In the second case, the two paths diverge and diverge more over time. This too is an interesting approach. This type of fork is a one-time operation and then the evolution of each of the branches proceeds at the discretion of each development team. This approach says that the goals are no longer aligned and different paths need to be followed. There’s no limitation to sharing or merging changes, but this would happen opportunistically, not systematically. Comments from both resulting efforts of the WebKit fork reinforce the loosely coupled nature of the fork, including deleting the code unused by the respective forks along with a commitment to stay in communication.

For any given project, both of these could be appropriate. In terms of managing legacy code, both are making the statement that the existing code is no longer on the right evolutionary path—whether this is a technical, business, or engineering challenge.

Forking is a revolutionary change to a code base. It is sort of the punctuation in a punctuated equilibrium. It is an admission that the path the code and team were on is no longer working.

Maintaining functionality

The most critical choice to make when forking code is to have an understanding of where the functionality goes. In the taxonomy of managing legacy code, a fork is a reboot, not a recast.

From a legacy code perspective, the choice to fork is the same as a choice to rewrite. Forking is just an expedient way to get started. Rather than start from an empty source tree, one can visualize the fork as a tree copy of all the existing code to a new project and a fast start. This isn’t cheating. It can be a big asset or a big liability.
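The tree-copy image can be sketched in a few lines of Python. This is a hedged illustration, not how any real fork is done (real forks use the version control system's copy so history is preserved); the helper name and paths are hypothetical:

```python
import shutil
from pathlib import Path

def fork_project(original: Path, fork: Path) -> int:
    """Start a fork by copying the entire existing tree to a new project.

    Unlike starting from an empty source tree, the fork begins with every
    file -- and every assumption -- of the original. Returns the number of
    files the new project starts with.
    """
    shutil.copytree(original, fork)
    return sum(1 for p in fork.rglob("*") if p.is_file())
```

The fast start is visible immediately: the new project has every file of the old one on day one, which is exactly why it is both an asset and a liability.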

As an asset, if you start from all the same existing code then the chances of being compatible in terms of features, performance, and quality are pretty high. Early in the project your code base looks a lot like the one you started from. The differences are the ones you immediately introduce—deleting code you don’t think you need, rewriting some parts critical to you, refactoring/restructuring for better engineering. All of these are software changes and that means, definitionally, there will be regressions relative to the starting point in the neighborhood of 10%.

On the other hand, a fork done this way can also introduce a liability. If you start from the same code you were just using, then you bring with it all the architecture and features that you had before of course. The question becomes what were you going away from? What was it that could not be worked into the code base the way it stood? The answers to these questions can provide insights into the balance between maintaining exact functionality out of the gate and how fast and well you can evolve towards your new goals down the road.

In both cases, the functionality of the other fork is not standing still (though on a project where your team controls both forks, you can decide resource levels or the amount of change tolerated in each). The functionality of the two code bases will necessarily diverge, because keeping them identical would require doing everything twice in exactly the same way, which will prove to be impossible. In the case of WebKit it is worth noting that it was itself derived from a fork of KHTML, which has since had a challenging path (see http://en.wikipedia.org/wiki/WebKit).

Point of view required

As said, the process of rebooting via any means is a perfectly viable way to move forward in the face of legacy code challenges. What makes it possible to understand a decision to fork is having (or communicating) a point of view as to why a fork (a reboot, rewrite) is the right approach. A point of view simply says what problem is being solved and why the approach solves the problem in a robust manner.

To arrive at such a conclusion, the team needs to have an open and honest dialog about the direction things need to go and the capabilities of the team and existing code to move forward. Not everyone will ever agree—engineers are notoriously polarizing, or some might say “religious”, at moments like this. Those that wrote the code are certain they know how to move it forward. Those that did not write the code cannot imagine how it could possibly move forward. All want ways to code with minimal distraction from their highest priorities. Open minds, experimentation, and shared data are the tools the team must use to arrive (and work it is) at a shared approach for the fork.

If the team chooses a reboot, the critical information to articulate is the point of view of “why”. In other words, what assumptions about the existing code are no longer valid in some new direction or strategy. Just as critical are the new bets or new assumptions that will drive decision making.

This is not a story for the outside world, but is critical to the successful engineering of the code. You really need to know what is different—and that needs to map to very clear choices where one set of assumptions leads to one implementation and another set of assumptions leads to very different choices. Open source turns this engineering dialog into an externally visible dialog between engineers.

Every successful fork is one that has a very clear set of assumptions that are different from the original code base.

If you don’t have a different set of assumptions that are so clearly different to the developers doing the work, then the chances are you will just be forked and not really drive a distinct evolutionary path in terms of innovation.

Knowing this point of view – the pillars driving a change in code evolution – turns into the story that will get told when the next product releases. This story will not only need to explain what is new, but ultimately, as a matter of engineering, will need to explain to all parties why some things don’t quite work the way they do with the other fork, past or present at time of launch.

If you don’t have this point of view when you start the project, you’re not going to be able to create one later in the project. The “narrative” of a project gets created at the start. Only marketing and spin can create a story different than the one that really took place.

–Steven

In the software industry, legacy code is a phrase often used as a negative by engineers and pundits alike to describe the anchor around our collective necks that prevents software from moving forward in innovative ways. Perhaps the correlation between legacy and stagnation is not so obvious—consider that all code is legacy code as soon as it is used by customers and clouds alike.

Legacy code is everywhere. Every bit of software we use, whether in an app on a phone, in the cloud, or installed on our PC is legacy code. Every bit of that code is being managed by a team of people who need to do something with it: improve it, maintain it, age it out. The process of evolving code over time is much more challenging than it appears on the face of it. Much like urban planning, it is easy to declare there should be mass transit, a new bridge, or a new exit, but figuring out how to design and engineer a solution free of disruptions or worse is extremely challenging. While one might think software is not concrete and steel, it has a structural integrity well beyond the obvious.

One of the more interesting aspects of Lean Startup for me is the notion of building products quickly and then reworking/pivoting/redoing them as you learn more from early adopters. This works extremely well for small code and customer bases. Once you have a larger code base or paying customers, there are limits to the ability to rewrite code or change your product, unless the number of new target customers greatly exceeds the number of existing customers. There is the potential to slow or constrain innovation, or to reduce the ability to serve as a platform for innovation. So while being free of any code certainly removes any engineering constraint, few projects are free of existing code for very long.

We tend to think of legacy code in the context of large commercial systems with support lifecycles and compatibility. In practice, lifting the hood of any software project in use by customers will have engineers talking about parts of the system that are a combination of mission critical and very hard to work near. Every project has code that might be deemed too hot to handle, or even radioactive. That’s legacy code.

This post looks at why code becomes legacy so quickly and at some patterns for dealing with it. There’s no simple choice as to how to move forward, but being deliberate and complete in how you do so turns out to be the most helpful. Like so many things, this product development challenge is highly dependent on context and goals. Regardless, the topic of legacy is far more complex and nuanced than it might appear.

One person’s trash is another’s treasure

Whether legacy code is part of our rich heritage to be brought forward or part of historical anomalies to be erased from usage is often in the eye of the beholder. The newer or more broadly used some software is, the more likely we are to see a representation of all views. The rapid pace of change across the marketplace, tools and techniques (computer science), and customer usage/needs only increases the velocity with which code achieves legacy status.

In today’s environment, it is routine to talk about how business software is where the bulk of legacy code exists because businesses are slow to change. The inability to change quickly might not reflect a lack of desire, but merely prudence. A desire to improve upon existing investments rather than start over might be viewed as appropriately conservative as much as it might be stubborn and sticking to the past.

Business software systems are the heart and soul of what differentiates one company’s offering from another. These are the treasures of a company. Think about the difference between airlines or banks as you experience them. Different companies can have substantially different software experiences and yet all of them need to connect to enormously complex infrastructures. This infrastructure is a huge asset for the company and yet is also where changes need to happen. These systems were all created long before there was an idea of consumers directly accessing every aspect of the service. And yet with that access has come an increasing demand for even more features and more detailed access to the data and services we all know are there. We’re all quick to think of the software systems as trash when we can’t get the answer or service we want, when we want it, even though we know it is in there somewhere.

Businesses also run systems that are essential but don’t necessarily differentiate one business from another or are just not customer facing. Running systems internally for a company to create and share information, communicate, or just run the “plumbing” of a company (accounting, payroll) are essential parts of what make a company a company. Defining, implementing, and maintaining these is exactly the same amount of work as the customer facing systems. These systems come with all the same burdens of security, operations, management, and more.

Only today, many of these seem to have off-the-shelf or cloud alternatives. Thus the choices made by a company to define the infrastructure of the company quickly become legacy when there appear to be so many alternatives entering the marketplace. To the company with a secure and manageable environment these systems are assets or even treasures. To the folks in a company “stuck” using something that seems more difficult or worse than something they can use on the web, these seem like crazy legacy systems, or maybe trash.

Companies, just as cities, need to adapt and change and move forward. There’s not an option to just keep running things as they are—you can’t grow or retain customers if your service doesn’t change but all the competitors around you do. So your treasure is also your legacy—everything that got you to where you are is also part of what needs to change.

Thinking about the systems consumers use quickly shows how much of the consumer world is burdened by existing software that fits this same mold—is the existing system trash or treasure? The answer is both and it just depends on who you ask or even how you ask.

Consumer systems today are primarily service-based. As such the pace of change is substantially different from that of the old packaged software world, since changes need only take place at the service end, without action by consumers. This rapid pace of change is almost always viewed as a positive, unless it isn’t.

The services we all use are amazing treasures once they become integral to our lives. Mail, social networking, entertainment, as well as our banking and travel tools are all treasures. They can make our lives easier and more fun. They are all amazing and complex software systems running at massive scale. To the companies that build and run these systems, they are the company treasures. They are the roads and infrastructure of a city.

If you want to start an uproar with a consumer service, just change the user interface a bit. One day your customers (users, people) sign on and there’s a who-moved-my-cheese moment. Unlike the packaged software world, no choice was made and no time was set aside; rather, just when you needed to check your mail, update status, or read some news, everything is different. Generally, the more acute your experience is, the more wound up you get about the change. Unlike adding an extra button to an already crowded toolbar, a menu command at the end of a long menu, or just a new set of optional customizations, this in-your-face change is very rarely well-received.

Sometimes you don’t even need to change your service, but just say you’re going to shut it down and no longer offer it. Even if the service hasn’t changed in a long time or usage has not increased, all of a sudden that legacy system shows up as someone’s treasure. City planners trying to find new uses for a barely used public facility or rezone a parking lot often face incredible resistance from a small but stable customer population, even if the resources could be better used for more people. That old abandoned building is declared an historic landmark, even if it goes unused. No matter how low the cost or how rich the provider, resources are finite.

The uproar that comes from changing consumer software represents customers clamoring for maintaining the legacy. When faced with a change, it is not uncommon to see legacy viewed as a heritage rather than with the negatives usually associated with software legacy.

Often those most vocal about the topic have polarizing views on changes. Platforms might be fragmented and the desire is expressed to get everyone else to change their (browser, runtime, OS) to keep things modern and up to date—and this is expressed with extreme zest for change regardless of the cost to others. At the same time, things that impact a group of influentials or early adopters are most assailed when they change in ways that run counter to conventional wisdom.

Somewhere in this world, where change and new are so highly valued and same represents old and legacy, lies a real product development challenge. There are choices to be made in product development about the acceptance and tolerance of change, the need to change, and the ability to change. These are questions without obvious answers. While “one person’s trash is another’s treasure” makes sense in the abstract, what are we to do when it comes to moving systems forward?

Why legacy?

Let’s assume it is impossible to really say whether code is legacy to be replaced or rewritten or legacy to be preserved and cherished. We should stipulate this because it doesn’t really matter for two reasons:

Assuming we’re not going to just shut down the system, it will change. Some people will like the change and others will not. One person’s treasure is another’s trash.

Software engineering is a young and evolving field. Low-level architecture, user interaction, core technologies, tools, techniques, and even tastes will change, and change dramatically. What was once a treasured way to implement something will eventually become obsolete or plain dumb.

These two points define the notion that all existing code is legacy code. The job of product development is to figure out which existing code is a treasure and which is trash.

It is worth having a decision framework for what constitutes trash for your project. Part of every planning process should include a deliberate notion of what code is being treated as trash and what code is a treasure. The bigger the system, the more important it is to make sure everyone is on the same page in this regard. Inconsistencies in how change is handled can lead to frustrated or confused customers down the road.

Written with different assumptions

When a system is created, it is created with a whole host of assumptions. In fact, a huge base of assumptions is not even chosen deliberately at the start of a project. Everything from the programming language to the platform to the basic architecture is chosen rather quickly at the start of a project. It turns out these choices put the system on a trajectory that will consistently reinforce assumptions.

We’ve seen detailed write-ups of the iOS platform and the evolution of apps relative to screen attributes. On the one hand developers coding to iOS know the specifics of the platform and can “lock” that assumption—a treasure for everyone. Then characteristics of screens potentially change (ppi, aspect ratio, size) and the question becomes whether preserving the fixed point is “supporting legacy” or “holding back innovation”.

While that is a specific example, consider broader assumptions such as bandwidth, cpu v. gpu capability, or even memory. An historic example would be how for the first ten years of PC software there was an extreme focus on reducing the amount of memory or disk storage used by software. Y2K itself was often blamed on people trying to save a few bits in memory or on disk. Structures were packed. Overlays were used. Data stored in binary on disk.

Then one day 32 bits, virtual memory, and fast gigabyte disks became normal. For a short time there was a debate about sloppy software development (“why use 32 bits to represent 0–255?”) but by and large software developers were making different assumptions about what was the right starting point. Teams went through code systematically widening words, removing the complexity of the 16-bit address space, and so on.
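The bit-packing mindset, and the widening that followed, can be sketched with Python's `struct` module. The record layout here is hypothetical, chosen only to show the trade-off:

```python
import struct

# A bit-packed, 16-bit-era record: the year stored as an offset from 1900
# to save bytes on disk -- exactly the kind of thrift blamed for Y2K.
PACKED = struct.Struct("<HBB")    # year-offset:16, month:8, day:8 -> 4 bytes
# The same record after "widening words" to plain 32-bit fields.
WIDENED = struct.Struct("<III")   # 12 bytes: wasteful then, normal later

record = (1999 - 1900, 12, 31)
assert PACKED.size == 4 and WIDENED.size == 12
assert PACKED.unpack(PACKED.pack(*record)) == record
```

Both layouts round-trip the same data; what changed was the assumption about whether those extra eight bytes mattered.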

These changes came with a cost—it took time and effort to update applications for a new screen or revisit code for bit-packing assumptions. These seem easy and right in hindsight—these happen to be transparent to end-users. But to a broad audience these changes were work and the assumptions built into the code so innocently just became legacy.

It is easy for us to visualize changes in hardware driving these altered assumptions, but assumptions in the software environment are just as pervasive. Concepts range from changes in interaction widgets (commands to toolbars to context-sensitive) to metaphors (desktop or panels) to assumptions about expected behavior (spell checking). The latter is interesting because a local dictionary that improved over time and supported local custom dictionaries was once state of the art. Today the expectation is that a web service is the best way to know how to spell something. That’s because you can assume connectivity and assume a rich backend.

When you start a new project, you might even take a step back and try to list all of the assumptions you’re making. Are you assuming screen size or aspect ratio, keyboard or touch, unlimited bandwidth, background processing, single user, credit cards, left-to-right typing, or more? It is worth noting that in the current climate of cross-platform development, the assumptions made on target platforms can differ quite a bit—what is easy or cheap on one platform might be impossible or costly on another. So your assumptions might be inherited from a target platform. The list of things one might assume at the start of a project is remarkably long, and each of those assumptions translates into a potential roadblock to evolving your system.

Evolved views of well-architected

Software engineering is one of the youngest engineering disciplines. The whole of the discipline spans barely a generation, particularly if you consider the microprocessor-based view of the field. As defined by platforms, the notion of what constitutes a well-architected system is something that changes over time. This type of legacy challenge is one that influences engineers in terms of how they think about a project—this is the sort of evolution that makes it easy or difficult to deliver new features, but might not be visible to those using the system.

As an example, the evolution of where code should be executed in a system parallels the evolution of software engineering. From thin-client mainframes to rich-client tightly-coupled client/server to service-oriented architecture we see very different views of the most fundamental choice about where to put code. From modular to structured to object-oriented programming and more we see fundamentally different choices about how to structure code. From a focus on power, cores, and compute cycles to graphics, mobility, and battery life we see dramatic changes in what it means to be modern and well-architected.

The underlying architecture of a system affords developers a (far too) easy way to declare something legacy code to be reworked. We all know a system written in COBOL is legacy. We all know that if a system is a stateful client application that must be installed in order to be used, it needs to be replaced.

When and how to make these choices is much more complex. These systems are usually critical to the operations of a business and it is often entirely possible (or even easier) to continue to deliver functionality on the existing system rather than attempt to replace the system entirely.

One of the most eye-opening examples of this for me is the description of the software developed for the Space Shuttle, which is a long-term project with complexity beyond what can even be recreated, see Architecture of the space shuttle primary avionics software system. The state of the art in software had moved very far, but the risks or impossibility of a modern and current architecture outweighed the benefits. We love to say that not every project is the space shuttle, but if you’re building the accounts system for a bank, then that software is as critical to the bank as avionics are to the shuttle. Mission critical is not only an absolute (“lives at stake”) but also relative in terms of importance to the organization.

A very smart manager of mine once said “given a choice, developers will always choose to rewrite the code that is there to make it better”. What he meant was that taken from a pure engineering approach, developers would gladly rewrite a body of code in order to bring it up to modern levels. But the downside of this is multi-faceted. There’s an opportunity cost. There’s often an inability to clearly understand the full scope of the existing system. And of course, basic software engineering says that 10% of all code changes will yield regressions. Simply reworking code because the definition of well-architected changed might not always be prudent. The flip side of being modern is sometimes the creation of second system syndrome.

Changed notion of extensibility

All software systems with staying power have some notion of extensibility or a platform. While this could be as obvious as an API for system services, it could also be an add-in model, a wire protocol, or even file formats. Once your system introduces extensibility it becomes a platform. Someone, internal or external, will take advantage of your extensibility in ways you probably didn’t envision. You’ve got an instant legacy, but this legacy is now a dependency to external partners critical to your success.
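The speed with which an extensibility point becomes a contract can be sketched in a few lines. This is a hedged illustration of the general pattern, not any particular product's API; the hook names are hypothetical:

```python
# A minimal add-in model: the host exposes one hook. From the moment an
# external extension registers, this tiny API is a platform -- a contract
# the host can no longer change casually.
_hooks = {}

def register(event, callback):
    """Called by extensions; this signature is now part of the platform."""
    _hooks.setdefault(event, []).append(callback)

def fire(event, payload):
    """Called by the host; extensions may use payload in unforeseen ways."""
    return [callback(payload) for callback in _hooks.get(event, [])]

# An external partner's add-in, written against version 1 of the hook:
register("document-saved", lambda doc: "indexed " + doc)
```

Renaming the event, reordering callbacks, or changing what `payload` contains now breaks code the host team has never seen.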

In fact, your efforts at delivering goodness have quickly transformed someone else’s efforts. What was a feature to you can become a mission critical effort for your customer. This is almost always viewed as a big win—who doesn’t want people depending on your software in this way? In fact, it was probably the goal to get people to bet their efforts on your extensibility. Success.

Until you want to change it. Then your attempts to move your platform forward are constrained by what you put in place in the first version. And often your first version was truly a first version. All the understanding you had of what people wanted to do and would do is now informed by real experience. While you can do tons of early testing and pre-release work, a true platform takes a long time before it becomes clear where efforts at tapping extensibility will be focused.

During this time you might even find that the availability of one bit of extensibility caused customers to look at other parts of your system and invent their own extensibility or even exploit the extensibility you provided in ways you did not intend.

In fact whole industries can spring up based on pushing the limits of your extensibility: browser toolbars, social network games, startup programs.

Elements of your software system that are “undocumented implementation” get used by many for good purposes. Reverse-engineered file formats, wire protocols, or just hooking things at a low level all provide valuable functionality for data transfer, management, or even making systems accessible to users with special needs.

Taking it a step further, extensibility itself (documented or implied) becomes the surface area to exploit for those wishing to do evil things to your system or to use your system as a vector for evil.

What was once a beautiful and useful treasure can quickly turn into trash or worse. Of course, if bad things are happening, you can seek to remove the surface area exposed by your system, and even then you can be surprised at the backlash that comes. A really interesting example of this is back in 1999 when the “Melissa” virus exploited the automation in Outlook. The reaction was to disable the automation, which broke a broad class of add-ins and ended up calling into question the very notion of extensibility and automation in email. We’ve seen similar dynamics with viral gaming in social networks, where the benefits are clear but once exploited the extensibility can quickly become a liability. Melissa was not a security hole at the time, but since then the notion of extensibility has been redefined, and so systems with or utilizing such extensibility get viewed as legacy systems that need to be rethought.

Used differently

While a system is being developed, there are scenarios and workflows that define the overall experience. Even with the best possible foresight, it is well-established that there is a high error rate in determining how a system will be used in the real world. Some of these errors are fairly gross but many are more nuanced, and depend on the context of usage. The more general purpose a system is the more likely it is to find the usage of a system to be substantially different from what it was designed to do. Conversely, the more task-oriented a system is the more likely it is to quickly see the mistakes or sub-optimal choices that got made.

Usage quickly gets to the assumptions built into the system. List boxes designed to hold 100 names work well unless everyone has 1000 names in their lists. Systems designed for high-latency networks behave differently when everyone has broadband. And while your web site might be great on a 15” laptop, one day you might find more people accessing it from a mobile browser with touch. These represent the rug being pulled out from under your usage assumptions. Your system implementation became legacy even while people were using it, simply because they used it differently than you assumed.

At the same time, your views evolve on where you might want to take the system or experience. You might see new ways of input based on innovative technologies, new ways of organizing the functionality based on usage or increase in feature scope, or whole new features that change the flow of your system. These step-function changes are based on your role as designer of a system and evolving it to new usage scenarios.

Your view at the time when designing the changes is that you’re moving from the legacy system. Your customers think of the system as treasure. You view your change as the new treasure. Will your customers think of them as treasure or trash?

In these cases the legacy is visible and immediately runs into the risks of alienating those using your system. Changes will be dissected and debated among the core users (even for an internal system—ask the finance team how they like the new invoicing system, for example). Among breadth users the change will be just that, a change. Is the change a lot better or just a lot different? In your eyes or customer’s eyes? Are all customers the same?

We’re all familiar with the uproar that happens when a user interface changes. From the version upgrades of DOS classics like dBase or 1-2-3 through the most recent changes to web-based email, search, or social networking, changing the user experience of existing systems to reflect new capabilities or usage is easily the most complex transformation that existing, aka legacy, code must endure.

Approaches

If you waded through the above examples of what might make existing code legacy code you might be wondering what in the world you can do? As you’ve come to expect from this blog, there’s no easy answer because the dynamics of product development are complex and the choices dependent upon more variables than you can “compute”. Product development is a system of linear equations with more variables than equations.

The most courageous efforts of software professionals involve moving systems forward. While starting with a clean slate is often viewed as brave and creative, the reality is that it takes a ton of bravery and creativity to decide how to evolve a system. Even the newest web service quickly becomes an enormous challenge to change—the combination of engineering complexities and potential for choosing “wrong” are enough to overwhelm any engineer. Anyone can just keep something running, but keeping something running while moving it to new and broader uses defines the excitement of product development.

Once you have a software system in place with customers/users, and you want to change some existing functionality there are a few options you can choose from.

Remove code. Sometimes the legacy code can just be removed. The code represents functionality that should no longer be part of your system. Keeping in mind that almost no system has something totally unused, you’re going to run into speed bumps and resistance. While it is often easy to think of removing a feature, chances are there are architectural dependencies throughout a large system that depend not just on the feature but on how it is implemented. Often the cost of keeping an implementation around is much lower than the perceived benefit of removing it. There’s an opportunity to make sure that the local desire to have fewer old lines of code to worry about is not trumping a global desire to maintain stability in the overall development process. On the other hand, there can be a high cost, or outright impossibility, to keeping the old code around. The code might not meet modern standards for privacy or security; even though it is not executed, it exposes surface area that could be, for example.

Run side by side. The most common refrain for any user-interface change to existing code is to leave both implementations running and provide a compatibility mode or switch that returns to the old way of doing things. Because leaving code around seems cheap, those on the outside of a project often view keeping old code paths around as relatively low cost. As easy as this sounds, the old code path still has operational complexities (in the case of a service) and/or test-matrix complexities that carry real costs, even if there is no runtime cost to those not using it (code not executed doesn’t take up memory or drain power). The desire most web developers have to stop supporting older browsers is essentially this argument—keeping the existing code around is more trouble than it is worth. Side by side is almost never a practical engineering alternative. From a customer point of view it seems attractive, except that inevitably the question becomes “how long can I keep running things the old way?” Something claimed to be a transition quickly turns into a permanent fixture. Sometimes that temporary ramp the urban planners put in becomes pretty popular. There’s a fun Harvard Business School case on the design of the Office Ribbon ($) that folks might enjoy since it tees up this very question.
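A compatibility switch of this sort is often just a branch at the top of the code path. This sketch (the `render` function and its mode flag are hypothetical) shows why the old path never really goes away: every future change must now be validated against both branches:

```python
def _render_classic(page):
    # The old path costs nothing at runtime when unused, but it still
    # doubles the test matrix: every change is verified both ways.
    return f"<table>{page}</table>"

def _render_modern(page):
    return f"<div>{page}</div>"

def render(page, compatibility_mode=False):
    # Both implementations stay alive; the switch decides which one runs.
    if compatibility_mode:
        return _render_classic(page)
    return _render_modern(page)
```

The switch is trivial to add; the hard part is deciding when `compatibility_mode` can finally be retired, which is exactly the “how long can I keep running things the old way?” question.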

Rewrite underneath. When architectural assumptions change, one approach is to just replumb the system. Developers love this approach. It is also enormously difficult. Implicit in taking this approach is that the rest of the system “above” will function properly in the face of a changed implementation underneath, or that there is an obvious match from one generation of plumbing to another. While we all know good systems have abstractions and well-designed interfaces, those interfaces still depend on characteristics of the underlying architecture. An example is what happens when you take advantage of a great abstraction like file I/O and then dramatically change the characteristics of the system by moving to SSDs. While you want everything to just be faster, the whole system depended on the latency and responsiveness of storage that operated an order of magnitude slower. It just isn’t as simple as rewriting—the changes ripple throughout the system.
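The file-I/O example can be made concrete: a stable interface hides which storage backend is underneath, but the layers above it may still carry machinery (here, a cache) that was tuned for the old backend’s latency. A sketch, with all class names hypothetical:

```python
from abc import ABC, abstractmethod

class BlockStore(ABC):
    """Hypothetical storage abstraction the rest of the system codes against."""
    @abstractmethod
    def read(self, key): ...
    @abstractmethod
    def write(self, key, value): ...

class SpinningDiskStore(BlockStore):
    def __init__(self):
        self._data = {}
        self._cache = {}  # caching layer built around slow seeks

    def read(self, key):
        if key in self._cache:   # code above was tuned around this hit path
            return self._cache[key]
        value = self._data.get(key)
        self._cache[key] = value
        return value

    def write(self, key, value):
        self._data[key] = value
        self._cache.pop(key, None)  # keep the cache coherent on writes

class SsdStore(BlockStore):
    # Swapping this in satisfies the same interface, but callers above may
    # still carry caching and batching logic tuned for millisecond seeks.
    def __init__(self):
        self._data = {}

    def read(self, key):
        return self._data.get(key)

    def write(self, key, value):
        self._data[key] = value
```

The interface stays identical, yet the performance assumptions baked into every layer above it do not, which is why “just replumb it” is rarely just.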

Staged introduction. Given the complexities of both engineering a change and rolling it out to customers, a favored approach is often the staged rollout, in which the changes are integrated over time through a series of more palatable steps. Perhaps architectural changes are done first, or some amount of existing functionality is maintained initially. Ironically, this brings us back to the implication that businesses are the ones slow to change and carrying the most legacy. In fact, businesses most often employ the staged rollout of system changes. This seems the most practical approach: it doesn’t have the drama of a disruptive change or the apparent smoothness of a compatibility mode, but it does take longer.
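Staged rollouts of this kind are commonly implemented by hashing each user into a stable bucket and dialing a percentage up over time. A minimal sketch, assuming a simple user-id string (the function name and parameters are illustrative):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically assign a user to a staged-rollout bucket.

    Hashing the feature and user together yields a stable bucket in 0-99,
    so the same user sees the same behavior on every visit while the
    percentage is dialed up over time (5% -> 50% -> 100%).
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because the bucket is stable, raising `percent` only ever adds users to the new code path; nobody flips back and forth between old and new behavior as the rollout widens.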

Taking these as potential paths to manage transitions of existing code, one might get discouraged. It might even be that it seems like the only answer is to start over. When thinking through all the complexities of evolving a system, starting over, or rebooting, becomes appealing very quickly.

Dilemma of rebooting

Rebooting a system has a great appeal when faced with a complex system that is hard to manage, was architected for a different era, and is loaded with dated assumptions.

This is even more appealing when you consider that the disruption in the marketplace driving the need for a whole new approach is likely being led by a new competitor that has no existing customers or legacy. This challenge gets to the very heart of the innovator’s dilemma (or disruptive technologies). How can you respond when you’ve got a boat anchor of code?

Sometimes you can call this a treasure or an asset. Often you call them customers.

It is very easy to say you want to rewrite a system. The biggest challenge is figuring out whether you mean to literally rewrite it or simply to recast it. A rewrite implies that you will carry forward everything you previously had, somehow improved along the dimension driving the need to rework the system. This is impossibly hard. In fact it is almost impossible to name a total rewrite that worked without some major disruption, a big bet, and a transition plan that was itself a major effort.

The dilemma in rewriting the system is the amount of work that goes into the transition. Most systems are not documented or characterized well enough to even know whether you have completely and satisfactorily rewritten them. The implications of releasing a system you believe is functionally equivalent, but that turns out not to be, are significant in terms of mismatched customer expectations. Even small parts of a system can be enormously complex to rewrite in the sense of bringing forward all existing functionality.

On the other hand, if you have a new product that recasts the old one, but along the lines of different assumptions or different characteristics then it is possible to set expectations correctly while you have time to complete the equivalent of a rewrite or while customers get used to what is missing. There are many challenges that come from implementing this approach as it is effectively a side-by-side implementation but for the entire product, not just part of the code.

Of course an alternative is just an entirely new product that is positioned to do different things well, even if it does some of what the existing product does. Again, this simply restates the innovator’s dilemma argument. The only difference is that you employ it against your own system.

The biggest frustration software folks have with the “build a new system that doesn’t quite do everything the old one did” approach is the immediate realization of what is missing. From mail clients to word processors to development tools and more, anything that comes along that is entirely new and modern is immediately compared to the status quo. This is enormously frustrating because of course as software people we are familiar with what is missing, just as we’re familiar with finite time and resources. It is even more interesting when the comparison is made to a competitor who only does new things in a modern way. Solid-state storage is fast, reliable, and more, yet how often was it described as expensive and low capacity relative to 1TB spindle drives? Which storage are we using today on our phones, tablets, PCs, and even in the cloud? Cost came down and capacities increased.

It is also just as likely that features deemed missing in some comparison to the existing technology leader will prove less interesting as time goes by. Early laptops that lacked wired networking or RGB ports were viewed quite negatively. Today those ports just aren’t critical. It isn’t that networking or projection stopped being critical, but that they have been recast in terms of implementation: today we think of Wi-Fi or 4G, along with technologies for wireless screen sharing, rather than wires for connectivity. The underlying scenario didn’t change; how it gets done was radically transformed.

This leads to the reality that systems will converge. While you might think “oh we’ll never need that again” there’s a good chance that even a newly recast, or reimagined, view of a system will quickly need to pick up features and capabilities previously developed.