field notes – LeadingAgile
https://www.leadingagile.com
Agile Training | Agile Coaching | Agile Transformation

Balanced Professional Interest
https://www.leadingagile.com/2018/12/balanced-professional-interest/
Tue, 18 Dec 2018 12:55:29 +0000

Warning: Preachy content.

In working with technical people at the individual and team levels, I often find attitudes that pull toward one extreme or the other: Either our work is inherently uninteresting, and we’re only in it for the paycheck; or our work is a boundless source of joy, learning, and achievement through which we can transcend the human condition.

Both have it partly right. But I think both are missing a thing or two.

tl;dr (Conclusion in a Nutshell)

Don’t be put off when agilists seem to be demanding more of you than is reasonable. They like to use extreme language, like awesome and passionate. They really mean competent and professionally engaged. On the other hand, software work is more than “just a paycheck,” even if it’s less than “a profession” in the sense of medicine or law. You have to do more than just show up.

Extreme #1: It’s Just a Paycheck

Technical Coach (TC): We’re here to help improve the organization, and to help technical teams apply good practices to produce high-quality products. Are you up for it?

Software Engineer (SE): Couldn’t care less. You’ll be gone soon. You aren’t the first consultants to blow through here, you know.

TC: What about professionalism?

SE: Professionals are licensed and regulated. We don’t have letters after our names. You don’t, either. We just write code. So, stuff it!

TC: What about CSP or PMP?

SE: <tone mood=”sarcastic”>What about MD or PE?</tone>

TC: Fair enough. Even if we aren’t formally licensed, don’t you think we do better work when we approach it in a professional way, at least?

SE: <gesture type=”wave” style=”dismissive”>This is just a paycheck.</gesture> As long as I can take care of my family, I don’t care what software I write.

TC: Did you hear about that guy at Volkswagen who got prison time for writing code to cheat emissions tests?

SE: <gesture type=”eyeroll”>This isn’t Volkswagen.</gesture>

TC: But at least you care about doing a good job on the code, right? For the sake of craft.

SE: If it’s good enough to run in production, it’s good enough. Nobody cares if it’s “clean” and all that jazz.

TC: And personal pride?

SE: Piffle (or words to that effect). Get over yourself.

TC: But at least you care if the code you write helps or hurts people, right?

SE: Pie in the sky hippie nonsense. What do I care if my work helps humanity? If I earn enough to pay my bills and feed my family, I don’t care what the software does.

TC: So, your only motivation is to make money.

SE: Right.

TC: And you don’t care how?

SE: Nah. I write software because that’s what I was trained to do. That’s all. If I were trained to do something else, then I’d do that instead.

TC: Would you change jobs if you were offered $1 a year more?

SE: In a heartbeat.

TC: And you wouldn’t care what the job was?

SE: If it pays $1 more a year, then no.

TC: So, you’d take a job as a hired killer or a human trafficker, provided you were paid $1 a year more than you are right now?

SE: We’re done here.

Extreme #2: Gotta be Awesome!

Agile Coach (AC): We’re here to make the company #Agile and to make your team #Agile and to make you #Agile! It’s gonna be awesome! #Agile! Yay! #Awesome!

Software Engineer (SE): Couldn’t care less. You’ll be gone soon. You aren’t the first consultants to blow through here, you know.

AC: Oh, but you’re awesome. You just haven’t discovered your intrinsic awesomeness. Don’t you want to be awesome? Of course you do! Who doesn’t want to be awesome? Agile! Yay!

SE: Okay, sure. Awesome sounds good. How can I be awesome?

AC: Great! Awesome! The first step is you have to be passionate!

SE: Passionate about what?

AC: About your work, of course! Your work is awesome!

SE: Actually, my work isn’t all that awesome. When they hired me, they made me do a bunch of arcane technical exercises to prove I was a great developer. And for what? This job doesn’t call on any of those skills. I support an old Java Spring MVC app. Just about all the logic is contained in the no-arg constructor of a Spring-loaded bean. It’s 4,000 lines long. There’s also a utility class with fifty static methods. The whole thing runs on an obsolete version of the framework and doesn’t work with any supported version of Java.

AC: Wow, that sounds awesome!

SE: Well, it isn’t.

AC: It can be, if you’re passionate!

SE: Really?

AC: Sure! Look, it’s a question of mindset, isn’t it? Antoine de Saint-Exupery wrote, “If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.”

SE: Hmm. I can see, maybe, yearning for the endless immensity of a better job.

AC: The job is what you make of it. If you can get into the right frame of mind, it can be awesome! Trust me, I’m a Certified ScrumMaster!

SE: Master, eh? It must have taken a lot to get that. I mean, you don’t throw around a word like “master” lightly, right?

AC: Oh, yeah. You said it. Three days of games and snacks, and a short multiple-choice quiz at the end.

SE: Learned a lot from that, did you?

AC: You’ve no idea. Legos®. Pipe cleaners. Sticky notes. I can draw a picture of a three-legged stool. <gesture type=”CountOnFingers”>The first leg is Transparency, the sec—</gesture>

SE: Yeah…yeah, it sounds pretty intense. Um…look, I don’t think I can be passionate about this.

AC: Well, you’ve got to find your passion, if you want to be awesome!

SE: Have you ever maintained a crufty, monolithic Java webapp?

AC: <tone mood=”hesitant”>Not…as such.</tone>

SE: But you’ve maintained some other sort of crufty, monolithic code, right? So you have an idea how awesome it isn’t?

AC: <tone mood=”hesitant”>I wasn’t actually in software before I took the CSM class.</tone> <tone mood=”enthusiastic”>But that doesn’t matter. It’s all about passion, mindset, spirit. You know, culture.</tone>

SE: We’re done here.

Words Have Impact

It’s only fair to warn you that I’m still in the midst of a 12-step program to get over Caring Too Much About What Words Mean, because in the next few sentences I’m about to care too much about the words passionate and awesome.

In plain English, “passionate” doesn’t mean that you enjoy doing something or that you have a serious professional commitment to it. It means far, far more than that. Your passion in life is that which motivates you to keep on breathing; nothing less.

When agilists tell you to find passion in your work, they’re literally suggesting that you place your day job ahead of your family, your deep personal interests, your country, your religion, and everything else. That isn’t what they actually mean, of course, but it is what they say.

In plain English, “awesome” doesn’t mean “pretty good.” It refers to something that inspires awe. And “awe” means (according to Merriam-Webster), “an emotion variously combining dread, veneration, and wonder that is inspired by authority or by the sacred or sublime : stood in awe of the king : regard nature’s wonders with awe.”

When you walk into the room at your place of work, do you really want to inspire an emotion variously combining dread, veneration, and wonder, as if your colleagues were left speechless before the glory of the universe? Isn’t that a little bit excessive? Maybe it’s just me, but frankly I’d be uncomfortable with that sort of attention.

Here’s something I consider “awesome.” It’s the Antennae Galaxies, NGC 4038 and NGC 4039; two galaxies that are in a slow-motion collision. (Public domain photo from Wikipedia.)

Here’s another: the creation account in Genesis. It’s pretty awesome to be able to create a universe just by saying it.

Have you written any code lately that has inspired awe on the level of those examples? Neither have I. Even if, as the poet suggests, “I contain multitudes,” how many of said multitudes can inspire an emotion variously combining dread, veneration, and wonder, as if one were left speechless before the glory of the universe? Not many, I’ll warrant. If I thought that’s what was expected of me, I’d be pretty stressed out.

Is it any wonder that people react to this sort of thing by moving toward Extreme #1? A question posted to Reddit offers one example.

This is only one small example of the effect the extreme language is having on people in the industry. There’s a perception that in order to have a successful career in software development, you have to sacrifice every minute of your life to it. A fair amount of angst is expressed on social media over the perceived “fact” that you can’t get a job as a software developer unless you have a portfolio of Github repositories, as if you were an architect or a fashion model with a portfolio of samples of your work. There’s worry that employers expect software developers to be passionate enough to spend all their spare time coding, and the results of that coding must be awesome.

Reality vs. Perception

It isn’t so. You don’t have to have side projects. You don’t have to spend all your time coding.

People worry anyway. I think it’s because of the constant barrage of excessive language about passion and awesomeness. Other than spending all your time coding, how else could you possibly become “awesome”: literally capable of inspiring “an emotion variously combining dread, veneration, and wonder” based on your software work?

It turns out agilists aren’t asking us to be “awesome” in any real sense of the word. They’re just talking about baseline professional performance. Consider this post by Martin Oesterberg, entitled “Team Mission and Definition of Awesome”. I’m not picking on him; I actually like this example because the title explicitly states that the post will provide a definition of “awesome” as the word applies to “agile” software teams. (His definition happens to play into my agenda, too, but that’s purely coincidental. Pay no attention to the man behind the curtain.)

Martin describes several different kinds of teams. I ask you to notice something: The descriptions of awesomeness are descriptions of baseline characteristics of a team. The word “awesome” is over the top. The teams are just not dysfunctional, that’s all. Lacking those characteristics, they aren’t “teams” at all; they’re work groups. It isn’t a question of Awesome Team vs. Regular Team; it’s a question of Team vs. Clump-o-Folks.

Linguistic reinforcing loop

How did we get here? I think there has been a reinforcing loop in the “agile” space. Agilists have used extreme words in an attempt to inspire people to pause and think about ways to improve their work and their working lives. That’s a worthwhile thing to do. But people have reacted to the excessive language by moving toward Extreme #1: get off my back, it’s just a job, it’s not Life. Agilists then reacted to the reaction by moving toward Extreme #2: they amped up the rhetoric in an attempt to be even more inspiring. People then reacted…well, you know.

Maybe if agilists would resist the temptation to use extreme words like awesome and passionate when all we really mean are, respectively, competent and professionally engaged; and if people would resist the temptation to over-react to those words without even asking for clarification first, then we wouldn’t find everyone clustered at the extremes.

Okay, so if we aren’t supposed to cluster at the extremes, then where are we supposed to cluster?

Survival and self-actualization

There was this guy, Abraham Maslow, who came up with a model. The model suggests human needs exist in a hierarchy. Basic needs have to be met before people can pursue higher-level needs. It seems pretty sensible to me. I’m not going to reproduce a copyrighted image, but here’s a site that has a diagram of the model: https://simplypsychology.org/maslow.html.

If you’re drowning, your immediate need is air. You won’t be thinking about whether your day job makes you awesome, or even if it pays enough to take care of your family, while you’re under water.

Let’s say you get out of the water. Assuming you have a dependable supply of air, your basic needs might be water and food. You won’t go looking for water while you can’t breathe, but once you’ve got the breathing thing handled, your next goal might be to find water, and maybe something to wash down with it. So, even if it’s dangerous, you’re going to go out into the world to look for water and food.

Let’s say you’ve got a reliable supply of water and food. Are you still going to rush out into the dangerous world? According to Maslow, your next level of needs is physical safety and security. So you won’t go looking for trouble, as long as you have enough water and food. The ancient Romans knew people wouldn’t go looking for trouble as long as they had plenty to eat, and maybe a little entertainment to watch while they ate it.

Those are your basic needs. Once those are handled, you’re going to be interested in psychological needs, like friendships, intimate relationships, and belonging to a group. You won’t care too much about those things if you’re in direct, immediate physical danger, but otherwise you’re going to want them.

The next level up from there is a feeling of accomplishment, of recognition by other members of your group. Of course, if you aren’t in a group at all, you can’t pursue this level of needs. But once you’re accepted as a member of a group, you’re going to be looking for recognition: prestige, rank, and all that.

Given all that, you’ll probably be interested in self-fulfillment. Maslow calls it self-actualization, or becoming the person you’re meant to be. This isn’t possible if you’re still struggling to meet more-fundamental needs.

Options and Necessities

All this stuff about self-actualization probably sounds absurd unless you’re at a level in Maslow’s hierarchy where self-actualization is feasible. If you’re just barely hanging on, as many people are, even in affluent countries, then you’re still worried about where your next meal is coming from or how you’ll manage to pay the rent next week. You may have to choose between rent or medications that you need. You may have to go hungry in order to keep your kids fed, since your ex isn’t helping. Unfortunately, there are lots of people in that sort of situation. Self-actualization isn’t on the menu.

The technical people I work with are not in that category. They aren’t desperate to survive. They’re in a better position than that. Their circumstances do not compel them to take the attitude that their job is “just a paycheck.” Anyone who has a job in the software field is in a privileged position: They needn’t worry about water, food, and all that. They are in a position to pursue self-actualization. Yet, many of them complain endlessly about their jobs. I’m sure there are plenty of people around who would gladly trade places with them.

On the other hand, it’s only fair to recognize that there are more-exciting and less-exciting assignments within the software field. Those enthusiastic Agile coaches may be asking for too much when they insist we have to be “passionate” about everything. If your job is to tweak the same Siebel work flow definition over and over, or modify the same COBOL paragraph over and over, or restart the same server over and over when the out-of-date COTS package it hosts runs out of memory, how will you dredge up anything like “passion” for the work? What the hell is “awesome” about it? It’s an unreasonable demand.

Where’s Your Passion?

People like Bill Gates and Steve Jobs might be able to find self-actualization through their “day jobs.” I’m not sure that works for most people. People may have a compelling interest in something unrelated to their paid work, and that’s where they find self-actualization. It may be raising your kids and being a good partner to your spouse. It may be contributing to community service programs through your church or a non-profit organization. It may be making craft furniture with wood. It may be coaching youth sports teams. It may be restoring old vehicles. It may be martial arts, music, sculpture, or amateur astronomy. It may be reading about history or philosophy.

Your “passion” may not lie in an area that has the potential to pay your bills. That’s how it goes. Human value and market value don’t always align. With a “day job” in the software field, you’re positioned to pursue self-actualization wherever you need to pursue it. That’s not a bad spot to be in.

Balance

This is a field that calls for professional responsibility; we aren’t just digging holes wherever the foreman points and says, “dig a hole there.” The attitude that our work is “just a paycheck” is inappropriate. It isn’t good enough just to show up, even if passion is an unrealistic goal. The question becomes: Is there a practical middle ground where people can function as fully-engaged professionals and yet live balanced, fulfilling lives?

When I was working with a large client in the financial sector not too long ago, one of the younger developers asked me about this. She was a couple of years out of school. Her boyfriend was also a software developer. She said he spent every evening playing video games and writing code for fun. She didn’t want to spend all her time doing that, after a day at the office. How was she supposed to manage her career development without falling into that trap?

I suggested that each of us has to decide how much of our time and money we want to invest in our professional development. Maybe we want to spend very little time and no money at all. That’s fine. Allocate whatever amount of time feels right to you for learning new things. Put it into your personal schedule and stick to it.

Let’s say you’re a .NET developer and you want to learn Python. You don’t want to spend “all your time” on software. Allocate, say, two hours every Monday and Thursday evening for learning. Use that time to follow an online Python tutorial, or more than one. There’s no cost. After a couple of weeks, you’ll have a basic understanding of Python. Then pick another topic; maybe Docker or Linux or whatever. It doesn’t have to be a big intrusion on your spare time.

Maybe you’re willing to invest more than four hours a week. Maybe you can do eight hours; two per night, four nights a week. It’s still not a big intrusion on your spare time; most people (well, most Americans, anyway) spend more time than that slumped in front of a television. Maybe, too, you’re willing to spend $10 to $50 a month on career development. That will get you a book a month, an inexpensive online course, and some small-scale resource usage on AWS or some other service.

If you’re up for it, there are numerous technical user groups and meetups all over the place. There are probably several within an easy drive of your location. Investing a little time for professional networking and learning at these events is a cheap way to keep up with new developments and create some options for yourself beyond your present job. Plus, they usually serve snacks.

The Power of Habit

It isn’t just about you. The attitude you bring to work affects all your co-workers. If you’re a downer, then everyone around you will be subdued and less effective in their work. They’ll drag themselves home with diminished energy, and transfer the effect to their families and friends. In turn, they will infect their families, friends, and co-workers with your attitude. Before long, civilization as we know it will grind to a halt, and it will be your fault.

Well, okay, that might be an exaggeration. But it’s no exaggeration to say that the habits you form through repeated practice carry over into other activities in your life. If you spend 2/3 of your waking life on your day job, then you’re building habits with everything you do at work. When you finally get off work and you can spend time on whatever it is that brings you self-actualization, don’t you want to have the energy and attitude necessary to take full advantage?

Conclusion

Checking out completely and treating software-related work as “just a paycheck” doesn’t cut it. You have to care if they ask you to do something unethical. You have to do your best to do the work properly. You have to keep up with new developments in the field. You signed up for those things when you chose this career, even if you didn’t realize it at the time.

It’s as if you picked up a shiny coin you saw lying on the ground. Your intent was to enjoy the shiny side of the coin that caught your eye. You didn’t care about the reverse. But you can’t have one without the other. You may have entered the software field because there are lots of jobs around and the work pays well. But the only way you can have those things is by accepting the other side of the coin, as well. The level of responsibility that goes along with this field of work is the reason it pays more than digging holes for a living. We all have the obligation to be worthy of the pay grade. You can’t choose just the pay grade and not the professional responsibility that goes along with it.

On the other hand, you don’t have to be “passionate” and “awesome” and all that stuff. You can be a responsible, engaged professional without sacrificing your personal life. It’s up to you to choose how much time you’re willing to devote to professional development, alongside all the other things you want to do.

The balance between “passion” and “just a paycheck” is professional interest. Take an interest in your work and in yourself.

The Importance of Imports in Java
https://www.leadingagile.com/2018/12/the-importance-of-imports-in-java/
Mon, 17 Dec 2018 14:19:23 +0000

A tweet from Kevlin Henney in August 2018 reopened an old can of worms:

The “issue” has never seemed a Big Deal® to me. Even so, a couple of things came to mind when I first read Kevlin’s tweet:

Java is designed to work both ways: With wildcard imports or with specific ones. So, programmers who prefer one way or the other are using the language “as designed.”
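For illustration, here is a minimal sketch of the two styles (my own example, not from the thread; either form compiles to identical call sites in the class file):

```java
// Style 1: wildcard (on-demand) import.
import java.util.*;

// Style 2: specific (single-type) imports -- equivalent for this file:
//   import java.util.ArrayList;
//   import java.util.List;

public class ImportDemo {
    public static void main(String[] args) {
        // Either import style resolves these simple names to
        // java.util.List and java.util.ArrayList; the bytecode refers
        // to the fully-qualified names in both cases.
        List<String> names = new ArrayList<>();
        names.add("Kevlin");
        System.out.println(names.get(0));
    }
}
```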

IDEs don’t necessarily default to specific imports. For example, the leading Java IDE on the market today, JetBrains IntelliJ IDEA, defaults to wildcard imports (well, it makes reasonable choices based on what imports are needed). And I’m pretty sure you can configure IDEs to your liking, too.

I didn’t really expect anything to come of the tweet, and I was a little surprised at the responses. For instance:

Why would anyone think import statements had any effect on class loading?
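For the record, they don’t: a class is loaded and initialized on first active use, not when an import statement is compiled. A single-file sketch (both classes live in one file so it runs standalone; the behavior is the same when the class is imported from another package):

```java
// Heavy's static initializer announces when the class is initialized.
class Heavy {
    static { System.out.println("Heavy loaded"); }
    static int answer() { return 42; }
}

public class LoadingDemo {
    public static void main(String[] args) {
        System.out.println("main started");
        // "Heavy loaded" prints only here, at the first use of Heavy.
        // An import statement alone never triggers this.
        System.out.println(Heavy.answer());
    }
}
```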

In case you were thinking that first point suggests I’m not treating the topic as seriously as a lot of other people, let me clarify: You’re right.

What is a little bit interesting is the fact that many programmers regard this as an important point and will defend their preference with great emotion.

In the first response to Kevlin’s original tweet, wildcard imports were labeled “bad practice” flatly, as if it were an established fact requiring no further explanation. And 007 isn’t the only one who thinks so. In a StackOverflow question from October, 2013, user2810581 (I can’t imagine naming my child user2810581; THX-1138, maybe, but not user2810581) reports getting conflicting advice from books and from her instructor: “I am a student, and a couple of the books I have been reading (Java for Dummies, for one) has said using the wildcard import statement is bad programming practice and encourage the reader to avoid using it. Whereas, in class, we are encouraged to use it. Can somebody please explain why it is poor programming practice?” So, the assumption is written into books as well as expressed on social media, also (apparently) without explanation.

In answer to a different StackOverflow question, from October 2008, Benjamin Pollack explains, “The only problem with it is that it clutters your local namespace. For example, let’s say that you’re writing a Swing app, and so need java.awt.Event, and are also interfacing with the company’s calendaring system, which has com.mycompany.calendar.Event”. That’s true, but it seems easy enough to specify individual classes when there is ambiguity, and otherwise use wildcard imports, if “clutter” is a concern for you. Oh, and, please accept my condolences if you’re writing a Swing app.
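The clash is real even within the JDK: java.util and java.awt both define a type named List, so two wildcard imports make the simple name ambiguous. A sketch of the easy fix (a single-type import always takes precedence over wildcard imports):

```java
import java.util.*;
import java.awt.*;
// Without the next line, "List" below would be a compile error:
// "reference to List is ambiguous". The specific import wins over
// both wildcards and disambiguates the simple name.
import java.util.List;

public class ClashDemo {
    public static void main(String[] args) {
        List<String> items = new ArrayList<>();  // unambiguously java.util.List
        items.add("ok");
        System.out.println(items.size());
    }
}
```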

So, it looks like a matter of personal preference. Yet, it seems people are willing to spend substantial time and effort to force their tools to avoid generating wildcard import statements. The reasoning given in that StackOverflow answer: “It’s obvious why you’d want to disable this: To force IntelliJ to include each and every import individually. It makes it easier for people to figure out exactly where classes you’re using come from.”

Figuring out exactly where included code comes from is a Good Thing®. But if you’re using a sophisticated IDE such as IntelliJ IDEA (not to mention less-sophisticated ones, or any reasonably good text editor) you don’t have to figure that out manually. Most tools offer a way to search the project for references and declarations, and to help us identify the jars containing class files for which we don’t have the source.

You needn’t spell out every imported class in its own import statement, unless you happen to like that sort of thing. And if you don’t like it, you needn’t change existing code to use wildcard imports just to get the visual clutter out of your way; you can use the collapse feature of most editors to reduce all the import statements to a single line.

Because nearly all tools have these features, variations in personal preference don’t have to result in lengthy debates or convoluted work-arounds. Everyone can have code that looks the way they like it to look.

On a related note, in Java we expect to find packages that support related functionality. Our intent is typically to import a package rather than single classes on a one-off basis.

A practical rule of thumb could be to specify the package name down to the last node, but not necessarily write a separate import statement for each class. Seems to me that is intention-revealing without inviting visual clutter. Dan North offered an example along these lines in the thread Kevlin started.
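A sketch in that spirit (my own illustration using JDK packages, not Dan North’s actual example): the one wildcard line below says “this class does concurrency,” without a per-class import for every type used.

```java
// One intention-revealing line, rather than:
//   import java.util.concurrent.ExecutorService;
//   import java.util.concurrent.Executors;
//   import java.util.concurrent.TimeUnit;
import java.util.concurrent.*;

public class RuleOfThumb {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> System.out.println("done"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```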

There are similar circular debates around import statements in other languages that have a concept similar to Java “packages,” too.

At my age, you’d expect me to have an opinion about everything. Maybe more than one. But in this case, I don’t. To me, the question of whether to use wildcard imports with Java is on par with burning questions such as:

how far to indent

tabs vs. spaces

opening curly brace on same line or next line

avoid static imports so you can see that assertEquals() is really Assert.assertEquals()

others in a similar vein
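The static-import item is easy to illustrate. A self-contained sketch using a stand-in Assert class (real code would import org.junit.Assert; static imports only work across packages, so the static-import form appears in comments):

```java
// With "import static org.junit.Assert.assertEquals;" the call site
// would read:
//
//     assertEquals(4, total);   // origin of the method not visible
//
// The qualified form below makes the origin explicit at a glance.

class Assert {
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}

public class StaticImportDemo {
    public static void main(String[] args) {
        int total = 2 + 2;
        Assert.assertEquals(4, total);  // qualified: reader sees the class
        System.out.println("ok");
    }
}
```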

Well, I might have just one opinion. It’s a small one.

I think we can save ourselves some grief if we strive for consistency within the community that maintains our code base. If that’s a single team, a group of teams within a larger organization, an Open Source project with multiple maintainers, or a worldwide community of developers using a given programming language, then that’s the “community.” If your community calls for wildcard imports or four space indentation or tabs or curly braces on the next line or whatever else, it’s generally best to go along with it; not because one way is technically or objectively better, but because of the usefulness of the principle of least astonishment.

Hidden Business Rules in Legacy Code
https://www.leadingagile.com/2018/12/hidden-business-rules-in-legacy-code/
Wed, 12 Dec 2018 11:58:49 +0000

Large organizations are often hesitant to replace very old legacy applications that have served mission-critical business operations for many years. In fact, they are reluctant to modify the applications at all. Why is that?

Before answering that question, let’s answer this one:

Who Cares?

Looking back over the course of the “agile” movement, it seems to me that larger companies began to take a serious interest in “going agile” once the idea reached the Early Adopter phase of the diffusion of innovations curve, and looked as if it was not going to fizzle out. Since then, many large organizations have undertaken “agile” initiatives, and quite a few have made multiple attempts. The proliferation of “scaling frameworks” and “certifications” reflects the demand for agility on the part of deep-pocketed companies.

The success stories are all about the outer layers of the enterprise technical environment. “Full stack” means (merely) mobile and Web front-ends talking to web servers and, possibly, app servers and local databases, and then handing off the “real work” to back-end systems that are not included in the “agile” ecosystem. The mission-critical business rules are baked into the back-end systems. The “full stack agile” apps mostly pass messages back and forth between client devices and APIs that lead to back-end systems. The true “full stack” in these larger enterprises is not involved at all.

To call that a successful “agile” transformation is akin to saying planet Earth consists only of its crust. The crust is certainly the most interesting part to us humans, as it’s the part we can interact with. But there’s a whole lot more underneath, without which the crust would be uninhabitable.

In the early years, “agilifying” the crust was sufficient to provide customers with the sense that the company was moving forward and able to respond to the market effectively. On some level, however, that approach papers over the larger problem of what to do with business-critical core systems that live on older platforms, behind the APIs the newer applications call. Changes to those systems don’t fit into the canonical “two-week sprint” model. There are all kinds of reasons for this, from technical to procedural to structural to regulatory.

Things have progressed to the point that this has become the next “weak link” to strengthen. How is that to be done? Several options are possible:

Option 1: Isolate the back end behind an API layer, leave it alone, and hope for the best.

The challenge: Increasing risk as the platforms age off of vendor support and qualified technical staff retire. Both the probability and the impact of a failure will increase over time.

Option 2: Bring the back end up to date, fold it into the larger “agile” environment, and stop operating the data center as if it were still 1985. IBM has never stopped evolving the mainframe platform and related technologies, so there are no technical barriers to doing this. As difficult as it already is to modernize these systems, the longer we wait the more difficult and expensive it will become.

The challenge: Price – not cost as in cost-benefit, but price. Activating the necessary features and capacity on zSeries systems is expensive. In some cases, the modernization effort may tie up significant company resources for two or more years as well. We should have started doing this 20 years ago, when it wasn’t so difficult. We have inherited the results of short-term thinking from that era.

Option 3: Identify business-critical functions within legacy applications and carve them out into microservices or serverless functions.

The challenge: The effort to extract and repackage selected routines from legacy languages is at least as great as the effort to rewrite those applications from scratch. In addition, calling a subprogram written in a mainframe language via a foreign function interface (FFI) only works for the first invocation, due to differences in conventions for passing arguments and return values, which makes it problematic to wrap “rescued” legacy routines in cloud-friendly scripts (yes, I’ve tried it). Maybe that’s okay for a FaaS or “serverless” setup, but it feels pretty clunky. There’s also the fact that once we have analyzed the legacy source code deeply enough to identify the key business rules hard-coded therein, we understand those rules well enough to rewrite the application, and that is probably the easier task.

Option 4: Rewrite key applications, or replace them with off-the-shelf products or network-based services.

The challenge: Ensuring that market-differentiating logic baked into the legacy code is not lost in translation.

This post addresses the key concerns around Option 4.

Reasons for Hesitation

Clients have expressed two main reasons to me for their reluctance to delve into longstanding legacy applications. First, they don’t have many (or any) technical staff left on board who can confidently work with the code. They worry that any change will lead to a regression that they won’t be able to correct easily, if at all.

Second, they worry that the mysterious source code, written in a mysterious old language, contains mysterious hard-coded business rules that provide some of the company’s competitive advantage, and that are not documented anywhere or understood fully by anyone. Replacing or attempting to update the old code may wipe out that special logic. I suspect the second reason is a corollary of the first.

Some companies are in more dire straits than others. I remember one company that had re-hired a retired employee to maintain their single most mission-critical application. At the age of 78, he was the last remaining human who could work on that system. They were paying him $250,000 per year to work 2 days a week. He had accomplished the Vulcan dream to live long and prosper, but what of the company’s future prospects?

On a side note: Why would a company ever build such a solution in the first place? Remember that in the years when large enterprises were first taking advantage of early computer systems, there were no COTS packages or Internet-based services. Everything had to be written from scratch, and all of it was proprietary, so programmers could not benefit from things others had learned.

That much is understandable. However, why would people go out of their way to design a solution so arcane and complicated that no one else could work with it? To seek deeper understanding of the phenomenon, scholars have created a cross-disciplinary field of study drawing from formal logic, psychology, and pharmacology known as It Must Have Seemed Like A Good Idea At The Time. That is out of scope for this piece, although we look forward to reading (or smoking) the research findings.

Unreadable Code?

Let’s examine the first reason organizations avoid changing old code: Current staff are not familiar with the programming language. For context, let me clarify that we’re primarily talking about applications originally developed on IBM mainframes in Assembly language, COBOL, PL/I, or RPG and on (then) Tandem NonStop systems (now owned by HP) using TAL or COBOL. We’re also mainly considering large, established companies in the financial sector, which adopted computer technology very early in the game. These are the companies most likely to have legacy applications of that vintage still in production, with some core applications dating back to the 1970s.

It’s true that young professionals today are neither trained in nor interested in these older programming languages. They don’t want to be shunted into a multi-year conversion project that makes their marketable skills go stale. On the other hand, to convert code out of a language we only need to understand what the code is doing; we don’t have to learn the language in depth.

The good news is that COBOL accounts for nearly all the legacy applications we might wish to migrate or preserve. Even when written carelessly, COBOL is relatively understandable compared with assembly, TAL, or RPG (especially older RPG versions based on tabular input). PL/I is not inherently hard to read, but there was a tendency for people to write “clever” code; Perl Golf predates the invention of Perl in 1987, at least in spirit.

There was a time when over 97% of business application code in production worldwide was written in COBOL. You will probably not have to deal with other legacy languages that may be harder to understand than COBOL.

Why So Much Stylistic Variation?

Not only are legacy languages unfamiliar to younger colleagues, but the original authors did not always follow consistent conventions. The old code is full of surprises, some more fun than others. The optimistic appraisal of COBOL’s readability offers cold comfort to those who must cope with some of the more creative examples of one-off designs.

I’ve seen accidental “features” of COBOL exploited to the extent IBM could not fix the compiler without crashing hundreds of customers; in particular, the use of nested OCCURS DEPENDING ON clauses to achieve variable-length tables at run time, which is not a defined feature of COBOL and only worked by accident one fine day in an IBM compiler release; a primordial example of Hyrum’s Law. I’ve seen “clever” application generators based on assembly macros; a sort of Cretaceous-era version of the Rails scaffold command, only not as good an idea. I’ve seen COBOL code that wrote object code directly into a WORKING-STORAGE area and then passed it to an assembly subprogram to execute the code on the fly; in effect monkey-patching a static language.

It isn’t necessary for things to be that crazy for the code to be hard to follow. In some shops, people wanted to achieve design-time reuse. They depended on an IBM extension to COBOL that allows for nested COPY REPLACING statements. Just the opposite of the giant monolithic source file, their applications comprised a large number of code snippets filled with placeholder text. The code might look clean, as far as naming conventions and indentation are concerned, but the intent of the code was obfuscated. Eyeballing a physical print-out of the compiled program, with all the COPYs expanded, might be the only way to examine the source code…and we could be talking about 70,000-90,000 lines of code.
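As a sketch of the pattern (the copybook name EDITNUM and the :FLD: placeholder convention are invented for illustration), a snippet-based application might be assembled like this:

```cobol
      * Copybook EDITNUM (hypothetical): a reusable snippet whose
      * placeholder text means nothing until REPLACING fills it in.
      *
      *     IF :FLD:-IN IS NOT NUMERIC
      *         MOVE 'Y' TO :FLD:-ERR-FLAG
      *     END-IF
      *
      * Each including program stamps out its own copies:
       COPY EDITNUM REPLACING ==:FLD:== BY ==ACCT==.
       COPY EDITNUM REPLACING ==:FLD:== BY ==AMOUNT==.
```

Each snippet looks tidy on its own; multiply this by hundreds of snippets, nested as described above, and the intent only emerges in the fully expanded compile listing.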

Today, each programming language community has settled on certain conventions. For instance, we normally expect people to write method names in upper camel case in C#, lower camel case in Java, and snake case in Ruby, even though the compilers don’t care. There are numerous small conventions that people generally try to follow when working in a given programming language. It helps others read and reuse their code. It’s easy to learn the conventions because we work in an “open” world where we can find tutorials and examples and shared code bases easily.

The world of mainframe development in the mid-20th century was “closed,” in contrast to the “open” world of today. Developers crafted their own conventions and styles within each company, and were largely unaware of the conventions used in other companies. Those of us who moved from organization to organization frequently had the opportunity to see code designed and written in a multitude of different ways. On the whole, things were more chaotic then than they are now, even though we now work with many more different platforms, languages, and frameworks.

I think that’s a consequence of the “closed” world of the time. There were no such things as the World Wide Web, StackOverflow, or Open Source. No one had a mainframe computer in their home. Developers learned on the job, and unless they changed jobs frequently they learned exactly one way of doing things. The way they learned could be very different from the way things were done in the office building next door. The wheel was reinvented independently in thousands of companies, thousands of times over. And there are a lot of ways to make a wheel.

Finding the Hidden Business Rules

Fortunately, we don’t have to read every line of source code. With a little practice, we can visually scan the source for patterns that suggest something interesting is going on. For purposes of migrating solutions, we’re particularly interested in spotting code that looks convoluted or, to use the proper technical term, “squirrelley.”

As with any programming language, when you see long swathes of uninterrupted code, multiply-indented conditional statements, or excessive source comments trying to compensate for non-intention-revealing code, you know the program may be doing something that is not straightforward. After all, COBOL was designed to be relatively readable. If the intent of the code isn’t apparent, then something is definitely squirrelley. If there are any valuable hidden business rules in the code, they will be hiding there, among the squirrels.

Static Code Analysis

Finding patterns like complicated conditional logic sounds like a job for static code analysis. There are products that support COBOL, such as SonarCOBOL, from the well-known code analysis company SonarSource, and Fortify, now owned by Micro Focus, the leading cross-platform COBOL provider.

Unusual business rules almost always correlate with complicated conditional logic. Set up the tools to highlight source files that contain patterns like the ones shown in the examples below. Also look for programs or subprograms that are invoked very frequently, and examine the source for those programs visually.

Expect False Positives

Most of the time, you will not find anything worth preserving. Typically, the “special” rules embedded in old source code amount to hacks or workarounds that people came up with long ago, before things were standardized. There is more visceral fear than objective reason about the risk of missing something important when replacing a legacy system.

Here’s a common scenario: A financial institution processes credit card account numbers. They need a way to test their code. If they test using real account numbers, all kinds of undesirable things can happen. Besides, they’re not supposed to have access to real account numbers. There’s that Sarbanes-Oxley thing, you know, as well as rules about personally-identifiable information.

I’m reminded of a time when a colleague working on a back-end credit authorization system used his own Visa card to test the code. When his account showed a balance of $1,000,000 while his credit limit was $25,000, Visa were not amused. Fortunately, he was able to talk his way out of it.

Not everyone is equally adept at talking their way out of things, so people invented schemes to identify fake account numbers. Living in a “closed” world as we did, everyone invented a different scheme.

Throughout all the application code, any references to an account number had to take the fake numbers into consideration. Yes, I know: that means the production code contained baked-in knowledge of testing. That was then; this is now. Just roll with it.

Here’s a contrived snippet of COBOL code that identifies credit card issuers based on the account number.
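A minimal sketch of such a routine follows. It is hypothetical: the issuer prefixes are simplified, and the convention that account numbers beginning “9999” are test accounts is invented for illustration.

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. CARDTYPE.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  ACCT-NUMBER     PIC X(16).
       01  CARD-ISSUER     PIC X(10).
       PROCEDURE DIVISION.
       IDENTIFY-ISSUER.
      *    Test accounts are flagged before any issuer logic runs.
           EVALUATE TRUE
               WHEN ACCT-NUMBER(1:4) = '9999'
                   MOVE 'TEST'       TO CARD-ISSUER
               WHEN ACCT-NUMBER(1:1) = '4'
                   MOVE 'VISA'       TO CARD-ISSUER
               WHEN ACCT-NUMBER(1:2) >= '51' AND <= '55'
                   MOVE 'MASTERCARD' TO CARD-ISSUER
               WHEN ACCT-NUMBER(1:2) = '34' OR '37'
                   MOVE 'AMEX'       TO CARD-ISSUER
               WHEN OTHER
                   MOVE 'UNKNOWN'    TO CARD-ISSUER
           END-EVALUATE
           GOBACK.
```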

The basic idea is that the code has to recognize test account numbers and handle them differently from real ones. This is the nature of the “hidden business logic” that legacy applications contain. It isn’t a special way of processing that distinguishes our company from the competition, nor a valuable trade secret we might lose if we replace the existing application. Far from being business rules that create competitive advantage, it’s just a workaround for the fact that test data was difficult to manage. When we migrate the solution to a different platform or language, we won’t carry that sort of thing over.

Not all legacy code is as clean as the example above. Here’s an equivalent routine based on some real-world examples I’ve seen. Real legacy code may be cluttered with comments documenting the full change history of the program, as we didn’t have very good version control systems in those days. This example uses IF/ELSE with inconsistent indentation instead of EVALUATE, and uses the older convention of ending each statement with a period, which leads to not-so-pretty ways to break out of the conditional block. This example also has some commented-out “debugging” code such as you might find in legacy programs, and features out-of-date comments at the top of the routine. I think you can still parse it even if you aren’t familiar with COBOL.
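Here is a hypothetical rendering of that messier style; the initials, dates, and change-history comments are all invented.

```cobol
      *****************************************************************
      * CARDTYPE - DETERMINE CARD ISSUER FROM ACCT NO
      * 09/03/87 RLH  FIX FOR TEST ACCTS PER J. SMITH
      * 04/12/91 JWB  ADDED DISCOVER (SINCE REMOVED)
      * NOTE: SEE DOC BINDER 7 FOR PREFIX LIST (OUT OF DATE)
      *****************************************************************
       IDENTIFY-ISSUER.
           IF ACCT-NUMBER(1:4) = '9999'
               MOVE 'TEST' TO CARD-ISSUER
               GO TO ISSUER-EXIT.
           IF ACCT-NUMBER(1:1) = '4'
              MOVE 'VISA' TO CARD-ISSUER
                 GO TO ISSUER-EXIT.
           IF ACCT-NUMBER(1:2) NOT < '51' AND NOT > '55'
               MOVE 'MASTERCARD' TO CARD-ISSUER
               GO TO ISSUER-EXIT.
      *    DISPLAY 'DEBUG ACCT=' ACCT-NUMBER.
           IF ACCT-NUMBER(1:2) = '34' OR '37'
              MOVE 'AMEX' TO CARD-ISSUER
           ELSE
               MOVE 'UNKNOWN' TO CARD-ISSUER.
       ISSUER-EXIT.
           EXIT.
```

Note the period-terminated statements and the GO TO escape hatches, which are exactly the “not-so-pretty ways to break out of the conditional block” mentioned above.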

Even if you’re not familiar with COBOL, you probably find the first example easier to follow than the second. And yet, you can see there’s a pattern in the code that warrants a closer look: A complicated if/else structure. In many cases, there will be just one or two paragraphs or blocks in a program that you really need to study. It isn’t too daunting to identify valuable logic in legacy code.

There are many other reasons people may have created hacks for one thing or another; it isn’t always or only to enable testing. The point is that most of this sort of thing doesn’t prevent us from migrating the application or replacing it entirely.

I’ll reiterate that it is possible an old program really does contain genuinely valuable business rules that aren’t documented anywhere else. But it’s far less likely than most people assume.

]]>https://www.leadingagile.com/2018/12/hidden-business-rules-in-legacy-code/feed/0The Problem with Story Pointshttps://www.leadingagile.com/2018/12/the-problem-with-story-points/?utm_source=The%20Problem%20with%20Story%20Points&utm_medium=RSS&utm_campaign=RSS%20Reader
https://www.leadingagile.com/2018/12/the-problem-with-story-points/#respondMon, 10 Dec 2018 10:38:43 +0000http://www.leadingagile.com/?p=56209For starters, let me just say there’s no problem with story points. Story points are a way of expressing the relative sizes of stories. They can be helpful for short-term planning of software development activities. Unfortunately, sometimes people get hung up on the numbers, and they lose track of the substance. People can get into […]

Story points are a way of expressing the relative sizes of stories. They can be helpful for short-term planning of software development activities. Unfortunately, sometimes people get hung up on the numbers, and they lose track of the substance.

People can get into the habit of repeating the buzzwords and marketing phrases that go along with a particular named or branded Thing. I’ve noticed a lot of software development teams bring certain buzzwords into their everyday vocabulary and use them creatively (or carelessly).

It’s cool for a team to develop its own internal jargon. It helps make work more fun and may foster team identity and cohesion. But people sometimes lose track of the intended meaning or the value behind a term. In some cases, this can lead them to focus on things that aren’t important or helpful. “Story point” is a term that is subject to that pattern.

The Nounification of Verbs, and Vice Versa

It seems we’re always in a hurry, and we’re interested in ways to abbreviate or summarize so that our conversations will be as brief as possible. After all, we have a lot of dependencies to wait for, and we’re keen to get busy waiting for them. No time for chit-chat.

Sometimes we get a bit carried away. The result can be puzzling, and even amusing.

To ask is a verb. The ask is not a Thing.

To spend is a verb. The spend is not a Thing.

Solution is a noun. To solution is not a Thing.

Leverage is a noun. To leverage, as a verb, started to appear in English dictionaries fairly recently, as these things go. The noun leverage was verbified as a shorthand way to say to apply leverage.

When you hear something like, “We’ll leverage the spend to solution the ask,” remain calm. Back away slowly. Make no sudden moves. Keep nodding and smiling. Remember your Happy Place.

Point, as in “story point,” is a noun. To point is a verb, of course. It means to indicate an object or direction by extending a finger toward the object or in the direction of interest. If you’re a dog, you can point with your nose instead of the finger that you don’t have. It’s considered good form to lift one of your front paws and stand very still.

How I Learned That I Don’t Know Agile

After years of working with various agile methods in various contexts, I thought I had a pretty good handle on the whole thing. And yet, the first time someone said to me, “We’re pointing today,” my response was:

And they did. And they were happy. All the stories ended up with numbers associated with them. And the ScrumMaster smiled upon the team, and the backlog was dapper, and the ALM spilled forth its gifts unto the Ready column, and the Sprint was packed, and the Product Owner did grin, and behold! snacks were upon the team, and a joyous noise rose up to the heavens.

A Thing to Do

As time went on, it became clear that there was no connection between the points and the team’s delivery capacity, value generation, sustainable pace, or quality. The points were never mentioned during sprint reviews or heartbeat retrospectives. They were just numbers.

“Sizing” might have had a purpose; a function; some sort of value. “Pointing” was nothing more than an activity that had been dictated to the team; an event to be checked off on the Sprint schedule. It was “a thing to do, like feeding Vaal”.

Well, Okay, What, Then?

Here’s what happens on proficient teams (at least, this is what I’ve seen happen):

1. Someone reads the title of a story card aloud.

2. Simultaneously, without analysis, without discussion, and without delay, everyone throws out a number from the set [1, 2, 3, 5, 8, 13]. They might use physical cards, a phone app, or their fingers. The range of values doesn’t exceed a single order of magnitude (except the highest value, which really only means “this story is too big”), they don’t include zero, and they don’t use values that imply false precision, like big numbers ending in zeroes, fractions, or numbers with decimal points.

3. If the numbers are close together, say 2-3-3-5-3, then a value in the middle is taken, like 3. In a case like 3-5-3-3-5, we might go with 5.

4. If there are any outliers, the team pauses to discuss the reason for the outlier. For instance, if the numbers are 2-1-8-2-2, the team will want to know why one person thought it was an 8. If the numbers are 8-13-8-8-2, the team will want to know why one person thought it was a 2. Then the team repeats the procedure. Once they get a reasonably close set of numbers, they write it down and move on to the next story.

5. If the story is “too big,” it means the team lacks clarity about the story. For instance, if they got 13-13-8-5-13, they might collaborate with stakeholders to improve their understanding. Then they size again. If the story is still too big, they split it. They repeat this until it’s possible to get a gut feel for the relative size of all the stories, and the work is broken down into reasonably-sized chunks.

You can see there is a sort of informal “estimation” thing happening here. But more importantly, the team is using the process to ensure they have a common understanding of the scope of each story. For a novice team, sizing the stories serves the dual purpose of estimation and collaboration. For a proficient team, it’s much more about the collaboration; these teams typically use empirical methods to forecast their work.

But it’s not about the points themselves. That’s sort of missing the…what’s the word I’m looking for?…missing the intent.

Now, if you’ll excuse me, it’s time for me to job now, so I have to car to the office. My only ask is that I’ll have time to coffee first. I need to leverage some caffeine before I road. There’s a lot to solution today, and my spend is limited!

]]>https://www.leadingagile.com/2018/12/the-problem-with-story-points/feed/0Live from Product404, in Atlanta: Agile RoadMappinghttps://www.leadingagile.com/2018/12/live-from-product404-in-atlanta-agile-roadmapping/?utm_source=Live%20from%20Product404%2C%20in%20Atlanta%3A%20Agile%20RoadMapping&utm_medium=RSS&utm_campaign=RSS%20Reader
https://www.leadingagile.com/2018/12/live-from-product404-in-atlanta-agile-roadmapping/#commentsThu, 06 Dec 2018 13:45:36 +0000http://www.leadingagile.com/?p=56186﻿﻿ This talk covers techniques and approaches to developing and communicating roadmaps to your organization. Balancing Agility and predictability in a roadmap is a challenge, and stakeholders require transparency in order to effectively drive organizations forward. Product teams that efficiently deliver value to market require the flexibility to make decisions using the latest discovery and […]

This talk covers techniques and approaches to developing and communicating roadmaps to your organization.

Balancing Agility and predictability in a roadmap is a challenge, and stakeholders require transparency in order to effectively drive organizations forward. Product teams that efficiently deliver value to market require the flexibility to make decisions using the latest discovery and learning, while giving their team time to plan for the product’s direction.

Whether your team is a startup looking to define your product strategy, or an enterprise looking to remain competitive while coordinating across hundreds of teams, this event will deliver valuable insights and techniques on how Product leaders can remain agile while developing and sharing their roadmap.

Speakers

Erik Goranson: Vice President of Product at Gather

Erik leads product strategy at Gather, and is at the forefront of bridging technology and hospitality to help venues and guests gather and connect. Prior to Gather, he led the product management team at ExpApp to drive mobile commerce, flexible ticket sales, and data solutions empowering sports and entertainment leaders. He’s also served as a Senior Associate at Bain & Company in their Technology Private Equity Group.

Scott Sehlhorst: SVP, Executive Consultant at LeadingAgile

Scott has 25+ years of experience in product management & strategy consulting, software development, and mechanical engineering, which he uses to help companies make better decisions about how to invest in their products. Scott “went Agile” in 2001 and has since carved a niche helping companies with long-term planning horizons connect their strategy to iterative and incremental development cadences. Scott brings B2B2C and B2C perspective across multiple industries including eCommerce, aviation ecosystems, telecom equipment and services, mobile devices, insurance, and financial services. He has worked with firms ranging from under 100 employees to Fortune-100 enterprises. Scott is also a product management lecturer in the advanced degree programs in product management and technology management at the Dublin Institute of Technology and the Lviv Business School.

]]>https://www.leadingagile.com/2018/12/live-from-product404-in-atlanta-agile-roadmapping/feed/1Designing a Feedback-Driven Strategic Execution Model w/ Dennis Stevenshttps://www.leadingagile.com/podcast/designing-a-feedback-driven-strategic-execution-model-w-dennis-stevens/?utm_source=Designing%20a%20Feedback-Driven%20Strategic%20Execution%20Model%20%3Cbr%3E%20w%2F%20Dennis%20Stevens&utm_medium=RSS&utm_campaign=RSS%20Reader
https://www.leadingagile.com/podcast/designing-a-feedback-driven-strategic-execution-model-w-dennis-stevens/#respondThu, 06 Dec 2018 13:20:54 +0000http://www.leadingagile.com/?post_type=podcast&p=56193A few weeks ago, Dennis Stevens and I recorded a podcast called Building an Organizational System That Can Embrace Change. The podcast introduced three critical concepts which we are exploring at a deeper level in additional interviews. In this episode of SoundNotes, Dennis Stevens, LeadingAgile’s co-founder and Chief Methodologist, and Dave Prior, dig into one of […]

In this episode of SoundNotes, Dennis Stevens, LeadingAgile’s co-founder and Chief Methodologist, and Dave Prior, dig into one of those key concepts:

How to design an execution model that can provide feedback which can then be incorporated back into the strategic planning.

If you’re involved in making strategic decisions and planning at the program and portfolio level, there is a lot of valuable information here about how to reduce risk and create greater optionality for your organization.

]]>https://www.leadingagile.com/podcast/designing-a-feedback-driven-strategic-execution-model-w-dennis-stevens/feed/0Governance that Goes with the Flowhttps://www.leadingagile.com/2018/12/governance-that-goes-with-the-flow/?utm_source=Governance%20that%20Goes%20with%20the%20Flow&utm_medium=RSS&utm_campaign=RSS%20Reader
https://www.leadingagile.com/2018/12/governance-that-goes-with-the-flow/#respondMon, 03 Dec 2018 16:13:22 +0000http://www.leadingagile.com/?p=56189In slide #9 of this deck about using cycle time analytics in forecasting, Troy Magennis shows that when four people plan to meet for dinner, there is only one chance in 16 that all four will arrive on time. Typical corporate management thinking would solve this problem by creating a Department of Dinner Planning and […]

In slide #9 of this deck about using cycle time analytics in forecasting, Troy Magennis shows that when four people plan to meet for dinner, there is only one chance in 16 that all four will arrive on time (if each person independently has a 50/50 chance of being on time, the probability that all four are is (1/2)⁴ = 1/16). Typical corporate management thinking would solve this problem by creating a Department of Dinner Planning and promulgating detailed procedures, based on Industry Best Practices, to ensure people arrived on time for each dinner.

When that didn’t work (and it wouldn’t, of course), the typical response would be to tighten the screws. Make the procedures even more detailed. Limit the choice of restaurants to an approved list. Add bureaucracy to manage the approved list. Add some quality gates where work in progress can be inspected by accountable personnel. Punish the accountable personnel if they don’t find anything wrong, as something surely must be wrong. Increase the penalty for late arrival.

Other solutions become evident once we start to think outside the corporate box. For instance, we could duct-tape the four people together. Wherever they go, they go as one. That simple change increases the probability of on-time arrival from 1 in 16 to 1 in 2, an 8-fold improvement. What consultant wouldn’t be proud of an 8-fold improvement? Consultants call that sort of thing “low-hanging fruit.” I’ll bet we could get a client testimonial out of it, too.

Another possibility, and one that might appeal to Millennials, is to have dinner delivered to each person individually, and let them interact via social media. That imperfect solution still requires synchronous engagement and a degree of personal interaction, neither of which Millennials crave. Dinner 2.0 might include the ability to record and playback one’s participation in the event, represented by an AI-equipped avatar. That way, everyone eats when and what they please, without the burden of trying to hold up their end of a conversation. Each can play back the experience on demand, or just let the avatars get together in virtual reality and leave it at that. No need to venture out into meatspace at all.

If those solutions seem silly to you, then you may not be embracing corporate thinking fully. Remember that people are merely resources. Resources can be treated in any manner whatsoever, provided the end result comes out the way you want it to.

Just for grins, let’s pretend the four resources who are planning to meet for dinner are actual humans. How can they solve the problem of erratic arrival times?

One way I like to solve problems is not to solve them at all, but rather to change the parameters of the situation in such a way that the problem winks out of existence. In this case, we could agree to meet at a conveyor-belt restaurant. Dinner is served continuously, so it doesn’t matter exactly when each person arrives. No best practices, formal procedures, duct tape, or Departments of Dinner Planning are necessary. It’s definitionally impossible to arrive at the wrong time because there is no wrong time.

Solution in Search of a Problem

What if it isn’t four people who want to meet for dinner, but four teams that contribute to the creation of a product? And what if “arrive on time” means “deliver your piece of the product when the teams that depend on you expect and need it”?

So we have the same pattern, but different details. Instead of people going to dinner, we have teams delivering subsets of a solution. There’s a 1 in 16 chance that all four teams will deliver on the planned schedule. How do we improve the odds? Typical corporate management thinking would solve this problem by creating a Department of Dinner…er, that is, a Program Management Office, or similar.

After creating unique and clever branding for their internal SharePoint site, the PMO would busily fail to coordinate the work of the four teams. Next step is to bring in outside consultants to implement an “agile framework” of some sort. Now they have four levels of PMO function, and 40 teams instead of 4. They hire 40 Product Owners and 40 Scrum Masters.

Things get worse instead of better. So they try harder. Maybe they aren’t using the framework well. Maybe they’re using it well enough, but they aren’t doing enough of it. Push, push, push. Grind, grind, grind. Stomp, stomp, stomp.

None of that helps, because management is attempting to solve a Thing That Is Not The Problem.

Structure, Governance, Metrics

After becoming acquainted with lean thinking, I started to notice things that I hadn’t really noticed before. Things about the effects of organizational structure, formal procedures, and technical practices on the results delivered to customers. I came to the conclusion that organizational structure has the greatest impact on effectiveness, procedures next, and practices last.

One of the things that attracted me to LeadingAgile was the emphasis on structure, governance, and metrics. Structure is just what it sounds like. Governance, in LeadingAgile parlance, isn’t about regulatory compliance; it’s an umbrella term for all sorts of process and procedures. It’s the governance of all your work. And metrics…well if you haven’t measured anything, then how can you possibly know whether you’ve improved anything?

Speaking of metrics, there’s one that I like quite well. It’s called process cycle efficiency. It’s the ratio of value-add time to total lead time. It’s interesting to figure this one out in IT organizations. Most people tend to guess their processes are 70% to 80% value-add and 20% to 30% waste. It turns out that process cycle efficiency is in the 1% to 2% range in most IT organizations. And if you examine the impact of cross-team dependencies, you’ll almost certainly find that it’s the single greatest cause of waste.
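To make the arithmetic concrete, here is a small worked example. The numbers are invented for illustration; only the formula (value-add time divided by total lead time) comes from the definition above.

```java
// Process cycle efficiency: value-add time divided by total lead time.
// The hours below are invented for illustration only.
public class ProcessCycleEfficiency {
    public static void main(String[] args) {
        double valueAddHours = 8.0;           // actual hands-on work on the change
        double leadTimeHours = 12 * 5 * 8.0;  // 12 weeks from request to delivery
        double pce = valueAddHours / leadTimeHours;
        System.out.printf("PCE = %.1f%%%n", pce * 100); // prints "PCE = 1.7%"
    }
}
```

Eight hours of real work spread across a twelve-week lead time lands squarely in the 1% to 2% range described above; the other 98%+ is mostly waiting.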

Most people in most IT organizations spend most of their time waiting for each other. Granted, they find ways to fill the time. Mostly, they fill the time by starting even more tasks that can’t be completed until they wait for someone else to do something, thus creating more waits. David Anderson of Kanban fame observed this years ago and coined the oft-quoted phrase, stop starting and start finishing!

Structure

Jim Benson, a noted thought leader in the lean space and the creator of Personal Kanban, captures a key idea: “Optimize when you can, standardize if you must.”

In contrast, conventional wisdom holds that we need to standardize as much as we can in the organization as a hedge against the unexpected and unplanned. This is important, you see, because anything unexpected or unplanned is Bad By Definition. It’s failure.

If we organize teams around products or product lines rather than assembling teams on a per-project basis, we can reduce the interdependencies among teams and ensure each product is supported by people who possess all the necessary skills and have access to all the necessary resources to support the product properly.

We can do even better than that. We can organize people around value streams. A value stream comprises all the actions necessary to deliver some form of value to customers. In the IT field, a value stream may be a more expansive concept than any single product or product line. Sometimes, we can structure the organization like a conveyor-belt restaurant of value. (There’s that tingling sensation again.)

Governance

It isn’t difficult to gain agreement from organizational leaders about that idea, at least on a conceptual level. Why, then, do they persist in creating processes, procedures, and standards that restrict their ability to take full advantage of it?

On occasion I’ll ask people to play a little game. Assume there’s a flood. People are being washed downstream. Describe or draw how you would rescue them from the rushing water. People usually come up with something like this: a rope stretched straight across the river, perpendicular to the current.

What happens in this scenario is that victims are slammed into the rope with the full force of the rushing water, quite likely knocked unconscious, and then they drown and/or get splattered by debris, such as automobiles and parts of buildings. They have a single, split-second opportunity to grab the rope, a feat which will require significant physical strength and exquisite timing under life-or-death pressure, when they’re probably already injured, exhausted, and terrified. The odds are very much against them.

When asked how to improve the plan, most people suggest using a stronger rope, or adding a second rope downstream. You know: Quality gates, PMOs, tight access restrictions regarding who can touch the rope or pull victims out of the water, etc.

The more formality, the more bureaucracy, the more layers of indirection, the better. The cynic in me wonders if they aren’t really interested in rescuing the victims, but just in avoiding blame for those who drown. It’s the mentality behind the creation of “diversity programs” despite their well-documented ineffectiveness, or the idea that no one ever gets fired for hiring IBM. We did something as opposed to nothing, so you can’t blame us.

But what if we wanted to do something effective, as opposed to just any old something?

Observing the way the universe appears to work, it seems we would want to “go with the flow” and use nature’s power to help us achieve the goal, rather than fighting against nature. Like this, perhaps: a rope stretched across the water at an angle to the current.

What happens in this scenario is that people are carried downstream to the rope, which is stretched across the flowing water at an angle. They need not grasp the rope firmly but rather use it as a guide to reach the shore. They might just bump up against it repeatedly. The current pushes them downstream and the angled rope nudges them sideways. There’s a reasonable chance many will survive. If a car comes floating along it’s probably best to lift the rope out of the water to make room, or (worst case) let go of one end of the rope and throw it back across after the car passes.

This approach relies on the way things naturally tend to work, and allows everyone to adapt when unexpected things happen. It’s based on the unstated assumption that people are fundamentally good and will do the right thing if given the opportunity and the resources to do so.

Why is it that so many people can’t seem to imagine this approach, even when offered multiple chances to think of it?

Zen and the Second Law of Thermodynamics

I’ve found one answer in the work of Katherine Kirk, as expressed (for example) in her talk with Dan North at GOTO Chicago 2018 on SWARM: Scaling Without A Religious Methodology. As part of her ongoing attempt to figure out why things are the way they are, she looked into the philosophy of ancient Buddhist monks. They, too, wanted to understand why things are the way they are. Their approach was to observe the universe and make note of the way it appears to work. They figured that if we go with the flow of the way things naturally work, we’ll enjoy better outcomes than if we constantly bang our heads against reality in hopes of forcing it to be what we wish it were.

The monks noticed a few patterns. They observed that existence has three fundamental characteristics:

Change – everything in the universe is in constant change

Interdependency – we are always in some form of a system

Dissatisfaction – people are uncomfortable with the uncertainty that results from change and interdependency

Katherine suggests we can take some lessons from this:

Change drives the need to adapt

Interdependency drives collaboration

Dissatisfaction drives accepting feedback and iteration

That may sound familiar to you if you’ve read the Agile Manifesto. Or science. Or anything other than business school material.

Conclusion

Rather than governance designed to prevent or avoid change, we ought to come up with governance designed to adapt gracefully to change. Rather than organizing people and resources in a way that maximizes individual utilization, we ought to organize them in a way that maximizes collaboration. Rather than feeling frustrated that we can’t force the universe (or our company) to function in a manner contrary to Nature, we ought to use our discomfort about uncertainty to iterate over potential solutions based on feedback.

This may sound reasonable and it may be based on the way the universe really works, but it’s uncomfortable for many people nonetheless. Why? Well, people generally want answers, not more questions. This view of reality means our favorite Methodology or Framework can’t be the Final Answer Forever, that we can lock in and operate on autopilot. There is no Final Answer Forever, thanks to the #1 fundamental characteristic of existence: Constant change. Rather than a Final Answer Forever, what we need is a set of guidelines for adapting to change at scale.

If you’re not sure how to get started with this, I might know some people who can help.

Using Process Mapping to Understand How Agile Can Help w/ AJ Sanford and Andrew Fuqua (SoundNotes, Thu, 29 Nov 2018)

While many organizations would like to adopt an Agile approach, there are certain types that—due to the nature of their work and their relationship with the client—are not an easy fit with a process like Scrum. In this episode of SoundNotes, LeadingAgile Senior Consultants AJ Sanford and Andrew Fuqua explore how Process Maps and Value Stream Maps can help you gain greater clarity on where there are practices that need tuning and how Agile might be able to help. During the conversation, AJ, Andrew, and Dave also discuss the difference between Value Stream Maps and Process Maps and when one may be more helpful than another.

The Value of Your Work has Echoes (Wed, 28 Nov 2018)

I work with some really smart people. On a regular basis, they create and do things that challenge and inspire me.

This morning I got up and read this post by Dave Nicolette. In it, Dave shares his thoughts on the relationship between Quality, Craft, and Value. He uses the metaphor of the impact a jazz solo has to explain the relationship between these things.

Because I have consumed an irresponsible amount of caffeine this morning—and because I love trying to connect music, art, and what we build…

I’m Having All The Thoughts and They Must be Set Free

Dave lists a number of “stakeholders” who are impacted by the value of a jazz solo. In exploring this, he extends the definition of stakeholder to include a musician in the audience who hears the solo, takes what they learn back to the practice room, and builds on it.

In December of 1964, John Coltrane, McCoy Tyner, Jimmy Garrison, and Elvin Jones got together and recorded the album A Love Supreme. It is widely recognized as one of the greatest albums ever recorded.

In 2016, John Scheinfeld released a documentary on John Coltrane called Chasing Trane: The John Coltrane Documentary. In the film, Carlos Santana talks about the impact that A Love Supreme has had on him and how whenever he enters a hotel room, the first two things he does are put on A Love Supreme and then burn some incense in order to get the energy right in the room.

So, on the nights Santana is in the hotel room because he has a gig, the album impacts him, and thereby impacts everyone in his audience. Each of those people in turn impacts everyone they come in contact with the next day.

When Coltrane spent days and nights locked in his studio, ignoring his wife, 4-year-old daughter, and newborn son, writing the music that would be recorded for A Love Supreme, the value of the work he did there alone in the room extends to stakeholders who are going to see Carlos Santana in concert all those years later.

On June 4, 1976, the Sex Pistols played a show at the Lesser Free Trade Hall in Manchester. The show is sometimes referred to as “the gig that changed the world.” There were a lot of people in the audience at that show. Well, actually, not a lot of people. Certainly not the number of people you’d find at a Taylor Swift show. But in that audience were people who would go on to form The Buzzcocks, Joy Division, The Fall, Simply Red, and The Smiths. While punk didn’t actually begin that night in Manchester, that show is marked as one of the fixed points that gave birth to punk rock and what came after. So if you’ve ever listened to Joy Division, The Smiths, Pete Shelley, or any of the bands that evolved from those bands, you are a stakeholder of that show in Manchester.

The Value of Your Work has Echoes

In 1974, Chip Lord, Hudson Marquez, and Doug Michels, who were part of an art collective called Ant Farm, buried 10 beat-up old Cadillacs in the ground in Texas. The Cadillacs were buried in chronological order based on year of creation in order to showcase the tail fins. (The tail fin changed each year, and you needed a new car each year to keep in style.) The exhibit was a social comment on “the mythical Texas oilman who dumps his Cadillac when the ashtray gets full.”

In 1948, Franklin Quick Hershey was working as the chief of the GM Special Car Design Studio. Hershey decided to take an idea he saw in an early model of the Lockheed P-38 Lightning and include it in the design of the 1948 Cadillac. This is where the automobile tailfin came from.

In 1980, Bruce Springsteen released The River, which included a large picture of Cadillac Ranch and a song by the same name.

In 1985, James Brown included Cadillac Ranch in his video for “Living in America.”

In 2006, Pixar released Cars, which features Cadillac Ranch.

In 2008, Cage the Elephant included Cadillac Ranch in their video for “Ain’t No Rest for the Wicked.”

The Value of Our Work Has Echoes

Quality extends well beyond passing QA. The work we do impacts the people who use it. Those people impact others around them. Every interaction we have with others does the same.

As Dave Nicolette says, Value, Quality, and Craft are all deeply intertwined. You cannot have one without the others.

If you are a PM, you are not excluded from this. Every interaction you have with someone in your organization impacts value, quality, and craft.

Value can have a very long tail. Our work has echoes. Franklin Quick Hershey never heard of Lightning McQueen. John Coltrane never met the 20-year-old attending the Carlos Santana show just to hear “Smooth.” Yet, the connection and impact are still there.

What’s the Scope of a Unit Test? (Mon, 26 Nov 2018)

“Unit test” is a funny term. It seems to be the subject of all kinds of debate. Most of that debate is unresolvable, as it involves personal opinion and emotion above all else. Most of the issues people have with the term come down to just two things: The word “unit,” and the word “test.” People can accept the other words in the phrase, as long as we remove those two.

Some people object to the term unit test because of the word “test.” An executable unit test is really a functional check, not the same thing as testing the software in the true sense. I agree with that. However, most of the world doesn’t use that terminology, and I’ve learned it’s all but impossible to change entrenched industry lingo, so I’m going to write “unit test” in this piece, even though you and I know all the examples here are really checks. It will just have to do.

Some people find value in unit tests and others don’t. Okay, live and let live. I’m in the “finds value” group. I’m not going to debate it here. It’s a “given” for purposes of this post that unit tests are valuable. In fact, I’ll go so far as to assert it’s even more valuable to write the unit tests before writing the production code, too. If you fundamentally disagree that unit tests are useful, you can save yourself some time; have a nice day.

Everyone has some idea of what a “unit” of code is, so everyone has some idea of what a “unit test” is. The trouble is there are lots of different ideas about that. I’ve seen unit tests that cover an entire executable, a collection of collaborating objects, an end-to-end transaction, an entire batch jobstream, a series of steps in an ETL process, and all kinds of other things that are of large scope and that have lots of tightly-coupled dependencies. And no one is objectively “wrong” to call those things “unit tests,” because there’s no generally-accepted definition.

But there are a couple of ideas about unit tests that I think are pretty useful.

Michael Feathers

Michael Feathers came up with a set of guidelines for designing unit tests way, way back in the proverbial Mists of Time. They go like this:

“A test is not a unit test if:

It talks to the database

It communicates across the network

It touches the file system

It can’t run at the same time as any of your other unit tests

You have to do special things to your environment (such as editing config files) to run it”

That’s from a 2005 article. This next one was published in 2018, but it isn’t a brand new idea.

Michael “GeePaw” Hill

“In TDD, almost without exception, we want everything: a test, a method, a class, a step, a file, really, almost everything, to be as small as it can be and still add value to what we’re doing.”

If that sounds like a fairly loose definition, it’s only because it’s a fairly loose definition. It has to be. The smallest unit that still adds value to what we’re doing varies by programming language and available tooling for running test cases. Sometimes, it depends on just how you write the solution, too.

Java

Let’s consider Java, a language widely used for business application programming. I’ve used an example in the past of an iterative solution to the Fibonacci problem that I found on StackOverflow. Here’s a slightly-modified version of it:
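The original listing isn’t reproduced here; what follows is a sketch of what such an iterative implementation and its single series-level test might look like. The class and method names are assumptions, not the original StackOverflow code.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: names and shape are assumed, not taken from the original post.
public class Fibonacci {

    // Iterative implementation: returns the first n values of the series.
    static List<Integer> series(int n) {
        List<Integer> result = new ArrayList<>();
        int previous = 0;
        int current = 1;
        for (int i = 0; i < n; i++) {
            result.add(previous);
            int next = previous + current;
            previous = current;
            current = next;
        }
        return result;
    }

    // The single, series-level "unit test": one check over the whole output.
    public static void main(String[] args) {
        if (!List.of(0, 1, 1, 2, 3, 5, 8, 13).equals(series(8))) {
            throw new AssertionError("unexpected series: " + series(8));
        }
        System.out.println("series test passed");
    }
}
```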

That seems fine. If anything goes wrong with the implementation, that test will fail. But what if we apply GeePaw’s “No, smaller” guideline? Technically, that looks like this:

[1] “Is that small enough?”

[2] “No, make it smaller.”

Sorry if that was too technical.

The thing about the Fibonacci series is that the first couple of values are sort of predetermined, and from a certain point onward the values are calculated based on the preceding values. Granted, this is a bit trivial, but the pattern occurs in many more-realistic business application scenarios. We’d like to have unit tests that zero in on the exact part of the logic that’s incorrect. That saves us some analysis time, and also helps us plug and play different implementations without having to rip up our test suite. If the same tests pass before and after dropping in a new implementation, then we know the new behaves the same as the old. One less thing to worry about in this crazy old world.

With that in mind, we could write unit tests to validate the method that computes each number:
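The test listing isn’t shown above; as a sketch (all names invented), such tests would target a method that computes the nth value, with one tiny check per behavior:

```java
// Sketch with invented names: zeroing in on the method that computes each
// number, rather than checking the whole series at once.
public class FibonacciValue {

    // The predetermined seed values, then each value from the two before it.
    static int valueAt(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        int previous = 0;
        int current = 1;
        for (int i = 2; i <= n; i++) {
            int next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        // One check per behavior: the two seeds, and the additive rule.
        if (valueAt(0) != 0) throw new AssertionError("seed value 0");
        if (valueAt(1) != 1) throw new AssertionError("seed value 1");
        if (valueAt(7) != 13) throw new AssertionError("additive rule");
        System.out.println("per-value tests passed");
    }
}
```

A failure here points at the exact piece of logic that broke: a wrong seed fails the seed checks, a wrong addition fails the last one.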

Okay, those test cases are pretty small. Any smaller, and they wouldn’t be meaningful. But what if the production code were implemented differently? Here’s a Java implementation using a lambda expression:
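The listing is missing here as well; a sketch of the lambda-based version described next, where one expression produces the whole series via iterate, limit, map, and collect (class name invented):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: the whole series in a single stream expression.
public class FibonacciStream {

    static List<Integer> series(int n) {
        // Each element of the stream is a pair (current, next).
        return Stream.iterate(new int[] { 0, 1 }, pair -> new int[] { pair[1], pair[0] + pair[1] })
                     .limit(n)
                     .map(pair -> pair[0])
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(series(8)); // prints [0, 1, 1, 2, 3, 5, 8, 13]
    }
}
```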

Here we have a single line of code that produces the whole series. Do those little tiny unit tests still make sense? I don’t think so. This code doesn’t repeatedly call a method to calculate each value. All that work is handled by the iterate, limit, map, and collect methods. We needn’t test those separately because they’re supplied by the Java Development Kit; they aren’t our code. Were we to write the same logic in C, without a library, it would be a different story; but in this case the library is part of our tooling. (If you lack confidence that your tools work, then you have a different problem altogether.)

That first unit test we looked at would be the smallest meaningful one in this case. Something like this, maybe:
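Again the listing isn’t shown; as a sketch (names invented), the smallest meaningful test here is a single whole-series check against the stream-based code:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: with the series produced by one expression, the smallest
// meaningful test checks that whole expression as a unit.
public class FibonacciSeriesTest {

    static List<Integer> firstEight() {
        return Stream.iterate(new int[] { 0, 1 }, p -> new int[] { p[1], p[0] + p[1] })
                     .limit(8)
                     .map(p -> p[0])
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        if (!List.of(0, 1, 1, 2, 3, 5, 8, 13).equals(firstEight())) {
            throw new AssertionError("series mismatch: " + firstEight());
        }
        System.out.println("smallest meaningful test passed");
    }
}
```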

This meets GeePaw’s criteria for small enough, I think. We might be able to force the issue and come up with smaller unit tests here, but because of the way the production code is written, doing so wouldn’t yield any value. We’re testing a single line of source code. A lot of stuff happens when that code is executed, but we don’t directly control any of that and it isn’t split out in a way that enables us to test the pieces separately. So, we’re done.

COBOL

Most contemporary languages lend themselves nicely to unit testing frameworks. Traditional languages running on older platforms…not so much. What’s a “unit” in a mainframe environment? Unless you want to beat COBOL code into submission with your bare hands, maybe like this, you’ll be using IBM Rational Developer for zSeries with zUnit, Compuware Topaz Workbench, or rolling your own test harness. The smallest unit of code you’re likely to be able to run independently is a whole load module. (Kids: A “load module” is like an “executable.”)

You don’t necessarily need special tooling. You can set up a batch test run more-or-less like this:
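The job deck itself isn’t reproduced here. A rough sketch, using only standard JCL and IBM utilities, might run the load module against controlled test input and then compare actual to expected output with IEBCOMPR. Every program and dataset name below is invented for illustration.

```jcl
//FIBTEST  JOB (ACCT),'UNIT TEST',CLASS=A,MSGCLASS=X
//* Step 1: run the load module under test against controlled test input.
//RUN      EXEC PGM=FIBCALC
//STEPLIB  DD  DSN=TEST.LOADLIB,DISP=SHR
//INFILE   DD  DSN=TEST.FIBCALC.INPUT,DISP=SHR
//OUTFILE  DD  DSN=&&ACTUAL,DISP=(NEW,PASS),
//             UNIT=SYSDA,SPACE=(TRK,(1,1)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//* Step 2: compare actual output to expected output.
//COMPARE  EXEC PGM=IEBCOMPR,COND=(0,NE,RUN)
//SYSUT1   DD  DSN=&&ACTUAL,DISP=(OLD,DELETE)
//SYSUT2   DD  DSN=TEST.FIBCALC.EXPECTED,DISP=SHR
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
```

A nonzero return code from the compare step flags a failing “unit test” for the whole load module.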

There’s nothing required besides standard JCL and utilities. And a lot of people wouldn’t blink if you called a whole batch program a “unit,” because that’s the smallest unit they’ve ever been able to work with anyway. The idea of executing a single COBOL paragraph in isolation would strike most mainframers as a little odd. I’ve even heard the word “impossible” used, although that’s not literally true.

So, if we can accept the idea that a whole load module is the smallest practical unit of code we can test in isolation, we’re good to go.

Except that we’re violating one of Michael Feathers’ principles: We’re touching the file system. We’re touching the heck out of the file system.

Mainframe batch programs generally run against files and databases. That’s what batch processes do, after all: They process large batches of data. And where does data live? In files and databases.

Could we mock that out? Well, maybe we could figure out a way to mock it out, some of the time. But as a practical matter, taking into account the realities of the execution environment, running a test job with test files that are fully under our control isn’t so bad. We’re not introducing dependencies on data sources outside our control, so the test cases are repeatable and consistent. That’s what matters, above and beyond rules and guidelines.

I think it’s fair to call this a “unit test,” in context.

Microservices

What if we travel forward in time instead of backward? I’ve been reading and hearing a lot of people say there’s no need to unit test microservices, because they’re so small already that all we need to do is run API-level checks and we’re done.

Depending on the programming language, how the logic is implemented, and just how “micro” the microservice really is (it’s one of those overused popular buzzwords, after all), it’s conceivable that there’s no smaller part to check than the whole service, but it’s more likely the case that there’s more than one line of code behind the API.

The fact the code is invoked over HTTP using a RESTful API doesn’t invalidate all the usual considerations for what is or isn’t a unit test and how small is small enough. If it’s Java or C# or Python or Ruby or something like that, there will be methods in there that we can (and should) validate individually.
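As a sketch (service, endpoint, and method names all invented), the logic behind a RESTful endpoint is still ordinary methods that unit tests can hit directly, with no server and no network:

```java
// Sketch with invented names: a microservice's business logic is still
// just methods, which can be validated without going over HTTP.
public class PriceQuoteService {

    // Logic that might sit behind a hypothetical GET /quote endpoint:
    // price in cents for a quantity, with a 10% discount at 10 units or more.
    static long quoteCents(long unitPriceCents, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        long gross = unitPriceCents * quantity;
        return quantity >= 10 ? gross - gross / 10 : gross;
    }

    public static void main(String[] args) {
        // Unit tests call the method directly, per Feathers: no network.
        if (quoteCents(500, 2) != 1000) throw new AssertionError("plain price");
        if (quoteCents(500, 10) != 4500) throw new AssertionError("bulk discount");
        System.out.println("quote logic tests passed");
    }
}
```

API-level checks still have a place, but they sit on top of tests like these rather than replacing them.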

Solutions built that way end up dripping with microtests. So, it looks like you can follow Feathers’ and Hill’s guidelines for microservices, after all.

Embedded Systems and IoT

I’m going to gloss over the details here, as the basic information has already been covered. There’s nothing magical or fundamentally different about designing and building software for embedded systems and the Internet of Things.

A lot of the code is in the nature of a finite state machine, taking an action when the device is in a given state and a certain event is detected. The usual design considerations for this category of problem apply. Depending on how complicated the state machine is, you can choose from a range of implementation approaches from a switch statement to the state pattern.
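A minimal sketch of the switch-statement end of that range, with an invented device and invented states, might look like this: each state’s handling is isolated in a small method that a microtest can exercise directly.

```java
// Sketch with invented names: a tiny finite state machine for an
// embedded-style controller. Each state's handling lives in its own
// small method, so transitions can be checked in isolation.
public class DoorController {

    enum State { CLOSED, OPENING, OPEN }
    enum Event { OPEN_REQUESTED, FULLY_OPEN }

    static State next(State state, Event event) {
        switch (state) {
            case CLOSED:  return whenClosed(event);
            case OPENING: return whenOpening(event);
            default:      return state; // OPEN ignores both events
        }
    }

    static State whenClosed(Event event) {
        return event == Event.OPEN_REQUESTED ? State.OPENING : State.CLOSED;
    }

    static State whenOpening(Event event) {
        return event == Event.FULLY_OPEN ? State.OPEN : State.OPENING;
    }

    public static void main(String[] args) {
        // Each transition is verified in isolation, with no hardware in sight.
        if (next(State.CLOSED, Event.OPEN_REQUESTED) != State.OPENING) throw new AssertionError();
        if (next(State.OPENING, Event.FULLY_OPEN) != State.OPEN) throw new AssertionError();
        if (next(State.OPEN, Event.OPEN_REQUESTED) != State.OPEN) throw new AssertionError();
        System.out.println("state machine tests passed");
    }
}
```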

In any case, we’ll usually prefer to isolate the functionality for handling each state in small methods, functions, or subroutines that lend themselves nicely to isolated microtesting and unit testing. There is no reason not to design the code in this way. There are reasons why it is not usually designed in this way, but those reasons are not technical.

The rules of thumb for writing unit tests for embedded and IoT solutions are the same as those for conventional applications: A unit test doesn’t interact with external dependencies (Feathers) and its scope is as small as we can make it without losing the value the test case provides (Hill).

There may be a few differences in the details, depending on context. For test-driving embedded solutions, we’ll usually do development on a different hardware platform than the target. Our quick TDD cycle will happen on the development environment. An additional step beyond the standard red-green-refactor cycle to compile with the compiler settings for the target environment can expose integration problems early in the TDD cycle.

Test-driving or unit testing IoT solutions usually involves two types of tests: Unit tests, which are no different from unit tests in any other environment, and instrumentation tests, which mock out the input signals from the device. The instrumentation tests are conceptually equivalent to UI tests for applications that present a user interface or API tests for services. Here’s a nice article by Nilesh Jarad on test-driving Android applications.

Conclusion

There are those who disagree, and some of them are pretty darn smart, but if you ask me we should write executable checks or tests at the unit/micro level that follow Feathers’ and Hill’s guidelines, regardless of the domain or technology stack we’re using. My experience, along with that of many thousands of other developers, is that the benefits outweigh the costs by a large margin.