When Does The Creativity Happen? Design, Agile and the Studio Model

The subject of creativity and design has been on my mind for a while, especially since the publication of two of my articles: The End of Agile and Beyond Agile: The Studio Model. My contention from the start has been that one of the biggest problems Agile faces is that it never really gets to the root of creativity – or, to put it in more concrete terms, it confuses where and how design takes place, treating design primarily as a group activity. My experience, after having written a number of books, developed a lot of software, and worked in many creative domains over the years, is that this is simply not true: design is in fact one of the few areas where a group is counterproductive.

To understand this contention, it helps to look at how most successful creative projects actually come together. I’m going to pare this down to how I (and most other writers) write a novel, but the pattern scales up well to most creative endeavors.

Ideas usually emerge from the need to solve a problem. Identifying what that problem is, however, is seldom as easy as you may think.


Choosing the Seed


There’s a fallacy that is, unfortunately, rampant in our society: the notion that having an idea makes you a creator. The reality is that, for someone like a writer, ideas are everywhere. Indeed, the challenge is keeping from being overwhelmed by all the potential ideas out there – resisting the urge to start writing every time a new idea crosses your line of sight. Ideas come from conversations, from articles or books you’ve read, from challenges at work, from meetings with clients, sometimes just from daydreaming.

Ideas almost always come about because your brain is constantly making connections, trying to fit what it has learned with new information as it comes in. A common thread among architects, writers, artists and other designers is how broad their knowledge base is. They are constantly reading, seeking out connections, and building up data that may have some relevance. A writer’s knowledge isn’t necessarily deep, but it is broad, and more to the point, writers know where to go and who to talk to when they are trying to figure out whether you could in fact stop a hurricane with a nuclear bomb (no, you can’t) or the states of decomposition that a dead body goes through in an anaerobic environment (the body turns into soap).

When I worked as a sales engineer, I would quite frequently sit in on meetings with clients, trying to determine what their pain points were. One thing that emerged from those meetings was the awareness that, in general, the people commissioning thought work could usually describe what their problems were, but only in terms of their existing tools or processes. They could not envision a solution to their problems.

This is a seed. It’s a challenge. Successful fiction writers understand that conflict lies at the heart of every story, and conflict in general arises because a problem exists between two or more people that needs to be resolved. Sometimes the obvious problems are not where the solution needs to be applied; rather, they are symptoms that emerge because of the underlying conflict.

This is one reason why sales engineers and salespeople also tend to be in conflict. The salesperson is interested in one thing – closing the sale. The sales engineer, on the other hand, is attempting to figure out where the problems actually exist, and may come in with recommendations that don’t necessarily favor closing that sale. However, that’s a different discussion.

One of the first questions I ask clients is why they believe a given project is necessary. Most often the answer will be that they believe the technology will give them a competitive edge, or something similar. This is mostly bull-hockey, offered because more managers than not don’t like to admit that they have problems; it makes them look bad. Sometimes it comes down to something they perceive as simple because they don’t understand the complexity of the problem domain: “I want a dashboard that will tell me at a glance how well my company is doing” being a common one in that category. Sometimes it comes from their department seeing a significant slip in sales over the last couple of quarters, and they have to do something or they risk losing their bonuses.

Occasionally, I would come across a manager who had started out in the trenches and in general understood the problem domain well enough to clearly articulate what specifically was needed; their reason for working with consultants was simply to get a better handle on the options available. These projects always went smoothly, because such a manager had clearly done a lot of the preliminary design work involved in ascertaining both the functional and non-functional requirements. However, this was much more the exception than the rule.

In writing an article, the first thing that I do is write a title. The title is the seed. It is a question being asked (a form of conflict), and the article starts out answering that question. In my “board” I will typically have a dozen or so drafts that consist of nothing but a title and a very preliminary first paragraph. When I work on novels, my chapters follow the same form of a title and maybe enough of a hook to tell me what I was thinking at the time.

The thing about seeds is that you seldom rely upon just one. Not all article ideas will actually make it to the point of publication. Not all software “solutions” will either. There’s a peculiar business activity called brainstorming, where people sit down with a whiteboard to work out a solution in concert. Brainstorming can be fun, but it is almost invariably not done right.

The idea behind brainstorming is that you are attempting to create multiple seeds, each of which represents a potential solution. At the end of this session, the manager of the session should then auction off solutions. Let me explain this in a bit more detail: at the beginning of the session, everyone is given $50 in five-dollar bills. This is their pot. Over the session, the manager will pull out what seem like good ideas and make them biddable. At the end of the session, everyone bids on each solution in turn. Each winner of a bid is then tasked with the responsibility of developing that idea into a workable design proposal within, say, two weeks (note that if two people tie on the winning bid, each develops their own proposal). The proposals are presented at the end of this period, and then the key stakeholders vote. The top-voted proposal gets 2/3 of the resulting pot (what was bid), the second gets 1/3. As importantly, the winning proposal also becomes the responsibility of the person who won the bid.
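The auction mechanics above can be sketched in a few lines of code. This is a hypothetical illustration only – the 2/3-to-1/3 pot split and tie rule follow the description above, while the function names and the sample bids are mine:

```python
# Illustrative sketch of the brainstorming auction described above.
def run_auction(bids):
    """bids: {idea: {person: amount}}. The high bidder on each idea becomes
    its champion; tied high bids mean each bidder develops a proposal."""
    champions, pot = {}, 0
    for idea, offers in bids.items():
        top = max(offers.values())
        winners = sorted(p for p, amt in offers.items() if amt == top)
        champions[idea] = winners
        pot += top * len(winners)  # every winning bid goes into the pot
    return champions, pot

def split_pot(pot, ranking):
    """After stakeholders vote, the top proposal's champion gets 2/3 of the
    pot and the runner-up gets the remaining 1/3."""
    first, second = ranking[0], ranking[1]
    share = pot * 2 // 3
    return {first: share, second: pot - share}

champions, pot = run_auction(
    {"dashboard": {"alice": 20, "bob": 15}, "data-hub": {"carol": 25}}
)
payout = split_pot(pot, ["data-hub", "dashboard"])
```

Note that the money at stake comes entirely from the winning bids, which is what makes the bid itself a credible signal of commitment.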

There are several reasons why this approach works. The bidding process serves to eliminate those ideas that are likely to be non-starters, and also provides a way to gauge who is likely to be most enthusiastic in championing the idea. By separating the bidding from the voting, it gives the champion an opportunity to develop the idea, to turn it into a sales pitch. People can use other people to help develop it, which lays the foundation for a team, and it gives those that didn’t win an incentive to think about future potential projects and approaches. It also creates a backup plan, in case the first falls through for some reason.

Design is the process of conceptualization and clarification, the first necessary abstraction of a process that may continue for years. It is ultimately an act of exquisite individuality.


Germination and Design

It can be argued that this approach is more agile, but in reality, it provides a way for the stakeholder who is most likely to be responsible for the heavy-duty conceptual lifting to signal that they are willing to take on that responsibility. That conceptual development, or germination, is known as design. Note that design here does not necessarily equate to UX or UI development, though it is likely that this may be a part. Rather, it is a conceptualization of workflow, mechanics, requirements, and so forth. The design proposal needs to have enough deep engineering to show that it is feasible, but also enough high-level content to ensure it is meeting business needs.

Taking this back down to the creative process: most writers, when faced with the task of writing a longer work in particular, prepare some kind of outline of the points that they need to hit (or the story that they need to tell). In a work of fiction, this outline establishes when characters appear, the conflicts that drive their particular story arcs, and how those conflicts get resolved, either partially or fully. It is possible to write a book without doing this, but without the story itself worked out in the author’s head, the likelihood is high that the project will bog down quickly.

This is usually where a lot of the initial details are also worked out – what a given person looks like, what their backstories are, what tags or tells they have, and so forth. None of this is written in stone. I had one protagonist in a book I wrote who repeatedly went from being dark-haired to a redhead and back again in various drafts (she eventually ended up as a redhead). This didn’t significantly affect the flow of the story, but did require that I pay attention to descriptive content as I was preparing the final manuscript.

The end product of the germination phase is a roadmap or bible for how the project will go. This roadmap is not how the product itself will appear when done, but serves as a living document that reflects the changes that are made in subsequent stages.

The prototype is the first stage in actualization. It is seldom pretty or polished – not because polish doesn’t matter, but because its job is to prove the feasibility of an idea. The first prototype is often essential in that regard.


Prototype

The next phase in the process is to build a working model. In the case of a novel, the working model is the first draft. In the case of a software project, it’s the prototype. The expectations on prototypes are, and should be, low. The prototype exists to see if the design will work as expected and to see where it runs into problems. Such prototypes are typically written or implemented all at once, with relatively little in the way of involvement from outside parties until completed.

The team that puts it together is typically fairly small, and usually your A-listers. In a publishing scenario, this is typically done primarily by the author, with the editor periodically checking in to help resolve bottlenecks or to make sure the story doesn’t veer too far in an unrecoverable direction. In a software development team, it is usually two or three dedicated programmers (one handling data, one UX, the last integration), a DevOps person managing infrastructure, the architect and a tester. The art is “programmer’s art” – something that will likely give management the heebie-jeebies but serves to delineate the visual boundaries of the app.

What’s important about the prototype is that it gives developers both a test bed for ascertaining where problems may creep up and a way to better visualize what the final product needs to be. Many of the issues that arise come about due to the asynchronous nature of data availability. This prototype is also a good place to test and clarify the design for data interoperability between systems. This can also be facilitated by moving towards a context-independent data model, where the metadata about the data schema is in fact a part of the data itself, and where commonality exists at the foundational layers of the model itself.

This makes for somewhat more complex applications, but such applications are also considerably more robust in the face of model changes. Indeed, this is as good a place as any to stress that data architecture is more critical than physical infrastructure, especially with contemporary applications that are increasingly enterprise data-centric. Put another way – we have largely standardized the mechanisms for doing services, to the extent that engineering makes up a comparatively small portion of the integration complexity. However, data complexity – the need to transform from one ontology or data representation to another – has only increased.

Actually, that’s not completely accurate. Until comparatively recently, the engineering complexity of integration had made the issue moot – the amount of time to build the requisite connections in the first place was high enough that comparatively few such translations could be done. It was only after the issues of moving data around in a consistent fashion were mastered that the problem of ontological mapping became obvious. Perhaps machine learning may be a solution there, but what this means from a development standpoint is that such prototyping may also involve the related act of prototyping consistent target ontologies.
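To make the context-independent idea concrete: a record can carry its own schema metadata alongside the values, so a consumer needs no out-of-band context to interpret it. This is a rough sketch, not any particular standard, and the field names are hypothetical:

```python
# Illustrative only: a self-describing record whose schema rides along with
# the data, in the spirit of a context-independent data model.
record = {
    "@type": "Customer",
    "@schema": {
        "name":    {"datatype": "string", "required": True},
        "revenue": {"datatype": "decimal", "unit": "USD"},
    },
    "name": "Acme Corp",
    "revenue": "125000.00",
}

def missing_fields(rec):
    """Validate a record against its own embedded schema; returns the
    names of any required fields that are absent."""
    schema = rec.get("@schema", {})
    return [field for field, rules in schema.items()
            if rules.get("required") and field not in rec]
```

Because the schema travels with the record, two systems exchanging such records can detect gaps mechanically, which is exactly the kind of foundation ontological mapping needs.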

So going back to the literary prototype: the first draft of a novel is usually pretty bad. Characters are still sketchy, there may be inconsistencies in the storytelling or continuity errors, certain sections may run overlong or cut short, and the denouement may be weak. Authors with a lot of experience may come close to getting the first draft right, but few will be satisfied with it.

Iterative processes are fundamentally fractal – the output of one iteration is the input of the next. This implies that creativity is a systemic function.


Iterations

It is only after the prototype is done that it makes sense to even talk about iterative development and apply agile methodologies, though you can use a limited form of scrum during the prototyping phase as a project management tool. The first prototype will go a long way towards helping a client understand what technology is being proposed, even if they may feel that it’s ugly. Indeed, one trick that UX designers use is making interfaces that are deliberately sketchy so that the client doesn’t get fixated on the parts that are often easy to change but have minimal impact on functionality. In layout this process is often called Greeking (though it usually uses fake Latin), displaying intermediate content without it being seen as final.

It is at this stage that the give and take of iterative development really takes off. This is the stage where you can begin testing for scaling (with the caveat that the issue of scaling should have been a major component of the design phase). Some functionality can be added or removed at this point in the process as well, though with every iteration this becomes costlier.

This is also the stage where data hygiene can be tested for external sources, and where data governance gets implemented. Indeed, one aspect of the iteration process is that as the application moves closer to completion, the people who will be maintaining and interacting with the application should start being incorporated into the overall process. Quality control moves out of regression testing and into systemic testing with each successive iteration.

Note that such iterations are very seldom two weeks long. You can think of it this way: each iteration essentially “breaks the build” in some manner, sometimes through reprioritization and rescoping, sometimes because testing reveals underlying flaws that only become evident once scaling becomes a factor. A four-week cycle actually works out pretty nicely, because it balances the conflicting demands of iteration planning and getting actual work done.

In the literary world, this is where other things begin to get scheduled. A release date is set for the novel. Copy editors perform additional proofs on it to fix mechanical errors, the editor becomes far more involved with the actual flow of the book, and if the book is going to traditional press, artists, designers and production specialists are queued up. In the IT world, this is about the time that the team leads start doing presentations to upper management, and is also about the time that the first users of the application being developed are trained up on it. These users provide usability feedback, and they are often the people who end up training other people in the organization on how to use these tools once the system is fully deployed.

Publishing is simply an agreement to take the results of the current iteration and make it available to others. The iterative process may continue indefinitely even with a publishing event.


Publish

There’s a cold hard truth that most seasoned authors will tell you: there is no magic moment when you realize that you are done. There is only a point when you have put all of the necessary follow-through steps to getting published in motion, and you have to send what you have or it will impact schedules. There will always be changes that you wish you had made but didn’t, characters that didn’t quite work as well as you hoped but are adequate to the task of supporting the story, sections that were too wordy or too sparse – but none of that matters, because at some point the curtain must go up, the book must go to press, the software must be released.

This is a decision that is usually made not by the architect or designer but by the product owner. In my earlier post on the Studio Model, I noted that the architect is the one who provides the vision – the writer or director, as the case may be. The product owner is either the client or a liaison for the client, and he or she represents the interests of those paying for the product.

The move to purely electronic distribution has changed the production model dramatically as well. A novel can be updated in real time. A video on a streaming site can similarly be modified, and games now make updates an integral part of the product – new characters, new scenarios, and new capabilities can appear, while more problematic issues quietly fade away. The deployment of enterprise applications is following the same patterns as stand-alone applications are phased out.

What this means in practice is that the urgency to deploy feature-complete applications is dropping with time, since there are fewer impediments in the way of system-wide, instantaneous deployments. This also makes the argument for designing with an eye towards modularity, so that new capabilities can go online in a true plug-and-play manner with relatively little impact on overall system performance or usage. These kinds of meta-considerations generally need to be built into the system early, because the cost of laying them in late in the cycle is simply prohibitive.
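The plug-and-play modularity being argued for can be sketched as a capability registry: the platform exposes a stable dispatch interface, and modules attach named capabilities to it without touching platform code. This is a hypothetical minimal sketch, not any specific framework, and the capability names are invented:

```python
# Hypothetical plug-and-play sketch: modules register named capabilities
# against a platform-owned registry, so new features go online without
# modifying the platform itself.
registry = {}

def capability(name):
    """Decorator that registers a function as a pluggable capability."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@capability("report.summary")
def summarize(rows):
    # A module supplied after initial deployment could look just like this.
    return {"rows": len(rows)}

def invoke(name, *args):
    """Dispatch to a registered module; unknown capabilities fail loudly."""
    if name not in registry:
        raise KeyError(f"no module provides {name!r}")
    return registry[name](*args)
```

The design choice is that the platform only ever depends on the registry contract, which is the meta-consideration that has to be laid in early.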

In many respects, this changes the shape of software development. We are going from point releases to continuous innovation, and from building specific applications to building generalized platforms that can then be “customized” with modules for specific needs. I’d contend that in this case, Agile may come into play for developing those modules, but again, its role is very much constrained to deploying enhancements.

Where does software go when it dies?


End of Life

No software system lasts forever. The disadvantage to modularity is that such modules are perforce extensions to the system, and as such take their toll in terms of performance, complexity and scalability. Usage analysis may discover that some modules are invoked far more often than even “native” functions. New hardware and environment functionality may either warrant a systemic upgrade or force one, depending upon the changes.

Again, this makes the case for keeping a very clear wall between data and code, and for moving towards a semantically (contextually) neutral representation of information, even if it is somewhat more awkward to work with. In effect, especially with semantic systems, the operant data is contained in the database, not the application. Streamlining the application (or migrating it to different platforms) thus becomes much simpler. All too many systems are so tightly coupled to the database that migration requires refactoring the data as well as all the access points. This is a compelling argument for knowledge graphs, as for the most part the coupling between data and application is weak at best when using them.
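A toy triple store shows why that coupling is weak: the application queries patterns over (subject, predicate, object) facts rather than binding to a fixed table layout, so model changes don’t ripple into code. This is a sketch of the idea only, not any particular graph database’s API:

```python
# Toy knowledge-graph store: all data lives as (subject, predicate, object)
# triples, so applications couple to a query shape, not a schema.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [(ts, tp, to) for ts, tp, to in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

g = TripleStore()
g.add("order:1", "placedBy", "customer:7")
g.add("customer:7", "name", "Acme Corp")
```

Adding a new predicate tomorrow requires no migration at all – existing pattern queries simply don’t match it, which is the decoupling being argued for.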

As graph oriented databases become more entrenched in application development, I expect that this decoupling (and consequently storing more of the working system state and metadata in the graph) will also have its impact upon methodologies in general. It’ll probably still be called “Agile” – the term has become largely synonymous with continuous delivery models even when they bear only passing resemblance to Agile of yore – but regardless, it will almost certainly have an impact on software development.

The infinity symbol evokes the tail-eating serpent of myth.


Eating Its Tail

I’ve always been fascinated by the myth of the Ouroboros – the great serpent of ancient myth that survived by eating its own tail, echoed in Norse legend by the world serpent Jörmungandr. The infinity symbol, used by Euler and other great mathematicians, conjures the self-eating snake, as well as evoking the Möbius strip.

I bring this up because I think it’s a powerful metaphor for thinking about where the creation of IP is going. When does creativity happen? It happens when people collaborate, but also when people reflect internally. Conceptualization is almost always an introspective act – groups seldom conceptualize well. Refinement is almost always an extroverted act, because groups provide fresh eyes and perspectives that serve to locate weak points as well as identify value.

It’s incumbent upon our institutions to recognize this. All too often, companies become fixated upon removing the “I” in TEAM, because the premise underlying many organizations is that individuality is disruptive. Yet it is that very individuality that ultimately provides the spark of conceptualization in the creative process – the unique perspective that a person brings to solving problems.