I was recently asked about the tension between “instructing” and “coaching” in a coaching context. My impromptu thoughts.

Using an instructive approach can efficiently give context, direction, and a sense of definiteness; it can reassure a worried coachee, and it may be the most comfortable coaching paradigm for people at certain orders of development. I’m thinking of linear thinkers, perhaps, who “want answers.” It might also be OK for a more advanced thinker, a gifted autodidact, or a fellow teacher or coach who is comfortable with the development environment and addicted to creating their own learning cycles–who just needs a hint of the path and they’re off to the races. Instruction seems necessary if you’re in a situation where you have limited time and some fixed goal you need to meet in that time (though I can’t imagine any coach seeking out such constraints). The downside is that there isn’t much room for the coachee to participate in the meaning-making, and little co-learning, which means less learning for the coachee and the coach, too (!). Your coachee will be mostly “recording” data during the session in order to (hopefully) reflect, process, and apply later; and maybe as a coach you’re not operating at your growth edge either–you may be a bit of an automaton rattling off wisdom. So you lose some learning opportunities. There is a consequence to the relationship, too, because an instructional style can be a distancing move.

The coaching approach is preferable if you want to create a space for working and learning together, to partner in understanding what is going on in the general assessment and to conceive of, develop, implement, and build on applications of the knowledge in that assessment in the coachee’s social context. Coaching is also a better transition to a self-sufficient coachee: you’re thinking with them, and going through experiments and applications with them. Because they’re more actively involved, they’ll have a better chance of building habits, skills, and awarenesses that can continue after the coaching sequence is over.

I think you will ultimately blend both approaches as a coach. There are parts of even an extreme-coaching-style coaching process that require a kind of meta-narrative that can feel like instruction (here’s what we’re going to do), and there are also parts where you need to step out of coaching and give context (here’s what this means; this is what I say to folks when we get to this part). And even if you’re leaning toward instruction, it would be unusual not to invite some kind of input and engagement from the coachee. No matter how much you are in control, it would be strange not to respond to, or allow to develop, a question or, better yet, a spontaneous recognition on the part of the coachee–and that’s coaching.

An additional thought: I think it is actually very difficult to resist instructing, to get out of the comfortable seat of your knowledge and control and be available to the coachee’s perspective . . . or perhaps it is better to say to be suspended between your knowledge and the moment and the coachee’s perspective. Edgar Schein calls the problem “content seduction,” and advocates against it in his recent book Humble Consulting. This is the master move that gifted and experienced teachers and coaches learn at some point, but I don’t really see people doing it right out of the gate, and it feels like it requires a developmental stage: 11 on the Lectica scale, perhaps, or 4 on Kegan’s.

Which style am I inclined to use? Coaching with instruction in reserve. My plan is usually to frame the session around key points and themes that emerge from the data we are gathering. I float these points for discussion when it feels natural–often I don’t need to, because the points tend to float themselves, because the coachee sees them, too–and then I approach them each from a perspective of mutual inquiry. “I noticed this. Does this seem interesting to you, too? What is your take? How shall we think about this?” There will be places I will need to instruct. What does a particular term mean? Where are we in whatever process we are following? What is our next step? So I’m prepared to say something at those points. (Although often I don’t need to: even the instructive pieces seem to “say” themselves.)

I was thinking today about the influential book How the Way We Talk Can Change the Way We Work by Lisa Lahey and Bob Kegan. It suggests ways that slight shifts in tone or nuance or perspective can more or less instantly transmute a difficult or problematic context into a productive one.

The shifts come in the realm of language. Lahey and Kegan suggest you can move easily from a way of talking that’s less productive to one that’s more productive. There are multiple pre-fabricated language movements you can make. My favorite example? Complaint.

With very little effort, the language of complaint (limiting) can be modulated into the language of commitment (inspiring). How? Well the leverage point or hinge is to know that both languages have buried in them a sense of values, a longing, an ethics, a desire for a certain way of life, a need to be connected or valued. In the language of complaint these virtuous components are kind of hidden or implied, but in the language of commitment they are the message itself.

For example, let’s say I don’t feel like my boss gives me enough opportunities to take charge of a project, to show what I can do, to stretch, to lead. If I focus on how bad that makes me feel, and if I don’t talk to her about it directly–“My boss won’t let me try anything new, she doesn’t value me, etc”–that’s the language of complaint. But the point here is that wanting to be trusted with leadership roles, that’s a positive thing, that’s a virtue buried in the complaint–and that’s worth talking about. It shows a path towards a different kind of relationship with your boss, one your boss might even like. Or at least be willing to try out with you. Rephrasing in terms of commitment would look something like this: “Hi boss! I would really like to have a chance to lead a project. I feel I can do a good job for the organization, and it would feel good to see the organization supporting my growth. I realize there’s some risk here because I’ve not led a project before. Can we discuss it?”

The second option, though it has the same, as it were, problem-DNA (not getting to lead a project) as the original phrasing, has a different solution-DNA: it posits a completely different world view. One where organizational and individual growth are both possible. As opposed to one where the organization is seen (by the complainer) to circumscribe the individual’s development possibilities.

The shift is as simple as using different words! Ok, it’s more complicated than that. Of course, you’re thinking, there is a different way of thinking going on in the two languages. A different way of thinking, a different way of being with people, a different comfort with risk, a different role for the self, a different assumption about what should happen at work . . . a lot of things. It is a language shift, because you are changing the words you use. But much more is shifting, too. In this way it reminds me of downhill skiing pedagogy. When you learn to downhill ski, you are often taught (among other things) to just look where you want to go–that is, you turn your head to face the place you want to go–whereupon your legs and feet and hips and skis and the slope all align as it were magically to get you there. This language shift is like that. You shift your words, and the rest clicks in. The point is you get there.

I will speak to one other point, which seems important, if tangential. One of the things governing the language of complaint is fear; the language of commitment exposes fear to sunlight, and that can be scary. When we complain, something is bothering us. We don’t feel good. But, importantly, the potential of a worse feeling resulting from any action keeps us from doing anything about it. In our example, the complainer doesn’t like not being trusted to lead. But if he talks about it with the boss, he might find out that the boss really doesn’t think he’s capable. That would be hard to bear. Worse still, if he asks to lead, he might get to lead! And then there’s a chance he might publicly fail. And that would be the hardest to bear of all. Hard enough to bear that even the specter of the possibility of having to experience it keeps the complainer comfortably tucked in his language of complaint, even though it’s no fun either. It’s a known and manageable discomfort.

It would take quite a bit of introspection for our complainer to catch himself in this loop and work his way out; Lahey and Kegan’s “language” shift offers him an easy get-out-of-jail-free card. He can look back from having successfully led a project and wonder how he got there.

Abraham Maslow studied self-actualized people–highly evolved people, you might say, advanced in their thinking, sophisticated in their humanity, expressive, expansive, generous, loving, confident, healthy, gifted, alert–and what made them special. In particular he focused on the way they perceived.

He thought they knew things in a different way, which he called B-Cognition, short for Being-Cognition. In B-Cognition, the individual perceives the object as if the individual were part of the object. A loving, universalizing, interrelated way of knowing. Knowing the object so well that you discover in it yourself, or links to yourself, and through those links, you intuit more links–to everything.

A way of looking or knowing that encompasses the object’s existence and your own existence and so is also a kind of being, hence the name. A way of knowing that radiates love, joy, contentedness, acceptance, appreciation, forgiveness to those in contact with the individual.

The great people manage to exist in B-Cognition; the rest of us get in there now and then: in the process of artistic creation, listening to music, in meditation or in mindful moments, walking in the woods, in a moment of “flow,” or generally, in moments of being teased out of routine cares by things.

Maslow distinguishes B-Cognition from D-Cognition, which we all use all the time, to my everlasting chagrin. This is Deficit-Cognition, perceiving in a way that separates the looker from the looked-at. Judging, categorizing, assigning relative value, assessing relevance, bracketing off, determining usefulness or beauty, investigating logical truth, etc.

D-Cognition is the lens through which we see each other and the world: “To what extent is this thing useful to me?” we are asking at some level every time we perceive anything. Or perhaps the question we ask ourselves has another form, too, coming from a position of anxiety: “Will this thing impede or injure me? Expose a vulnerability?”

If you pay attention to the flicker of thought in your mind and in the faces of others as you meet them in the street or in the office (imagine doing this!), you’ll see D-Cognition at work. Instantaneous judgements and rankings and assessments and associated thoughts and anxieties well up with every glance, no matter how fleeting.

I think D-Cognition is basically the only perceptual apparatus of the workplace, which is logical, I suppose, because the prevailing idea at work is that we are practical, efficient, and attuned to the bottom line, and we need to judge, judge, judge, judge. Or be judged.

In aesthetic and academic circles I think there might be a little more room for B-Cognition. A scholar writing about Wordsworth, for instance (I picked him on purpose!), I hope, is (or was at some point) motivated by a B-Cognition-like experience of (or with) the text. Of course she then writes about it and has to defend her writing against other scholars and other interpretations and in creeps D-Cognition.

Maslow’s study of perception connects with other similarly-oriented ways of thinking. My personal saint and philosopher, Henri Bergson, always sought “pure perception,” for instance, which was to be achieved by intuition, a penetrative, organic, knowing-from-within, like B-Cognition. I remember writing in my Master’s thesis decades back about the experience of using intuition on a text and hypothesizing that at some point down in the trenches of that perception you were seeing yourself or seeing an interplay between yourself and the text that changed both. Some kind of quantum effect.

B-Cognition is also a good way to describe the goal of mindfulness and meditation, very popular now (and deservedly so) in our frazzled, overloaded, hyper-material, people-argue-with-each-other-on-TV, tabloid-y culture. These activities, coming out of the Buddhist tradition, focus your attention on your inner experience of life in the moment; and one of the key points, as you come to know yourself, is to come to know yourself as existing in a kind of suspension of selves, one big oneness. Mindfulness chips away at the unhealthy personal and interpersonal effects of D-Cognition and aims to get you to the place where you can radiate in all directions the kind of contentedness and love that Maslow’s modern Buddhas did.

B-Cognition and mindfulness also align with Constructive Developmental Psychology, which I’ve mentioned a few times, and in particular with the fabulous 5th stage of Robert Kegan’s hierarchy of epistemological sophistication. This is the stage where your interest in being a “self” fades and you begin to take very seriously other selves and relations between selves. You laugh happily at your own fallibilities, which you would never do if you were trying to keep your you-ness intact. And of course they align with all those wonderful, inscrutable, contradictory, healing messages from thinkers and artists working along the same lines. Walt Whitman, of course. Maybe something in the Cubists. Etc.

I like the path Maslow took — starting with a psychological investigation more or less according to the way of Western science (although feeling perhaps more like archaeology than psychology?), he ended up confirming what he was seeing by drawing similar connections to thought in non-western-scientific containers: religion, philosophy, aesthetics, literature.

One last point that I think is key. In B-Cognition, we have the data of D-Cognition, plus much more. It is not that we suddenly lose our ability to discern or to think; B is not intellectually inferior to D. Those D-data are all there, but contextualized, re-membered, put back together, held together with contradictory information, resolved, understood in a different way by an epistemology at a higher order of complexity. A small piece replaced in a big puzzle.

For myself I’m about getting more B-Cognition to the people. At work, in life. On a personal level, on a local level, on a national level. B-Cognition of others, and maybe more importantly, of themselves. Appreciation of B-Cognition. Restitution of wholeness and relatedness in the deconstructed and compartmentalized lives of people.

Alan Kay gave a talk called “Is Computing a Liberal Art?” yesterday at the 2012 NITLE Summit. Here I discuss his key idea: that systemic thinking is a liberal art, and I explain a corollary idea, that textbooks suck.

Kay is attuned to how ideas evolve and are instantiated in the culture and the mind. For him a key piece in this process is the relationship between ideas and the categories we have for them; the relationship is this: if you don’t have a category for an idea, it’s very difficult to receive that idea.

Kay says we’re born with 300 or so preexisting categories that the species has evolved to know it needs to think about to survive, and we’re wired to be looking around for thoughts in those categories (food, shelter, pleasure, etc.). But the story of the last few hundred years is that we’ve quickly developed important ideas, which society needs to have to improve and perhaps even to continue to exist, and for which there are no pre-existing, genetically created categories. So there’s an idea-receiving capacity gap.

Education’s job should be, says Kay, to bridge this gap. To help, that is, people form these necessary new idea-receiving categories–teaching them the capacity for ideas–early on in their lives, so that as they grow they are ready to embrace the things we need them to know. Let me say that in a better way: so that as they grow they are ready to know in the ways we need them to know.

Said he, “If you have a new idea come in and education won’t teach people it from birth, you get a pop culture.” Pop culture! A harsh but fair critique of our society. More on that pop culture below.

For now, what are the ideas or categories, or what capacity for ideas should we now be teaching? Kay has one major thought in mind. He wants us to cultivate the ability to conceive of, work with, create, understand, manipulate, tinker with, disrupt, and, generally, appreciate the beauty of systems. This he hails as perhaps the most important of all the liberal arts.

It is the zeitgeist of the last 100 years that everything that was but a piece of a system before now appears as a system–that everything that was linear before is now multi-dimensional: the body as a system, the environment as a system, economics as systems, computers as systems. It’s why we talk about gamification so much–because a game, or a simulation, thought of as a thing we might create (rather than a thing we only act within), is a visceral example of systems thinking. (If this sounds familiar to readers of this blog, it’s because I’ve written about seeing systems before, in The Age of the Gums, or in Errol Morris and Spirals of Learning, or in Pieces of an Ecology of Workplace Learning, or even in The Conduit Metaphor, for instance. It might be all I write about.)

Seeing systems is an epistemology, a way of knowing, a mindset. As Kay said, “the important stuff I’m talking about is epistemological . . . about looking at systems.” It’s the Flatland story–that we need to train our 2D minds to see in a kind of 3D–and Kay’s genius is that he recognizes we have to bake this ability into the species, through education, as close to birth as possible.

One main point implied here is that we’re not talking about learning to see systems as an end point. Systems thinking is to be conceived of as a platform skill or an increased capacity on top of which we will be able to construct new sorts of ideas and ways of knowing, of more complex natures still. The step beyond seeing a single system is of course the ability to see interacting systems–a kind of meta-systemic thinking–and this is what I think Kay is really interested in, because it’s what he does. At one point he showed a slide of multiple systems–the human body, the environment, the internet–and he said in a kind of aside, “they’re all one system . . .” Compare that to the advanced stages in Bob Kegan’s constructive developmental psychology: “At Kegan’s sixth and final stage . . . there is a dawning awareness of an underlying unity that transcends human and environmental complexity.” (That’s from Philip Lewis’s work on Kegan, The Discerning Heart; I just happened to read it on the Metro on the way back to the hotel, as I was passing through Arlington National Cemetery.)

Kay’s complaint is that higher education does not cultivate the particular epistemology of systemic thinking. We don’t teach integrative ways of knowing; we instead dwell within our disciplines, which dwelling you can see as being trapped within an arbitrarily chosen system. The point is to be able to see connections between the silos. Says Kay, the liberal arts have done a bad job at “adding in epistemology” among the “smokestacks” (i.e. disciplines).

Ok, so we’re not teaching systemic thinking. So what? What happens if you don’t teach people systemic thinking?

Then, Kay says, you’re allowing them to be stuck in whatever system they happen to be in, without thinking of it as a system. What happens when you’re stuck in a system? You don’t understand the world and yourself and others as existing in constant development, as being in process; you think you are a fixed essence or part within a system (instead of a system influencing systems) and you inadvertently trap yourself in a kind of tautological loop where you can only think about things you’re thinking about and do the things you do and you thus limit yourself to a kind of non-nutritive regurgitation of factoids, or the robotic meaningless actions of an automaton, or what Kay calls living in a pop culture. He sees this problem in higher education, where even faculty, experts in their own fields, are uneducated, in the sense that they can make no meta-connections among the fields, such that (as he said) hardly anyone exists who can understand the breadth of thought in a magnum cross-functional opus like the Principia Mathematica. And yet our future will be built on such integrative meta-connections as Newton’s.

By way of conclusion, I’ll now tell you why textbooks suck, according to Kay. A downside of being epistemologically limited to thinking within a system is that you overemphasize the importance of the content and facts as that system orders them. If you’re a teacher, you limit your students to processing bits according to a pre-ordained structure, to being a program, if you will, instead of learning to write a program. It would be better to use the system itself as the information students act upon when they construct their knowledge, and to find a way to get students to build new systems and even systems of systems. We teach students vocabulary within one set of grammatical rules, with the rules as the endpoint, say, but if we were disciples of Kay we would allow students to make grammars of grammars and languages of languages, with spirals of increasing complexity of thought looping into infinity and no endpoint in sight. That’s the order of consciousness Kay is after. Most textbooks, however, are on the stuck-within-the-system and vocab-and-grammar level. Which is why they draw Kay’s ire.

Errol Morris, the famous documentary filmmaker, says the purpose of a documentary is not to document things as they are, but rather to find and animate a compelling mystery. Not a mirror walking down the road, but a magnifying glass stopping on the road and probably even leaving the road. The point is not to reinforce a stable model of the world but rather to add new data to that model. Maybe to add so much data or data so strange that the model itself has to be remodeled.

That seems to be the particular genius of Errol Morris: to discover wonderfully inexplicable complexities right where everyone is desperately rushing to demystify and settle things and close down, rather than rev up, curiosity, as we once sprayed dioxin on dust to beat it down. After the trial, after the tabloid furor ends, decades after the war is over, he brings his questioning gaze.

His mysteries seem to re-ravel, if you will, a sleeve of care. He starts with a single fiber that, as he follows it, attracts more substance to itself, like a grain in a supersaturated solution, and forms loops and lattices, working itself back into a crystal, or a sweater, or a shroud.

Finding simple things that don’t fit the model, and unpacking them until they are so complex and beautiful the mind strains to encompass them might be the very inductive, Deleuze-like, hallmark epistemology of the age. Everywhere we see ecosystems where we used to see simple causes and effects. Maybe civilization evolves by a constant epistemological pendulum, from reduction to production, from resemblance to representation (as Foucault said), from induction to deduction, from E-Pluribus to unum, like music coming out of an accordion, and so on.

In any event, I wanted to point out that Morris’ re-raveling is how we learn important things. If you imagine that learning is improvement with a self-consciousness about it, such that learning includes the experience of seeing yourself learn, then it’s easy to understand that your improvement, since it feeds on itself, grows sort of like money in the bank, where the interest adds to the principal which adds to the interest, and the graph of growth gets steeper and steeper. Or to put it another way, the learning gets increasingly complicated and the rate of that increase itself increases. Or to put it another way, the thread becomes a row of loops becomes a flap of fabric becomes a 3-dimensional sweater. Or to put it another way, the line becomes a kind of spiral of Archimedes, slouching towards complexity shuffling step by shuffling step, and looking with every lunge more like a chapter title page out of the Book of Kells. As if you are always moving from a certain kind of Flatland into a world of plus-one dimensions.

Kurt Fischer, a cognitive scientist at Harvard, developed a scale of universal cognitive development that models this kind of growth—showing learning progressing from simple ideas to relationships of ideas to relationships of relationships and so forth. Importantly, key steps include the whole of the previous level as the first building block. I will insert a pic if I can find one.

Robert Kegan’s work on adult development is similar. Adult minds, if they’re in the right environments, says he, go through a series of epistemological changes—from the “socialized mind” to the “self-authoring mind” to the “self-transforming mind,” where the key starting point characteristic of every level is that you “see” the previous epistemology. You see as an object the thing through which you previously saw the world, or your subject—you form, that is, a relationship with the thing that was previously you—you are two ideas now linked, instead of one, etc.

We could look, too, at the double-loop learning of Argyris: which is characterized by not just reflecting on the performance per the established goals, but which includes reassessment of the goals themselves (!). Or the collaborative learning praised by Lee Shulman, which is distinct from cooperative learning, and in which you and the people you’re learning with figure out why you’re there, what your product will be, how you’ll go about producing it, and what the individual roles will be—all simultaneously, as in a jazz improvisation: you have to improvise to even know why you’re there.

The core experience in all these is the excruciating or exhilarating feeling of stretching your perspective to fit a torrent of nonconforming data, then looking around for new data (including data about yourself looking at data) and doing it again. What’s perhaps unusual about Morris and people like him is a compulsion to inundate himself and us with this nonconforming data. Most people don’t seem as inclined to jump out of the pond at any opportunity to make themselves evolve legs; he is, though. Driven by a kind of faith or fanaticism that there will be a there there as the line grows into a complex spiral. Many theres are probably there simultaneously.

This mystery-as-epistemology is a neat thing on a couple of levels. For one, it’s a humanism. The belief that there are in you, me, and every aspect of the world unfathomable multitudes of complexity and wonder—and that that’s ok–not just ok, but, even, that that’s how we ought to be, and that the highest evolved action might just be to go digging for this stuff—this is deeply reassuring. Much of life seems to involve the opposite: sweeping things under the covers, assuming veneers of normalcy, and dealing with the inevitable neurosis that arises from the conflict between your inner complexities and your epistemologically circumscribed outer self. To do the opposite, for once—to honor the complexity—is nice.

It’s healing, in fact. These mysteries repair the workaday world. Justice system, war, politics, religion–things that are supposed to order the cosmos, answer questions, and regulate–also seem to leave destroyed people and confusion in their wake. A restoration of ambiguity after these kinds of simplicities is a wonderful thing. And if it ends up you need ambiguity to learn, well then so much the better.

When you think about doing new things, there are a few phases. Four, by my count. First comes the part where you conceive of the thing to do–call it the idea phase. In the beginning there was the word, etc. Then there’s a phase where you actually do the thing you conceived of. The doing phase, which is number three. These two phases are self-evident I think to most people, and I’m not going to talk about them here, although I note they get really interesting as you peer into them (How do you actually get that idea? What is it you’re doing, when you’re doing, anyhow? Is there any thinking happening in there during that doing? Etc.)

Less obvious than these is a post-doing phase, phase four, where you reflect on how the thing went and look for ways to improve before you try it again. This phase is crucial because with it comes the feedback loop that is at the heart of all learning and improvement, and that turns your isolated action into something that can grow in meaning and value indefinitely and form associations with other things and attract people and change them and be changed by them and on and on in wondrous convolutions and permutations of beauty influencing beauty forever. Having a loop is really the only way to (eventually) achieve goodness and approach perfection, in my opinion, contrary to the semi-conscious belief of many that excellence precipitates from nothing with no precedent. That good teachers are born, not made, etc. I am not sure you can be or do absolute good; but you can improve relative to yourself, and you should focus on that.

I could talk a lot more about this reflection or feedback phase, as I love it dearly, but I won’t, because I would rather draw attention to a phase between the idea phase and the doing phase–which makes it phase two–a phase that is in my opinion the least well known, and least respected, and most suspected, but it’s important, and it’s poised for a comeback, and it’s worth thinking about.

In phase two, which is hard to name, you go from idea to endeavor. And to bridge that chasm you do a certain kind of applied abstraction, or practical dreaming, or ethical scheming. A spiritual machination, maybe. You continue the generative feeling of the creative thinking mode that started the whole thing and produced the wondrous idea you’re working with, but you begin to arc that generation towards your actual physical, local, empirically-confirmed environment with its tangible stuff and laws and real people and moods and everything.

You start by asking my favorite kinds of questions: “OK, about this new idea. If we did this, just what would it look like?” Or, “Imagine we did this–how would it feel?” Etc. The answers usually come in little pieces that you build slowly into a larger picture that becomes clearer and clearer and more palpable and more real.

And as it becomes clearer and clearer, look out. Experience teaches me that this is the place where people start to get nervous. The idea was no threat as long as it was just a crazy idea. But now it’s growing into reality–particularly if you’re doing a good job of answering the “what would it look like” questions–and it’s starting to bump into people’s assumptions about life. It’s amazing how easily the defensive mechanisms are triggered in this regard–as soon as the slightest whiff of palpable novelty is intuited, up go the hackles. Why? Who knows–the imagined thing could change the existing power dynamic, we could be asked to do something we’re not good at, the things we think we care about might suffer, someone might say we’re incompetent, it might take more energy than we currently choose to expend, it might put us out of a job, etc.

Usually you don’t even know what is so threatening about the idea. Sometimes the toes being stepped on are so buried in the sand that the articulated objection spurred by them seems disconnected and comes across as irrational. Did I say sometimes? It might be more than sometimes. I’m not attacking this quality of self-preservation (see Kegan and Lahey’s Immunity to Change for an examination of it and a praise of it and a way to work through it), I’m just noting that this is where it comes in.

In any event, after this nervous and visceral, slightly animalistic reaction (which happens to us all, I might note, me as much as anyone), this part of phase two often salvages itself by what David Perkins calls “bracketing,” or asking people to put aside objections to just float along in the happy land of possibilities for a bit longer. This simple move is surprisingly effective–who wouldn’t ride with Willy Wonka on the boat a bit just to see what happens? It’s also akin to the magical cape of the bullfighter. “I’m not going to argue with you about that thing you think,” you’re saying. “It might be right, who knows. I’m just asking you to imagine this very interesting thing over here . . .” Wave of cape. Bracketing comes in handy: without it you can’t keep going.

Keep going, that is, to the bricolage stage, where another fun thing happens: you start to look for ways to interweave reality and your idea. Outlets to plug your idea into; bits of spare fabric in which to clothe it. You ask “What do we have lying around that might be put to use? What existing knowledge, procedures, resources, ideas, experiences?”

Here to my eternal delight we get to have a Rumpelstiltskin moment and to transform mundane things into nifty things. Nifty because they buttress your new idea. Here we find resources forgotten, ideas never hatched, people’s skills untapped, cheap back-door strategies, etc. And we see how we can put them to use. It’s as if the unappreciated constellations reform themselves into new provocative shapes right on the faded star map and right in front of our eyes. This transmutation, repurposing, reuse, resuscitation, re-constellation of old stuff is just fun–addictive really–it might even be the main reason people ever want to do new things. Why? Maybe because it means the world is generative, restorative, salvageable; that there’s eternal capacity for creativity, growth, development. That we’re not actually after all trapped, doomed, predetermined, constrained, and locked in a pit of inescapable despair. Maybe because if you can re-associate the stuff around you, it means you’re alive. I’m not sure.

Anyhow, the end of phase two is marked by another particular kind of question that I love. This is the classic “What’s the first step?” Or the “What achievable thing can we practically do, now?” Key here for me is the now part–that is, doing that accessible thing right then. There does seem to be a kind of clock ticking. And there is the sense that if you don’t act, that bracket that temporarily held back all the objections to the idea will start to lose structural integrity like Star Trek shields, and will no longer be able to fend off the glittering blob of worry pressing in through the windows and under the doors.

But I won’t follow that thought, because here we are at the end of phase two. Of course once you do something, even just the first accessible step, you’re technically in phase three, doing, which I said I wouldn’t talk about. So ends my blog post: think about this phase the next time you set about doing something new, and see if you can’t see it at play.