Agile permaculture: an introduction (part 1 of a series)

As I mentioned recently, I’ve been wanting to talk about Agile software development methodologies and how they relate to permaculture – Agile permaculture for short – for years and years and years, and it finally seems like time to do so.

Over on Making Permaculture Stronger, Dan is making an inquiry into permaculture design processes, and how much design is actually done up front vs emerging as you go. Turns out, while the books and classes tend to say “design up front” as the official process, in reality people tend to start implementing before the design is finalised, and allow the final stages to emerge after the first steps are already in place.

The reasons for this are obvious when you think about it: a permaculture property (be it a farm or a backyard or a community garden) is a complex system with many interacting parts, not least of which are the humans that use it. Over time, different issues may arise: a particularly dry summer, a change in the price or availability of materials, a new member of the household, an injury that stops you climbing ladders… and so, your plans that you made up at the start may need to change.

On top of that, there’s the fact that we can sometimes be paralysed by choice and find it hard to really make decisions about what we want. It’s natural to say, “Let’s rough in the outlines and see how it looks, then decide on the detail later.” Or, “Let’s start with the veggie beds outside the back door, and think about the back paddock in a year or two.”

My current permaculture garden, in its earliest phase: a few square metres of veg beds outside the back door.

Several of David Holmgren’s Permaculture Principles implicitly recognise the continuous nature of creating a permaculture system:

Observe and interact

Apply self-regulation and accept feedback

Creatively use and respond to change

Yet there’s tension here too. I’ve heard people say that it’s good to observe a landscape for at least a year, to see it in every season, before breaking ground. In a climate like the one I live in in southern Australia, you could even argue to extend that through at least one strong El Niño cycle – perhaps five years – to understand the effect of droughts and flooding rains on the land.

But at the same time, we want to obtain a yield. We have to eat, and it seems silly to hold off on planting a few veggies until we have a complete understanding of our local ecosystem under every environmental condition, as well as a perfect master plan for our lives in which nothing will ever change.

My background is in software development, which is another field where people try to develop and maintain complex systems over many years, all while dealing with shifting goals and changing contexts. The accelerating pace of technological change, especially since the Internet became big, has made the software community think hard about how best to design and implement systems that can deal with all this.

Merry Christmas, here’s a pile of floppy disks

Reader, if you are my age or older, you probably remember buying software from a physical shop. It came in a shrinkwrapped cardboard box, and when you opened the box, there would be a manual printed on paper, and a stack of floppy disks. Windows 95, for instance, came on thirteen 3.5″ floppy disks, and was released in August 1995.

The major version of Windows before that was Windows 3.1, released in April 1992 – three years and four months earlier. This was a typical release cycle for shrinkwrap software: 3 years, give or take a bit, between major updates, with small bugfix releases in between.

Shrinkwrap software was primarily sold for the PC market from the 1980s to the early 2000s. The other two major types of software in the pre-Internet age were custom applications designed for a single customer (for instance, a rocket control system built for NASA) or turnkey commercial software sold to big businesses, such as payroll or inventory management software, which usually came with expensive consulting and support contracts to integrate and customise it for each enterprise’s needs.

Welcome to the waterfall

In each of these types of software, the lead time from concept to delivery was usually on the order of years, and it was developed by sizeable teams made up of programmers, designers, project managers, business analysts, testers, documentation writers, and many others. Software represented a major investment, so you’d want to get it right. In aid of this, companies developed various processes to make sure that the software projects ran smoothly.

The most popular of these was the Waterfall Model, introduced to the software development world around 1970. It consists of steps which flow, one into the next, like a series of rapids down a river. At each step a deliverable is produced and signed off on, and then you’d shoot over the rapids into the next phase, with no return possible.
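The strict one-way flow of the waterfall can be sketched as straight-line code: each phase hands its deliverable to the next, and there is no path back upstream. This is just an illustrative sketch – the phase functions are hypothetical placeholders, not any real methodology’s API.

```python
# A strict waterfall as straight-line code: each phase must finish and
# hand its deliverable to the next, and there is no way to loop back.
# The phase functions below are hypothetical stand-ins that simply
# record which stage has been completed.

def waterfall(idea):
    requirements = gather_requirements(idea)   # sign off, then...
    design = produce_design(requirements)      # ...over the rapids...
    implementation = build(design)
    tested = verify(implementation)
    return deliver(tested)                     # years after we started

def gather_requirements(x): return x + ["requirements"]
def produce_design(x):      return x + ["design"]
def build(x):               return x + ["implementation"]
def verify(x):              return x + ["testing"]
def deliver(x):             return x + ["delivery"]

print(waterfall(["idea"]))
# ['idea', 'requirements', 'design', 'implementation', 'testing', 'delivery']
```

Notice there is no `if` or `while` anywhere: once a phase returns, its output is frozen and the next phase begins.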

There were plenty of cracks showing in this system when I was at university in the early 1990s. A number of variations were suggested. The first was that there should be an opportunity to loop back to an earlier step of the waterfall if things aren’t working out.

The waterfall model is sometimes drawn with the possibility of backtracking a step if needed (source)

“Rapid prototyping” was another popular idea, especially suited to designing software with graphical user interfaces. The software developers would build rough models of the software’s interface to get the customer’s feedback, then throw them away before starting on the real thing. (Of course, the temptation was to continue work on the prototype rather than throw it away, which often led to software with poor foundations.)

My software engineering lecturer seemed pretty taken with something called the Spiral model, in which the project loops around and around with a series of increasingly mature prototypes, building up a library of supporting documentation as you go, until the very end where you go through what look quite like the waterfall steps to bring the project to final delivery.

A series of prototypes followed by waterfall-like final steps.

Governments and large organisations tended to use even more complex Waterfall-based methodologies, sending staff on training courses to get certified in their use. I remember seeing posters with complicated flow diagrams on project managers’ cubicle walls, showing the process their organisation favoured, along with Gantt charts to show when all the stages would occur.

What’s wrong with waterfall?

The most fundamental feature of the waterfall model is that you finish each step before moving on to the next. Most importantly, the requirements (step 1 of the waterfall) and design (step 2) need to be finalised and signed off on before moving on to implementation.

If you’re working to a finite timescale, you’ll soon find that the more time you spend on the early stages, the less time you’ll have for implementation and testing. There’s always a trade-off: skimp on the early stages and reduce quality through poor design, or spend so much time on them that you have no time left to build a solid product.

To think of this in permaculture terms, imagine the following scenario:

You’re a professional permaculture designer. You’re called to see a client, who says they want their newly-purchased acreage to be producing 80% of their nutritional needs by August 2020, three years from now. Most importantly, you won’t get paid unless it succeeds.

You’ll interview the client and write up their needs, along with a site analysis, which they’ll sign off on. Based on this – and without any further questions or clarifications – you need to produce a design. The client will sign off on that in turn, then they’ll hand over to a team of WWOOFers to build it. You don’t get to see the site again, or have any communication with the workers, until the three years are up.

How do you make sure you’ll get paid when August 2020 rolls around?

The longer you spend observing, analysing, and writing the most intricately detailed design documents, the less time the implementers actually have to build soil, plant trees, or integrate animal systems and see them start to produce a yield before your time is up.

On top of that, the strict hand-off between the designer and the implementers means there’s a natural antagonism, in which each party wants to spend as much time as possible on their stage of the work, and will tend to blame the other if things go wrong.

If you can foresee these tensions, you’ll realise there’s a strong risk that the project will fail. To protect yourself, first of all you’ll make sure that the plan is so simple and straightforward that the implementers can’t muck it up. Don’t put in anything weird or new – just stick with what works.

Next, you’ll want to set up a contract with lots of arse-covering clauses saying that if implementation doesn’t perfectly match what you specified, or there’s some unforeseen circumstance, it’s not your fault. The client, of course, has exactly the opposite views. Hope you’ve got a good contract lawyer!

I don’t know of any permaculture projects that are quite this dysfunctional, in this particular way (though if you have any stories, leave a comment!). Nevertheless, it’s clear that if you followed the strict linear process laid out in many permaculture books and courses, this is what you’d end up with.

In summary, the problems of a strict waterfall methodology are:

Fundamental uncertainty: we live in a changing world and we have imperfect knowledge.

Limited time for analysis and design: we need to finish designing quickly so we can start implementing.

Risk of errors: we have to call the plan “finished” at some point, but it might still be wrong.

Schedule overruns: the more time we spend designing, the more we push back the start of implementation. The less time we spend designing, the more time we spend dealing with problems. Either way we delay our yield.

The blame game: “Your design is wrong!” “No, your implementation is wrong!” (This can happen even inside one person’s head.)

Conservatism: to avoid errors, schedule overruns and the blame game, we stick to tried-and-true solutions rather than innovative ones.

No way back: if the plan is wrong, and the implementation fails, we can only throw it away and start over again.

And, at the heart of it all:

Cognitive dissonance: we know there’s something wrong with the process, but we try and fool ourselves it’ll work anyway.

Is there a better way?

I’ve probably talked for long enough now, so I’m going to leave you with a teaser for the next post.

In 2001, a group of software developers who were pushing back against the Waterfall model got together and produced a manifesto for a new way of developing software:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

This was the Agile Manifesto, and it changed the way software was developed. More in the next episode!



Comments

Thanks Alex – great to see these ideas taking shape in a form that can be shared! I look forward to the next instalments…

In relation to the Dave Jacke sequence noted above, I’d like to add a little clarification. This is not a response or critique of what you’ve written, but for the benefit of others who read the blog. I’m wanting to tease out a distinction that I’ve often heard blurred or plain misunderstood – especially by people who haven’t explored Dave’s process (yes, I know it’s not Dave’s – he learned it at The Conway School of Landscape Design [http://www.csld.edu/] – but let’s call it Dave’s because it is associated with him in permaculture circles).

In my travels, I’ve repeatedly noticed people using the terms Design Concept and Concept Design interchangeably, but they are different in significant ways. Let Dave say it…

“The first stage of the design phase is the formation of the design concept. The design concept is the ‘big idea’ or organizing notion of the whole design for our site. Our goals statement tells us our mission, and our base map and site analysis and assessment tell us the context within which we will achieve that mission. The design concept defines our vision for achieving that mission in that specific context in its most essential or fundamental aspect. Ideally, all the design details flow from this vision and harmonise with it, support it, and manifest it” (EFG Vol I p. 233)

So Dave uses Design Concept to mean a few-sentence summary of the goals possible on the specific site. Other people use the term Concept Design where Dave uses the term Schematic Design – meaning a ‘bubble diagram’ level of assembling elements in a rough arrangement. Schematic Designs are quick and general; it’s easy to generate multiple schematic designs where the elements are arranged in many different ways. These can be presented to a client without huge investment in creating detailed, to-scale representations.

With my very limited knowledge of Agile, I’m wondering whether Schematic Design is the point in the current Waterfallesque process at which permaculture could adopt some ‘Agile attitude’ and start implementing and testing parts of the design with the customer, while simultaneously working on a ‘deeper’ round of detailed designs?

For instance, there may be an obvious place for a dam to be located. It can be built while we are still deciding exactly what to do with the water…

I’m wondering whether Schematic Design is the point in the current Waterfallesque process at which permaculture could adopt some ‘Agile attitude’ and start implementing and testing parts of the design with the customer, while simultaneously working on a ‘deeper’ round of detailed designs?

That certainly wouldn’t hurt, but Agile is actually a far more profound mind-shift than that! It’s going to take a few posts to tease it out though, so here’s a spoiler: in Extreme Programming, coding (i.e. implementing) is at the very core of everything you do. As much as possible, you do all other activities through the lens of implementing. So for instance, you “communicate in code” by making the design as transparent and readable as possible in the implementation, you judge the success of your work by creating automated tests alongside (or ideally before) the thing itself (rather than seeking feedback after it’s all finished), and you try out design ideas by trialling an actual implementation. XP is sometimes mockingly referred to as “Ready, Fire, Aim!” … and yet it works.
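The test-first idea mentioned above can be sketched in a few lines of Python. Everything here is a made-up illustration – `frost_safe_planting_date` is a hypothetical function invented for this example – but it shows the shape of the XP practice: the test states the expected behaviour before the implementation exists.

```python
import datetime

# Test-first sketch: the test is written before the code it checks,
# and it doubles as readable documentation of the intended behaviour.
# frost_safe_planting_date is a hypothetical example function.

def test_frost_safe_planting_date():
    # The test states the expected behaviour up front...
    last_frost = datetime.date(2017, 10, 15)
    assert frost_safe_planting_date(last_frost) == datetime.date(2017, 10, 29)

# ...and only then do we write the simplest implementation that passes.
def frost_safe_planting_date(last_frost, buffer_days=14):
    """Wait a safety margin after the average last frost before planting out."""
    return last_frost + datetime.timedelta(days=buffer_days)

test_frost_safe_planting_date()  # passes silently
```

Because the test exists alongside the code, any later change that breaks the expected behaviour is caught immediately, rather than discovered by a tester months down the track.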

I recently saw an exhibition at the Art Gallery of NSW (I think) called “not quite straight”. It curated many of the permie-like buildings erected in the Nimbin/Byron Bay area in the 70’s through early 90’s. The photos documented houses that grew like topsy over many years and reflected a long tradition of building based on ever-changing goals, material availability and labour. The exhibition curator, an architect, observed that these buildings were a marked departure from previous approaches where a house was ‘conceived’, a plan executed and a building completed.

This article reminded me of the design principles in these buildings. Nature never conceives in a vacuum. Everything is an adaptation to that which went before. Only the Abrahamic ‘god’ ever started with a canvas consisting of nothing but ‘the word’. From a design perspective, nature is constrained by the past, and the energetic limitations of the future. A successful adaptation has to be quick to survive. Designs with long payback periods will fail despite being better in the long run.

In computing terms this is often referred to as a system characterised by the greedy heuristic. It is often sub-optimal, but it succeeds quickly and satisfices. In my opinion, it would be better to articulate this design principle in the way we teach permaculture, rather than to disingenuously advocate the more formal and abstracted notions of ‘performative permaculture’. It would be more honest, more useful and better aligned with the principles of bio-mimicry.
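A classic small illustration of a greedy heuristic (not from the comment above, just a standard textbook example) is making change by always grabbing the biggest coin that fits: it finishes fast and usually does well, but with awkward denominations it settles for a sub-optimal answer.

```python
# Greedy heuristic sketch: at each step take the biggest coin that fits.
# Fast and usually "good enough", but not guaranteed to be optimal.

def greedy_change(amount, coins):
    """Make change using the largest coin possible at each step."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

# With denominations 25, 10 and 1, greedy change for 30 uses six coins...
print(greedy_change(30, [25, 10, 1]))  # [25, 1, 1, 1, 1, 1]
# ...while the optimum is three: [10, 10, 10]. The greedy answer arrives
# quickly and satisfices, much like an adaptation that must pay off fast.
```

The parallel to the comment above: an exhaustive search for the perfect arrangement may never pay back its cost, while the quick, locally sensible choice survives.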

This is a fun and useful new project you have embarked on! Look forward to the next post 🙂 As you know we use agile at the open food network and I’ve often thought about learning for permaculture. Kirsten and I are about to embark on a complex broad-acre farming project in NE Victoria and we will be bringing our agile brains to the task. (PS found this through erinaceous at reddit)

I was revisiting Nikos Salingaros’ stuff today and thought I’d share it with you. He wrote this odd little piece on Agile and Resilient Architecture here http://www.metropolismag.com/design/future-architecture-must-be-agile/ . If you’re not familiar with Salingaros you should check him out. He’s one of Alexander’s associates and collaborators. He’s got this slightly out-there but interesting course that’s largely about Alexander’s design methods. https://www.youtube.com/watch?v=ZXbaE-XP5Cw&t=19s It’s a good resource, especially if you’re intimidated by the sheer weight of Alexander’s Nature of Order series.

As a permaculturist it’s great to hear more about Agile Software Development! I’ve never really resonated with the design processes I’ve previously encountered, sorta forged my own way forward. It seems like we’re headed somewhere similar.

One thing I have an issue with is a question of who is doing the “permaculture programming”? Say a professional designer swoops in, does their thing, and leaves – isn’t that like leaving the end user to finish up the programming? Wouldn’t it be better to teach the end user to do the “permaculture programming” themselves? Sort of a question of the relationship of the designer to the space…

