
New submitter Sekrimo writes "This article discusses an interesting advantage to writing documentation. While the author acknowledges that developers often write documentation so that others may better understand their code, he claims documenting can also be a useful way to find bugs before they ever become an issue. Taking the time to write this documentation helps to ensure that you've thought through every aspect of your program fully, and cleared up any issues that may arise."

The documentation itself is probably not the important bit. What a lot of programmers seem to get wrong is the ordering of the thinking and coding steps. Writing documentation first means that you have to do the thinking before the coding, and that eliminates a whole load of problems.

Too many developers I know think that you can slap something together (calling it 'being creative' or 'innovative'), and they fail to do a professional job, which ends up requiring massive amounts of maintenance, if it works properly at all. Those who reall

I read this as 'When I don't know what I'm doing, I hack at it until I get something working and then I document what I did.' Fair enough. At that point you are documenting it at a fairly detailed level (hopefully). This happens when software development is done as 'Art'. When software development is happening as 'Science', odds are you can at least outline your intent and design before you start coding the solution.

The theory was, write the documentation, then code to the documentation.

In practice, that isn't sufficient to reduce bugs significantly, for several reasons.

1. As you develop something, you find that "getting from A to B" sometimes requires going via D instead of C;
2. Other times, you realize that the documentation doesn't completely capture the requirements, and you need to visit both C and D, and maybe Z;
3. Still other times, you realize that A is entirely superfluous;
4. "Can you add/change this feature?"

The initial specification should never be too specific about implementation details - that's a mistake that too many people fall for, going with the illusion that they actually can nail down every detail of a non-trivial problem and just throw the spec at "code monkeys." They don't understand that a specification should only say what, not how. Writing "code documentation" before writing the code is writing the "how".

So they can end up with something that meets that spec, but doesn't work as intended or is just flat-out wrong.

Unfortunately, code, then document (when and if you get around to it) is the reality because, unlike theory, reality is messy.

Functional specs are very useful (if done even halfway right). Technical specs are a waste of time unless you assume by default that you're dealing with incompetents, in which case you're better off saving yourself time, money and aggravation by hiring a better developer.

So I do write a lot of functional specs. Even now, in an agile environment, with HUGE time pressure and multi-million penalties for delivering late - just finishing up my final 40 pages (90 pages total, in this 4 week sprint). Why? Because not doing so will make the project much later. Good architecture (system design) specs will make the project about 20% more likely to deliver on time (citation if I can find it again) even if the programmers don't follow the guidelines (interesting, right?). This matches with my experience: if you write decent designs, you are more likely to find the pitfalls before they can bite you in the ass. If you cover all the bases and make sure the business has provided for all scenarios before you get there, your project will run smoother.

So docs may not prevent all the bugs. But they do stop a lot of nasty stuff before it gets to the stage where it turns into a defect.

So don't get too specific about the implementation. That is what comments in the code are for (well, I suppose they are a form of documentation as well). The design docs help you eliminate the most dangerous type of bug, "the logic flaw"; almost any other type of problem can be patched. If you get the basic assumptions wrong, you wind up throwing away all the glue.

The glue is the application. Anyone can toss together the little atomic procedures to do X or Y; knowing which ones are needed in the first place,

You're not ever supposed to get into implementation details in a spec, unless it's to describe externals - such as an existing database or table that new stuff needs to work with.

The words "MAY", "SHALL" and "NOT" should pepper the initial spec.

1. Users MAY be logged in; // logging in is optional
2. Users SHALL log in before doing X; // only logged-in users are allowed to do X
3. Users SHALL NOT remain logged in for more than Y time before being re-validated.
4. Prices SHALL be displayed per unit, and p
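A minimal sketch of how spec clauses like those might translate into runtime checks. All names here, and the 15-minute value for Y, are made up for illustration; the point is that each MAY/SHALL clause maps to a distinct check (or deliberate absence of one):

```python
import time

SESSION_TTL = 15 * 60  # "Y time" from rule 3; the 15 minutes is an assumed value


class Session:
    def __init__(self, user=None):
        # Rule 1: users MAY be logged in, so an anonymous session (user=None) is legal.
        self.user = user
        self.login_time = time.monotonic() if user else None

    def require_login(self):
        # Rule 2: users SHALL log in before doing X.
        if self.user is None:
            raise PermissionError("login required")
        # Rule 3: users SHALL NOT remain logged in for more than Y time
        # without re-validation.
        if time.monotonic() - self.login_time > SESSION_TTL:
            self.user = None
            raise PermissionError("session expired, please re-validate")


def do_x(session):
    session.require_login()
    return f"X done for {session.user}"
```

Note that the spec says nothing about *how* sessions are stored or validated; that freedom is exactly what the "what, not how" rule is protecting.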

I find if I write the documentation for a routine before I start writing it (and/or its tests), I'll simplify the interface a lot. It's all too easy to code support for different, largely irrelevant options, but if you actually have to describe how to use the code, you quickly discover it's too hard to write about them all.

This process works because I'm not typically working to a "spec" (as indeed most of us are not these days).

This process works because I'm not typically working to a "spec" (as indeed most of us are not these days).

And there's problem # 0.

People are too lazy to write a decent spec. Or, in so many cases, they don't even know how. Many have never even SEEN a proper spec. I've seen people who think that a bunch of screenshots is a spec, or that some database layout is a spec (though the latter is closer, but since it describes the actual implementation, it still fails). Or they'll spend a week with a UML tool

I thought everyone knew that documentation describes what you intended code to do, rather than what it actually does.

Just as often, while writing documentation on code I just wrote, I've thrown up my hands, thought "this is so ridiculous and embarrassing that I can't be associated with it", and gone back and re-written it to do it the right way. The act of documenting it revealed I had left too much undone, or too many situations un-handled.

Any time your documentation reads like you are describing a game of twister [wikipedia.org] you just know the code can't be worth documenting, or even keeping.

But as for finding bugs, I don't know. You may document exactly what you intended, and thought the code did, but still be wrong because of some corner case. Documenting it isn't likely to reveal all of these situations any better than the code itself.

Just as often, while writing documentation on code I just wrote, I've thrown up my hands, thought "this is so ridiculous and embarrassing that I can't be associated with it", and gone back and re-written it to do it the right way. The act of documenting it revealed I had left too much undone, or too many situations un-handled.

This is by far more common for me as well. I'll find myself describing some really brittle setup process or some common operation that seems to take a thousand steps, get embarrassed over how bad it is, and then go fix the code. More than once, I've written sophisticated automatic parameter selection code because it was easier than finishing the documentation I had started, which explained to the user how to set up parameters properly.

But as for finding bugs, I don't know. You may document exactly what you intended, and thought the code did, but still be wrong because of some corner case. Documenting it isn't likely to reveal all of these situations any better than the code itself.

Handling corner cases is accomplished by identifying what they are and specifically implementing code to handle them when necessary. In order to identify them, we must think about what the code is doing. As programmers, there are three times we do this:

- When writing the code
- When testing the code
- When writing the documentation

It happens that we tend to pick up different corner cases in each case. This happens because different corner cases require different ways of thinking to identify them, and while

Interesting sidestep: consider Literate Programming (http://en.wikipedia.org/wiki/Literate_programming). Donald E. Knuth advocates the approach of document AND code, then compile the documented code. I have always been fond of this and try to use it in all my programming.

Just comment preconditions, postconditions and mock up the pseudo-code, then extend it.

I'd absolutely love it if Eclipse and Visual Studio supported this. However, most people don't know Literate programming so I doubt it's going to be i
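As a sketch of that preconditions/postconditions-plus-pseudo-code style, here is an invented example (the merge routine is mine, not from the comment above): write the contract and the pseudo-code as comments first, then flesh out the real code underneath them.

```python
def merge_sorted(a, b):
    # Precondition: a and b are each sorted in ascending order.
    # Postcondition: returns a sorted list containing every element of a and b.
    #
    # Pseudo-code, written before the implementation:
    #   walk both lists with two indices
    #   always take the smaller head element
    #   append whatever remains when one list runs out
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    assert len(out) == len(a) + len(b)  # cheap postcondition check
    return out
```

The pseudo-code comments survive as the routine's documentation afterwards, which is exactly the payoff being described.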

I love Literate Programming in theory. In practice, I find the source code of a Literate program ugly. I'm not sure a WYSIWYG would help. I believe the concept is a good concept, and what holds it back is the lack of a modern realization of the concept that makes the LP easier to follow and easier on the eyes.

There's no question of Donald Knuth's genius. If there's anyone who has ever threatened to exceed the Shannon limit [wikipedia.org] for information density, it's him. But, aesthetically, the source of a Literate

I love Literate Programming in theory. In practice, I find the source code of a Literate program ugly. I'm not sure a WYSIWYG would help. I believe the concept is a good concept, and what holds it back is the lack of a modern realization of the concept that makes the LP easier to follow and easier on the eyes.

True. If I was working for Microsoft Research, I think this would be a nice start for a research proposal. As it is, I can only dream:)

When I was involved in writing one of the first packet switching systems in Europe (AT&T, Belgium, 1979), we found a brilliant way to fix bugs was to explain the bug (and thus the operation of the program) to someone. They didn't have to do much, just nod and look interested. Then, usually about halfway through, the hapless coder (e.g. me) would go "Oh shit"... and the listener could then leave. We called it the "tailor's dummy" approach to debugging.

The one I heard was to try explaining it to a big cardboard cutout of a person. Halfway through explaining it out loud you'll have a eureka moment, even though you'd literally be explaining it to a big piece of paper.

Tell that to the PHB that wants the code to ship YESTERDAY--bugs or no bugs!

To me, proper programming is an exercise in minimalism--get the most work you can out of the simplest and least amount of code.

Then comments/documentation isn't so critical.

But that takes too long and time is money to the average PHB. So you get apps that barely work and must be 'updated' to work better and to be an income stream for the company that put said software out.

To me, proper programming is an exercise in minimalism--get the most work you can out of the simplest and least amount of code.

You're really on to something there, but there's at least one other part to this minimalism. You shouldn't be asked to implement stuff nobody is going to use. (A little different than your last sentence.) I mean, at multiple companies I've been asked to implement something that either I realized at the time or found out later nobody actually used. (Or, worse, would actually be dangerous to implement.) What's annoying is that either a PHB or "person who thinks they're a project manager" decides we absolutely have

Writing is a quite different cognitive activity than "thinking". Writing about things provides distance and helps overcome the limitations of working memory that can prevent you from seeing the same problem by just "thinking". Writing documentation produces very different results than just thinking about the code.

Writing is a quite different cognitive activity than "thinking". Writing about things provides distance and helps overcome the limitations of working memory that can prevent you from seeing the same problem by just "thinking". Writing documentation produces very different results than just thinking about the code.

Then how about this?
Writing (different kinds of) documentation forces you to think in different ways. *Forces* as in you can't cop out mid-sentence without noticing.

Pretty much. And, while he was going on about subtle concurrency errors, did anyone notice just how awful his sliding window filter actually was?

The code was simple enough, sure, but you could have gone much faster if you kept a running total in the class itself. And, instead of sliding the whole array over by one, just replace the element at array[idx % SIZE]. (You'd have to add a persistent index to the class too.) Those two changes would speed up the code by a factor of N, where N is the length of th
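The original filter isn't shown here, but the described optimization -- keep a running total in the class, and overwrite array[idx % SIZE] with a persistent index instead of sliding the whole array -- might look like this (class and method names are invented):

```python
class SlidingAverage:
    """Sliding-window average in O(1) per sample instead of O(N):
    keep a running total and overwrite one slot per update rather
    than shifting every element over by one."""

    def __init__(self, size):
        self.buf = [0.0] * size
        self.total = 0.0   # running total kept in the class itself
        self.idx = 0       # the persistent index the comment mentions
        self.count = 0     # samples seen, until the buffer fills

    def add(self, sample):
        slot = self.idx % len(self.buf)
        # Drop the value falling out of the window, add the new one.
        self.total += sample - self.buf[slot]
        self.buf[slot] = sample
        self.idx += 1
        self.count = min(self.count + 1, len(self.buf))
        return self.total / self.count
```

Each update touches one array slot and does one add and one subtract, which is where the factor-of-N speedup over shift-everything comes from.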

That's true. But the process of writing it down is often the first time that a coder thinks of what the code is supposed to do. If code comes without documentation the programmer might not have thought about what the code is supposed to do at all.

That's why some of us were taught to write the interface documentation before starting on the code.

If you really filled 30 positions in 2 months, your problem is likely in hiring shitty programmers. At most companies I've worked at, we made offers to 10-20% of the people who interviewed. Unless you're doing 5+ candidates a day, and all offers are accepted, you're murdering that rate. Some of that may be better offers or more efficiency, but it sounds like you're hiring a lot of mediocre people to fill seats.

Explaining your work is a great way to demonstrate that you actually understand it. As the article illustrates, perhaps the most critical part is going back and verifying that the code matches the explanation.

Sit around trying to make sense of the requirement
Write some code that they think does what the requester really meant
Spend most of their time fighting the code management system and getting it to build cleanly
Pull an all-nighter just before the deadline so it doesn't crash when fed correct input
Toss it "over the wall" to the integration team
Refuse to answer any questions about it as they're now "too busy" with the next project

Have a nice feeling of satisfaction that they never have to do support on old

I start with code, because I think in code. Writing out a couple lines of pseudocode explains much clearer to me what I intend to do. Then I add comments to explain why I am doing things and then I flesh out the code from pseudo- to real code.

Explaining your work is a great way to demonstrate that you actually understand it.

My standard development process is: [unit-test each function before writing the next one] My rationale is precisely that: I'm not really sure I know what I'm doing until I've described it, then figured out how my idea might fail.
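A sketch of that describe-first, then test, then implement rhythm. The example function and its behaviour are invented for illustration (the claim about local-part case sensitivity is my assumption about email handling, not something from the comment):

```python
import unittest


def normalize_email(addr):
    """Lower-case the domain part only. Writing this sentence first
    forced a decision: the local part stays untouched, since mail
    systems may treat it as case-sensitive."""
    local, _, domain = addr.partition("@")
    return f"{local}@{domain.lower()}"


class TestNormalizeEmail(unittest.TestCase):
    # Written before the function body existed: describing the
    # behaviour first is what surfaced the local-part question.
    def test_domain_lowercased(self):
        self.assertEqual(normalize_email("Bob@EXAMPLE.COM"), "Bob@example.com")

    def test_local_part_untouched(self):
        self.assertEqual(normalize_email("Bob@example.com"), "Bob@example.com")
```

The docstring is the "describe it" step; the tests are the "figure out how my idea might fail" step, and only then does the body get written.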

Forgive my ignorance, but doesn't everyone do this?

Writing unit tests (code) is not the same thing as writing prose. I find it doesn't trigger the same thought processes at all. For one thing, doing it like you describe, you lose focus on anything bigger than the function: the classes and the cooperating groups of classes.

For many (but not all) things, I try to write the documentation first, starting at a high level and drilling down. In fact, I've been known to mock up several prototypes of tough problems solely in English. It really does force me to think out the architecture. It doesn't, however, prevent magical thinking about how the low-level implementation will actually work out, and so isn't foolproof. Quite often, though, I'm able to skeleton-out my code in comments first, and then hanging the actual code off the

Explaining your work is a great way to demonstrate that you actually understand it. As the article illustrates, perhaps the most critical part is going back and verifying that the code matches the explanation.

Explaining your work by writing it down is fine, but if no one reads what you have written, it isn't as useful anymore. Hence, it is not the documentation part but rather the reviewing part that helps. What really does the job is code reviewing, rather than documentation. If you document it before reviewing, that's even better.

The key thing here is that the person who wrote the code isn't likely to find bugs in it, as that person "knows" how the code works. The important thing is to make someone else l

I have caught a number of problems documenting my code. When you describe what it is supposed to do and you realize that it really doesn't do that, then you can correct as such. I would say I have found more incorrect behavior than show-stopping bugs. However, if we had shipped the product with the code the way it was, we would have probably called it a bug, so it is probably the same either way.

If you are trying to save time you can always use a DbC system for some of your "documentation" of what the function is intended to do and have that become an actual error check on your code. You can even use the contracts to automatically generate unit tests for you. It's also harder for documentation to fall out of sync with code since it is part of the testing and flags an error if it isn't kept up to date.
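In a language without built-in contracts, a bare-bones version of that design-by-contract idea can be faked with a decorator. This is a hand-rolled sketch, not any particular DbC library; real systems (Eiffel, or contract libraries for other languages) do much more:

```python
import functools


def contract(pre=None, post=None):
    """Minimal design-by-contract decorator: the documented intent
    becomes an executable check, so it can't silently fall out of
    sync with the code."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return inner
    return wrap


@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def mean_abs(xs):
    """Mean absolute value of a non-empty sequence."""
    return sum(abs(x) for x in xs) / len(xs)
```

The contract doubles as documentation of what the function expects and guarantees, and it flags an error the moment the code drifts away from it.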

Oh, come on, Literate Programming has been around for 30 years! Knuth made exactly this argument in his 1984 essay entitled, surprisingly enough, "Literate Programming!" Wikipedia asserts in its "Literate Programming" entry: "According to Knuth, literate programming provides for higher-quality programs, since it forces programmers to explicitly state the thoughts behind the program, making poorly thought-out design decisions more obvious. Knuth also claims that literate programming provides a first-rate documentation system, which is not an add-on, but is grown naturally in the process of exposition of one's thoughts during a program creation. The resulting documentation allows authors to restart their own thought processes at any later time, and allows other programmers to understand the construction of the program more easily."

Congratulations to Slashdot for posting about some kid rediscovering an ancient technology by a revered master of the craft. What's next? "Snot-nosed highschooler discovers GOTO is a bad idea?"

Actually, he's talking about something entirely different to literate programming. LP advocates writing a single document that contains both documentation and code, whereas what this guy's actually advocating is basically finding a reason to take a second pass at looking at the code, which is what happens when you document it separately. You'd get the same benefit from, for example, having a policy of self-reviewing code after completion, or in TDD the refactor phase of the red-green-refactor cycle.

Manager, at the beginning of a project: "Forget the documentation! Just get it to run!"

Manager, at the end of a project: "Where's the documentation! You were lazy, and didn't write any!"

Documentation is at the ass-end of a project. The Manager's Manager wants to see something running. He doesn't accept paper as a currency. So documentation will always get low priority. And that ass-end will be hanging out and swinging in the breeze.

Someone could do a scientific study that proves that documentation cures cancer.

There are two types of manager (and indeed two types of company) out there. The kind you describe, where everything needs to be done yesterday, damn the protocol, is one kind. Ones where everyone sticks dogmatically to bureaucracy and obsesses with "project gateways" is the other.

When you're stuck with the former, the common reaction is "for god's sake, if they won't let me follow procedure properly the code won't be any good at all!". The reaction to the latter is usually "for god's sake, if they obsess ov

It's easy to simply dismiss this as poor management (and it is), but solving the problem is often a bit trickier than replacing the manager. The biggest problem is that a really large segment of the software industry does not know how to connect business goals with software development activities. It's not just the managers who are clueless, but also the programmers. Many programmers do not consider business goals further than, "If I do things *right* then it will be better in the long run" (for various

In reality, on modern platforms that change with every half-year update, you first write it to see if it can be written. Then you ship it.

It's sad. But wtf, in which gigs can you actually document what you're going to do beforehand? Where you'd have the specs at the start of the project -- "this is what we want and we checked it can be done"?

I suppose with financial backends etc db stuff you'd know that.

do you know what kind of "for end user" documentation I really hate though? the lying kind. press here for more i

I've always wanted to write "tests" in a literate way. For me tests are a way to document the behaviour that I expect and various assumptions along the way. If the behaviour changes, or if I do something that disregards the assumptions I made, I want the tests to fail.

I feel that I *should* be able to organize this in a literate way. If I want, I can even write human language documentation in the same place. The testing framework provides the same purpose that Web (or its equivalent) would -- it allows

The problem that I tend to run into is that I have too much coupling between the implementation and the tests. As I refactor code, the discussion I make with the tests drifts and I end up having to refactor the tests considerably more than the production code. To combat this, I have made a practice of separating my unit tests from my larger scale behavioural tests, but I still end up with a lot of churn whenever I'm refactoring.

Have you ever solved this problem?

It could be the problem is this: if you want proper code coverage, you typically need to write quite a few more unit tests than code. However, with literate programming, it seems you only have to write one comment for each piece of code. So I think the number of comments you have to write is significantly smaller than the test cases, but I'm not 100% sure.

Titled They Write the Right Stuff [fastcompany.com], it looks at the coding practices at the company that wrote the control software for the space shuttles. If you want to know about documentation as a bug-finding tool, this is pretty much the holy grail.

Consider these stats: the last three versions of the program -- each 420,000 lines long -- had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
...
Take the upgrade of the software to permit the shuttle to navigate with Global Positioning Satellites, a change that involves just 1.5% of the program, or 6,366 lines of code. The specs for that one change run 2,500 pages, a volume thicker than a phone book.

It really depends on how many passes over the specs they made, and how separable the sections were. If you have to hold all 2500 pages in your head at the same time in order to spot deep inconsistencies, few (if any) humans will ever succeed. But, if the documentation was well segmented, such that you only needed to hold couple dozen pages of knowledge in your head at one time to understand a section, then you stand a chance, with sufficient reviews, to comb all the nits out.

There is an old joke: "The definition of promiscuous is somebody who has more sex than you do". From reading TFA and some of the comments on slashdot, I get the feeling that the definition of documentation is equally subjective and self-serving for developers. Some developers think that writing documentation means adding comments to code. Others feel it involves writing Javadoc/Doxygen-style comments at the start of every class and method, and then generating HTML from that. Yet others feel that documentation hasn't been written unless it involves an architectural description.

When I am working on my own open-source projects, I feel that documentation isn't complete until I have written a few hundred pages of text that aim to be cover most/all of the following: (1) API reference guide, (2) programming tutorial, (3) user guide, (4) architectural guide, and (5) suggestions for "future work" that I hope other people will volunteer to do. Yes, I recognise that I am a bit extreme in the amount of effort I put into writing documentation. However, it does enable me to elaborate on the thesis of TFA: attempting to write such a comprehensive amount of documentation often highlights not just coding bugs, but also architectural flaws. This causes me to work in an iterative manner. I implement a first draft version of the code. Then I start documenting it, and when I encounter a part of the software that is difficult to explain, I realise that I need to re-architect the code base a bit. So I do that, and then get back to writing documentation, which causes me to notice another difficult-to-explain issue with the code. Working in this manner is slow, and I suspect it wouldn't work in a business with time-to-market pressures, but I find it gives excellent results in my own, non-time-pressured open-source projects. I touched on this issue [config4star.org] in the documentation for one of my open-source projects.

Are you sure you're really receiver-focused when you write all that stuff? Most people don't want to read that much text to use, say, a configuration parser. If it takes people 10 hours to dig through your documentation and 1 hour to actually write the code, you're probably not doing it right. Sometimes less is more.

Reference documentation is a bit different because people to some extent can just go directly to what they need. But in my experience, most people just want something they can copy-paste into the

You saved me the time of writing a similar post. Writing that kind of documentation makes robust, maintainable, supportable, reusable and long lived software (or FPGA design, etc). It's not a chore, it's a pleasure because it results in a polished product that people want to use. Without it you just wrote a fart in the wind: it stinks now and is gone tomorrow.

I was taught that in my first year CS studies, when we were required not only to write programs, but to clearly document the algorithms used. Are there really people that write software and do not know this? Well, I guess there must be. Supports my claim that the only real problem the human race has is too many idiots.

Ratio of CS educated programmers to random people picked off street and cleaned up: 1 to 100.

Yeah I'm making that up, but in the course of my career I've worked with hundreds of developers and I think I've only encountered 1 or 2 CS graduates in my work. I meet more of them socially than I've ever seen in the workplace doing real work (not managing the IT dept.).

This is another good documentation tool, and a way of avoiding bugs. It is surprisingly hard to do.

If you can't think of a good reasonably short and descriptive name then you don't understand the concepts as well as you should.

I only use variable names like i,j,k for loops. I use x1,x2,y1,y2 and similar names only for numeric values. This is applicable when I am implementing math algorithms. If I have a lot of similar variables differing by their last digit and I'm not doing equations, I know I am writing code that I won't be able to read later.

I tend to declare one variable per line, and describe what I am using it for as a comment. If I have a lot of variables I split them into groups, which I separate by blank lines.

I try to avoid reusing intermediate variable names, unless they are in different lexical scopes. It is fine to have similar names inside loops that do similar work, but make sure that you are not confusing concepts when reusing variable names this way.

I have been working on algorithms, and have stopped and spent an hour or more thinking about what to call the variables. I do this when I get confused. It always pays off. When you have a good descriptive name and you see it in its use context, you can actually see the mistakes before you make them.
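A tiny before/after of the naming point (both functions are invented examples):

```python
# Hard to read later: which argument is which once the equations grow,
# and a caller who swaps two of them gets no hint anything is wrong.
def bad(x1, x2, y1, y2):
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5


# The names carry the concept, so a swapped argument is visible at a
# glance, both at the definition and at every call site.
def distance(start_x, start_y, end_x, end_y):
    dx = end_x - start_x
    dy = end_y - start_y
    return (dx * dx + dy * dy) ** 0.5
```

Both compute the same thing; only the second one lets you see a mistake in its use context before you make it.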

1. When naming entities, pick the SAME names that the business people use when they talk about their domain. Do they call that thing you're working on an "XYZ Thingy"? Then that's the name you should give that entity in the code, formatted according to your conventions for names. Why? Because then your code aligns with the requirements, models, and documentation, no translation needed.

I would go further: Thinking about the language of your project can be incredibly useful and clarifying. If you can boil down your problem's concepts into a consistent and fairly precise language, then you stand a better chance of implementing without too many thinkos.

Aligning that language with the business processes is definitely important, if your software is implementing business logic. Not all problems have a business process to align with. But, they have something they interface with, so try to be

I'm currently writing up API documentation for a large code branch which was never properly documented (and wasn't written by me), but now needs merging into trunk. I've found several serious bugs in the code as a result, all from trying to explain to the client how to use the API. These bugs were actually blindingly obvious when the behaviour of the code had to be explained.

I've also found some horrible design issues, where various settings the code allowed were contradictory or meaningless, or one setti

What this guy is talking about is a do-it-yourself code review; better would be to get coworkers to review it with you. It doesn't matter what technique you used to write the code in the first place, get a couple of fresh eyeballs to read and try to understand it.

As an unrelated comment, someone who thinks extra large gray on white fonts look good shouldn't be making web sites. At least he didn't put each paragraph on a separate page.

There is simply no incentive to write bug free code or even to make a conscious effort to reduce the bugs. Given the incentive structure in most programming shops one should be amazed the bugs are as few as they are today.

Think about it, if I implement a feature that has absolutely no bugs, no problems, no one complained and it is all hunky dory all the way. How much praise will you get for it? How many of you have written in your annual self assessment, "I implemented a critical feature foo in 2009 that

if I implement a feature that has absolutely no bugs, no problems, no one complained and it is all hunky dory all the way. How much praise will you get for it?

If you don't get recognized for that you need to find a new job.

if you had a choice of a complex, difficult to test algorithm (say, using AVL tree based on two custom hash functions on a data set) to give you a 10% speed up versus a clean simpler algorithm

Unless there was a critical need for that extra 10% you would be foolish not to go with the simple approach. Cost, schedule, and reliability are almost always more important than an incremental performance improvement.

I've found that writing documentation / pseudo-code lines in comments on what a section is supposed to do is a quick way to ensure all the pieces come together; frequently you can leave those comments in afterwards to describe the following 1-N lines below them.

Documentation describes what you intend a piece of software to do. It doesn't assure that the documented piece really does that, but it can help catch design flaws if you realise that that's not the functionality you wanted.

Not code documentation, but end user documentation was my gig for a while. At one point there were more Bugzilla entries from me (entered as I tried to use and document the software) than from the whole 15 person QA team put together. It was one of the experiences that drove me into software testing and then pen testing.

More on topic for the article, my current employer implements extensive unit testing before any code is written (no really, I know lots of places say they do this, but it actually happens a

For some methods of documentation this is very true. For programmers who care about their work, it's very true.

But if you don't care about your code, you probably won't care about the documentation. In that case, I agree it's false.

If you know the documentation you just wrote is a bag of lies but you turn it in anyway, because you know that the PHB won't understand it and couldn't check whether it was true if his life depended on it, then you might as well junk the code you just wrote as well. Chances are it will break the minute you walk out the door.

I must point out that I will never apply that link in practice... having had a friend end up working somewhere I'd worked four years previously and discover my name in the documentation. Always leave the documentation in the form you'd like to have found before you did whatever it was.

I once got hired back by a place I had worked at 3 years earlier, and had to answer questions about the code I designed when I worked there before... fortunately I always document at least the basics diligently, but it still made for some sweaty moments. So yeah, it could be your friend. But it could also be you, 3 or more years later. :)

In the article someone commented on the difference between business needs and developer needs. This is why I favour doc generating systems like doxygen.

Part of the problem might be brain organisation, some people think very spatially and words get in the way. The code and fix as fast as you can does work if you are good, lucky, or fast. If so you probably do not want to doc every iteration.

Plus he mentions concurrency and the confusion it may cause for other developers re-using the code, but doesn't address the obvious divide by zero that will occur if getB is called first even in a single-threaded application.

Reality? The reality is that a design document written before coding starts is likely to never be accurate enough to perform the kind of annotation you're talking about, because as soon as coders sit down with it to actually implement stuff, they'll realise the design missed some crucial point of logic about how the application should work. And as soon as the code is demonstrated to the customer, the customer will point out misunderstandings about the design. And as soon as you start changing requirement

I'm not sure that Architecture is just about the non-functional requirements. In my opinion, the Architecture doc is about ALL the constraints on the solution from an implementation point of view, not just the technical constraints.

Your list does make sense though. Perhaps I should reconsider my opinion.

"It is clear that the documentation for an API makes a massive difference to the usability of the API. I have yet to be convinced that documentation of the code enhances the maintainability of it."

Rather than enhancing the maintainability per se, the documentation helps with letting you know what needs to be maintained. In particular, a block of code may look perfectly error-free without looking at the documentation, but you only realize something is wrong when the documentation doesn't match what the cod

I understand, but at least do this:
- Keep a change log. Even when I'm under extreme pressure I write down at least what the customer wants, and what I'm going to do about it. What did I change where?
- Keep a decision log. Every time you have to interpret a request or design spec in a certain way, write it down.