Posted
by
kdawson on Tuesday June 17, 2008 @04:09PM
from the and-don't-tell-me-to-use-emacs dept.

cconnell writes "I am working on a PhD in software engineering at Tufts University. My interests are the general principles of good software design, and I am looking for links/references on this topic. The question is: What design/architecture qualities are shared by all good software? Good software means lacking in bugs, maintainable, modifiable, scalable, etc... Please don't tell me 'use object oriented methods' or 'try extreme programming.' These answers are too narrow, since there is good software written in COBOL, and by 1000-person teams for DoD projects. I am looking for general design principles. If it helps, I am trying to build on the ideas in this article from some years back."

If you're looking for great software design principles, start with the greats: Liskov, Fowler, Martin. Go to Object Mentor's published articles [objectmentor.com], click on "Design Principles", and start reading.

This handful of articles changed the way I look at software, particularly OO software but not necessarily restricted to OO, and highlighted the importance of controlling dependencies. All of the Gang of Four design patterns use one or more of the principles laid out in these white papers. I would go so far as to

You can also end up with weird, inefficient code, because the specs are poorly written and no one is allowed to have enough oversight to realize it.

That's more of a management problem, I suppose, but I've all too often seen "glue" methods that were expanded beyond their scope because the designers of Method A and Method C were never allowed to meet, and the people who came up with glue Method B were forced into all sorts of unholy kludges to make them work with each other.

I would disagree about insulating coders from the "noise from corporate" in many situations.

If you are doing development for a small organization, say 1-500 employees, you as a programmer are not likely to have a whole lot of insight into the business rules of some department on the other side of the building. Playing operator for specs, with management relaying messages from the accountant, isn't going to help your situation.

IT has a great place as a strategic process-improvement center for most companies. Everyone uses IT resources now: accounting, shipping, sales, collections, lease/loan departments, etc. You, as a programmer, have a chance to see into the life of every department in your organization. You have the opportunity to see process inefficiencies and recommend improvements. People as a whole like the path of least resistance. Say Jim in sales is used to entering his deals into the company's sales system; then Jill from Accounting prints out the sales report and types it into the accounting software; and finally Sally gets a copy of the bill, packages up the order in the warehouse, and enters the information into the inventory system. All of these people will keep doing the exact same dual entry because that's the way they are used to doing it. But being in IT and getting to see these processes, you can see the obvious problems, the likelihood of error, and the wasted time.

But you need to get out of the IT cave and get into action with the other departments.

On the other hand, if you like coding and hate people, you can always get into a code warehouse where an absurd number of programmers do naught but code off of specs with no input, no chance to design, no chance to see the larger picture...

I agree with what you're saying but would put a more pessimistic slant on it. A programmer shouldn't take a personal, face-to-face, cube-to-cube interest in what goes on in other departments hoping to make improvements on business processes. A programmer should do that because otherwise the specs will be wrong, the design will be flawed, and the tests will be lacking.

You will be told, "There are people whose job it is to talk to all the stakeholders, understand their disparate needs, and assemble the requirements." This is correct. There are also people whose job it is to design the system, other people who are supposed to implement parts of the system, and other people who are supposed to test the system. Odds are that many of these jobs will be assigned to incompetent, clueless, and/or underworked people. Odds are that even the competent people will miss things that might cripple the design later. It improves your odds immensely if you do some reconnaissance work on your own.

Don't duplicate the work done by others (unless you figure out that a particular person is useless.) If somebody talked to the manager in group X, then you should talk to the second in charge. If someone talked to a supervisor, talk to the grunts. Read their requirements first, then go casually chat with them as if the requirements were correct. Observe their looks of panic and horror. Then start taking notes. Requirements are often collected manager-to-manager and leave out vast swaths of essential functionality.

After you feel good about the requirements, review the test plan. If anything is obviously missing, mention it. If anything looks suspiciously underspecified, chat innocently with the testers about it. "I bet testing internationalization is gonna be a bitch. How the hell do you even type right-to-left text in Windows, anyway?"

This is your job as a coder -- to have the backs of the people you work with and save them from themselves. If you're lucky, someone is doing the same thing for you. The guy who wrote the test plan might look at your issue tracker to see what features you plan on implementing. Maybe you missed something. The guy who wrote the high-level specs might look over your design docs to see if you've made any obvious mistakes with respect to deployment requirements.

A lot of people see this kind of behavior as being absolutely contrary to fundamental engineering principles. On the contrary; it is sound human engineering. It shouldn't even be a political problem. The only people who won't appreciate the feedback and correction are the ones who are consistently shown to be incompetent.

I would agree with you in large situations with more significant IT departments. But in the small-business sector (up to 500 employees) you're looking at an internal IT staff of roughly 4-12 people, likely split into Developers and Network/Support, with two or three of the positions being low/mid management (two technical supervisors and an IT manager who reports to an exec in Accounting or some other non-CIO executive). With the split between Programmers, Network Admins, and General IT/Network Support leaning

I know someone who had far worse numerical statistics than I did (as in a whole point lower GPA and around 150 points lower on the GRE) and somehow got into a better Ph.D. program (we're both going for PhDs in CS). He said that his professors told him a good deal of getting into a good Ph.D. program (perhaps this doesn't apply to just getting into one at all) was name recognition of his recommenders, and he went to a large university with lots of recognizable faculty, while I went to a small one with only

> [...]you can honestly gain about the same degree of skill as an average Ph.D. program will impart with about three months of research.

Nonsense. As a PhD student myself, I believe that the PhD program in question would have to be crap and your advisor an idiot for this to be true. If this is the case for you, you are not in the right place and/or you need a new advisor.

Consider whether you really need to subject yourself to one.

But this is a good point. You should only undertake a PhD if you find it intrinsically edifying - all other reasons are secondary.

I think the parent would be better off contacting Slashdot admins to mine the postings rather than an article thrown out there. He's asking the wrong question the wrong way for the level of work he should be doing, probably because he's got "book" experience and not 10 years of work experience. That said, you won't find many people teaching at university who would do any good answering his questions either. They may be good at their jobs, but not at managing multiple projects... the ones that are really g

To go even further down this path: abstraction and frameworks have improved my code quality and reduced my time to production. Abstracting code as much as is appropriate allows you to reuse a significant portion of your code base. And designing a framework for your applications that utilizes that abstracted functionality, allowing a modular design of the actual business logic, will greatly improve almost all projects.

The business layer should have no idea what the database is or how it works.
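One common way to get that separation is the repository pattern: the business layer depends only on an abstract interface, and the storage details live behind it. A minimal sketch (all names here are invented for illustration, not taken from the discussion):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    total: float

class OrderRepository(ABC):
    """The business layer depends only on this interface."""
    @abstractmethod
    def find(self, order_id: int) -> Order: ...
    @abstractmethod
    def save(self, order: Order) -> None: ...

class InMemoryOrderRepository(OrderRepository):
    """One possible backend; a SQL-backed class could be swapped in."""
    def __init__(self):
        self._rows = {}
    def find(self, order_id):
        return self._rows[order_id]
    def save(self, order):
        self._rows[order.order_id] = order

def apply_discount(repo: OrderRepository, order_id: int, pct: float) -> float:
    """Business logic: no idea whether 'repo' is SQL, a file, or memory."""
    order = repo.find(order_id)
    order.total *= (1 - pct)
    repo.save(order)
    return order.total
```

Swapping the in-memory backend for a database one requires no change to the business logic, which is the point of the layering.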

Excuse me, how do you find links? Asking people who are most likely to know seems a perfectly natural way to do research. Get off your high horses, people.
He didn't ask you to spoon-feed it to him, he just asked for pointers. Obviously you have nothing to offer, so why did you bother to post?
I've never seen so many asshats in my life complaining about how he does his research.

You mock it, but not only does this pass for research, it's standard business advice on innovation: seek outside your boundaries for answers. Businesses frequently ask outside experts to participate in design and innovation activities, often for only a stipend. Procter & Gamble and 3M are both renowned for this type of activity.

It's certainly not the only strategy to create or discover new ideas, but who knows when a post by J. Random Luser might contain that cool spark that sends you down a new path of thinking.

What? So... you can't ask for help in getting your PhD? My my... There are QUITE a large number of doctors out there that are apparently sharing their degrees with colleagues, friends, family, and... strangers on the internet....

I suggest you stop using the services of anyone accredited with such a degree.

I was sort of thinking this, but I was also wondering what possible value the information he got from this site could be in what should be a well-referenced work. Writing a thesis and backing it up with quotes from random people on the Internet doesn't seem like the wisest decision. Perhaps he should spend his time interviewing acknowledged experts in the field, or at least studying papers written by them. Hell, even interviewing students in his local CS department would be better than basing an argument on

> I was sort of thinking this, but I was also wondering what possible value the information he got from this site could be in what should be a well-referenced work. Writing a thesis and backing it up with quotes from random people on the Internet doesn't seem like the wisest decision.

This isn't necessarily his/her intention. The OP could just be looking for some general ideas to get going (or to rule out bad ones before proceeding). I see this as a hypothesis-generating activity, not one in which he/she'd expect to get hypothesis-validating information.

> This isn't necessarily his/her intention. The OP could just be looking for some general ideas to get going (or to rule out bad ones before proceeding). I see this as a hypothesis-generating activity, not one in which he/she'd expect to get hypothesis-validating information.

Given the researcher's CV includes teaching software engineering at Boston University for several years, and being a project lead for Lotus for many years before that, I imagine he does already have many ideas and sources already. However, I imagine Ask Slashdot could provide at least two useful things for a PhD: direct data on what the "popular view amongst the technically-minded" is about what makes software better, and a wide and easily-cast net for picking up any links or texts that are in use that he might not be aware of.

His PhD seems to be a late-career attempt to crack the big philosophical nut, rather than an early-twentysomething scratching around for an idea. So this Ask Slashdot question seems to be an attempt to search every corner for data, however unlikely, rather than a lazy lack of effort.

> Given the researcher's CV includes teaching software engineering at Boston University for several years, and being a project lead for Lotus for many years before that, I imagine he does already have many ideas and sources already. However, I imagine Ask Slashdot could provide at least two useful things for a PhD: direct data on what the "popular view amongst the technically-minded" is about what makes software better, and a wide and easily-cast net for picking up any links or texts that are in use that he might not be aware of.
> His PhD seems to be a late-career attempt to crack the big philosophical nut, rather than an early-twentysomething scratching around for an idea. So this Ask Slashdot question seems to be an attempt to search every corner for data, however unlikely, rather than a lazy lack of effort.

You sort-of made my point for me, yet implied I called the guy lazy. Not so.

My use of the term "hypothesis-generating activity" is not derogatory. In fact, I think his use of Ask Slashdot, while likely to have a low SNR, is an interesting non-scientific way to gauge what other SE-types are thinking.

As you say, he can gauge the "popular view amongst the technically minded." No one would ever cite a Slashdot discussion as scientific evidence for anything. However, he can mine the results for ideas that

Oh, FFS. Whenever you're doing any serious research, talking to your colleagues -- and yes, damn it, when you're doing software engineering, a lot of /.ers are your colleagues -- is how you form good ideas and organize your thoughts. This is true in any field. I've noticed that a lot of hotshot geeks like to imagine themselves as Lone Geniuses Bringing Great Ideas Into The World Through Sheer Brilliance And Force Of Will. Guess what? The LGBGIITWTSBAFOW approach works reasonably well for small software projects and one-off research papers. For anything bigger, such as a PhD thesis, it's a recipe for failure. Every computational tool you use in your daily work started through collaborative research.

The submitter is clearly not asking anyone to write his thesis for him. He's gathering ideas, that's all. If you have something useful to contribute, speak up. The fact that you choose to snipe at him instead ("I'm appalled at the quality of post-secondary education that this guy has supposedly received") pretty clearly indicates that you have neither the experience to understand what he's doing nor the expertise to contribute to or comment on his work.

I think you would agree that if he researches this question, everyone he pulls answers from doesn't really 'own' part of his PhD. Otherwise the only way he could 'earn' it would be to pull answers out of his ass instead of figuring out which really were the best.

How is polling the opinions of a diverse group of people with expertise in the field not "research"?

Sure, if he uses your ideas you will get referenced. Reading the ideas of others is an extremely important part of what research is. Just because I used Maxwell's equations in my PhD thesis doesn't mean that it's his work and not mine. I cited the sources, and was entirely honest about which bits were mine and which were from somebody else. Then, in a separate section, I discussed the importance of my contributions.

The work of others typically contributes the majority of the volume (and some of the value) of any PhD thesis or research paper.

He isn't asking that you write his paper for him. He is looking for information and hoping that a colleague will point him to something interesting. I don't see anything wrong with that. Someone here might know of some research or project that would be very helpful that the OP could use.

Sorry to see that you're so jaded. No idea why you were modded up as 'insightful'. More like troll.

Good code avoids putting variables or functions unnecessarily in the global namespace. This makes name collisions less likely, so your code project is more likely to play nice with other code projects.
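A minimal illustration of the idea in Python (the settings names below are hypothetical, purely for illustration):

```python
# Risky: generic top-level names like "config", "host", or "retries"
# are easy to clobber when modules are combined or star-imported.
#
# Safer: group related names under one class (or module), so only a
# single identifier enters the importing namespace.
class AppConfig:
    DB_HOST = "localhost"
    DB_PORT = 5432
    RETRIES = 3

def connect_string(cfg=AppConfig) -> str:
    """Build a host:port string from the config namespace."""
    return f"{cfg.DB_HOST}:{cfg.DB_PORT}"
```

Only `AppConfig` and `connect_string` are visible at module level; everything else is reached through them.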

It's also good practice to try to make all of your code reentrant [wikipedia.org] and threadsafe [wikipedia.org]. As processors sprout an increasing number of cores, it is important to make sure your code can take advantage of the extra power.
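As a minimal sketch of thread-safe shared state (nothing specific to the comment; just the standard-library lock):

```python
import threading

class SafeCounter:
    """A counter whose increments are made atomic by a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self) -> None:
        with self._lock:            # only one thread mutates at a time
            self._value += 1

    @property
    def value(self) -> int:
        with self._lock:
            return self._value

counter = SafeCounter()
workers = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(4)
]
for t in workers:
    t.start()
for t in workers:
    t.join()
# With the lock held around each increment, the result is exactly 40000.
```

Without the lock, the read-modify-write in `increment` could interleave across threads and silently lose updates.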

It's also a good idea to COMMENT your code and DOCUMENT your processes. There's nothing worse than stumbling across something you wrote 10 years ago and having no idea how it works.

From what I can see, the real answer is process: having a documented process that you follow to ensure that code is free of bugs and readable. How you accomplish those things isn't exactly important. For making sure code is readable and maintainable, you can have formalized code walkthroughs, or you could just have another coder read it over before it is accepted into the project. Ensuring that the software doesn't have any bugs is another issue. You should have a repeatable test environment, whether it be unit tests or even just a list of actions performed by an actual person, in order to check that everything is working correctly. Some approaches work better than others. But the really important thing, in the end, is to have a defined process and ensure that it is being followed.

A process needs to evolve, and when you spend more time defining the process than solving the problem, you're in trouble.
I read somewhere that a project is defined as work on a problem which has not been solved previously.

Process is important, but there are some common mistakes that can happen in organizations focused on process.

One mistake is thinking that a heavy process is a better process. For the 1000-person team working on a DoD project, a heavy process is necessary, but a 4-person team building a system that is not mission critical will just be slowed down and demotivated by a heavy process. Many of the ideas of agile development are about replacing heavy process elements with lighter ones (not throwing process out the window, as some people seem to think).

Another common mistake is thinking that the details don't matter. For example, a code review can be very useful, if used in the right way. To get useful comments, you need reviewers who have experience with the language, the problem domain, the application etc. The coder must be willing and have time allocated to make changes based on the review comments. Preferably, before the code review, the code has already been run through a static analysis tool and the violations have been fixed, so the human reviewers can focus on the nontrivial problems in the code. To summarize, just having "code review" as part of your process manual does very little to improve the quality of the code, but a well executed code review is very useful.

Another easy trap is to have a detailed process manual, which describes a process different from the one actually used. To avoid this, you have to ensure that the developers are familiar with the written process, but you also have to change the written process when it turns out it is suboptimal in practice.

Always remember that the perfect process does not exist. It all depends on the context: some processes work well for experienced programmers but not for inexperienced, some processes work well for simple code bases but not for complex, small teams vs big teams, mature vs experimental code, team in one location vs distributed development, fixed deadline vs release when it's done etc.

Finally, know the limits of process. Even with a great process, you cannot get good code from bad programmers or get the code done before an impossibly tight deadline.

I'd recommend the book Object Thinking [amazon.com]. The methods can be applied to any programming language. I'd also recommend that anyone working on the project have a strong background in computer science and experience to go with it.

Use object oriented methods or, in the alternative, try extreme programming. Refactor whenever possible. Dissect and redistribute. Make sure the team is cohesive and factionalized. Compensate for all scalable factors on a frequent basis, using randomization approaches. Never, and this is not set in stone, allow the project to objectify to the point of opacity. This cannot be overemphasized: you can never add too much manpower to software tasks.

From my experience, I think the biggest thing is trust. The managers need to trust the developers to do what's right, and listen to the developers when they make suggestions on how to do it better. The developers need to trust that the managers won't get in their way, but will keep them on track and keep them insulated from distractions. Developers need to trust each other, that everyone's code works well etc, and trusts each other enough to ask for help when they need it. Once everyone trusts everyone else and can work well together, the project will be more successful.

If you work with people you can trust, you will probably end up with a good project. However, that trust (or lack thereof) doesn't come from nowhere. If you have coworkers you don't trust, it probably has a lot to do with experiences they have put you through which ended in less-than-satisfactory results. Developers that turn in ugly/buggy code, managers that are constantly changing requirements or trying to add an ever-increasing number of features; these are the things that ca

In my experience the single most important part of a software project is good requirements gathering and analysis.
As for development, every program that I know of uses some concept of divide and conquer. Breaking up a large problem into a set of connected smaller problems simplifies writing good code. It's easier to write small bug-free modules than it is to write a large program all at once.

It would be interesting to find the cutoff point where a problem should be further divided and when it is discrete enough. Also, it would be interesting to know when a developer begins to introduce bugs or less-optimized code, like after x many lines or y many hours.

It would be interesting to try to quantify code elegance. I forget who said it, but there's a saying: "code that looks good is good."

So... What Open Source Software fits your bill? Seems to me "Good Software" is simply software that addresses the problem it was designed to address in as economical a way as possible.

All these other facets are different beings entirely. Scalability? That's part of the problem domain; it has nothing to do with how good your software is... If your app only scales to one user, and you have... one user... you have created "Good Software." Software that other people can read is often considered "good", but

No, real world conditions provide the opportunity to view the raw possibilities of SNAFU at close range. The problem with theory is that theory is nice and clean and abstract, while practice is gritty and dirty and tangled. No theory of software design is going to account for the battle of the egos that goes on between teams who all think that the other teams aren't doing half the job that they would be doing.

Then you start moving into budget and manpower issues, and the very real issue that deals with actual va

It's been my experience that big software development companies almost invariably spend so much of their time worrying about those horrible real world conditions that it rarely occurs to them that those conditions didn't just happen by coincidence and that they could take steps to avoid the problems in the first place. Smaller shops tend to be much better at this.

Before anybody dives in and lectures me on scalability, let me say that IME the difference has a lot more to do with the kind of unprofessional,

Then you might ask "what makes a good developer good?" Well, that's not so easy to answer.

Indeed, that one is hard.

An interesting starting point on that conversation might be Weinberg's classic The Psychology of Computer Programming.

I've only read the original edition; I haven't had a chance to get my hands on the 25th Anniversary revision yet (yeah, it's been out for a decade... I've had other things to do, and unfortunately buying and reading it has been low on the list of priorities).

You need three things:
1: A bottle of your favorite alcoholic beverage.
2: Three seasons of South Park.
3: Make that two bottles, and some snacks.
4: An internet connection.
5: ...

Um, I forget where I was going with this, which pretty much sums up my first year. Ah, I have such fond memories of trying to find papers when I was starting. Now that I'm writing up, I can look down upon you first-year PhD noobs and laugh, oh yes, perhaps even 'heartily'.

Well anyway, there's always Google Scholar. Worked for me; it's an extreme

A software project is only as good as its documentation. Look at successful open source projects; many of these have excellent project documentation that tells you all about the architecture, structure, features, coding practices, standards implemented, data formats, data validation information, and so forth.

What kills so many projects is a lack of good documentation. If no one can figure out how to pick up the ball and code a feature or a bugfix or whatever, then the code will wither. This applies even to closed source projects; one of the things that screwed Vista over, for instance, is that much legacy code was in need of a rewrite, except no one knew what the code did anymore.

Options should be where they belong. Configuration/installation should offer as many options as are needed, and then some. (Since when is an "advanced installation" really just one or two options?) Remember that not all of us use a mouse all the time, so keep keyboard navigation easily available. Read some of the UseIT articles. Keep the layout consistent throughout the application.

There, those are some design fundamentals that I think are important. As for the code, don't be afraid to comment. Someone, like

Most good software I've seen follows the KISS principle internally: Keep It Simple, Stupid. Pieces of it know what they're supposed to do and they do just that. They don't mix in functionality for several things. They don't have embedded knowledge of how they relate to the rest of the system. They've got clean, modular interfaces that let you test just that one part to make sure it's doing what it should and not doing what it shouldn't, without having to haul in large parts of the rest of the system. They either don't make assumptions about what the rest of the system will hand them or they've got those assumptions clearly documented in the interface and they test that their input conforms to those assumptions and produce a clear error if it doesn't. Eventually some pieces will have to embody the design and logic, understand how all the individual pieces fit together to make the system work, but that's their job: to orchestrate the work being done, not to actually do it.
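The point about documenting assumptions in the interface and failing with a clear error can be made concrete with a small sketch (the function and its checks are invented for illustration):

```python
def moving_average(values, window):
    """Simple moving average over a numeric sequence.

    Assumptions are documented and checked at the boundary, so a caller
    who violates them gets a clear error rather than silent garbage:
    - `values` is a non-empty sequence of numbers
    - `window` is an int between 1 and len(values)
    """
    if not values:
        raise ValueError("moving_average: 'values' must be non-empty")
    if not 1 <= window <= len(values):
        raise ValueError(
            f"moving_average: window {window} outside 1..{len(values)}")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]
```

The piece does one thing, makes no assumptions about the rest of the system, and can be tested in complete isolation, which is exactly the property described above.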

Another indicator is that good software is designed with the certainty that it will change, that it will be extended and altered over time. Good software has that assumption built in. Bad software, by comparison, is often flagged by statements like "Don't worry, we're never going to change that." or "We don't need to worry about doing that.". Software designed not to change or be extended is either bad software or rapidly becomes bad software once it hits production.

And no, nothing particularly new there. It's been this way for about 50 years.

After years of debating on Usenet and the C2.com wiki, I've concluded that software engineering is largely a psychological process, if one is looking beyond just performance issues. Because of Turing completeness, there are many, many solutions to any given problem (same output). The solution that people prefer seems to be the one that best fits the way they think, or the way they think about the domain (problem-space). The problem is that until we can dissect an entire working brain, psychology varies per person

I've personally written one medium-size project (around 100k). As the project scaled, I found there were a few things that really made a difference.
Modularity, with high boundaries and simple interfaces between modules. What I mean by this is that as a program grows more complex, if you reduce the number of dependencies, maintainability increases greatly. Also, if it's possible to use the standard data types (e.g., deque instead of MyReallyCoolIntegerDeque) for communication between modules, it's much easier
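The standard-types-at-the-boundary idea might look like this (a sketch; `MyReallyCoolIntegerDeque` comes from the comment above, everything else is invented):

```python
from collections import deque

def process_jobs(job_ids: list[int]) -> list[int]:
    """Module boundary: plain list in, plain list out.

    Internally we can use any structure we like (here a deque, not a
    MyReallyCoolIntegerDeque that every caller would have to learn);
    callers never see it, so the interface stays simple. Doubling each
    id is a stand-in for real work.
    """
    queue = deque(job_ids)
    done = []
    while queue:
        done.append(queue.popleft() * 2)
    return done
```

Swapping the internal deque for a priority queue later would change nothing for callers, since only standard types cross the boundary.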

I strongly suggest you see if you can get a few weeks of academic internship with these [lockheedmartin.com] people, also known as 'those who write the right stuff' [fastcompany.com]. They actually do know how to write software.

Other places to look: the Linux kernel team, or Donald Knuth's TeX/LaTeX. Or, believe it or not, Blizzard Entertainment. They are actually the only entertainment software company I know of with a proven track record of extremely high quality software compared to others in the field.

But any core team of non-trivial low-level open source software technology will do, actually. The Python core team, PHP core team, your favourite Linux IO crew, Apache, OpenLaszlo, KDE, Haxe, Blender... whatever. And while people will start bickering that Apache or Blender code is oh so crappy in this or that area, rest assured that all projects of that kind, *including* the aforementioned, *all* have core team members who are very well aware of the downsides of their software, and thus can help you out in your pursuit of details on professional software development, because they also know the pitfalls.

Bottom line: Join some tight crew of people that build stuff everybody uses or many people rely on to work. Hang with them for a month or two, then you'll have a better idea how exactly to approach your topic.

I've been programming for almost 15 years and have spent a lot of that time very gradually refining my own design principles. Here are my basics:

1. Modular designs. Modular code is generally more maintainable and more scalable.
2. Self-documenting code. If you read my code, you can understand what's going on just by the code. There are very few comments, because very few are needed.
3. Occam's Razor: the simplest solution is often the best/most correct solution. Over-complicating things often leads to maintainability issues later on.

I am currently working on a project that requires me to share code with several other people. None of them have needed much direction when picking up my code and re-using it, because I've used sound design principles when writing it.

There really is no single answer that handles all situations. I use some more specific principles when doing different types of projects, depending on whether I'm doing database design, web development, stand-alone applications, or complex application systems. System design is very subjective; every person seems to have a different way of doing things. One thing I always ask myself is this: will I be able to work with this code 6 months from now? If the answer is no... then I have work to do to improve the design.

> Self-documenting code. If you read my code, you can understand what's going on just by the code. There are very few comments, because very few are needed.

I've heard this espoused before, but I think there are a lot of caveats. Generally speaking, my experience is that comments are needed in direct proportion to the complexity of the code. Code can't always be simplified. For example:

a) What you're trying to do is extremely complex (e.g., physics simulation code)
b) It needs to be highly optimized, which often comes at the expense of readability
c) The language itself is not highly readable in the first place (assembly is inherently difficult to read, while C# reads almost like English)

I find myself writing much more verbose comments whenever I tackle complex problems. Comments are not only useful for describing "how"; they're important for describing "why." Code simply can't tell a reader why a particular algorithm was used in place of another. It can't warn the next person who modifies the code about pitfalls you yourself ran into. And it can't give a nice high-level overview of expected usage patterns. These things are most crucialial in your most complex code.
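A small illustration of a "why" comment (Python; the scenario and function name are invented): the code shows how, while the comment records the reason and the pitfall so the next maintainer doesn't "simplify" it back into a bug.

```python
import math

def stable_hypot(x, y):
    # Why not math.sqrt(x*x + y*y)? Squaring first overflows for large
    # inputs (e.g. x = 1e200 squares to infinity in double precision).
    # Factoring out the larger magnitude keeps intermediates in range.
    # Don't "clean this up" -- the roundabout form is the point.
    big, small = max(abs(x), abs(y)), min(abs(x), abs(y))
    if big == 0.0:
        return 0.0
    ratio = small / big
    return big * math.sqrt(1.0 + ratio * ratio)
```

Nothing in the executable lines explains why the obvious one-liner was rejected; only the comment preserves that decision.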

It's absolutely impossible to avoid complexity at some level when you're trying to produce complex results. Perhaps there are cases where comments are largely unnecessary, given the language and type of code you're producing, but a lack of comments would be a real hindrance in the environment I work in.

I'm intrigued by this question, because I would assume that by the time you've reached this level (i.e. have a Master's in CS or something related) you would already have an idea as a starting point. Furthermore, I thought that the first part of any PhD-level research was an intensive Literature Review [wikipedia.org].

So, in other words, you should search LexisNexis, EBSCO, etc., and find some journal articles that talk about this. Read some books, like the Gang of Four's Design Patterns or The Mythical Man-Month. Lastly, do your own data gathering: find a bunch of post-mortems and start to put your own patterns together.

Oh, wait, all that would require work.

Seriously...I teach college-level courses and have multiple graduate degrees...and I'm continuously amazed at the quality that schools put out nowadays.

Agreed. Get thee to a library my friend. If you can't cite the 10 most important and 10 most recent journal articles on the question, what the hell are you doing asking for help on Slashdot? Who's your supervisor?

2) I am doing separate literature searches, in various ways. I also wanted to get input from practitioners in the field, since much good work in software engineering does not come from the academic community.

Get some good journal papers and go from there. Please, please take 'articles' with a grain of salt no matter where they come from, particularly online articles and, to a lesser extent, 'proceedings.' (I note Stafford lists her journal papers alongside her conference ones, and this can be problematic.)

In some countries you have to submit a project in order to enroll in a doctorate programme; in others you become part of an ongoing project and your work is a spinoff from that. Either way, I can't see how you are already working on your PhD and still asking these sorts of questions.

1) is where most software and management people both fail. Managers refuse inexpensive updates to keep the software robust because there is "no R.O.I.," and then buy or commission a new "wonder package" at much higher expense than the deferred maintenance. Meanwhile, developers (and users) design systems that are nice but so expensive they really can't be built.

2) Software, unlike buildings, is constantly changing, and usually the change is equivalent to adding new floors onto an existing building.

Of course, with a full page of documentation for every 10 lines of code, and an average daily output of roughly a dozen lines of code, their process is far more time-consuming and expensive than most development budgets can support.

That, along with some comments and a clear writing style: readable indentation, consistency of style, and reasonably descriptive names for variables, data-structure fields, and object members...

And as much as you might try to write a "beautiful algorithm," some of the best ones are simpler, have less feature creep, and can be combined with other simple ones to achieve the same results.
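One hypothetical reading of that, in Python (all names invented): several tiny, obvious steps chained together, rather than one clever all-in-one routine.

```python
def words(text):
    """Split text into lowercase words."""
    return text.lower().split()

def drop_short(word_list, min_len=3):
    """Keep only words of at least min_len characters."""
    return [w for w in word_list if len(w) >= min_len]

def counts(word_list):
    """Tally occurrences of each word."""
    tally = {}
    for w in word_list:
        tally[w] = tally.get(w, 0) + 1
    return tally

# Each piece is trivial to test and reuse; composed, they do the same
# job a monolithic word-frequency routine would.
freq = counts(drop_short(words("the cat and the hat")))
```

Each small function can also be swapped out independently (say, a smarter tokenizer) without touching the others.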

I would say that in many cases the answer to your question depends on the purpose of the software in question.

Common to most (I don't know about "all") successful software products is the fact that they are implemented using one or more Software Design Patterns [wikipedia.org]. For instance, you will find that Model-View-Controller (MVC) [wikipedia.org] is extremely widespread in administrative software and similar database-driven applications, while it is probably not really usable in many other types of software (like graphics editors).
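A minimal sketch of the MVC split in Python (all class names hypothetical): the model holds state and rules, the view only renders, and the controller maps user actions onto model operations.

```python
class CounterModel:
    """Model: state and business rules only -- knows nothing of display."""
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1

class CounterView:
    """View: rendering only -- reads the model, never mutates it."""
    def render(self, model):
        return f"Count: {model.count}"

class CounterController:
    """Controller: translates user input into model operations."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def handle_click(self):
        self.model.increment()
        return self.view.render(self.model)
```

The payoff is that the view can be replaced (console, web page, GUI) without touching the model, which is exactly why the pattern thrives in database-driven applications.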

One of the most useful principles I've found for making "good" software is to design very clean, very powerful interfaces. Focusing on "modularity" often puts the focus in the wrong spot, namely on the internals of the module. The point is that the details there *shouldn't matter*, because you can abstract away all sorts of fiddly detailed functionality.

It is difficult to make clean and powerful interfaces, however. You really have to understand the nature of the problem you're trying to solve in order to pick the most natural groups of functionality. Very often, if you're trying to get something done in a reasonable amount of time and don't need to maintain the code for that long (though beware--you'll find yourself using, a decade later, programs that you thought you'd rewrite "next month"), it's better to code something quick and specific.

The cleanliness of an interface basically boils down to how little information you can pass to it, and how little information you need from it, in order for it to do what you want; and to what extent all information and data goes through explicitly defined interface elements (e.g. an interface in Java). (Here I'm drawing a distinction between data, e.g. the content of a character stream, and information, which is "hey, there's a character stream here, go work on it".)

The power of an interface basically boils down to how many different high-level operations can be constructed from mixing and matching components of the interface. For example, compositing operations tend to be powerful (e.g. take A, take B of the same type, perform some operation to produce C of the same type from A and B).
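A toy Python sketch of such a compositing interface (the `Region` class is invented for illustration): every operation takes values of the type and returns the same type, so high-level operations fall out of mixing and matching a handful of primitives.

```python
class Region:
    """An immutable set of integer points with closed composition ops."""
    def __init__(self, points):
        self._points = frozenset(points)

    def union(self, other):
        # Take A, take B of the same type, produce C of the same type.
        return Region(self._points | other._points)

    def intersect(self, other):
        # Same closure property: Region x Region -> Region.
        return Region(self._points & other._points)

    def __contains__(self, point):
        return point in self._points

# Because every operation is closed over Region, callers can build
# arbitrarily complex shapes without the interface growing new methods:
shape = Region(range(10)).intersect(Region(range(5, 15))).union(Region({99}))
```

The interface is clean (each call needs only the two operands) and powerful (union and intersect compose into shapes the author never anticipated), which is the combination the parent is describing.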

There are lots of other generally useful strategies, but I find this one of the most overlooked, especially by otherwise really talented coders (who can tend to make interfaces more complex because they are talented enough to work with something that complicated).