Before taxonomizing open-source business models, we should deal with
exclusion payoffs in general. What exactly are we protecting when
we close source?

Suppose you hire someone to write (say) a specialized accounting
package to order for your business. That problem won't be solved any
better if the sources are closed rather than open; the only rational
reasons you might want them closed are that you want to sell the
package to other people, or deny its use to competitors.

The obvious answer is that you're protecting sale value, but for the
95% of software written for internal use this doesn't apply. So
what other gains are there in being closed?

That second case (protecting competitive advantage) bears a bit of
examination. Suppose you open-source that accounting package. It
becomes popular and benefits from improvements made by the community.
Now, your competitor also starts to use it. The competitor gets the
benefit without paying the development cost and cuts into your
business. Is this an argument against open-sourcing?

Maybe -- and maybe not. The real question is whether your gain from
spreading the development load exceeds your loss due to increased
competition from the free rider. Many people tend to reason poorly
about this tradeoff by (a) ignoring the functional advantage of
recruiting more development help, and (b) not treating the development
costs as sunk. By hypothesis, you had to pay the development costs
anyway, so counting them as a cost of open-sourcing (if you choose to
do so) is mistaken.
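The tradeoff above can be sketched with a few lines of arithmetic.
All figures here are invented purely for illustration; the point is
only the shape of the decision rule, not the numbers:

```python
# Sketch of the open-sourcing tradeoff, with invented figures.
# The original development cost is sunk: you pay it whether or not
# you open the sources, so it must not enter the decision.
sunk_dev_cost = 100_000  # already spent either way; ignored below

# Hypothetical annual effects of opening the sources:
community_improvements = 30_000  # value of outside fixes and features
free_rider_loss = 12_000         # business lost to a free-riding competitor

net_gain_from_opening = community_improvements - free_rider_loss
print(net_gain_from_opening)      # 18000: opening pays off in this scenario
```

Note that adding `sunk_dev_cost` to the loss side would flip the
conclusion, which is exactly the reasoning error described above.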

There are other reasons for closing source that are outright
irrational. You might, for example, be laboring under the delusion
that closing the sources will make your business systems more secure
against crackers and intruders. If so, I recommend an immediate
therapeutic conversation with a cryptographer. The really
professional paranoids know better than to trust the security of
closed-source programs, because they've learned through hard
experience not to. Security is an aspect of reliability; only
algorithms and implementations that have been thoroughly peer-reviewed
can possibly be trusted to be secure.