Proceed with caution

Proponents of keeping all "implementation details" private are usually motivated by the desire to maintain encapsulation. However, keeping everything locked down and unavailable is a misunderstood approach to encapsulation. If keeping everything unavailable were the ultimate goal, the only truly encapsulated code would be this:

static void Main(string[] args)

Is your colleague proposing making this the only access point in your code? Should all other code be inaccessible by external callers?

Hardly. Then what makes it okay to make some methods public? Isn't it, in the end, a subjective design decision?

Not quite. What tends to guide programmers, even on an unconscious level, is, again, the concept of encapsulation. You feel safe exposing a public method when it properly protects its invariants.

I wouldn't want to expose a private method that doesn't protect its invariants, but often you can modify it so that it does, and then expose it to the public (of course, with TDD, you do it the other way around).

Just don’t do it

This may just be a difference in emphasis from the other answers, but I'd say that code should not be refactored strictly to improve testability. Testability is highly important for maintenance, but testability is not an end in itself. As such, I would defer any such refactoring until you can predict that this code will need maintenance to further some business end.

At the point that you determine that this code will require some maintenance, that would be a good time to refactor for testability. Depending on your business case, it may be a valid assumption that all code will eventually require some maintenance, in which case the distinction I draw with the other answers here (e.g. Jason Swett's answer) disappears.

To sum up: testability alone is not (IMO) a sufficient reason to refactor a code base. Testability has a valuable role in allowing maintenance on a code base, but it is a business requirement to modify your code's function that should drive your refactoring. If there is no such business requirement, it'd probably be better to work on something your customers will care about.

(New code, of course, is being actively maintained, so it should be written to be testable.)

36 Reader Comments

The idea that there is some kind of universal norm on what constitutes good code and bad code is nonsense. In the real world, each organization that creates or maintains software works out its own strategy. That strategy is likely to depend heavily on the kind of function the software implements, the skill level of its programmers, and the kind of competition it faces.

In the real world, when software is being developed to control some kind of system or device, conceptual understanding of the technology is valued far more than understanding of programming techniques. The quality of the programming has to be good enough. In one phase of my career, I worked in an environment where the product was dominant in its area and very profitable. That meant the company could afford to devote substantial resources to trying to improve its quality, in part by improving the processes used for each cycle of its development. Even with that effort, the reality was that few programmers were quick to move beyond whatever tools they became used to in their earliest experience.

That reality changed as competition emerged from a company that is currently dominant in networking products. Its strategy apparently was to give responsibility for a relatively large function to a single engineer and tell them to get the job done. Success was well rewarded. In at least one case, when success fell short, the company's approach was to give out the programmer's home phone number to customers so he could be reached in the middle of the night to fix problems.

Congratulations to the developers who have jobs that give them the time to randomly refactor code so they can test it. Presumably this code was working before they decided to refactor? If not then I say go for it. Otherwise I say that you clearly have time to take on more work.

Making code testable is obviously the ideal, and providing decent test coverage of the API is a great bonus, but you had better be sure that the code you decide off your own bat to go and fix does not have any spurious bits of behaviour that other parts of the system are actually expecting. (I'm not saying that's right, but history is littered with examples of software that broke because people fixed the way code was actually implemented rather than changing its inputs/outputs.)

EDIT: I like the fact that the guy who actually acknowledged the situation only got one vote whereas the 'idealists' got over 100 between them.

If you have no tests, how do you know it's "working" code? Just get the damned tests written already. Then if you later want to refactor, you'll have some unit tests to tell you when you broke something.

If you have no tests, how do you know it's "working" code? Just get the damned tests written already. Then if you later want to refactor, you'll have some unit tests to tell you when you broke something.

Because the program is working. Don't forget that unit tests were invented to automate testing, freeing up time in QA and for the programmers. That doesn't mean that unit tests are suddenly some magic bullet; you still need a test that covers the scenario that actually breaks the system, and it's functionally impossible to test every code path in a reasonably complex program for every possible input, simply because the number of possible paths quickly approaches infinity. Testing in general will only ever find some of the bugs. In fact, there is a whole class of bugs where the code is working perfectly but the user is still seeing unexpected results. This usually comes down to threading, but it can also be down to users using the system in a different way to the way everyone expected.

You should always aim for testable code. If making your code testable breaks your encapsulation then you probably need to break your objects/functions down to smaller parts to allow you to maintain encapsulation.

Development time is a concern, however if you are talking about software that must be supported years down the line then the upfront time costs will be repaid many times over down the road.

If you have no tests, how do you know it's "working" code? Just get the damned tests written already. Then if you later want to refactor, you'll have some unit tests to tell you when you broke something.

Because the program is working. Don't forget that unit tests were invented to automate testing, freeing up time in QA and for the programmers. That doesn't mean that unit tests are suddenly some magic bullet; you still need a test that covers the scenario that actually breaks the system, and it's functionally impossible to test every code path in a reasonably complex program for every possible input, simply because the number of possible paths quickly approaches infinity. Testing in general will only ever find some of the bugs. In fact, there is a whole class of bugs where the code is working perfectly but the user is still seeing unexpected results. This usually comes down to threading, but it can also be down to users using the system in a different way to the way everyone expected.

The reasons why unit tests will not eliminate manual QA are as follows:

1) Unit test coverage is almost never 100%; even if you think it is, you're probably wrong. Getting 100% coverage is difficult for large projects.

2) Unit tests are not integration tests; some problems only arise when multiple systems are expected to work together, especially when the failures are performance-based.

Now integration tests can be automated too, however even then you probably won't have 100% coverage. Chances are there are going to be requirements that simply were overlooked in the documentation and the development of automated tests. Things like that make 100% coverage difficult.

I believe there is always time to stop and refactor when code interacts with older code. It improves everything downstream. If that then helps improve testing, it probably also improves reuse, so it becomes a win-win.

I am in that situation now with a large web services project started back in the .NET 2 days and constantly growing. Now, with hindsight and ReSharper, you start to see weak points, so it makes sense to get in there and make things good while you are there anyway.

Man, I made an account just to reply to this. I've managed multiple QA groups, and have been involved in QA for about 15 years. I wrote a comment that ended up being way too long, but the summary is this:

1. Not modifying your code for test purposes may be okay on some dinky app that is never touched again after the initial release, but it is not okay on some major project that spans multiple platforms, teams, and years. I can't count the number of times I've been handed a completely broken build because some dev modified code written by someone who left the company five years before.

2. I see QA and Support costs rise while innovation decreases significantly over time on teams who do not focus on testable code.

3. I've never seen a large project where this isn't stressed implement a good unit testing foundation, and I've stopped believing anyone in dev who says they will go back and do it later after the release or towards the end.

I know there are limits to unit testing, and that should be kept in mind. I'm just saying that the mentality should be far more on the pro testability side based on my experience. In fact, one of my first questions is about this whenever I'm interviewed, and I can pretty much end the interview if they don't list unit testing as a driver of development.

Retrofitting tests on a large project that has little to no tests is difficult, expensive and time consuming. Modifying the code to support tests is beneficial because it will increase your test coverage and decrease coupling between modules.

Having tests in place is a good thing, even when your code does not change, because:

1. the compiler you use (i.e. its optimisation and code generation) will change;
2. the underlying platform implementation will change from version to version (the APIs in Windows change return codes, and have different behaviour on invalid values);
3. the assumptions you make about a platform may change (e.g. Vista changed write access to an application's Program Files directory);
4. the runtime environment (C#/Java/Python/...) may change behaviour from version to version (e.g. .NET 4 or 4.5 changed the way structured exceptions were processed);
5. the system may act differently from device to device (e.g. the handling/creation of temporary files on Android);
6. if you are parsing protocol data or file formats, the code may be exploitable when malformed data is passed to it, or break when a new version of the protocol/file format is released;
7. ...

In the example given, the refactoring may not be necessary -- especially if this is a library method, as you can use that library in multiple test programs that will have different execution assemblies that you can then use to control the tests.

Likewise, for protocol/file parsers you could create a simple command line application that writes the data out in a plain text format, which can then be compared against a reference output file using a diff tool/library.

If you can write the tests without modifying the code under test, don't modify the code. If you have to modify the code, evaluate the high-value targets and test those first. That is, code that has the most bugs associated with it, is the most complex/difficult to understand, or code that is going to change to add new functionality.

For the code that is complex, take the time to study it, document it and slowly add tests to it. Ensure that you are covering as much of the behaviour you can (including the failure cases -- null values, empty strings, inf/-inf/NaN floating point values, etc.).
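
For instance, a parameterized test is a cheap way to pin those failure cases down. A rough sketch using xUnit follows; the Parser type and TryParse method here are invented purely for illustration:

using Xunit;

public static class Parser
{
    // Hypothetical method under test: returns false instead of throwing on bad input.
    public static bool TryParse(string input, out decimal value) =>
        decimal.TryParse(input, out value);
}

public class ParserTests
{
    [Theory]
    [InlineData(null)]
    [InlineData("")]
    [InlineData("   ")]
    [InlineData("not-a-number")]
    public void TryParse_rejects_malformed_input(string input)
    {
        // Every malformed case should be rejected rather than blow up.
        Assert.False(Parser.TryParse(input, out _));
    }
}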

This is mainly a new thing: when Visual Studio 2005 came with a unit test framework, suddenly everyone was into unit testing, and with that they suddenly found the problem that the units were usually too coupled. Then they found mocking, but as the mock frameworks only worked on interfaces, they found that they had to modify their code all over the place to cope with this limitation.

Martin Fowler discusses it in his article Mocks Aren't Stubs, comparing these 'mockist' testers with 'classical' unit testers, and how interface-driven classes and dependency injection have become much more important to support code that uses these mock and test frameworks.

What is interesting to me is whether a shift back to the way code used to be written will grow with the release of Microsoft's Fakes framework, which lets you mock concrete classes, including static objects. You only have to read the arguments about whether to unit test private methods to see how the limitations of the test frameworks have had a major impact on how testing is performed (i.e. they can't drive private methods, so many people decide that testing private methods is not important!).

I've always been intrigued by the way 'traditional' TDD turned into testing every little method rather than a way to almost design code components before coding them, and how that has turned into the term BDD to distinguish it.

To answer the question then, no you should never change the way you code just to pander to some framework's limitations. Especially not when there are much better frameworks available that don't have these limitations.

Yes, it's a bad practice to modify code. If the code works, then don't touch it. The best possible outcome is nothing will happen. The worst possible outcome is terrifyingly bad. For example, you could introduce a bug that causes widespread customer data loss.

However, if the code doesn't work properly then go ahead and rewrite it. And if you're writing it again anyway, then sure - try to do so in a way that makes it compatible with testing.

For example, I'm a huge fan of static class methods. But I avoid using them these days, specifically because they're hard to work into a good test environment.

I see a lot of comments about testing code after it's built here. For me, TDD and BDD mean that I'm thinking about the functionality and how it should behave first. Then you write tests to describe how it should work. And *then* you write code to make the tests pass. If you find you are writing code that doesn't support a test, you've either missed a test case or you are wasting your time on features not defined or needed.

I see it as a way to keep yourself laser focused on the goal of writing testable code that has some assurances that the parts that should work actually keep working after every change made later. It also reduces the time needed to develop/verify the tests because you do it as you go.

To the original question on SO, I say "if you are adding tests after the fact, it's not TDD." It's just retroactively adding tests. Refactoring to help the testing harness is a risk-based choice with no right or wrong answer. Simple web app? Do it! Core transaction processing logic of a large app? Not so fast. Weigh the risk of breakage vs the benefits of covering it with tests...

If you have no tests, how do you know it's "working" code? Just get the damned tests written already. Then if you later want to refactor, you'll have some unit tests to tell you when you broke something.

Don't be ridiculous. Just because there are no automated tests doesn't mean it's never been tested. If a piece of code is in the wild and has been executing happily thousands of times per day for years, then it absolutely has been tested and it does work.

If you write a test that finds a bug nobody has ever encountered in the wild and then you fix that bug, there's a high probability your fix will cause some other bug that wasn't covered by your tests, and maybe the company will lose customers.

It's impossible to write a test that covers every situation. The only "real" test is to put the thing out there and see if customers find any problems. Anything that has already been in customers' hands without any complaints is absolutely "tested" even if there are no proper tests.

Man, I made an account just to reply to this. I've managed multiple QA groups, and have been involved in QA for about 15 years. I wrote a comment that ended up being way too long, but the summary is this:

Your points are all good, and I agree with you.

But I think your experience working on large products is blinding you somewhat. Most projects in the world do not have anybody doing QA, let alone an entire team that needs somebody to manage it.

Coding practices need to be adjusted to suit the project you're working on.

If I was in an interview with you, and you asked me how I fit unit testing into my project, I would say it depends. I work on a broad spectrum of different projects and sometimes I write no tests at all, while other times I spend days working on tests and only minutes writing the code being tested.

Some of the things I work on, I would be a quarter of the way through writing tests when the manager steps in and says "why haven't you finished that job yet? You're supposed to be half way through the next project by now!"

I understand the value of a good test environment, I personally built the entire test environment at the company I work for and take every opportunity to push my colleagues to use it. But the reality is we can't do it for everything, or the company we work for would go bankrupt and we would all be looking for a new job. We only do tests for parts of the code where stability is critical.

Still, I don't care what the code is: if you have some code that has been working for a long time and doesn't have any known issues, you should not change that code in order to make tests possible. Go ahead and write tests, but don't change the code. And if you find any bugs then be *extremely* reluctant to fix them.

Don't refactor code unless it has an actual real-world problem. A perfectly good horror story: a while back I found an error in some code that does tax calculation on invoices. It was a really stupid error, almost as bad as "1 + 1 = 3", but due to a weird coincidence of how it was being used the bug didn't happen very often. There was even a unit test in place which appeared to suggest that "3" is the intended answer, even though it doesn't make any sense to anyone who knows how tax should be calculated. Sadly, the guy who wrote it didn't work for us anymore, so we couldn't find out what he was thinking.

After much discussion we fixed the bug (it was only two or three characters of code changed), tested it for days, rolled it out... and three months later we found out our change had introduced a new, unrelated bug that had been undercharging customers for that entire three months, and the client was extremely pissed off.

Hindsight is 20/20, and what we should have done is stick a debugger warning in there ("don't ever use this method, it's broken") and then create a replacement for it, used only for new code.

Rewriting for testability is a good thing. Changing VISIBILITY, however, is a TERRIBLE thing to do.

Public methods establish a contract that has to be maintained FOREVER (in code years, which are approximately equal to dog years, but are about 20% longer). So introducing public methods, or promoting private ones to public ones is a Very Bad Thing To Do (VBTD).

There are a couple of easy solutions that allow invocation of private methods, without changing visibility. The following solution works in most major languages (Java in this case):
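
Roughly like this; InvoiceCalculator and applyTax are invented names, and the only part that matters is the nested TestInstrumentation type delegating to the private method:

public class InvoiceCalculator {

    private double applyTax(double amount) {
        return amount * 1.20;
    }

    /**
     * Exists solely so tests can reach the private method above without
     * making it public. The deliberately clunky name makes accidental
     * production use easy to spot in a code review.
     */
    public static class TestInstrumentation {
        private final InvoiceCalculator target;

        public TestInstrumentation(InvoiceCalculator target) {
            this.target = target;
        }

        public double applyTax(double amount) {
            // Legal: a nested class can access the enclosing class's private members.
            return target.applyTax(amount);
        }
    }
}

// In a test:
// double taxed = new InvoiceCalculator.TestInstrumentation(new InvoiceCalculator()).applyTax(100.0);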

Yes, it's a bit cumbersome; but it probably should be. It's better to test behavior rather than implementation, as a general rule. And calling a private method implicitly tests implementation instead of behavior. It's tempting to use a more conventional name than TestInstrumentation, but resist: nobody is going to use a method named TestInstrumentation.someMethod in production code. (I worked with a programmer once, whose nickname was Nobody, for exactly that reason.)

You can also use reflection to call private methods in some languages. A C# example:
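
Something along these lines (InvoiceCalculator and ApplyTax are again just placeholder names):

using System;
using System.Reflection;

public class InvoiceCalculator
{
    private decimal ApplyTax(decimal amount) => amount * 1.20m;
}

public static class PrivateMethodDemo
{
    public static void Main()
    {
        var calculator = new InvoiceCalculator();

        // Look the private method up by name; BindingFlags.NonPublic is what
        // lets reflection see it.
        MethodInfo applyTax = typeof(InvoiceCalculator).GetMethod(
            "ApplyTax", BindingFlags.Instance | BindingFlags.NonPublic);

        // Because the method is identified by a string, renaming or changing
        // the private method only fails here at run time, not at compile time.
        decimal taxed = (decimal)applyTax.Invoke(calculator, new object[] { 100m });
        Console.WriteLine(taxed); // 120.00
    }
}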

The substantial disadvantage of the latter approach: you don't find out how many test cases you broke when you change a private method signature until you execute the full test suite (typically around the time that you break the build, in less advanced development cultures that aren't using staged commits and continuous builds). With the TestInstrumentation technique, you find out at compile time how many test cases you broke when you changed the signature of that private method. Which makes the choice pretty obvious really.

In the case you give, I'd normally advocate doing what you suggest. It lets you use the do-something code in other ways without having to write another copy of it, and you can use the single-assembly function in a loop to get the original all-assemblies functionality so you lose nothing. Your proposal yields more flexible code without sacrificing anything.
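
As a rough sketch of that split (the names here are mine, not from the original question): the all-assemblies method becomes a thin loop over a per-assembly method, and tests target the latter with an assembly they control.

using System;
using System.Reflection;

public static class TypeScanner
{
    // Original behaviour, unchanged: walk every loaded assembly.
    public static void RegisterAllAssemblies()
    {
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            RegisterAssembly(assembly);
        }
    }

    // The extracted piece: a test can pass one known assembly
    // (e.g. typeof(SomeFixtureType).Assembly) instead of whatever
    // happens to be loaded into the test runner.
    public static void RegisterAssembly(Assembly assembly)
    {
        foreach (Type type in assembly.GetTypes())
        {
            Console.WriteLine(type.FullName); // stand-in for the real per-type work
        }
    }
}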

In general I think code should be designed from the start with testability in mind. I wouldn't compromise other aspects merely for testability, but I'd design in as much testability as I could without compromising functionality, stability or performance. You will need to test your code, and the more help you have from the code itself the easier testing will be. You will generally need to maintain and enhance code in the future, and code that's not designed with that in mind is code that's designed to ignore a basic requirement.

Rewriting for testability is a good thing. Changing VISIBILITY, however, is a TERRIBLE thing to do.

Public methods establish a contract that has to be maintained FOREVER (in code years, which are approximately equal to dog years, but are about 20% longer). So introducing public methods, or promoting private ones to public ones is a Very Bad Thing To Do (VBTD).

There are a couple of easy solutions that allow invocation of private methods, without changing visibility. The following solution works in most major languages (Java in this case):

I agree with the statement that you should not be changing the visibility of methods for the purpose of testing, however your approach is still testing the private methods. Private methods are part of the implementation (just as are private member variables) and should not be tested.

In this case, you need to call the public methods in a way that will call the private methods appropriately. Otherwise, you are unnecessarily binding the implementation details to the test code, thus making it harder to refactor that code.

You can make use of code coverage tools to see if your tests are invoking the private methods.

This also includes tricks like declaring the test classes friends of the class being tested.

If you have no tests, how do you know it's "working" code? Just get the damned tests written already. Then if you later want to refactor, you'll have some unit tests to tell you when you broke something.

Don't be ridiculous. Just because there are no automated tests doesn't mean it's never been tested. If a piece of code is in the wild and has been executing happily thousands of times per day for years, then it absolutely has been tested and it does work.

If you write a test that finds a bug nobody has ever encountered in the wild and then you fix that bug, there's a high probability your fix will cause some other bug that wasn't covered by your tests, and maybe the company will lose customers.

It's impossible to write a test that covers every situation. The only "real" test is to put the thing out there and see if customers find any problems. Anything that has already been in customers' hands without any complaints is absolutely "tested" even if there are no proper tests.

The italics are mine... What if, in the scenario you have mentioned, the code in question has an undiscovered flaw that is a security exploit, and you need to fix that exploit? It may affect the other code, as you state. So, in the case of the security exploit, do you leave it and hope, or fix it and hope that the fix won't affect the other code? That is why you need testable code, or need to change the code for testing. Also, you are right about modifying code on a production system: don't do it, because you don't know what you will affect, and thus you need testing...

If you have no tests, how do you know it's "working" code? Just get the damned tests written already. Then if you later want to refactor, you'll have some unit tests to tell you when you broke something.

Don't be ridiculous. Just because there are no automated tests doesn't mean it's never been tested. If a piece of code is in the wild and has been executing happily thousands of times per day for years, then it absolutely has been tested and it does work.

If you write a test that finds a bug nobody has ever encountered in the wild and then you fix that bug, there's a high probability your fix will cause some other bug that wasn't covered by your tests, and maybe the company will lose customers.

It's impossible to write a test that covers every situation. The only "real" test is to put the thing out there and see if customers find any problems. Anything that has already been in customers' hands without any complaints is absolutely "tested" even if there are no proper tests.

Not true at all. Sometimes customers don't realize there is a bug, sometimes they think the error is their fault or just accept it without reporting it, and some bugs are so elusive and obscure that they go unreported for years.

When we are considering software used to calculate values, oftentimes the customer will simply trust the software's math over their own. Bugs like that can cost the customer money while going unreported. Bugs in algorithms applied to large datasets are easily buried within the mass of data. Customers will likely miss these errors while still paying the price for them.

Unless you unit test your algorithms and calculations you will not be able to claim 100% compliance with requirements. All that you will be able to say is that no one has reported anything, and as described previously that isn't very meaningful. Of course you can still test things manually, however manual tests are tedious and due to human error are often unreliable and inconsistent.

Answer: It depends. If you have other questions, that will probably be the same answer.

Dependency injection, like the refactoring example here, is good, and probably worth modifying the code for if the fan-out of the changes isn't too bad. If everything that calls this code has to change, that's probably more bad than the good you get from DI. Same with private constructors and factory static methods: great practice, but if all the existing code uses new and a ctor, then it's bad to have to touch everything.
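
For what it's worth, the kind of change being weighed looks something like this (IClock, SystemClock and ReportService are invented names): every caller that used to write new ReportService() now has to supply the dependency, which is exactly the fan-out cost.

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class ReportService
{
    private readonly IClock clock;

    // Constructor injection: production passes new SystemClock(),
    // tests pass a fake clock with a fixed time.
    public ReportService(IClock clock)
    {
        this.clock = clock;
    }

    public string Header() => "Report generated " + clock.Now.ToString("yyyy-MM-dd");
}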

You just have to decide on how much time you have to break and improve working code.

Code is never static. Mods/updates will always happen during the lifetime of the program. Debugging code must be built in for periodic testing by the programmer or future auditors; it is usually turned on/off by a program switch or control as needed.

It's funny because I'm a young programmer currently dealing with this dilemma. From my experience, I think there are external factors that need to come into play, especially if your company doesn't own the product (thinking websites here, and web agencies like the one I currently work for). I would also say that whether any products you are tying into are testable also determines it.

To my first point, I cite personal experience. A project I am currently working on is somewhat of an awkward situation. I am almost an augmented developer for a company that owns most of its products (weird even for my line, considering we normally do our own work and pass it back to the client, not get fully enveloped into their ecosystem). The issue with this client is that they don't want to add time for testing, which I've also never done before, the web being a little bit more of a flexible system with tie-ins than most software. So it's safe to say it added a lot of time up front that they were frustrated about (but still wanted). I just think if you're going to do testing, you can safely near double your estimation on time, which a lot of companies don't want to pay for.

To my second point, this can also be a pain. A product we use and sell as a bundled CMS for users has its own item system. Each and every "item" in this system is created and stored in the DB. So to test against this system (especially when doing custom work to expand the CMS that we normally don't do), you have to do a lot of item manipulation, creation, and deletion. The problem is that when testing it you (a) need access to the DB, which, since it's the web, is unavailable when running tests (requiring a wrapper); (b) need to avoid clogging up your database with test items (cleanup and more wrappers); and (c) need to mock items, which are not by any means simple objects (very complex, so more abstraction).

In the end, while I have found that testing has added structure to my code, between deadlines and mocking against this complex CMS I find it superfluous and near useless. The tests are meaningless because the logic that needs to be tested is most likely using LINQ or other embedded code that makes it somewhat untestable, against a database with complex items where no one has put in any work to add abstraction.

I think testing on a product you own, that isn't web based, would be a whole other ball game, but from my experience unit testing for web sites (especially large ones) seems to be more of a drawback than a useful style.

If a web application breaks during development, it's useful to be able to make the runtime exception trace and the HTML clearly visible. You don't want this to occur on a live webapp. Making a webapp produce debug output is better done by changing one line than by changing several. Or you could send it an extra parameter, but it's more secure if an attacker can't learn about what's going on in a live application by sending that switch a parameter.

I recently found a critical function in our code that was both highly complex and had no unit tests. Yes, it seemed to work, and there were no known issues. But it worried some of us. So I refactored it not just for testability, but for clarity as well. After writing a few dozen tests against the new code, I found a lot of defects. Scratching my head, I found the defects were inherited logic from the old code.

We then discovered our system had edge cases that were indeed being hit and resulting in subpar result quality.

I say if a piece of code smells, but seems to work, go ahead and do it.

It seems there are two issues here:

1. Should code be written for testability? Absolutely. You should spend far more time testing code than you do designing/implementing it. Moreover, it's generally a lot cheaper to overpower the hardware than to make the software more efficient. Also, much of what helps with 'testability' is simply good coding practice.

2. Should code be modified while testing? Again, yes. I've worked with some of the most rigorously tested code that ever sees deployment, and the unit testers modify the code all the time because it's necessary to test the edge cases. While they don't just willy-nilly rewrite code, there's nothing wrong with stubbing out a complex function with one that generates the necessary extreme returns. In fact, there are numerous situations where you're forced to do this because the sub-functions can't possibly generate the extreme returns.

Where I work, the problems with test code are really two different things:

One, the test code will require some sort of exotic resources (databases, mapped drives, etc) which don't emulate production enough to make the test code actually validate anything except that the test code works. Most of the time, it does not. The fact that testing fails in multiple places/ways has come to be accepted as normal and nobody pays it any mind. As a result, actual issues are glossed over in testing and debugged later when it is moved into prod.

Two, anyone who actually wants to do actual testing (and what the heck is wrong with you anyway?) is extremely tempted to do what we lovingly call "testing in production", where test code is put into prod and run, in theory in such a way that the client/users don't notice. This fails all the time: test jobs send emails to client+dog, or job code loads test data to a prod database (both of these tend to get the phones ringing), or the people doing this sort of testing in prod run whatever they are doing and then forget to take the test changes back out. Forgetting happens a LOT.

So later when we try to do whatever the normal prod task is, it breaks usually in a dramatic and completely weird way that nobody can sort out initially. Phones start ringing. Panic sets in. Somebody eventually remembers what the hell was going on and coughs up a copy of the correct prod system and we spend the day doing damage control. I say "day" but somehow these things always seem to blow up at 2AM EST and get dumped on the poor sleep-deprived on-call staffer.

The second still happens all the time, as the production files are not exactly locked down. You're not supposed to touch them, but everybody has full access and it's easy to make them do whatever you want. I suspect clients might freak out (TM) if they had any idea, but honestly, we keep passing SAS70 audits (or whatever the new name is) and the auditors are always happy about our access controls and procedures. Yep. Scares the socks off me too.

Ok, ... noob question here, but you and your buddy aren't debating whether the code should be tested or not, just how, correct?

I.e.: should code be tested by modifying it directly to accommodate testing, or by using an outer testing method that doesn't involve messing with the code?

I'm of the mindset that test cases should get built outside of the code. You have error handlers in your code, so that's a bit of a test in and of itself (but it's there to bypass or handle issues you've already tested for and found, not meant to deal with testing in and of itself).

This question is sort of like asking "which way is better: having each department have its own budget coordinator, or having the accounting department handle budget coordination for the entire company?" As others have said, it's really up to how the company in question wants to do it. Both ways have merit. If one way has been traditionally used at the company, then I would stick with that. Otherwise you could be pushing for something that would require tons of work just to implement. I doubt most companies would want to spend the time/money having some developers totally reconfigure the code so it can be tested a different way if it's currently testable in some way as-is. Sounds like a bunch of busy work based on a pissing contest over "Best Practices" (which are sometimes subjective).

My view on writing code for testability has changed over the last year. In the past I would have said to just leave it, as it's working. Now I say that it's worth refactoring, so long as you're careful not to introduce new bugs in the process.

With the example piece of code, I can see two ways of refactoring things:

(1) Move the body of the loop into its own method, taking a Type as an argument. This works well if there is nothing outside of the argument that needs to be updated.

(2) Move the loop itself into its own method, taking a collection of Type instances as an argument. This is needed if there's some state outside of the loop that needs to be updated. It also allows a hand-crafted set of Type instances to be passed in during tests.
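
A rough sketch of option (2), with invented names: production gathers the types itself, while a test passes a hand-crafted list.

using System;
using System.Collections.Generic;
using System.Reflection;

public static class TypeProcessor
{
    // Production entry point: gather every type from every loaded assembly.
    public static void ProcessLoadedTypes()
    {
        var types = new List<Type>();
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
        {
            types.AddRange(assembly.GetTypes());
        }
        ProcessTypes(types);
    }

    // The testable seam: any IEnumerable<Type> works, e.g. new[] { typeof(string) }.
    public static void ProcessTypes(IEnumerable<Type> types)
    {
        foreach (Type type in types)
        {
            Console.WriteLine(type.FullName); // placeholder for the real per-type work
        }
    }
}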

Where I work, we have a rule that code must be 80% covered by unit tests. The other 20% is code that's too trivial to test, i.e. would not pass code review if it contained errors.

But I think your experience working on large products is blinding you somewhat. Most projects in the world do not have anybody doing QA, let alone an entire team that needs somebody to manage it.

Coding practices need to be adjusted to suit the project you're working on.

Sorry, maybe the use of "dinky app" was too limiting. I agree that some projects might not be worthy of some sort of mass overhaul to focus on testability, and there are exceptions to every rule.

That said, I feel that we are well past the days when unit testing should not be implemented wherever there isn't a significant risk, and if there is that much risk then you have to wonder if there isn't something else wrong. Not that there always is.