What’s the right balance between code consistency and code improvement?

Striking a balance is always hard but sometimes you need to upgrade and improve.

This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.

Consistency vs. best practice: they are two competing interests any time a dev is working on legacy code. If LINQ hasn't been used previously, should it be used today? "To what extent are patterns part of code style," Robert Johnson asks, "and where should we draw the line between staying consistent and making improvements?"

Robert Johnson continues: "With the hypothetical LINQ example, perhaps this class doesn't contain it because my colleagues are unfamiliar with LINQ? If so, wouldn't my code be more maintainable for my fellow developers if I didn't use it?"

Find the rightest right

In a case like this, you have two programming "best practices" that are opposed to each other: code consistency is important, but so is choosing the best possible method to accomplish your task. There is no one correct answer to this dilemma; it depends on a couple of factors:

How beneficial is the "correct" way?

Sometimes the new and improved best practice will dramatically increase performance, eliminate bugs, be far easier to program, etc. In such a case, I would lean heavily toward using the new method. On the other hand, the "correct way" may be little more than syntactic sugar or an agreed idiomatic method of doing something that is not actually superior. In that case, code consistency is probably more important.

How big of a problem would inconsistency create?

How interconnected is the new code with legacy code? Is your new code part of a library? Does it create an object that gets passed to many parts of the program? In cases like these, consistency is very important. Using a new API, or a new way of doing things in general, might create subtly different results that break assumptions elsewhere in your program. On the other hand, if you are writing a fairly isolated piece of code, inconsistency is less likely to be a problem.

Also ask yourself: how large and how mature is your code base? How many developers need to understand it and work on it? Agreed-upon, consistent standards are much more important for larger projects.

Does the code need to run in older environments that may not support the latest features?

Based on the balance of these issues, you have to make the right choice about which route to take. I personally see little value in consistency for consistency's sake and would prefer to use the latest, best methods unless there is a significant cost to do so.

Of course, there is a third option: rewriting the existing code so that it uses the best methods and is consistent. There are times when this is necessary, but it comes with a high cost.

The latest and greatest is usually the greatest

Staying consistent has little value from my perspective; continuously making improvements is a must.

Your colleague's position really impedes innovation. The consistency argument gets you into a situation where you can use, for example, LINQ only if you migrate all code to use LINQ. And well, we don't have time for this, do we?

I'd rather have inconsistency where some code is still doing foreach over ArrayLists and other parts use LINQ on IEnumerable, instead of sticking to the oldest way of doing things until the end of time.
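To make that contrast concrete, here is a minimal sketch (the sample data is invented) of the two styles living side by side:

Code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class Contrast
{
    static void Main()
    {
        // The old way: an untyped ArrayList, a manual loop, and a cast.
        ArrayList oldNumbers = new ArrayList { 5, 12, 3, 20 };
        List<int> bigOld = new List<int>();
        foreach (object o in oldNumbers)
        {
            int n = (int)o;
            if (n > 10)
                bigOld.Add(n);
        }

        // The new way: a typed IEnumerable<int> queried with LINQ.
        IEnumerable<int> numbers = new List<int> { 5, 12, 3, 20 };
        List<int> big = numbers.Where(n => n > 10).ToList();

        Console.WriteLine(string.Join(", ", big)); // prints: 12, 20
    }
}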

Gotta keep up

Unfamiliar with LINQ? If so, wouldn't my code be more maintainable for my fellow developers if I didn't use it?

The C# language is still evolving. If people didn't learn the changes from C# 1, they would be missing out on:

Generics

Partials

Anonymous methods

Iterators

Nullable types

Auto-properties

Anonymous types

Extension methods

LINQ

Lambdas

Asynchronous methods

This is just a small selection of common features from the Wikipedia article on C#. The point I'm making is that if the developers don't learn, the codebase will stay in a vacuum. One would hope that your developers do continually improve and that the code base evolves, supported by a complete test suite. Do you remember how bad it was in .NET 1 to manually implement properties?
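To make the auto-properties point concrete, here is the .NET 1-era boilerplate next to the modern one-liner (the class and property names are invented for illustration):

Code:

// C# 1 style: declare a backing field and write the property by hand.
public class OrderV1
{
    private string _customer;

    public string Customer
    {
        get { return _customer; }
        set { _customer = value; }
    }
}

// C# 3 auto-property: the compiler generates the backing field for you.
public class OrderV3
{
    public string Customer { get; set; }
}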

LINQ makes life easier. Use it where you can. You might motivate your team members.

So, to what extent are patterns part of code style, and where should we draw the line between staying consistent and making improvements?

Improvements are gradual. To me, it makes no sense to keep old-style code, which is less readable and potentially less maintainable. Your team members should at least be able to work out what's going on, even if they can't write it.

33 Reader Comments

The decision on whether or not to refactor depends heavily on whether the code is under test and the extent of test coverage. Refactoring untested code can be risky, but refactoring under test is productive and fun.

Ensuring that the developers working on the project understand the code is critical. This can be done through training: either through courses or through presentations and tutorials given at team meetings by the developers who know the new techniques.

It is also important to have coding guidelines and best practices in place that can be referred to and used in things like code reviews.

If you want the code to be consistent, you need to have developers go through the code, review it and update it to meet the new guidelines. This needs company/management buy-in.

With coding guidelines/best practices, I like to document the rationale behind the guidelines. For example:

-----
Guideline: Prefer to use expressions directly instead of within if/else statements.

Instead of code like:

Code:

if (x == y)
    return true;
else
    return false;

prefer code like:

Code:

return x == y;

Rationale: The expression evaluates to true/false accordingly, so the if statement is redundant. This adds unnecessary complexity to the code, which can make it difficult to follow.

Exception: If the expression is complex, it is ok to use an if statement for part of the expression, or (alternatively) place the sub-expression in a temporary variable.
-----

The same can be applied to new/advanced programming techniques such as Linq; for example, a guideline can note that you do not want to use Linq everywhere. Also, there can be common pitfalls in using these techniques in different situations (e.g. there could be a performance issue with a particular usage, which you can note in your coding guidelines).
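As an example of the kind of pitfall such a guideline might document (a sketch, not from the original comment): LINQ queries use deferred execution, so a query that is enumerated twice does all of its work twice.

Code:

using System.Linq;

class DeferredExecutionPitfall
{
    static void Main()
    {
        // This only defines the query; nothing runs yet.
        var query = Enumerable.Range(1, 1000000).Where(n => n % 7 == 0);

        // Each call below re-enumerates the entire range from scratch.
        int count = query.Count();
        int max = query.Max();

        // Guideline-style fix: materialise the results once, then reuse them.
        var results = query.ToList();
        count = results.Count;
        max = results.Max();
    }
}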

If you are going to introduce new concepts, it would be best to have a small team looking into them. It will be their responsibility to look at the toolchain support for these (compiler versions, etc.) and at the migration path (supported platforms, code breakage from upgrading, etc.). They should also look at this at a wider level: how it interacts with other features being introduced (e.g. does Linq work well with Code Contracts?).

Code cowboys are often prone to deem old code "stale" or "boring" or even "inefficient". Yet the old code generally has great qualities: it has been proven, it's solid, it's well known. It doesn't mean that you shouldn't use new technologies, but do it wisely.

First thing to avoid is having cowboys silently include their own new technique in their own part of the code. Each new technology should be assessed: what is it? how does it work? how does it fit into legacy code? does it improve productivity? security? how hard is it to maintain? is it perennial? Then you should define a plan for your team to use the technology: first use it for one small feature in the next release, and evaluate it. Make the whole team participate. If it's good and accepted, you should train your team in this new technology. Then define where and how it will be used in the next releases, when and how each part of the code could be refactored, etc.

It always blows my mind coming from day-to-day work in Ruby, Python, or Perl and then looking at how C# (and even more so Java) developers handle new things. I see this all the time: new technologies are so rarely adopted into codebases unless something forces their hand. LINQ has been in the C# language for 5 1/2 years now. There are a lot of really cool things that Microsoft keeps adding to the language that go mostly unused because of this huge resistance to anything new, ever.

In other words, you're generally not better off in any sizable established code base just changing something 'because'. If you can't articulate a strong technical reason (beyond a simple preference for new things) then you probably don't need to change things, and you probably should continue to use existing techniques.

Honestly, a lot of the 'new technology' in languages, libraries, and tools is not all it's cracked up to be. In some cases there are real, genuine, clear advantages. In many others the advantages are at best murky and situational. Be especially wary of changes which promise to reduce a bunch of code to some smaller amount of code. This can only happen via one process: abstraction. Abstractions are great, up to a point, but they can also hamstring you, often in ways which aren't at all clear at the time. Lines of code aren't the issue; complexity is. If progress in software engineering technology were measured in compactness of code, we'd all be using APL and software engineering would have reached its apex in 1973.

It's fine if you have 'more code' (say, in the LINQ vs. some database API case). LINQ might provide real gains, and in new code, all things being equal, that's great. OTOH, I can show you lots of code I've written that gets right down to the lowest database API level possible, for perfectly valid reasons of performance, correctness, and even maintainability.

Isn't the question really "should I buy into Microsoft's churn?" This seems to be a uniquely MS problem. They seem to think your skills should have a half-life of about six months. This is the price you pay for buying into MS's software development ecosystem, which is seductively easy with its chipper IDE and user-friendly languages. The dark side is that MS is constantly changing everything, leaving your skills obsolete and your code in legacy limbo. (How's that investment in VB6, MFC, ATL, and COM paying off?) I don't know why they do it; I guess part of it is to make sure no developers have long-term skills and command senior salaries. If a technology didn't exist five years ago, you can't have much experience with it.

The database stuff is a good example. If you made a list of all their database access methods, from ODBC through the ADO/DAO days to ADO.NET to LINQ to EF, you'd go insane. (Not to mention the ones I've forgotten.) Last I heard, they're recommending ODBC again.

It always blows my mind coming from day-to-day work in Ruby, Python, or Perl and then looking at how C# (and even more so Java) developers handle new things. I see this all the time: new technologies are so rarely adopted into codebases unless something forces their hand. LINQ has been in the C# language for 5 1/2 years now. There are a lot of really cool things that Microsoft keeps adding to the language that go mostly unused because of this huge resistance to anything new, ever.

My first job had nothing to do with programming; it was in finance. And I think I caught myself being what Steve Jobs would call a "bozo". Bottom line: I didn't give the effort that the company deserved. Maybe it had something to do with the job being a temporary position; maybe it had to do with not liking the work environment. But those are not real excuses for poor performance.

I think if you are not using LINQ by this point, you work in a company full of B-players. Get the hell out is my advice. Not every company is like that, and the turnover rate at the big tech companies certainly shows they have no patience for that kind of worker either. You don't innovate by not staying on top of the latest or even older technologies. I work alone now as a consultant, and with social media it's so easy to surround myself with a lot of smart people; it really helps to keep me motivated.

Should Linq be avoided in C# for issues of coding consistency? No. I don't think that's a valid reason to disallow use of Linq.

Should use of Linq be discouraged because it's a bad language feature? I think a stronger case could be made for that.

Just because a feature has been added to a language doesn't mean it's a good feature.

An earlier post pointed out all the great features that C# has accumulated over the years: generics, anonymous methods, expressions, iterators, nullable types, var declarations, extension methods (ok, maybe not that one), lambdas, async/await, array initializers, and others.

My guess is that nobody has any reservations at all about using most of those features. Because they are great features, with immediate and obvious application.

Not so true with respect to Linq. Let's be frank. It's not one of the greatest features that's ever been added to C#. And I'm sure that's the crux of the issue here.

As originally intended, the feature showed some promise, as a newer, better path to databases. In theory. In practice, that hasn't entirely panned out. And Microsoft's various and far too numerous database strategies have moved on to better and more useful things. As database technologies go, Linq has been cursed with distinctly limited functionality, and a somewhat awkward relationship with the more robust and powerful database technologies currently rolling out onto .NET.

As a coding technique for replacing C# code... it's a really, truly horrible language feature. It needs to be said. Inelegant, to say the least. And cursed with awkward and frequently encountered limitations. Not to say that it can't occasionally be put to good use. It may well be that there are coding paradigms that leverage Linq in useful ways. I can't say that I have seen any to date that I really care for.

Should use of Linq be banned as a matter of best practice? Of course not. But it does need to be said: Linq is NOT a good language feature. And -- I think -- there really should be a compelling reason to use it, if you do use it. But, all in all, I think it's just sufficient to say: not a good language feature. And leave it at that. Anything much stronger than that may discourage creative application of the feature in cases where it is justified.

The good news: we got Expressions out of Linq, which (along with the significant improvements in 4.0) really are a great piece of technology.

LINQ doesn't have anything directly to do with database access. The expression tree generated by a LINQ query can be transformed into a SQL query, which has been used for LINQ-to-SQL and Entity Framework, but it doesn't have anything to do with databases per se, other than being somewhat inspired by SQL syntax.

LINQ is just a language feature for collection comprehensions and in a lot of cases more readable than the lambda expression equivalents. It's a matter of preference, but I prefer a LINQ query over lambdas whenever what I need to do is somewhat complex, anything involving anything but the basic Where and Select.
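To illustrate that preference (a small sketch; the word list is invented), here is the same query in both styles:

Code:

using System;
using System.Collections.Generic;
using System.Linq;

class QueryVsLambda
{
    static void Main()
    {
        var words = new List<string> { "apple", "fig", "banana", "kiwi" };

        // LINQ query syntax...
        var shortWords =
            from w in words
            where w.Length <= 4
            orderby w
            select w.ToUpper();

        // ...and the equivalent method/lambda syntax the compiler turns it into.
        var shortWords2 = words
            .Where(w => w.Length <= 4)
            .OrderBy(w => w)
            .Select(w => w.ToUpper());

        Console.WriteLine(string.Join(", ", shortWords)); // FIG, KIWI
    }
}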

Guideline: Prefer to use expressions directly instead of within if/else statements.

Instead of code like:

Code:

if (x == y)
    return true;
else
    return false;

prefer code like:

Code:

return x == y;

Rationale: The expression evaluates to true/false accordingly, so the if statement is redundant. This adds unnecessary complexity to the code, which can make it difficult to follow.

This is terrible advice. The compiler transforms it to the same statement, so it doesn't introduce any additional complexity. What it does, however, is make it more difficult to debug. With the original way, you can more clearly see which route the if statement is taking. Every programmer learns simple "if" statements the original way, not the latter single-line way.

What it does, however, is make it more difficult to debug. With the original way, you can more clearly see which route the if statement is taking. Every programmer learns simple "if" statements the original way, not the latter single-line way.

Why is returning the result of == directly more difficult to debug than putting it through an if/else statement (genuine question)? The if/else is entirely redundant.

This is terrible advice. The compiler transforms it to the same statement, so it doesn't introduce any additional complexity.

What about the developers trying to maintain the code -- especially if you use the same if/else pattern to assign a boolean variable, or to pass to a function taking a boolean argument (e.g. set_visible)?

Quote:

What it does, however, is make it more difficult to debug. With the original way, you can more clearly see which route the if statement is taking.

Why not just step out of the function call in the debugger and look at the return value? How much harder is that? If you want a breakpoint, put it on the return statement.

Quote:

Every programmer learns simple "if" statements the original way, not the latter single-line way.

Programmers learn simple if statements to do different things, not:

Code:

bool ret = is_visible(object);
if (ret)
    return true;
else
    return false;

You are making more work for yourself and making it harder to follow. Consider if the above used:

Code:

bool ret = is_visible(object);
if (ret)
    return false;
else
    return true;

instead? Can you easily understand what that is doing, compared to:

Code:

return !is_visible(object);

Given that C# has lambda syntax, and so does C++11, are you going to insist on writing function-object code by hand, given that this is what the compiler generates for you anyway? Or iterating over an IEnumerable or iterator by hand instead of using a foreach/range-based for loop? Or try/catch/finally constructs instead of using statements? [1]

[1] If you are using compiler versions that don't support these features, or have an old codebase, sure -- you will do this stuff the long way.
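For the using-statement example in particular, the sugar and the long-hand form it replaces look like this (a sketch using StreamReader for illustration):

Code:

using System.IO;

class UsingVersusTryFinally
{
    static string ReadOldStyle(string path)
    {
        // Pre-'using' pattern: an explicit try/finally guarantees Dispose runs.
        StreamReader reader = new StreamReader(path);
        try
        {
            return reader.ReadToEnd();
        }
        finally
        {
            reader.Dispose();
        }
    }

    static string ReadNewStyle(string path)
    {
        // The using statement compiles down to the same try/finally.
        using (StreamReader reader = new StreamReader(path))
        {
            return reader.ReadToEnd();
        }
    }
}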

New language features and changes are usually created because there is a real problem that needs to be solved.

Find out what that problem is, find out if you have the problem, find out if the new stuff solves it adequately, wait a very long time*, and if all those boxes are ticked then yes - you should switch to it.

The "wait a very long time" part is something you need to decide individually for each project. The only answer to that is experience. Pull out your crystal ball and predict what the future holds. Will it be buggy? Will something better replace it soon? Will it be abandonned altogether like silverlight?

For example, when Apple added garbage collection to Objective-C, I'm glad I didn't start using it right away. Because 12 months later they deprecated it, and 12 months after that they released a completely new and incompatible solution to memory management (called ARC).

The decision on whether or not to refactor depends heavily on whether the code is under test and the extent of test coverage. Refactoring untested code can be risky, but refactoring under test is productive and fun.

I take it you work with dynamically typed languages? Static typing makes refactoring much easier and safer, and doesn't have the same reliance on unit tests.

The decision on whether or not to refactor depends heavily on whether the code is under test and the extent of test coverage. Refactoring untested code can be risky, but refactoring under test is productive and fun.

I take it you work with dynamically typed languages? Static typing makes refactoring much easier and safer, and doesn't have the same reliance on unit tests.

Unit tests are invaluable even for statically typed languages. A while ago I replaced all the iterator-based for loops in my C++ code with C++11 range-based for loops. The tests picked up the single place where I used reverse iterators.

And even for languages like C# and Java, unit tests are useful (e.g. to detect |a == b| vs |a.compare(b)| usage issues).
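A C# analogue of that equality trap (in Java the same trap involves == versus equals), which is exactly the sort of thing a unit test catches:

Code:

using System;

class EqualityPitfall
{
    static void Main()
    {
        // A non-interned string with the same contents as the literal.
        object a = new string(new[] { 'h', 'i' });
        object b = "hi";

        // Reference comparison: false, these are two different objects.
        Console.WriteLine(a == b);

        // Value comparison: true, the contents match.
        Console.WriteLine(a.Equals(b));
    }
}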

The "wait a very long time" part is something you need to decide individually for each project. The only answer to that is experience. Pull out your crystal ball and predict what the future holds. Will it be buggy? Will something better replace it soon? Will it be abandonned altogether like silverlight?

The decision on whether or not to refactor depends heavily on whether the code is under test and the extent of test coverage. Refactoring untested code can be risky, but refactoring under test is productive and fun.

I take it you work with dynamically typed languages? Static typing makes refactoring much easier and safer, and doesn't have the same reliance on unit tests.

Unit tests are invaluable even for statically typed languages. A while ago I replaced all the iterator-based for loops in my C++ code with C++11 range-based for loops. The tests picked up the single place where I used reverse iterators.

And even for languages like C# and Java, unit tests are useful (e.g. to detect |a == b| vs |a.compare(b)| usage issues).

LINQ doesn't have anything directly to do with database access. The expression tree generated by a LINQ query can be transformed into a SQL query, which has been used for LINQ-to-SQL and Entity Framework, but it doesn't have anything to do with databases per se, other than being somewhat inspired by SQL syntax.

LINQ is just a language feature for collection comprehensions and in a lot of cases more readable than the lambda expression equivalents. It's a matter of preference, but I prefer a LINQ query over lambdas whenever what I need to do is somewhat complex, anything involving anything but the basic Where and Select.

I completely forgot that LINQ can be used for databases. I use it in many other situations where it makes the code a lot cleaner.

I have personal experience with exactly the article's subject. Years ago I was working as tech lead and architect on a huge (read: hundreds of use cases), long-term project in C# - originally .NET 1.1, years later migrated to .NET 3.5. All of the project's persistence was based on ADO, and we had about 80% or more of the system already built this way when the time came to upgrade the framework. Some people wanted to take the chance and adopt LINQ for ad hoc queries in our business classes, but I voted against it. Why? Well, I can sum up my reasons this way:

a) we had a relatively high turnover in our development team, so having code written in a uniform way that takes less time to learn and to write is more important than marginal gains in performance or system resource usage;

b) not every developer was familiar enough with the technology - this usually can be managed, but as I said, we had a high turnover;

c) in other projects, I saw inexperienced people trying to avoid database access with the combined use of "anemic" DAOs and relatively complex LINQ queries over huge collections, leading to performance problems and high CPU and memory usage - certainly not the best practice (see the sketch below);

d) finally, during this project's lifetime, Microsoft MVPs tried to convince us to adopt several "hot new persistence solutions" such as LINQ and EF. And every time they came with a new one, they said how it was far better than the previous one, which used to be the greatest thing since sliced bread until then.
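A sketch of the trap in point (c), with invented type and method names: filtering an in-memory collection loads everything first, whereas an IQueryable source lets the provider translate the same expression into SQL.

Code:

using System.Collections.Generic;
using System.Linq;

class Order
{
    public int CustomerId;
    public decimal Total;
}

class OrderQueries
{
    // The anti-pattern: the DAO has already loaded every row, and LINQ
    // filters in memory, paying for the whole table in CPU and RAM.
    static decimal TotalForCustomer(IEnumerable<Order> allOrders, int customerId)
    {
        return allOrders.Where(o => o.CustomerId == customerId)
                        .Sum(o => o.Total);
    }

    // With an IQueryable-backed source (LINQ to SQL, EF), the same expression
    // becomes a WHERE clause and only matching rows leave the database.
    static decimal TotalForCustomer(IQueryable<Order> orders, int customerId)
    {
        return orders.Where(o => o.CustomerId == customerId)
                     .Sum(o => o.Total);
    }
}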

So, in a nutshell: if your project is not too big and you have a problem not well solved and you have the resources and time to experiment, by all means go for it. Otherwise, stick with what you know works well enough.

As a developer, I feel a new feature, if it has become mature and widely adopted in the industry, should be available to developers who want it. That doesn't mean that every new feature available in the latest release of the language needs to be used. But if a developer feels a new feature will be beneficial to the codebase - making the code cleaner, simpler, and better - he/she should have the freedom to use it.

Progress is never easy, and there will be folks who oppose any progress. I work with some good people who just won't upgrade their code... even after 5 years. There is always an excuse: no time, no budget, not a part of the requirements, etc. My experience is that it is mostly folks who are not open to change who always use "Coding Guidelines" as an excuse to stop progress.

Since reading Martin Fowler's book on refactoring I have become a big believer in constantly refactoring code. With each iteration, the code seems to become cleaner and better than the last time. Now, I am of the school of thought that every developer must refactor mercilessly - and that every check-in must leave the code at least a little bit better than when you found it.

I can honestly say that I get a bit more satisfaction from my work since reading the book on refactoring. And I hope I have become a better developer....

I would personally make my decision based on how much time I had to add new code. If the existing code was doing something silly, I wouldn't write new code that continued to do silly things. That would ruin my enjoyment of programming greatly. I would probably be cursing the original coder(s) the entire time.

Hopefully, if someone does something silly, they leave a comment why the silliness is needed. This way other developers will be able to determine whether continuing with the silliness is a good choice or not. A developer should be able to recognize when something silly is being done.

Of course, that's not to say that LINQ is silly. I was trying to address the issue more broadly. Specifically for LINQ, I think new code should use it even if the new developer is unfamiliar. It means that the new developer will be more tuned into the project s/he's working on, which I think is important. It means that whenever the new developer dives into older code, they'll be more efficient at determining what the code is doing.

But of course there are many cases when discarding consistency can be valuable. Guidelines for determining that would be quite difficult to create and probably very language dependent.

The decision on whether or not to refactor depends heavily on whether the code is under test and the extent of test coverage. Refactoring untested code can be risky, but refactoring under test is productive and fun.

I take it you work with dynamically typed languages? Static typing makes refactoring much easier and safer, and doesn't have the same reliance on unit tests.

Unit tests are invaluable even for statically typed languages. A while ago I replaced all the iterator-based for loops in my C++ code with C++11 range-based for loops. The tests picked up the single place where I used reverse iterators.

And even for languages like C# and Java, unit tests are useful (e.g. to detect |a == b| vs |a.compare(b)| usage issues).

I won't usually disagree with the "test first" mantra, especially with regards to a critical piece of infrastructure. However, I think it becomes easier said than done in situations like the one described.

If I feel the need to do a big refactoring on a piece of code, it almost goes without saying that the code was not written with testing in mind. When I rewrite it, I will make it testable if the overall architecture allows for it, but the chances of me having to refactor again are relatively slim compared to the remaining legacy mess.

The decision on whether or not to refactor depends heavily on whether the code is under test and the extent of test coverage. Refactoring untested code can be risky, but refactoring under test is productive and fun.

I take it you work with dynamically typed languages? Static typing makes refactoring much easier and safer, and doesn't have the same reliance on unit tests.

Unit tests are invaluable even for statically typed languages. A while ago I replaced all the iterator-based for loops in my C++ code with C++11 range-based for loops. The tests picked up the single place where I used reverse iterators.

And even for languages like C# and Java, unit tests are useful (e.g. to detect |a == b| vs |a.compare(b)| usage issues).

I won't usually disagree with the "test first" mantra, especially with regards to a critical piece of infrastructure. However, I think it becomes easier said than done in situations like the one described.

Having code under test and applying a "test first" approach are independent of each other. You can have the former without the latter.

Quote:

If I feel the need to do a big refactoring on a piece of code, it almost goes without saying that the code was not written with testing in mind.

Not necessarily.

It could be that you are changing the architecture or implementation to support new features -- for example, I refactored a piece of code to switch MIME detection from using libmagic to using the shared-mime-info database.

I also mentioned changing iterator for loops to C++11 range-based for loops.

It could also be that you are using a different data structure/algorithm, or other performance optimisation (e.g. switching an image decoder to using hand-crafted assembly).

Quote:

When I rewrite it, I will make it testable if the overall architecture allows for it, but the chances of me having to refactor again are relatively slim compared to the remaining legacy mess.

Having code under test is not just about ensuring the code still works after a refactoring; it is about ensuring that the code still works when adding new features or fixing bugs. By adding tests for each bug fixed, you ensure you have a set of regression tests to prevent that bug from re-appearing.

If you have a large codebase that isn't under test, it is harder to introduce tests (which is where the "test first" movement comes from -- so that code always has tests associated with it). You need to start somewhere, and the first couple of code changes may be without the tests to back them up, to get the code into a state where it can be tested. You can also start testing the classes/functions that do not have any dependencies and build up from there.

What it does, however, is make it more difficult to debug. With the original way, you can more clearly see which route the if statement is taking. Every programmer learns simple "if" statements the original way, not the latter single-line way.

Why is returning the result of == directly more difficult to debug than putting it through an if/else statement (genuine question)?

He means it is easier to debug because you can set a breakpoint on one of the lines when it's expanded out so you can catch it if you're interested in one of the cases but not both. That's generally only true if you use a crappy debugger without conditional breakpoints, and I hate to make the code crappier 'just in case' it has to be debugged.

However, I have to admit that I tend to prefer debugging-friendly code that does one (logical) step per line, as it is also in many cases more readable. Debugging-friendliness is also why I stay away from things like Boost, which add so many wrapper classes and functions that debugging becomes a nightmare. Debugging is a huge amount of development time, and anything that makes it slower has to have big time savings in other areas.

Code cowboys are often prone to deem old code "stale" or "boring" or even "inefficient". Yet the old code generally has great qualities: it has been proven, it's solid, it's well known. It doesn't mean that you shouldn't use new technologies, but do it wisely.

First thing to avoid is having cowboys silently include their own new technique in their own part of the code. Each new technology should be assessed: what is it? how does it work? how does it fit into legacy code? does it improve productivity? security? how hard is it to maintain? is it perennial? Then you should define a plan for your team to use the technology: first use it for one small feature in the next release, and evaluate it. Make the whole team participate. If it's good and accepted, you should train your team in this new technology. Then define where and how it will be used in the next releases, when and how each part of the code could be refactored, etc.

TL;DR: assess, train your team, plan ahead

Any code that passes the software tests is "proven." Unless you were foolish enough not to establish full test coverage, in which case even the old code is not "proven"; it's just trusted by experience.

All software requirements should be enforced by a test plan, and so long as you adhere to that and pass it, then your code is good. You can even test code maintainability. The most popular approach to testing that is peer review. If you have to explain it in detail instead of allowing comments, code, and documentation to guide the other engineer, then that is a sign that your code could be more maintainable. If you cannot explain it at all, then you've failed. As another test, you can apply the SoC (Separation of Concerns), DIP (Dependency Inversion Principle), and Open/Closed principles to your code, even if it's procedural. If all of these principles pass, then your code is maintainable.

Make every goal a requirement and test it consistently. If you do this, you can change anything and everything safely.

Code cowboys are often prone to deem old code "stale" or "boring" or even "inefficient". Yet the old code generally has great qualities: it has been proven, it's solid, it's well known. It doesn't mean that you shouldn't use new technologies, but do it wisely.

First thing to avoid is having cowboys silently include their own new technique in their own part of the code. Each new technology should be assessed: what is it? how does it work? how does it fit into legacy code? does it improve productivity? security? how hard is it to maintain? is it perennial? Then you should define a plan for your team to use the technology: first use it for one small feature in the next release, and evaluate it. Make the whole team participate. If it's good and accepted, you should train your team in this new technology. Then define where and how it will be used in the next releases, when and how each part of the code could be refactored, etc.

TL;DR: assess, train your team, plan ahead

Any code that passes the software tests is "proven." Unless you were foolish enough not to establish full test coverage, in which case even the old code is not "proven"; it's just trusted by experience.

All software requirements should be enforced by a test plan, and so long as you adhere to that and pass it, then your code is good. You can even test code maintainability. The most popular approach to testing that is peer review. If you have to explain it in detail instead of allowing comments, code, and documentation to guide the other engineer, then that is a sign that your code could be more maintainable. If you cannot explain it at all, then you've failed. As another test, you can apply the SoC (Separation of Concerns), DIP (Dependency Inversion Principle), and Open/Closed principles to your code, even if it's procedural. If all of these principles pass, then your code is maintainable.

Make every goal a requirement and test it consistently. If you do this, you can change anything and everything safely.

For new stuff, this is all true; for legacy code, it doesn't truly apply, because those software requirements might have been ignored by users of a function which deviated from the prescribed behavior long enough to canonize what would have been a bug. From there, you have the option to rewrite the code that uses that function incorrectly, or to declare the bug to be a feature. The first is expensive, and the latter degrades the design of the software to the point where testing doesn't really help in finding bugs. And so it is that many developers, being cheap, go the second route and don't bother with testing that won't solve problems.

I first started picking up LINQ because ReSharper offered it as a suggestion, and I of course want to know what the heck it is I'm telling the system to do. As I studied the suggested refactorings, I learned about many of the other aspects of C# (specifically anonymous methods, lambda expressions, the Func/Action delegate types, extension methods, etc.). This was certainly good for me.
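For readers who haven't met those features, a compact sketch of all four (the names here are invented for illustration):

Code:

using System;

static class StringExtensions
{
    // An extension method: callable as if it were declared on string itself.
    public static bool IsBlank(this string s)
    {
        return string.IsNullOrWhiteSpace(s);
    }
}

class FeatureTour
{
    static void Main()
    {
        // Func: a delegate that returns a value, here built from a lambda.
        Func<int, int> square = x => x * x;

        // Action: a delegate that returns void, here an anonymous method.
        Action<string> log = delegate(string msg) { Console.WriteLine(msg); };

        log("5 squared is " + square(5));
        log("'   ' is blank: " + "   ".IsBlank()); // extension method call
    }
}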

The shop I was working for at the time, however, hadn't had the wherewithal to do similar studies. When I asked a senior developer for the application I was supporting (mark you, I was working on front-end automation tests with Ranorex at the time), he was actually unaware of LINQ, lambdas, or any of the cool stuff I had found out about. So, I put together a lunch 'n' learn presentation in my free time and presented it to my coworkers. A few good questions and a good discussion later, I found it was good for my organization too; after building some code benchmarks at my own manager's suggestion, I found the speed was comparable to the 'classic' for/foreach way of doing things.

That long parable is to say: code consistency and code improvement don't have to be enemies. Code improvements are made to make life easier for programmers, but as with anything in computer science they always come with trade-offs.

For more complex each-item operations, I would not use LINQ because the queries get rather nasty, but for simple manipulations or obtaining an IEnumerable subset of some bigger IEnumerable, LINQ is a good, efficient tool for the job. My two cents is now in.

A lot of times you will find all of your LINQ code inside DAO classes. If you make changes to how you handle database calls, you don't need to make changes to the code that uses the DAO to get the data. This also means that the code consuming the DAO doesn't care whether you are using LINQ or standard ODBC. Your developers don't need to know how to program in LINQ in order to add or change functionality in your app.
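A minimal sketch of that separation (all type names here are invented): callers depend on the DAO interface, so the LINQ inside one implementation is invisible to them.

Code:

using System.Collections.Generic;
using System.Linq;

class Customer
{
    public string Name;
    public bool Active;
}

interface ICustomerDao
{
    IList<Customer> GetActiveCustomers();
}

// One implementation happens to use LINQ internally...
class LinqCustomerDao : ICustomerDao
{
    private readonly List<Customer> _table = new List<Customer>();

    public IList<Customer> GetActiveCustomers()
    {
        return _table.Where(c => c.Active).ToList();
    }
}

// ...but callers see only the interface, so an ADO/ODBC-based
// implementation could be swapped in without touching this class.
class ReportService
{
    private readonly ICustomerDao _dao;

    public ReportService(ICustomerDao dao) { _dao = dao; }

    public int CountActive()
    {
        return _dao.GetActiveCustomers().Count;
    }
}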

The downside to this is the possibility of having fragmented code written in several different programming styles. So it's important to maintain some consistency in that you don't use too many different styles or cause performance barriers. Don't have 2 devs using LINQ and 2 more using ODBC and working on the same release. Have all developers use the same style of programming for each release. Keep it all documented.

It always blows my mind coming from day-to-day work in Ruby, Python, or Perl and then looking at how C# (and even more so Java) developers handle new things. I see this all the time: new technologies are so rarely adopted into codebases unless something forces their hand. LINQ has been in the C# language for 5 1/2 years now. There are a lot of really cool things that Microsoft keeps adding to the language that go mostly unused because of this huge resistance to anything new, ever.

And Python 3 has been out for the better part of 5 years now, yet most of the Python community still shuns it. Let's keep religious adherence to languages out of this.

I think if you are not using LINQ by this point, you work in a company full of B-players. Get the hell out is my advice.

LINQ is the Microsoft brand for their functional collections library + syntax. Other languages and platforms had that exact same functionality *way* before LINQ.

For theoretically advanced programming and language syntax, C# isn't even a serious contender. Haskell, Scala, and F# are better choices for theoretically advanced program expression.

Lastly, a lot of amazing work isn't done by language fanboys who obsess over new features. A lot of amazing apps are still written in C or other crusty languages, because what they do is more important than how many language features the source code uses.