Size does matter: when devs opt for longer release cycles

We started development on our tech debt elimination project a few days ago. There's so much complicated work to be done & it involves so much testing effort that there's no option but to release everything all at once in T months (T > half a year).

It makes even more sense if you take into consideration that some of this stuff involves changing the actual technology beneath - we're substituting code in language/platform X with code in language/platform Y.

This was supposed to be the best, safest, most stable & least risky approach.
IMHO - it's rather the biggest misunderstanding or the most spectacular bullshit :)

Code that requires full, manual regression testing of a whole module / application / system is a burden, a horn of plenty of technical debt - you can accept that & do nothing about it, or just start fixing things. There's never a better time for that than "now" - use every refactoring opportunity you can to tidy things up.

... but business people don't want to test it until it's 100% done ...

Well, it's hard to blame them - they will put in some effort only if you convince them that there's something in it for them: that's one of the key reasons why you should always aim to add business value in every piece of code released. The other option is to automate comparative testing of the old & new pieces of code (here I refer to moving from X to Y), but in some cases that may be really tricky.
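Comparative testing of the kind mentioned above can be as simple as feeding both implementations the same inputs and diffing the results. A minimal sketch, assuming a hypothetical pricing rule being rewritten (all function names here are illustrative, not from any real codebase):

```python
import random

# Hypothetical old (language X, wrapped for the test) & new (language Y)
# implementations of the same business rule.
def legacy_price(quantity: int, unit_price: float) -> float:
    # old implementation, treated as the reference behaviour
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total

def new_price(quantity: int, unit_price: float) -> float:
    # rewritten implementation that must match the legacy one
    discount = 0.9 if quantity >= 10 else 1.0
    return quantity * unit_price * discount

def compare(samples: int = 1000) -> list:
    """Run both implementations on the same random inputs, collect mismatches."""
    mismatches = []
    for _ in range(samples):
        q = random.randint(1, 50)
        p = round(random.uniform(1.0, 100.0), 2)
        old, new = legacy_price(q, p), new_price(q, p)
        if abs(old - new) > 1e-9:
            mismatches.append((q, p, old, new))
    return mismatches

print(len(compare()))  # an empty mismatch list means the rewrite matches legacy behaviour
```

The tricky part in practice is usually not the comparison itself but side effects & non-determinism in the legacy code - those have to be isolated before such a harness gives trustworthy answers.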

... but it may be unstable & unreliable until we move everything ...

That's just more visible proof that:

you're not putting enough intellectual effort into the proper design of the target architecture

the level of coupling in the to-be application architecture is beyond the acceptable threshold

your transition approach is faulty

... we're doing it step by step; first the whole codebase from point A to point B, then the whole codebase from point B to point C, ... and we're there, voila!

You need to have a really good excuse for that model. Why? Because:

the intermediate steps usually provide no value of their own, being nothing but steps towards the target

your whole codebase is "not ready" until the very end: you get no feedback on your overall approach until then (does it scale? is it error-resilient enough? etc.), and there's no guarantee you won't need additional steps

in every step you modify the whole codebase - everything - and if you're lacking automated regression testing (and as you're dealing with legacy, most likely you are), that's a waste

See? Cheap excuses.
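The alternative to those whole-codebase steps is to migrate one module at a time & ship each one as soon as it's done. A hedged sketch of that idea, routing per module between old & new code (the module names and handlers are purely illustrative):

```python
# Modules that have already been rewritten & released - the set grows
# with every small release instead of one big-bang cutover.
MIGRATED_MODULES = {"invoicing"}

def legacy_handler(module: str, payload: dict) -> str:
    # stands in for the old code path (language X)
    return f"legacy[{module}] processed {payload['id']}"

def new_handler(module: str, payload: dict) -> str:
    # stands in for the rewritten code path (language Y)
    return f"new[{module}] processed {payload['id']}"

def dispatch(module: str, payload: dict) -> str:
    """Route a request to the rewritten module if it has shipped,
    otherwise fall back to the legacy code path."""
    handler = new_handler if module in MIGRATED_MODULES else legacy_handler
    return handler(module, payload)

print(dispatch("invoicing", {"id": 1}))  # served by the new code
print(dispatch("reporting", {"id": 2}))  # still served by the legacy code
```

Every module moved this way is feedback on the overall approach - exactly what the step-by-step model withholds until the very end.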

Longer release cycles may sometimes feel appropriate, but that's usually because of mental laziness, very poor quality of the initial codebase, an overall lack of ownership of the codebase, or a limited vision of the approach ("we'll work something out once we've already started").

Shortening the release cycle may require you to perform additional activities (& increase the amount of work), but it will pay off very soon - such an approach will force you to put effort into refactoring the architecture towards lower coupling & automated component testability, and that will result in:

Lowering the failure risk

Being able to deliver the new functionality more frequently

Decreasing the range of regression / acceptance tests (& finding out the real boundaries for them)

Reducing the cost of regression testing

Removing the dependencies between the development lifecycles of particular modules
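What makes several of those benefits concrete is a seam: once a module hides behind a small interface, you can run the same contract test against the legacy & the rewritten implementation, which is precisely the "decreased range of regression tests". A minimal sketch under assumed names (the interface & the tax rule are invented for illustration):

```python
from abc import ABC, abstractmethod

class TaxCalculator(ABC):
    """Hypothetical module boundary - the only thing other modules depend on."""
    @abstractmethod
    def tax(self, net: float) -> float: ...

class LegacyTaxCalculator(TaxCalculator):
    # old code, still in language X behind a thin wrapper
    def tax(self, net: float) -> float:
        return net * 0.23

class NewTaxCalculator(TaxCalculator):
    # rewritten module in language Y
    def tax(self, net: float) -> float:
        return round(net * 0.23, 2)

def contract_test(calc: TaxCalculator) -> None:
    """Runs against ANY implementation - only the boundary behaviour is verified,
    so each module's lifecycle stays independent of the others."""
    assert calc.tax(0.0) == 0.0
    assert abs(calc.tax(100.0) - 23.0) < 1e-9

for impl in (LegacyTaxCalculator(), NewTaxCalculator()):
    contract_test(impl)
print("contract holds for both implementations")
```

The same suite guards every future release of that module, regardless of which side of the migration it's on.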

Looks compelling, doesn't it? This is what tech debt elimination is about - not just moving your code-shit-stuff-base from language X to language Y.