After correcting for the methodological problem Uri pointed out, I again found that Lc0 is the strongest engine in openings and middlegames. To get some statistical significance, I started with a very fast time control, 0.2s/move.
At this short time control, Lc0 as a standalone engine performs badly; playing full games from a 4-mover PGN opening suite without any adjudication, I got the following:

Now, at the same 0.2s/move time control, I had Lc0 play the first 25 moves against SF8, with the rest of each game played SF8 vs SF8. The result is dramatically different, to the advantage of the side using Lc0 for the first 25 moves:

+55 -34 =111 55.25%
+37 Elo points
LOS = 98.7%
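For reference, the quoted Elo and LOS figures follow from the standard logistic Elo model and the usual normal approximation for likelihood of superiority. A minimal sketch (standard formulas, not anything specific to the tools used in the test):

```python
import math

def elo_and_los(wins, losses, draws):
    """Estimate Elo difference and likelihood of superiority (LOS)
    from a win/loss/draw record.  Elo uses the logistic model;
    LOS uses the normal approximation Phi((W - L) / sqrt(W + L)),
    in which draws cancel out."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games
    elo = 400 * math.log10(score / (1 - score))
    los = 0.5 * (1 + math.erf((wins - losses) / math.sqrt(2 * (wins + losses))))
    return score, elo, los

score, elo, los = elo_and_los(55, 34, 111)
print(f"score={score:.2%}  elo=+{elo:.0f}  LOS={los:.1%}")
# score=55.25%  elo=+37  LOS=98.7%
```

The same function reproduces the 52.5% / 80.0% LOS figures of the 100-game match quoted further down.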

Lc0 is stronger than SF8 (4 cores) in the first 25 moves, adding value to the final result even at this short TC, even though the rest of each game (the endgame) was finished SF8 vs SF8. And the difference between the two performances is a whopping 200+ Elo points.

To see how well Lc0 fares in the first 25 moves against SF-dev, I used a longer 2'+2'' time control (similar to CCRL 40/4' conditions). Again, the first 25 moves were played Lc0 vs SF-dev, and the rest (the endgames) SF-dev vs SF-dev (all engines Syzygy-enabled). The result over 100 games:

+20 -15 =65 52.5%
LOS = 80.0%

Not very conclusive, but it seems Lc0 is stronger than SF-dev (4 cores) in the first 25 moves under these conditions.

Another aspect: analyzing the 100 games at 2'+2'', 4 were lost by Lc0 to game-changing tactical blunders during those initial 25 moves (one of them turned a fairly balanced position into SF-dev showing a mate score). So a combination of Lc0 + Houdini Tactical (to catch tactical blunders above 100cp) in the openings and middlegames, with SF-dev for the late middlegames and endgames, would have scored:

+20 -11 =69

against SF-dev. Houdini Tactical saw the tactical blunders almost instantly, so the time allotted to it needn't be high. Also, these crass tactical blunders occur in only some 5% of the games in the openings and middlegames. This "combination" of engines, which is clearly stronger than SF-dev, might interest, for example, correspondence chess players. I tried to automate this kind of play using ChessCombi or Nucleus, without success.
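The switching scheme described above boils down to a simple policy: which engine produces the move, given the move number and the eval swing the tactical checker reports. A minimal sketch of that decision logic (the thresholds and engine names are illustrative assumptions, not a working engine harness):

```python
def pick_engine(move_number, eval_drop_cp, switch_move=25, blunder_cp=100):
    """Decide which engine's move to play, following the scheme above:
    Lc0 drives the opening/middlegame, a fast tactical checker vetoes
    moves that lose more than blunder_cp centipawns, and SF-dev takes
    over for the late middlegame and endgame."""
    if move_number > switch_move:
        return "SF-dev"                 # endgame phase
    if eval_drop_cp is not None and eval_drop_cp > blunder_cp:
        return "Houdini Tactical"       # veto Lc0's tactical blunder
    return "Lc0"                        # opening/middlegame

print(pick_engine(10, 20))   # Lc0
print(pick_engine(18, 250))  # Houdini Tactical
print(pick_engine(40, 0))    # SF-dev
```

Wrapping this policy in a UCI front end (forwarding the GUI's commands to all three engines and relaying the chosen one's output) is essentially what ChessCombi's README claims to do for two engines.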

Nucleus I do know, but what's ChessCombi?

It's an older utility, from around 2011, but it did not work as I expected.

From the Readme.txt:

CHESSCOMBI V1
created by Mark Alba

Release: ChessCombi is distributed free of charge.
ChessCombi may not be distributed as part of any software package,
service or website without prior written permission from Mark Alba.

ChessCombi is a UCI chess engine that combines two UCI chess engines into one.
It can be used in a typical GUI program that supports UCI.
Combining two chess engines can change the playing style, but does not necessarily produce better results.
The input to ChessCombi is fed directly to both engines,
but the output of the two engines is screened so that switching occurs.
ChessCombi V1 is only available on Windows.

You inspired me to run a test to see if Komodo MCTS also plays middlegames relatively stronger than endgames. Current Komodo MCTS (dev) on one thread can now beat normal Komodo 9.1 (by 31 Elo in my test). I then reran the test starting from positions with the queens already exchanged. The result was almost the same (+27 Elo). So the endgame dropoff is unique to Lc0, which suggests that the cause is the neural network rather than the MCTS search.

It seems to me that the strength of Lc0 is not that the neural network evals are all that good, but that they are much faster than a similar-quality short normal search when run on a good GPU. It's as if the fast GPU makes the NN act like a rather poor evaluator with a normal search running at super-speed. Maybe I'm wrong, but that's the way it looks to me. If someone finds a way to use a good GPU effectively with a more normal eval, that might be the "holy grail" of computer chess.

Hm, AFAIK Lc0 searches only about 10 knps to 40 knps on a top GPU with 2 threads on the host.

So it should be the other way around: because of the good NN evaluation, Lc0 needs to search fewer nodes.

A bit off topic, but do you think there is any risk that Lc0 will overtake Komodo in the next year? I get the impression most people here say yes, but you are in the best position to know.