Some is good, more is better, too much is still undetermined?

This post is about the danger, and strong temptation, of drawing conclusions when it comes to fitness.

There is a process by which it can be done: It is called science.

But it is a lengthy process, one that is deeply human (and can therefore err) yet fundamentally self-correcting (hence its immense success, without which you would not be reading this, among many other things you do daily).

There are shortcuts that sometimes pretend to be science but are in fact nothing more than wishful thinking. Common sense, often based on anecdotes, falls into this category.

Science is fundamentally always questioning itself. Common sense and anecdotes appear much more solid, which explains their success.

The main issue, it seems, is that most of us are more comfortable with solid, unequivocal conclusions than with questions.

Take a recent example, from just two days ago.

A small research team published results from an analysis of data on mortality and jogging habits of people living in Copenhagen. So far, this is science.

The title of the paper indicates what was being analyzed. The results suggest a possible negative effect of “strenuous” jogging.

That’s all most bloggers and some journalists needed to draw firm conclusions. That’s news. But it is no longer science.

Those titles, and some of the statements, are strong conclusions, mostly lifted from the title of the research paper and probably from a press release highlighting a few key aspects of the research (hence the similarities between them).

Go ahead and do it?

The problem is that the research lacks statistical significance with respect to those strongly stated conclusions. The paper itself does not conclude strongly; it states that the results suggest an increased risk. That raises a question; it does not settle one.
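To see why few events make for weak conclusions, here is a minimal sketch, using entirely hypothetical numbers (not taken from the Copenhagen study), of how the confidence interval around a rate ratio behaves when one group has very few deaths. The interval is computed with the standard normal approximation on the log scale.

```python
import math

def rate_ratio_ci(events_a, person_years_a, events_b, person_years_b, z=1.96):
    """Rate ratio of group A vs. group B with an approximate 95% CI
    (normal approximation on the log of the ratio)."""
    rr = (events_a / person_years_a) / (events_b / person_years_b)
    # Standard error of log(rate ratio): sqrt(1/events_a + 1/events_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical: 3 deaths over 500 person-years among "strenuous" joggers
# vs. 30 deaths over 10,000 person-years among light joggers.
rr, lo, hi = rate_ratio_ci(3, 500, 30, 10000)
print(f"rate ratio {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The point estimate is a doubling of risk, yet the interval is so wide that it comfortably includes 1.0 (no difference at all). "Suggests an increased risk" is exactly the right level of claim for data like that.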

Of course, in response to those strong conclusions, some folks with a keen interest in promoting running had to take a dissenting position. That's what is sometimes called "a debate." (Note: not a scientific debate, but one in the public sphere.)

One such dissenter makes some valid points about statistical significance, but he also acts disingenuously when he implies that the methodology is flawed. (That's what peer reviewers are for, not some blogger.)

And by pointing out the small numbers, as if they were by themselves reason to ignore the research, he gives a false impression of what science is all about. By concluding just as strongly against the findings, he is also part of the problem.

Now, pause for a moment, and consider whether you are more comfortable with the strong conclusions, whichever you like better, or with the uncertainty that, perhaps, too much of something might actually be bad.

Because that question is worth asking.

To use an analogy: you need to breathe oxygen to live; air with a slightly elevated oxygen percentage promotes recovery; breathe air with too high an oxygen percentage for long enough and it kills you.

So it would stand to reason that some exercise is good for you; more exercise is better, but “too much” can be deadly.

It is worth investigating, not denying. What it is not worth doing is becoming sedentary over…

Because even though "is too much bad for you?" is a valid question, the question "is doing some good for you?" has accumulated a great deal of evidence in favor of a positive answer, even though it, too, remains a valid scientific question.

That’s what science provides: Degrees of confidence. Never absolute conclusions.