Theories versus logic models

I’d say, in response to slide 28: yes, they are. A logic model is not a theory. I define a logic model in this context as a model built from theories and empirical evidence to try to explain one very specific, bounded scenario. I define a theory as a generic constellation of constructs and (e.g. causal) relationships between those constructs. (PN, e.g., is not a theory.)

The goal of theories is to derive abstract laws about reality. Their level of abstraction is what grants them value; gravity works in general, not only in Padova. “Attitudes predict human behavior” is a theoretical statement. “Attitude predicts physical activity in my specific subgroup” is no longer a theoretical statement: whether it’s true or not tells us little about reality in general.

So, the logic model you construct for an intervention is not a theory. You base it on theory, but you deliberately omit variables that are irrelevant in your specific situation, even though you know they can be important predictors of behavior, and you ‘fill it in’ using empirical evidence regarding the beliefs (‘change objectives’ in Intervention Mapping lingo). It’s also not something to evaluate in your intervention evaluation.

It’s something to study BEFORE intervention development (step 2 of Intervention Mapping).

Then, once you have your logic model of change (as IM calls it), you move forward and start matching the relevant determinants to theory. If you don’t know in advance which determinants (and which sub-determinants or beliefs) you should target with your behavior change methods, your chances of success are already diminished before you even start.

So, this is not a matter of testing theory. Intervention evaluation is not fundamental/basic science; it’s the application of science. You’re under no obligation to contribute to theory – in fact, you have the wrong design for contributing to theory. Your presentation clearly shows why this is the case.

If you want to test theory, design a study to test theory.

(Similarly, if you’re curious about mediation, design a study to test mediation – i.e. a factorial experiment with multiple measurement moments. I haven’t checked that paper (“what’s the mechanism”) recently; you might need even more than that.)

People commonly respond to this by expressing exasperation that it all has to be so complicated. I sympathize, but I believe that nobody is served by conducting invalid science just because that keeps things fun and easy.

Only learning one or two things from a study, even one with a huge dataset, is fine. Knowledge is valuable, so it’s ok to have to work for it 🙂

Author: Gjalt-Jorn Peters

Gjalt-Jorn Peters works at the Dutch Open University, where he teaches methodology and statistics and conducts research in health psychology, specifically behavior change, both in general and as applied to nightlife-related risk behavior. He is involved in the Dutch nightlife prevention project Celebrate Safe, where he is responsible for the Party Panel study. In addition, he maintains the userfriendlyscience, ufs, and behaviorchange R packages. An overview of his academic publications is available here.