Although the term compliance has been in use the longest, it is now considered inadequate because it does not reflect changes in the relationship between patients and healthcare professionals. Patients are no longer submissive and passive, and their role in accomplishing therapeutic objectives implies not merely following medical instructions but active cooperation and agreement with the doctor and the pharmacist. For these reasons, the term adherence has been preferred in practice since the 1990s.

Differences between compliance and adherence are thus not merely semantic but essential. With compliance, the focus is on the healthcare provider, who holds a dominant status in relation to the patient; the concept of adherence is oriented towards the patient and cooperation. Accordingly, under compliance the flow of information is one-way and the objective is the patient’s obedience, whereas adherence implies a two-way transfer of information and the engagement of both parties.

A key question across domains is, “how are patients/health agents/consumers persuaded to acquire certain drugs and take them as directed?”

“Concordance” was introduced into the literature on medication compliance and adherence; of these terms, “adherence”, in use at least since the mid-1990s, remains the most neutral and non-ideological term for patient behavior.

The question that concordance theorists have really asked is not, “how do we treat patients’ health beliefs more respectfully?” but rather, “how do we persuade patients to follow the advice of their doctors?”

Several authors have written even-handedly about concordance and make clear that cooperation between physicians and patients is likely to lead to better, and more appropriate, use of medications. Elwyn, Edwards, and Britten write, “Concordance describes the process whereby the patient and doctor reach an agreement on how a drug will be used, if at all. In this process doctors identify and understand patients’ views and explain the importance of treatment, while patients gain an understanding of the consequences of keeping (or not keeping) to treatment.” Ferner writes, “Usually…the patient, who has most to gain by success and the most to lose from harm, should decide whether to have treatment, and the prescriber should provide information on the risks and benefits to help make the decision.”

It is easy to agree that cooperative, better-informed, and realistically-prepared patients are more likely to adhere to recommended treatments than those who are resistant, ill-informed, and unprepared.

So, “concordance,” with its egalitarian rhetoric, not only portrays physicians and patients as equals but also portrays all patients as equals—while, in truth and in practice, patients are not all equally well-equipped for consensual decision-making, and, certainly, not all physicians believe that they are. When one exits the concordance literature to enter other literature about patients, what becomes clear is that the respect for patients that is invoked as the key resource of concordance is not always available to be tapped.

The more consumers are aware of a drug, the more they will request it; arguably, if they request it, then, being in agreement with their physician on its prescription, they are more likely to adhere to treatment.

Chapter 1

As Bhaskar puts it, ‘Theory without experiment is empty. Experiment without theory is blind’ (1978, 191).

Society is made by, but never under the control of, human intentions.

Evaluation has traditionally been asked to pronounce on whether a programme makes a difference ‘beyond that which would have happened anyway’. We always need to keep in mind that what would have happened anyway is change – unavoidable, unplanned, self-generated, morphogenetic change.

Realist evaluation is a form of theory-driven evaluation. But its theories are not the highfalutin’ theories of sociology, psychology and political science. Indeed, the term ‘realistic’ evaluation is sometimes substituted out of the desire to convey the idea that the fate of a programme lies in the everyday reasoning of its stakeholders. Good evaluations gain power for the simple reason that they capture the manner in which an awful lot of participants think. One might say that the basic currency is common-sense theory.

However, this should only be the starting point. The full explanatory sequence needs to be rooted in, but not identical to, everyday reasoning. In trying to describe the precise elbow room between social science and common sense, one can do no better than to follow Elster’s thinking. He has much else to say on the nuts and bolts of social explanation, but here we concentrate on that vital distinction, as mooted in the following:

Much of science, including social science, tries to explain things we all know, but science can make a contribution by establishing that some of the things we all think we know simply are not so. In that case, social science may also explain why we think we know things that are not so, adding as it were a piece of knowledge to replace the one that has been taken away. (2007: 16)

Evidence-based policy has become associated with systematic review methods for the soundest of reasons. Social research is supremely difficult and prone to all kinds of error, mishap and bias. One consequence of this in the field of evaluation is the increasingly strident call for hierarchies of evidence, protocolised procedures, professional standards, quality appraisal systems and so forth. What this quest for technical purity forgets is that all scientific data is hedged with uncertainty, a point which is at the root of Popperian philosophy of science.

What is good enough for natural science is good enough for evidence-based policy, which comes with a frightening array of unanticipated swans – white, black and all shades of grey. Here too, ‘evidence’ does not come in finite chunks offering certainty and security to policy decisions. Programmes and interventions spring into life as ideas about how to change the world for the better. These ideas are complex and consist of whole chains of main and subsidiary propositions. The task of evaluation research is to articulate and refine those theories. The task of systematic review is to refine those refinements. But the process is continuous – for in a ‘self-transforming’ world there is always an emerging angle, a downturn in programme fortunes, a fresh policy challenge. Evidence-based policy will only mature when it is understood that it is a continuous, accumulative process in which the data pursues, but never quite draws level with, unfolding policy problems. Enlightened policies, like bridges over swampy waters, only hold ‘for the time being’.

Chapter 2

It has always been stressed that realism is a general research strategy rather than a strict technical procedure (Pawson and Tilley, 1997b: Chapter 9). It has always been stressed that innovation in realist research design will be required to tackle a widening array of policies and programmes (Pawson, 2006a: 93–99). It has always been stressed that this version of realism is Popperian and Campbellian in its philosophy of science and thus relishes the use of the brave conjecture and the application of judgement (Pawson et al., 2011a).

The ontology of critical realism, with its identification of numerous causal mechanisms interacting in different ways in different contexts to produce different outcomes, has influenced the development of realistic evaluation, as advanced by Pawson and Tilley (1997). The aim of realistic evaluation is to explain the processes involved between the introduction of an intervention and the outcomes that are produced. In other words, it assumes that the characteristics of the intervention itself are only part of the story, and that the social processes involved in its implementation have to be understood as well if we are going to have an adequate understanding of why observed outcomes come about. In contrast to the assumptions of constant conjunction, realistic evaluation posits the alternative formula of:

Mechanism + context = outcome

In any given context, there will in all likelihood be a number of causal mechanisms in operation, their relationship differing from context to context. The aim of realistic evaluation is to discover if, how and why interventions have the potential to cause beneficial change. To do this, it is necessary to penetrate beneath the surface of observable inputs and outputs in order to uncover how mechanisms which cause problems are removed or countered by alternative mechanisms introduced in the intervention. In turn, this requires an understanding of the contexts within which problem mechanisms operate, and in which intervention mechanisms can be successfully fired. In other words, realistic evaluators take the middle line between positivism and relativism, in that positivism’s search for a single cause is seen as too simplistic, while relativism’s abandonment of any sort of generalisable explanation is seen as needlessly pessimistic. In contrast to these two poles, realists argue that it is possible to identify tendencies in outcomes that are the result of combinations of causal mechanisms, and to make reasonable predictions as to the sorts of contexts that will be most auspicious for the success of health-promoting mechanisms. The confidence of prediction can be increased through comparison of different cases (i.e. different contexts) in that concentration on context–mechanism–outcome configurations allows for the development of transferable and cumulative lessons about the nature of these configurations.
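The comparison of context–mechanism–outcome (CMO) configurations across cases described above can be sketched as a simple data structure. The following Python sketch is purely illustrative; the class and field names, and the example cases, are our own inventions for the example, not part of Pawson and Tilley’s framework or any real evaluation data.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class CMOConfiguration:
    """One context-mechanism-outcome configuration observed in a single case."""
    context: str    # the setting in which the intervention ran
    mechanism: str  # the causal process the intervention is thought to have fired
    outcome: str    # the observed result

def group_outcomes(configs):
    """Group observed outcomes by (context, mechanism) pair, so that the same
    mechanism can be compared across different contexts."""
    grouped = defaultdict(list)
    for c in configs:
        grouped[(c.context, c.mechanism)].append(c.outcome)
    return dict(grouped)

# Invented example cases: the same mechanism firing in two different contexts.
cases = [
    CMOConfiguration("urban clinic", "audit feedback", "fewer errors"),
    CMOConfiguration("rural clinic", "audit feedback", "no change"),
    CMOConfiguration("urban clinic", "peer training", "fewer errors"),
]

print(group_outcomes(cases))
```

Grouping by (context, mechanism) rather than by mechanism alone reflects the realist point that the same mechanism may produce different outcomes in different contexts; accumulating such configurations across cases is what allows transferable lessons to be drawn.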

Evaluating complex interventions involves determining both whether and how they work; such evaluations are theoretically and practically complex. First, the components of care may interact positively or negatively with each other, and may be required in different doses or formats depending on context. Second, the interventions required to bring about change are likely to be multi-faceted: educational, audit-based, and facilitation-based interventions each have evidence to support them. Third, the choice and application of appropriate outcome measures at different levels of change is challenging. Lastly, implementation often occurs within the shifting sands of health services reform and development.

Realistic Evaluation is a relatively new framework for understanding how and why interventions work in the real world and has been recommended as a means to understand the dissemination of service innovation. Analysis focuses on uncovering key mechanisms and on the interactions between mechanism and context in order to develop ‘middle range theories’ about how they lead to outcomes. Accumulation of evidence, which may be qualitative or quantitative, and may also be derived from external sources, leads to a refining of these theories. Realistic Evaluation, therefore, promises to be a useful framework for understanding the key functions of an intervention by examining its relationship with the context.

After the first audit, we recommended that prescriptions be checked by the prescribing physician and re-checked by the nurse assistant in the clinic. (This can be connected with the present paper.)

We strongly advocate more training of junior physicians to avoid these errors and to foster understanding of the potential hazards of prescription errors. (This is the same advice as in the present paper.)

Computer-based prescribing systems may minimize the risk of errors due to illegible prescriptions. However, they require considerable financial investment and training, which may be prohibitive for some institutions. (From an external paper.)

Knowledge of where and when errors are most likely to occur is generally the first step in prevention of prescription errors.
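As a purely hypothetical illustration of the kind of basic check a computer-based prescribing system might perform at entry time, the sketch below validates a dose against a maximum. The drug names and dose limits are invented for the example and are not clinical data.

```python
# Hypothetical maximum single doses in mg; invented values, for illustration only.
MAX_SINGLE_DOSE_MG = {
    "drug_a": 500,
    "drug_b": 20,
}

def validate_prescription(drug, dose_mg):
    """Return a list of error messages for an entered prescription.
    An empty list means the entry passed these basic checks."""
    errors = []
    if drug not in MAX_SINGLE_DOSE_MG:
        errors.append(f"unknown drug: {drug}")
    elif dose_mg <= 0:
        errors.append("dose must be positive")
    elif dose_mg > MAX_SINGLE_DOSE_MG[drug]:
        errors.append(f"dose {dose_mg} mg exceeds maximum for {drug}")
    return errors

print(validate_prescription("drug_b", 50))  # dose above the invented limit
```

Even a check this simple removes one class of error (illegible or out-of-range doses) at the point of entry, which is precisely the kind of knowledge about where errors occur that prevention depends on; a real system would of course draw on an authoritative formulary rather than a hard-coded table.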