Copyright Ayieko et al. This is an open-access article distributed under the
terms of the Creative Commons Attribution License, which permits unrestricted use,
distribution, and reproduction in any medium, provided the original author and
source are credited.

Abstract

Background

In developing countries, referral of severely ill children from primary care
to district hospitals is common, but hospital care is often of poor quality.
However, strategies to change multiple paediatric care practices in rural
hospitals have rarely been evaluated.

Methods and Findings

This cluster randomized trial was conducted in eight rural Kenyan district
hospitals, four of which were randomly assigned to a full intervention aimed
at improving quality of clinical care (evidence-based guidelines, training,
job aides, local facilitation, supervision, and face-to-face feedback;
n = 4) and the remaining four to a
control intervention (guidelines, didactic training, job aides, and written
feedback; n = 4). Prespecified
structure, process, and outcome indicators were measured at baseline and
during three and five 6-monthly surveys in control and intervention
hospitals, respectively. Primary outcomes were process of care measures,
assessed at 18 months postbaseline.

Conclusions

Specific efforts are needed to improve hospital care in developing countries.
A full, multifaceted intervention was associated with greater changes in
practice spanning multiple, high mortality conditions in rural Kenyan
hospitals than a partial intervention, providing one model for bridging the
evidence to practice gap and improving admission care in similar
settings.

Trial registration

Editors' Summary

Background

In 2008, nearly 10 million children died in early childhood. Nearly all these
deaths were in low- and middle-income countries—half were in Africa.
In Kenya, for example, 74 out of every 1,000 children born died before they
reached their fifth birthday. About half of all childhood (pediatric) deaths
in developing countries are caused by pneumonia, diarrhea, and malaria.
Deaths from these common diseases could be prevented if all sick children
had access to quality health care in the community (“primary”
health care provided by health centers, pharmacists, family doctors, and
traditional healers) and in district hospitals (“secondary”
health care). Unfortunately, primary health care facilities in developing
countries often lack essential diagnostic capabilities and drugs, and
pediatric hospital care is frequently inadequate with many deaths occurring
soon after admission. Consequently, in 1996, as part of global efforts to
reduce childhood illnesses and deaths, the World Health Organization (WHO)
and the United Nations Children's Fund (UNICEF) introduced the
Integrated Management of Childhood Illnesses (IMCI) strategy. This approach
to child health focuses on the well-being of the whole child and aims to
improve the case management skills of health care staff at all levels,
health systems, and family and community health practices.

Why Was This Study Done?

The implementation of IMCI has been evaluated at the primary health care
level, but its implementation in district hospitals has not been evaluated.
So, for example, interventions designed to encourage the routine use of WHO
disease-specific guidelines in rural pediatric hospitals have not been
tested. In this cluster randomized trial, the researchers develop and test a
multifaceted intervention designed to improve the implementation of
treatment guidelines and admission pediatric care in district hospitals in
Kenya. In a cluster randomized trial, groups of patients rather than
individual patients are randomly assigned to receive alternative
interventions and the outcomes in different “clusters” of
patients are compared. In this trial, each cluster is a district
hospital.

What Did the Researchers Do and Find?

The researchers randomly assigned eight Kenyan district hospitals to the
“full” or “control” intervention, interventions that
differed in intensity but that both included more strategies to promote
implementation of best practice than are usually applied in Kenyan rural
hospitals. The full intervention included provision of clinical practice
guidelines and training in their use, six-monthly survey-based hospital
assessments followed by face-to-face feedback of survey findings, 5.5 days
training for health care workers, provision of job aids such as structured
pediatric admission records, external supervision, and the identification of
a local facilitator to promote guideline use and to provide on-site problem
solving. The control intervention included the provision of clinical
practice guidelines (without training in their use) and job aids,
six-monthly surveys with written feedback, and a 1.5-day lecture-based
seminar to explain the guidelines. The researchers compared the
implementation of various processes of care (activities of patients and
doctors undertaken to ensure delivery of care) in the intervention and
control hospitals at baseline and 18 months later. The performance of both
groups of hospitals improved during the trial but more markedly in the
intervention hospitals than in the control hospitals. At 18 months, the
completion of admission assessment tasks and the uptake of
guideline-recommended clinical practices were both higher in the
intervention hospitals than in the control hospitals. Moreover, a lower
proportion of children received inappropriate doses of drugs such as quinine
for malaria in the intervention hospitals than in the control hospitals.

What Do These Findings Mean?

These findings show that specific efforts are needed to improve pediatric
care in rural Kenya and suggest that interventions that include more
approaches to changing clinical practice may be more effective than
interventions that include fewer approaches. These findings are limited by
certain aspects of the trial design, such as the small number of
participating hospitals, and may not be generalizable to other hospitals in
Kenya or to hospitals in other developing countries. Thus, although these
findings seem to suggest that efforts to implement and scale up improved
secondary pediatric health care will need to include more than the
production and dissemination of printed materials, further research,
including trials or evaluation of test programs, is necessary before
widespread adoption of any multifaceted approach (which will need to be
tailored to local conditions and available resources) can be
contemplated.

The iDOC Africa
Web site, which is dedicated to improving the delivery of hospital
care for children and newborns in Africa, provides links to the
clinical guidelines and other resources used in this study

Introduction

Common illnesses including pneumonia, malaria, and diarrhea remain major contributors
to child mortality in low-income countries [1]. Hospital care of severe
illnesses may help improve survival, and disease-specific clinical guidelines have
been provided by the World Health Organization (WHO) for more than 15 y [2], and as collated
texts since 2000 [3],[4]. These guidelines form part of the Integrated Management
of Childhood Illnesses (IMCI) approach adopted by over 100 countries. However, in
contrast to its primary care aspects [5],[6], implementation
of IMCI at district hospitals has not been evaluated. Paediatric hospital care is
often inadequate in our setting and also in other low-income countries both in
Africa and Asia [7]–[10], with most inpatient deaths occurring within 48 h of
admission [11].

We therefore set out to develop and test a strategy to improve paediatric care in
district hospitals in partnership with the Kenyan government [12]–[14]. We considered a trial of
alternative interventions necessary for ethical reasons and because systematic
reviews indicated uncertainty in the value of multicomponent interventions [15]. Our
evaluation is based on the classical Donabedian approach—assessing structure,
process, and valued health system outcome measures [16]. We randomised hospitals,
rather than individuals, to intervention groups because the intervention was
designed to influence how the paediatric teams provided care. Secondly, the cluster
randomised trial offered logistical convenience in implementing certain intervention
components, which by their nature (training, feedback, supervision) are easier to
administer to groups rather than on an individual basis. To provide data to inform
debate on the plausibility of any cause–effect relationship arising from the
trial data, we also planned that evaluation spanned a realistic timescale, evaluated
possible postintervention deterioration, and assessed intervention context,
adequacy, and barriers to implementation [12],[17]–[20].

Methods

Study Sites and Participants

Eight rural hospitals (H1 to H8) were chosen purposefully from four of
Kenya's eight provinces to provide some representation of the variety of
rural hospital settings encountered in Kenya (Table 1) [12]. Hospitals admitting a
minimum of 1,000 children and conducting at least 1,200 deliveries per year were
eligible for inclusion. Prior to the study, medical records documenting
admission information were written as nonstandard, free-text notes in all eight
hospitals. The Ministry of Health usually aims to disseminate national
guidelines aimed at hospital care to facilities through distribution of some
print materials and ad hoc or opportunistic workshops or seminars. It had not
previously been able to augment this approach with systematic efforts or provide
specific supervision to support paediatric hospital care. Further, none of the
eight hospitals themselves had explicit procedures for implementing new clinical
guidelines.

Baseline hospital characteristics and characteristics of 8,205
paediatric admission events at baseline and during the 18-mo
intervention period.

We collected data from medical records of paediatric admissions aged 2–59
mo to describe paediatric care practices of clinicians and nursing staff
targeted by the guidelines, training, and feedback. The Kenya Medical Research
Institute National Ethics and Scientific review committees approved the study
(Texts
S1 and S2).

Randomization and Masking

Prior to inclusion in the study the eight shortlisted hospitals were visited and
meetings were held with the hospital management team. At these meetings, the
study design, randomization, potential inputs, approach to data collection, and
longevity were explained. All hospital management teams subsequently assented to
their hospital's participation and randomization after internal
discussions. Assent from the hospital's catchment population was not
sought. Staff in all hospitals were made aware of the study's overall aims
to explore ways to improve care and need for data collection through specific
presentations made after randomization at the start of introductory training and
using written information sheets. After obtaining the hospitals' assent we
allocated eight hospitals (clusters) to a full (intervention group, hospitals
H1–H4) or partial (control group, hospitals H5–H8) package of
interventions using restricted randomization. Of 70 possible allocations, seven
defined two relatively balanced groups (Table 1). These allocations were written on
identical pieces of paper, with hospitals represented by codes, and one
allocation was randomly selected using a “blind draw” procedure.
Participating hospitals and the research team could not be masked to group
allocation. However, information on group allocation was not publicly
disseminated and the geographic distance between hospitals was large. We
therefore do not feel that users of the hospitals were aware of or influenced by
the form of intervention allocated to the hospital.
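
As an illustrative sketch (not the authors' actual procedure in code form), the restricted randomization can be expressed as follows. The hospital labels are from the trial, but the baseline covariate values and the balance rule are hypothetical; the paper states only that 7 of the 70 possible allocations defined two relatively balanced groups, one of which was selected by a blind draw.

```python
# Sketch of restricted randomization: of the C(8,4) = 70 ways to split
# eight hospitals into two groups of four, keep only "balanced"
# allocations, then draw one at random (the "blind draw").
import itertools
import random

hospitals = ["H1", "H2", "H3", "H4", "H5", "H6", "H7", "H8"]
# Hypothetical baseline covariate (e.g., annual admissions, in thousands).
admissions = {"H1": 1.2, "H2": 2.5, "H3": 3.1, "H4": 1.8,
              "H5": 1.4, "H6": 2.2, "H7": 3.0, "H8": 1.9}

all_allocations = list(itertools.combinations(hospitals, 4))
assert len(all_allocations) == 70  # C(8,4)

def balanced(group):
    """Hypothetical balance rule: group means within 0.3 of each other."""
    rest = [h for h in hospitals if h not in group]
    m1 = sum(admissions[h] for h in group) / 4
    m2 = sum(admissions[h] for h in rest) / 4
    return abs(m1 - m2) <= 0.3

eligible = [g for g in all_allocations if balanced(g)]
chosen = random.choice(eligible)  # the blind draw of one allocation
print("Intervention group:", chosen)
```

In the trial itself the draw was done on paper with coded hospital names, which achieves the same thing without software.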

Study Intervention

The intervention delivered over 18 mo (from September 2006 to April 2008) aimed
to improve paediatric admission care by promoting hospitals' implementation
of best-practice guidelines and local efforts to tackle local organizational
constraints. Before the trial commenced, a decision was made to adjust the
timing of the primary endpoint for measuring intervention effectiveness,
aligning it with the end of this 18-mo active intervention period. As part of
this updated approach, monitoring of intervention sites was planned to continue
for 12 mo after active intervention had ended. Funds were not available to
support comparable extended monitoring in control sites. The intervention
components are labeled 1–6 and a–c in Figure 1 [21] and
included: (1) setting up a scheme for regular hospital assessment through
surveys conducted six monthly, followed by (2) face-to-face feedback of findings
in intervention sites, and (a) written feedback in both groups. The other
components were: (3) 5.5-d training aimed at 32 health workers of all cadres
approximately 6–10 wk after baseline surveys (July to August 2006) in
intervention hospitals [13], (b) provision of clinical practice guidelines
introduced with training, (c) job aides, (4) an external supervisory process,
and (5) identification of a full-time local facilitator (a nurse or
diploma-level clinician) responsible for promoting guideline use and on-site
problem solving [19]. Supervision visits were approximately two to three
monthly, but facilitation remained in place throughout the 18 mo. The package
for control sites (H5–H8) included five components (1, 6, a, b, and c):
(1) six-monthly surveys with written feedback only, provision of (b) clinical
practice guidelines and (c) job aides, and (6) a 1.5-d initial guideline seminar
for approximately 40 hospital staff. The design thus compares two alternative
intensities of intervention, both providing considerably more than routinely
delivered, although we refer to one arm as the “control.”

Graphical depiction of the complex intervention delivered over an
18-mo period (adapted from Perera et al. [21]).

One of the job aides, introduced to all sites with all training and continuously
supplied to improve documentation of illness, was a paediatric admission record
(PAR) form. This was to replace traditional “blank paper” medical
notes [22].
All hospitals were aware that their records and patient management were to be
regularly evaluated. All job aides, training materials, and assessment tools are
available online (http://www.idoc-africa.org/docs/list/cat/5/subcat/27).

Data Collection

Data were collected at baseline and then at six-monthly intervals during six and
four surveys in intervention (surveys 1–6) and control hospitals (surveys
1–4), respectively (Figure
1). A single survey took approximately 2 wk with all sites surveyed
within a maximum 6-wk consecutive period by employing up to four teams. The
survey tools and team training have been described in detail elsewhere [14]. In brief,
data were collected using three tools adapted from previous work [7],[8] then
extensively pretested: a checklist of structure indicators, patient case-record
data abstraction forms, and a structured parent/guardian interview tool. In the
case of the parent/guardian interview, formal written consent was obtained prior
to data collection; no parents/guardians refused consent. Ethical approval
was granted for confidential abstraction of data from archived case records
without individuals' consent. Survey team leaders remained the same
throughout the study and teams received 3 wk initial training that included a
pilot survey. Data collectors could not be blinded to allocation, but all were
guided by standard operating procedures and, for case records, a 10%
sample was independently reevaluated by the survey supervisor during each
survey. Agreement rates for data abstracted were consistently greater than
95%.

Case records from a random sample of calendar dates in the 6-mo intersurvey
periods were selected, with the proportion of dates sampled adjusted according
to each hospital's admission rate to yield approximately 400 records. On the basis
of prior experience we aimed to conduct interviews with 50 caretakers of
admitted children during each 2 wk survey (surveys 1–4).

Performance Indicators

Primary effectiveness measures were 14 process indicators measured on paediatric
admissions aged 2–59 mo at 18-mo post baseline (survey 4). Secondary
measures were four valued system outcomes of admission and changes in structure
measured at the hospital level. The trial was not designed to evaluate mortality
effects.

Process indicators

Indicators reflected standards defined by the clinical guidelines, focusing
on pneumonia, malaria, and diarrhoea and/or dehydration, which together
account for more than 65% of paediatric admissions and deaths [13]. These
span assessment, therapeutic, and supportive care. We defined dichotomous
variables for process errors, e.g., a wrong intravenous fluid prescription.
To summarize assessment, however, an aggregate assessment score for each
child (range 0–1) was calculated by counting the number of features
documented and dividing this by the total relevant for each child according
to guidelines (pneumonia 8, malaria and diarrhoea/dehydration both 6). The
denominator of the score was thus child specific, depended on the extent of
comorbidity, and had a maximum value of 16 due to two shared features of
severe illness.
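
As a minimal sketch of this scoring rule: the feature names below are hypothetical, since the text specifies only the counts (pneumonia 8, malaria 6, diarrhoea/dehydration 6) and that two features are shared across conditions, capping the denominator at 16.

```python
# Hypothetical guideline-relevant signs per condition; only the set
# sizes (8, 6, 6) and the two shared features are taken from the text.
RELEVANT = {
    "pneumonia": {"resp_rate", "indrawing", "cyanosis", "wheeze",
                  "grunting", "feeding", "consciousness", "pallor"},
    "malaria": {"temperature", "pallor", "consciousness", "convulsions",
                "jaundice", "splenomegaly"},
    "diarrhoea": {"skin_pinch", "sunken_eyes", "thirst", "stool_blood",
                  "consciousness", "pallor"},
}

def assessment_score(diagnoses, documented):
    """Fraction of guideline-relevant signs documented for this child.

    The denominator is child specific: the union of features relevant
    to all of the child's diagnoses, so the two shared features (here
    'consciousness' and 'pallor') are counted once, giving a maximum
    denominator of 16 for a child with all three conditions.
    """
    relevant = set().union(*(RELEVANT[d] for d in diagnoses))
    return len(relevant & set(documented)) / len(relevant)

score = assessment_score(["malaria", "diarrhoea"],
                         ["temperature", "pallor", "skin_pinch"])
# 3 of the 10 relevant features documented -> 0.3
print(score)
```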

Outcome indicators

These indicators reflected adherence to key policy recommendations and
included vitamin A prescription, identifying missed opportunities for
immunization, and universal provider initiated testing and counselling
(PITC) for HIV. A fourth was based on a score (range 0–4) reflecting
caretakers' correct knowledge, at discharge, of their child's
diagnosis and number, duration, and frequency of discharge drugs.

Structure indicators

The availability of equipment, basic supplies, and service organization were
evaluated using a checklist of 113 items needed to provide guideline
directed care and representing seven logical groupings [23]. Data were collected by
observation and interviewing senior hospital staff. A simple, unweighted
proportion of the 113 items available was derived; the change in this proportion
from survey 1 to survey 4 was calculated for each hospital, and the mean
changes in the intervention and control groups were compared.
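
A minimal sketch of this calculation, with invented item counts (the real data are per-domain checklist observations):

```python
# Unweighted structure score: proportion of 113 checklist items
# available, with the survey 1 -> survey 4 change averaged per arm.
N_ITEMS = 113

def availability(items_present):
    return items_present / N_ITEMS

# Hypothetical (survey 1, survey 4) item counts per hospital.
intervention = {"H1": (60, 95), "H2": (55, 90), "H3": (70, 100), "H4": (58, 92)}
control      = {"H5": (62, 75), "H6": (57, 70), "H7": (66, 78), "H8": (54, 68)}

def mean_change(arm):
    changes = [availability(s4) - availability(s1) for s1, s4 in arm.values()]
    return sum(changes) / len(changes)

diff = mean_change(intervention) - mean_change(control)
print(f"Difference in mean change: {diff:.1%}")
```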

Sample Size

There were 70 district hospitals in Kenya at the time of the study. Hospitals
from four of Kenya's eight provinces without potentially confounding,
nonstudy interventions and meeting the outlined eligibility criteria were
shortlisted. Data on additional criteria felt to help define the range of
contexts in Kenya were then evaluated, and eight hospitals were purposefully
selected so that at least two of them met each positive and negative
criterion (Table 1), two were drawn from each of the four provinces, and the
logistical implications of their locations were represented.
The sample size of eight hospitals was estimated using two approaches to compare
performance within each hospital (plausibility design) and across the two arms
of the trial (cluster RCT analysis). Within hospitals, we estimated that
50% correct performance could be estimated with precision (95%
confidence intervals [CIs]) of ±7% with 200 admission
records (50% of 400 sampled admissions), or, ±10% with 100
admission records. The second calculation for group (C-RCT) comparisons
accounted for the clustered nature of the data. The median intraclass
correlation coefficient (ICC) for 46 quality of care variables estimated from a
health facility cluster survey in Benin was ρ = 0.2
[24]. We
estimated, employing this value for the ICC, that 100 observations per cluster
would provide 80% power to detect a 50% or greater difference in
proportions between intervention and control arms at 18 mo follow-up [25].
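
The two calculations above follow standard formulas and can be reconstructed as follows (a sketch, not the authors' code; the design-effect expression 1 + (m - 1)ρ is the usual one for equal cluster sizes):

```python
# Reconstruction of the sample-size reasoning: (1) precision of a
# within-hospital proportion; (2) inflation of the between-arm
# comparison by the cluster design effect.
from math import sqrt

Z = 1.96  # 95% confidence

def ci_half_width(p, n):
    """Half-width of a 95% Wald confidence interval for a proportion."""
    return Z * sqrt(p * (1 - p) / n)

# Within-hospital precision at 50% correct performance.
print(f"{ci_half_width(0.5, 200):.3f}")  # about +/-7% with 200 records
print(f"{ci_half_width(0.5, 100):.3f}")  # about +/-10% with 100 records

# Cluster design: the design effect 1 + (m - 1) * ICC shrinks the
# effective sample size available for the group comparison.
m, icc = 100, 0.2          # observations per cluster; ICC from Benin [24]
deff = 1 + (m - 1) * icc   # = 20.8
n_eff = 4 * m / deff       # effective observations per arm (4 clusters)
print(f"design effect {deff:.1f}, effective n per arm {n_eff:.1f}")
```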

Statistical Analysis

Data were double entered and verified in Microsoft Access and analysed using
Stata, version 10 (Stata Corp.) according to the prespecified analysis plan.

Descriptive analysis

We present characteristics of hospitals at baseline and of children
contributing admission data during surveys 1–4. Process and outcome
indicators are summarized as percentages and the absolute changes
(95% CI) between survey 1 and 4 calculated for each hospital.
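
For a single indicator in one hospital, the absolute change with its 95% CI can be computed with the standard Wald interval for a difference of two proportions; the counts below are invented for illustration:

```python
# Absolute change (survey 4 minus survey 1) in an indicator with a
# 95% Wald confidence interval for the difference of proportions.
from math import sqrt

def change_with_ci(x1, n1, x4, n4, z=1.96):
    """Return (change, lower, upper) for p4 - p1."""
    p1, p4 = x1 / n1, x4 / n4
    d = p4 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p4 * (1 - p4) / n4)
    return d, d - z * se, d + z * se

# Invented counts: 80/400 at survey 1, 240/400 at survey 4.
d, lower, upper = change_with_ci(80, 400, 240, 400)
print(f"change {d:.1%} (95% CI {lower:.1%} to {upper:.1%})")
```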

Comparison of intervention and control arms

Two approaches were used. The first approach was a cluster level analysis of
mean change from baseline in intervention
(n = 4) and control
(n = 4) groups, a test of mean
difference-in-difference, using an unpaired t-test (with
individual sample variances if appropriate), which is reasonably robust to
deviations from normality, even for a small number of clusters. The second
approach compared the groups at survey 4 using a two-stage method [26]. In the
initial stage, logistic or linear regression analyses were conducted for
each outcome, adjusting for hospital-level covariates (all-cause paediatric
mortality, malaria transmission, and size) and, at the patient level, gender and
illness outcome (alive or died), but not study group. The observed
events were then subtracted from predicted events in the regressions to
obtain a residual for each cluster. The cluster residuals were then compared
in the second stage using a t-test [26].
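
The first, cluster-level approach can be sketched as below; the per-hospital change values are invented percentage points, and the same t-test machinery is what the second stage applies to the cluster residuals:

```python
# Unpaired t-test on per-hospital changes from baseline
# (difference in differences), with 4 clusters per arm.
from math import sqrt

def unpaired_t(a, b):
    """Two-sample t statistic, pooled (equal-variance) form."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical change from baseline per hospital (percentage points).
intervention_change = [32.0, 41.0, 28.0, 35.0]  # H1-H4
control_change = [10.0, 15.0, 8.0, 12.0]        # H5-H8
t = unpaired_t(intervention_change, control_change)
print(f"t = {t:.2f} on 6 degrees of freedom")
```

With only four clusters per arm the test has few degrees of freedom, which is why the text notes the t-test's robustness to non-normality and, where appropriate, the use of individual sample variances.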

Performance post intervention period

Data from intervention hospitals (surveys 4–6) were analysed to
determine the impact of intervention withdrawal by assessing trends
graphically and using regression analysis. Linear and binomial regression
analyses were used to assess whether means or proportions changed over
time, by testing for a linear trend across the postintervention period
(surveys 4–6).

We acknowledge the use of multiple significance tests and report 95%
CIs and exact p-values where appropriate noting that
p-values lower than those traditionally considered
“significant” might be given greater weight. We would, however,
suggest consideration of the plausibility of the intervention's
effectiveness should also take into account any consistency in effect across
indicators.

Results

All hospitals participated in each survey as planned (Figure 2). The intervention's implementation
is summarized in Table 1, which
shows that the intended training of at least 32 workers (the majority nurses) was
attained in three of the four intervention sites. No hospital received additional,
nonstudy paediatric training during the study period. Staff turnover, which was of a
like-for-like nature, was high in both intervention and control hospitals,
especially in the larger hospitals (H3 and H7). At 18 mo, only 5% (2/35) to
13% (3/23) and 0% to 26% (6/23) of frontline clinical staff in the
intervention and control hospitals, respectively, had received initial training
(Table 1). As part of
supervisory activities, the implementation team conducted an additional
10–12-h training session in two intervention hospitals and two to three small
group sessions of 2–4 h in all four intervention hospitals over the 18 mo
intervention period.

Intervention and control sites were similar at baseline (Table 1), although routinely reported prior
paediatric mortality varied from 4.1% to 13.4%. Case records for
primary process of care indicators were available for 1,130 and 1,005 admissions at
baseline and 1,158 and 1,157 at 18 mo for intervention and control
hospitals, respectively (Table
1). Additional data summarizing the patient populations at cluster level
are provided in Tables S1 and S2.

Primary Effectiveness Measures

Results were similar from both approaches used to compare intervention arms,
i.e., adjusted comparison at 18 mo and difference of differences. For brevity,
we outline only the results of adjusted comparisons and present other data in
Tables
S3, S4, S5.

Changes in process and outcome indicators between baseline and 18
mo postintervention by hospital.

The proportion of admissions treated in line with clinical guidelines was
substantially higher in intervention compared to control sites for
prescription of twice rather than thrice daily quinine, once rather than
thrice daily gentamicin, appropriate quinine and gentamicin dose/kg body
weight, and the proportion of severely dehydrated children with correct
intravenous fluid volumes (Table 3). There were no differences in proportions receiving
possibly toxic gentamicin doses, although this practice was relatively
uncommon.

Average performance in control and intervention hospitals at
baseline and 18 mo follow-up and adjusted difference (95% CI)
at 18 mo.

Secondary Effectiveness Measures

Outcome indicators

At baseline, key child health policy interventions were rarely implemented.
Vitamin A was prescribed only in H7, to 27% of admissions (Table 2). Health workers
rarely documented missed opportunities for immunization (<9%
across six sites) or offered PITC for HIV at baseline (all sites fewer than
4%).

At 18 mo the proportion of children offered PITC for HIV was significantly
higher in intervention sites (adjusted difference, 19.4%; 95%
CI 12.3%–26.4%), as was checking vaccination status
(25.8%; 95% CI 7.29%–44.4%). Although
prescription of vitamin A and counselling improved in some hospitals,
differences between groups did not attain statistical significance (Table 3).

Structure indicators

Changes between baseline and 18 mo were positive in both groups for all
domains. Improvements in intervention hospitals were, however, consistently
greater than in control hospitals (Figure 3), with the mean difference of
difference analysis showing a 21% greater overall improvement
(p = 0.02, based on a simple
t-test).

Average change from baseline to 18 mo postintervention in
proportion of structure items available, for each major domain and
combined, for hospitals in the intervention and control
...

Performance within intervention sites during surveys 5 and 6

For most process indicators with improvement, and based on tests for trend
between survey 4 and survey 5 or survey 6, no major decline in performance
was noted even 12 mo after withdrawal of intervention and in the face of
continuing staff turnover (Figures 4 and S1).

Discussion

We tested an approach to implementing clinical guidelines for management of illnesses
that cause most deaths in children admitted to district hospitals in Kenya. Despite
the modest success of multifaceted approaches in developed countries [15], we used
one, reasoning that deficiencies in knowledge, skills, motivation, resources,
and organization of care would all need to be addressed. The intervention design was
guided by experience in the setting [7],[8] and theories of change and
culture of practice [13],[15],[27]–[29]. Our baseline data and other reports [7]–[10] suggest that
the simple availability of authoritative WHO and national guidelines—for
periods of more than 15 y—is currently having little impact on hospital care
for children. So what did our interventions achieve?

The full intervention package resulted in significantly greater improvements in
almost all primary and secondary effectiveness measures. Within specific hospitals,
performance on certain indicators, e.g., recording the child's weight in H3, was
already high at baseline. For these hospitals there was limited scope for
improvement, but there remained significant potential for improvement at the group
level since performance for most indicators was below the projected level of
50% at baseline. Substantial, clinically important changes occurred in
processes of care despite very high staff turnover amongst the often junior
clinicians responsible for much care in each site. Indeed, of 109 clinical staff
involved in admitting patients, sampled at survey 4 from intervention hospitals, only
nine (8.3%) had received any specific formal or even ad hoc training. At
survey 6 this proportion had reduced to 4.4% (four out of 91) reflecting the
typically high turnover of junior clinicians in such settings. As the training and
guidelines were not being provided in preservice training institutions and as formal
orientation periods are absent [14], we infer, but cannot confirm, that new staff learned
correct practices more commonly from established clinicians or the facilitator in
intervention hospitals. Improvement in structure indicators occurred without any
direct financial inputs, probably reflecting a small generalized improvement in
resource availability and use of funding from user fees (total hospital incomes
varied from US$57 to US$100 per bed per month [19]) that we feel was, in part, a
response to hospital feedback and the advocacy of the facilitator [14].

Improvements in quality of care thus occurred across a set of common, serious
childhood conditions and over a prolonged period. These data are a major addition to
reports from sub-Saharan Africa indicating that financial incentives can improve
malaria-specific care and fatality [30] and that implementation of WHO guidelines can improve
emergency triage assessment and treatment of children [31]–[33] and hospital care and
outcomes for severe malnutrition [34]. They also complement evidence from middle-income
settings where a multifaceted intervention resulted in substantial improvements in
two key obstetric practices [35]. Our data, however, to our knowledge represent the first
major report examining national adaptation and implementation of a broad set of
rural hospital care recommendations. They are relevant to many of the 100 countries
with IMCI programmes where rural hospitals have important roles supporting primary
health care systems [36] and in helping to reduce child mortality [37],[38].

However, while change in simple process indicators was reasonably consistent in
intervention sites, in control (partial intervention) sites, changes were more
varied, even within hospitals (notably site H8). Certain indicators, e.g., PITC for
HIV, improved in only three of four intervention sites, and even there only steadily and slowly.
Thus, while the full intervention may promote consistency, there was still
substantial evidence of variation across indicators, across sites, and across time.
Such variability is consistent with emerging debates drawing on theories of
complexity, chaos, and change emphasizing the effect of interactions with contexts
[39]–[41] and suggesting that understanding can be informed by
parallel qualitative enquiry [42]. Data collected during this study on barriers to use of
guidelines [18]
and views on supervision, feedback, and facilitation [14] together with published
literature [43]
suggest to us that poor or slow uptake may be associated with a requirement for
greater personal or organizational effort to change, the view that a task is not
directly related to care of the immediate illness, or, in intervention sites, an
area unlikely to be subject to local evaluation.

Limitations

Our study has limitations. Hospitals were not selected at random from the set of
all eligible hospitals, for logistic reasons and because random selection of a
small number of clusters might not have produced balance or guaranteed
representativeness at baseline. Hospitals assented to participation and
randomization, but we were not able to engage communities in this process [44], and they
and survey teams were aware of intervention allocation. The latter is a
potential problem with results based largely on retrospective review of records.
The discrepancy between documentation and performance presents a particular
threat at baseline before efforts in all sites to improve clinical notes.
Prescription data are less susceptible to this limitation, however, and improved
prescribing paralleled improvement in assessment indicators. Efforts to minimize
possible observation bias at the point of data collection included the use of
structured inventory forms, standard operating procedures, and extensive
training in survey methods. With only four hospitals per group, attempts to
adjust for baseline imbalance may also have had only limited success. However, to
facilitate scrutiny we report on the context of intervention [19],[20], its
delivery and adequacy [12], the views of intervention recipients [18], and
detailed site-specific data (see Tables S1, S2, S3, S4, S5) and
suggest that all are considered for a complete interpretation of this study of a
complex intervention.

Replication and Scaling Up

Demonstrations that a similar intervention package is effective in other settings
would strengthen the evidence supporting widespread adoption. While few such
studies have been reported, we note the recent success of
multifaceted interventions in middle- and high-income countries [35],[45]. However,
standardizing complex interventions may be difficult, if not impossible, given
the important role of context in shaping mechanisms and outcomes [46]. For
this reason, future reports will attempt to provide detailed insight into how
and why this intervention met with general but varying degrees of success. If
our results are deemed credible, however, the data we present have a number of
implications. Firstly, current efforts to implement and scale up improved
referral care in low-income settings need to go beyond the existing tradition of
producing and disseminating printed materials even when linked to training [15]. Instead
broader health system efforts, guided by current understanding of local contexts
and capabilities and theories of change, are required.

Within Kenya it would obviously be a mistake to consider that the intervention
package tested can be scaled up simply by aiming for much broader coverage with
the training course we designed. Effectiveness has been demonstrated only for
the multifaceted intervention. Thus, scaling up should aim to provide all inputs,
not just guidelines, job aides, and introductory training. However, providing
regular support supervision and performance feedback related to child and
newborn care at the first referral level is not routine. Resources and systems for
supervision need strengthening and supervisors themselves will need training and
organizing. Routine information systems are inadequate to generate the data
required to evaluate care, and capacity for conducting and disseminating
analyses as part of routine feedback is largely absent. The role of facilitators
is also not one that currently exists. Although the roles required could perhaps
be played by senior departmental staff, the lack of human resources means such
tasks cannot simply be added to already busy jobs [19]. Furthermore, the skills or
desire to facilitate change are not necessarily present amongst such mid-level
managers.

Countries other than Kenya considering adopting this approach may face similar
limitations. In addition, they may need to tailor some intervention components to
their particular setting. For example, the detail of a clinical guideline or job
aide or approach to training may need to reflect available resources or local
evidence. However, such adaptation would need to be complemented by careful
consideration of how systems can be made ready to support implementation of new
practices and improved quality of care. We would suggest this includes due
attention to influencing the institutional culture and context of rural
hospitals, although willingness to invest in more integrated approaches often
seems lacking [47]. Finally, before making decisions on implementation,
policy makers increasingly require carefully collected and reported
cost-effectiveness data. Such a report is in preparation. Considering only the
financial costs of specific inputs, for example the typical 5-d training course
for 32 participants at approximately US$5,000 [13] or the annual cost of a
facilitator at less than US$5,000 [18], is of some value but
insufficient for prioritizing resource use.

Conclusion

Our findings provide strong evidence that a multifaceted intervention can improve
use of guidelines and, more generally, the quality of paediatric care. Cost data
will help determine whether this implementation model warrants wider
consideration as one approach to strengthening health systems in low-income
settings.

Supporting Information

Figure S1

Effect of intervention on the processes and outcome of care within each
hospital during survey 1 through survey 6 (baseline to 30 mo follow-up).

Text S1

Text S2

Acknowledgments

The authors are grateful to the staff of all the hospitals included in the study and
colleagues from the Ministry of Public Health and Sanitation, the Ministry of
Medical Services, and the KEMRI/Wellcome Trust Programme for their assistance in the
conduct of this study. In addition we are grateful for the input of Martin Weber,
Alexander K. Rowe, Lucy Gilson, R.W. Snow, Kara Hanson, Bernhards Ogutu, and Fabian
Esamai in the initial stages of this work. John Wachira, Violet Aswa, and Thomas
Ngwiri helped develop and implement the training, ETAT+. Our thanks go to Jim
Todd, Elizabeth Allen, and Tansy Edwards for advice on analyses and comments on the
manuscript. The work of the hospital facilitators A. Nyimbaye, J. Onyinkwa, M.
Kionero, and S. Chirchir is also acknowledged and this report is dedicated to M.
Kionero who tragically died shortly after the study. This work is published with the
permission of the Director of KEMRI.

Abbreviations

CI

confidence interval

IMCI

Integrated Management of Childhood Illnesses

PITC

provider initiated testing and counselling

Footnotes

Santau Migiro, Wycliffe Mogoa, and Annah Wamae declared that they are employed by
The Kenyan Government within the Ministries of Health and have responsibilities
for child and newborn health. Mike English declares: 1. I coordinated the
development of the multifaceted approach prior to its being tested in the trial.
2. I help coordinate provision of ETAT+ training on a voluntary basis (one
component of the intervention) as attempts are made to provide the training to
non-trial hospitals and within the University of Nairobi to trainee
paediatricians and medical students. 3. I am attached to KEMRI and employed by
Oxford University. 4. I sit on an advisory committee (unpaid) to the government
of Kenya, the Child Health Interagency Coordinating Committee and have acted as
a technical advisor to WHO on several occasions in the child and newborn health
arena. There is no commercial aspect to the development of the training and
other aspects of the intervention. In fact all training materials are freely
available on the website http://www.idoc-africa.org. All the remaining authors have
declared that no competing interests exist.

Funds from a Wellcome Trust Senior Fellowship awarded to Mike English (#076827)
supported intervention development, provision of guidelines, and job aides and
all the research components. Routine hospital care was provided by the
Government of Kenya. The funders had no role in the design, conduct, analyses,
or writing of this study or in the decision to submit for publication.