Ensuring Quality in Contracted Child Welfare Services

This paper was prepared by Planning and Learning Technologies, Inc. in
partnership with The Urban Institute for the Office of the Assistant Secretary
for Planning and Evaluation, U.S. Department of Health and Human Services,
under contract HHSP233200600242U. The opinions expressed in this paper are
those of the authors and do not necessarily represent positions of the U.S.
Department of Health and Human Services.

This issue paper was written by Nancy Pindus and Erica Zielewski of The Urban
Institute, Charlotte McCullough of McCullough and Associates and Elizabeth
Lee of Planning and Learning Technologies, Inc. Paper review and comments
were provided by Crystal Collins-Camargo, University of Louisville, Kentucky.

This document is available online at:
http://aspe.hhs.gov/07/CWPI/quality

Privatizing a child welfare service does not relieve the public child welfare
agency of its responsibilities to ensure that children and families are well
served and that tax dollars are effectively spent. In addition to
developing and implementing policy, the public agency continues to be accountable
for high-quality and effective services that comply with state and Federal
rules, and achieve specified outcomes and results (Freundlich & Gerstenzang,
2003; McConnell, Burwick, Perez-Johnson, & Winston, 2003).

This is no easy undertaking. States struggle to develop thorough quality
assurance systems, partly because evidence about best practice
in this area is in short supply. In 2007, the Children's
Bureau's Quality Improvement
Center on the Privatization of Child Welfare Services (QIC PCW) found
that public agency administrators struggle to develop quality assurance systems
that systematically review contract performance while enabling contractors
to creatively manage the services they are enlisted to
provide.[1]

The purpose of this paper is to help public agency child welfare administrators
better monitor and assure the quality of contracted services within the context
of the agency's overall quality assurance/improvement system.
This paper explains the importance of planning contract monitoring and
accountability systems and training staff to be effective contract
monitors. It describes the types of monitoring activities, as well
as methods for collecting and using monitoring information. The paper provides
examples of some of the decisions that must be made about what will be measured,
and how child welfare agencies have worked collaboratively with providers
to develop realistic and constructive approaches to contract monitoring.

An overarching theme of this and other papers in the series is
partnership. When public agencies contract for services, they are seeking
one or more partners to share the risks, rewards, and responsibilities of
delivering services to children and families in the child welfare system.
To the extent allowed by procurement rules, a collaborative public-private
planning process can ensure that consensus is reached on the broad goals
and expectations of the quality assurance and monitoring systems.

This is the sixth and final paper in a technical assistance series. The project
was funded in 2006 by the Office of the Assistant Secretary for Planning
and Evaluation, U.S. Department of Health and Human Services (DHHS, ASPE).
The paper series is designed to provide information to state and local child
welfare administrators who are considering or implementing privatization
reforms.

For the purpose of this paper series, privatization is
defined as the contracting out of the case management function, with the result
that contractors make the day-to-day decisions regarding the child and
family's case. Typically, such decisions are subject to public
agency and court review and approval, either at periodic intervals or at
key points during the case. However, the following discussion about contract
monitoring is applicable to any public/private partnership, regardless of
the extent to which the service has been privatized.

This paper builds on information already presented in other papers in this
series and makes reference to the other papers throughout. These are available
online at
http://aspe.hhs.gov/hsp/07/CWPI/.

Assessing Site Readiness: Considerations about Transitioning to a Privatized
Child Welfare System

Program and Fiscal Design Elements of Child Welfare Privatization Initiatives

Evolving Roles of Public and Private Agencies in Privatized Child Welfare
Systems

Evaluating Privatized Child Welfare Programs: A Guide for Program Managers

Preparing Effective Contracts in Privatized Child Welfare Systems

This paper series incorporates research conducted under the
Quality Improvement Center
on the Privatization of Child Welfare Services (QIC PCW), funded in 2005
by the Childrens Bureau, Administration for Children and Families,
U.S. Department of Health and Human Services. It also draws from the
research on privatization in other, closely related social services
fields. Additional information for this paper comes from field experience
and telephone discussions with state and county child welfare administrators
and private providers.

The role of monitoring in child welfare is a critical but complex one. A
1997 U.S. Government Accountability Office (GAO) study found that monitoring
contractors' performance was "the weakest link in the privatization
process" (U.S. GAO, 1997, 14). Despite the importance of
monitoring, most studies conducted during the 1990s noted a host of problems
with public agency approaches to monitoring, including: staff shortages in
the public agency's monitoring units; a lack of in-house expertise in
effective contract management; inconsistent approaches resulting in a tendency
for monitoring to be overdone or underdone from one contract to another;
and a disconnect between an agency's contract monitoring work and its
overarching quality assurance and improvement activities (Freundlich &
Gerstenzang, 2003; McCullough & Freundlich, 2007).

The National Child Welfare Resource Center for Organizational Improvement
(O'Brien and Watson, 2002) notes that quality assurance (QA) is the
term most often used by child welfare administrators and senior managers
to describe efforts to assess their agencies' success in working with
children and families. The NRC notes that, in practice, QA has had no consistent
meaning across child welfare agencies. Until recently, QA systems consisted
largely of case record audits to monitor and report on the extent of compliance
with state and Federal requirements. QA efforts have ranged from administrative
case review systems, to periodic research studies, to a review of regular
statistical compliance reports, and to comprehensive initiatives involving
all these elements and more.

While all public agencies conduct some form of quality assurance to review
the quality and impact of their directly delivered services, state systems
differ in the breadth and depth of this work. It is noteworthy that
in the first round of Child and Family Services Reviews (CFSRs), a full one-third
of states were found to be out of substantial compliance with the systemic
factor requiring a statewide QA
system.[2]

Additionally, in many states, a child welfare agency's QA system focused
primarily on the quality of services delivered directly by the public agency.
Results of those efforts were not connected to the findings from contract
monitoring that was done by small contract monitoring units operating on
the margins of the agency. The monitoring function and resulting reports
often had minimal impact on the services delivered by the agency or on future
procurement decisions (McCullough & Freundlich, 2007). Several early
studies on privatization found a general lack of accountability and performance
criteria in privatized contracts (Nightingale and Pindus, 1997; Petr and
Johnson, 1999); and without performance targets, it is difficult to hold
providers accountable (Freundlich & Gerstenzang, 2003).

The isolation of contract monitoring was only one part of the problem.
Perhaps an even larger issue was the compliance-driven nature of
traditional monitoring efforts, which focused on ensuring that contractors
did not do anything wrong rather than on any expectation that they might
do things better. Monitors looked at whether providers served the expected
number of clients and delivered the expected number of service units, not
whether children and families benefited from the services they received or
whether the system operated more effectively.

In recent years, states have begun to invest more resources in contract
monitoring and quality assurance systems, and to build more robust systems.
Arguably, the most powerful motivating factors for states to improve and
integrate contract monitoring and QA activities have been the passage of
the Adoption and Safe Families Act (ASFA) and the implementation of the
CFSRs. With these two events, there is now a common set of outcomes
and systemic factors on which all states are assessed. These outcomes and
measures typically provide the foundation for the development of outcomes
and performance measures for inclusion in provider contracts and serve as
the focus for monitoring and quality improvement efforts (McCullough and
Freundlich, 2007).

The CFSRs, initiated in 2000, are a three-stage process consisting of a Statewide
Assessment, an on-site review of child and family services outcomes and program
systems, and a program improvement plan. The reviews are structured to help
states identify strengths and areas needing improvement within their agencies
and programs. They address three outcome areas (safety; permanency; and child
and family well-being) and seven systemic factors (statewide information
system; case review system; quality assurance system; staff and provider
training; service array and resource development; agency responsiveness to
the community; and foster and adoptive parent licensing, recruitment, and
retention).

Once the state has completed the first two stages, it prepares a program
improvement plan to address the areas that have been found to be
deficient. The Childrens Bureau monitors progress on the plan
on an ongoing basis and works with the state to determine when the issues
needing improvement have been addressed. In addition to providing states
with a common set of expectations, the CFSRs also provided a roadmap for
how they could monitor progress. For some states, the CFSR was the impetus
for new types of collaborative relationships with private agencies, as described
later.

The Federal Government has also encouraged improved tracking and oversight
of cases by providing enhanced funding for Statewide Automated Child Welfare
Information Systems (SACWIS), developing reporting requirements for the
collection of adoption and foster care data through the Adoption and
Foster Care Analysis and Reporting System
(AFCARS),[3] and creating requirements
for citizen review panels and peer reviews in the Child Abuse Prevention
and Treatment Act (CAPTA).[4]

Another separate but related factor that has strengthened quality assurance
for contracted services is the expanded use of performance-based
contracts (PBC). States and jurisdictions use performance-based contracts
to improve agency outcomes and, in doing so, focus more resources on
the quality and impact of contracted services. There are several parallels
between performance-based contracting and QA efforts. A well developed
and implemented PBC initiative inherently supports agency QA efforts through
similar processes of identifying agency goals and measures, collecting data,
and modifying systems (or contracts) to better align contract incentives
with agency goals (Lee, Allen and Metz, 2006). Contracts are
monitored, and in many cases contractors are rewarded, based on child and family outcomes.
These and other risk-based contracts require that special attention
be given to contract monitoring because providers are often at financial
risk if they do not meet performance expectations. For a more detailed
discussion of this issue, see Topical Paper #2,
Program and
Fiscal Design Elements of Child Welfare Privatization Initiatives
(http://aspe.hhs.gov/hsp/07/CWPI/models/index.shtml).

As a result of all of these factors, today, many public and private child
welfare agencies are collecting a range of information on program quality,
practice, client outcomes, cost-effectiveness, and satisfaction and have
more sophisticated tools and skills to do this. In most states, quality
assurance efforts involve both quantitative measures (e.g., client outcomes, worker
caseloads, casework activities) and qualitative measures (e.g., how well
stakeholders believe the system is working). Using these data, agencies identify
problems and implement improvement strategies on an ongoing basis. As a way
of differentiating these efforts from traditional compliance monitoring,
the new approaches are often called continuous quality improvement (CQI)
systems. The new approach improves upon traditional compliance monitoring in
three ways (O'Brien and Watson, 2002):

Quality improvement programs are broader in scope, assessing practice and
outcomes, as well as compliance.

Rather than simply determining whether services were delivered as required or
whether contractors were in compliance with federal, state, and agency
requirements, quality improvement programs attempt to use data, information,
and results continually to effect positive change.

Quality improvement programs engage a broad range of internal and external
partners in the quality improvement process, including top managers, staff
at all levels, children and families served, and other stakeholders.

Many states have enhanced monitoring and QA efforts by incorporating elements
of the CFSR process into their quality improvement and contract monitoring
systems. For example, New Mexico used the CFSR process as a rallying
call to bring all stakeholders to the
table.[5] The process, which has been
evolving since 2000, includes both internal and external stakeholders, and
takes a systems perspective to quality assurance, quality engineering, and
quality improvement. The indicators included in CFSRs enabled all stakeholders
to talk about their measurements in a common way, to understand what others
are trying to accomplish, and to make decisions about priorities, including
the allocation and reallocation of resources. The state has included CFSR
outcomes in Requests for Proposals and providers' contracts. Working
with providers to educate them about the CFSR goals has helped the state
to redirect resources to families at greatest risk and to services that are
most closely related to the CFSR goals. Because the state uses a data-driven
approach to identify its child welfare needs, legislators and providers
have been more open to alternative approaches. The aim is not to shut providers
down, but rather to have providers extend their mission in order to more
directly address CFSR goals. In cases where a provider may have difficulty
changing focus, the New Mexico Children, Youth, and Families Department has
worked with them to identify other funding sources or help them to change
their work.

Like New Mexico, many other states are using CFSR outcomes and indicators
in contract requirements and requiring monthly or quarterly performance reports
from contractors. These reports, not unlike CFSR data profiles, allow contract
monitors and contractors to continually examine aggregate data to identify
trends and possible problems. Desk reviews and problem-solving meetings may
be supplemented by onsite visits/interviews. Case record reviews, often modeled
after the onsite portion of the CFSR, allow the contractor and contract monitor
to gather qualitative information that is not evident from reported data.
Both sources of information help to drive continuous quality improvement
efforts.

Other initiatives to improve quality assurance of contracted services include
the three projects funded under the QIC PCW. Three states (Florida,
Illinois and Missouri) have designed and implemented contracted services
that integrate performance based contracts with expanded quality assurance
systems. The pilot programs are aimed at using data to identify quality
practice techniques and improve both practice and client outcomes.
Each project has identified a range of outcomes and other indicators -- often
practice standards such as levels of visitation and/or contact between workers
and clients -- that appear to be related to outcome achievement. These
outcomes and indicators are incentivized in the performance-based contracts,
and performance data are monitored through expanded quality assurance systems.

Ideally, public agencies design their specific contract monitoring/QA approach
while they are designing the service model that is to be contracted.
Service goals and objectives, and reporting requirements, should be clarified
at the outset and incorporated into contracts. Decisions about what is to
be monitored, how monitoring is done, and how the information will be used,
should be part of the initial contract discussions. These issues are addressed
in Topical Paper #5,
Preparing
Effective Contracts in Privatized Child Welfare Systems
(http://aspe.hhs.gov/hsp/07/CWPI/contracts/index.shtml).

This section outlines issues that agencies should address in building their
contract monitoring infrastructure. It also examines how the public agency
can design and implement its monitoring activities in partnership with service
providers, and how the responsibilities for quality assurance and monitoring
can be shared by public and private agencies and other oversight bodies.

Florida Department of Children and Families:
Contract Risk Assessment Guide

Consistent and uniform risk assessment permits the Contract Oversight
branch of DCF to efficiently apply its contract monitoring resources
systematically to the areas of greatest need.

What factors determine the level of risk to DCF?

Risk for DCF contracted service delivery is classified into four weighted
categories, including:

Annual Dollar Value of the Contract: the higher the annual dollar value,
the higher the risk the Department assumes in contracting with the provider

Nature of Service: weights are assigned to the type of service depending
on the risk associated with each service category

Prior Provider Performance and Corrective Actions: providers who have
previously had serious financial, administrative, or program deficits or
have had difficulty being responsive to Department requirements are considered
to present a higher risk

Last Contract Monitoring Visit: the period of time since the last visit
is a heavily weighted factor in the risk assessment, with a longer time
period presenting a higher risk
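
The weighting logic described in the guide can be illustrated with a brief sketch. The category weights, 1-to-5 scoring scale, and tier thresholds below are hypothetical placeholders rather than DCF's actual values; the sketch only shows how weighted category scores might roll up into a single risk level that drives the choice between on-site monitoring and a desk review.

# Illustrative sketch of a weighted contract risk score.
# Category weights, scores, and thresholds are hypothetical, not DCF's actual values.

CATEGORY_WEIGHTS = {
    "annual_dollar_value": 0.35,
    "nature_of_service": 0.25,
    "prior_performance": 0.25,
    "time_since_last_visit": 0.15,
}

def risk_score(category_scores: dict) -> float:
    """Combine per-category scores (1 = low risk to 5 = high risk) into a weighted score."""
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

def risk_level(score: float) -> str:
    """Translate the weighted score into a monitoring tier (thresholds are illustrative)."""
    if score >= 4.0:
        return "high - annual on-site monitoring"
    if score >= 2.5:
        return "moderate - on-site visit every other year"
    return "low - annual desk review"

example = {
    "annual_dollar_value": 5,    # large contract
    "nature_of_service": 4,      # higher-risk service category
    "prior_performance": 2,      # few prior corrective actions
    "time_since_last_visit": 3,  # moderate time since last visit
}
print(risk_score(example), risk_level(risk_score(example)))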

How public agencies monitor contractors is as varied as the types of contracts
that public agencies have with private agencies. For each contract,
the public agency must have a monitoring plan, which lays out the steps for
monitoring, as well as the methods and techniques to be used. Ideally, the
plans also clearly define the roles of public agency staff and private
contractors in ensuring accountability.

The public agencys monitoring plan defines precisely what a
government must do to guarantee that the contractors performance is
in accordance with contract performance standards (Eggers, 1997,
22). Eggers (1997) lays out steps that are important to designing a
monitoring plan. The monitoring plan should be quantifiable and specific,
meaning that it includes information about the reporting requirements, the
frequency and number of meetings to be held, complaint procedures, and a
way to access the providers records if needed. A monitoring plan
should also include information about the number of individuals who are required
to monitor the contract, who those individuals are, and what their
responsibilities should be. Finally, the monitoring plan should tailor the
monitoring tasks to the specific services being provided and/or the outcomes
being measured. Different services and outcomes require different types
and levels of monitoring, which must be taken into account in the plan.
Similarly, different providers may need different monitoring structures.
For example, Florida bases the frequency of its on-site visits on the risk
assessment of the contractor. Those contractors that do not receive
an on-site visit receive annual desk reviews (see preceding text box).

In many states, the key elements in monitoring plans are prescribed by statute
or administrative rule. In Florida, for example, the Department of Children
and Families (DCF) is required to "adopt written policies and procedures
for monitoring the contract for the delivery of services by lead community-based
providers [that] at a minimum, address the evaluation of fiscal
accountability and program operations, including provider achievement of
performance standards, provider monitoring of subcontractors, and timely
follow-up of corrective actions for significant monitoring findings related
to providers and subcontractors" (Florida Statute 409.1671[2][a]).

As Eggers (1997) points out, monitoring should be viewed as a preventive
rather than an adversarial function. The contractor should be considered
a strategic partner and be given incentives to innovate, improve, and deliver
better service. For this to happen, a relationship of trust must be built
between the public agency and the contractor, and performance terms must
be mutually understood. Ideally, this begins in the planning stage with
developing a monitoring system that is clearly understood and accepted by
both public and private agencies. The process should include designating
individuals from the public agency and from the contractor staff who will
communicate on a regular basis, such as through monthly meetings or conference
calls.

In practice, state procurement regulations and practices vary with respect
to the timing and extent of communication between agency officials and
contractors prior to the award of a contract. If not prohibited, some agencies
involve contracted providers and other community stakeholders in the process
of determining which outcomes to measure and in defining a collaborative
approach to quality assurance and contract monitoring. There are several
examples of states that have used a collaborative decision making process
to develop performance measures, penalty and reward mechanisms, and feedback
loops.

One example is Missouri. Prior to initiating performance-based
contracting, the state undertook a two-year developmental process
to involve community stakeholders in framing the content for service contracts.
Key stakeholders included executives of private contracting agencies, judges
and other juvenile court personnel, and representatives of advocacy groups.
The resulting contracting model provides for strong partnership communication
and routine feedback via interactions between contracting agencies and the
administrators in the Missouri Children's Division (Watt et al., 2007).

Contractors can provide helpful advice in developing the performance indicators
that they are meant to achieve. An advantage to this approach is that
it lessens the likelihood of misunderstandings over the nature of the performance
measures during the contract period (Eggers, 1997). Furthermore, successful
collaborative planning often carries through to implementation.

i. The Organization and Roles of Public
Agency Staff

Individuals responsible for monitoring have different titles from one state
to another. While several different people with similar titles might
be responsible for different aspects of monitoring within a state, it is
not uncommon for their roles to blur in actual practice. In some states,
all staff responsible for monitoring reside in the same division within the
central office or in the district/region. In other states, staff might operate
out of totally different divisions within the public agency, with contract
compliance being part of a procurement unit, while program monitoring is
operated out of a program/service or licensing division. Some jurisdictions
rely upon a single individual to be the primary monitor; others have a team
approach.

While there is no evidence that one public agency staffing approach is preferable
to another, it is important for staff operating across divisions to communicate
and collaborate in the timing and frequency of their quality assurance or
monitoring activities, share findings, and strive to reduce the duplicative
and overlapping auditing and program monitoring functions that have proven
problematic in some privatization initiatives. In Florida, when the burden
of overlapping QA/monitoring became clear, DCF established a workgroup to
streamline monitoring/audit activities, including efforts to coordinate
concurrent Title IV-E, mental health, Medicaid, licensing, and community-based
care evaluation activities (Freundlich & Gerstenzang, 2003).

In addition to the need for strong communication and collaboration across
public agency divisions, it is critical to have the support and direction
of upper management in the design and implementation of monitoring
efforts. Strong leadership promotes consistent messages throughout
the public agency and to providers, and facilitates allocation of sufficient
resources for monitoring and support efforts. Discussions with
several states suggest that contract monitoring and quality assurance models
are still works in progress; states are working to establish the best structures
for their programs. As described below, Florida provides a good example
of a state working to improve its system based on lessons learned from its
prior efforts.

ii. The Private Agencys
Responsibility

To this point, the discussion has focused primarily on the public agency
as the entity monitoring its contract with the private provider. However,
it is important to note that most recent contracts require private agencies
to have the capacity to monitor their own performance and use a robust quality
assurance/improvement system to identify and remedy problems. Private
agencies with performance-based contracts often rely upon methods similar
to those used by their public agency counterparts, namely
ongoing review of performance data, chart reviews, focus groups, problem-solving
mechanisms at the practice and systems levels, and satisfaction surveys to
tell them what is working and what needs improvement.

Prior to 2008, Floridas Department of Children and Families (DCF) had
been operating an integrated tiered approach to monitoring its local
community-based care (CBC) agencies. Floridas monitoring system
involved three tiers:

Tier 1  Lead agencies developed and implemented a Quality
Management Plan that involved minimum requirements established by DCF.
Lead agencies reviewed their in-house and subcontracted services and reported
the findings back to DCF.

This tiered approach to monitoring was designed to give the CBC lead agencies
the flexibility to monitor their contracts, but also to provide a structure
in which DCF could oversee how the system was working.

In practice, the tiered monitoring system was not as effective as planned
in tracking lead agencies' and subcontractors' performance (OPPAGA, June
2008). For instance, lead agencies were not completing their Tier 1
quality assurance reviews in a timely manner (and often were not reviewing the
required number of cases). This resulted in significant delays between
Tier 1 and Tier 2 reviews, which made it difficult for state staff to validate
earlier findings, that is, to match the quality assurance data collected
by the lead agency with what was currently being reported in case records.

In consultation with Chapin Hall Center for Children, Florida restructured
its oversight procedures to improve its ability to track contractual compliance
and agency performance; some of the major changes include (OPPAGA, June 2008):

Collecting fiscal and program information from lead agencies each quarter.
Program indicators include those that most affect lead agency expenditures
including caseloads, case entry rates, and proportion of cases entering foster
care;

Developing new quality assurance implementation and oversight teams made
up of lead agency and state staff that conduct quarterly reviews of the lead
agencies. Using a new quality assurance instrument with a common set
of quality assurance standards, Regional and lead agency staff conduct
side-by-side reviews of a subset of cases to help interpret information in case
files;

Assessing child well-being through the new on-site quality assurance instrument,
which contains a series of questions on educational and health services and
whether these services are meeting children's needs;

Requiring case management supervisors to review 100% of cases on a quarterly
basis using a qualitative discussion guide and then providing timely feedback
to case workers on the quality of services and corrective action if needed;

Targeting practice trends that had not shown improvement, specifically placement
stability, recurrence of abuse and neglect, and reentry into out-of-home
care; and

Offering additional training to public and private agency staff on data analysis
and means of identifying relationships between outcomes, service delivery,
and service quality.

iii. Oversight Bodies

Some states supplement their staff-driven and private agency quality assurance
and contract monitoring activities with oversight by independent community-based
stakeholder bodies. These groups are charged with reviewing overall agency
performance and helping to identify and remedy barriers to success. Some
of these bodies are created by the public agency, while others are appointed
by the Governor. Many states have legislatively mandated bodies charged
with helping to continually review performance of both the public agency
and its contract providers.

For example, when Milwaukee County's child welfare system was taken
over by the state, the state legislature created the Partnership Council
by statute. The Partnership Council is an independent advisory body
composed of state legislators, county board members, and gubernatorial
appointees. Among those appointees are the Children's Court Presiding
Judge, medical leaders, public school leaders, child advocates, public policy
advocates, and guardian ad litem representatives. All meetings
include public and private
partners.[6] One member of
the Partnership Council observed, "As you look at the three-legged
stool holding up any system, community involvement and accountability
are good things. Having an independent body assist in bringing public
and private partners to the table to create improvements has been very effective
in Milwaukee."[7]

Several states and jurisdictions have formed institutional forums for resolving
problems and evaluating the public/private partnership. For example, one
committee may be responsible for operational issues, and one for technical
issues, while a senior executive committee addresses strategic issues. Illinois
uses such a strategy. The Child Welfare Advisory Committee (CWAC),
created by the Illinois General Assembly, meets quarterly to discuss any
and all issues related to child welfare in Illinois. This includes
any issues related to contracts and contract monitoring. According
to the current director of DCFS, the CWAC's meetings set the stage
for collaboration between the private providers and DCFS. Moreover,
these meetings "keep the vehicle open for [the] private agencies to
raise any issues or concerns" (McEwen, 2006a). The CWAC's
meetings provide an important avenue for private providers and the public
agency to come together to discuss Illinois' child welfare system.

In other states, there are ongoing, less formal, public-private communication
mechanisms such as monthly meetings between the public agency and its contract
providers to share data, communicate new information on policies or procedures,
and discuss strategies for improvement. For example, the Tennessee Department
of Children's Services (DCS) holds monthly reviews of performance data
with contractors. A spokesperson for Cornerstone, a child welfare service
provider that has a performance-based contract with Tennessee DCS, indicates
that the monthly reviews and the relationship with DCS are a critical part
of the success of the contract because they have helped the agency
meet its targets. Similarly, in Missouri, the state agency
meets regularly with private partners, alternating between the program directors
(who manage the contract daily) and the CEOs (who bring big-picture issues
to the table). Communication occurs frequently between the
Department's oversight and management staff and the respective
contractors. A CQI process has been implemented locally with the
contractors, in which problem-solving between the public/private partners
occurs on issues that arise with respect to implementing the foster care
contract.[8]

Monitoring efforts can focus on different aspects of a contractor's
performance, including:

Compliance with contract terms and state and federal requirements

Fiscal performance

Case decision making and/or collaborative reviews

Performance

i.
Compliance Monitoring

Public agencies monitor a private provider's compliance with various
state and federal regulations and with the terms of the contract.
As noted, until a decade ago compliance monitoring was the primary focus
of contract monitoring. Monitoring compliance is often tied to monitoring
a provider's processes. For instance, the Texas child welfare
agency requires a contractor to maintain sufficient records that adequately
account for the use of awarded funds and to provide reasonable evidence that
the service delivery complies with contract provisions (Texas Department
of Family and Protective Services, 2008). Compliance is included as
part of its programmatic monitoring, and involves the following
activities:

Reviewing the service provisions of the contract to determine what the contractor
is to provide and the desired quality

Reviewing the contractors reports and other materials to determine
if services are being provided

Interviewing direct delivery staff and others to determine if the services
are being performed according to the contract (Texas DFPS, 2008).

ii.
Fiscal Monitoring

Public agencies are responsible for ensuring that contract dollars are spent
appropriately. Agencies vary with regard to whether fiscal monitoring is
conducted by a separate unit in state government, by the child welfare
contracting agency itself, or by an independent audit (paid for by the private
agency), and agencies differ in the level of detail of oversight required.
At a minimum, fiscal monitoring focuses on whether program costs,
including administrative costs, are reasonable and necessary to achieve program
objectives. It involves the following activities (a simple billing check is
sketched after this list):

Reviewing the contractors bills when they are received to determine
if appropriate units of measure are reported and that costs (units x rate)
are correct;

Comparing budgets and/or budget limits to actual costs to determine if the
contractor's expenditures are likely to be more or less than budgeted;

Obtaining reasonable documentation that services billed were actually delivered
according to the contract; and

As appropriate, comparing bills with supporting documentation to determine
that costs were allowable, necessary, and allocable.
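
As noted above the list, the arithmetic behind the first two activities can be sketched in a few lines. The rate, budget ceiling, and invoice figures below are hypothetical; the sketch simply flags a bill whose amount does not equal units times the contracted rate, or that would push spending past the approved budget.

# Illustrative fiscal check of a contractor invoice: units x rate and budget-to-date.
# Field names, rates, and amounts are hypothetical.

CONTRACT_RATE_PER_UNIT = 125.00      # contracted rate per service unit
APPROVED_ANNUAL_BUDGET = 300_000.00  # contract budget ceiling

def check_invoice(units_billed, amount_billed, spent_to_date):
    findings = []
    expected = units_billed * CONTRACT_RATE_PER_UNIT
    if abs(amount_billed - expected) > 0.01:
        findings.append(f"billed {amount_billed:.2f}, expected {expected:.2f} "
                        f"({units_billed} units x {CONTRACT_RATE_PER_UNIT:.2f})")
    if spent_to_date + amount_billed > APPROVED_ANNUAL_BUDGET:
        findings.append("payment would exceed the approved budget; "
                        "compare projected expenditures with the contract ceiling")
    return findings or ["no exceptions noted; verify supporting service documentation"]

print(check_invoice(units_billed=180, amount_billed=22_500.00, spent_to_date=260_000.00))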

iii.
Case Decision-Making Monitoring

Public agencies can also monitor the case decision-making process through
collaborative reviews with providers. In some states, the public agency
works very closely with private providers to make decisions about cases on
an ongoing basis. This dual case management approach is used in places
like Philadelphia, Pennsylvania. For a detailed discussion of how
seven jurisdictions have divided and shared case management decision-making,
see Topical Paper #3
Evolving
Roles of Public and Private Agencies in Privatized Child Welfare
Systems (http://aspe.hhs.gov/hsp/07/CWPI/roles/index.shtml).

iv.
Performance Monitoring

Increasingly, with the expansion of performance-based contracts, performance
monitoring has become a central focus of most public agencies' monitoring
efforts. The U.S. GAO defined performance monitoring as "the ongoing
monitoring and reporting of program accomplishments, particularly towards
pre-established goals ... Performance measures may address the type or level
of activities conducted (process), the direct products and services delivered
by a program (outputs), and/or the results of those products and services
(outcomes)" (U.S. GAO, 1999, 6). Typically, performance targets in child
welfare are stated as increases or decreases in a specified factor, such
as a reduction in the average length of time a child stays in foster care,
or as other measures that are directly linked to CFSR measures.
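
A minimal sketch of checking such a target against administrative records follows. The baseline, target percentage, and lengths of stay are hypothetical; the point is only that a target stated as a reduction in a factor can be computed directly from case-level data and compared period to period.

# Illustrative check of a performance target stated as a reduction in a factor:
# here, average length of stay (in days) for children exiting foster care.
# Data, baseline, and target percentage are hypothetical.
from statistics import mean

BASELINE_AVG_DAYS = 540   # prior-period average length of stay
TARGET_REDUCTION = 0.10   # contract calls for a 10% reduction

exit_lengths_of_stay = [388, 612, 450, 730, 295, 510, 402, 620]  # current period

current_avg = mean(exit_lengths_of_stay)
target_avg = BASELINE_AVG_DAYS * (1 - TARGET_REDUCTION)

print(f"current average: {current_avg:.0f} days; target: {target_avg:.0f} days")
print("target met" if current_avg <= target_avg else "target not met")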

As previously noted, most public and private agencies use a myriad of methods
to assess performance, including desk reviews, case record reviews, site
visits/interviews, fiscal audits, customer satisfaction surveys, and independent
evaluations. Which methods a public agency uses to monitor its contracts
depends on the outcomes being measured, as well as other factors, such as
the level of monitoring required to ensure accountability and the funds available
to support monitoring activities. Examples from three jurisdictions are provided
below:

Kansas conducts annual administrative reviews, in which reviewers from the
public agency visit the contractors' premises to ensure adherence to
general contract requirements such as resource family licensing. Staff from
the Central Office review various contractor-produced reports, as well as
outcomes monitoring reports generated by the Children and Family
Services Division. They then perform analyses of the data to identify trends
in performance results.

New York City has developed an evaluation tool called EQUIP (Evaluation and
Quality Improvement Protocol). EQUIP pulls together information from
several sources, including administrative data, information from case record
reviews, interviews with child welfare clients and agency workers, and field
observations. All of these data are entered into the system to produce
an EQUIP score. This score, which is given to each agency, is used
to compare agency performance.[9] (A generic illustration of such a composite
score follows these examples.)

In Franklin County, Ohio, public agency staff are co-housed in the private
agencies, where they can conduct case reviews and work collaboratively on
strategies to improve performance. Public agency staff do not conduct home visits
or other activities that might be seen as undermining the managed care staff's
relationship with the families. Their role is to offer support and also to monitor
services and contract compliance.[10]
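
As referenced in the New York City example, several data sources can be rolled up into one composite score used to compare agencies. The sketch below is a generic illustration of that idea, not the actual EQUIP protocol; the component names, weights, and scores are hypothetical.

# Generic sketch of combining scores from multiple data sources into one
# composite used to compare agencies. Components and weights are hypothetical;
# this is not the actual EQUIP calculation.

COMPONENT_WEIGHTS = {
    "administrative_data": 0.40,            # e.g., outcome indicators from agency data
    "case_record_review": 0.30,             # findings from sampled case records
    "client_and_worker_interviews": 0.20,
    "field_observations": 0.10,
}

def composite_score(component_scores: dict) -> float:
    """Each component is scored 0-100; the composite is the weighted average."""
    return sum(COMPONENT_WEIGHTS[k] * v for k, v in component_scores.items())

agencies = {
    "Agency A": {"administrative_data": 82, "case_record_review": 74,
                 "client_and_worker_interviews": 90, "field_observations": 70},
    "Agency B": {"administrative_data": 68, "case_record_review": 80,
                 "client_and_worker_interviews": 75, "field_observations": 85},
}

for name, scores in sorted(agencies.items(),
                           key=lambda kv: composite_score(kv[1]), reverse=True):
    print(f"{name}: {composite_score(scores):.1f}")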

A critical part of contract monitoring is determining what information is
needed to monitor services, costs, and outcomes. The information needed is
based on answers to a few key questions:

What are contracts expected to achieve?

What needs to be measured to assess contractor performance in achieving goals?

Where will the data come from?

i.
Focus on What the Agency is Trying to Achieve through Contracting

Child welfare administrators need to examine the mission and goals for the
child welfare agency and the role of the private agencies in light of the Federal
outcomes of safety, permanency, and well-being. What is the problem the agency
is trying to solve through contracted efforts? What results are needed? What
program components and actions will lead to the desired results? Further,
how can performance measures in contracts and the monitoring of contracts
help the public agency to achieve these results? These questions should be
addressed first at the agency level, as part of the agency's continuous
quality improvement process, and then incorporated into contracts. Some
organizations have found it helpful to use flow charts or logic models to
illustrate the relationship between activities and expected outcomes. These
models can then be used to define measures and identify sources of information.
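
A logic model of this kind can be kept as a simple structured list that maps each activity to its expected output and outcome and to the data source that will measure it. The entries below are hypothetical examples of that mapping.

# Minimal sketch of a logic model linking activities to outputs and outcomes,
# which can then be used to pick measures and data sources. Entries are hypothetical.

logic_model = [
    {"activity": "Family team meetings within 30 days of removal",
     "output": "Percent of new cases with a team meeting held on time",
     "outcome": "Timely, family-driven case plans",
     "data_source": "SACWIS case plan and meeting dates"},
    {"activity": "Monthly caseworker visits with child and caregiver",
     "output": "Percent of children seen each month",
     "outcome": "Placement stability and safety in care",
     "data_source": "Contact notes; case record review"},
]

for row in logic_model:
    print(f"{row['activity']} -> {row['output']} -> {row['outcome']} "
          f"[measured from: {row['data_source']}]")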

ii.
Define the Measures

Performance measures can include both outcome and process measures.
Outcome measures focus on the results of services that contractors provide,
as well as intermediate indicators of success, such as rates of family engagement
in team meetings to develop case plans, timeliness of case plans, and
timeliness of reviews. Process measures focus on whether and how services
are delivered. They include measures such as the number of children served each
month, completion of assessments, accuracy of referrals, staff caseloads,
staff vacancies and training, and data reporting. Client satisfaction
can also be thought of as a process measure.

Selecting and operationalizing the performance measures that will be used
to determine success of the initiative is neither straightforward nor without
controversy. The challenge is to choose the right number of meaningful,
measurable outcome and performance measures that are both reliable and
valid. Measures must accurately show how well the initiative is meeting
its goals without overly burdening either the public agency or the contractor
with costly data collection, analysis, and reporting requirements. While
it is important not to overburden providers with too many reporting measures,
by focusing attention on too few measures, a contract may inadvertently encourage
providers to act in ways that contradict other program goals (McCullough
& Freundlich, 2007). For example, examining only the timeliness of
reunification or achievement of other permanency goals in the absence of
measures related to re-abuse and re-entry could create potential unintended
incentives in case management contracts: contractors may focus on timely
reunification without sufficient attention to ensuring lasting permanency.

Another key question relates to how the outcomes are selected. Many states
struggle to find the appropriate balance between using consistently defined
statewide measures that allow for comparisons across the state, and
community-specific measures that reflect local interests and needs (Freundlich
& Gerstenzang, 2003).

At the time that Requests for Proposals are developed and/or private agency
contracts executed, public agencies must be clear about the types of data
that will be gathered and how the information will be collected. The
two main types of data that an agency could potentially collect are:

Quantitative administrative data to illustrate aggregate trends in service
provision and client outcomes

Qualitative or descriptive data gathered from reviews of case notes, through
interviews and focus groups with children, families, agency staff, and key
external stakeholders, through stakeholder satisfaction surveys, or through
field observations

Each of these types of data helps the public agency answer different types
of questions. For instance, quantitative data answer questions such
as how many children exited care in a six-month period. Quantitative data
can provide consistent measures, across providers or over time, of service
provision and client outcomes that are missing from many other
methods of review. While important, these data do not provide any
information about the process of how children exit care, for example.
Case record and qualitative case reviews provide more information about the
"black box" of how a certain outcome is achieved. They can also
help ensure that processes are operating correctly. For instance, one goal
of a case record review might be to ensure that all licensed foster parents
have gone through appropriate background checks. Qualitative interviews and
focus groups provide an even greater level of detail about how well the system
is working. For example, a site visit which includes interviews with families
can provide information about the quality of services that may be missing
from a review that includes only quantitative data.

New York City and Illinois provide examples of how different data are used
to answer different performance-related questions. In New York City, the
Administration for Children's Services (ACS) addresses three areas of
contractor performance: agency processes, quality of service, and outcomes
for children. ACS uses its own administrative data to measure agency
processes and child outcomes, but uses other data sources (e.g., case record
reviews, interviews with clients and workers, and field observations) to
assess the quality of a contractor's services (Baron, 2003).
Similarly, Illinois DCFS uses different data sources to measure outcomes
in three key areas: permanency, stability, and family engagement.
DCFS relies on data compiled and analyzed by the Chapin Hall Center for Children
to measure outcomes related to permanency. To assess stability, the
state relies on data collected as part of the AFCARS system. Finally,
DCFS looks to the results of various case record reviews to monitor family
engagement (McEwen 2006a).

iii.
Address Data Collection, Communication, and Technology Issues

Researchers have noted that privatized initiatives have placed a premium
on access to real-time information to guide case-level decisions, contract
monitoring, and system planning (Freundlich & Gerstenzang, 2003; McCullough,
2005). However, there is abundant evidence that many initiatives launched
in the 1990s lacked the technology or staff resources to collect or manage
data as intended.

Good data systems are a critical part of any privatization effort.
Both public agencies and providers need data for operational decisions and
successful contract management. The MIS must be able to track performance
from a variety of different perspectives  client status, service
utilization, service/episode costs linked with case plan goals, treatment,
and outcomes. The system must be need-driven, flexible, user-friendly,
and capable of generating useful reports for all users (McCullough &
Associates, 2005).

However, until quite recently, most public agencies and contractors lacked
the infrastructure, data collection tools, and information systems needed
to monitor contracts comprehensively. As one study of states'
fiscal child welfare reform efforts notes, "Inadequate data on service
needs, utilization, costs, performance, and outcomes plague states'
attempts to implement child welfare fiscal reforms" (Westat and Chapin
Hall Center for Children, 2002, 68). This study examined the management
information systems of 23 initiatives in 22 states and found that few initiatives
had information systems necessary to provide timely and adequate data.
Systems were found to be unable to measure the impact of the reforms and did
not track all features of a program (e.g., service utilization, costs, client
status, and outcomes). The systems were rarely compatible across agencies
and service systems. This study, along with several others, concluded that
in order to manage and monitor new state reforms, significant investments
in hardware, software, and training were needed.

Investments in the information systems infrastructure required for comprehensive
contract monitoring are needed in both the public agencies and the contracting
agencies, and such efforts must be coordinated across organizations.
The need for coordination in these activities is sometimes overlooked.
In a recent QIC PCW listserv request for information about states' use
of SACWIS in a privatized setting, several states reported ongoing challenges
for private agencies with basic data entry and database access. Many
private agencies continue to conduct dual data entry into the state's
SACWIS and into their own case management systems to record all necessary
information for contracting
purposes.[11]

Despite the limitations noted above, it appears that a privatization initiative
can improve a state's ability to collect and analyze data over time.
In Kansas, for instance, regional foster care providers have developed extensive
case management systems to track clients and services, and are working to
track costs.[12] One of the
state's private providers developed a management information system,
which compiles data on a daily, weekly, and monthly basis. These data
are used to measure performance for each division within the agency on a
monthly basis. Each division has clearly established performance goals, and
these data are used in monthly meetings to determine whether the agency has
achieved these goals (Westat and Chapin Hall Center for Children, 2002).
Similarly, another study of privatization efforts across six states found
that in five, the private agencies over time created the capacity to collect,
analyze, and report data at a level that surpassed the previous public
agency's capacity (Freundlich & Gerstenzang, 2003).

Issues that must be resolved in planning a monitoring system include the
degree to which data systems are shared between the public agency and
contractors; the mechanisms used to translate and communicate data into useful
reports; and an assessment of the information needed by contractors operating
under various risk-sharing contracts.

Contractors in many child welfare privatization efforts have at least limited
viewing privileges to the data systems used by their public agency counterparts.
In some initiatives, contractors' access to data systems is notably
more extensive. In Florida, for example, private agencies with case management
responsibilities are required to use the State's data system to manage
eligibility determinations and ongoing case management. Shared access to
information systems facilitates coordination among private and public agency
staff in a number of ways, not the least of which is ensuring that the state
is able to meet federal reporting requirements. Theoretically, a shared
data system also facilitates the resolution of communication problems and
makes it possible for contractor(s) and public agency staff to directly review
information from, or identify discrepancies in, their counterparts'
systems.

Use of a common data system is not without challenges, however. The state's
automated system may or may not support data collection that will enable
the private agency to effectively manage its services and meet all of the
requirements of the contract. For example, few state systems are equipped
for utilization management, provider network management, or claims, billing,
reconciliation, and payments, all core functions required in some private
agency contracts. Some do not even contain all the data elements required
for performance monitoring.

Florida is a good example of a state wrestling with the challenges that must
be faced when public and private agencies share a data system for some data
collection but maintain separate systems for other data. The community-based
care agency caseworkers are required to enter data into Florida's SACWIS.
Like all private agencies operating under risk-based contracts, each of these
agencies also maintains its own data system to manage its business processes
and track its own performance. This requires dual data entry, hardly
an ideal or cost-effective solution. In 2002, the University of South Florida
(USF), as part of its ongoing evaluation of community-based care, recommended
a number of steps to strengthen the current system and develop an effective
interface between the lead agencies' data systems and the Department's
system. At a minimum, USF recommended that DCF and lead agencies reach agreement
regarding the data needed, the specified data format, and the procedures that
would be allowed for electronic submission (USF, 2002).

Though data challenges remain, Florida has taken steps to ease the burden.
The State, as part of its community-based care initiative, has created a
document which features explicit instructions about data used for performance
measurement. This Performance Measure Methodology Document includes the
definition, calculations, data sources and data processes for each
measure. The definition describes what is meant by the measure, while
the algorithm explains how it is calculated. The data source identifies
who collects and enters the data into the information system. Finally,
the data processes discuss how the data are used and analyzed, as well as
any contract enforcement for a particular
measure.[13]
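
A methodology document structured this way translates readily into a machine-readable record that monitoring staff and contractors can share. The sketch below shows one hypothetical way to capture the definition, calculation, data source, and data process for a single measure; the measure and threshold shown are illustrative, not Florida's.

# Illustrative, machine-readable version of a performance measure methodology
# entry: definition, calculation, data source, and data process.
# The measure shown and its enforcement threshold are hypothetical.

measure_methodology = {
    "id": "PM-01",
    "name": "Children seen monthly by their case manager",
    "definition": "Percent of children in care who had at least one documented "
                  "face-to-face contact with their case manager during the month.",
    "calculation": "children_with_contact / children_in_care * 100",
    "data_source": "Case manager contact notes entered into the statewide system "
                   "by lead agency staff",
    "data_process": "Calculated monthly by the QA unit; results below the "
                    "threshold trigger a corrective action discussion",
    "threshold_percent": 95.0,
}

def compute(children_with_contact: int, children_in_care: int) -> float:
    return children_with_contact / children_in_care * 100

value = compute(children_with_contact=912, children_in_care=980)
print(f"{measure_methodology['name']}: {value:.1f}% "
      f"(threshold {measure_methodology['threshold_percent']}%)")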

During focus groups conducted in 2005 to assess Arizona's readiness
for privatization of case management, many of the providers and external
stakeholders identified data technology as an area that might be
problematic. Planners of any privatized case management contract will
need to assess the current public agency information technology capacity
and identify enhancements that may be required to monitor the performance
of contractors. They will need to ensure that contract agencies have
the technological and human resource capacity to meet specified data collection
and reporting requirements. Among the basic questions that should be asked
and answered are the following:

If we privatize the case management function, what are the implications for
the state's SACWIS and the collection and use of data?

Will private agency case managers enter data directly into state systems?
If not, how will the public agency ensure compliance with all federal and
state data reporting requirements and maintain a single case record?

What MIS enhancements are required to obtain the real-time information needed
to manage and monitor the system?

How will all parties verify the integrity of data used to monitor performance,
award incentives, or impose sanctions? (McCullough & Associates,
2005)

A final, and extremely important, component to contract monitoring revolves
around staff training. Not only are quality assurance efforts expanding and
evolving, but staff originally trained as case managers are now assuming
contract monitoring functions. Further, as contract expectations are increasingly
focused on service quality and outcome measures (versus the delivery of service
units), contract monitors need new skills to examine new features of performance.
As noted previously and throughout all of the Topical Papers in this series,
partnership and collaboration are a centerpiece of many recent contracts.
The training contract monitors received in the past may not have
prepared them for their new roles as partners with the contractors
they monitor. This shift may be especially difficult for monitors who came to
their positions from previous jobs as case managers.

Consequently, training for contract monitors must go beyond standardization
of processes and tools and also get to something more basic: helping
staff redefine and clarify their purpose in relation to the private agencies.
Traditional compliance-driven monitoring was not concerned with relationship
building or problem-solving; at times it was even adversarial and punitive.
In contrast, today states and private agencies are striving to operate more
like partners. The desired collaboration is only possible in a climate of
trust and openness. For many workers with monitoring experience, it is not
always clear how to hold agencies accountable while also partnering with
them to improve performance. As one administrator confided, "Our contract
monitors struggle with their two hats: trusted-on-your-side helper
versus enforcer of contract requirements. At some point, when the data says
things aren't working, it is not always clear to contract monitors how
far they can or should go to help an agency that is not able to get the results
they are being paid to achieve."

Part of the challenge might be the lack of clarity in the nature of the
public-private relationship. In looking at the Florida experience, USF sums
up the key question that confronts community-based care agencies and the
Department: "Are private agencies simply an extension of DCF, or are
DCF and the lead agencies business partners?" (USF, 2002, 30). How states
and private agencies answer that fundamental question may have far-reaching
implications for how contracts are monitored.

It is interesting to note that while much of the literature addresses the
need for training, there is little information about the kinds of training
actually offered to contract monitors. An agency in need of training may
participate in training provided through national organizations. Alternatively,
an agency can look to peers in other agencies, counties, or states that have
undergone privatization efforts to learn more about their best practices and
lessons learned with regard to contract monitoring (Yates, 1998). As with
other areas in child welfare, there is a need for ongoing training to address
the chronic turnover in child welfare staff and the resulting discontinuity
in workers' knowledge and experience. Florida recently noted that staff
turnover is a significant problem that adversely affects the level of
expertise in contract monitoring (Office of Program Policy Analysis &
Government Accountability, 2008).

Florida has recently undertaken efforts to improve training for its contract
monitoring staff. In 2006, the Department of Children and Families' central
office surveyed contract monitoring staff to identify their training needs.
Responses were used to design statewide training focused on essential
components of the contract monitoring function, including report writing,
changes in community-based care contract requirements, and a recently
implemented monitoring tool for children in foster care who receive
independent living services (Office of Program Policy Analysis & Government
Accountability, 2008).

Using the information collected to ensure contract compliance, improve quality,
and achieve the agreed-upon outcomes requires user-friendly reports and processes
for sharing and learning. This section describes how several states are sharing
information across providers and with the public, how often reports are
generated, and the kinds of reports that states find to be useful for
stakeholders.

i. How States Share Information from
Monitoring

The ability to collect raw data, while essential, is not sufficient to ensure
that data are translated into the useful reports the private and public
agencies need to fulfill their responsibilities under the contract. Child
welfare privatization initiatives have varied in the reporting requirements
imposed on private contractors, but many research studies have documented a
tendency toward over- or under-reporting and a lack of clarity about the
purpose of various reports. There has been a growing trend to share findings
from performance reports broadly. Public agencies have posted performance
data on the state's website, allowing comparisons between private agencies
and between the public and private agencies on key performance indicators
or outcome measures.

Kansas, Florida, and the District of Columbia are among the jurisdictions
that have worked to make child welfare performance transparent. In Kansas,
performance data are available on the Internet, including case review
information as well as annual performance reports for foster care services,
adoption services, and family preservation
services.[14] In Florida, CBC
agencies are able to compare their performance to all other CBCs and to the
statewide average for each outcome area. The Scorecard is updated monthly
and posted on the state website. Similarly, the D.C. Child and Family Services
Agency (CFSA) has a Scorecard on its website that contains performance data
on CFSR indicators and on various other benchmarks established under a lawsuit
(LaShawn A. v. Williams) that had placed the city under a receivership. The
Scorecard posts the performance of all agencies with foster care contracts
side by side with the performance of CFSA staff who have similar
responsibilities.[15]

Creating data reports for contractors that link state child welfare
administrative data to data provided by contractors can also be a useful
tool. New Mexico, for example, collects data from private service providers
on the children they have served and matches those data against the state's
own SACWIS data. The Children, Youth and Families Department (CYFD) then
produces reports for contractors that include more specific information on
the clients they have served. For instance, for a provider that offers an
intensive family support program intended to prevent further CPS involvement,
CYFD provides information about the families that come back into the system.
Interviewees in New Mexico report that this process is informative for
contractors and also helps to strengthen existing relationships between
contractors and CYFD.
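To illustrate the kind of cross-match New Mexico runs, the following is a
minimal sketch in Python. The file layouts, field names, and 12-month
follow-up window are assumptions for the example; it is not a description of
CYFD's actual system.

    # Illustrative sketch: match a contractor's served-case list against an
    # administrative extract to flag cases with a later CPS report. File
    # layouts, field names, and the follow-up window are assumptions.
    import csv
    from datetime import datetime, timedelta

    def load_cases(path, id_field="case_id"):
        # Read a CSV export and return a dict of case ID -> row.
        with open(path, newline="") as f:
            return {row[id_field]: row for row in csv.DictReader(f)}

    def flag_reinvolvement(provider_csv, admin_csv, follow_up_days=365):
        # Return IDs of provider-served cases with a subsequent report
        # falling inside the follow-up window after services ended.
        served = load_cases(provider_csv)
        reports = load_cases(admin_csv)
        flagged = []
        for case_id, row in served.items():
            match = reports.get(case_id)
            if match is None:
                continue
            service_end = datetime.strptime(row["service_end_date"], "%Y-%m-%d")
            report_date = datetime.strptime(match["report_date"], "%Y-%m-%d")
            if service_end < report_date <= service_end + timedelta(days=follow_up_days):
                flagged.append(case_id)
        return flagged

A report built from such a match can tell a prevention-focused provider which
of the families it served later re-entered the system, the kind of feedback
interviewees described as strengthening the relationship between CYFD and
its contractors.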

ii. How Often Reports and Feedback are
Produced

How often do data need to be collected and reported? There is no right or
wrong answer to this question. Child welfare poses a challenge for assessing
outcomes because outcomes can take a long time to occur. For instance,
outcomes like time to adoption must be observed over a period of several
years. Most contracts today therefore include both long-term outcomes and
more immediate performance measures, tracked monthly, that are thought to be
associated with long-term results. For example, a contract with timely
reunification as a long-term outcome might also have monthly targets for
child/family visitation and contact between workers and parents, interim
measures that have been found to be correlated with long-term success.

Alternatively, agencies can construct interim targets for long-term outcomes.
Wulczyn (2007) provides an example of how this works in practice. The total
time period under examination is two years, but interim data are gathered
every six months (though he notes that the interim periods can be longer or
shorter). Each interim period is given a target, which is scaled to the larger
target. If, for example, the agency expects 831 exits from care over two
years, it may be reasonable to assume that at least 25 percent of them would
occur in the first six months (25 percent of the total time interval).
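The arithmetic behind this kind of proportional interim target can be
expressed in a few lines. The sketch below assumes a simple linear allocation
(equal expected progress per unit of time), which matches the
25-percent-by-six-months example above; an agency could weight the periods
differently if exits are known to cluster early or late, an assumption beyond
Wulczyn's example.

    # Minimal sketch of proportional interim targets: a two-year exit target
    # is spread across six-month review periods in proportion to elapsed time.
    def interim_targets(total_target, total_months=24, interval_months=6):
        # Return (month, cumulative target) pairs for each interim review point.
        targets = []
        month = interval_months
        while month <= total_months:
            share = month / total_months        # e.g., 6 / 24 = 25 percent
            targets.append((month, round(total_target * share)))
            month += interval_months
        return targets

    print(interim_targets(831))
    # [(6, 208), (12, 416), (18, 623), (24, 831)]
    # roughly 25 percent of the 831 expected exits by month six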

Contracts should explicitly define the data reporting requirements, since
providers need to include these costs in their budget proposals. As an example,
in a recent renewal of a statewide performance-based contract for foster
care recruitment, placement matching, and support, the contract specifies
how the public agency will monitor performance on an ongoing basis and stipulates
the contractors responsibility for submitting the following reports
on a quarterly basis:

Number of resource families licensed as compared to goals established within
each service area/community.

Number of families who leave each quarter per service area and reason.

Number of resource families who are interacting (phone or face-to-face) with
birth parents of children in care and the nature and frequency of interaction.

Number of licensed resource families that have not been selected for a placement
match within one (1) year of the issuance of the license and reasons for
family not being selected for a match.

Progress/barriers to achieving the area's recruitment plans.

The number of foster, pre-adoptive, and adoptive (post-finalization) families
who have received support, and a description of the general nature of the
support provided.

Reports of findings from focus groups with resource families and with DHS
staff.[16]

iii. The Kinds of Reports that are Useful to Other Stakeholders

In general, reports are used primarily as tools for the agency and its
contractors. However, data can also be useful to other stakeholders, such as
the courts, citizen review boards, and legislators. Reports for these
audiences are similar to other reports produced but should be tailored to
the particular audience. Public agencies can also use meetings with
stakeholders as opportunities to share information about how the state agency
and its contractors are performing.

OBrien and Watson suggest three different types of reports from automated
data systems that are useful to states:

Outcomes reports, which focus on client outcomes, such as lengths
of stay for children in care.

Practice reports, which focus on key practice issues that can be gleaned
from automated or other reporting mechanisms, such as the proportion of cases
in which a family team meeting was held.

Compliance reports, which provide information on the extent to which
an agency complies with requirements, such as the percent of investigations
completed within a given timeframe (O'Brien and Watson, 2002, 22).

They also suggest some report formats that can be helpful, including:

Reports that allow easy comparison across regions, local offices, and units.

Reports on exceptions, such as reports flagging cases where investigation
dispositions are past due (see the sketch following this list).

Early warning reports identifying cases that do not meet requirements prior
to a review (OBrien and Watson, 2002, 22).

Reports should also incorporate data from sources beyond automated data
systems, such as case record reviews and stakeholder input. For program
administrators, ideal reports would include information on both outcomes and
casework practice for high- and low-performing agencies alike, to promote
practice changes when warranted. These data can be combined in reports to
analyze a system's strengths and weaknesses, providing a more holistic view
of the system's functioning.

The contract should specify clear procedures for addressing performance issues
and remedies for contract noncompliance. The public agency and the contractor
should share a mutual understanding about the consequences of any deficiencies
identified in the course of contract monitoring.

Because private agencies want the business and want to continue providing
services, they are likely to meet or exceed performance expectations and
provide all information that the public agency needs. In some cases, however,
performance problems occur. The private agency, for example, may not provide
the agreed-upon services, may not provide reports in a timely way, or cannot
be reached for information. When these situations arise, it is critical to
be able to rely on contract provisions that clearly state how the public and
private agency will proceed if performance is not satisfactory
(Freundlich, 2007).

Technical assistance, performance triggers, and fiscal penalties are methods
that public agencies use to promote contractor compliance and address contractor
deficiencies. In fact, there is a continuum of steps that public agencies
can take to respond to performance problems:

Preventive activities that may include referral conferences and contract
review meetings;

Discussions and problem-solving with the private agency program staff regarding
performance expectation issues as they arise;

Utilization of the chain of command in both the public and private agency
to address performance issues;

Corrective action plans with timeframes for remedying poor performance; and

Termination of the contract and arranging for another agency to step in and
provide the services (Freundlich 2007).

Performance-based contracts can be written with triggers in response to
deficiencies found during the contract monitoring process. For example, when
phasing in performance measures in Illinois, new contracts with foster care
agencies stipulated that agencies must achieve permanency within one year
for 24 percent of the existing caseload. Reviews occurred twice a year, and
during that first year, intake at some agencies was suspended due to
insufficient performance. This effectively sent the message that agencies
would, in fact, be required to abide by the terms of their contracts. In
subsequent years, the required permanency rate was increased. Agencies are
now reviewed on an annual basis. The public agency ranks all agencies from
lowest to highest permanency placement rates. Those with the highest rates
are the most likely to receive guaranteed intake, which is now the only way
of sustaining their revenue (McEwen, 2006).

As a result of the CFSR process, some states are requiring providers to develop
and then implement program (or performance) improvement plans when performance
falls below a certain threshold. Iowa is a good example. The statewide
contractor responsible for recruitment, licensing, training, and placement
matching and support is required by the Department of Human Services to
develop a Performance Improvement Plan (PIP) any time performance falls more
than ten (10) percentage points below any of the specified Performance Measure
targets. If performance remains more than ten percentage points below target
after a six-month period of implementing the PIP, the contractor is required
to develop and submit for approval another PIP, which continues for a minimum
of six months or until the last day of the contract. If a second PIP is
required, the contractor will dedicate one percent of its base pay for the
second PIP period exclusively to activities and actions related to improvement
in the area or areas of identified need.[17]
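A simplified sketch of this kind of threshold trigger follows. The measure
names and values are invented, and the reading of the contract language (a
gap of more than ten percentage points below target) is an interpretation
rather than a quotation of the Iowa contract.

    # Simplified PIP trigger: a PIP is required when performance falls more
    # than 10 percentage points below target; a second PIP when the gap
    # persists for a measure already under a PIP. Values are illustrative.
    PIP_THRESHOLD = 10.0  # percentage points below target

    def pip_status(measures, under_pip=frozenset()):
        # measures maps a measure name to (actual, target) in percentage points.
        needs_pip, needs_second_pip = set(), set()
        for name, (actual, target) in measures.items():
            if target - actual > PIP_THRESHOLD:
                needs_pip.add(name)
                if name in under_pip:
                    needs_second_pip.add(name)
        return needs_pip, needs_second_pip

    measures = {"timely_licensing": (62.0, 80.0), "placement_match": (74.0, 78.0)}
    first, second = pip_status(measures, under_pip={"timely_licensing"})
    print(first)   # {'timely_licensing'} -- 18 points below target
    print(second)  # {'timely_licensing'} -- still below after the prior PIP period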

Corrective action and performance improvement plans are typically created
by the provider with input from the public agency and serve as a roadmap
to correcting any contract performance issues.

In New York City, the Agency Program Assistance Unit within the public agency
develops Corrective Action Plans based on an agency's EQUIP score (described
above), which is a compilation of performance data pulled from several
sources, including administrative data, case record reviews, and field
observations (see text box, above).

In Kansas, these are referred to as Local Action Plans. When contract issues
related to outcome performance arise, the Kansas Department of Social and
Rehabilitation Services (SRS) first discusses the concerns with the regional
contractor. They work together to identify any barriers that may be causing
the concern and note any resources to address them. All discussions about
the concern and efforts to address it are carefully documented. Once consensus
about the issue is reached, the SRS regional office may decide that the
provider needs focused consultation and technical assistance. The SRS regional
office can ask the provider to prepare a written Local Action Plan. This Plan
is a tool for identifying the problem and the measures needed to correct it,
and includes specific information about the staff responsible for undertaking
the plan and the timeframe for completion. It serves as a written agreement
between SRS and the provider. The SRS region monitors the Local Action Plan
and informs the provider once the plan has been successfully completed. If
the provider is unable to complete the plan, the SRS region may move to a
more structured resolution process.[18]

One study of professional services contracting (Fisher et al., 2006) cautions
against waiting until performance is in the "red zone" before taking action.
The study found that it is important to monitor trends and take action when
performance starts to dip, even if it is still at an acceptable level. This
approach offers the opportunity to provide technical assistance to improve
contractor performance. It is important because there will be situations
where a provider does what is required in a contract (provides expected
services at expected levels) but does not achieve performance targets. Early
examination of performance issues can also serve as a reality check for both
private and public agencies, because the public agency may have set
unrealistic targets or provided insufficient supports in contracts to enable
contractor success.
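The trend-watching idea can be expressed as a simple rule. The sketch below
flags a contractor whose quarterly rate is still above an assumed contract
floor but has declined for three consecutive quarters; the three-quarter rule
and the sample figures are assumptions for illustration, not values drawn
from Fisher et al.

    # Illustrative early-warning check: flag a measure that has declined for
    # several consecutive quarters even though it remains above the floor.
    def declining_trend(quarterly_rates, consecutive=3):
        # True if the most recent rates fell for `consecutive` quarters in a row.
        recent = quarterly_rates[-(consecutive + 1):]
        return len(recent) == consecutive + 1 and all(
            later < earlier for earlier, later in zip(recent, recent[1:]))

    reunification_rate = [71.0, 70.2, 68.9, 67.5]  # percent; floor assumed to be 65
    if declining_trend(reunification_rate):
        print("Performance is trending down; consider offering technical assistance now.")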

As an example, an initiative in Florida (one of the three state initiatives
funded under the QIC PCW) has set up such an early warning system for its
new performance-based contract and quality assurance initiative. When
potential issues in performance achievement by a case management agency are
identified, the lead agency provides free technical assistance for a period
of time. If problems persist and further technical assistance is required,
that service comes at a cost to the private case management agency.

According to state stakeholders, New Mexico's CYFD takes a supportive approach
to contract monitoring. If CYFD staff see problems when they visit providers,
they will offer technical assistance. They also offer training to providers.
CYFD has a collaborative effort with a university to offer classes and, if
CYFD monitors think that a provider could benefit, they will suggest that the
provider attend. Consistent with this supportive approach, CYFD cannot
sanction a provider and recover money. In egregious cases, it can cancel a
contract, but the agency indicates that this does not happen very often.
Contracts are negotiated annually, at which point CYFD can decide not to
renew a contract.

From a legal standpoint, it is helpful to have an agreement for resolving
disputes before they go to the courts. Lawyers can be very helpful in
structuring a contract, but ideally, contract monitoring and contractor
performance issues should proceed smoothly and not require further legal
services to resolve disputes. Clear, up-front expectations and a collaborative
relationship based on the shared goals of providing quality services and the
best possible outcomes for children and families are the best way to ensure
a constructive partnership between public agencies and contractors.

The more that public agencies depend on private agencies to deliver services,
especially case management services to children and families, the more
sophisticated the quality assurance and contract monitoring systems should
be. Planners need to carefully think through the monitoring process, drawing
on the lessons learned from other communities that have struggled
with finding the right balance between oversight and innovation. What is
required is a balanced approach that allows the public purchaser to monitor
for results while also granting the provider the flexibility to innovate.

There is no single path to strong quality assurance. Many states have
significantly expanded their oversight of contracted services, collecting
additional information and collecting it from more sources. While it is
important to set expectations, it can be challenging to know what to do when
expectations are not met, especially in this new atmosphere of enhanced
collaboration in service provision between public and private agencies.

A review of the literature and state experiences to date highlights the
following lessons about contract oversight and monitoring of child welfare
services:

The support of upper management is critical. An effective contract
monitoring system requires buy-in at many levels, but support must start
at the top of the organization in order to obtain the resources needed, provide
support to staff as they transition to an outcome-focused system, and send
a consistent message to staff, contractors and potential contractors, and
the families they serve.

Understand the link between theory, program specification, and desired
outcomes and convey that understanding to providers. The focus on outcomes
represents a new way of thinking for agency staff as well as contractors.
What is the problem the agency is trying to solve? And what program components
and actions will lead to the desired results? Public agencies need to meet
regularly with contractors and genuinely engage them in planning and problem
solving. Discussions should include selecting outcomes/goals and reviewing
existing information and data on where performance is at the moment (OMB
Office of Federal Procurement Policy, 2008; O'Brien, 2005).

View contract monitoring as part of continuous quality improvement. If
contract monitoring is going to be effective, it must be integrated under
an agencys QA umbrella, and the focus must be broadened beyond compliance
to include activities intended to stimulate and reinforce improvement. This
may require integration of previously separate staff functions or enhanced
communication across agency divisions. Key departments should be in constant
communication with one another, including program, information technology,
and accounting units (Meezan and McBeath, 2004).

Be open to re-thinking outcomes, expectations, and how contractors are
judged. Many public and private agencies have realized mid-way
through a contract that outcomes and performance measures were set at
unrealistically high levels. One effective way to prevent this is to examine
outcomes at regularly scheduled performance review meetings between the agency
and the contractor. At a minimum, public agencies should use contract
renewal negotiations to revise expectations based upon experience and research
evidence.

Be prepared to make changes as the system matures. Initial successes may
leave more challenging cases in the system or may reveal gaps in services.
For example, Illinois initiated performance-based contracting for child
welfare services in 1997 and was successful in moving thousands of children
to permanency, but problems remained with regard to placement instability and
the complexity of needs of harder-to-serve youth. Having achieved a reduction
in cases, the state is changing performance-based contracts to emphasize best
practices and to redirect funds in order to reduce targeted caseload ratios
(Kearney and McEwen, 2007).

Collect data that are useful and use the data. Based on the identified
linkages between program components and outcomes, public agencies are
increasingly reaching out to contractors to work together to select meaningful
and realistic outcome measures and to design data reporting requirements
around those measures. While other data may be required for compliance with
state and/or Federal reporting mandates, avoid collecting any unnecessary
data. Working closely with contractors also helps to ensure that data
definitions are consistent and that data are seen as valid and reliable by
both agencies and providers. Finally, use the data to monitor progress and
suggest improvements by comparing performance across contractors and
jurisdictions as well as performance over time.

Invest sufficient resources, especially in monitoring staff and staff
training. There is a growing realization that contract management and
monitoring is complex work, which requires that agencies allocate sufficient
resources, in both the contracting and program offices, to do the job well
(OMB Office of Federal Procurement Policy, 2008).

Remember that contractors are partners and share the agency's goal of
achieving the best outcomes for children and families. Traditionally,
contract monitors were expected to maintain an arm's-length distance from
contractors, but that approach may not work for today's contracting
situations, especially performance-based contracting. It is in the best
interest of all parties concerned that the contract be successful. A team
approach is essential and will require ongoing work to sustain (OMB Office
of Federal Procurement Policy, 2008).

Auditor General. (2001). Monitoring of community-based care providers of child
welfare services by the Department of Children and Family Services: Operational
audit (Report No. 02-033). Tallahassee, FL: State of Florida Auditor General.

Kearney, K., & McEwen, E. (2007). Striving for excellence: Extending child
welfare performance-based contracting to residential, independent, and
transitional living programs in Illinois. Professional Development: The
International Journal of Continuing Social Work Education, 10(3), Winter.

U.S. General Accounting Office. (1997b). Privatization: Lessons learned by
state and local governments (GAO/GGD-97-48). Washington, DC. Retrieved
July 14, 2008, from http://www.gao.gov/archive/1997/gg97048.pdf

U.S. General Accounting Office. (1998). Privatization: Questions state and
local decision-makers used when considering privatization options
(GAO/GGD-97-98). Washington, DC: Government Printing Office.

[2] The CFSR includes an assessment of the state's quality assurance system,
specifically, Item 30: Standards to ensure quality services and ensure
children's safety and health; and Item 31: Identifiable QA system that
evaluates the quality of services and improvements. For more information
about findings from the first round of CFSRs, go to:
http://www.acf.hhs.gov/programs/cb/cwmonitoring/results/genfindings04/ch1.htm

[9] Starting in July 2008, New York City implemented the Improved Outcomes
for Children (IOC) initiative. IOC is a series of reforms for Foster Care and
Preventive Services designed to strengthen the work of the Administration for
Children's Services and its partner agencies. One of the IOC's reforms is a
new performance monitoring system, including a new provider agency evaluation
tool called Scorecard. Scorecard builds on the EQUIP system and will include
a performance scorecard for each agency, detailing each agency's performance
in the areas of safety, permanency, well-being, foster parent support, and
community and cultural competency. For more information see:
http://www.nyc.gov/html/acs/html/about/ioc_initiative_faqs.shtml