In September 1980 the National Long Term Care (LTC) Demonstration--known as channeling-- was initiated by three units of the United States Department of Health and Human Services--the Office of the Assistant Secretary for Planning and Evaluation (ASPE), the Administration on Aging, and the Health Care Financing Administration. It was to be a rigorous test of comprehensive case management of community care as a way to contain the rapidly increasing costs of LTC for the impaired elderly while providing adequate care to those in need.

A. The Intervention

Channeling was designed to use comprehensive case management to allocate community services appropriately to the frail elderly in need of LTC. The specific goal was to enable elderly persons, whenever appropriate, to stay in their own homes rather than entering nursing homes. Channeling financed direct community services, to a greater or lesser degree depending on the channeling model, but always as part of a comprehensive plan for care in the community. It had no direct control over medical or nursing home expenditures.

Channeling operated through local channeling projects. The core of the intervention (i.e., case management) consisted of seven features:

Outreach to identify and attract potential clients who were at high risk of entering an LTC institution.

Standardized eligibility screening to determine whether an applicant met the following preestablished criteria:

Age: had to be 65 years or older.

Functional disability: had to have two moderate disabilities in performing activities of daily living (ADL), or three severe impairments in ability to perform instrumental activities of daily living (IADL), or two severe IADL impairments and one severe ADL disability. Cognitive or behavioral difficulties affecting ability to perform ADL could count as one of the severe IADL impairments.

Unmet needs: had to have an unmet need (expected to last for at least six months) for two or more services or an informal support system in danger of collapse.

Residence: had to be living in the community or (if institutionalized) certified as likely to be discharged within three months.

Medicare coverage: for the financial control model, had to be eligible for Medicare Part A.

Comprehensive in-person assessment to identify individual client problems, resources, and service needs in preparation for developing a care plan.

Initial care planning to specify the types and amounts of care required to meet the identified needs of clients.

Service arrangement to implement the care plan through the provision of both formal and informal in-home and community services.

Ongoing monitoring to ensure that services were appropriately delivered and continued to meet client needs.

Periodic reassessment to adjust care plans to changing client needs.
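The eligibility criteria listed under the screening feature can be sketched as a simple screening check. This is a minimal illustration, not the actual screening instrument; function and parameter names are hypothetical, and the caller is assumed to have already counted any qualifying cognitive or behavioral difficulty as one of the severe IADL impairments, as the criteria allow.

```python
# Minimal sketch of the channeling eligibility screen; names are
# hypothetical, not taken from the actual screening instrument.

def functionally_eligible(moderate_adl, severe_adl, severe_iadl):
    """Functional-disability criterion: two moderate ADL disabilities,
    OR three severe IADL impairments, OR two severe IADL impairments
    plus one severe ADL disability."""
    return (moderate_adl >= 2
            or severe_iadl >= 3
            or (severe_iadl >= 2 and severe_adl >= 1))

def eligible(age, moderate_adl, severe_adl, severe_iadl,
             unmet_service_needs, support_at_risk,
             in_community, likely_discharge_3_months=False):
    """Combine the age, functional-disability, unmet-needs, and
    residence criteria. (The Medicare Part A test applied only to the
    financial control model and is omitted here.)"""
    return (age >= 65
            and functionally_eligible(moderate_adl, severe_adl, severe_iadl)
            and (unmet_service_needs >= 2 or support_at_risk)
            and (in_community or likely_discharge_3_months))
```

The three-way disjunction in the functional criterion mirrors the "or" structure of the text exactly, which makes the sketch easy to audit against the stated rules.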

Two models of channeling were tested. The basic case management model relied primarily on the core features. The channeling project assumed responsibility for helping clients gain access to needed services and for coordinating the services of multiple providers. This model provided a small amount of additional funding to purchase direct services to fill in gaps in existing programs. But it relied primarily on what was already available in each community, thus testing the premise that the major difficulties in the current system were problems of information and coordination which could be solved largely by client-centered case management.

The financial control model differed from the basic model in several ways:

It expanded service coverage to include a broad range of community services.

It established a funds pool to ensure that services could be allocated on the basis of need and appropriateness rather than on the eligibility requirements of specific categorical programs.

It empowered case managers to authorize the amount, duration, and scope of services paid out of the funds pool, making them accountable for the full package of community services.

It imposed two limits on expenditures from the funds pool. First, for the entire caseload, average estimated expenditures under care plans could not exceed 60 percent of the average nursing home rate in the area. Second, for an individual client, estimated care plan expenditures could not exceed 85 percent of that rate without special approval.

It required clients to share in the cost of services if their income exceeded 200 percent of the state's Supplemental Security Income (SSI) eligibility level plus the food stamp bonus amount.
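The two funds-pool limits and the cost-sharing threshold can be expressed as a short sketch. All names are illustrative, and the reading of the income test (200 percent applied to the SSI level only, with the food stamp bonus added afterward) is an assumption flagged in the code; the program rules should be consulted before relying on it.

```python
def care_plan_checks(individual_plan_cost, caseload_avg_cost,
                     area_nursing_home_rate):
    """Funds-pool limits: average estimated care plan expenditures for
    the caseload may not exceed 60 percent of the area nursing home
    rate, and an individual plan above 85 percent of that rate
    requires special approval."""
    return {
        "caseload_ok": caseload_avg_cost <= 0.60 * area_nursing_home_rate,
        "needs_special_approval":
            individual_plan_cost > 0.85 * area_nursing_home_rate,
    }

def cost_sharing_required(monthly_income, state_ssi_level, food_stamp_bonus):
    """Cost sharing applied when income exceeded 200 percent of the
    state SSI eligibility level plus the food stamp bonus amount.
    ASSUMPTION: the 200 percent is read as applying to the SSI level
    only, not to the bonus amount."""
    return monthly_income > 2.0 * state_ssi_level + food_stamp_bonus
```

For example, with an area nursing home rate of $1,000 per month, a caseload average of $500 is within the 60 percent cap, while an individual plan of $900 exceeds the 85 percent threshold and would need special approval.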

Ten sites participated in the demonstration. Their model designations were:

Basic Case Management Model

Baltimore, Maryland
Eastern Kentucky
Houston, Texas
Middlesex County, New Jersey
Southern Maine

Financial Control Model

Cleveland, Ohio
Greater Lynn, Massachusetts
Miami, Florida
Philadelphia, Pennsylvania
Rensselaer County, New York

The ten local projects opened their doors to clients between February and June of 1982, and were fully operational through June of 1984.

The goal of the evaluation, in addition to documenting the implementation of channeling, was to identify its effect on:

The use of formal health and LTC services, particularly hospital, nursing home, and community services.

Caregiving by family and friends, including the amount of care provided, the amount of financial support provided, and caregiver stress, satisfaction, and well-being.

To compare channeling's outcomes with what would have happened in the absence of channeling, the evaluation relied on an experimental design. Applicants found eligible for channeling were randomly assigned either to a treatment group or to a control group. In all, 6,341 persons were randomly assigned.

Several data sources were used. These included telephone and in-person surveys of the elderly members of the research sample and, for a subset, their primary informal caregivers; Medicare, Medicaid, channeling project, and provider records; and official death records obtained from state agencies. Finally, federal, state, local, and project staff were interviewed about the implementation and operation of the demonstration (these data are not included in the public use files).

B. The Nature of the Data

Some researchers will want to use the data to replicate the channeling results or explore certain issues in greater depth. They will simply have to master the complexity of the data base. Others will be interested in using the data to support efforts far removed from the original purposes of the evaluation. This section is written primarily for this latter group.

The channeling sample was designed to support the evaluation; it was not designed to be a statistically representative sample of the elderly. The sample consists of frail persons who voluntarily applied to channeling and were found to meet the demonstration's eligibility criteria. Channeling sought referral sources and engaged in outreach activities to identify applicants at risk of institutionalization. Hospitals, home health agencies, and social service providers were the major referral sources. A breakdown of the referral sources is presented in Table 1.

To determine whether channeling participants were similar to the national population of the disabled elderly, we compared the baseline characteristics of the channeling sample with a nationally representative sample of the elderly. Using data from the National Long Term Care Survey (NLTCS), we simulated channeling's eligibility process to identify a subsample who were eligible for channeling. The simulation was done by selecting individuals who would have qualified according to the channeling ADL or IADL criterion. We thus ended up with a subsample from the NLTCS who, at least according to the measures of functioning, resembled the channeling sample. On the basis of that simulation, we estimated that 4.9 percent of the total noninstitutionalized population age 65 or over in 1982 qualified for channeling.

The main differences between the channeling sample and the simulated nationally eligible sample were in living arrangements, income, and formal service use (see Table 2). Channeling clients were more likely to live alone and less likely to be married. Their use of regular informal in-home care was about the same as for the simulated national sample. The income of the channeling sample was lower than the income of the national sample. Substantial differences were found in every measure of formal service use. Before the receipt of channeling services, compared to the simulated national sample, channeling sample members were almost twice as likely to be receiving formal in-home services, more than twice as likely to have had a hospital stay in the last two months, more than six times as likely to have been in a nursing home, and almost five times as likely to have been on a nursing home waiting list. These differences provide strong support for the argument that persons often came to the attention of channeling because of some precipitating event and were probably more closely connected with the community care system as a result. The occurrence of such an event may have been a major factor that differentiated those who applied for channeling from those who did not.

We also compared the demographic and economic characteristics of the entire aged population in the channeling sites with the characteristics of the entire aged population of the country. As a group, the demonstration sites were broadly similar to the nation as a whole. The only characteristic on which they differed markedly was the proportion who were of Hispanic origin (4.6 percent of the channeling sample were Hispanic, compared with 2.7 percent of the national elderly population). This resulted mainly from the fact that a third of the aged individuals in Miami were of Hispanic origin. Despite the general correspondence with national data, as one might expect, there was substantial variation across sites and models.

With respect to the economic resources of the aged population in the channeling sites, monthly median family income was similar to the national data, although there were more people below the poverty threshold in the basic model than in the United States as a whole, and fewer below that level in the financial control model. The proportion of aged in the demonstration states enrolled in Medicare and monthly Medicare expenditures per aged resident were similar to the national data. For Medicaid participation, the basic states had slightly more people receiving Medicaid, and the financial control states somewhat less, than the national average. Monthly Medicaid expenditures per aged resident were somewhat less than the national average in the basic model states and somewhat greater in the financial control states because of high expenditures in New York and Massachusetts.

To this point the focus has been on comparing the characteristics of the research sample and the aged population in the channeling sites to the national elderly population. Were the sites' service environments also broadly representative of service environments throughout the country? This is a tougher question to address because comparative data are not readily available. On the basis of an examination of nursing home bed supply data and data on waiting times to nursing home admission collected in the demonstration, we concluded that nursing home beds were probably somewhat less available in the channeling sites than in the nation, although we do not believe this had a major effect on demonstration outcomes.

Data on the availability of community care are even more limited. We do know that there was substantial control group use of both comprehensive case management services (10-20 percent) and of community in-home services (60-69 percent). The demonstration projects applied to participate in the demonstration and were selected through a competitive process, and it could be that their case management and community care systems were more developed than those in other sites. Users of the data base should consider whether such differences could affect their research results.

Taken together, these comparisons indicate that, even though the channeling sample is not a statistically representative sample of the frail elderly, the data can be used for applications unrelated to the evaluation as long as the differences that do exist between the channeling sample and one that would be broadly representative are not central to a particular application, and as long as careful attention is paid to the limitations of the data set.

In using the data, the treatment and control groups can be exploited in useful ways. The control group data tell us what occurred in the sites in the absence of channeling. For example, the control group data reveal what services people who were eligible for channeling were using at the time channeling was in operation. The treatment group data indicate what services people used in response to the channeling intervention, although for this purpose, the model differences are obviously critical. These data could be used as the basis for estimating, for example, the cost of a new benefit, although such an exercise requires using a great deal of judgment in evaluating the similarities and differences between channeling and its participating population and whatever program and population are being analyzed. Furthermore, estimates of program participation must be made, a critical task which cannot be addressed using the channeling data. If possible, when using the channeling data for purposes unrelated to evaluating channeling, other data sources should be used and care taken to evaluate the effects of changing key assumptions.

The channeling data base is very comprehensive and detailed. In exchange for that richness, one gives up representativeness. Nevertheless, it can be a very useful source of data in support of applications far removed from the evaluation of channeling.

One of the goals of the evaluation was to produce data for public use after the initial evaluation was completed. Thus, in developing the project data files and documentation, we always had the outside uses in mind. The principles followed in selecting data for public use were as follows:

(1) All data that were used in the analyses should be available so that evaluation analyses could be replicated from the files.

(2) Principle (1) should be achieved subject to maintaining the confidentiality of the data.

(3) As much data as possible should be included in the public use files.

Although the preparation of the public use files was always planned, their implementation occurred over the last six months of the project, to ensure that all variables used in analyses were available and documented for inclusion in the files.

The public use files were prepared from the project's data base. This data base was designed in the first 18 months of the project, and implemented in March 1982 at the time that screening and sample member randomization began. As new instruments were designed, pretested, and cleared by OMB, the data base was expanded to include new files.

Sample intake occurred over a period of 15 months, and primary data collection continued until July 1984, providing up to 18 months of followup. Secondary data associated with the followup period (claims data, death records, and provider records) continued to be acquired over the ensuing four months.

To compare the outcomes of channeling with what would have happened in the absence of channeling, the evaluation relied on an experimental design. Elderly persons who were referred to each channeling project were interviewed (most by telephone) by the project staff to determine their eligibility for channeling. If eligible, they were randomly assigned either to a treatment group, whose members could participate in channeling, or to a control group, whose members continued to rely on whatever services were otherwise available in their community. A total of 6,341 persons were randomly assigned to the two models of channeling. Given the substantial death rate among this population, as well as interview noncompletion, this yielded research samples of varying sizes, depending on the analysis. Table 3 shows the maximum sample sizes available for different subject areas for each model.

NOTE: Maximum sample sizes are the number of observations available for analysis in each area, except for a small number of observations lost due to item nonresponse for some measures.

Informal Caregiver Survey was not repeated at 18 months.

The cost analysis combines estimates from the analyses of the other subject areas.

B. Sample Characteristics

The channeling sample was elderly and frail, with severe functional, health, social, and financial problems. This was by design; the eligibility criteria for the sample were intended to identify persons who were at risk of nursing home placement. The characteristics of the channeling clients at baseline are shown in Table 4.

Both primary and secondary data were collected for the channeling evaluation. These data were collected from the following sources:

Interviews with sample members and proxies

Interviews with informal caregivers

Provider records

Medicare and Medicaid claims

Death records

Client tracking records

Financial control system at five sites.

This section reviews the primary data collection and provider records extraction procedures. The data collection through surveys, the means of collection, and the interviewer auspices were as follows:

Training materials and procedures for the baseline were developed by MPR in conjunction with Temple University, the demonstration's technical assistance contractor, and all trainers were trained by MPR and Temple staff. Demonstration screen and sample member (client) baseline interviewers were trained by Temple University. All MPR interviewers and extractors were trained by MPR project staff. To assess the comparability of the baselines administered by demonstration and MPR staff, a random validation sample of clients was administered a baseline by both demonstration and MPR staff.

Training was intensive for all instruments. For example, MPR baseline interviewers received five days of training that covered the instruments, procedures, sensitivity training, practice sessions, and evaluation of interviews. The demonstration baseline interviewer staff received the same training plus instruction on collecting clinical data for case management. Sample member followup interviewers were provided with comparable training, augmented by special training on searching for respondents and determining living arrangements. For the caregiver instrument, additional special training was provided on techniques for identifying appropriate caregivers, procedures to follow if the sample member was deceased, and techniques for telephone interviewing. Provider records extractor training covered the use of a provider characteristics instrument and procedures for extracting service use, charge, and reimbursement data from provider records. Respondent payments were made to control group members for baseline interviews and all sample members for followup interviews.

Completion rates for the interviews and provider records extraction are listed in Table 5. If deceased sample members are removed from the calculation, the response rates were uniformly high, at 76 percent or more.

The major reason for nonresponse was that the sample member had died. The percentage of the sample not responding because of death was 16.5 percent at 6 months, 26.6 percent at 12 months, and 40.9 percent at 18 months.

Each of the public use files was derived from one or more data base masterfiles, which, in turn, were created and maintained according to a standard set of procedures. These procedures transformed source data from data entry files into a structured data set, edited the data, and created a set of constructed variables.

A. Interview Data Procedures

Because these data were collected over a period ranging from one to two-and-a-half years, depending upon the data source, data were regularly added to the data base. Each cycle of data processing included the following steps: quality control checks of hard-copy interview forms; data entry; transmission of data to the research data base; quality control checks of computerized data; and the updating of both the status file and the existing masterfiles. Figure 1 summarizes these components.

FIGURE 1. Standard Data Manipulation Procedures. [Figure not available.]

Quality Control and Data Entry. Completed instruments were manually edited and coded by trained quality control staff. This included checking the legibility of contact information, assigning codes to open-ended and "other, specify" responses, and reviewing key questions to ensure they were properly recorded. If necessary, project staff or respondents were contacted to resolve problems.

After the documents had been read, they were entered into the computer. Automated skip logic, range, and consistency checks were performed as part of the key entry program. When errors were found, a trained data cleaner reviewed the instrument (if necessary, the cleaner telephoned the respondent or interviewer) and corrected the error in the instrument and on the file. Finally, once a batch of interviews had passed the skip logic, range, and consistency checks, the batch was verified by reentry.
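The kinds of automated checks performed by the key entry program can be illustrated with a small record checker. The field names, ranges, and rules below are invented for illustration; they are not the actual edit specifications.

```python
# Illustrative sketch of skip-logic, range, and consistency checks of
# the kind applied at data entry; field names and rules are hypothetical.

def check_record(rec):
    """Return a list of error messages for one data-entered record."""
    errors = []
    # Range check: age must be plausible for this 65-and-over sample.
    if not 65 <= rec.get("age", -1) <= 110:
        errors.append("age out of range")
    # Skip-logic check: nursing home detail questions should be blank
    # if the respondent reported no nursing home stay.
    if rec.get("nh_stay") == "no" and rec.get("nh_days") is not None:
        errors.append("nh_days answered but skip logic says skip")
    # Consistency check: hospital days cannot exceed the reference period.
    if rec.get("hospital_days", 0) > rec.get("period_days", 0):
        errors.append("hospital_days exceeds reference period")
    return errors
```

A record failing any check would be routed to a trained data cleaner for resolution, as described above.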

Data Transmission and Initial Processing. Every month, all data-entered instruments were transmitted to the mainframe computer in Extended Binary-Coded-Decimal Interchange Code (EBCDIC) format. During the initial processing of the newly transmitted data, this file was transformed into a structured intermediate SAS data set. The project status file was also updated at this time with new status information. In addition, the intermediate file picked up the data base ID and randomization information from the status file for each record. Finally, confidential variables (such as Medicare and Medicaid numbers) were added to a separate Medicare/Medicaid status monitoring file.

Frequency distributions and other descriptive statistics were generated for each variable. In addition, range checks and further checks on consistency which were beyond the capacity of the data entry program were performed. For example, in processing each file, we printed selected variables for cases which appeared to have more than one interview (such as a complete and an incomplete interview).
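The duplicate-interview screen mentioned above amounts to flagging sample IDs that appear on more than one interview record (such as a complete and an incomplete interview). A minimal sketch, with hypothetical field names:

```python
from collections import Counter

def flag_duplicates(records):
    """Return the sample IDs that appear on more than one interview
    record, e.g. both a complete and an incomplete interview."""
    counts = Counter(rec["sample_id"] for rec in records)
    return sorted(sid for sid, n in counts.items() if n > 1)
```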

Error Resolution. Potential errors identified through a review of descriptive statistics were resolved by reviewing the hard-copy questionnaire and/or consulting with the quality control staff (who recontacted the interviewer or respondent when necessary). Some "errors" (for example, some out-of-range responses) proved to be correct, and the values were retained in the data base. The frequency and nature of each type of error were documented, as was its resolution. In this way, resolution decisions could be consistent and based on precedent, where applicable.

Masterfile Maintenance and Updating. After inconsistencies in the intermediate file were resolved, the current masterfile was updated with the new observations. For each of the new observations added to the masterfile, certain descriptive variable values were converted into standard binary codes. Once the masterfile had been updated with the new observations, frequencies of the new masterfile were produced, reviewed, and distributed periodically to the research staff.

Once-Only Procedures. After all the completed research sample interviews had been processed and added to a masterfile via the process outlined above, a final review of the complete masterfile was undertaken. The same range and consistency checks used in initial processing were applied to the complete masterfile. In addition, descriptive statistics of all variables in the final masterfile were closely reviewed and distributed to research staff.

Other Data. Comparable procedures were followed in processing secondary data into masterfiles.

B. Developing Analysis Files

One of the products generated from the data base masterfiles was a set of analysis files--files which contain only the sample and variables of interest for a particular analysis. Because analysis files are smaller than masterfiles, their contents can easily be accessed for use in statistical analysis procedures. Some of the public use files were generated from analysis files. This section describes three steps in the creation of the research data base analysis files.

Selecting the Samples. A set of standard samples was defined in order to facilitate consistency across analyses. Once a standard sample was defined, a binary variable, or "sample flag," was created and permanently stored in the status file, facilitating the selection of the standard sample for use with any masterfile. In addition, since some analyses used subsets of the standard samples, many individual analysis files contained sample flags and data for several samples, allowing analyses of several samples.
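A sample flag is simply a binary variable keyed to each sample member, so selecting a standard sample reduces to filtering on that flag. A minimal sketch, with hypothetical flag names:

```python
# Hypothetical status-file records; flag names are illustrative.
status_file = [
    {"id": 101, "in_followup_sample": 1, "in_caregiver_sample": 0},
    {"id": 102, "in_followup_sample": 1, "in_caregiver_sample": 1},
    {"id": 103, "in_followup_sample": 0, "in_caregiver_sample": 1},
]

def select_sample(records, flag):
    """Select the members of a standard sample using its binary flag."""
    return [r["id"] for r in records if r[flag] == 1]
```

Because the flags live on the status file, the same selection can be applied consistently against any masterfile.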

Specifying and Programming Constructed Variables. Initial specifications of constructed analysis variables (both dependent and independent) were prepared by the analysts. These preliminary specifications were reviewed and modified by the research data base staff, in consultation with the analysts. Constructed variables were programmed, variable labels defined, and descriptive statistics produced and reviewed for each variable.

Extracting and Merging Data from Masterfiles. Analysis files were generally "extracts" (i.e., subsets) of masterfiles. These extracts were based on defined samples that were selected using standard sample flags. However, some analysis files required data from more than one masterfile (for example, client tracking and status change masterfiles and caregiver and sample member masterfiles). In these cases, extracts of each file were merged together, so that a single case contained the correct information from each file.
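The merge step can be sketched as an inner join on the sample ID, so that a single case carries the correct fields from each extract. Field names here are illustrative:

```python
def merge_extracts(left, right):
    """Merge two masterfile extracts on sample ID (inner join on 'id'),
    so each merged case combines the fields from both files."""
    right_by_id = {r["id"]: r for r in right}
    merged = []
    for rec in left:
        other = right_by_id.get(rec["id"])
        if other is not None:
            combined = dict(rec)
            combined.update({k: v for k, v in other.items() if k != "id"})
            merged.append(combined)
    return merged
```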

There are 14 separate public use files: nine based on survey instruments, four based on analysis files, and one based on the project status file. Table 6 summarizes the content of the files at a general level and describes the purposes for which the data were collected.

Screen

The major purpose of the screening assessment was to determine whether an applicant was eligible to participate. It was intended to identify those at risk of institutionalization, focusing on age, place of residence, interest in participating, institutionalization status, functional impairment in performing activities of daily living (ADL) and instrumental activities of daily living (IADL), fragility of support system, and unmet needs. b

Baseline

Applicants randomized during the caseload buildup phase who completed both a screen and a baseline instrument.

The major purpose of the baseline instrument was to collect data on sample members at the point of enrollment, measuring functioning, health status, recent service use, informal caregiving, financial resources, demographic factors, and unmet needs. b

Client Tracking/Status Change

All channeling clients. c

The client tracking and status change forms were designed to allow client progress through the channeling service system, as well as caseload size, to be monitored. The client tracking form included dates of referral to channeling, screening, randomization, care plan completion, and service initiation; the status change forms collected dates of change from active to inactive or terminated status and reasons for the change of status.

Followups: 6-, 12-, and 18-Month Files

Applicants randomized into the research sample during the caseload buildup phase who also completed a screening interview, a baseline interview, and a followup interview.

The purpose of the followup interviews was to collect outcome data at 6, 12, and 18 months after enrollment. Outcomes included insurance coverage, health status, housing conditions, expenditures, related transfers and services, in-home service use and support, formal community service use, hospital and nursing home use, social and psychological well-being, income and assets, and functioning.


Status File

Applicants randomized during the caseload buildup phase. a

The status file stores information about interview dates and completion (both complete and non-complete) status for each sample member, sample flags d, and information obtained from the death records search and Medicare and Medicaid entitlement checks.

Wooldridge and Schore, 1986.

Caregiver Baseline

Primary informal caregivers (to a subset of those sample members included in the screen public-use file e) who completed a caregiver baseline interview.

The caregiver baseline measured the amount of various types of informal services provided to the elderly sample members, the provision of financial contributions by informal caregivers, the economic and family behavior of informal caregivers, and caregiver psychological and social well-being.

Christianson and Stephens, 1984.

Christianson, 1986.

Caregiver Followups: 6-Month File and 12-Month File

Primary informal caregivers (to a subset of those sample members included in the screen public-use file e) who completed a 6- or 12-month caregiver interview.

The primary purpose was to collect data to assess the impacts of channeling on informal caregivers of elderly sample members. Questions focus on care provided by the primary informal caregiver as well as by other caregivers, the provision of financial contributions by informal caregivers, caregiving since institutionalization, formal services utilization prior to the death of the elderly sample member, caregiver well-being, and demographic and employment information.

Formal Community Services Analysis File

Individuals who were members of at least one of the analysis samples on which these analyses were based.

The formal community services analysis file was developed from the sample member followup interviews, provider record extracts, surveys of privately contracted individuals, financial records from the sites, and Medicare and Medicaid claims. It includes information on the use of all major community services and expenditures for these services, by funding source.

Corson et al., 1986.

Thornton and Dunstan, 1986.

Brown and Phillips, 1986.

Informal Care Analysis File

Individuals who were members of at least one of the analysis samples on which this analysis was based.

The informal care analysis file was developed from the sample member followup interviews. It includes information on the types and amounts of services provided by informal caregivers and the relationship of caregivers to sample members.

Christianson, 1986.

Hospital, Nursing Home, and Other Medical Services Analysis File

Individuals who were members of at least one of the analysis samples on which this analysis was based. Specifically, this file consists of persons who completed a baseline interview and who were known to be either Medicare entitled or not Medicare entitled.

This file was developed from data obtained from Medicare and Medicaid claims, provider records extracts, and sample member followup interviews. Contained in this file is information on hospital, nursing home, and other medical service use and expenditures, by funding source.

Wooldridge and Schore, 1986.

Quality of Life Analysis File

Individuals who were members of at least one of the analysis samples on which this analysis was based.

The quality of life analysis file was developed from the sample member followup interviews. It includes information on elderly sample member satisfaction with care, social-psychological well-being, and functioning.

Applebaum and Harrigan, 1986.

The caseload buildup phase began in March, May, or June 1982 and ended in May or June 1983, depending on the channeling project. There were applicants randomized during this phase who were not included in this sample; namely, those applicants who were members of the household of a treatment group client, fourteen cases whose screening survey instrument was lost in the mail, and one individual who was eliminated from all samples because, although assigned to the control group, the individual received channeling services.

Screen and baseline standard control variables (contained in this file, as well as in most of the other files) were used in most of the reports listed below.

Note that this sample includes clients who were not included in the samples used for the impact analysis--those individuals who enrolled before randomization began, those who were members of the household of a treatment client, and those who enrolled after the research sample size had been achieved and random assignment ceased.

Sample flags are binary variables indicating membership in a survey or analysis sample.

The caregiver subsample includes the caregivers named by the elderly sample members who were enrolled during an approximately six-month period beginning in November 1982.

Confidentiality Precautions. For confidentiality reasons, some identifying variables were excluded from the public use files. Thus, for example, names and addresses and Medicare and Medicaid ID numbers were not included. Since the user must have a method for linking information on individuals across files, unique ID numbers are used to identify individual data. The ID numbers on the public use files are different from those used by MPR for fielding and analysis.

Additional confidentiality measures were taken. To avoid the possibility that one or more data items could identify, or nearly identify, an individual in a particular site, we reviewed the data and modified some fields. Actual ages were converted into age categories. In sites with very small minority populations, ethnicity was recoded by combining categories. In addition, calendar dates are not provided on the files. Instead, each date variable has been converted into a variable that indicates the number of days between screen randomization and the date of the event. Finally, we deleted information on legal guardianship.
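As an illustrative sketch, the date-to-offset conversion described above can be expressed as follows (the dates in the example are hypothetical, not drawn from the files):

```python
from datetime import date

def to_day_offset(event_date: date, randomization_date: date) -> int:
    """Days from screen randomization to the event -- the form in
    which dates appear on the public use files."""
    return (event_date - randomization_date).days

# Hypothetical sample member: randomized 1982-05-10, event on 1982-08-01.
offset = to_day_offset(date(1982, 8, 1), date(1982, 5, 10))  # 83 days
```

The original calendar dates cannot be recovered from the files, since the randomization dates themselves are not provided.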

Public Use File Formats. We converted the SAS data sets into sequential (EBCDIC) public use files from which each variable can be accessed by its column position, thus making the file readable in virtually every mainframe computer system.
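Because each variable occupies a fixed column position, a record can be parsed by simple slicing once it is decoded from EBCDIC. A minimal sketch follows; the field names, positions, and lengths here are hypothetical, and the actual layout for each file appears in its documentation report:

```python
# Hypothetical field layout: name -> (start column, field length).
# The real positions and lengths are given in the File Documentation
# chapter of each public use file report.
LAYOUT = {
    "sample_id": (0, 6),
    "age_group": (6, 2),
    "treatment": (8, 1),
}

def parse_record(raw: bytes) -> dict:
    """Decode one fixed-length EBCDIC record into named string fields."""
    text = raw.decode("cp037")  # cp037 is U.S. EBCDIC
    return {name: text[start:start + length].strip()
            for name, (start, length) in LAYOUT.items()}

record = parse_record("000123651".encode("cp037"))
```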

Documentation Available. The public use file documentation consists of eleven reports, each of which documents one or more public use files. Each of the eleven reports contains three chapters:

I. Introduction (common to all reports)

II. Description of File

The development of the file

The variables included

The analysis samples included

III. File Documentation

File layout: the variables included in the file, by position and field length

Instrument: an annotated version of the survey instrument showing variable names assigned to each question (provided for source files only)

Constructed variable documentation: for each constructed variable a form is provided that describes how the variable was constructed and the values for different categories

Descriptive statistics: frequency distributions and means for all variables included on the file

Physical tape specifications: information necessary to read the tape and a dump of the contents of the first two records on the tape

Copies of these eleven reports will be available for review at the breakout sessions of the conference.

Physical Tape Specifications: Table 7 summarizes the physical specifications of the tapes. Note that all files are available on tape, in EBCDIC, recorded at 6250 bpi in fixed-block format with no label.

Although collected and structured for the specific purpose of evaluating the channeling demonstration, the resulting data can be used for a wide variety of other research. In this section, we briefly review completed and planned research, discuss potential applications, and caution users about some pitfalls.

A. Completed and Planned Analyses

Analyses conducted as part of the channeling evaluation and a followup study of targeting those at high risk of nursing home placement are now completed. A series of detailed technical reports on channeling document the results, data collection procedures, sample definitions, and methodology. Paragraph summaries of the reports and information on ordering them can be found in a summary of the Channeling Demonstration and an abstract list of reports available from the Office of Social Services Policy, Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services (DHHS), Room 410E, H.H. Humphrey Building, 200 Independence Avenue, S.W., Washington, D.C. 20201. The followup study of targeting is reported by Grannemann et al. (1986). It is currently not available for distribution.

We are aware of the following analyses that are currently being undertaken with the public use files:

Robert Clark of the Office of the Assistant Secretary for Planning and Evaluation of DHHS: (1) estimation of the total cost of care, both public and private, with particular emphasis on out-of-pocket costs, and (2) analysis of service use by the oldest old.

Corbin Liu of the Urban Institute: analysis of the costs of care of older persons with cognitive impairments, both those in nursing homes and those in the community.

Peter Kemper of the National Center for Health Services Research of DHHS: analysis of the determinants of the use of formal and informal community care.

Jim Callahan and Phyllis Mutschler of Brandeis University: analysis of changes over one year in the use of all types of care by the control group.

Other research will undoubtedly be initiated as more researchers obtain the public use files.

B. Potential Applications

The extensive data collected for the channeling evaluation create many research possibilities. Their special purpose--to form the basis of an evaluation--makes them better suited for some analyses than for others. Because the sample is a selected one--data came from ten sites selected through competition, and the sample comprises applicants referred to a special community care program--descriptive analyses using the channeling data are less likely to be informative than analyses of the same questions using nationally representative data. The channeling data appear more appropriate for analyses that do not depend on representativeness but capitalize on the richness of the data or their original purpose.

The channeling data appear most useful for analyses of behavioral relationships, methodological research, or re-analysis of experimental results. The extensive data on the well-being of the elderly sample, for example, are fertile ground for psychometric analysis of quality of life measures or analysis of the determinants of well-being, but would not be appropriate for an analysis of the extent of unmet need in the United States. To estimate the cost of community care and nursing home care for use in calculating premiums for LTC insurance, channeling data would have to be used in conjunction with other data (e.g., a nationally representative sample); otherwise, the estimates would pertain to the selected channeling sample rather than to likely purchasers of LTC insurance.

C. Some Cautions

Although the richness and comprehensiveness of the channeling data set open numerous research possibilities, researchers should be aware of its complexity. Researchers accustomed to using cross-section surveys designed to collect data on a population, rather than longitudinal data designed to evaluate a program, are likely to be surprised by the complexity of the channeling data. One researcher who was not involved in the evaluation but has begun using the channeling data remarked that this is "the most complicated data set [he had] ever seen." The complexity arises from the large number of data sources, the structure of the files, and the special evaluation sampling objectives.

The data sources, as described above, are numerous: five interviews with the elderly sample members or their proxies (screen, baseline, and 6-, 12-, and 18-month followups), three interviews with primary informal caregivers (baseline and 6- and 12-month followups), Medicare claims, Medicaid claims, financial model channeling claims, provider billing records, client tracking data, and death records. The data set thus contains a massive amount of data. Variables can be constructed from more than one source, individually or in combination, so that researchers need to understand exactly how variables on the public use file are constructed before using them in analysis. Not all data called for in the design are present in every case--numerous data sources also provide numerous opportunities for missing data.

Evaluation needs determined the way in which files were structured and variables were constructed. Files were organized, for example, by analytic area so that data needed for a particular analysis were all on one file. Because all analyses controlled for baseline characteristics, a standard set of baseline variables was included among constructed followup variables. Constructed variables also were defined to meet evaluation needs. For example, because some baseline data were not comparably measured for treatment and control groups, screen data were sometimes used when a baseline measure might be better for other purposes.

Evaluation objectives also drove the hundreds of big and little decisions in the design of the data collection. Most important were the sampling decisions which optimized the usefulness of the data for evaluation purposes. Not all data were collected for all sample members. Indeed, the only data that are available for the entire sample are the screen and death records. Two examples will illustrate how sample design decisions could affect analysis possibilities. First, in order to limit the duration of the demonstration, only the first half of the sample to enroll were followed for 18 months. Longitudinal analysis must be limited, therefore, to 12 months of followup data or to the relatively small sample with 18 months of followup data. Second, in order to minimize data collection costs, provider billing records on community care costs were collected only for 20 percent of the sample for the first six months and 10 percent for the second six months. Consequently, data on the community service expenditures of private individuals and government programs other than Medicaid and Medicare are quite limited. Although these and other sample design decisions made sense in the evaluation, they may hinder the use of the data for other purposes.

Before undertaking a project using the channeling data, researchers should assess the implications of the data base's complexity for their project by reading the following reports:

The final report (Kemper et al., 1986), to gain an overview of the evaluation design and available data

The report on data collection procedures (Phillips et al., 1986), to learn how the data were collected

The particular technical report on the relevant substantive area, to understand how analysis files were constructed, how samples were defined, what variables were constructed, and what analysis was done as part of the evaluation. (For example, researchers planning analysis of informal care should read the technical report on that subject by Christianson, 1986. In addition, those seeking to replicate the evaluation analysis should read the report on research methodology by Brown, 1986).

Only after having assessed the complexity of the data and its implications for the contemplated research does it make sense to invest in the purchase of the public use tapes and associated documentation.
