Validating Competence: A New Credential for Clinical Documentation Improvement Practitioners

Abstract

As the health information management (HIM) profession continues to expand and become more specialized, there is an ever-increasing need to identify emerging HIM workforce roles that require a codified level of proficiency and professional standards. The Commission on Certification for Health Informatics and Information Management (CCHIIM) explored one such role—clinical documentation improvement (CDI) practitioner—to define the tasks and responsibilities of the job as well as the knowledge required to perform them effectively. Subject-matter experts (SMEs) defined the CDI specialty by following best practices for job analysis methodology. A random sample of 4,923 CDI-related professionals was surveyed regarding the tasks and knowledge required for the job. The survey data were used to create a weighted blueprint of the six major domains that make up the CDI practitioner role, which later formed the foundation for the clinical documentation improvement practitioner (CDIP) credential. As a result, healthcare organizations can be assured that their certified documentation improvement practitioners have demonstrated excellence in clinical care, treatment, coding guidelines, and reimbursement methodologies.

Introduction

As the health information management (HIM) profession continues to expand and become more specialized, there is an ever-increasing need to identify emerging HIM workforce roles that require a codified level of proficiency and professional standards. These evolving roles often advance into specialty areas or concentrations within the larger HIM industry and morph into in-demand positions with specialized competencies. The Commission on Certification for Health Informatics and Information Management (CCHIIM) explored one such role—clinical documentation improvement (CDI) practitioner—to define the tasks and responsibilities that the job comprises as well as the knowledge required to perform them effectively. An in-depth job analysis was conducted to codify the role, which later formed the foundation for developing the clinical documentation improvement practitioner (CDIP) credential. As a result, healthcare organizations can now have the confidence that their certified documentation improvement practitioners have demonstrated excellence in clinical care, treatment, coding guidelines, and reimbursement methodologies.

Background

Emerging professions or job roles bring an exciting air of possibility and uncertainty. Professional regulation, standards, and universal competency levels for these new roles are often ambiguous at best, leaving employers and job incumbents alike searching for a legitimate measure of job competence. A job analysis is the best tool to fully study and delineate these new workforce roles. The job analysis can later be used to form the foundation for a certification examination designed to assess the competency level of those interested in pursuing this role.

A job analysis (also known as a practice analysis, job/task analysis, or role delineation study) is conducted to determine the relevant tasks and knowledge, skills, and abilities (KSAs) needed to competently perform those tasks for a particular role. The main goal of a job analysis is to clearly and concisely define, through subject-matter expert (SME) validation, what professionals in that role do on the job.1, 2 The job analysis is an essential method for demonstrating the job relatedness of certification examination content, as the empirical study of a workforce role provides a linkage between job-related data and exam content.3 The importance of job analyses is further outlined through National Commission for Certifying Agencies (NCCA) and American National Standards Institute (ANSI) standards and guidelines. NCCA Standard 11 states: “The certification program must employ assessment instruments that are derived from the job/practice analysis and that are consistent with generally accepted psychometric principles.”4 The ANSI standard ANSI/ISO/IEC 17024:2003 further notes that a properly executed job analysis forms the basis of a valid, reliable, and fair assessment that reflects the KSAs required for competent job performance.5

A sound, comprehensive job analysis is integral to the legal defensibility of a credentialing exam, as the content domains and knowledge topics tested must be clearly linked to job-related performance criteria, resulting in content validity.6 Job analyses are often used as evidence of content validation during high-stakes examination legal challenges. Standard 14.14 of the Standards for Educational and Psychological Testing notes: “The content domain to be covered by a credentialing test should be defined clearly and justified in terms of the importance of the content for credential-worthy performance in an occupation or profession. A rationale should be provided to support a claim that the knowledge or skills being assessed are required for credential-worthy performance in an occupation and are consistent with the purpose for which the licensing or certification program was instituted.”7 In addition, the following criteria must be met in order for a job analysis to produce a content-valid examination:

The exam domains, or main subject matter areas, must be accurately weighted to reflect their relative importance on the job;

The difficulty level should match minimal competence for the credential; and

The job analysis should cover the full range of tasks performed in that role.8

CCHIIM conducts routine environmental scans to monitor any changes or growth opportunities in the health information and informatics workforce that affect the profession, and as a result, the commission decided to conduct a CDI practitioner job analysis. Numerous industry trends, such as the increased adoption of electronic health records (EHRs), an increase in health insurance fraud, and the need for complete and accurate documentation to support the requirements of the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM), all suggest the need for a highly qualified, specialized set of documentation improvement specialists who meet stringent professional guidelines.9 Additionally, general emphasis on revenue cycle processes, regulatory requirements, and continuous quality improvement converge to necessitate this type of credential. Because clinical documentation specialists have expertise in clinical care, coding guidelines, and reimbursement methodologies, a nationally recognized CDI-related credential would distinguish those practitioners as competent to provide direction relative to clinical documentation in the patient’s health record, thus promoting the HIM profession overall.

To explore the business need for and feasibility of developing a new CDI credential, CCHIIM conducted a thorough needs analysis and idea brief outlining the business impact, strategic context (including industry trends and member/customer needs), value proposition, and sustainability of this exam. The commission concluded that the exam would be a natural extension of the American Health Information Management Association (AHIMA) offerings that support clinical documentation improvement, including CDI practice briefs, a CDI tool kit for healthcare organizations and professionals, a practice community, related educational resources, and in 2007, through its House of Delegates, an approved resolution on quality data and documentation in EHRs.10–12 Additionally, creating a salient credential to validate the clinical documentation role was found to be both reactive and forward-looking: it would respond to market demand from clinical documentation specialists already working in the HIM continuum while also offering an opportunity to further expand and welcome complementary healthcare professionals to the HIM arena. This research served to solidify the general scope of a CDI-related credential and justify further exploration of developing this exam.

Methods

A task force composed of 19 CDI SMEs met for two days in May 2011 to create a job analysis survey to be sent to CDI industry practitioners. The SMEs on the task force were selected based on their clinical documentation expertise, as all were currently working in roles focused on clinical documentation improvement, education, and/or medical coding quality. A mix of SMEs, as reflected in Table 1, was chosen to reflect diversity in work setting, geographical location, supervisory level, and gender in order to obtain a representative sample of the specialty as a whole.

The job analysis task force was charged with developing a comprehensive list of knowledge and task statements required of the CDI practitioner role. Additionally, the group had to define the major domains (also known as topics or content areas) that represent the primary job responsibilities or facets of the job. The group determined that the knowledge and task statements would each be mapped to one of the six domains represented in Table 2.

To help define the scope of the related credential, the task force used an initial list of knowledge and task topics prepared in advance by AHIMA staff together with a small team of experienced CDI specialists. The task force then refined this task and knowledge list and supplemented it with their own insights based on their shared experience on the job. Additionally, the group developed “future topics” to identify potential developmental areas and predicted future job requirements for the CDI field as it continues to evolve. These included tasks that CDI practitioners may not be presently engaged in but will likely be asked to perform in the future, and knowledge areas that CDI practitioners will likely need to learn for the future.

These knowledge areas, tasks, and future topics were used to create the job analysis validation survey. In addition to defining the role in terms of the required knowledge and tasks performed on the job both currently and in the future, the task force also created survey scales regarding frequency and importance (listed in Table 3 and Table 4) to be used in the job analysis survey. A discrete, five-point Likert scale was selected to evaluate frequency, with possible response choices of “Never” (1), “Quarterly” (2), “Monthly” (3), “Weekly” (4), and “Daily” (5). A discrete, three-point Likert scale was used for the importance ratings, with the possible responses of “Not Important” (1), “Somewhat Important” (2), and “Very Important” (3). The task force members selected these rating scales because they felt that they best approximated the rate of occurrence and general importance levels relative to the job.

In order to find the appropriate group of practitioners to survey, random sampling of a targeted sector of the AHIMA membership database was conducted. To meet the criteria for inclusion in the survey, individuals had to be in one of four roles, practice in one of three clinical settings, and have at least one of three credentials, as shown in Table 5. At the time, 12,914 individuals in the AHIMA membership database met those requirements. A random number generator assigned each member a number from 1 to 12,914, drawn with replacement (so the same number could be assigned to more than one member). The 4,923 members who received numbers below 5,000 were included in the survey. Based on the criteria of clinical setting, supervisory level, RHIA and RHIT certification, CCS certification, and RN registration, the sample selected was within 1.5 percent of the distribution of members for each criterion.
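The selection step described above can be sketched as follows. This is a minimal illustration only: member records are stood in for by integers, the random seed is arbitrary, and the variable names are ours, but the 12,914-member pool and the below-5,000 inclusion rule come from the text.

```python
import random

random.seed(0)  # fixed seed so this illustration is reproducible

POPULATION_SIZE = 12914  # members meeting the inclusion criteria
THRESHOLD = 5000         # members numbered below this were surveyed

members = range(1, POPULATION_SIZE + 1)  # stand-ins for member records
# Assign each member a random number from 1 to 12,914, with replacement
# (the same number may be assigned to more than one member).
assigned = {m: random.randint(1, POPULATION_SIZE) for m in members}
# Members whose assigned number falls below the threshold are sampled.
sample = [m for m, n in assigned.items() if n < THRESHOLD]

print(len(sample))  # close to 4,999/12,914 of the pool, roughly 39 percent
```

Because each member's assigned number is uniform on 1 to 12,914, the expected sample fraction is 4,999/12,914, which is consistent with the 4,923 members selected in the study.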

Survey invitations were e-mailed to the 4,923 potential respondents on Friday, June 24, 2011, and the survey closed at midnight on Tuesday, July 12, 2011. The response rate was 14.7 percent, with 733 respondents completing the survey and demographic questions. The sampling error was +/- 1.1 percent at the 95 percent confidence level.

In July, the job analysis task force reconvened to review the survey results. The original weightings given in their preliminary exam blueprint were compared to the weights resulting from the job analysis validation survey. To reconcile the two, the task force voted on the target weights for each content area within the knowledge and task domains. The percentage weighting of each domain was determined based on the aggregate importance and frequency ratings given to each domain: domains containing tasks and knowledge statements rated as more important or more frequently performed received higher percentage weights.
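The principle above, that a domain's percentage weight tracks the aggregate ratings of its items, can be sketched as follows. The domain names and ratings here are made up for illustration; they are not the study's data.

```python
# Illustrative sketch: each domain's percentage weight is proportional
# to the sum of the aggregate (importance and frequency) ratings of its
# task and knowledge items. Hypothetical domains and ratings only.

domain_ratings = {
    "Domain A": [4.2, 3.8, 4.5],  # mean item ratings within the domain
    "Domain B": [2.1, 2.4],
}

total = sum(sum(ratings) for ratings in domain_ratings.values())
weights = {
    domain: round(100 * sum(ratings) / total, 1)
    for domain, ratings in domain_ratings.items()
}
print(weights)  # higher-rated domains receive higher percentage weights
```

The weights sum to 100 percent by construction, mirroring how a blueprint allocates the whole exam across its domains.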

For each of the target weights, a range of +/- 2 percent was calculated to create the maximum and minimum percentages for each domain. These maximum and minimum percentage weightings became the weightings for the final exam blueprint and determined the total number of test items included in each domain. A percentage range, as opposed to an absolute percentage, was created to allow for variance between preliminary blueprint expectations and survey responses, serving as a buffer for the margin of error. Additionally, the maximum and minimum domain percentages allowed for some leeway to slightly adjust weightings by topic area as necessary based on industry changes.
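The range calculation described above is straightforward and can be sketched as follows; the two target weightings used here (26 percent and 6 percent) are the values reported for the highest- and lowest-weighted domains in the Results section.

```python
MARGIN = 2  # percentage points, per the +/- 2 percent range described above

def weight_range(target_pct):
    """Return the (minimum, maximum) blueprint weightings for a target weight."""
    return (target_pct - MARGIN, target_pct + MARGIN)

print(weight_range(26))  # Record Review & Document Clarification -> (24, 28)
print(weight_range(6))   # Compliance -> (4, 8)
```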

Results

The final domain weightings, including the maximum and minimum percentage ranges, preliminary weightings, survey weightings, and target weightings, are shown in Table 6. The target weighting was determined by the task force after comparing the survey data with the original preliminary blueprint.

Table 7 and Table 8 reflect the frequency and importance survey ratings for each task and knowledge statement ranked from highest to lowest in each domain. The weighted average of each task and knowledge rating was calculated from the aggregate survey responses. Because the frequency ratings used a scale of 1 to 5 and the importance ratings used a scale of 1 to 3, the importance ratings were multiplied by a scaling factor of 1.667 (that is, 5/3) so that they would carry the same weight as the frequency ratings. These corrected mean frequency and importance ratings were used to rank the tasks and knowledge statements within their domains and were also used to calculate the weight for each domain.
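The scale correction above can be sketched as a minimal illustration; the ratings passed in below are hypothetical, while the 5/3 factor comes from the text.

```python
SCALING_FACTOR = 5 / 3  # ~1.667; aligns the 1-3 importance scale with 1-5

def corrected_importance(mean_importance):
    """Rescale a mean importance rating (1-3 scale) onto the 1-5 frequency scale."""
    return mean_importance * SCALING_FACTOR

# After correction, a "Very Important" rating (3) carries the same weight
# as a "Daily" frequency rating (5).
print(round(corrected_importance(3), 3))
```

With both scales sharing a common maximum of 5, corrected importance and frequency means can be averaged or summed on equal footing when ranking items and weighting domains.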

The Record Review & Document Clarification and Clinical & Coding Practice domains received the highest target weightings on the exam blueprint (26 percent and 24 percent respectively) because they had the greatest number of task or knowledge items that also had the highest frequency and importance weightings based on the survey responses. Because these areas make up the greatest proportion of the work done on the job and the knowledge required to complete those tasks, they form the largest proportion of the exam. Conversely, the Compliance domain received the smallest overall target weighting on the exam blueprint (6 percent) because it had fewer task or knowledge items, which also had the lowest frequency and importance ratings.

Table 9 and Table 10 depict the survey ratings for the “future” task and knowledge topics included in the survey. The data show that the majority of survey respondents felt that all of the future knowledge topics would be needed in the short term (within six months to one year), with the knowledge areas related to electronic health records (EHRs) being the most highly rated. The future task topic data show how many respondents were already performing each task, how frequently they perform it, and how important they rate it. Those who indicated that they do not currently perform a task were asked when they expect themselves or their organization to perform the task. Respondents were also asked to indicate the domain under which they felt each future task belonged. The data show that 10 to 40 percent of respondents were already performing one or more of the future tasks, while the majority of those who were not performing the tasks indicated they would either begin in the next six months to one year or would never perform that task.

Finally, the survey respondent demographics are represented in Tables 11, 12, 13, 14, 15, 16, 17, 18, 19, and 20. Respondents’ geographic area, work setting, practice setting, facility size, type of health record system, employee status, department, job title, and age were all captured to ascertain the representativeness of the sample. All demographic characteristics were appropriately distributed, as they closely match the population’s demographic profile.

Discussion

The opinions and experience of a representative sample of CDI specialists were obtained through the job analysis process to build a solid, legally defensible foundation for the CDIP credential based on job-related competency. This foundation takes shape in the exam blueprint, which outlines the main content domains tested on the exam. The weighting for each domain proportionately reflects the major components of the CDI practitioner job role. By following job analysis and test development best-practice methodology, CCHIIM was able to codify the clinical documentation improvement specialty by defining the critical factors of the job role and developing a standardized tool used to evaluate CDI practice competency. This credential will strengthen the CDI role by instilling employer confidence in CDIP-credentialed individuals who have met measured, defined, and validated professional standards.

Additionally, the job analysis will help provide direction for the specialty as it continues to grow. The job analysis included measurement of both current and future task and knowledge statements to track how the CDI practitioner role may evolve and what knowledge and abilities will be required of these workers as they grow in their roles. These “future” topics will be monitored and reevaluated in the next job analysis (typically conducted every three to five years, or sooner if the specialty undergoes an extreme transformation) to determine what adjustments should be made to the CDIP exam blueprint to best represent the profession.13

Numerous steps to minimize job analysis survey bias were taken. Survey incentives (such as the award of one continuing education unit [CEU] and an entry into an American Express gift card drawing) were offered to limit nonresponse bias and increase the response rate. Additionally, e-mailed survey reminders were sent in order to reach as many respondents as possible. Undercoverage bias was also avoided by ensuring that the demographic composition of the sample mirrored that of the population. The distribution of respondents meeting the parameters of the population (credentials, work setting, and job role) showed no significant difference in demographics when compared to the sample cohort as a whole. Therefore, neither undercoverage nor nonresponse bias was found to be a significant problem in the sample.

As Watzlaf, Rudman, Hart-Hester, and Ren noted in their 2009 article, the roles and job functions of HIM professionals are continuously changing and becoming more specialized.14 New specializations continue to emerge because of a variety of regulatory and environmental factors, and the new specializations in turn increase the need to certify individuals working in these nontraditional roles to ensure the integrity and quality of their work. HIM certification bodies must stay on top of these trends in order to provide meaningful professional guidelines and standards of excellence for these growing fields. As the CDI role and the entire HIM industry evolve, CCHIIM will continue to routinely examine job roles and functions and update the requisite body of knowledge and competency required for HIM excellence through job analyses and exam blueprint updates.

Limitations

While care was taken to ensure representativeness of the sample and obtain a satisfactory response rate, the study has some limitations. Because the population and resulting sample were drawn solely from the AHIMA membership database (owing to financial constraints and other factors), the survey results might have been strengthened by casting a wider net and surveying individuals who do CDI work but are not AHIMA members.

Additionally, there is some debate about the use of five-point and three-point scales (as used for frequency and importance in this survey) versus four-point, forced-choice scales in survey research. Some argue for the use of four-point rating scales because they eliminate the tendency toward the middle and force respondents to pick a side, as opposed to a three- or five-point scale that has a “neutral” midpoint. However, four-point scales can force respondents to answer in a way that does not truly reflect their opinions, in cases when respondents may truly be neutral or middle-of-the-road in their opinion of a certain topic.15 Forcing respondents to give an untrue answer will unnecessarily skew results. These reasons led to the decision to use three- and five-point survey scales. Respondents were also given the opportunity to write in any comments they had about their ratings or the survey questions for each domain.

Conclusion

To fill an industry need for a validated professional standard of CDI excellence, CCHIIM explored the possibility of creating a new CDI credential for this growing field. To do so, a job analysis was conducted to thoroughly yet concisely define the requisite tasks and knowledge areas for the CDI practitioner role. The job analysis data were used to develop the CDIP exam blueprint in accordance with test development best-practice methodology, in that the domain weightings were determined based on SME rankings of task or knowledge criticality and frequency. Because validated, job-specific content is the crux of the CDIP exam, those who list the CDIP credential after their name have proven their competency and expertise related to the codified CDI body of knowledge. As a result, the HIM field in its entirety is strengthened by having a defined, measurable, and future-thinking measure of proficiency related to ensuring the quality of patient health information.

Jessica Ryan, MA, is a learning specialist at the University of Chicago Medical Center in Chicago, IL.

Karen Patena, MBA, RHIA, FAHIMA, is a clinical associate professor and director of health information management programs at the University of Illinois at Chicago in Chicago, IL.

Wallace Judd, PhD, is a psychometrician at Authentic Testing, Inc., in Gaithersburg, MD.

Mike Niederpruem, MS, MA, CAE, is the director of education and research at the Dental Auxiliary Learning and Education Foundation in Chicago, IL.

Mehrens, William A., and W. James Popham. “How to Evaluate the Legal Defensibility of High-Stakes Tests.”

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association, 1999, p. 161.