2nd Annual ICD-10 Coding Contest Results [Sponsored]

Since the implementation of ICD-10-CM/PCS two years ago, coding has taken on a new meaning. Code specificity is now used in many ways, most of which impact provider payments. Consider the following:

Reimbursement—current fee for service

Reimbursement—quality-based, which often affects reimbursement two years in the future

Quality scores

Provider profiles

Regulatory audit focus

Population health management initiatives

Risk adjustment

Contract negotiation

Bundled payment preparation

Process improvement

Research

We all perform conventional audits—internal and external—throughout the year. Preparation for these audits can be time-consuming and may limit how often they can be conducted. To supplement your external audits, consider the benefit of having your coding staff code your own medical records without interrupting production—with a way to score, analyze, compare results, and determine educational needs at the chapter, category, or code level.

Central Learning users create their own answer key, load it into the application, and determine the correct codes for their cases. This provides the ability to continually monitor coding staff between routine external audits and to focus training on the code specificity that affects a specific facility. Code specificity is critical in today’s value-based-payment world because it drives risk adjustment and severity of illness, which in turn affect provider payments.

Coding accuracy under a new lens

The thought process around coding audits and accuracy rates must change as well. Perhaps in addition to the conventional overall case accuracy, we need to look at accuracy in different ways, considering multiple types of accuracy scores:

DRG accuracy

Principal diagnosis accuracy

Principal procedure accuracy

Diagnosis and procedure accuracy based on code specificity

By changing our views on coding accuracy and accepting the differences between ICD-10 and ICD-9, we can better address code accuracy. As clinical documentation improvement programs focus on ensuring documentation is present and accurate, coding professionals need to focus on specified code assignment beyond the DRG.

Contest Background

Central Learning’s 2nd Annual National ICD-10 Coding Contest participants were recruited via email and advertising through various HIM and coding publishing organizations. Contestants were given coding guidelines for each patient type and access to the Central Learning software. Access to encoder software was not provided within the contest, but coders were allowed to use their own individual encoder tools and coding references.

Coding contest participants self-reported their primary coding certification. Among contest coders, 99 percent reported holding a certification (AHIMA or AAPC), and 1 percent reported no certification. Inpatient coding contestants reported an average of 14.3 years of experience; outpatient contestants reported an average of 9.9 years.

From July 14 to August 11, 2017, Central Learning measured inpatient ICD-10-CM/PCS coding and DRG accuracy, along with outpatient (ambulatory surgery and emergency department) ICD-10-CM and CPT coding accuracy (excluding facility evaluation and management (E/M)). Contest participants coded a total of 1,636 real medical record cases using a uniform online coding platform. Participants chose the cases they preferred to code based on their area of coding specialty.

Once coded, the deidentified cases were electronically graded against Central Learning’s standardized answer keys to remove any bias for accuracy scoring. Prior to contest initiation, the Central Learning answer keys were vetted, validated, and approved through a rigorous process:

Based on the American Hospital Association’s Coding Clinic and the Centers for Medicare and Medicaid Services ICD-10 Official Guidelines for Coding and Reporting and other published coding guidelines

Reviewed and approved by a forum of certified coders, AHIMA-approved ICD-10-CM/PCS trainers and consultants, both internal and external

Results uncover need for ongoing ICD-10 training and audits

The 2nd Annual National ICD-10 Coding Contest data provide electronically measured, unbiased results and a national benchmark for ICD-10 coding accuracy rates. This year’s findings continue to reveal accuracy rates far below the 95 percent accuracy standard. As in 2016, coding accuracy was strongest among inpatient cases and weakest among emergency department cases, though both improved from last year. Ambulatory surgery accuracy rates, however, decreased from last year.

An important observation is the correlation between accuracy and productivity scores.

The contest also evaluated the DRGs assigned by contestants. Accuracy rates were very similar in 2016 and 2017, and again well below the 95 percent industry standard.

Like the 2016 results, DRG 455 (combined anterior/posterior spinal fusion w/o CC/MCC) remained the DRG with the most potential revenue loss, at -$4,248 per case (see the yellow highlighted area in the table below). Among the five new cases in 2017, DRG 871 (septicemia or severe sepsis w/o MV >96 hours w/ MCC) reflected the most potential revenue loss, at -$3,794 per case (see the orange highlighted area in the table below). The Office of Inspector General (OIG) released data on September 7, 2017 concerning DRG 870 (septicemia or severe sepsis w/ MV >96 hours), finding that inpatient days were being counted rather than actual ventilation time. Given this increased DRG scrutiny, it will be important for providers to audit DRGs 870 and 871 to ensure accurate vent-time calculation. Finally, opportunities for DRG 870 are evident, since sepsis continues to be a major challenge from both a documentation and a coding perspective.