The Medical Transcriptionist's Resource for MS Word and Windows

Error Studies in Speech Recognition

Written by Laura Bryan, MT (ASCP), CHDS, AHDI-F

An exhaustive search of peer-reviewed literature published in the past 10 years demonstrates a paucity of research on the use of speech recognition technology to capture clinician dictation. The following are excerpts from studies involving speech recognition technology used for medical record documentation.

1) Reports dictated with voice recognition took 50% longer to dictate despite being 24% shorter than those conventionally transcribed. 2) There were 5.1 errors per case, and 90% of all voice recognition dictations contained errors prior to report signoff, while 10% of transcribed reports contained errors. 3) After signoff, 35% of VR reports still had errors. (Pezzullo et al., 2008)

Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence still exists about its impact on workflow, error rates and costs. …42% and 30% of the finalized VR reports for each of the radiologists investigated contained errors. (Strahan & Schneider-Kolsky, 2010)

Despite the potential to dominate radiology reporting, current speech recognition technology is thus far a weak and inconsistent alternative to traditional human transcription. This is attributable to poor accuracy rates, in spite of vendor claims, and the wasted resources that go into correcting erroneous reports. (Voll, Atkins, & Forster, 2008)

…a consistent claim across the speech recognition industry is that SRTs can reduce costs due to faster turn-around times of medical documentation, higher efficiency, and increased accuracy. Little research exists on the impact of SRT technologies on the actual work of creating medical records. (David, Chand, & Sankaranarayanan, 2014)

Twenty respondents were selected to test the system, and the outcome of the experiment showed that the speech recognition application was faster than typing. However, data captured through speech recognition and translated into health records was consistently inaccurate. It was noted that human factors such as accent and tone of speech affect the translation of speech recognition into medical records. (Abd Ghani & Dewi, 2012)

Our key recommendation from this study is that as the QA function is removed through the implementation of new technologies, more attention needs to be paid to the potential impacts of this decision on the quality of the documentation produced. (David et al., 2014)

In the physician-as-editor model, it is assumed that the physician will find errors, edit the document, and do the proper formatting. There is evidence, however, that this assumption does not necessarily hold, and that physicians do not take the time to proofread and edit their records. (David et al., 2014)

Furthermore, hospital administrators need to consider how best to maintain QA functions when the method of medical record production undergoes drastic transformation, such as when once-and-done production technologies are introduced. (David et al., 2014)

The results demonstrated that, on average, on the order of 315,000 errors in one million dictations were surfaced. This shows that medical errors occur in dictation and quality assurance measures are needed in dealing with those errors. …Anecdotal evidence points to the belief that records created directly by physicians alone will have fewer errors and thus be more accurate. This research demonstrates this is not necessarily the case when it comes to physician dictation. As a result, the place of quality assurance in the medical record production workflow needs to be carefully considered before implementing a "once-and-done" (i.e., physician-based) model of record creation. (David et al., 2014)

At least one major error was found in 23% of ASR reports, as opposed to 4% of conventional dictation transcription reports (p < 0.01). Major errors were most common in breast MRI reports (35% of ASR and 7% of conventional reports), with the lowest error rates occurring in reports of interventional procedures (13% of ASR and 4% of conventional reports) and mammography reports (15% of ASR and no conventional reports) (p < 0.01). (Basma, Lord, Jacks, Rizk, & Scaranelo, 2012)

Errors were divided into two categories: significant but not likely to alter patient management, and very significant, with the meaning of the report affected and thus potentially affecting patient management (nonsense phrase). Three hundred seventy-nine finalized CR (plain film) reports and 631 finalized non-CR (ultrasound, CT, MRI, nuclear, interventional) reports were examined. Eleven percent of the reports in the CR group had errors; 2% of these reports contained nonsense phrases. Thirty-six percent of the reports in the non-CR group had errors, and of these, 5% contained nonsense phrases. (Chang, Strahan, & Jolley, 2011)

"My reports -- and I try to be careful -- average seven errors per report, which go from punctuation to ludicrous," said Dr. Michael McNamara Jr. from Case Western Reserve University School of Medicine. "[Voice recognition software] inserts a 'no,' it drops a 'no' -- it's a very dangerous weapon and we have to use it very, very carefully," he said. http://www.auntminnie.com/index.aspx?sec=rca_n&sub=rsna_2012&pag=dis&ItemID=101776

References

Note: To the author's knowledge, this list represents the extent of peer-reviewed research papers published in the last 10 years studying the use of speech recognition in healthcare.