Coding neuroradiology reports for the Northern Manhattan Stroke Study: A comparison of natural language processing and manual review

Abstract

Automated systems using natural language processing may greatly speed chart review tasks for clinical research, but their accuracy in this setting is unknown. The objective of this study was to compare the accuracy of automated and manual coding in the data acquisition tasks of an ongoing clinical research study, the Northern Manhattan Stroke Study (NOMASS). We identified 471 neuroradiology reports of brain images used in the NOMASS study. Using both automated and manual coding, we completed a standardized NOMASS imaging form with the information contained in these reports. We then generated ROC curves for both manual and automated coding by comparing our results to the original NOMASS data, where study investigators directly coded their interpretations of brain images. The areas under the ROC curves for both manual and automated coding were the main outcome measure. The overall predictive value of the automated system (ROC area 0.85, 95% CI 0.84-0.87) was not statistically different from the predictive value of the manual coding (ROC area 0.87, 95% CI 0.83-0.91). Measured in terms of accuracy, the automated system performed slightly worse than manual coding. The overall accuracy of the automated system was 84% (CI 83-85%). The overall accuracy of manual coding was 86% (CI 84-88%). The difference in accuracy between the two methods was small but statistically significant (P = 0.026). Errors in manual coding appeared to be due to differences between neurologists' and neuroradiologists' interpretations, different use of detailed anatomic terms, and lack of clinical information. Automated systems can use natural language processing to rapidly perform complex data acquisition tasks. Although there is a small decrease in the accuracy of the data as compared to traditional methods, automated systems may greatly expand the power of chart review in clinical research design and implementation. © 2000 Academic Press.
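The evaluation design described above, scoring each coding method against the investigators' direct interpretation of the images, and summarizing performance as overall accuracy and area under the ROC curve, can be sketched as follows. This is a minimal illustration on toy data, not the NOMASS data or the study's actual software; the helper functions and the confidence-score representation of the automated coder are our own assumptions.

```python
# Toy sketch: compare automated and manual coding against a reference
# standard, reporting accuracy and ROC area. Data are illustrative only.

def accuracy(pred, ref):
    """Fraction of codes that agree with the reference standard."""
    return sum(p == r for p, r in zip(pred, ref)) / len(ref)

def roc_auc(scores, ref):
    """ROC area via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case is scored above a randomly chosen
    negative case, counting ties as one half."""
    pos = [s for s, r in zip(scores, ref) if r == 1]
    neg = [s for s, r in zip(scores, ref) if r == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Reference standard: investigators' direct coding of the brain images.
ref = [1, 1, 1, 1, 0, 0, 0, 0]
# Automated coder emits confidence scores; manual coder emits binary codes.
automated = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.6, 0.1]
manual = [1, 1, 0, 1, 0, 0, 1, 0]

print(accuracy([round(s) for s in automated], ref))  # thresholded at 0.5
print(roc_auc(automated, ref))
print(accuracy(manual, ref))
```

In the study itself the comparison was done per field of the standardized imaging form and confidence intervals were attached to each estimate; this sketch shows only the core accuracy and AUC computations.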