Abstract

Background and Purpose—We undertook this study to evaluate the performance of automated detection software in identifying Doppler microembolic signals (MES) during cardiac surgery.

Methods—Intraoperative monitoring was performed over two spatially separated vessel segments of each middle cerebral artery in 18 patients undergoing coronary artery bypass surgery (n=16) or cardiac valve replacement (n=2). All monitoring sessions were saved on digital audiotape and subsequently played back to the same ultrasound machine, set up to automatically detect MES by evaluating the temporal delay in their appearance between the two segments, in the presence of an experienced examiner. Software sensitivity and specificity in MES detection were then evaluated, with the results of the human observer considered the gold standard.

Results—A total of 44 933 high-intensity signals (artifacts and MES) were evaluated. Overall sensitivity and specificity of the software were 64% and 78.5%, respectively, ranging from 54% to 96% and from 74% to 90% in individual patients. When the overall results of the software were compared with those of the human observer, κ was 0.72.

Conclusions—The tested software displayed a satisfactory specificity. Provided that its sensitivity is further improved, it could become a valuable tool in intraoperative monitoring.

Since the first description of Doppler MES during cardiac surgery,1 several studies have examined their clinical relevance and reported a relation between MES counts and the prevalence of perioperative complications.2–5 Intraoperative MES detection, however, is a time-consuming procedure, requiring both the presence of an experienced examiner during monitoring and precise off-line evaluation of MES counts. Automated MES detection therefore represents an attractive tool in this setting. A promising detection technique based on “multirange” monitoring was introduced by Aaslid in 1994 and has since been evaluated in a number of studies.6–9 We examined the applicability of this technique for automated MES detection during cardiac surgery.

Subjects and Methods

Bilateral TCD monitoring was performed during the entire operative procedure in 18 patients (13 men, 5 women; aged 64±2 years) undergoing elective cardiac surgery for coronary artery bypass grafting (n=16) or cardiac valve replacement (n=2), with the use of a pulsed ultrasound machine (Multi-Dop X-4, DWL) with 2-MHz transducers. Window overlap was set at 60% and sample volume at 5 mm. Power was reduced to 20 to 40 mW/cm² to ensure an adequate dynamic range (>65 dB), and the detection level for MES was set at 9 dB. We used a 64-point fast Fourier transform. Monitoring was performed over two spatially separated segments of both middle cerebral arteries, 5 to 8 mm apart. Monitoring sessions were saved on digital audiotape (DAT) with an 8-channel recorder (TASCAM DA 88). An experienced examiner, who recorded the stages of the operative procedure based on the DAT time code and readjusted the probe when necessary, was present during all monitoring sessions.

All tapes were subsequently played back to the same Doppler machine. The embolus detection software TCD-8, version 8.00 T, was used. This software initially identifies all signals causing an intensity increase higher than a preset value (in our application 9 dB) above background, thus using a pure energy threshold. Background intensity is evaluated by averaging the intensity of the spectra following the MES over a period of 10 milliseconds in the frequency domain. Subsequently, the expected temporal delay in the appearance of each high-intensity signal between the two sample volumes is calculated from the particle velocity measured in the proximal sample volume and the distance between the two sample volumes. Signals appearing at the two monitored depths with a temporal delay ranging between 25% and 250% of this calculated value are accepted as MES, while the remaining signals are rejected as artifacts. All high-intensity signals recognized by the software are listed with a subheading displaying either the signal strength in decibels, if classified as MES, or “xx,” if classified as an artifact. The examiner was asked to note the number of MES and artifacts correctly identified as such by the automated software, the number of MES incorrectly rejected as artifacts, the number of MES that the software failed to identify, and the number of artifacts incorrectly classified as MES. This evaluation was performed separately for each patient. The operative procedure was divided into the following stages: (1) chest opening to aortic cannulation, (2) aortic cannulation and inception of bypass, (3) cardiopulmonary bypass, and (4) termination of bypass until skin closure; the above counts were evaluated for each stage.
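The delay-window rule described above can be sketched as follows. This is an illustrative reconstruction of the principle only, not the TCD-8 implementation; the function names, units, and argument conventions are assumptions.

```python
def expected_delay_ms(velocity_cm_s: float, gate_distance_mm: float) -> float:
    """Expected transit time of a particle between the two sample
    volumes, in milliseconds, from the proximal-gate velocity."""
    distance_cm = gate_distance_mm / 10.0
    return distance_cm / velocity_cm_s * 1000.0


def classify_signal(measured_delay_ms: float,
                    velocity_cm_s: float,
                    gate_distance_mm: float) -> str:
    """Accept a high-intensity signal as an MES if its measured
    inter-gate delay lies between 25% and 250% of the delay
    predicted from the proximal velocity; otherwise reject it
    as an artifact."""
    expected = expected_delay_ms(velocity_cm_s, gate_distance_mm)
    if 0.25 * expected <= measured_delay_ms <= 2.50 * expected:
        return "MES"
    return "artifact"
```

For a typical flow velocity of 50 cm/s and a 6-mm gate separation, the expected delay is 12 milliseconds, so the acceptance window spans 3 to 30 milliseconds; an artifact appearing nearly simultaneously in both gates (delay close to 0) falls outside it and is rejected.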

The human observer was used as the gold standard in MES identification. The following identification criteria were used: characteristic sound, intensity increase ≥3 dB above background, short duration (<300 milliseconds), and random appearance in the cardiac cycle according to a recent consensus.10 Bidirectional signals fulfilling the remaining criteria were not rejected. MES “showers” were not evaluated because quantification was not feasible in these cases.

The sensitivity and specificity of the applied software were calculated (number of MES detected/total number of MES and number of artifact signals rejected/total number of artifact signals, respectively) for each patient and each operative stage, with the human observer used as the gold standard. Additionally, Cohen’s κ was evaluated by comparing the results of the human observer with those obtained by the automated software.11 These values range between −1 (complete disagreement) and 1 (complete agreement), with 0 reflecting no relation between the evaluations of the two observers. Values between 0.4 and 0.75 indicate acceptable to good agreement, and those >0.75 indicate excellent agreement.11 Normally distributed data are expressed as mean±SE and nonnormally distributed data as median and 95% CI. The Mann-Whitney test was used for comparison of nonnormally distributed data. Significance was declared at the P<.05 level.
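These statistics can be written out as a minimal sketch, assuming a 2×2 agreement table with the human observer as rows and the software as columns; the function names are illustrative, not part of any cited software.

```python
def sensitivity(mes_detected: int, total_mes: int) -> float:
    """Fraction of observer-confirmed MES accepted by the software."""
    return mes_detected / total_mes


def specificity(artifacts_rejected: int, total_artifacts: int) -> float:
    """Fraction of observer-confirmed artifacts rejected by the software."""
    return artifacts_rejected / total_artifacts


def cohens_kappa(tp: int, fn: int, fp: int, tn: int) -> float:
    """Cohen's kappa for two raters making a binary MES/artifact call.

    tp: both call MES; tn: both call artifact; fn and fp: the two
    kinds of disagreement (observer rows, software columns).
    """
    n = tp + fn + fp + tn
    observed = (tp + tn) / n
    # chance agreement from each rater's marginal proportions
    chance = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    return (observed - chance) / (1 - chance)
```

With the study’s overall counts, sensitivity(6018, 9411) gives roughly 0.64 and specificity(27879, 35522) roughly 0.785, matching the reported figures.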

Results

A total of 44 933 high-intensity transients, consisting of 9411 MES and 35 522 artifact signals, were recorded by the human observer. A total of 6018 MES were correctly identified as such by the software, 2498 were classified as artifacts, and 895 were not identified. Four hundred thirty-seven of the 2498 rejected MES (17.5%) appeared only in the proximal channel; 27 879 of the 35 522 artifact signals were correctly rejected. Thus, the overall sensitivity and specificity of the software were 64% and 78.5%, respectively. A total of 4481 artifact signals occurred in association with electrocautery (12.6% of the total artifact count) and were rejected in 75.5% of cases. The agreement (κ value) between the software and the human observer in the classification of MES and artifact signals was 0.72.

MES were only detected in three patients during stage 1; their counts (95% CI) in the remaining stages were 44 (58 to 77), 215 (57 to 400), and 100 (27 to 348) (stages 2, 3, and 4, respectively; total=486 [236 to 794]). No significant differences in software performance were noted among the operation stages (sensitivity, 65% [58% to 77%]; 66% [63% to 72%]; and 63% [57% to 68%], stages 2, 3, and 4, respectively; specificity, 79% [76% to 82%]; 88% [76% to 100%]; 87% [81% to 94%]; 85% [80% to 90%], stages 1 to 4, respectively; all P>.05, Mann-Whitney). Software performance in the individual patients ranged from 54% to 96% (sensitivity, 64% [61% to 69%], median and 95% CI) and 74% to 90% (specificity, 80% [77% to 82%], median and 95% CI).

Discussion

The advantage of the “bigated” approach is that it is based on a physical principle and can therefore unequivocally classify each individual signal. Three previous studies evaluated the applicability of this technique: Smith et al6 examined 138 MES and 170 artifact signals and reported a temporal delay of 11.04 milliseconds (95% CI, 6.24 to 16.41) for MES and 0.08 milliseconds (95% CI, −0.48 to 0.64) for artifacts. Molloy and Markus7 evaluated the use of this technique in both an in vitro flow model and in vivo studies and reported sensitivity values of 75.2% and 92.6% in prosthetic valve carriers and patients with carotid disease, respectively, and a specificity of 99% for both patient groups, while 100% accuracy was described in the in vitro model. Georgiadis et al8 reported similar results in an in vitro model and 98.1% and 98.8% sensitivity and specificity, respectively, in patient studies. Still, these reports examined the applicability of the method’s principle rather than the possibility of automated embolus detection with commercially available software based on that method. Droste et al9 recently evaluated the performance of an automated detection software on 10 prosthetic valve carriers and 12 normal control subjects. While the reported specificity and sensitivity were promising (59.9% and 74.3%, respectively), their results are weakened both by the patient group studied, since MES in patients with prosthetic valves are easier to discriminate because of their higher intensity,12 and by the low total number (267) of recorded MES. Additionally, artifact signals in this as well as in previous studies did not appear spontaneously but were directly produced by the examiner or the instructed control subject and are therefore not necessarily comparable to those arising in a real monitoring situation.

A higher sensitivity and a lower specificity are evident when our results are compared with those of Droste et al.9 This discrepancy could be explained by differences in specific MES characteristics between the examined groups or by the use of a more recent software version in the present study.

When MES identification is based solely on automated software, high-intensity signals must first be identified before they can be classified as MES or artifacts. Failure to recognize the intensity increase associated with MES accounted for 895 unidentified MES (9.5%) in this study. The sensitivity of the software could therefore be improved by lowering the detection threshold. We nevertheless found that the 9-dB threshold provides the best overall results in intraoperative monitoring (D.G., unpublished data, 1997), since lowering it results in an almost continuous registration of artifact signals, making evaluations such as that performed in the present study almost impossible. MES detected in only one channel (17.5% in our study) are most probably explained by the sample volume not covering the whole vessel or by the escape of a number of microemboli through perforating branches. The assumption that electrocautery artifacts cannot be rejected by the software is not supported by our results, since 75.5% of them were correctly rejected.

The recorded sensitivity and specificity are lower than the values described for the neural network approach.13 Still, it must be taken into account that our evaluation was performed intraoperatively. This setting is more difficult than monitoring patients with prosthetic valves or potential native embolic sources because of a higher prevalence of artifact signals, signal disturbances caused by probe dislocations, variations in blood flow during inception and termination of cardiopulmonary bypass, and finally the high intensity and (at least partially) temporal proximity of MES. Van Zuilen et al14 recently compared two automated detection software systems based on spectral analysis and a neural network with human observers and reported sensitivity values ranging between 44% and 70% for the software and 62% for the neural network, while κ values ranged between 0.18 and 0.93. The performance of the TCD-8 software compares satisfactorily with these results, particularly in view of the aforementioned additional difficulties associated with monitoring in an intraoperative setting.

In conclusion, our results suggest that MES detection with the use of automated software based on the bigated approach is feasible. Further improvement of its sensitivity would make it a valuable tool for intraoperative monitoring.