Thursday, May 7, 2020

Automated ASPECT scoring in acute ischemic stroke: comparison of three software tools

Abstract



Purpose

Various software applications offer support in the diagnosis of acute ischemic stroke (AIS), yet it remains unclear whether their performance is comparable. Our study aimed to evaluate three fully automated software applications for Alberta Stroke Program Early CT Score (ASPECTS) assessment (Syngo.via Frontier ASPECT Score Prototype V2, Brainomix e-ASPECTS® and RAPID ASPECTS) in AIS patients.




Methods

Retrospectively, 131 patients with large vessel occlusion (LVO) of the middle cerebral artery or the internal carotid artery, who underwent endovascular therapy (EVT), were included. Pre-interventional non-enhanced CT (NECT) datasets were assessed in random order using the automated ASPECT software and by three experienced neuroradiologists in consensus. The intraclass correlation coefficient (ICC), Bland-Altman analysis, and receiver operating characteristic (ROC) analysis were applied for statistical evaluation.
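
As a rough illustration of the agreement statistics named above, the following Python sketch computes an intraclass correlation and Bland-Altman limits of agreement for a pair of hypothetical ASPECTS readings. The simulated data, variable names, and the pandas/pingouin tooling are assumptions for illustration only, not the study's actual analysis pipeline.

```python
# Hedged sketch: agreement statistics for two hypothetical ASPECTS readings.
# The data below are simulated; only the general techniques (ICC, Bland-Altman)
# mirror what the Methods describe.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 131  # number of patients in the study

# Simulated ASPECTS (0-10): expert consensus vs. one software tool
expert = rng.integers(5, 11, size=n)
software = np.clip(expert + rng.integers(-2, 3, size=n), 0, 10)

# Intraclass correlation expects a long table of (target, rater, rating)
long = pd.DataFrame({
    "patient": np.tile(np.arange(n), 2),
    "rater": ["expert"] * n + ["software"] * n,
    "aspects": np.concatenate([expert, software]),
})
icc = pg.intraclass_corr(data=long, targets="patient",
                         raters="rater", ratings="aspects")
print(icc[["Type", "ICC", "CI95%"]])

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = software.astype(float) - expert.astype(float)
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, LoA = ({bias - loa:.2f}, {bias + loa:.2f})")
```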




Results

The median ASPECTS of the expert consensus reading was 8 (7–10). The highest correlation was found between the expert read and Brainomix (r = 0.871 (0.818, 0.909), p < 0.001). Correlations between the expert read and Frontier V2 (r = 0.801 (0.719, 0.859), p < 0.001) and between the expert read and RAPID (r = 0.777 (0.568, 0.871), p < 0.001) were likewise high. There was a high correlation among the software tools (Frontier V2 and Brainomix: r = 0.830 (0.760, 0.880), p < 0.001; Frontier V2 and RAPID: r = 0.847 (0.693, 0.913), p < 0.001; Brainomix and RAPID: r = 0.835 (0.512, 0.923), p < 0.001). An ROC curve analysis revealed comparable accuracy of the applications relative to the expert consensus reading (Brainomix: AUC = 0.759 (0.670–0.848), p < 0.001; Frontier V2: AUC = 0.752 (0.660–0.843), p < 0.001; RAPID: AUC = 0.734 (0.634–0.831), p < 0.001).
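
For context only, here is a minimal sketch of how such an ROC/AUC comparison could be set up with scikit-learn, assuming the expert read is dichotomized at an illustrative cutoff (here ASPECTS ≥ 6, which is not stated in the abstract) and the software score serves as the continuous rating. The simulated scores and the cutoff are assumptions, not the study's definitions.

```python
# Hedged sketch of an ROC/AUC computation; the cutoff (ASPECTS >= 6) and the
# simulated scores are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 131
expert = rng.integers(5, 11, size=n)
software = np.clip(expert + rng.integers(-2, 3, size=n), 0, 10)

reference = (expert >= 6).astype(int)  # dichotomized expert consensus
auc = roc_auc_score(reference, software)
print(f"AUC vs. expert consensus: {auc:.3f}")
```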




Conclusion

Overall, current automated ASPECT scoring tools show convincing, though still improvable, agreement with expert evaluation of ASPECTS in AIS.
