Department of Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
Copyright © 2023 Korean Society of Gastrointestinal Endoscopy
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Conflicts of Interest
The authors have no potential conflicts of interest.
Funding
None.
Author Contributions
Conceptualization: MM, HM, AE; Data curation: MM, HM, AE; Formal analysis: MM, HM, AE; Investigation: MM, HM, AE; Methodology: MM, HM, AE; Project administration: MM, HM, AE; Resources: MM, HM, AE; Software: MM, HM, AE; Supervision: MM, HM, AE; Validation: MM, HM, AE; Visualization: MM, HM, AE; Writing–original draft: MM, HM, AE; Writing–review & editing: MM, HM, AE.
Study | Aim of the study | Data for development | Experimental design | Performance |
---|---|---|---|---|
van der Sommen et al. (2016)19 | AI stand-alone performance in the detection of early neoplasia in BE | – | 100 HD-WLE images from 44 patients | Per-image sensitivity and specificity of 83% |
de Groof et al. (2020)20 | Evaluation of AI stand-alone performance and comparison with nonexpert endoscopists | Pretraining: 494,364 images from all intestinal segments<br>Training: 1,544 BE and BERN HD-WLE images of 509 patients | 1. Validation-only dataset: 80 HD-WLE images of 80 patients<br>2. Dataset for validation and comparison with 53 nonexpert endoscopists: 80 HD-WLE images of 80 patients | 1. CADx: sensitivity, 90%; specificity, 88%; accuracy, 89%; CADe: optimal biopsy spot in 97%<br>2. CADx: sensitivity, 93%; specificity, 83%; accuracy, 88%; CADe: optimal biopsy spot in 92%<br>Endoscopists: sensitivity, 72%; specificity, 74%; accuracy, 73% |
de Groof et al. (2020)21 | Detection of BERN during real-life endoscopic examination | Training: 1,544 BE and BERN HD-WLE images of 509 patients | Evaluation of 144 HD-WLE images of 10 patients with BERN and 10 with BE | Sensitivity, 76%; specificity, 86%; accuracy, 84% |
Hashimoto et al. (2020)22 | Evaluation of AI stand-alone performance in BERN detection | Total: 916 images of 65 patients with BERN, 919 images of 30 patients with nondysplastic BE<br>Training: 691 BERN and 686 BE images | Evaluation of 225 BERN and 233 BE images (HD-WLE and NBI) | Sensitivity, 96.4%; specificity, 94.2%; accuracy, 95.4%<br>Mean average precision, 0.75; IOU, 0.3 |
Iwagami et al. (2021)23 | AI stand-alone performance in the detection of adenocarcinoma at the EGJ and comparison with 15 experts | Training: 1,172 images from 166 EGJ cancer cases, 2,271 images of normal EGJ mucosa | Evaluation of 232 HD-WLE images from 79 EGJ cancer and non-cancer cases<br>Comparison with 15 experts | AI stand-alone: sensitivity, 94%; specificity, 42%; accuracy, 66%<br>Experts: sensitivity, 88%; specificity, 43%; accuracy, 63% |
Struyvenberg et al. (2021)25 | AI stand-alone performance in differentiating BE from BERN on near-focus videos | Pretraining: 494,364 images from all intestinal segments<br>Training: 557 BE and 690 BERN HD-WLE overview images, 71 BE and 112 BERN near-focus NBI images | Internal validation: 71 BE and 112 BERN near-focus NBI images<br>External validation: 59 BERN and 98 BE near-focus NBI videos | Internal validation: sensitivity, 88%; specificity, 78%; accuracy, 84%<br>External validation: sensitivity, 85%; specificity, 83%; accuracy, 83% |
Hussein et al. (2022)26 | AI stand-alone performance in the classification and localization of BE and BERN | For classification: 148,936 training frames of 31 BERN, 31 BE, and 2 normal esophagus cases; 25,161 validation frames of 6 BERN and 5 BE cases<br>For segmentation: 94 training images of 30 BERN cases; 12 validation images of 6 BERN cases | Classification: 264 i-scan images of 28 BERN and 16 BE patients<br>Segmentation: 86 i-scan images of 28 BERN patients | Sensitivity, 91%; specificity, 79%; Dice score, 50% (against one expert) |
Ebigbo et al. (2019)27 | AI stand-alone performance in the detection of BE and BERN | – | MICCAI data: 100 HD-WLE images of 39 BE and BERN cases<br>Augsburg data: 148 HD-WLE/NBI images of 74 BE and BERN cases<br>Comparison with expert segmentation | MICCAI data (HD-WLE images only): sensitivity, 92%; specificity, 100%; Dice coefficient, 0.56<br>Augsburg data (HD-WLE/NBI): sensitivity, 97%/94%; specificity, 88%/80%; Dice coefficient, 0.72 |
Ebigbo et al. (2020)28 | Detection of BERN during real-life endoscopic examination | Training: 129 images of 129 cases of BE and BERN | Validation of the AI system under real-life examination conditions in 14 patients<br>Real-time evaluation of 36 extracted BERN and 26 BE images | Sensitivity, 83.7%; specificity, 100%; accuracy, 89.9% |
Ebigbo et al. (2021)30 | AI-based prediction of submucosal invasion of BERN; comparison with expert endoscopists | HD-WLE images of pT1a (n=108) and pT1b (n=122) BERN | Differentiation between pT1a and pT1b BERN<br>Comparison with 5 experts | AI stand-alone: sensitivity, 77%; specificity, 64%; accuracy, 71%<br>Experts: sensitivity, 63%; specificity, 78%; accuracy, 70% |
Struyvenberg et al. (2021)35 | AI-aided detection of BERN during VLE | Training: 22 patients with 134 BE and 38 BERN targets | Validation set: 95 BE and 51 BERN targets of 25 patients<br>Comparison with 10 VLE experts | AI stand-alone: sensitivity, 91%; specificity, 82%; accuracy, 85%<br>Experts: sensitivity, 70%; specificity, 81%; accuracy, 77% |
Waterhouse et al. (2021)36 | AI-aided differentiation of BE from BERN during spectral endoscopy | Training: 572 spectra<br>Test set: 143 spectra | Differentiation of BE from BERN during spectral endoscopy | Sensitivity, 83.7%; specificity, 85.5%; accuracy, 84.8% |
AI, artificial intelligence; BE, non-dysplastic Barrett’s esophagus; HD-WLE, high-definition white light endoscopy; BERN, Barrett’s esophagus-related neoplasia; CADx, computer-aided diagnosis; CADe, computer-aided detection; IOU, intersection over union; EGJ, esophagogastric junction; NBI, narrow band imaging; MICCAI, Medical Image Computing and Computer Assisted Interventions Society; VLE, volumetric laser endomicroscopy.
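For readers less familiar with the reported metrics: sensitivity, specificity, and accuracy are confusion-matrix statistics over images classified as neoplastic (BERN, positive) versus non-dysplastic (BE, negative), while the Dice score and IOU measure spatial overlap between a predicted segmentation and an expert delineation. The sketch below is purely illustrative; the function names and the pixel-set representation of masks are ours, not the implementation of any study cited above.

```python
# Minimal, illustrative definitions of the metrics reported in the table.
# tp/fp/tn/fn are counts from classifying images as BERN (positive)
# versus non-dysplastic BE (negative).

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of neoplastic images correctly flagged: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-dysplastic images correctly cleared: TN / (TN + FP)."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Overall fraction of correct classifications."""
    return (tp + tn) / (tp + tn + fp + fn)

def dice(pred: set, truth: set) -> float:
    """Dice score between predicted and expert masks (as pixel sets):
    2*|A & B| / (|A| + |B|)."""
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def iou(pred: set, truth: set) -> float:
    """Intersection over union (Jaccard index) of two pixel sets."""
    return len(pred & truth) / len(pred | truth)

# Hypothetical example: 36 of 40 neoplastic images flagged -> sensitivity 0.9.
assert sensitivity(tp=36, fn=4) == 0.9
```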
Study | Application |
---|---|
Pan et al. (2021)38 | Automatic AI-aided identification of the squamous–columnar junction and gastroesophageal junction |
Ali et al. (2021)39 | Automatic AI-aided determination of BE extension |
Wu et al. (2019)42 | WISENSE: automatic time measurement, recording of images, and detection of blind spots |
AI, artificial intelligence; BE, Barrett's esophagus.
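Of these applications, the system by Ali et al. quantifies the extent of the Barrett's segment, which endoscopists conventionally report with the Prague C&M criteria: C is the length of the circumferential segment and M the maximal extent of any tongue, both measured proximally from the gastroesophageal junction. The following is a minimal sketch of that arithmetic, assuming a landmark detector that outputs insertion depths in cm from the incisors; the `Landmarks` fields and the `prague_cm` function are hypothetical, not the interface of the cited system.

```python
from dataclasses import dataclass

@dataclass
class Landmarks:
    """Hypothetical output of an AI landmark detector:
    insertion depths in cm from the incisors."""
    gej: float                  # gastroesophageal junction
    circumferential_top: float  # proximal margin of circumferential BE
    max_tongue_top: float       # most proximal tip of any BE tongue

def prague_cm(lm: Landmarks) -> tuple[float, float]:
    """Prague C&M values: both extents are measured upward (proximally)
    from the GEJ, so a smaller insertion depth means a longer segment."""
    c = max(0.0, lm.gej - lm.circumferential_top)
    m = max(0.0, lm.gej - lm.max_tongue_top)
    return c, m

# GEJ at 38 cm, circumferential BE up to 35 cm, longest tongue up to 32 cm
# -> (3.0, 6.0), reported as C3M6.
print(prague_cm(Landmarks(gej=38.0, circumferential_top=35.0, max_tongue_top=32.0)))
```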