Retinal vessel analysis plays a crucial role in the detection and management of various systemic and ocular diseases, such as diabetic retinopathy, hypertension, and cardiovascular disorders. Precise segmentation of retinal vessels from fundus images enables clinicians to analyze vessel morphology, which can reveal disease progression or underlying conditions. Over recent years, deep learning methods have significantly advanced retinal vessel segmentation.
However, the performance of deep learning models depends heavily on large, well-annotated datasets. In medical imaging, and particularly in retinal vessel segmentation, constructing such datasets is impractical: manually annotating vessel structures is labor-intensive, time-consuming, and requires expert knowledge. Furthermore, fundus images from different sources exhibit distinct visual appearances owing to variations in imaging devices, protocols, and patient characteristics. A domain gap therefore arises: a model trained on one dataset struggles to generalize effectively to another. These discrepancies can significantly degrade the performance of deep learning models applied to data from a different domain, limiting their utility in real-world clinical scenarios.
To address these challenges, Fei Guo et al. constructed a retinal vessel segmentation dataset, termed RetinaDA, that deliberately incorporates domain gaps. This work is published in Frontiers of Computer Science by Higher Education Press and Springer Nature. RetinaDA comprises fundus images from various imaging devices, protocols, and clinical environments, enabling models to learn robust features that generalize across domains and thereby improving retinal vessel segmentation performance in diverse clinical contexts.
The proposed dataset is constructed from six widely recognized public datasets. First, the authors randomly crop microvascular regions surrounding the macula, avoiding interference from unrelated structures such as the optic disc and keeping the focus on key vascular features. Second, they apply random augmentations, including horizontal and vertical flips and rotations within a 360° range, to introduce variability and improve model robustness. Finally, all patches are resized to a uniform resolution of 512 × 512 pixels to ensure consistency across the dataset.
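The flip-rotate-resize pipeline described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' code: the random macular crop is replaced by a placeholder array, rotation is simplified to 90° multiples (the paper samples the full 360° range), and the resize uses a naive nearest-neighbour scheme in place of a proper library resampler.

```python
import numpy as np

def random_augment(patch, rng):
    """Random flips and rotation, as in the RetinaDA construction.
    Sketch only: rotation is limited here to 90-degree multiples."""
    if rng.random() < 0.5:
        patch = np.flip(patch, axis=1)  # horizontal flip
    if rng.random() < 0.5:
        patch = np.flip(patch, axis=0)  # vertical flip
    return np.rot90(patch, k=rng.integers(0, 4))

def resize_nearest(patch, size=512):
    """Nearest-neighbour resize to size x size (stand-in for a
    proper resampler such as cv2.resize or PIL.Image.resize)."""
    h, w = patch.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return patch[rows][:, cols]

rng = np.random.default_rng(0)
crop = rng.random((300, 300, 3))  # hypothetical cropped macular patch
out = resize_nearest(random_augment(crop, rng))
print(out.shape)  # (512, 512, 3)
```

The same augmentation would be applied identically to the vessel mask of each patch so that image and annotation stay aligned.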
DOI: 10.1007/s11704-024-41114-1