arXiv Analytics

arXiv:2208.10745 [eess.IV]

Retinal Structure Detection in OCTA Image via Voting-based Multi-task Learning

Jinkui Hao, Ting Shen, Xueli Zhu, Yonghuai Liu, Ardhendu Behera, Dan Zhang, Bang Chen, Jiang Liu, Jiong Zhang, Yitian Zhao

Published 2022-08-23 (Version 1)

Automated detection of retinal structures, such as retinal vessels (RV), the foveal avascular zone (FAZ), and retinal vascular junctions (RVJ), is of great importance for understanding eye diseases and supporting clinical decision-making. In this paper, we propose a novel Voting-based Adaptive Feature Fusion multi-task network (VAFF-Net) for the joint segmentation, detection, and classification of RV, FAZ, and RVJ in optical coherence tomography angiography (OCTA). A task-specific voting gate module is proposed to adaptively extract and fuse features for specific tasks at two levels: features at different spatial positions from a single encoder, and features from multiple encoders. In particular, since the complexity of the microvasculature in OCTA images makes the simultaneous precise localization of retinal vascular junctions and their classification into bifurcation/crossing challenging, we specifically design a task head that combines heatmap regression and grid classification. We take advantage of three different en face angiograms from various retinal layers, rather than following existing methods that use only a single en face angiogram. To facilitate further research, part of these datasets, together with the source code and evaluation benchmark, has been released for public access: https://github.com/iMED-Lab/VAFF-Net.
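The voting gate can be read as a learned, per-position weighting over features coming from multiple encoders. Below is a minimal PyTorch sketch of that idea; the 1x1-convolution voting design, the softmax weighting, and all names (VotingGate, vote) are illustrative assumptions, not the authors' implementation, which is available in the linked repository.

```python
import torch
import torch.nn as nn


class VotingGate(nn.Module):
    """Sketch of a task-specific voting gate: learn per-position weights over
    features from multiple encoders and fuse them by a weighted sum.
    (Assumed design for illustration; see the released code for the real one.)"""

    def __init__(self, in_channels: int, num_encoders: int):
        super().__init__()
        # A 1x1 conv maps the stacked encoder features to one "vote" map per encoder.
        self.vote = nn.Conv2d(in_channels * num_encoders, num_encoders, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: one [B, C, H, W] tensor per encoder (e.g. per en face angiogram).
        stacked = torch.cat(feats, dim=1)                    # [B, C*E, H, W]
        weights = torch.softmax(self.vote(stacked), dim=1)   # [B, E, H, W], sums to 1 over encoders
        # Weight each encoder's features at every spatial position, then sum.
        return sum(w.unsqueeze(1) * f
                   for w, f in zip(weights.unbind(dim=1), feats))


# Toy usage: fuse features from three encoders (one per en face angiogram).
feats = [torch.randn(2, 64, 96, 96) for _ in range(3)]
fused = VotingGate(in_channels=64, num_encoders=3)(feats)    # [2, 64, 96, 96]
```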
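Similarly, the junction task head pairs dense heatmap regression (for localization) with coarse grid classification (for the bifurcation/crossing decision). The sketch below is a hedged guess at such a head: the sigmoid heatmap branch, the 8x8 grid, and the three-class layout (bifurcation, crossing, background) are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class JunctionHead(nn.Module):
    """Sketch of a dual-branch junction head: dense heatmap regression for
    localization plus per-cell grid classification. The grid size and class
    count here are illustrative guesses, not the paper's specification."""

    def __init__(self, in_channels: int, grid: int = 8, num_classes: int = 3):
        super().__init__()
        self.heatmap = nn.Conv2d(in_channels, 1, kernel_size=1)   # per-pixel junction confidence
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(grid),                           # pool to a grid x grid map
            nn.Conv2d(in_channels, num_classes, kernel_size=1),   # per-cell class logits
        )

    def forward(self, x: torch.Tensor):
        return torch.sigmoid(self.heatmap(x)), self.classifier(x)


# Toy usage on fused features (e.g. the output of the voting gate above).
fused = torch.randn(2, 64, 96, 96)
heat, cls_logits = JunctionHead(in_channels=64)(fused)  # [2, 1, 96, 96], [2, 3, 8, 8]
```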

Related articles:

arXiv:2311.06009 [eess.IV] (Published 2023-11-10)
Polar-Net: A Clinical-Friendly Model for Alzheimer's Disease Detection in OCTA Images
Shouyue Liu et al.

arXiv:2107.10476 [eess.IV] (Published 2021-07-22)
A Deep Learning-based Quality Assessment and Segmentation System with a Large-scale Benchmark Dataset for Optical Coherence Tomographic Angiography Image
Yufei Wang et al.