EL-AURIAN: Enhanced Learning Algorithms for Ultra-High-Resolution Neurovascular Imaging and Analysis
The human brain receives nutrients and oxygen through an intricate network of blood vessels. Pathology affecting small vessels at the mesoscopic scale represents a critical vulnerability within the cerebral blood supply and can lead to severe conditions such as Cerebral Small Vessel Disease (CSVD). The advent of 7 Tesla MRI systems has enabled the acquisition of images at higher spatial resolution, making it possible to visualise such vessels in the brain. However, the lack of publicly available annotated datasets has impeded the development of robust, machine-learning-driven segmentation algorithms.
EL-AURIAN is a vessel segmentation project led by Dr Soumick Chatterjee, in collaboration with Dr Hendrik Mattern, Prof. Andreas Nürnberger, and Prof. Oliver Speck. Beginning with the DS6 study, the project seeks to improve vessel segmentation in ultra-high-resolution 7T ToF-MRAs through the application of deep learning methods. Thus far, techniques employing four distinct learning paradigms (supervised, semi-supervised, weakly supervised, and unsupervised) have been developed as part of this project. Furthermore, the project has created a benchmark dataset, SMILE-UHURA, to facilitate progress in this domain and support further advancements.
DS6, Deformation-Aware Semi-Supervised Learning: Application to Small Vessel Segmentation with Noisy Training Data
The first paper from this project proposes a deep learning architecture to automatically segment small vessels in 7 Tesla 3D Time-of-Flight (ToF) Magnetic Resonance Angiography (MRA) data. The algorithm was trained and evaluated on a small, imperfect, semi-automatically segmented dataset of only 11 subjects: six for training, two for validation, and three for testing. The deep learning model, based on a U-Net with Multi-Scale Supervision (U-Net MSS), was trained on the training subset and made equivariant to elastic deformations in a self-supervised manner using deformation-aware learning, in order to improve its generalisation performance. The proposed technique was evaluated quantitatively and qualitatively on the test set and achieved a Dice score of 80.44 ± 0.83. Furthermore, the result of the proposed method was compared against a selected manually segmented region (resulting in a Dice of 62.07) and showed a considerable improvement (18.98%) with deformation-aware learning.
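The core idea of deformation-aware learning can be read as an equivariance constraint: deforming the input and then segmenting should agree with segmenting first and then deforming the output. The NumPy sketch below illustrates only this constraint, not the DS6 implementation — a circular shift stands in for a random elastic deformation, a pointwise soft threshold stands in for the U-Net MSS, and the function names (`model`, `deform`, `deformation_consistency_loss`) are illustrative assumptions.

```python
import numpy as np

def model(volume):
    """Stand-in 'segmentation network': a pointwise soft threshold.
    (Placeholder for the U-Net MSS used in DS6.)"""
    return 1.0 / (1.0 + np.exp(-(volume - 0.5) * 10.0))

def deform(volume, shift=3):
    """Stand-in deformation: a circular shift along one axis.
    (DS6 uses random elastic deformations instead.)"""
    return np.roll(volume, shift, axis=0)

def deformation_consistency_loss(volume, shift=3):
    """Penalise the gap between segment-then-deform and deform-then-
    segment, encouraging the model to be equivariant to the deformation."""
    seg_then_deform = deform(model(volume), shift)
    deform_then_seg = model(deform(volume, shift))
    return float(np.mean((seg_then_deform - deform_then_seg) ** 2))

# A pointwise model commutes with a shift, so here the loss is zero;
# a real network is not pointwise, and minimising this term pushes it
# towards deformation equivariance.
rng = np.random.default_rng(0)
toy_volume = rng.random((16, 16, 16))
print(deformation_consistency_loss(toy_volume))  # → 0.0
```

In training, such a consistency term is added to the supervised segmentation loss, letting unlabelled (or noisily labelled) deformed copies of the data contribute a learning signal.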
SMILE-UHURA Challenge: Small Vessel Segmentation at MesoscopIc ScaLE from Ultra-High ResolUtion 7T Magnetic Resonance Angiograms
To address the complexities of mesoscopic vessel segmentation, and to highlight the need for advanced techniques that can manage the high noise levels and poor vessel-to-background contrast inherent in ultra-high-resolution data, the SMILE-UHURA challenge was organised. This challenge, held in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2023 in Cartagena de Indias, Colombia (and virtually), aimed to provide a platform for researchers working on related topics. The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI, created through a combination of automated pre-segmentation and extensive manual refinement. In this manuscript, sixteen submitted methods and two baseline methods are compared both quantitatively and qualitatively on two different datasets: held-out test MRAs from the same dataset as the training data (with labels kept secret) and a separate 7T ToF MRA dataset where both input volumes and labels are kept secret. The results demonstrate that most of the submitted deep learning methods, trained on the provided training dataset, achieved reliable segmentation performance. Dice scores reached up to 0.838 ± 0.066 and 0.716 ± 0.125 on the respective datasets, with an average performance of up to 0.804 ± 0.15. The SMILE-UHURA dataset remains publicly available to facilitate the training of new machine learning models and to provide a benchmarking platform for researchers.
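The Dice scores reported above measure the overlap between a predicted and a reference mask as 2|A∩B| / (|A| + |B|). A minimal NumPy version of this metric is shown below — an illustrative sketch, not the challenge's official evaluation script.

```python
import numpy as np

def dice_score(pred, label, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    label = label.astype(bool)
    intersection = np.logical_and(pred, label).sum()
    return 2.0 * intersection / (pred.sum() + label.sum() + eps)

# Two 8-voxel cubes overlapping in 4 voxels: Dice = 2*4 / (8+8) = 0.5
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True
b[2:4, 1:3, 1:3] = True
print(round(dice_score(a, b), 3))  # → 0.5
```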
SPOCKMIP: Segmentation of Vessels in MRAs with Enhanced Continuity using Maximum Intensity Projection as Loss
Semi-supervised patch-based approaches, such as DS6, have been effective in identifying small vessels of one to two voxels in diameter. This study focuses on improving segmentation quality by considering the spatial correlation of features, using the Maximum Intensity Projection (MIP) as an additional loss criterion. Two methods are proposed, incorporating MIPs of the label segmentation along either a single axis (the z-axis) or multiple perceivable axes of the 3D volume. In this study, a U-Net MSS with ReLU activations replaced by LeakyReLU is trained on the StudyForrest dataset, which comprises 18 7-Tesla 3D Time-of-Flight (ToF) Magnetic Resonance Angiography (MRA) volumes. Patch-based training is improved by introducing an additional loss term, the MIP loss, to penalise predicted discontinuities in the vessels. A training set of 14 volumes is selected and used for five-fold cross-validation; the generalisation performance of the method is then evaluated on the remaining unseen volumes of the dataset. The proposed method with multi-axes MIP loss produces better-quality segmentations, with a median Dice of 80.245 ± 0.129, while the single-axis MIP loss yields a median Dice of 79.749 ± 0.109. Furthermore, a visual comparison of ROIs in the predicted segmentations reveals a significant improvement in vessel continuity when the MIP loss is incorporated into training.
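The MIP loss described above can be sketched as follows: the soft prediction and the label are each collapsed by taking the voxel-wise maximum along one axis (or along each of the three axes, in the multi-axes variant), and a Dice-style loss is computed between the resulting projections. The NumPy sketch below is illustrative, not the SPOCKMIP implementation; the function names and the exact loss formulation are assumptions.

```python
import numpy as np

def soft_dice_loss(p, g, eps=1e-8):
    """1 - soft Dice between a soft prediction p and a label g."""
    intersection = (p * g).sum()
    return 1.0 - (2.0 * intersection + eps) / (p.sum() + g.sum() + eps)

def mip_loss(pred, label, axes=(0, 1, 2)):
    """Average Dice-style loss between maximum intensity projections of
    the prediction and the label. axes=(2,) gives the single-axis (z)
    variant; axes=(0, 1, 2) the multi-axes variant."""
    losses = [soft_dice_loss(pred.max(axis=ax), label.max(axis=ax))
              for ax in axes]
    return float(np.mean(losses))

# A gap in a predicted vessel survives into the projections seen from
# the side, so the MIP loss penalises the discontinuity:
label = np.zeros((8, 8, 8))
label[4, 4, :] = 1.0                 # continuous vessel along z
pred = label.copy()
pred[4, 4, 3:5] = 0.0                # same vessel with a 2-voxel gap
print(mip_loss(pred, label) > mip_loss(label, label))  # → True
```

Because the maximum is taken over an entire axis, a single missed voxel inside an otherwise intact vessel changes the projection, giving the network a direct gradient signal for continuity that a purely voxel-wise loss dilutes.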
DILITHIUM: Deep learning with less-to-no supervision for the segmentation of vessels in high-resolution 7T MRAs using unsupervised and weakly-supervised learning
This work explores the possibility of using unsupervised and weakly-supervised learning techniques for vessel segmentation. Further details and the preprint will be available soon!