An explainability framework for cortical surface-based deep learning
The emergence of explainability methods has enabled a better understanding of how deep neural networks operate, expressed in concepts that are easily understood and applied by the end user. While most explainability methods have been designed for traditional deep learning, some have been further developed for geometric deep learning, in which data are predominantly represented as graphs. These representations are regularly derived from medical imaging data, particularly in neuroimaging, where graphs are used to represent brain structural and functional wiring patterns (brain connectomes) and cortical surface models are used to represent the anatomical structure of the brain. Although explainability techniques have been developed for identifying important vertices (brain areas) and features in graph classification, such methods are still lacking for more complex tasks, such as surface-based modality transfer (or vertex-wise regression). Here, we address the need for surface-based explainability approaches by developing a framework for cortical surface-based deep learning, providing a transparent system for modality transfer tasks. First, we adapted a perturbation-based approach for use with surface data. Then, we applied this perturbation-based method to investigate the key features and vertices used by a geometric deep learning model developed to predict brain function from anatomy directly on a cortical surface model. We show that our explainability framework is not only able to identify important features and their spatial location but is also reliable and valid.
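To make the perturbation-based idea concrete, the sketch below illustrates one common occlusion-style variant: replace a feature channel (or a patch of vertices) with a reference value, re-run the model, and score importance by how much the prediction changes. This is a minimal illustration under stated assumptions, not the authors' exact procedure; the callable `model`, the occlusion-by-mean strategy, the `neighborhoods` patch structure, and all names are hypothetical.

```python
# Minimal sketch of perturbation-based importance for a surface-based model.
# Assumes a callable `model` mapping a (num_vertices, num_features) array of
# anatomical features to a (num_vertices,) prediction of functional values.
import numpy as np

def feature_importance(model, features):
    """Score each input feature by the change in prediction when that
    feature channel is occluded (replaced by its spatial mean)."""
    baseline = model(features)
    scores = np.zeros(features.shape[1])
    for f in range(features.shape[1]):
        perturbed = features.copy()
        perturbed[:, f] = features[:, f].mean()   # occlude one channel
        scores[f] = np.abs(model(perturbed) - baseline).mean()
    return scores

def vertex_importance(model, features, neighborhoods):
    """Score each vertex by occluding a small surface patch around it.
    `neighborhoods[v]` lists the vertex indices in the patch centred on v."""
    baseline = model(features)
    scores = np.zeros(features.shape[0])
    for v, patch in enumerate(neighborhoods):
        perturbed = features.copy()
        perturbed[patch] = features.mean(axis=0)  # occlude the whole patch
        scores[v] = np.abs(model(perturbed) - baseline).mean()
    return scores

if __name__ == "__main__":
    # Toy example: 100 vertices, 4 anatomical features, linear stand-in model.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    w = np.array([2.0, 0.0, 0.5, 0.0])
    toy_model = lambda feats: feats @ w
    print(feature_importance(toy_model, X))       # largest score for feature 0
```

In this toy setting the first feature dominates the prediction, so its occlusion produces the largest change; applied to a trained surface model, the same logic yields per-feature and per-vertex importance maps on the cortical surface.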