Blending unsupervised and supervised mimic learning for discovering interpretable phenotypes in 3D imaging volumes
Abstract
The ability to differentiate between subtypes of a disease is essential for informed clinical decision-making. However, when only partial knowledge of a disease exists and its subtypes are unknown, traditional supervised classification approaches are not applicable; unsupervised methods are needed for subtype discovery. Previous work on imaging-based subtype discovery has been limited, particularly for 3D modalities such as MRI and CT, and the discovered phenotypes often lack interpretability. This thesis proposes a novel data-driven method for discovering interpretable imaging phenotypes in 3D image volumes, demonstrated on CT scans. Our method combines unsupervised learning for phenotype discovery with supervised mimic learning for interpretability: unsupervised 3D autoencoders discover the phenotypes, and a supervised mimic model is trained to explain the unsupervised pipeline. Notably, this is the first application of supervised mimic learning to interpret an unsupervised model. Additionally, we introduce and formulate VolPAM, a technique enabling 3D interpretation of the discovered phenotypes. The method is applicable to various medical imaging modalities and holds the potential to advance our understanding of the corresponding diseases and conditions.
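The pipeline described above can be sketched in miniature. The sketch below is illustrative only: it stands in for the thesis's components with deliberately simple pure-Python stand-ins (average pooling instead of a learned 3D autoencoder bottleneck, plain k-means for phenotype discovery, and a one-feature decision stump as the supervised mimic model). All names, the synthetic data, and the pooling encoder are assumptions for demonstration, not the actual architecture.

```python
import random

def encode(volume, pool=2):
    """Compress a cubic 3D volume by average pooling, then flatten.
    Stand-in for the learned 3D autoencoder's latent code."""
    n = len(volume)
    code = []
    for i in range(0, n, pool):
        for j in range(0, n, pool):
            for k in range(0, n, pool):
                block = [volume[a][b][c]
                         for a in range(i, i + pool)
                         for b in range(j, j + pool)
                         for c in range(k, k + pool)]
                code.append(sum(block) / len(block))
    return code

def kmeans(codes, k=2, iters=20, seed=0):
    """Plain k-means on latent codes: unsupervised phenotype discovery."""
    rng = random.Random(seed)
    centers = rng.sample(codes, k)
    labels = [0] * len(codes)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda c: sum((x - y) ** 2
                                        for x, y in zip(code, centers[c])))
                  for code in codes]
        for c in range(k):
            members = [code for code, lab in zip(codes, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels

def fit_stump(codes, labels):
    """Supervised mimic learning: fit a one-feature threshold rule that
    reproduces the unsupervised cluster labels, yielding an interpretable
    explanation of the pipeline's decision."""
    best = None
    for f in range(len(codes[0])):
        for t in sorted(set(code[f] for code in codes)):
            pred = [1 if code[f] > t else 0 for code in codes]
            agree = sum(p == lab for p, lab in zip(pred, labels))
            # account for arbitrary cluster numbering (0/1 may be swapped)
            acc = max(agree, len(labels) - agree) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best  # (fidelity to cluster labels, feature index, threshold)

# Synthetic 4x4x4 "volumes" from two well-separated intensity groups.
rng = random.Random(1)
volumes = []
for i in range(10):
    base = 0.2 if i < 5 else 0.8
    volumes.append([[[base + rng.uniform(-0.05, 0.05) for _ in range(4)]
                     for _ in range(4)] for _ in range(4)])

codes = [encode(v) for v in volumes]          # unsupervised encoding
labels = kmeans(codes)                        # discovered "phenotypes"
fidelity, feat, thr = fit_stump(codes, labels)  # interpretable mimic rule
print(f"mimic rule: code[{feat}] > {thr:.2f}, fidelity {fidelity:.2f}")
```

With cleanly separated synthetic groups, the mimic stump recovers the clustering exactly, which is the essence of the approach: a simple supervised model explains what the unsupervised pipeline learned.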