Multimodal Fusion: Exploiting Links among Enriched Types of Imaging, Cognitive and Behavioural Information for Individuals
Jing Sui, Ph.D., Assistant Professor of Translational Neuroscience, The Mind Research Network
Presented: September 21, 2016
ABSTRACT: Recent years have witnessed rapid growth in multimodal approaches to brain imaging. It is becoming increasingly clear that, by exploiting the rich joint information that exists across modalities, multimodal fusion can provide more information about individual subjects than an analysis of each modality alone. An increasing number of studies are using multimodal data in the study of mental illness. However, the number of studies that perform true multimodal fusion (i.e., capitalizing on joint information among modalities) remains remarkably small given the known benefits. In part, this is because multimodal studies demand considerably more expertise in collecting and analyzing the data and interpreting the results than unimodal studies do. Moreover, to take full advantage of the joint information in multimodal data, we need efficient methods that can draw valid conclusions from high-dimensional data with only a limited number of samples. In this talk, we start by introducing the basic reasons why multimodal data fusion is important, what it can do, and how it can help compensate for the limitations of any single brain imaging modality. We then give two examples of using multimodal information: 1) to guide the detection of potential cognitive neuromarkers, and 2) to realize individualized prediction of clinical measures in practice, which may provide more clues by exploiting links among enriched types of imaging, cognitive and behavioural information for individuals.
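To illustrate the kind of analysis the abstract describes, the sketch below shows one simple (hypothetical) form of multimodal fusion for individualized prediction: features from two imaging modalities are standardized, concatenated, and fed to a ridge-regularized regression of a clinical score, where the regularization addresses the high-dimension, few-samples problem the abstract mentions. This is not the speaker's method; the modality names and all data are synthetic and purely illustrative.

```python
# Hypothetical sketch of multimodal fusion for individualized prediction.
# Two synthetic "modalities" share a latent factor that also drives a
# clinical score, mimicking the joint information fusion methods exploit.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 60
X_smri = rng.standard_normal((n_subjects, 200))  # e.g., grey-matter features
X_fmri = rng.standard_normal((n_subjects, 300))  # e.g., connectivity features

# Shared latent factor links both modalities to the clinical measure.
latent = rng.standard_normal(n_subjects)
X_smri[:, 0] += latent
X_fmri[:, 0] += latent
y = latent + 0.1 * rng.standard_normal(n_subjects)  # synthetic clinical score

def zscore(X):
    """Standardize each feature column to zero mean, unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Naive fusion by concatenation, then ridge regression: with far more
# features (500) than subjects (60), the ridge penalty lam keeps the
# least-squares solution well-posed.
X = np.hstack([zscore(X_smri), zscore(X_fmri)])
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
y_hat = X @ w

corr = np.corrcoef(y, y_hat)[0, 1]
print(f"in-sample correlation between predicted and true score: {corr:.2f}")
```

True fusion methods (e.g., joint decompositions across modalities) go beyond this concatenation baseline by modeling the shared structure explicitly, but the sketch captures the basic pipeline: combine modalities, regularize, predict per subject.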