Diffusion Imaging in Python (Dipy) is a free and open-source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white-matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography; and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations of all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting-edge reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics and produce informative visualizations, along with file-handling routines that assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not developed by a single research group. Rather, it is an open project that encourages contributions from any scientist or developer, through GitHub and open discussions on the project mailing list. Consequently, Dipy today has a still-growing international team of contributors, spanning seven academic institutions in five countries and three continents.
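To give a flavor of the classical techniques mentioned above, the diffusion tensor model can be fit to a single voxel by plain linear least squares on the log-signal. This is a hypothetical NumPy sketch of the textbook model with synthetic data, not Dipy's actual API; Dipy's own implementations are far more complete:

```python
import numpy as np

# Single-voxel diffusion tensor fit: log(S_i / S0) = -b * g_i^T D g_i,
# which is linear in the 6 unique elements of the symmetric tensor D.
rng = np.random.default_rng(0)
b = 1000.0                                        # b-value in s/mm^2
g = rng.normal(size=(30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)     # unit gradient directions

D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])        # prolate tensor (mm^2/s)
S0 = 100.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))  # noiseless signal

# Design matrix over the unique elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                     2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])
d, *_ = np.linalg.lstsq(-b * B, np.log(S / S0), rcond=None)
D_fit = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])

# Fractional anisotropy from the tensor eigenvalues
ev = np.linalg.eigvalsh(D_fit)
fa = np.sqrt(1.5 * np.sum((ev - ev.mean())**2) / np.sum(ev**2))
```

Statistics such as fractional anisotropy, computed here from the fitted eigenvalues, are among the per-voxel quantities Dipy derives from reconstructed models.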
Cortical activity is inherently complex and nonlinear. Yet common
approaches to assessing the significance of cortical activation in MEG
are univariate, neglecting the interdependence between sensors or
voxels. Pattern classification methods are in principle able to reveal
activity that would otherwise go unnoticed, and have been used in fMRI
and in sensor-space EEG/MEG. However, the high dimensionality of fMRI
and source-space MEG data poses a serious problem for classifiers. It is
commonly addressed with "searchlight" approaches, at the cost of
sacrificing the ability to reveal distributed spatial patterns. An
alternative is the elastic net with a high sparsity constraint. In this
talk I will present an application of this method to 5D MEG data (3D
spatial plus time and frequency), revealing both well-known and
previously unreported areas engaged in differentiating images of faces
and bodies.
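To make the approach concrete, here is a minimal sketch (with synthetic data and made-up dimensions, not the talk's actual pipeline) of an elastic-net penalised linear model solved by proximal gradient descent; the soft-thresholding step is what produces the sparse spatial pattern:

```python
import numpy as np

# Objective: 0.5*||Xw - y||^2 + alpha*||w||_1 + 0.5*beta*||w||^2  (elastic net)
rng = np.random.default_rng(1)
n, p = 80, 200                             # far more features than samples
w_true = np.zeros(p)
w_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.0]   # only 5 informative features
X = rng.normal(size=(n, p))
y = X @ w_true + 0.1 * rng.normal(size=n)

alpha, beta = 5.0, 1.0
L = np.linalg.norm(X, 2) ** 2 + beta       # Lipschitz constant of the smooth part
w = np.zeros(p)
for _ in range(1000):                      # ISTA: gradient step + soft threshold
    grad = X.T @ (X @ w - y) + beta * w
    z = w - grad / L
    w = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)

support = np.flatnonzero(w)                # indices of the recovered sparse pattern
```

Unlike a searchlight map, the non-zero entries of `w` directly form an interpretable spatial pattern, which is the motivation for the sparse approach in the talk.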
The functional and structural representation of the brain as a complex network means comparing noisy, intrinsically correlated, high-dimensional structures between experimental conditions or groups, a task beyond the reach of typical mass-univariate methods. Furthermore, most network estimation methods cannot distinguish real correlations from spurious ones arising from the convolution of nodes' interactions along indirect paths, which introduces additional noise into the data. We propose a machine learning pipeline aimed at identifying multivariate differences between brain networks associated with different experimental conditions. The pipeline (1) leverages the deconvolved individual contribution of each edge and (2) maps the task onto a sparse classification problem in order to construct the associated "sparse deconvolved predictive network", i.e., a graph with the same nodes as the networks being compared but whose edge weights are defined by their relevance for out-of-sample predictions in classification. We present an application of the proposed method by decoding the covert attention direction (left or right) from the single-trial functional connectivity matrix extracted from high-frequency magnetoencephalography (MEG) data. Our results demonstrate how network deconvolution combined with sparse classification methods outperforms typical approaches for MEG decoding.
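The deconvolution step can be illustrated with the closed-form construction of Feizi et al. (2013), which assumes the observed matrix sums direct effects plus all indirect paths, G_obs = G_dir + G_dir^2 + ..., and inverts that relation eigenvalue by eigenvalue. This is a toy sketch, not the talk's full pipeline:

```python
import numpy as np

def deconvolve(G_obs):
    """Closed-form network deconvolution of a symmetric similarity matrix."""
    lam, U = np.linalg.eigh(G_obs)
    lam_dir = lam / (1.0 + lam)          # inverse of lam_obs = lam_dir / (1 - lam_dir)
    return U @ np.diag(lam_dir) @ U.T

# Toy check: build G_obs from a known direct network and recover it.
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
G_dir = 0.1 * (A + A.T) / 2              # small symmetric "direct" network
lam, U = np.linalg.eigh(G_dir)
G_obs = U @ np.diag(lam / (1 - lam)) @ U.T   # closed form of the geometric series
G_rec = deconvolve(G_obs)                # recovers G_dir exactly in this toy case
```

In the pipeline described above, the deconvolved edge weights (rather than the raw correlations) are what feed the sparse classifier.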
Vectorial representation has so far been the most common strategy for decoding fMRI data. The high flexibility and ease of treatment of vectorial representations come with equally high disadvantages, both theoretical and practical, namely the loss of spatial information and the need for a common space in between-subject classification tasks. In this talk we discuss the possibility of adopting a graph-theoretical representation of brain data as a feasible alternative to vectorial encoding. We argue that graph encoding can not only preserve spatial information, but also represent relational information between brain areas. We give a synthetic review of the most common methods for assessing functional connectivity in fMRI data and discuss their utility for the task of graph encoding. We will argue that parcellation methods, in a broad sense, are still the most straightforward and intuitive approaches that easily meet our demands. In the final part, after sketching some possible implementations, we discuss the main challenges of graph encoding, namely which kinds of problems it is best suited to address, alternatives to parcellation, and the interpretability of the results.
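As a minimal illustration of parcellation-based graph encoding (toy data; the parcel count and the threshold are arbitrary choices for this sketch), parcel-average time series are correlated and the matrix is thresholded into a weighted adjacency graph:

```python
import numpy as np

rng = np.random.default_rng(3)
n_parcels, n_vols = 10, 200
ts = rng.normal(size=(n_parcels, n_vols))     # parcel-average BOLD time series
ts[1] += 0.8 * ts[0]                          # make parcels 0 and 1 functionally coupled

C = np.corrcoef(ts)                           # functional connectivity matrix
A = np.where(np.abs(C) > 0.3, C, 0.0)         # threshold weak edges
np.fill_diagonal(A, 0.0)                      # drop self-loops
degree = np.count_nonzero(A, axis=1)          # simple per-node graph statistic
```

The resulting graph keeps the relational structure between areas that a flattened feature vector would discard, which is the point argued above.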
If the number of parameters to estimate exceeds the number of measurements, an estimation problem is said to be ill-posed. Due to limited acquisition times, the physics of the measurements and the complexity of the brain, the field of functional brain imaging needs to address many ill-posed problems. Among them are the localization in space and time of active brain regions with MEG and EEG, the estimation of functional networks from fMRI resting-state data, and what is commonly called "decoding". Decoding consists of predicting a behavioral variable from fMRI data, or classifying brain states, using supervised learning methods such as SVMs. In this talk I will describe some recent contributions to the resolution of the M/EEG inverse problem using sparse structured priors in time-frequency dictionaries [1,2], as well as to the problem of supervised learning from fMRI data using spatially regularized convex priors [3,4,5].
[1] Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods. A. Gramfort, M. Kowalski, M. Hämäläinen. Physics in Medicine and Biology, 2012.
[2] Time-frequency mixed-norm estimates: Sparse M/EEG imaging with non-stationary source activations. A. Gramfort, D. Strohmeier, J. Haueisen, M. Hämäläinen, M. Kowalski. NeuroImage, 2013.
[3] Total variation regularization for fMRI-based prediction of behaviour. V. Michel, A. Gramfort, G. Varoquaux, E. Eger, B. Thirion. IEEE Transactions on Medical Imaging, 2011. http://www.ncbi.nlm.nih.gov/pubmed/21317080?dopt=Abstract
[4] Multi-scale mining of fMRI data with hierarchical structured sparsity. R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, F. Bach, B. Thirion. SIAM J. Imaging Sci., 2012. http://fr.arxiv.org/abs/1105.0363
[5] Identifying predictive regions from fMRI with TV-L1 prior. A. Gramfort, B. Thirion, G. Varoquaux. Pattern Recognition in Neuroimaging (PRNI), 2013. http://hal.inria.fr/docs/00/83/99/84/PDF/paper.pdf
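As context for the inverse-problem part of the talk, the simplest regularised solution to an underdetermined linear system y = Lx is the Tikhonov / minimum-norm estimate. Its closed form also shows why a non-sparse prior smears focal sources, which is what the sparse structured priors of [1,2] are designed to avoid (an illustrative NumPy sketch with made-up dimensions, not the talk's methods):

```python
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_sources = 20, 100                  # far fewer sensors than sources
L = rng.normal(size=(n_sensors, n_sources))     # stand-in for a lead-field matrix
x_true = np.zeros(n_sources)
x_true[[10, 50]] = [1.0, -1.0]                  # two focal active sources
y = L @ x_true                                  # measurements

# Minimum-norm estimate: argmin ||x||^2 s.t. Lx ~ y, regularised by lam
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

The estimate explains the measurements well but spreads energy over many sources instead of the two truly active ones, illustrating the ill-posedness the abstract describes.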
Regularities in the world are human-defined. The patterns in the observations are there because human experts define them and recognize them as such. Automatic pattern recognition tries to bridge human judgment with measurements made by artificial sensors. This is done in two steps: representation and generalization. Traditional object representations in pattern recognition, such as features and pixels, either neglect possibly significant aspects of the objects or neglect their dependencies and structure.
In this presentation, human observation and recognition are reconsidered. The direct experience of dissimilarities between objects will be used as a basis. From this starting point, pattern recognition systems can be defined in a natural way by pairwise object comparisons. This results in the dissimilarity representation for pattern recognition.
An analysis of dissimilarity measures optimized for performance shows that they tend to be non-Euclidean. The Euclidean vector spaces traditionally used in pattern recognition and machine learning may thereby be suboptimal, as we will show with some examples. Causes and consequences of non-Euclidean representations will be discussed. It is conjectured that human judgment of object differences results in non-Euclidean representations because entire objects, including their structure, are taken into account.
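The non-Euclidean claim can be checked numerically: a dissimilarity matrix is Euclidean if and only if the double-centered matrix of squared dissimilarities (the Gram matrix of classical MDS) is positive semi-definite. A small sketch with one Euclidean and one non-Euclidean (but still metric) example:

```python
import numpy as np

def gram_eigenvalues(D):
    """Eigenvalues of the double-centered matrix -0.5 * J D^2 J (classical MDS).
    Negative eigenvalues mean D admits no Euclidean embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    return np.linalg.eigvalsh(B)

# Euclidean example: pairwise distances of points in the plane
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
D_eu = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# Non-Euclidean metric: a point at distance 1 from all corners of an
# equilateral triangle of side 2 (its circumradius 2/sqrt(3) > 1, so this
# satisfies the triangle inequality but cannot be embedded in any R^n).
D_ne = np.array([[0., 2., 2., 1.],
                 [2., 0., 2., 1.],
                 [2., 2., 0., 1.],
                 [1., 1., 1., 0.]])
```

The Euclidean matrix yields only non-negative Gram eigenvalues, while the second matrix yields a clearly negative one, the signature of a non-Euclidean representation.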
Humans can categorize complex natural scenes quickly and accurately. Which properties of scenes enable such an astonishing feat? Line drawings of natural scenes provide us with comparably easy access to these properties, while still being comparable to photographs in their neural representation of scene category (Walther et al., PNAS 2011). We extracted five sets of scene properties from line drawings of natural scenes: contour length, orientation, and curvature, as well as the type and angle of contour junctions. We then categorized natural scenes based on the statistical distributions of these properties. Orientation was the property that allowed for the highest categorization accuracy. However, we found that the patterns of categorization errors for curvature, junction type and junction angle provided the best match with human behavior. Thus, junctions and curvature appear to be particularly relevant for the human ability to categorize scenes. We verified this computational prediction in a behavioral experiment with manipulated line drawings of scenes, in which the junctions were modified while length, orientation and curvature were preserved. As expected, this manipulation led to a significant decrease in categorization accuracy. Our results indicate that the human ability to categorize complex natural scenes is to a large extent driven by the structure of scenes, which is described by junctions and curvature. Line orientation, which is tightly linked to the spatial frequency spectrum, is useful for computational scene categorization but does not match human behavior. This finding challenges the popular view that natural scene categorization relies on statistical regularities of the spatial frequency spectrum.
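As a toy illustration of property-based categorization (entirely synthetic data; the actual study used line drawings of real scenes and richer property sets), a "scene" can be summarised by a histogram of its contour orientations and assigned to the nearest class centroid:

```python
import numpy as np

rng = np.random.default_rng(5)

def orientation_hist(angles, bins=8):
    """Normalised histogram of contour orientations over [0, pi)."""
    h, _ = np.histogram(np.mod(angles, np.pi), bins=bins, range=(0, np.pi))
    return h / h.sum()

def make_scene(kind, n=200):
    """Fake contour orientations: 'urban' scenes dominated by horizontals and
    verticals, 'natural' scenes with roughly uniform orientations."""
    if kind == "urban":
        return rng.choice([0.0, np.pi / 2], size=n) + 0.05 * rng.normal(size=n)
    return rng.uniform(0, np.pi, size=n)

# Class centroids: mean orientation histogram over 20 training scenes each
train = {k: np.mean([orientation_hist(make_scene(k)) for _ in range(20)], axis=0)
         for k in ("urban", "natural")}

def categorize(angles):
    h = orientation_hist(angles)
    return min(train, key=lambda k: np.linalg.norm(h - train[k]))
```

The same scheme extends to histograms of curvature or junction properties; comparing the error patterns of such property-specific classifiers with human errors is the logic of the analysis described above.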
Following the study by Haxby et al. (2001), multiple studies have successfully demonstrated that representations of objects and their properties can be decoded from fMRI data using a multivariate approach. In the present study, the multivariate decoding approach was applied to the properties of mental operations, namely the direction of mental rotation. An fMRI experiment was designed in which subjects had to rotate a stimulus in their mind's eye, clockwise or counterclockwise, toward a target in order to report whether it had the same shape as the target. Two workflows for ROI identification were developed to reduce the number of features. Activity data from those ROIs were then extracted and fed to an SVM classifier to test whether rotation direction could be predicted with above-chance accuracy. The results were as follows: first, several ROIs were identified where the accuracies for decoding mental rotation direction in terms of "up" versus "down" versus "cross-meridian (level)" were between 10 and 30% above chance, depending on the subject, the decoding type (multiclass vs. pairwise) and the type of analysis (workflow 1 or 2). Second, these ROIs (mainly in the occipital and parietal areas, BA 7, 18 and 19) largely match those identified in previous imaging studies of mental rotation.
Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., & Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425–2430.
Sligte, I.G., van Moorselaar, D., & Vandenbroucke, A.R.E. (2013). Decoding the contents of visual working memory: Evidence for process-based and content-based working memory areas? The Journal of Neuroscience, 33(4), 1293–1294.
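The decoding step can be sketched with synthetic data (hypothetical trial counts, voxel counts and effect sizes; a minimal Pegasos-style linear SVM stands in for the SVM package used in the study):

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_voxels = 100, 50
y = np.repeat([1, -1], n_trials // 2)          # two rotation directions
X = rng.normal(size=(n_trials, n_voxels))      # fake ROI voxel patterns
X[y == 1, :5] += 1.2                           # weak condition effect in 5 voxels

def pegasos(X, y, lam=0.01, epochs=50):
    """Stochastic subgradient descent on the hinge loss (linear SVM)."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)
            if y[i] * (X[i] @ w) < 1:          # margin violation: hinge subgradient
                w += eta * y[i] * X[i]
    return w

# Split-half cross-validation of decoding accuracy
train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)
w = pegasos(X[train], y[train])
acc = np.mean(np.sign(X[test] @ w) == y[test])
```

Above-chance held-out accuracy of this kind is the quantity the ROI workflows described above feed into; how to test it statistically is the topic of the following abstract.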
In neuroimaging experiments, "decoding" brain activity means recognising
the mental state of the subject from its neural correlates. In cognitive
neuroscience the decoding approach is used as a first step towards
understanding how the brain works, frequently in the form of a comparison
between hypotheses. In this context, a classifier that is able to
accurately predict the kind of stimulus presented to the subject is taken
as evidence of the presence of the related mental process within the
data. This talk describes the detailed steps for conducting a hypothesis
test to assess whether a classifier performs better than random guessing.
We start by introducing the different approaches (classical and Bayesian)
to testing statistical hypotheses and show how to implement them for
testing the classifier. Then we show that the widely adopted
cross-validation scheme may introduce an optimistic bias into the results
of statistical tests. We conclude by proposing a novel way to assess the
ability of a classifier to discriminate, one that avoids the shortcomings
of the commonly used classification accuracy, especially for unbalanced
and multiclass problems.
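For reference, the classical baseline the talk starts from can be written in a few lines: under the null hypothesis the classifier guesses at the chance level, so the number of correct predictions on an independent test set is binomial. This sketch shows only that standard test; the talk's point is precisely where this reasoning and cross-validated accuracy break down:

```python
from math import comb

def binomial_pvalue(n_correct, n_trials, p0=0.5):
    """Upper-tail p-value: P(X >= n_correct) for X ~ Binomial(n_trials, p0),
    i.e. the probability of doing at least this well by guessing at chance."""
    return sum(comb(n_trials, k) * p0**k * (1 - p0)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# 65 correct out of 100 binary-classification trials:
p = binomial_pvalue(65, 100)     # roughly 2e-3, well below alpha = 0.05
```

Note the test assumes independent trials and a single test set evaluated once; pooling correct counts across cross-validation folds, or using unbalanced classes with raw accuracy, violates these assumptions, which motivates the alternatives proposed in the talk.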