Resolving the neural representation of word meaning in time and space
How do we make sense of words? What are the neural mechanisms by which a simple arrangement of strokes can trigger rich and elaborate mental representations? Meaning is a multidimensional representation that includes both abstract features (e.g. taxonomy) and perceptual features (e.g. prototypical size, color) of the objects to which words refer. What are the neural systems coding for these different conceptual and perceptual dimensions of word meaning? How are they coordinated in time to give rise to the impression of a unitary representation of meaning?

Our first study suggests a posterior-to-anterior gradient of information coding along the ventral pathway, from purely perceptual (e.g. prototypical size) to conceptual (e.g. semantic category). Capitalizing on what we learned, we conducted a combined MEG/fMRI experiment to extend our investigation in two main directions: (1) adding another perceptual dimension, studying the patterns associated not only with visual properties but also with auditory ones; and (2) adding information on the temporal dynamics, thanks to the high temporal resolution of MEG. Does the reactivation of visual features in primary visual areas correspond to a similar reactivation of auditory features in primary auditory areas? Do we first activate the more abstract semantic features in the anterior temporal lobes and only afterwards the perceptual features in perceptual regions, or is the temporal order of activation the reverse? Or do these different representations get activated entirely in parallel?

By analyzing the data with multivariate methods (i.e. classifying and correlating the different patterns of activation recorded), we will try to shed light on when and where a given stimulus dimension is coded, and on how brain activity unfolds in time and space to transform meaningless symbols into unitary meaningful concepts.
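To illustrate the general logic of the multivariate decoding described above, here is a minimal sketch on simulated data (not the actual analysis pipeline): patterns of activation for two hypothetical semantic categories are generated as noisy copies of a category prototype, and a leave-one-out, correlation-based nearest-centroid classifier tests whether category identity can be read out from the patterns. All numbers (trial counts, feature counts, noise level) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 trials per category, 50 voxels/sensors per pattern.
n_trials, n_features = 20, 50
proto_a = rng.normal(0, 1, n_features)  # prototype pattern, category A
proto_b = rng.normal(0, 1, n_features)  # prototype pattern, category B
X = np.vstack([proto_a + rng.normal(0, 1, (n_trials, n_features)),
               proto_b + rng.normal(0, 1, (n_trials, n_features))])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out decoding with a correlation-based nearest-centroid rule."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i  # hold out trial i
        centroids = [X[train & (y == c)].mean(axis=0) for c in (0, 1)]
        # Pearson correlation between the held-out pattern and each centroid
        r = [np.corrcoef(X[i], c)[0, 1] for c in centroids]
        correct += int(np.argmax(r) == y[i])
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
print(f"decoding accuracy: {acc:.2f}")  # well above the 0.50 chance level
```

In the real analyses, the rows of `X` would be fMRI voxel patterns from a region of interest, or MEG sensor patterns at a given time point, and running the same decoding at successive time points is what localizes a stimulus dimension in both space and time.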