Presentation at the ATTEND project monthly meeting
A few recent studies (Reddy et al. 2010; Cichy et al. 2011) have shown that perception and imagery share a common neural basis. These studies showed that it is not only possible to decode the category of perceived or imagined objects from object-selective voxels in the brain, but that decoding also works cross-modally: a classifier trained on activity patterns in object-selective cortex during perception trials can reliably decode the object category of imagery trials, and vice versa. However, decoding perception trials with a classifier trained on imagery trials worked somewhat better than decoding in the opposite direction.
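The cross-modal logic can be sketched in a few lines: fit a classifier on trials from one modality and evaluate it on trials from the other. This is only an illustrative sketch with synthetic "voxel" patterns and a simple nearest-centroid classifier (the actual studies used other classifiers and real fMRI data); all names and numbers here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trials(n_trials, n_voxels, shift):
    """Synthetic voxel patterns for two categories (0 = people, 1 = cars).
    Each category has a distinct mean pattern; `shift` mimics a uniform
    modality-related offset between perception and imagery."""
    X, y = [], []
    for label in (0, 1):
        mean = np.zeros(n_voxels)
        mean[label::2] = 1.0  # category-specific pattern
        X.append(mean + shift + rng.normal(0, 0.5, (n_trials, n_voxels)))
        y += [label] * n_trials
    return np.vstack(X), np.array(y)

def fit_centroids(X, y):
    """Train step: one mean pattern per category."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each trial to the nearest category centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Perception and imagery trials share category structure, differ by an offset
X_perc, y_perc = make_trials(40, 100, shift=0.0)
X_imag, y_imag = make_trials(40, 100, shift=0.1)

# Cross-modal decoding: train on one modality, test on the other
acc_p2i = (predict(fit_centroids(X_perc, y_perc), X_imag) == y_imag).mean()
acc_i2p = (predict(fit_centroids(X_imag, y_imag), X_perc) == y_perc).mean()
print(acc_p2i, acc_i2p)
```

The key point is that no test-modality labels are ever used for training, so above-chance accuracy implies that the two modalities share a category-discriminative pattern.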
I presented the results of the analysis of our experiment, in which subjects viewed images of people and cars in half of the trials and, in the other half, imagined people or cars according to a cue. In the decoding analysis, I replicated the cross-modal perception-imagery decoding results of the previous studies using data from 10 subjects. The accuracies below were obtained with 4-fold cross-validation and averaged over subjects. First, within-modality decoding of object category was reliable: around 90% in early visual areas and above 70% in ITG and fusiform cortex for perception, and around 64% in object-selective cortex (OSC) for imagery. Next, for cross-modal decoding, accuracy in OSC was 64% when the classifier was trained on imagery data and tested on perception data, and 60% when trained on perception data and tested on imagery trials. Beyond replicating the previous results, the goals of the present study are: 1) to use perception and imagery data for cross-modal decoding of visual-search preparation data, and 2) to set up a real-time fMRI experiment in which object identity during visual-search preparation can be decoded online.
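The within-modality numbers above come from 4-fold cross-validation computed per subject and then averaged over subjects. A minimal sketch of that bookkeeping, again with synthetic data and a nearest-centroid classifier (the real analysis pipeline and classifier may differ; trial counts and noise levels here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def accuracy_4fold(X, y, n_folds=4):
    """Leave-one-fold-out accuracy with a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Train: one centroid per category, from training trials only
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        classes = sorted(centroids)
        d = np.stack([np.linalg.norm(X[test] - centroids[c], axis=1)
                      for c in classes])
        pred = np.array(classes)[d.argmin(axis=0)]
        accs.append((pred == y[test]).mean())
    return np.mean(accs)

# Hypothetical per-subject data: 40 trials x 100 voxels, 10 subjects
subject_accs = []
for _ in range(10):
    X = rng.normal(0, 0.5, (40, 100))
    y = np.repeat([0, 1], 20)
    X[y == 1, ::2] += 1.0  # category effect in half of the voxels
    subject_accs.append(accuracy_4fold(X, y))

group_acc = np.mean(subject_accs)  # group-level accuracy, averaged over subjects
print(group_acc)
```

Averaging fold accuracies within each subject before averaging over subjects keeps each subject's contribution to the group mean equal, regardless of trial counts.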