Abstract

Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skip-gram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based model for the most abstract nouns. More generally, this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.