Lastly, we assessed whether valence representations in a given individual corresponded to affect representations in other participants' brains. Because previous work has shown that the representational geometry of object categories in the VTC is shared across participants35,36, we first asked whether item-level (i.e., picture-by-picture) classification was possible by comparing each participant's item-based representational similarity matrix to the matrix estimated from all other participants in a leave-one-out procedure. For each target picture, classification performance was the percentage of pairwise comparisons in which the picture's representation was more similar to its own cross-participant estimate than to the estimates of all other pictures (50% chance; for details, see Online Methods and Supplementary Fig. 7). Item-specific representations in the VTC were predicted with high accuracy from the other participants' representational map (80.1 ± 1.4% accuracy, t15 = 21.4, P = 2.4 × 10−12; Fig. 6a). Cross-participant classification accuracy was also statistically significant in the OFC (54.7 ± 0.8% accuracy, t15 = 5.7, P = 0.00008); however, it was substantially lower than in the VTC (t15 = 15.9, P = 8.4 × 10−11), suggesting that item-specific information is represented more robustly, and translates better across participants, in the VTC than in the OFC.
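The leave-one-out pairwise classification described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: it assumes each participant's data are summarized as an item × item representational similarity matrix, uses Pearson correlation between RSM rows as the similarity measure, and simulates toy data in place of the real VTC/OFC patterns.

```python
import numpy as np

def loo_item_classification(rsms):
    """Leave-one-out cross-participant item classification.

    rsms : array of shape (n_subjects, n_items, n_items), one item-based
           representational similarity matrix (RSM) per participant.
    Returns the mean pairwise classification accuracy (chance = 0.5).
    """
    n_sub, n_items, _ = rsms.shape
    accs = []
    for s in range(n_sub):
        # Reference geometry: average RSM of all *other* participants
        ref = rsms[np.arange(n_sub) != s].mean(axis=0)
        for i in range(n_items):
            # Similarity of item i's row to its own cross-participant estimate
            own = np.corrcoef(rsms[s, i], ref[i])[0, 1]
            # Pairwise test: is item i closer to its own estimate than to
            # each other item's estimate?
            wins = [np.corrcoef(rsms[s, i], ref[j])[0, 1] < own
                    for j in range(n_items) if j != i]
            accs.append(np.mean(wins))
    return float(np.mean(accs))

# Toy demo: a shared similarity structure plus per-participant noise
rng = np.random.default_rng(0)
base = rng.standard_normal((12, 12))
base = (base + base.T) / 2                       # symmetric "true" RSM
rsms = np.stack([base + 0.1 * rng.standard_normal((12, 12))
                 for _ in range(8)])
print(loo_item_classification(rsms))             # well above 0.5 chance
```

With low noise the shared geometry dominates and accuracy approaches 1; adding more participant-specific noise pulls it toward the 50% chance level, qualitatively mirroring the VTC versus OFC contrast reported above.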
