Bottom Line:
We extracted five context-related network structures including a bottom-up network during encoding and, seconds later, cue-dependent retrieval of the same network with the opposite top-down connectivity. These findings show that context is represented in the cortical network as distributed communication structures with dynamic information flows. This study provides a general methodology for recording and analyzing cortical network neuronal communication during cognition.

ABSTRACT:
Context is information linked to a situation that can guide behavior. In the brain, context is encoded by sensory processing and can later be retrieved from memory. How context is communicated within the cortical network in sensory and mnemonic forms is unknown due to the lack of methods for high-resolution, brain-wide neuronal recording and analysis. Here, we report the comprehensive architecture of a cortical network for context processing. Using hemisphere-wide, high-density electrocorticography, we measured large-scale neuronal activity from monkeys observing videos of agents interacting in situations with different contexts. We extracted five context-related network structures including a bottom-up network during encoding and, seconds later, cue-dependent retrieval of the same network with the opposite top-down connectivity. These findings show that context is represented in the cortical network as distributed communication structures with dynamic information flows. This study provides a general methodology for recording and analyzing cortical network neuronal communication during cognition.

fig3s3: Five latent network structures were robust against ICA model order selection. (A) PARAFAC results (with the DTLD/GRAM method) from data obtained from ICA with 90% (ICA90%, as in Figure 2D), 80% (ICA80%), and 75% (ICA75%) of total variance preserved. In all cases, five latent network structures yielded the optimal fits. (B) Similarities among structures obtained from different ICA results. Top row: We first compared Structure i to Structure j obtained from ICA90% (90% vs 90%). Correlations were evaluated in four different domains: Comparison (the first tensor dimension), Time and Frequency (the second tensor dimension), and Causal outflow (the third tensor dimension). Significant correlations (α = 0.05) are indicated by asterisks, and high correlation coefficients (Pearson, ρ > 0.8) are indicated by circles. Second row: Correlations between Structure i obtained from ICA80% and Structure j obtained from ICA90% (80% vs 90%). The structures obtained from ICA80% were reordered so that the correlations on the diagonal were maximized. Bottom row: Correlations between Structure i obtained from ICA75% and Structure j obtained from ICA90% (75% vs 90%). High correlations were found on the diagonals in 80% vs 90% and 75% vs 90%, and high similarities were found among 90% vs 90%, 80% vs 90%, and 75% vs 90%. These results indicate that the five latent network structures were similar under different selections of ICA model order. DOI: http://dx.doi.org/10.7554/eLife.06121.017
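The reordering step described in the caption — permuting the structures from one ICA run so that the cross-run correlation matrix has maximal diagonal — can be sketched as an assignment problem. This is a minimal illustration, not the authors' actual analysis code; the function and variable names are hypothetical, and the optimal-assignment solver is one reasonable choice for "maximize the diagonal" matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_structures(F_ref, F_new):
    """Reorder the columns of F_new (structures from one ICA run) so that
    each best matches the corresponding column of F_ref (structures from a
    reference run), maximizing the summed absolute Pearson correlation on
    the diagonal of the cross-correlation matrix.

    F_ref, F_new: arrays of shape (n_features, n_structures).
    Returns the reordered F_new and the matched (signed) correlations.
    """
    R = F_ref.shape[1]
    corr = np.empty((R, R))
    for i in range(R):
        for j in range(R):
            corr[i, j] = np.corrcoef(F_ref[:, i], F_new[:, j])[0, 1]
    # Hungarian algorithm: find the permutation maximizing sum of |corr|
    row, col = linear_sum_assignment(-np.abs(corr))
    return F_new[:, col], corr[row, col]
```

In a use case analogous to the 80% vs 90% comparison, `F_ref` would hold the five structures from ICA90% and `F_new` those from ICA80%; matched diagonal correlations above the ρ > 0.8 criterion would then indicate the same latent structure was recovered under both model orders.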

Mentions:
To extract structured information from this high-volume dataset, we deconvolved the 3D tensor into multiple components by performing parallel factor analysis (PARAFAC), a generalization of principal component analysis (PCA) to higher-order arrays (Harshman and Lundy, 1994), and measured the consistency of deconvolution under different iterations of PARAFAC (Bro and Kiers, 2003). Remarkably, we observed five dominant structures from the pooled ∆ERCs that represented functional network dynamics, where each structure contained a comprehensive fingerprint of network function, dynamics, and anatomy (Figure 3D, and Figure 3—figure supplement 2). These five structures were robust against model order selection for ICA (Figure 3—figure supplement 3).
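The core idea of PARAFAC is that a 3-way tensor is approximated as a sum of rank-one outer products, one per latent structure. The sketch below is a minimal numpy implementation of the standard alternating-least-squares (ALS) fitting scheme for illustration only; the paper's figure supplement reports using the DTLD/GRAM method, and all names and sizes here are hypothetical.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product: row (j*K + k) holds U[j] * V[k]."""
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def parafac_als(X, rank, n_iter=500, seed=0):
    """Minimal PARAFAC (CP) decomposition of a 3-way tensor via
    alternating least squares: X[i,j,k] ~= sum_r A[i,r] * B[j,r] * C[k,r].
    In the paper's setting, the three modes would correspond to
    comparison, time-frequency, and causal outflow."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings of the tensor
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem for one factor
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

With `rank=5`, the columns of the three factor matrices would jointly describe one latent network structure each, analogous to the five structures reported in the paper; assessing fit consistency across random restarts corresponds to the core-consistency style diagnostics of Bro and Kiers (2003).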
