Current models of language processing propose that word meaning is partially stored in distributed, modality-specific cortical networks. However, while much has been done to investigate where information is represented in the brain, the neuronal dynamics underlying how these networks communicate, both internally and with each other, remain poorly understood. For example, it is not clear how spatially distributed semantic content is integrated into a coherent conceptual representation. The current thesis investigates how perceptual semantic features are selected and integrated, focusing on oscillatory neuronal dynamics. Cortical oscillations reflect synchronized activity in large neuronal populations and are associated with specific classes of network interactions. The first part of the thesis addresses how perceptual semantic features are selected in long-term memory. Using electroencephalographic (EEG) recordings, it is demonstrated that retrieving perceptually more complex information is associated with a reduction in oscillatory power, in line with the information-via-desynchronization hypothesis, a recent neurophysiological model of long-term memory retrieval. The second and third parts address how distributed semantic content is integrated and coordinated in the brain. Behavioral evidence suggests that integrating two features of a target word (e.g., whistle) during a dual property verification task incurs an additional processing cost if the features come from different modalities (visual: tiny, auditory: loud) rather than the same modality (visual: tiny, silver). Furthermore, EEG recordings reveal that integrating cross-modal feature pairs is associated with a more sustained low-frequency theta power increase in the left anterior temporal lobe (ATL). The ATL is thought to integrate semantic content converging from different modalities. In line with this notion, the ATL is shown to communicate with a widely distributed cortical network at the theta frequency.
The fourth part of the thesis uses magnetoencephalographic (MEG) recordings to show that, while low-frequency theta oscillations in the left ATL are more sensitive to integrating features from different modalities, integrating two features from the same modality induces an early increase in high-frequency gamma power in the left ATL and in modality-specific regions. These results are in line with a recent framework suggesting that local and long-range network dynamics are reflected in different oscillatory frequencies. The fifth part demonstrates that the connection weights between the left ATL and modality-specific regions at the theta frequency are modulated in accordance with the semantic content of the word (e.g., visual features enhance connectivity between the left ATL and left inferior occipital cortex). The thesis concludes by embedding these results in the context of current neurocognitive models of semantic processing.