Past research on the electrophysiology of face perception has focused almost exclusively on brain responses to artificial stimuli that are transient and static. As a result, our knowledge of the electrophysiological correlates of face perception remains rudimentary, consisting mostly of averaged ERP responses in the first 200 ms after stimulus onset, with virtually no description of how the brain responds to more naturally occurring dynamic faces. Our goal was to characterize the neural correlates of naturally occurring dynamic faces over a more sustained presentation time (500 ms). To this end, we recorded magnetoencephalography (MEG) responses to both dynamic and static face and non-face stimuli, and applied both traditional ERF component analysis, to compare our results to the M100 and M170 face responses, and machine learning techniques, to reveal other representations of viewing a dynamic face. In our ERF analyses, the dynamic-face ERFs show larger M100 and M170 responses than the static-face ERFs, with the M170 peaking ~40 ms earlier. In our classification analyses, face vs. non-face classification performance improves steadily as the time window is lengthened up to 500 ms, reaching ~80% accuracy at 500 ms for both dynamic and static stimuli. Hence, information about face-ness is not confined to a specific time interval but is distributed across the full temporal response, and more widely so for dynamic stimuli. Finally, this strong face selectivity emerges at sensors over the temporal lobes for dynamic stimuli and over the occipital lobes for static stimuli. Overall, our results provide new correlates of dynamic face perception and emphasize the critical information carried by sustained responses, as opposed to the traditional transient responses to static faces.
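The cumulative-window decoding analysis described above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the trial counts, sensor counts, sampling rate, classifier (logistic regression via scikit-learn), and the injected "face" signal are all assumptions made for the example.

```python
# Hypothetical sketch: face vs. non-face decoding from MEG epochs using
# progressively longer time windows from stimulus onset.
# Assumed epoch shape: (n_trials, n_sensors, n_times); all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 50, 100   # toy sizes, not the real dataset
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # 1 = face, 0 = non-face (synthetic)
X[y == 1, :, 40:] += 0.3                      # weak sustained "face" signal

accs = []
# Widen the window in steps; at an assumed 200 Hz these correspond to
# roughly 125, 250, 375, and 500 ms after onset.
for end in (25, 50, 75, 100):
    feats = X[:, :, :end].reshape(n_trials, -1)   # flatten sensors x time
    acc = cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5).mean()
    accs.append(acc)
    print(f"window 0-{end} samples: accuracy {acc:.2f}")
```

Under this setup, accuracy should rise as the window grows to include the sustained signal, mirroring the pattern reported in the abstract.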