Abstract: Thanks to the enormous progress and success of deep neural networks (DNNs), computer architecture research has recently been regaining its past "excitement": a large number of architectural proposals, based on approaches ranging from slightly to vastly different, have been made for the accelerated execution of DNN training and inference. Most of them, including those from our own research group, share common architectural features: namely, reconfigurable and in-memory processing architectures. This talk will offer 1) insights into why these developments are happening now, 2) an overview of the recent findings in this movement, and 3) a perspective on where this architectural innovation is heading.