This work proposes an approximate algorithmic transformation and a new class of accelerators called neural processing units (NPUs). The transformation converts regions of code from a von Neumann model to a neural model, which NPUs then execute. NPUs achieve an average 2.3× speedup and 3.0× energy savings for general-purpose approximate programs, showing that significant performance and efficiency gains are possible when the abstraction of full accuracy is relaxed in general-purpose computing.
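The transformation can be pictured as replacing a precise function with an invocation of a small neural network trained offline to mimic it. The sketch below is illustrative only: the target function, network size, and training loop are assumptions for demonstration, not the paper's NPU design or compilation workflow.

```python
import math
import random

def original_region(x):
    # The precise "von Neumann" computation we want to approximate.
    # math.sin stands in for an arbitrary approximable code region.
    return math.sin(x)

# Tiny one-hidden-layer MLP, trained offline to mimic original_region
# on [0, pi]. Hypothetical stand-in for the neural model an NPU runs.
random.seed(0)
H = 8  # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    # One forward pass: tanh hidden layer, linear output.
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * h[i] for i in range(H)) + b2, h

def train(epochs=3000, lr=0.02):
    # Plain stochastic gradient descent on squared error,
    # standing in for the offline training phase.
    global b2
    samples = [i * math.pi / 63 for i in range(64)]
    for _ in range(epochs):
        for x in samples:
            y, h = forward(x)
            err = y - original_region(x)
            for i in range(H):
                grad_h = err * w2[i] * (1.0 - h[i] ** 2)
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * grad_h * x
                b1[i] -= lr * grad_h
            b2 -= lr * err

def npu_region(x):
    # The transformed code: invoke the neural model instead of
    # the original computation.
    return forward(x)[0]

train()
worst = max(abs(npu_region(i * math.pi / 63) - original_region(i * math.pi / 63))
            for i in range(64))
print(worst < 0.1)
```

The accuracy bar (worst-case error under 0.1) is arbitrary; the point is only that a trained network can stand in for the original region at some tolerable loss of precision, which is the accuracy/efficiency trade the abstract describes.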