The primate visual system constructs intermediate shape representations using something akin to a radial modulation function (Pasupathy & Connor, 2001, 2002). However, less is known about whether sparse representations derived from radial modulation functions are useful for higher-level tasks, such as object recognition. The current study investigated whether several shape-encoding schemes based on radial modulation functions can be used by neural networks to learn the identity of closed-contour shapes degraded by varying amounts of blur or curvature noise. We measured the performance of neural networks on a shape-classification task with 10 sets of 10 unique shape classes, each consisting of 10,000 samples. When the shapes were not degraded, classification accuracy was high for representations based on the radial positions of, or the angles between, either positive or negative curvature extrema. When the shapes were blurred, classification accuracy was significantly lower for representations based on the angles between curvature extrema. Adding curvature noise, however, reduced classification accuracy by similar amounts across the sparse representation types. Sparse representations also led to faster training of neural networks than richer representations. We conclude that sparse shape vectors derived from radial modulation functions can support shape identification and enable faster network training.
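The stimuli and sparse encodings described above can be illustrated with a minimal sketch (not the authors' code): a closed contour whose radius is a sinusoidal radial modulation of a circle, from which a sparse shape vector is built out of the angular positions of positive and negative curvature extrema, or the angles between successive extrema. All function names, the single-frequency contour, and the parameter values here are illustrative assumptions.

```python
import numpy as np

def radial_modulation_contour(n_points=1024, r0=1.0, amp=0.2, freq=5, phase=0.0):
    """Closed contour r(theta) = r0 * (1 + amp * sin(freq*theta + phase)).
    A single-component radial modulation shape; parameters are illustrative."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = r0 * (1.0 + amp * np.sin(freq * theta + phase))
    return theta, r

def curvature_extrema(theta, r):
    """Signed curvature of a polar curve via periodic central differences,
    then local maxima (positive extrema) and minima (negative extrema),
    returned as their angular positions."""
    dt = theta[1] - theta[0]
    dr = (np.roll(r, -1) - np.roll(r, 1)) / (2.0 * dt)          # dr/dtheta
    ddr = (np.roll(r, -1) - 2.0 * r + np.roll(r, 1)) / dt**2    # d2r/dtheta2
    kappa = (r**2 + 2.0 * dr**2 - r * ddr) / (r**2 + dr**2) ** 1.5
    pos = (kappa > np.roll(kappa, 1)) & (kappa > np.roll(kappa, -1))
    neg = (kappa < np.roll(kappa, 1)) & (kappa < np.roll(kappa, -1))
    return theta[pos], theta[neg]

theta, r = radial_modulation_contour()
pos_theta, neg_theta = curvature_extrema(theta, r)

# Two sparse encodings in the spirit of the abstract: the angular positions
# of the extrema, and the angles between successive extrema along the contour.
sparse_positions = pos_theta
all_extrema = np.sort(np.concatenate([pos_theta, neg_theta]))
sparse_angles = np.diff(all_extrema)
```

For this 5-lobed contour the sketch yields five positive and five negative curvature extrema, so either encoding compresses the 1024-point contour to ten numbers or fewer, which is the kind of sparsity the study feeds to its networks.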