How do humans represent behaviorally relevant dimensions of real-world objects? To address this question, we recently used a triplet odd-one-out task to collect >800,000 behavioral judgments on images of 1,854 diverse basic-level object categories. To explain human behavior and characterize the similarity between pairs of objects, we developed a simple cognitive model that yielded sparse, interpretable perceptual and conceptual dimensions. To determine the utility of those dimensions, here we investigate two questions. First, to what degree can we predict those dimensions from a semantic embedding (Pilehvar & Collier, 2016) and activations in a deep convolutional neural network (CNN)? Second, can we use those predicted dimensions to reconstruct human behavioral similarity? To address these questions, we applied Ridge and Elastic Net regression to the semantic embeddings and to the activations in fully-connected layer 7 of the CNN VGG-16, respectively. We compared the resulting performance to two baseline models: the computational models alone, and a recently proposed method that transforms model features (Peterson et al., 2016). Our results demonstrate excellent prediction of many dimensions and strongly improved predictions of behavioral similarity using our model as compared to both baseline models. These results represent an important step towards both predictive and interpretable models of human cognitive representations.
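The regression step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature matrix, target dimensions, and their sizes are synthetic stand-ins for the VGG-16 fc7 activations and the behavioral dimensions, and the regularization strength is an arbitrary placeholder.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_objects, n_features, n_dims = 200, 50, 5  # hypothetical sizes

# Stand-in for model features (e.g., one fc7 activation vector per object).
X = rng.normal(size=(n_objects, n_features))

# Stand-in for the sparse behavioral dimensions to be predicted:
# here a noisy linear function of the features, for illustration only.
W = rng.normal(size=(n_features, n_dims))
Y = X @ W + 0.1 * rng.normal(size=(n_objects, n_dims))

# Fit one regularized regression per dimension and collect
# cross-validated predictions for out-of-sample evaluation.
Y_pred = np.column_stack([
    cross_val_predict(Ridge(alpha=1.0), X, Y[:, d], cv=5)
    for d in range(n_dims)
])

# Score each dimension by the correlation between true and predicted values.
r = [np.corrcoef(Y[:, d], Y_pred[:, d])[0, 1] for d in range(n_dims)]
print([round(v, 2) for v in r])
```

Predicted dimension vectors of this kind could then be compared pairwise (e.g., via dot products) to reconstruct a behavioral similarity matrix for evaluation against the human judgments.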