Stanford University Ph.D. student Andrej Karpathy wanted to figure out what makes one selfie better than another, so he ran more than 2 million photos tagged as selfies through a visual recognition system.

The system, a convolutional neural network (ConvNet for short), was trained on how many people had liked each photo, with Karpathy normalizing the results by the number of followers each user had.
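The normalization idea can be sketched in a few lines: divide each photo's likes by its poster's follower count so that accounts with huge audiences don't automatically dominate the ranking. This is an illustrative assumption of how such a score might work; the field names and data below are made up and this is not Karpathy's actual code.

```python
# Hypothetical sketch: score selfies by likes per follower so that
# popularity of the account doesn't swamp the quality of the photo.
# All data here is illustrative, not from the actual experiment.

def normalized_score(likes: int, followers: int) -> float:
    """Likes per follower, guarding against zero-follower accounts."""
    return likes / max(followers, 1)

selfies = [
    {"id": "a", "likes": 120, "followers": 10_000},
    {"id": "b", "likes": 45,  "followers": 300},
    {"id": "c", "likes": 900, "followers": 1_000_000},
]

# Sort best-first by the normalized score.
ranked = sorted(
    selfies,
    key=lambda s: normalized_score(s["likes"], s["followers"]),
    reverse=True,
)
print([s["id"] for s in ranked])  # → ['b', 'a', 'c']
```

Under this scoring, the photo with 45 likes from a 300-follower account outranks one with 900 likes from a million-follower account, which is the point of normalizing.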

After creating this rubric, Karpathy fed in another 50,000 new photos and had the ConvNet rank them from "best" to "worst" according to his algorithm. One key finding was that the style of the image matters more than the raw attractiveness of the person.

Filters and borders: Black-and-white photos did very well, and most top images used a filter that decreases contrast. Horizontal or vertical white borders also appeared frequently in the ConvNet's top photos.

Group selfies: While Ellen DeGeneres and friends took one of the most famous selfies of all time at the 2014 Oscars, group shots did not fare well in the rankings. Karpathy suggests you "keep it simple and take up all the space yourself," presumably with your head partially cut off.

Published at 7:16 PM EDT on Oct 30, 2015 | Updated at 9:47 AM EDT on May 13, 2016