AI can diagnose some genetic disorders from face photos

Genomes are so five minutes ago. In personalized medicine, everything now revolves around phenomes.

OK, that's an overstatement. But many genetic disorders do produce pronounced facial phenotypes (Down syndrome is probably the best-known example). Many of these diseases are quite rare and therefore not easy for clinicians to recognize. This lack of familiarity can send patients with these disorders (and their parents) on a long and traumatic diagnostic odyssey before they find out what ails them. And although each disease is rare on its own, rare diseases collectively are not that uncommon: together, they affect eight percent of the population.

FDNA is a genomics/AI company that "seeks to capture, structure, and analyze complex human physiological data. It provides useable genomic insight." The company has developed a facial image analysis framework called DeepGestalt that can diagnose genetic conditions based on facial images with greater accuracy than doctors can. The results are published in Nature Medicine.

To train the algorithm, the company relied on a dataset of 500,000 facial images of 10,000 people gathered from the Internet. When this dataset was compiled in 2014, it was larger than any known similar dataset except one held privately by Facebook.

They then tested how well it could identify the faces of people with one particular genetic disorder when those faces were mixed in with the faces of people with several other disorders – a situation in which a clinician or genetic counselor might well find themselves. They performed two tests of this type, one with Cornelia de Lange syndrome and the other with Angelman syndrome; both are developmental disorders with cognitive and motor impairments. In both cases, DeepGestalt achieved accuracies of over 90 percent, better than the experts, who came in closer to 70 or 75 percent.

Another test examined whether DeepGestalt could distinguish among a small pool of people who share the same disorder but have different genotypes. These were images of people with Noonan syndrome, whose phenotype varies depending on which of five different genes is mutated. This time it reached an accuracy of only 64 percent, but that is still better than the 20 percent expected from random guessing – especially since "two dysmorphologists came to the conclusion that the phenotype of the face alone was not sufficient to predict the genotype."
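The 20 percent baseline follows directly from the five-way choice: with five candidate genes, a random guess is right one time in five. A quick simulation (purely illustrative; the gene labels are stand-ins) confirms this:

```python
# Illustrative only: the chance baseline for a five-way genotype call.
# The gene names are placeholders, not the actual Noonan syndrome genes.
import random

random.seed(42)
genes = ["gene_1", "gene_2", "gene_3", "gene_4", "gene_5"]

trials = 100_000
# A "guess" matches the true genotype with probability 1/5.
hits = sum(random.choice(genes) == random.choice(genes) for _ in range(trials))
print(f"random-guess accuracy: {hits / trials:.1%}")  # ~20%, vs. DeepGestalt's 64%
```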

The final test was to diagnose hundreds of facial images spanning 216 different disorders. Here the algorithm was 90 percent accurate.

The algorithm works by cutting the face into multiple regions, assessing how well each region matches each syndrome, and then aggregating across the regions to see which syndrome fits the whole face best – hence the "Gestalt" in its name. However, the authors note that "DeepGestalt, like many artificial intelligence systems, cannot explicitly explain its predictions and provides no information about which facial features affected the classification."

It's a black box: it may outperform experts at making a phenotypic genetic diagnosis, but it cannot teach them how to do so themselves.