Saturday, September 26, 2015

"A report from a future where genetic engineering has sabotaged society."

From Nautil.us:

What If Tinder Showed Your IQ?

The not-so-young parents sat in the office of their socio-genetic
consultant, an occupation that emerged in the late 2030s, with at least
one practitioner in every affluent fertility clinic. They faced what had
become a fairly typical choice: Twelve viable embryos had been created
in their latest round of in vitro fertilization. Anxiously, they pored
over the scores for the various traits that the clinic had provided. Eight of the 16-cell morulae were fairly easy to eliminate because they carried higher-than-average risks for cardiovascular problems, schizophrenia, or both. That left four
potential babies from which to choose. One was going to be significantly
shorter than the parents and his older sibling. Another was a girl, and
since this was their second, they wanted a boy to complement their
darling Rita, now entering the terrible twos. Besides, this girl had a
greater than one-in-four chance of being infertile. Because this was
likely to be their last child, due to advancing age, they wanted to
maximize the chances they would someday enjoy grandchildren.

That
left two male embryos. These embryos scored almost identically on
disease risks, height, and body mass index. Where they differed was in
the realm of brain development. One scored a predicted IQ of 180 and the
other a “mere” 150. A generation earlier, an IQ of 150 would have been high enough to ensure an economically secure life in a number of occupations. But with the advent of voluntary artificial selection, a score of 150 was merely above average. By the mid-2040s, it took a score of 170 or more to ensure your little one would grow up to become a knowledge leader.

At the same time, the merger of 23andMe—the
largest genetics database in the world—and InterActiveCorp (owner of
Tinder and OKCupid), and their subsequent integration with Facebook,
meant that not only were embryos being selected for implantation based
on their future abilities and deficits, but that people were also
screening potential spouses based on genotype. Rather than just screening for non-smokers, why not screen for non-smokers who were genotypically likely to pass that trait on to their potential offspring?

But
there was a catch. There was always a catch. The science of
reprogenetics—self-chosen, self-directed eugenics—had come far over the
years, but it still could not escape the reality of evolutionary
tradeoffs, such as the increased likelihood of disease when one maximized a particular trait while ignoring the others. Or the social tradeoffs—the high-risk, high-reward economy for reprogenetic individuals, where a few IQ points could make all the difference between success and failure, or where stretching genetic potential to achieve
those cognitive heights might lead to a collapse in non-cognitive
skills, such as impulse control or empathy.

Against this
backdrop, the embryo predicted to have the higher IQ also had an
eight-fold greater chance of being severely myopic to the point of
uncorrectable blindness—every parent’s worst nightmare. The fact that
the genetic relationship between intelligence and focal length had been known for decades did not seem to figure in the mania for maximizing IQ.[1] Nor the fact that the correlation worked through genes that controlled eye and brain size, leading to some very odd-looking, high-IQ kids.[2] (And, of course, anecdotally, the
correlation between glasses and IQ has been the stuff of jokes for as
long as ground lenses have existed.)

Parents
were lured by slick marketing campaigns that promised educational
environments fine-tuned to a child’s particular combination of
genotypes.

The early proponents of reprogenetics
failed to take into account the basic genetic principle of pleiotropy: that
the same genes have not one phenotypic effect, but multiple ones.
Greater genetic potential for height also meant a higher risk score for
cardiovascular disease. Cancer risk and Alzheimer’s probability were inversely proportional—and not only because if one killed you, you were
probably spared the other, but because a good ability to regenerate
cells (read: neurons) also meant that one’s cells were more poised to
reproduce out of control (read: cancer).[3] As generations of
poets and painters could have attested, the genome score for creativity
was highly correlated with that for major depression.

But nowhere
was the correlation among predictive scores more powerful—and perhaps
in hindsight none should have been more obvious—than the strong
relationship between IQ and Asperger’s risk.[4] According to a
highly controversial paper from 2038, each additional 10 points over 120
also meant a doubling in the risk of being neurologically atypical.
Because the predictive power of genotyping had improved so dramatically,
the environmental component of outcomes had withered in a reflexive
loop. In the early decades of the 21st century, IQ was, on average, only
two-thirds genetic and one-third environmental in origin by young
adulthood.[5] But measuring the genetic component became a
self-fulfilling prophecy. That is, only kids with high-IQ genotypes were
admitted to the best schools, regardless of their test scores. (It was
generally assumed that IQ was measured with much error early in life
anyway, so genes were a much better proxy for ultimate, adult cognitive
functioning.) This pre-birth tracking meant that environmental
inputs—which were of course still necessary—were perfectly predicted by
the genetic distribution. This resulted in a heritability of 100 percent
for the traits most important to society—namely IQ and (lack of) ADHD,
thanks to the need to focus for long periods on intellectually
demanding, creative work, as machines were taking care of most other
tasks....MUCH MORE
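
As an aside: taken literally, the doubling rule attributed to the story's fictional 2038 paper pins down the relative risks for the two male embryos the parents are weighing. A minimal sketch in Python, with the baseline risk at an IQ of 120 left unspecified since the excerpt never gives it:

```python
# Hedged illustration (not from the article): the excerpt says each additional
# 10 IQ points over 120 doubles the risk of being "neurologically atypical".
# Read literally, the risk multiple versus the unspecified baseline at IQ 120
# is 2 ** ((iq - 120) / 10).

def relative_risk(iq: float) -> float:
    """Risk multiple relative to the baseline risk at an IQ of 120."""
    return 2 ** ((iq - 120) / 10)

for iq in (150, 180):  # the two male embryos in the story
    print(f"predicted IQ {iq}: {relative_risk(iq):.0f}x the baseline risk")

# predicted IQ 150: 8x the baseline risk
# predicted IQ 180: 64x the baseline risk
```

On that reading, the "mere" 150 embryo carries one-eighth the atypicality risk of its 180 sibling, which is exactly the tradeoff the parents are being asked to price.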