There is a range of psyches that has been favored by natural selection, maybe somewhat different in different parts of the world. Some people are outside that set: their minds are different, different in ways that were never favored by Darwinian selection. Now some traits we don’t like are probably in the set favored by natural selection: some kinds of unpleasantness probably work, at least at low frequencies. Sociopaths might fall in this category. There certainly are species with adaptive genetic behavioral variation – alternate male morphs, for example.

Moreover, standard psyches have their limitations. There are tasks that normal humans are bad at – think of optical illusions, or the Monty Hall problem. Or making sense.
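For anyone who doubts just how badly normal intuition fails on the Monty Hall problem, a quick simulation settles it empirically. This is a minimal Python sketch added for illustration, not part of the original post:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
stay = sum(monty_hall_trial(False) for _ in range(n)) / n
switch = sum(monty_hall_trial(True) for _ in range(n)) / n
print(stay, switch)  # stay converges to ~1/3, switching to ~2/3
```

Staying wins about a third of the time, switching about two-thirds – exactly the answer most people confidently reject on first hearing.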

Then there are special cases where people with psyches that clearly are out of whack are particularly good at some task, like bipolar poets.

Does this mean that we should say that having a Darwinian mental disease is hunky-dory? Should we refrain from trying to develop treatments or cures? Should we consider all possible mental states equally ‘valid’ (whatever that means)?

Of course not: only a loon would believe that. Being a normal human being isn’t a panacea, but adding gobs of genetic noise (or environmental insults like prenatal rubella or cytomegalovirus) isn’t going to make things better. That pattern at least survived all the tests of the past.

Having an accurate evaluation of a syndrome as a generally bad thing isn’t equivalent to attacking those with that syndrome. Being a leper is a bad thing, not just another wonderful flavor of humanity [insert hot tub joke], but that doesn’t mean that we have to spend our spare time playing practical jokes on lepers, tempting though that is. Leper hockey. We can cure leprosy, and we are right to do so. Preventing deafness through rubella vaccination was the right thing too – deafness sucks. And so on. As we get better at treating and preventing, humans are going to get more uniform – and that’s a good thing. Back to normalcy!

It’s also a factor that blindness is typically less treatable in the current world than deafness – certain kinds of deafness can be compensated for with implants, while there are as yet no effective artificial eyes.

Maybe… for me the difference is just the severity of the condition, the decrease in fitness if you want. Being deaf is a large fitness hit, no doubt, but being blind is just huge, no comparison at all. If I had to choose between the two, I wouldn’t need long to decide – so it’s much harder to say blindness is just a difference to be cherished.

I remember back when the geneticists Muller, Dobzhansky and Crow worried that modern technologies were working against Darwinian natural selection by creating a “genetic load” in the human gene pool, allowing the survival and reproduction of individuals whose “flawed” genomes might otherwise have been naturally selected out. They never considered the mirror-image problem: that modern technological societies might culturally select against SNPs that enhance the survivability of the total human genome. Do we really want to eliminate the mutation causing sickle cell anemia entirely? What about the gene causing cystic fibrosis? Psoriasis? Is it a good idea that everybody be tall, have regular features, and have an IQ two standard deviations above average? As genetic engineering and tinkering with individual genomes, and with the genomes of individuals’ germ cells, proceeds apace, we are going to have to start seriously addressing questions like this.

Dr. Cochran, speaking of heterozygote advantage: are there any cases where such an advantage is not the result of frequency-dependent selection for survival of parasite-mediated disease? I’ve been trying to find examples outside of that realm, but to no avail, leading me to believe that eliminating all such mutations (given the benefit of modern hygiene etc.) would be a desirable outcome of a future genetic therapy.

My fantasy: sociology, anthropology, and psychology profs forced to include your stuff in their curriculum. Watching them and their snowflakes come apart at the seams over this little post would be worth more than a monthly Netflix subscription.
“No, Prof. Cochran! You can’t possibly mean you hope there is one day a cure for homosexuality or transgenderism. IT’S NOT A SICKNESS! It doesn’t NEED to be ‘fixed’!”

When the day comes that we can cure homosexuality, we can cure heterosexuality too, or sexuality altogether.
On workdays, take the white pill and concentrate on your work. On weekends, take the pink pill and enjoy the party!

I often speak with young college students who’ve taken lower division courses in these subject areas, courses meant to meet the social science graduation requirements. True, they’re not Ivy Leaguers (can you convince me that would matter much?) but rather students enrolled in CA community colleges and in CA state universities. If what they say reflects what they’ve covered in those courses, then I can state they’ve been fed mush. If what they say does not reflect what’s been presented, then we’re wasting our resources.

I’ve spent enough time in enough classrooms to understand that curriculum lists are pretty things, pretty things that often sit in corners, collecting dust. Students especially like sociology and speak of it as “an easy ‘A’.”

Maybe it’s a California thing. We aren’t very discriminating in much of anything any more.

Was high IQ autism favored by natural selection? It’s probably now a Darwinian mental disease because it probably reduces reproductive fitness via sexual selection, but high IQ autistics probably have a more accurate mental map of reality than does the typical person, so it would be a mistake to say that “[t]heir minds ain’t right.”

Unless whatever gives high-IQ autistics autism also greatly contributes to them having a high IQ. Hyperlexia supports the idea that high IQ autism isn’t just high IQ plus damage caused by genetic load. Hyperlexia “is a syndrome characterized by a child’s precocious ability to read.” One hyperlexic child I know could spell words, including elephant, before he was two. Almost all hyperlexic children go on to develop autism. High IQ autism could be caused by a brain being “too good” at pattern recognition and if you removed the genes that gave you this superior pattern recognition and replaced them with typical genes you would likely lower IQ.

My wife read at age three, my daughter at age four. Neither was at all autistic. My daughter was very fluent: I remember her, at age 4, first pretending to be a lion, then asking me to save her “from the lion I was”.

Eric Raymond once speculated on why there are so many Asperger’s/autism-spectrum types in hard technical fields. His guess was that these syndromes freed your brain from “playing monkey games” – worrying about status and appearance and how people around you are seeing you.

Monkey games probably take up a good chunk of most people’s brainpower – introspectively, I certainly perceive myself to be spending a lot of cycles on that stuff, even when I’d rather focus on a technical matter.

I don’t know if this is correct, or how it might be tested, but I haven’t seen it anywhere else and it seemed to me to have a real insight.

I also read at 3, and 2/3 of my kids read by 4. (The last waited till 6 or so and then suddenly transitioned from reading only when required to reading all the time for pleasure – the Diary of a Wimpy Kid books, followed by the Harry Potter books, were the immediate incentive.)

None of us are autistic–one of my kids has some traits you’d associate with being on the spectrum somewhere, but probably not anything like enough to be diagnosed.

A few related observations (this is a ‘this information might be of interest’ observation, not an ‘I think X (of tribe Y) is right’ one):

a) IQ measures of autistics may be subject to very large variation (‘measurement error’?), depending on the metric used:

“We […] assessed a broad sample of 38 autistic children on the preeminent test of fluid intelligence, Raven’s Progressive Matrices. Their scores were, on average, 30 percentile points, and in some cases more than 70 percentile points, higher than their scores on the Wechsler scales of intelligence. Typically developing control children showed no such discrepancy, and a similar contrast was observed when a sample of autistic adults was compared with a sample of nonautistic adults.” (link)

I am not that interested in this literature so I don’t know if there’s been any kind of follow-up on this one, but it almost goes without saying that if one measure of IQ may differ by as much as 70(!) percentile points from another measure using a different instrument in the group you’re looking at, ‘be very cautious’ is, well…

b) Aside from IQ and some of the traits commonly observed in autistics being perhaps directly related, IQ and the probability of being diagnosed are almost certainly also dependent variables, in multiple ways. A diagnosis requires that the relevant symptoms “cause clinically significant impairment in social, occupational, or other important areas of current functioning.” High-IQ individuals are better at compensating for the social deficits, making them, all else equal, less likely to obtain a diagnosis. This would lead to a smaller proportion of high-IQ than low-IQ individuals with an autistic phenotype being diagnosed. The autistic physics professor is ‘slightly odd’/’eccentric’; the equally socially incompetent grocery store clerk is autistic and got the diagnosis because he was doing poorly. Any comparison between high-IQ individuals ‘in general’ and high-IQ autistics should ideally take this kind of thing into account. Diagnosis is a very blunt instrument of identification; trait measurements may be much more informative.

Age of diagnosis may also be IQ-dependent. Smarter individuals may be diagnosed earlier in cases of severe impairment, because their also-higher-than-average-IQ parents spot it earlier, or they may be diagnosed later in cases of mild impairment, where they perhaps do not start becoming unable to compensate until they move away from home – Asperger’s diagnosed in high-IQ individuals in their late teens or early twenties is not unusual. These kinds of effects indicate to me that it probably also matters a great deal which age groups one is looking at.

“severe impairment” (…along non-IQ-related dimensions, that was). On second thought, though, this one is probably not very likely to be relevant except for a small subset. Assuming smart parents spot autism in their children earlier, and then controlling for parental IQ, you’d probably, all else equal, expect the probability of diagnosis, conditional on an autistic phenotype being present, to be more or less monotonically decreasing in the IQ of the individual.

My son learned to read numbers and letters at the age of… one (two months before he learned to walk). He started reading his first sentences when he was a year and a half, and started school (in Canada) straight into grade two at the age of five and a half. He scored in the 99.875th percentile on the Raven as part of a gifted-children identification program in Canada (consisting of 6 different IQ tests in total) when in 7th grade (the school principal requested it, not the parents). By the time of graduation from high school his results had dropped to the 95th percentile. He was never diagnosed with Asperger’s; however, though I am not an MD, I have seen (and been fighting) each and every symptom of Asperger’s in him since he was age 6. I would gladly trade his IQ for better health. And no, this is not something natural selection would favor. It might slightly tolerate it if there were a niche for very narrowly specialised occupations in the society. Even then it would be doubtful. Otherwise – not in a zillion years.

James Sikela, a geneticist in Biochemistry and Molecular Genetics at the University of Colorado School of Medicine, has answered many of the questions asked in this Neurodiversity thread, and on West Hunter in general. These include answers on brains, evolution, autism, schizophrenia, IQ, and child prodigies. Dr. Cochran should contact Dr. Sikela and write more about these findings.

Wikipedia reports DUF1220 is the most repeated gene sequence in the human genome, with about 289 copies, and is tied in evolution to brain size growth. See “The case for DUF1220 domain dosage as a primary contributor to anthropoid brain expansion.”
Wikipedia says, “Sequences encoding DUF1220 domains show signs of positive selection, especially in primates, and are expressed in several human tissues including brain, where their expression is restricted to neurons.[1]”
“DUF1220 Dosage Is Linearly Associated with Increasing Severity of the Three Primary Symptoms of Autism.”
“DUF1220 copy number is linearly associated with increased cognitive function as measured by total IQ and mathematical aptitude scores.”
“This group had 26-33 copies of [DUF1220 subtype] CON2 with a mean of 29, and each copy increase of CON2 was associated with a 3.3-point increase in WISC IQ (R (2) = 0.22, p = 0.045).”
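Taking that quoted CON2 regression at face value, the implied spread across the reported copy-number range is easy to work out. This is a back-of-the-envelope illustration, not a figure from the paper itself:

```python
slope = 3.3          # WISC IQ points per additional CON2 copy, as quoted
low, high = 26, 33   # reported CON2 copy-number range in that group
spread = (high - low) * slope
print(round(spread, 1))  # 23.1 IQ points across the observed range
```

Though note the quoted R² of 0.22: on these numbers, copy number would explain only about a fifth of the IQ variance in that sample, so the per-copy figure carries a lot of noise.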

The same genes appear to allow child prodigies.
“Molecular genetic evidence for shared etiology of autism and prodigy.”

“DUF1220 copy number is associated with schizophrenia risk and severity: implications for understanding autism and schizophrenia as related diseases,” because schizophrenics typically have an unusually small number of DUF1220 copies, while autistics have an unusually high number.

There are two rival hypotheses: one is that people who marry late also have an underlying genetic propensity to nuttiness; the other is that an increased number of de novo mutations in the children of older fathers causes psychiatric trouble (not just autism). That increased number of de novo mutations with older fathers certainly exists – the question is how important they are. The issue is in doubt – both mechanisms may contribute.

A nasty idea when used as in Deepness. But a more interesting one when you contemplate the possibility of using it voluntarily, for short terms, for tasks that benefit from the state.

“Need to finish that paper? With a three-day supply of Focus™, your success is assured! Available at these fine locations….”

“Bored to death painting your living room? Worried you’ll miss details? For a low, low price, a weekend’s worth of Focus™ will ensure you not only paint all the woodwork properly—you’ll actually enjoy it!”

My daughter has been working with autistic secondary school kids (some of whom are evidently bright, judging by some of the stuff that they say, and the fact that they can do fairly clever stuff with laptop computers and such), and informs me that they engage in ‘stimming’ (you can look that up on Wikipedia – it is informative to do so), apparently mindless repetitive behaviour they engage in when they are feeling anxious or depressed (or whatever) due to overload from external stimuli. The stimming evidently serves to relieve whatever bad feelings they are experiencing, by blocking out external stimuli and giving them peace of mind for a while.

Stimming is mostly harmless, but occasionally can be harmful when it comprises repeated head banging against a wall, for example. But while engaged in stimming, the autistic person concerned is totally unproductive. That seems to be the whole purpose of the apparently aimless repetitive behaviour – to shut down the activity in the conscious mind for a while, or to focus it on something pointless and repetitive.

One of her charges was stimming recently by sitting off in a corner by himself endlessly rotating himself on a rotating office chair. When he finally snapped out of it and she was able to talk to him again, he told her that he was feeling ‘depressed’.

So the idea that they (in the generality) remain constantly focused is not correct – they have down time.

Being the greatest mathematical physicist, a great experimental physicist, and a great mathematician might suggest that he would be a bit odd. His attempts to find secret messages in the Bible certainly suggest so. Yet he was fit to be trusted to run the Royal Mint, which suggests that the Court saw him as more than merely odd.

His attempts to find secret messages in the bible certainly suggest so

Nothing odd about studying Bible prophecy in Newton’s cultural context. If, living in his time and place, Newton had been a secret rationalist atheist, that would have been strange.

Yet he was fit to be trusted to run the Royal Mint, which suggests that the Court saw him as more than merely odd.

Running the Royal Mint was meant as a sinecure, a reward for his services to the Crown. He was supposed to take the pay and let the Mint employees run the thing as usual, but he, unexpectedly, took the job seriously. That was odd in his time and place.

A lot of intellectuals in that era were pretty good about concealing their true beliefs about sensitive religious subjects. Descartes lived in Holland because he felt safer there. But even in Holland he did not speak candidly of his opinion of say the Bible or the divinity of Christ. Hume was almost certainly an out and out atheist but he never openly admitted it. Schopenhauer was the first really prominent philosopher to make no bones about his total lack of belief in God or the Bible.

I’ve read that one reason he sought the position of Warden of the Mint is that being a big shot at Cambridge or Oxford would have entailed attending a lot of religious functions. He apparently tried to avoid them. Also Newton was a lousy teacher. It was said that he sometimes lectured to an empty room.

Cavendish was super weird (weirder than Dirac) but also served in some official positions and apparently quite satisfactorily.

Isn’t a willingness to engage in extreme violence considered a defect by most of us, while having been considered an asset for warriors over most of history? I think the Globalists still find some of these people useful, and probably hockey teams as well. What do the genetic engineers do with those genes, assuming they have been identified?

What are you, a Quaker or something? It is hardly a “defect.” It is a survival behavior that was selected for. The proclivity for extreme violence is a latent talent everyone possesses. Just consider the rhetoric currently employed in political debate. It does take military training to effectively bring it to full fruition though.

Plenty of pretty normal people have killed the enemy in wartime, and have committed atrocities against the enemy under the right circumstances. (What is an atrocity is also culturally defined–the Romans, Vikings, Crusaders, Napoleonic Armies, etc., didn’t have the same notions of the rules that modern American soldiers do!)

Well, you are wrong. In the heat of battle a psychic switch gets thrown, and even draftees with what their sergeants would consider bad attitudes act in a most bloodthirsty way when their lives are threatened.

Indeed. It wasn’t until modern times, with the invention of implements of mass destruction such as accurate cannon with explosive shells, that wars became particularly dysgenic. Great pitched battles with high casualties and famous lore were not the norm. Armies often fought to standstills, or the losing side threw down their arms and ran away (to breed and fight another day).

Most casualties in war until WWI were from hunger and disease, not KIA and wounds. And when an army runs out of food and is hit by plague, it does not matter how brave a man the soldier is.
What organized state warfare selected for, if anything, would be resistance to disease, and a personality type resistant to calls for “god and country,” “patriotism,” or any other “higher cause” – someone who never volunteers and, if conscripted, deserts at the first opportunity.

Nonsense. Imperial Roman legionaries were given farmland as a reward for service, and during the Republic they were already farmers who were called up to fight in time of war, just as the Greek hoplite or medieval pikeman was during the summer fighting season. The high Middle Ages were a time of such prosperity and booming population that Crusader armies went off to the Middle East for what seems to be just the sport of it.

You could not pick a worse example. Roman soldiers (in the early Empire) were the best men recruited from the Empire, and were picked not only for physical strength and endurance, but for discipline and patience. “Extremely violent” raging berserker types would be shown the door at the first opportunity.

Crusaders were mostly volunteers, who went on “pilgrimage” to save their souls. It was very serious business, no fun. Since only about one in a hundred who went East ever returned, you can argue that the Crusades helped rid Europe of the “gene for religion” or “gene for belief”.

I dispute your false distinction between the natural violence of so-called berserker types and that of the average pikeman in a fixed battle. When in fear for their lives with no easy way out, everyone is capable of extreme violence, as is proven time and time again in actual wars. The professionalism of sober legionaries versus loosely organized and drunk barbarians is merely an example of the efficacy of good training, rather than a reflection of a given man’s proclivity toward violence.

I believe that in the early days of the Roman Republic there was actually a property qualification for becoming a Roman soldier. If you were poor you couldn’t become the Roman equivalent of “cannon fodder” even if you wanted to.

Eventually this system proved incapable of raising a sufficiently large army, and mass conscription was resorted to.

But the high standards for Roman soldiers applied mostly to the time of the Republic particularly the early Republic. By the time of the “Empire” (Augustus was the first to have that title) standards for common soldiers were no longer very high.

You have it backwards. The early legionaries were conscripts (the Latin verb legere means to levy) and were by and large seasonal warriors. The soldiers of the Empire were citizens who voluntarily signed up for 20 year terms of service and conquered the known world or at least the profitably taxable areas of it.

There are still mental-illness advocates who regard clear illnesses as things that don’t really exist – they are just society’s inability to understand and adapt to different modes of thought. I deal with these people in my work with moderate frequency. I am given to understand, though I can’t verify because I don’t go to those conferences, that there is still a considerable percentage of outpatient counselors – professionals of various sorts – who still accept Freud, Jung, or Laing as essentially correct, subject to modern improvements. A northern New England conference on multiple-personality disorder (under its newer name) drew over 300 professionals.
Some days it is hard to keep discouragement at bay.

When you are in New England and feeling discouraged, just take heart contemplating the states of the Old Confederacy, where the death penalty is still applied irrespective of which of the perpetrator’s alter egos committed the capital offense.

My experience is that people who write things like this don’t actually know many people with serious mental illness – and this included Szasz, surprisingly. Mere variation in humans does not prove that all variations are healthy; one can always find an example people don’t find pathological and then squint hard enough to pretend it is the same thing. I knew that Szasz was deeply libertarian, but I’m not seeing the “right-wing” part, either by the current definition or his own era’s. Nor liberal.
As for Greg’s idea that the advocates are themselves ill, that is true in at least one sense, as they are awash with manics and personality disorders who insist that they do not have an illness, not really, really. It may also be true in the sense I think he intended – one of my favorite clinical terms, “screwier.” There are lots of niches in a complicated society, however, and sometimes you can find someone willing to pay you for yours anyway.

As for Greg’s idea that the advocates are themselves ill, that is true in at least one sense, as they are awash with manics and personality disorders who insist that they do not have an illness, not really, really.

Someone who understands that he perfectly fits the symptoms and definition of a personality disorder, but is happy and does not want to be “helped,” “saved,” or “cured”: crazy or not crazy in your book?

The criteria for diagnoses aren’t actually rigorous enough for anyone to clearly fit them or not, which is part of why the DSM stresses that only qualified practitioners can make judgments about whether any given person meets the criteria for a diagnostic category. (Another part is job-protection.)
And in practice, even the criteria-as-written tend to be ignored or at least rendered malleable – if a patient can’t get their treatment covered without a diagnosis, and they don’t quite fit any category, professionals tend to just give them a label. What would be the point of being a stickler?

I’ve read that people prone to depression have a MORE realistic view of the world. Since reality is pretty depressing, that makes sense. (I’m not referring to bipolar disorder or anything that includes psychosis – just garden-variety depression.)

I’m prone to mild depression (I’ve used antidepressants since 1985). When I read or hear the rosy outlook of many political leftists, some seem somewhat detached from reality. I think they’re failing to detect real threats because they dream about utopias and won’t listen to or read anything that contradicts them.

If you remember utopian promises from the left, you must be really ancient. For as long as I can remember, the left has talked about nothing but doom and promised nothing but stopping the incoming armageddon. So does the right, and the whole of popular culture with them.

My prediction is that the opposite will happen. If we gain such an understanding of how the mind works that we can repair and change it at will, people will use the knowledge to change their minds and push them into shapes and directions unexpected and unimaginable now. Millions of flowers will bloom and no one will be able to stop them.

“A willingness to engage in extreme violence would be more common if it was a generally good reproductive strategy. Probably works just often enough not to be completely bred out.”

For most of human history, we lived in tribes that made war on each other. The winners killed the males and “married” the young females. So for males, being good at extreme violence (whatever that means — as opposed, say, to a kinder, gentler violence?) was a given. For females, not so much.

Maybe that’s changed. What percentage of conceptions are the result of rape? Every Y chromosome in existence has been passed on thousands of times by rape. We’re probably still pretty good at it.

Hunter-gatherer tribes consider a stranger in their territory a thief after their limited four-legged food supply. We were no different from any other top predator defending its territory. How in the world fools came up with the idea of the noble peaceful savage is beyond me. If you did not defend your territory as a member of a tribe, you lost it. If you were too aggressive, you were killed before you reproduced. It was a constant balancing act between these two pressures until the agricultural revolution, when all the rules changed.

“Every Y chromosome in existence has been passed on thousands of time by rape.”

If you mean thousands of times through the line of every male, then even with a generation time of ten years your assertion would require tens of thousands of years, even if every insemination were through rape. Seems very unlikely. If you mean each variant has been passed on thousands of times by rape, that would typically be out of millions of inseminations. Not very significant.
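The arithmetic behind that objection can be checked directly. This is a trivial sketch; the ten-year generation time and taking “thousands” at its bare minimum are the commenter’s deliberately generous assumptions:

```python
generation_years = 10   # deliberately short generation time, per the comment
transmissions = 1000    # "thousands of times" taken at its minimum
# Even if every single transmission in a male line were by rape,
# a thousand generations of it would span:
years_needed = transmissions * generation_years
print(years_needed)  # 10000 years at the bare minimum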

I would think “extreme violence” would be violence beyond that needed to accomplish the goal of the violence. Violence for the sake of violence rather than violence to achieve an end.

There are a couple of ways that you can address congenital mental conditions.

One is to attempt a cure, or even cull, when they become problematic – particularly if this has the effect of reducing the frequency of, or suppressing the symptoms of, a phenotype that is ill-adapted to current conditions, rather than actually eliminating it entirely. This can present some threats to neurodiversity, but those threats are balanced by very concrete associated costs.

Another is to pro-actively seek to breed people from only a small portfolio of “ideal types” (e.g. a strategy to get a large percentage of reproduction from artificial insemination by geniuses with particularly effective personality types). This is a huge threat to the survivability and vigor of the species, because often the variant we really need isn’t apparent until we actually need it, at which point the gene pool may be too mutation-limited to come up with the kind of variation we actually need.

For example, one of the main beneficial aspects of color blindness – a lack of vulnerability to many kinds of camouflage – wasn’t discovered until someone happened upon it serendipitously in the middle of a war. High-IQ autism is much less problematic in a world with lots of engineering and IT problems to solve than it was in a simpler world.

There are all manner of phenotype variants that we simply don’t notice, or see as minor disadvantages in life, and that aren’t the subject of much attention due to their low salience. These would be preserved in a cure-or-cull strategy but lost in an optimization strategy, particularly the low-frequency variations that many (most?) neuro-diverse types are.

Also, as in the case of the sickle cell gene, there can be a balance of good and bad consequences of a phenotype that changes over time. If you have lots of mosquitoes carrying malaria, the mixed (heterozygous) type can be good. If you live in the Arctic, its harm is not counterbalanced – but eliminating it from the gene pool might not be good for your tribe if, a few generations from now, your tribe migrates to a tropical clime. BRCA variants, which enhance breast cancer risk in middle age but may also enhance mental function, were a good thing when people had short life expectancies but may be a negative on balance now. Surely there are also neurotype genes that involve similar balancing that has shifted over time. A mental ability to predict where moving objects will be in 4D space was much more useful in the days of hunting (even with rifles) than it is today, but perhaps people who are weak in that trait are better at manipulating abstract concepts that are more relevant from a fitness perspective today.

There is also neurodiversity that is probably obsolete: variants that once served enough of a function to persist as low-frequency types even though the function now has little use. For example, it isn’t hard to read the Hebrew Bible in a manner suggesting that that society had well-established roles for people with schizophrenic hallucinations (“prophets”) and people with OCD (consider the elaborate, even manic, priestly rituals). Those particular roles no longer exist in our society, which isn’t to say that new ones might not someday be discovered.

Seasonal affective disorder was almost surely fitness-enhancing at some point, when food was scarce and the disease risks of interacting with others in cold seasons were great, even though it may not be now, in the days of central heating and electric lights.

An ability to be still and live purely in the intellectual realm for sustained periods, calm in body, used to be pretty useless and is now in high demand.

On the other hand, there is Darwinian selective fitness, which is relevant only insofar as it impacts fertility and mortality, and then there is a broader sense of fitness, which impacts well-being between birth and death even when a phenotype affects neither. Isn’t it part of intelligence to recognize that it is appropriate to enhance that broader sense of fitness in our population even when natural selection isn’t doing that job?

This whole issue demonstrates one of the cases where Dr. Cochran is clearly wrong. One of the reasons you can’t get a superhuman by creating an individual that possesses only the previously most-common alleles is that you’d only get an individual who was well-adapted to the past. The environment has moved on.

It’s also why biology doesn’t commonly utilize reproductive strategies that allow populations to eliminate less-successful alleles, and the few species that do are generally thought to be likely to go extinct soon. It’s important to keep weird mutations around, so that they have the potential to become adaptive when the environment changes – either externally, internally in interaction with other alleles, or both.

Many species do become finely adapted to a particular environment, and are prone to extinction if that environment changes rapidly. Species less finely tuned to one environment adapt better to change and can generally inhabit a larger range of environments. But biology does not make some kind of choice to follow the latter strategy. It’s a combination of adaptation and chance.

Yes, yes, there’s no point to stressing that biology isn’t a rational sapient.
If you’d prefer a strictly accurate statement of the principle: reproductive methods that efficiently eliminate less-efficient alleles strongly tend not to be favored by long-term selection, and methods that retain alleles even when they decrease individual fitness are strongly favored.

If you have a multitude of species, one for every available niche, then you don’t really need spare-part DNA – you have spare-part species instead.

On the other hand, if you have a species that can adapt to multiple environments, I can see the value of carrying around spare-part DNA – but once you get to the point where you can edit it, do you still need to carry it around?

Kinda like a garage storing hundreds of auto parts versus 3D printing them as and when needed.

What would such reproductive strategies be? Reproducing younger? (That one is definitely selected for right now.)

Sounds like one of those complicated multilevel issues. A novel allele that reduced mutation rate across the genome (better mutation repair? what mechanisms?) would probably be selected to invade in just about any wild population, because it would reduce genetic load among immediate descendants, and most mutations are not good. That’s true regardless of how rare mutations are. It seems to follow that selection would always favor the lowest possible mutation rate. But a population with no mutation cannot evolve.

So what happens? I’m not sure, but it sounds like some of the theory around the evolution of asexuality. Asexuality can in theory easily invade a sexual population in many cases (because the allele that causes it is passed on to every offspring, instead of half), but the lack of genetic shuffling means that asexual populations adapt more slowly to changing environments and get caught in Muller’s ratchet, so they tend to go extinct more often. So the theory is that you have a dynamic system where asexual lineages are “frequently” (in evolutionary time) arising, but also going extinct. I don’t know the empirical evidence for this.

Plants are better models for the topic than animals, I think, as their systems for genetic regulation are much, much simpler – which makes certain reproductive strategies possible, which in turn makes it more noteworthy when they’re not used.

The simplest and clearest example I can think of involves whether organisms can self-fertilize and the degree to which they do so. Obligatory self-fertilization is rare in nature but relatively common among domesticated crops, and it has some interesting consequences. One of them is that the heterozygosity of a population drops by 50% with each generation, neglecting new mutations.
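The 50%-per-generation figure follows directly from Mendelian ratios, and is easy to verify numerically. A minimal sketch in Python (one biallelic locus; the starting frequencies are arbitrary illustrative values):

```python
# Heterozygosity under obligate self-fertilization, one biallelic locus.
# Selfing a heterozygote (Aa) yields offspring 1/4 AA : 1/2 Aa : 1/4 aa,
# while homozygotes breed true - so the heterozygote fraction halves
# each generation, neglecting new mutations.
def self_one_generation(freq_AA, freq_Aa, freq_aa):
    return (
        freq_AA + freq_Aa / 4,  # AA parents plus 1/4 of Aa offspring
        freq_Aa / 2,            # half of Aa offspring remain heterozygous
        freq_aa + freq_Aa / 4,  # aa parents plus 1/4 of Aa offspring
    )

pop = (0.0, 1.0, 0.0)  # start fully heterozygous
for generation in range(5):
    pop = self_one_generation(*pop)

print(pop[1])  # heterozygosity after 5 generations: (1/2)**5 = 0.03125
```

Each allele's overall frequency never changes – selfing only repackages the alleles into homozygotes.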

Some classic studies, I believe, showed that selfing in plants is a lot more common in disturbed, marginal populations of short-lived annuals. If I recall correctly, one reason is reproductive insurance: sure, it’s better to reproduce with someone else (to avoid inbreeding), but if there’s no one else around…

But that’s still sexual reproduction: using meiosis to shuffle chromosomes into gametes, and putting them together. Having sex with yourself. Asexual reproduction is the clonal kind. And any allele that causes that is guaranteed to get into every offspring you make. A competing sexual allele only gets into half.

Another example of such multilevel processes is cytoplasmic male sterility. A mitochondrial mutation that suppresses male function (spreading pollen) and instead focuses all energy into female function (ovule production – the part that gets fertilized) might produce more fruit than wild types. Since mitochondria are inherited only through the female line, the allele increases in frequency. In theory this allele can invade and potentially replace all competitors (assuming its carriers compete just as well for pollen, etc.). But a population of only females cannot reproduce. Complicated things can happen, including suppression of such mutations by the nuclear genes – hence intragenomic conflict.

There are plant species for which inbreeding is a minor concern at most. Squash can be inbred indefinitely without fitness loss. And obligatory self-crossers are always inbred, but they thrive – in the short term.
Try thinking carefully about the implications of having heterozygosity decrease by half with each generation.

I assure you I understand the implications. I understand that it increases homozygosity and therefore “brings out” recessive genetic disease. I also know that exposing these phenotypes makes selection against rare recessives much more efficient, which is why habitually selfing populations probably carry fewer deleterious recessives, and therefore can be inbred indefinitely. Etc. In turn, I’m sure you’re aware of the two-fold cost of sex, which is what I was referring to.

My point at the beginning was just to add some nuance to your statement “It’s important to keep weird mutations around, so that they have the potential to become adaptive when the environment changes”. Yes, in the long term, any population that never mutates is doomed – it either dies off when the environment changes, or it gets driven to extinction by the population that evolves to outcompete it. But selection in the short term pretty much always favors lowering the mutation rate, and the long term is just a repetition of short terms. It’s not as if selection explicitly favors keeping bad mutations in the population for a rainy day. It’s not that simple. I’m sure you’re aware of this, but perhaps others aren’t.

In any case, I don’t understand how any of this contradicts what Greg has said. He never claimed that getting rid of rare alleles would produce a super human (although they’re probably more fit than the average, I bet).

I guess you think I’m pretty dumb, then. Obviously I recognize that yesterday’s bad variant can be tomorrow’s good variant. On the other hand, lots (most?) of mutations have never been beneficial to the human population, and never will be. Consider cystic fibrosis.

And again, I just want to make sure that people understand that selection doesn’t explicitly keep shitty genes around in the off chance that they are beneficial later. Selection works one generation at a time. It has to disfavor “bad” genes right now: that’s what “bad” means! As you know, if a selection process does maintain mutation at some optimum, it is perhaps more a matter of lineage selection. Like asexuality: it can theoretically evolve and invade sexually reproducing populations – but such lineages are more likely to die off. The two forces lead to some long-term equilibrium. Perhaps the same is true of mutation rate.

I’m not an expert, but it’s not obvious to me that “selection clearly HAS favored systems that retain” deleterious mutations. What are you referring to? The fact that mutation exists? On the contrary, what’s obvious to me is that selection has produced incredibly complex mechanisms to keep mutation rates very low. It seems to me that the existence of mutation isn’t mysterious – entropy is sufficient to explain why complex molecules aren’t copied faithfully. That the mutation rate is so low is what requires explanation – and clearly the explanation is natural selection for a very low mutation rate. So, for me, the fact that mutation exists is not strong evidence that there is a selection pressure favoring some optimal mutation rate to keep bad alleles around, which is what you seem to be suggesting (perhaps not).

Selection doesn’t ONLY work one generation at a time. And YES, biological systems are known to favor methods of reproduction that conserve even alleles with strong negative fitness, even though there are available strategies that eliminate them quite effectively. The interesting question – why do they do that – has more than one answer, and one of the reasons seems to be so that the genome as a whole retains a library of variations so that it can adapt to future changes, rather than eliminating everything that doesn’t immediately contribute to a local fitness maximum.

Compare peas and beans, which are obligate selfers under the majority of conditions, with corn, which has strong inbreeding depression and requires a large population of genetic donors to remain healthy. (Seedsavers recommend 100 parents as the absolute minimum, 200 as a better margin, and more than a thousand plants to retain the diversity of a whole strain.) It’s really, REALLY hard to eliminate recessive alleles in corn, even when they have easily visible effects and roguing is carried out for dozens of generations. Selection techniques for eliminating such alleles don’t work especially well, given how easily corn crosses and how much isolation is required.

“I’m not an expert, but it’s not obvious to me that “selection clearly HAS favored systems that retain” deleterious mutations. What are you referring to?”

You’re right, I DO think you’re pretty stupid, for the simple reason that you behave stupidly. 1) What have I just been talking about? Do you think that the things I began speaking about right after you asked the question the first time might possibly be related to your question? Have you considered the possibility that you’re not following the discussion as well as you think you are? 2) Deleterious mutations have nothing to do with the matter. If an allele works wonderfully in an environment and is nigh-universal, and then conditions change and it no longer provides a fitness advantage, it’s not a recent mutation – but it’s still conserved by the reproductive strategies most organisms use. Deleterious traits are conserved, not mutations alone.

“the genome as a whole retains a library of variations so that it can adapt to future changes, rather than eliminating everything that doesn’t immediately contribute to a local fitness maximum.”
What the fuck are you talking about? What does it mean to say that “the genome as a whole retains a library of variations so it can adapt to the future”? What are you actually saying, with respect to mechanism? You know, apart from the usual mechanisms: mutation, selection, recombination, etc. E.g., mutation creates variation, selection is currently acting to remove deleterious variants (by definition), and the interplay of those two forces maintains standing variation. Are you suggesting that the genome magically hides deleterious alleles from selection (by making them non-deleterious???) so that they can be used later when possibly helpful (at which point they will have to be magically turned on)? To repeat, what the fuck are you talking about?

“Have you considered the possibility that you’re not following the discussion as well as you think you are?”
I’m quite certain that I’m not following you, but not for the reason you think.

“Selection doesn’t ONLY work one generation at a time.”
Obviously you’re not suggesting that Lady Selection magically peers into the future to see whether some currently deleterious allele might be beneficial in the future so that it can keep the allele around. So what are you saying?

Maybe you’re referring to the fact that temporal variation in selection pressure leads to, e.g., fixation of the alleles that have the highest between-generation geometric mean fitness, and similar results. But, see, I already know that, and it’s just the long-term outcome of one-generation-at-a-time selection.

Rereading the original reply, “you’d only get an individual who was well-adapted to the past” which is “why biology doesn’t commonly utilize reproductive strategies that allow populations to eliminate less-successful alleles”, and the subsequent interchange I realized that what he is claiming is that evolution is non-causal. I.e., it adapts to an anticipated future, not the real past and present. That’s utter bunk. It’s a waste of time arguing with people who think like that.

That’s the point where you err – acting as though there were some essential standard, some divine originating baseline, that mutation inevitably falls away from. There isn’t. The baseline is composed of a swarm of alleles, all of which were mutations at one point or another, certain combinations of which produce results which are more or less evolutionarily successful in a given environment. It’s a dynamic equilibrium, only appearing stable if you back off far enough to lose the details. The gears are made of clumps of sand, loosely stuck together, and they come apart and reform into new configurations constantly.

I don’t doubt that, out of the total possible combinations of all the alleles present in the genome at any given time, there are one or more combinations which maximize various fitness criteria – and substituting other extant alleles, or mutations, would reduce fitness. But conditions change – especially since the ‘conditions’ depend on the effects of other genes as well as external factors.

Consider the example of ‘Apolipoprotein A-1 Milano’, a recent mutation found in a single Italian family. Presumably it has arisen before in the millions of years human and prehumans have existed; presumably it was less fit in the ancient environment than the standard allele. But in today’s world, it’s pretty obviously superior. And more to the point, it has a variety of effects on our internal chemical stew that differ from the standard. There are probably hypothetical variants of other genes that work more effectively in that alternate environment than the standard one, and because we tend to retain variation, it’s possible some such variations exist right now in the population. By your reasoning, this mutation should be eliminated, because it constitutes ‘grit’ in a presumably optimally-functioning biochemistry. You’d be completely wrong.

In terms of IQ, the aspects of IQ that can be explained by efficiency maxima are the least interesting in my view. Take the Ashkenazi advantage, for example – it’s not global IQ, it’s for specific types of processing. Which means they’ve been selected for neurological architecture that differs from the norm; they’re not just extra-clear and -crisp versions of the standard design. That’s interesting. And whatever the specific genetic differences, they aren’t maximally adaptive given the fitness criteria for the generic human environment, or humans would be that way generally. The definition of what ‘fit’ means changed.

“acting as though there were some essential standard, some divine originating baseline, that mutation inevitably falls away from. There isn’t.”
He’s not saying that. He’s saying if you created a person that only had the common allele at every locus, such a person would be substantially more fit than average right now – given current environment and biology. That’s it.

Even that most likely isn’t true. There are a number of cases where heterozygosity seems to be associated with greater fitness – possibly the same general reason why men suffer more severe psychiatric illnesses than women: women are all X-chromosome mosaics.

It’s pretty simple, man. Take a person, go through each gene, and if ever he has a rare allele, replace it with the common allele. The only cases where this will tend to reduce fitness or biological function are
(1) where there is heterozygote superiority (which is not thought to be common, I think, but certainly exists) and he only has 1 copy of the rare allele
(2) where the rare allele this person has just happens to be the one that is currently selectively favored

This will certainly be the case for some loci… but on the whole, most rare alleles are probably young neutral or deleterious mutations – which means getting rid of them will be a good thing on the whole.

That would have some very interesting consequences for regulatory sections of DNA. And especially for the brain, given its complexity.

Here’s a point: being ‘late’ entering puberty is highly correlated with greater IQ, possibly because it delays the onset of neural pruning periods. If you put the most common forms of the relevant regulatory genes in place of less-common forms, you’d get a population in which everyone enters puberty at the current average time in development. What implications does that hold for IQ, given the known correlations?

“Consider the example of ‘Apolipoprotein A-1 Milano’, a recent mutation found in a single Italian family… in today’s world, it’s pretty obviously superior… By your reasoning, this mutation should be eliminated, because it constitutes ‘grit’ in a presumably optimally-functioning biochemistry. You’d be completely wrong.”
Again, not what he’s saying. If the allele is superior (with regard to fitness), then selection currently favors it, by definition. He’s saying that, on average across the genome, the common allele is usually the most beneficial, because most loci are not undergoing novel allele sweeps. So it follows that a person without all these rare alleles would be a superhero. Obviously this doesn’t imply that new beneficial alleles are selected against.

But that simply doesn’t work when we’re talking about the brain and genes associated with its structure. We’d expect those sections to have undergone massive upheavals and still be in flux. The most common genes would NOT be associated with extraordinary mental performance of any kind.

Not if you look at individual alleles, no. But we can’t determine the meaning of a paragraph by looking at its individual words in isolation, either. Only the simplest traits are tied to single-gene variations; for obvious reasons, those are the ones we first discovered and started to puzzle out. More sophisticated traits – like brain design – are much, much more complicated.

Sure, maybe the brain is currently actively evolving and undergoing lots of sweeps (though I trust Greg when he says no cases have been found). It’s still almost certainly true, though, that most rare alleles inside a single individual will not be beneficial.

Let’s demonstrate. Suppose that 1000 loci contribute to IQ. Suppose unrealistically that every single one of those loci is currently undergoing selection for a rare, novel allele. This is a case of “massive upheaval and flux”. But there are also 4 other rare, novel alleles present at each locus that reduce IQ, because most mutations are bad. To make things easy, suppose each rare allele (5 total: 1 good, 4 bad) has 1% frequency in the population, and the common allele has 95%.

Now, ideally, a person would be fixed for the rare beneficial variant at every locus, but the probability of that is (0.01)^2000, so of course there is no such person (this assumes no linkage disequilibrium, which will of course not hold if the alleles are very young, so even that is an overestimate).

In fact, out of those 2000 allele copies the average person has 1900 common alleles, 80 rare deleterious alleles, and 20 rare beneficial alleles. The net effect of the rare mutations (assuming roughly equal effect sizes) is therefore 80 bad against 20 good: net bad. There is, of course, a chance that a person will happen to have more good mutations than bad – but I simulated 100 million draws from the distribution, and it happened 0 times. It follows that virtually everyone would benefit from replacing their rare alleles with the common allele.

How smart would the all-common-alleles person be? Suppose that a deleterious allele subtracts amount x from your IQ, and a good allele adds x. The average score would be -60x. The standard deviation turns out to be 9.9x. So someone with no rare alleles would have a score of 0x, or 6 SDs above the average. The chance of such a person existing is .95^2000 = 2.8e-45.

(Obviously the exact result here depends on my arbitrary numerical assumptions, but the point is that most rare alleles that have an effect are deleterious, so getting rid of them would be a good thing for just about everyone.)
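The arithmetic in the comment above is easy to reproduce. A sketch with the same made-up parameters (1000 diploid loci; at each locus one rare beneficial and four rare deleterious alleles at 1% apiece), using far fewer draws than the 100 million quoted:

```python
import numpy as np

rng = np.random.default_rng(42)
COPIES = 2 * 1000  # 1000 diploid loci -> 2000 allele copies per person
# per copy: P(common) = 0.95, P(bad) = 4 * 0.01, P(good) = 0.01
counts = rng.multinomial(COPIES, [0.95, 0.04, 0.01], size=200_000)
bad, good = counts[:, 1], counts[:, 2]
score = good - bad  # IQ contribution in units of the per-allele effect x

print(score.mean())        # ~ -60: the expected net drag from rare alleles
print(score.std())         # ~ 9.9: spread of that drag across people
print((good > bad).sum())  # draws where good mutations outnumber bad: ~0
print(0.95 ** COPIES)      # chance of a no-rare-allele person: ~2.8e-45
```

A person with no rare alleles scores 0, i.e. about 60/9.9 ≈ 6 standard deviations above the mean – matching the figures in the comment.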

“For a long time the water in the cisterns had been honored as the cause of the scrotal hernia that so many men in the city endured not only without embarrassment but with a certain patriotic insolence. When Juvenal Urbino was in elementary school, he could not avoid a spasm of horror at the sight of men with ruptures sitting in their doorways on hot afternoons, fanning their enormous testicle as if it were a child sleeping between their legs. It was said that the hernia whistled like a lugubrious bird on stormy nights and twisted in unbearable pain when a buzzard feather was burned nearby, but no one complained about those discomforts because a large, well-carried rupture was, more than anything else, a display of masculine honor. When Dr. Juvenal Urbino returned from Europe he was already well aware of the scientific fallacy in these beliefs, but they were so rooted in local superstition that many people opposed the mineral enrichment of the water in the cisterns for fear of destroying its ability to cause an honorable rupture.”

Is a person who willfully questions the dogmas and taboos of his society even in the face of criticism and reprisals psychologically healthy or unhealthy? If we found such behavior linked to specific genes, would you or would you not be in favor of eliminating such a trait?

I’d say that given the ability to edit the genome so thoroughly as to remove genetic load implies that all the traditional limitations on breeding no longer apply. Everyone can afford to carry around a few rare variants of small effect, probably dozens to hundreds without much effect, and in each engineered person you can choose each one separately from anybody anywhere. Each rare allele no longer carries half a genome as baggage. Not only is preserving genetic diversity vastly easier, one can far better tell what the effect of each rare allele is on its own when they are in a mostly-standard “clean” genome.

I’m well aware of Cochran’s tendency to ignore people pointing out his mistakes, I just wanted to offer him an opportunity not to do so.

There really is no such thing as ‘genetic load’, just combinations of genes that produce various effects, some of which are more adaptive than others in a given environment. Our current genomes work the way they do because traits vital to our ancient ancestors have been erased by mutations. If you disagree, explain what happened to the sections of our DNA that code for the structure and function of the vomeronasal organ, which it was discovered we have only a few years ago.

Fine, I’ll restate: maximizing fitness for a specific environment species-wide cripples long-term adaptability, and is therefore ultimately less successful than retaining genetic diversity despite the immediate suboptimal fitness that results. That’s why most traits are the result of many genes-of-small-effect.

Additionally, there is unlikely to be a single, optimal genotype for any real-world fitness function (although obviously it’s possible to construct a fitness criterion that does have an optimum).

Some mutations have little effect, for example because of the redundancy in the genetic code. Almost all of those that have much effect have a bad effect, one that detracts from function. Those deleterious mutations are gradually removed by natural selection, but that takes time, and at any given moment, some deleterious mutations exist [ genetic load]. Mutation-selection balance.
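The classic deterministic approximations for mutation–selection balance make the point quantitative: the equilibrium frequency of a deleterious allele is about u/s when the allele is harmful even in heterozygotes, and about sqrt(u/s) when fully recessive. The rates below are illustrative assumptions, not measured human values:

```python
# Deterministic mutation-selection balance at a single locus.
# u: per-generation mutation rate to the deleterious allele (assumed value)
# s: selection coefficient against affected genotypes (assumed value)
u = 1e-5
s = 0.01

q_expressed = u / s            # harmful even in heterozygotes: q ~ u/s
q_recessive = (u / s) ** 0.5   # hidden in heterozygotes: q ~ sqrt(u/s)

print(q_expressed)   # ~ 0.001
print(q_recessive)   # ~ 0.0316: recessives persist at much higher frequency
```

This is why a steady trickle of mutation keeps some load in every population even while selection is removing it: removal takes generations, and fully recessive variants spend most of their time hidden from selection.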

A recent study indicates that such deleterious variation explains the majority of variance in IQ. If that is correct, getting rid of the genetic load would have a huge payoff.

Now I’m sure that you can think of situations in which having only one working copy of a key brain development gene would be favored by selection – for example, if a mad king genotyped his subjects and then executed everyone that didn’t have that broken copy – but I don’t have to take that scenario seriously, any more than I have to take you seriously.

Purging genetic load is, in a sense, harder than choosing the fitness-neutral IQ-plus alleles we get from GWAS – harder because those deleterious alleles are each quite rare; every person and family has a different set. But on the other hand, the reward is greater: first because most of the variance in IQ is explained by mutational load (more than by the fitness-neutral GWAS alleles), and second because most load alleles will have negative effects on other traits, not just IQ. Eliminating load would make you healthier as well as smarter – the same reason that smarter-than-the-average-bear individuals are healthier and live longer than average, and not merely because they are more likely to follow medical advice.

You seem to be confusing reductions in fitness with the mechanisms you believe are responsible for reductions in fitness, and referring to them with the same terms. You can’t ‘purge genetic load’ – how do you ‘purge’ the difference between the average fitness of a population and a reference genotype? You can purge minority alleles, including recent mutations, though. Except that fitness changes as environment changes, and the majority alleles we’ve inherited aren’t necessarily maximally fit in the modern environment.

One problem is that we’re realizing a lot of the genome is part of complex regulatory systems rather than directly coding for enzymes and suchlike. And given that the brain is known to have tens of thousands of genes expressed only in it – and is thought to have hundreds of thousands of genes involved with its development and function – it’s pretty clear that most of the genetic material directing the brain is of the regulatory and subtle function type. I bet past selection was for functioning despite poor, low-quality nutrition, frequent head injuries, and the occasional famine, rather than maximizing intellectual performance, so there’s probably some room for improvement there.

We have no idea what the ideal fitness phenotype of the modern world is. I can’t say, biologists can’t say, and you certainly can’t. All we can say with confidence is that the genotype that worked well for most of human existence doesn’t produce it.

We (meaning “not you”) have known that genetic load seemed likely to significantly lower fitness for a long time, but that didn’t give you much specific information about how strongly it affected traits of interest. Although inbreeding studies gave some information.

Recent work indicates that mutational load – which takes the form of rare deleterious alleles – accounts for most of the variance in IQ. Which means that if we could get rid of it, which is surely possible if we alter genes at will (CRISPR), we could gene-clean people, remove the typos, and those gene-cleaned people would have higher average IQ than any existing population, probably by a lot.

In the long run, the response to natural selection would be reduced if mutation stopped. In the short run, people that had been cleansed of rare variation – almost all of which that does anything noticeable is deleterious [‘bad for you’] – would conquer the Galaxy. One could of course deliberately introduce genetic changes – you wouldn’t have to rely on random changes, if you were smart.

“genetic load seemed likely to significantly lower fitness for a long time”

Quoting from Wikipedia: Genetic load is the difference between the fitness of an average genotype in a population and the fitness of some reference genotype, which may be either the best present in a population, or may be the theoretically optimal genotype.

It doesn’t DO anything, because it’s a measure of the difference between fitnesses. Saying getting rid of genetic load will increase fitness is like saying getting rid of proximity will make things further apart.

If one eliminated your rare genetic variation – almost all of which, insofar as it does anything at all, is deleterious – you’d be a lot smarter. That rare variation has an effect. It does something: it makes you dumber than you otherwise would have been.

‘Deleterious’ is only meaningful if you have a specific fitness in mind. In the context of a particular environment, the variations that result in survival eventually come to dominate, so change away from them reduces fitness – IN THAT SPECIAL ENVIRONMENT.

The genes responsible for regulation of brain development exist in an environment which is wildly changing, both because the other genes they have to work with are also constantly changing, and the organism in which they work is radically altering its physical environment. Pretty much the only thing we can conclude about fitness in that context is that the ancestral definition no longer applies.

Ashkenazi Jews demonstrate quite elegantly that significant increases in IQ don’t require purging rare mutations. And the fact that there are simple and effective strategies for eliminating suboptimal genes, which are NOT used by actual organisms in wild ecosystems, and are in fact actively avoided, strongly indicates that your linking rare mutations with ultimate fitness is bunk. Your understanding of ‘fitness’, and the cumulative results of billions of years of evolution on the entire ecosphere, are incompatible.

Which is the more compelling argument: your statements, or our lying eyes?

melendwyr says:
“I bet past selection was for functioning despite poor, low-quality nutrition, frequent head injuries, and the occasional famine, rather than maximizing intellectual performance, so there’s probably some room for improvement there.”

Just because the average professor’s greatest physical risk is tripping over a crack in the sidewalk as he navigates from the local espresso bar to his office doesn’t mean eliminating brain mechanisms for programming around damage from head injuries would be any sort of “improvement” for those of us with more active lifestyles.

Consider for a moment how expensive the brain is, in terms of calories and other resources. Having an exceptionally powerful brain that you can’t afford the next time there’s a famine isn’t an advantage, even if it made you much more adaptive when there was plenty of food. So the most fit genotype overall doesn’t involve maximizing brain function because of the need to survive unpredictable dearths of nutrients.
Of course, in the civilized world, we don’t have so many famines. We might benefit from overclocking the ol’ noggin a bit. But if Dr. Cochran ran the world, there’d be no existing variation lurking in the corners of our genomes that might let us take advantage of changed conditions. Fortunately, biology isn’t nearly as clever as he is.

Oddly enough, it looks as if people with high IQs use less energy in brain function, not more. More efficient. It is not as if I need to wear a baseball hat with a solar-powered fan.

There is a huge amount of crap floating around in the average human genome – someone with less crap than average does better. The fact that a tiny fraction of mutations are advantageous and are the raw material for adaptive evolution does not change that. If that recent report on the genetics of IQ is correct, removing the crap would have a big positive effect. A world-changing effect.

Smarter people having more efficient brains isn’t news – the brainwave and PET scans demonstrating that are generations old, now.

You don’t get it – it’s not that some mutations are good and are the fuel for adaptation, it’s that mutations which are immediately bad are the fuel for adaptation.

The efficiency of cellular metabolism, and the regulation of brain architecture and function, require entirely different approaches to understand. Your suggestion to purge low-frequency alleles would likely benefit one, at least in the short term, but be utterly disastrous for the other, and you can’t tell the difference.

Does the model you’re insisting on in any way match what we know about IQ differences between different ethnic groups? Is there even the slightest sign that groups with lower average IQ have higher numbers of deleterious alleles?

If you’re concerned with “world-changing” implications for IQ, why do you focus on a proposed explanation that, even if true, offers no opportunities for us to exploit? We can’t splice genes in and out of a genome with any precision, and genetic engineering techniques for doing so might not appear for generations, if ever.

There are plenty of potentially exploitable benefits from studying the actual differences in neural architecture that must give rise to between-group differences. And standard breeding would be sufficient to start making use of them. Why ignore that?

‘There is a range of psyches that has been favored by natural selection’

Nature is what we were placed here to rise above. Genteel nerves are classy because they show we are so loyal to society and have so much social status to display that we’ve driven ourselves nuts. Mere neurotypicals can’t understand.

@Melendwyr
“What you seem to not be grasping is that there’s no such thing as an inherently ‘bad’ variation, and selection clearly HAS favored systems that retain them for “a rainy day”, as you put it.”

So taking your point as I understand it – say in the past there was a mutation that increased IQ at the cost of myopia, which made it net “bad” in the hunter-gatherer environment and therefore selected against, when it might be net “good” in a future environment – then sure, there will probably be some genes thought of as genetic load now which are like that.

But by definition they are bad now; if the mutation is random, it will probably come back, and if you’re at the point where you’re editing this stuff anyway, then you can recreate it artificially.

No, we can’t recreate it artificially. Even if we had the technical ability to insert whatever we liked into any genome – which we do not – we don’t have the level of understanding required to predict the interactions of myriad genes.

We’re not talking about many independent genes, each of which has a discrete effect, which add together. The effects of genes aren’t independent. That’s obvious in a trivial sense, but it’s also true in a much deeper sense. When the effect of any part of a system depends on interactions with many other parts, simple analysis breaks down. Situations like, say, sickle cell anemia – where a single change leads to many obvious consequences which arise only as a result of that change – are special cases.

With our very limited ability to analyze, we make systems where there are few interactions and simple consequences. Biology is mindless, and has no need to simplify designs to the point that they can be understood.

If you have a stretch of DNA that codes for a particular protein, and you change the DNA, you probably change the nature of the protein – and likely lose whatever function it had, while probably not stumbling across any novel function. That’s a special case, and it’s not how regulatory genes work.

Yet oddly enough, biology generally doesn’t take advantage of simple, available techniques for eliminating variation with suboptimal consequences.
Why do you think that is?
Do you think that constitutes a challenge to your model of how mutations affect fitness?

It absolutely does. But it is a stochastic process, it takes multiple (often many) generations in a (roughly) stable environment, and suboptimal forms have a small statistical chance of surviving (the worse they are for fitness the smaller the chance).

You seem to view evolution as some sort of intelligent, thoughtful, process. (E.g., “I’ll keep this gene around because it might be useful in a future drought.”) It is not. It is a random process. As such, trends can be predicted, but not specific outcomes.

No, ursiform, it doesn’t. I spent several comment posts pointing out ways in which it doesn’t.

Self-crossing is a very, very effective way to purge a genome of suboptimal alleles. But the overwhelming majority of complex organisms practice reproductive strategies that render it difficult or impossible. Either it has disadvantages which are greater than the advantages of mutational purging some of you keep insisting upon, or such a purge isn’t as helpful as you think.

Evolution is neither intelligent nor intentional, but we still speak of design. Biological systems DO keep alleles around in case they’re useful later – and the fact that this is the consequence of eons of random change and selection, rather than intent, is utterly irrelevant.

Sorry, but biology is causal. It has no idea which alleles will be useful in the future. Sometimes a species gets lucky and has some alleles that haven’t (yet, perhaps) been eliminated by adaptation and which happen to be beneficial in a new environment. Many species go extinct because they have adapted very well, and can’t handle an environmental change. You cherry-pick random examples of good luck and back-argue them into a biological process that doesn’t exist.

ursiform says:
“Sorry, but biology is causal. It has no idea which alleles will be useful in the future. Sometimes a species gets lucky and has some alleles that haven’t (yet, perhaps) been eliminated by adaptation and which happen to be beneficial in a new environment.”

Luck favors the prepared mind. As well as the genome that keeps samples of everything on the off-chance it will prove useful later.

It’s a quite similar principle to having many genes with small effects, some promoting and some hindering the net result, determine phenotypes. Sure, it guarantees that the trait will occur in a normal distribution, so a portion of each generation will be grossly maladaptive and eliminated, but it permits resilience in the face of short-term selection and adaptability in the face of long-term change.

What is m trying to claim? It’s not hard to explain why selfing has a hard time invading many sexual populations: sexual populations accumulate hidden deleterious recessives that come out in a big way when individuals inbreed (and selfing is the most extreme inbreeding). Sure, if everyone decided to self for many generations, such alleles would be purged and the problem would be solved, but only melendwyr thinks selection works like that… And good theory exists to explain why selfing does exist in some plant populations, e.g. disproportionately in annuals. So what is being argued?

Except that humans easily – even inadvertently – took plants that were originally outbreeders and made them obligate inbreeders. It’s not all that hard, often requiring alteration of a few simple traits.

Yet most plants prefer outbreeding, and quite a few mandate it by putting biochemical barriers between their own pollen and their own ova. Animals developed reproductive systems in which selfing is impossible and developed instincts against closely related individuals breeding.

If accumulated mutations rendered self-crossing lethal, or so maladaptive as to be effectively so, we wouldn’t have tomatoes, beans, peas, any kind of squash…

Good thing I didn’t say self-crossing was lethal. In fact I said it was common in many wild taxa. See, for example, page 1275 in http://labs.eeb.utoronto.ca/barrett/pdf/schb_136.pdf, a figure showing that selfing is very common (i.e. the majority) in annuals and relatively rare in perennials. This is thought to be consistent with the reproductive assurance idea I mentioned before: short-lived, highly dispersed plants may encounter fewer pollinators, so self pollination can be favored. In highly outbred diploid populations, selfing is hard to evolve anew, since such populations tend to hide a lot of deleterious recessives. This doesn’t mean it can’t invade – see the work of B. and D. Charlesworth for some good simple models. Once it becomes common, deleterious recessive alleles are purged, so it’s easy to keep around. So it’s not surprising that both systems exist in nature – they can both be stable.

All of this is a lot more helpful than just mentioning a few domesticated crops. Artificial selection can be forward thinking and anticipatory; natural selection can’t. I can select for selfing in plants, and if I select hard enough, I’ll overcome any upfront inbreeding depression and eventually purge rare deleterious recessives. But natural selection can’t wait around for that – if it doesn’t favor selfing strongly enough to overcome the initial inbreeding depression, it won’t invade. Evidently, sometimes this happens in nature, and sometimes it doesn’t.

So, again, it’s not clear what you’re after. There’s a lot of theory on the evolutionary dynamics of selfing vs. outbreeding in plants. None of it requires the theory that mating systems evolve specifically to keep bad alleles around in the off chance that they’ll one day be favored.

We’ve induced several perennial species to become obligatory selfers. It’s not hard.

You’re confusing species which permit self-crossing – and which still encourage out-crosses – with obligatory self-crossing. The latter is rare in nature, with only a few examples.

To reduce rare, harmful mutations, you wouldn’t even need that many self-crosses. Just the occasional one. Yet many plants either put up biochemical barriers that prevent self-fertilization, or make it entirely impossible by making entire plants solely male or female. Why would that ever have arisen, if purging rare mutations was so critical?

More to the point, why would any species ever have developed huge numbers of harmful mutations in the first place? They would only arise if no selfing took place for an extended period. Why would that have ever occurred?

“We’ve induced several perennial species to become obligatory selfers. It’s not hard.”
I’ll take your word for it. Like I said – artificial selection.

“You’re confusing species which permit self-crossing – and which still encourage out-crosses – with obligatory self-crossing.”
No I’m not. When did I make any such distinction? I talked about the frequency of selfing in different taxa. Fact: it’s very common in annuals. That’s it.

“More to the point, why would any species ever have developed huge numbers of harmful mutations in the first place? They would only arise if no selfing took place for an extended period. Why would that have ever occurred?”
Are you serious?
Because mutations produce them, and their deleterious effect isn’t strong enough for selection to have gotten rid of them. Do you understand mutation-selection balance? Do you think that selection is favoring their maintenance? You seem to be asking why the selection coefficients against rare deleterious alleles aren’t stronger. They are what they are.
(And by the way, selfing doesn’t magically increase selection against all deleterious alleles. It only increases the efficiency of selection against rare recessives. So your apparent belief that infrequent bouts of selfing would magically remove all deleterious alleles is really stupid.)

Now I’m really curious: how do you think all of this works? You seem to think that selfing is really easy to evolve in any outbred population (despite inbreeding depression), and that it has magical properties that purge the genome of all badness, but it doesn’t evolve because “evolution” wants to keep hidden deleterious recessives around for the anticipated environmental change in the future…

You don’t need to take my word for it. Go find a book on backyard vegetable breeding – it doesn’t even need to be a textbook – and you’ll find lots of examples.

The deleterious effects of the mutations can’t be obscured in selfers, RCB. Are YOU serious? Selfers segregate into true-breeding strains – half the hypothetical eventual descendants will lack the mutation completely, and half will have two copies of it. If there’s a subtle disadvantage to the mutant, the mutant population will slowly lose out to the healthy one. If there’s a serious-to-lethal disadvantage, that mutant population dies and the population of carriers dwindles swiftly.

“The deleterious effects of the mutations can’t be obscured in selfers, RCB. Are YOU serious?”
When did I say this?
I’ve said that populations that have been selfing for a long time will have brought their deleterious recessives to low frequency. i.e. Selfing populations purge deleterious recessives, which causes inbreeding depression to erode and become much smaller than it would be in an outbreeding population. Inbreeding is bad when selfing first arises in an outbreeding population, which is why it often can’t invade. But if it can invade (and we have models for when it can), then over time inbreeding depression goes away.

I would argue you’re modeling the wrong end. Why did the preference for outcrossing arise in the first place?

I live less than a hundred miles from the fields where hybrid vigor was first experimentally demonstrated, so the topic has had my interest for a long time. The idea that it’s due to masking of many slightly-harmful mutations is widely accepted, but it has some curious problems as an explanation.

If the accumulation of rare variants were truly as harmful as it’s assumed, we would expect that frequent self-crossing would be widely favored, with some out-crossing permitted. Instead, we have the exact opposite. Various human-bred crop plants act that way (through natural selection, btw, not artificial – it’s a side effect of taking the plants out of their pollinators’ ecosystems and growing them in tiny populations) and a few wild plants (like common blue violet). Many plants (and insects, and amphibians, and mammals, and birds, and lizards, and fish) have entirely prevented selfing by separating the genders, or having behavioral and/or chemical barriers to close relatives breeding, or some combination. Among plants, many are technically capable of selfing but put up obstacles that favor out-crossing when it’s possible and hinder selfing.

This pattern occurs again and again across countless forms of life, when it’s clear that it would have been easy for them to adopt the alternative – and among plants, it still is easy, but seemingly lethal in anything other than the short term.

I believe this strongly indicates that the “accumulated mutations impair fitness” hypothesis is at best extremely incomplete, and possibly outright wrong. And that’s without even getting into the subtler reasons why you can’t treat a system of tens of thousands of interacting genes the same way you do straightforward biochemical pathways. Most of our genome can’t mutate in any significant way, which is why we share it with so many other kinds of life. If those sections of the genome changed, the cell would die. The genes controlling brain development are totally different, and you’d need an ecological model. Pleiotropy rules (or at least dominates).

“I would argue you’re modeling the wrong end. Why did the preference for outcrossing arise in the first place?”
Both questions are important. Assuming that selfing was once common, how did outbreeding evolve, invade, and become predominant? I don’t know that theory as well, but I’m sure it’s related to the evolution of sex in general. But just as important is to ask why selfing cannot re-invade, or under what conditions it can happen. Answering that is relevant to your question as to why selfing doesn’t just take over.

“If the accumulation of rare variants were truly as harmful as it’s assumed, we would expect that frequent self-crossing would be widely favored, with some out-crossing permitted.”
No, I wouldn’t expect that. I’ll repeat: in a population that has been outbreeding for a very long time (most species), an allele that increases inbreeding (selfing is the most extreme) will cause its bearers to experience inbreeding depression (increased homozygosity), and therefore the allele will have lower fitness and not invade (assuming no other beneficial effects, which is of course possible). It doesn’t matter if such a population would eventually achieve higher mean fitness 50 generations from now. If the trait can’t get in the door, it won’t evolve.
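The invasion logic here can be made concrete with Fisher’s classic “automatic advantage” bookkeeping for a rare complete selfer in an outbreeding population. This is a toy calculation of my own, not anything stated in the thread; the function name and parameters are assumptions for illustration:

```python
def selfing_gene_advantage(delta, pollen_discount=0.0):
    """Gene copies a rare complete selfer transmits, minus an outcrosser's.

    delta: inbreeding depression suffered by selfed offspring (0..1)
    pollen_discount: fraction of exported pollen lost by selfing (0..1)
    """
    # an outcrosser passes one copy via its ovules and one via its pollen
    outcrosser = 1.0 + 1.0
    # a complete selfer passes two copies per selfed seed (discounted by
    # inbreeding depression), plus whatever pollen it still exports
    selfer = 2.0 * (1.0 - delta) + 1.0 * (1.0 - pollen_discount)
    return selfer - outcrosser

# With no pollen discounting, selfing invades iff delta < 1/2 (Fisher, 1941)
print(selfing_gene_advantage(0.3) > 0)   # True: mild depression, invades
print(selfing_gene_advantage(0.7) > 0)   # False: strong depression, blocked
```

The point being argued falls straight out of the arithmetic: the eventual purged-load fitness of a selfing population never enters the calculation; only the fitness of the selfing allele at the moment of invasion does.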

“I believe this strongly indicates that the “accumulated mutations impair fitness” hypothesis is at best extremely incomplete, and possibly outright wrong.”
I mean, we know that most mutations are not beneficial, and we know that selection can’t magically get rid of them right away. It follows that there are many deleterious alleles segregating in the population at low frequencies right now. I’m not trying to say anything grander than that.

Creating selfing is trivially easy past certain types of barriers, harder past others. With biochemical barriers, you just need plants that are defective in one or more of the stages necessary for the barrier. With physical barriers, slight changes in physiology are all that’s required. Some tomato wild relatives are biochemically obligatorily outbreeding, and it’s not clear if the tomato originally was. But the (rare) crosses between them aren’t. Enclosing the stigma within a cone of stamens, and having stigma activation and pollen release occurring before the flower opens, are enough to render the plant almost totally selfing – insect crosses are rare.

I will note that tomatoes are one of the crops that exhibit little to no hybrid vigor. And the people who domesticated it had no formal knowledge of biology as we’d consider it and weren’t artificially selecting it for breeding habits. They just slowly moved it out of its original habitat, away from its pollinators, and grew it in small patches. That was enough – plants which selfed themselves could reproduce more reliably, and incidentally would be easier to select for true-breeding desirable traits. So they changed the nature of the plant’s reproductive strategy. Same with peas, which are native to a relatively small region of Afghanistan. They’ve been spread across the world, and they’re virtually never successfully pollinated by insects.

“The fact that this pattern occurs again and again across countless forms of life, when it’s clear that it would have been easy for them to adopt the alternative – and among plants, it still is easy. But seemingly lethal in anything other than the short term.”
Just to reiterate: you say it’s easy, but I say it’s not, for wild outbreeding populations. Again, if a novel selfing allele causes even mild inbreeding depression, it will be very hard for it to invade in wild populations. Not easy.

You’ve ignored the point again. Why do the mandatorily-outbreeding populations exist in the first place?

In another response, you evaluated the total cost of the homozygous expressions. But we’re talking about the cumulative effects of many heterozygous pairings. You conveniently left that out of your analysis.

Also, we’re talking as if a selfing population would achieve less load (in a fixed environment) than an outbreeding one. This seems to be why you think that selfing should easily evolve in the short term. My argument so far is that the eventual fitness of a selfing population is mostly irrelevant: it’s the selfing allele‘s fitness upon invasion that matters, and that is often expected to be reduced.

But even the assumption that a selfing population maintains less load is generally false. The load is approximately the same – at least, the load caused by deleterious recessives. Recall that in an outbreeding population, the equilibrium deleterious allele frequency is sqrt(m/s), where m is the per-locus deleterious mutation rate and s the fitness cost of the homozygote. The frequency of genotypes that actually show this bad phenotype is m/s. The load is the frequency of that type times the fitness cost, which is m/s*s = m. Sum that up across all loci, and you get the usual calculation for total mutational load: it’s the total deleterious mutation rate.

Now in an inbreeding population, the equilibrium deleterious allele frequency is a substantially lower m/s. Virtually all of these alleles are present in homozygous recessives (unless the allele is really deleterious – but these formulas are all weak selection approximations anyway), so the frequency of the deleterious genotype is the same: m/s. Again, multiply by s, and the load is m. Same answer, as far as the approximations go. As I said elsewhere: in inbreeding populations, deleterious recessive alleles are less common, but they are expressed just as often as in an outbreeding population because of high homozygosity.

So that’s the contribution of deleterious recessives to load. What about heterozygote superiority? Consider the case where the heterozygote is superior, and both homozygotes suffer the same fitness penalty s. Then the equilibrium allele frequency in an outbreeding population is 1/2. Half of the population will be homozygous, so the load is s/2. In an inbreeding population, I don’t think there is a stable equilibrium, because virtually everyone will be homozygote, so there is no variation in fitness, therefore no selection. But since everyone is a homozygote, everyone gets the fitness cost, so the load is s. So here we have a case where the selfing population has higher load even in a fixed environment – twice as high, actually. (I seem to recall that most folks think that deleterious recessives contribute more to load and inbreeding depression than heterozygote superiority, though.)

So, the premise that a selfing population will maintain less load even in a fixed environment is flawed.
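The load bookkeeping above can be checked numerically. This is a direct transcription of the weak-selection approximations in the preceding paragraphs; the particular values of m and s are arbitrary choices for illustration:

```python
import math

m, s = 1e-6, 0.01   # per-locus deleterious mutation rate, homozygote cost

# Deleterious recessive, outbreeding: q* = sqrt(m/s);
# affected homozygotes occur at frequency q*^2 = m/s, each paying cost s
q_out = math.sqrt(m / s)
load_out = q_out**2 * s          # = m

# Deleterious recessive, full selfing: q* = m/s, and essentially every
# remaining copy sits in a homozygote
q_self = m / s
load_self = q_self * s           # = m again

# Heterozygote superiority, both homozygotes paying s:
# outbreeding equilibrium allele frequency 1/2, half the population homozygous
load_het_out = 0.5 * s
# under full selfing virtually everyone ends up homozygous
load_het_self = 1.0 * s

print(load_out, load_self)           # both ≈ m, up to float rounding
print(load_het_self / load_het_out)  # 2.0: selfers carry twice this load
```

So, as far as these approximations go, selfing buys nothing on the recessive load and loses on the overdominant load, which is the conclusion drawn below.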

Much of our argument is about the cumulative effects of recessives in a heterozygous state. Selfers don’t carry recessives in a heterozygous state, at least not for very long. If an embryo aborts because it’s carrying two copies of a lethal recessive, sure, that needs to be considered in the evaluation of the total cost to the species. But what is the cost to the surviving individuals in the two groups, considered over many genes? That aborted embryo loses 100% of its IQ, but we’re not talking about that, we’re talking about effects on IQ among survivors.

“In another response, you evaluated the total cost of the homozygous expressions. But we’re talking about the cumulative effects of many heterozygous pairings. You conveniently left that out of your analysis.”
What the fuck are you talking about? I was modeling load under deleterious recessives, meaning that the only cost occurs under homozygosity. So, no, we’re not talking about the “cumulative effects of many heterozygous pairings.”

“Why do the mandatorily-outbreeding populations exist in the first place?”
Yes, that’s an interesting and important question. But if you want to know why selfing doesn’t currently invade and replace all current strategies (which you claim is easy), I’ve addressed that multiple times. If you don’t get it by now, I don’t know what else to say. That is, if you think selfing easily invades in populations with inbreeding depression (i.e. most wild populations), you don’t know shit.

“It’s trivially easy to create selfing for certain types of barriers”
Listen: I know that it’s physiologically easy to create mechanisms that increase selfing rates. When I say it’s “hard to evolve”, I don’t mean that it’s hard to produce mechanisms that do it. I mean that most of the time selection will disfavor any mutation that causes it, so those that do evolve quickly die off. The fact that you are confused about this suggests that you don’t actually understand the importance of selection to the invasion process, which means talking to you is a waste of time.

None of this is really important to the main point: that most of the rare alleles segregating in human populations today are neutral or bad for biological processes; that most people have a lot of these; and therefore a person with no rare alleles would be a highly functional organism. If you want to talk more about the evolutionary dynamics of selfing, ask Greg for my email address.

“acting as though there were some essential standard, some divine originating baseline, that mutation inevitably falls away from. There isn’t.”
He’s not saying that. He’s saying if you created a person that only had the common allele at every locus, such a person would be substantially more fit than average right now – given current environment and biology. That’s it.

Fit for what?
If you eliminated all new mutations, the result would be the perfect man, perfectly adapted to primitive Stone Age conditions – let’s call him “the Stoneman”.
The Stoneman would be the world’s best runner and jumper, a tracker and hunter of supernatural power. Unfortunately he wouldn’t be able to put his skills into practice. He would have to spend his life in a pressurized bubble, because he would drop dead the moment any modern human sneezed at him.
There’s a new idea for a superhero story…

there’s no presupposition – if violence is useful in one environment (e.g. hunter gathering) then it would be selected for and if less useful in another environment (e.g. farming) it would be selected against

You’ve missed the point. Reducing a behavioral tendency IS a loss of function – in one kind of context. In other context, it’s a gain. The point is that the fitness is contextual, and contexts change.
Which is an important part of why “mutations make you inefficient and dumb” is an incorrect argument.

You’ve missed the point, which is that fitness is contextual and contexts change. Whether it’s a ‘loss of function’ is equally contextual.

Lots of traits exist as distributions, with many genes contributing. The extremes of the distributions tend to be less adaptive than the centers. But even if you eliminate the extremes, combinations that create them will continue to arise – which is pretty useful, if circumstances change and the fitness maximum no longer matches that high-probability center.
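That regeneration of the tails is easy to see in a toy additive model: cull both extremes of one generation, let the survivors mate at random, and segregation immediately rebuilds the extremes. This is a sketch of my own; the locus count, population size, and cull threshold are arbitrary assumptions:

```python
import random

random.seed(1)
N_LOCI = 60          # many loci of small, equal, additive effect

def make_individual():
    # diploid genotype: two 0/1 alleles at each locus
    return [(random.randint(0, 1), random.randint(0, 1)) for _ in range(N_LOCI)]

def trait(ind):
    # sum of many small additive effects -> approximately normal (CLT)
    return sum(a + b for a, b in ind)

def mate(mom, dad):
    # each parent passes one randomly chosen allele per locus
    return [(random.choice(m), random.choice(d)) for m, d in zip(mom, dad)]

pop = [make_individual() for _ in range(2000)]
mean = sum(trait(i) for i in pop) / len(pop)

# cull both tails: keep only individuals near the population mean
survivors = [i for i in pop if abs(trait(i) - mean) <= 5]

# one round of random mating among the culled survivors
offspring = [mate(random.choice(survivors), random.choice(survivors))
             for _ in range(2000)]

tails = sum(abs(trait(i) - mean) > 5 for i in offspring)
print(tails > 0)   # True: segregation rebuilds the extremes in one generation
```

Truncating the phenotypes barely moves the underlying allele frequencies, so recombination keeps reassembling the extreme combinations, which is exactly the claim above.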

1) It’s not so useful to think about individual genes. Lots of our biochemistry involves interactions between multiple genes. What if variant A is harmful by itself, and variant B harmful by itself, but having A and B works really, really well? What if A and B work really well in the context of a new environment in which the default doesn’t work all that well?
If only we had some way of keeping variation around, so it could try out combinations without having to wait for multiple improbable mutations at once… /irony

2) What do you mean, ‘added lots of bad ones over the last 10,000 years’? Do you imagine humanity before that time was somehow genetically pure and untainted? Creationists talk that way, not scientists.

What in the world makes you think that the probability of further improvement necessarily decreases the more improvements that are made? That would make some sense – IF the criteria for evaluating function weren’t constantly changing.

We’re dealing with moving targets, GW. We’re not slowly approaching a static perfection, where any further change is necessarily degradation.

“What in the world makes you think that the probability of further improvement necessarily decreases the more improvements that are made?”

probability

#

“That would make some sense – IF the criteria for evaluating function weren’t constantly changing.”

yes – it makes perfect sense at any specific point in time – which is always

#

“We’re dealing with moving targets”

right, and if the target moves then it still makes sense to shoot at where the target is at that time

#

what you’re saying is we shouldn’t make changes because we can’t know the future (which i get and may be relevant to human evolution in the past) but we do partly know the future – we know we’ll keep adding more and more mutations that are bad for now – cos probability and population size

“yes – it makes perfect sense at any specific point in time – which is always” But not ACROSS time, which is the point! If you eliminate everything that isn’t maximally adaptive at ONE point, you’re screwed when conditions change. Which is why biology uses strategies that prevent variation from being lost even though it reduces current fitness.

“if the target moves then it still makes sense to shoot at where the target is at that time”

Biology doesn’t plan, or intend. It doesn’t have goals, and it’s not shooting at a target. It’s not design.

No, what I’m saying is that taking a sufficiently narrow view can make a strategy look good while a broader view can show it to be bad or even fatal. You’re taking a very narrow view.

No, no, no. Whether it’s a loss of function, or a gain of function, depends entirely on the context of judgment. So is ‘usefulness’. Evolution doesn’t make that kind of judgment at all.

If it were actually evolutionarily beneficial to rid genomes of low-frequency alleles, there are effective ways available to do so. But life doesn’t make use of them, biological designs minimize the chance they’ll take place, and the few examples in the wild are thought to be in the process of going extinct. So it probably isn’t beneficial, or there are even greater detriments you haven’t accounted for. And if you don’t understand why, there are probably some important limits to your understanding.

“If it were actually evolutionarily beneficial to rid genomes of low-frequency alleles, there are effective ways available to do so. But life doesn’t make use of them, biological designs minimize the chance they’ll take place, and the few examples in the wild are thought to be in the process of going extinct.”
Selection, of course, is always working to rid us of deleterious alleles, but it may not be able to work fast enough. There’s nothing inherently bad about rare alleles per se – but most new mutations will not be beneficial, of course.
It’s still not clear to me what mechanisms you have in mind. You’ve mentioned selfing a few times, but of course selfing alone doesn’t change gene frequencies, and we already have good explanations for why it might fail to evolve in many populations. Doesn’t require any theory about maintaining deleterious alleles for a rainy day, because that’s bunk.
On the other hand, we know life has evolved very complex mechanisms to get mutation rate very low. This certainly has the effect of reducing genetic variation. How shortsighted of us!

A novel mutation in a corn plant may not encounter another version of itself for multiple generations and will appear in one-half of its offspring. A novel mutation in a pea plant will be present in three-fourths of its offspring, and homozygous in one-fourth of them. And that’s just in the first generation. After multiple generations, assuming it has no immediate effects on viability, the descendants will form two homozygous groups, one purebreeding for the original allele and the other purebreeding for the mutant.

Let’s go back to Genetics 1. Imagine a diploid organism. Imagine a locus with allele A at frequency p, and B at 1-p. Suppose the genotype frequencies of AA, AB, and BB are x, y, and z, respectively. Clearly p = x + 1/2y. Now imagine that suddenly everyone decides to self, and does this forever. Imagine no selection. What is the frequency of the allele p in the next generation?
AA individuals will produce all AA progeny: x(1) = x
BB individuals will produce all AA progeny: z(0) = 0
AB individuals will produce 1/4 AA, 1/2 AB, 1/4 BB (as you mentioned above), so the gene frequency among their progeny will be 1/2 A, 1/2 B, so = 1/2y

Add them up: the frequency in the next generation is x + 1/2*y; gene frequencies unchanged.
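The bookkeeping above can be checked numerically; a minimal sketch with arbitrary starting genotype frequencies and no selection (values are illustrative, not from the thread):

```python
# One generation of universal selfing: homozygotes breed true,
# heterozygotes split 1/4 AA : 1/2 AB : 1/4 BB.
def self_once(x, y, z):
    """(AA, AB, BB) frequencies after every individual selfs once."""
    return x + y / 4, y / 2, z + y / 4

x, y, z = 0.2, 0.5, 0.3           # arbitrary genotype frequencies
p0 = x + y / 2                    # frequency of A = 0.45
for _ in range(10):
    x, y, z = self_once(x, y, z)
print(x + y / 2)                  # still 0.45: p unchanged
print(y)                          # ~0.0005: heterozygotes nearly gone
```

Selfing piles the alleles into homozygotes while leaving p = x + (1/2)y untouched, exactly as the hand calculation shows.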

Selfing is a form of inbreeding, and inbreeding does not change allele frequencies – only genotype frequencies. So of course selfing does not inherently cause rare alleles to decrease – it only assorts them into homozygotes. What it can do is expose rare deleterious recessives, and selection will cause these to decrease. Of course, exposing your deleterious recessives to selection is a bad thing, which is why often selfing (inbreeding) is selected against – but not always.

Yes, I’m assuming no other effect on fitness, because I wanted to investigate the effect of selfing by itself. Hence my original claim “selfing alone doesn’t change gene frequencies” that got us onto this discussion. Of course if you assume that selfing has other positive side effects, guess what – it will be selected for. Brilliant observation, melendwyr. Have you considered publishing?

(In outbred populations, selfers will generally suffer lower fitness because of inbreeding depression. Of course that’s not always the case – maybe by not waiting around for a mate, they’ll produce more progeny: reproductive assurance.)

Except there are quite a few species like squash, which show no notable depression after being selfed for dozens of generations, and no notable hybrid vigor, either. (Some people report weak versions of both conditions in specific strains of squash.) It seems to be as fit as it can be. Its physiology permits selfing, but encourages outcrossing if that’s possible.

It would seem like an ideal state arising from an ideal strategy. So why would some other species have separate males and females? Or make barriers preventing self-pollination? That seems less than ideal… unless our understanding of ideal is incorrect.

“no notable depression after being selfed for dozens of generations”
That is exactly when we would expect no notable depression: after a long period of selfing. More selfing -> less inbreeding depression. Do you not get that?

Let me be clearer: they show no immediate inbreeding depression, and no inbreeding depression after at least a dozen generations of selfing. The only reason I do not say that they show no inbreeding depression, period, is that some people claim very old varieties that haven’t been selected well begin to manifest decline that might be effects of inbreeding. But we’re talking scores of generations, in lines inbred for a hundred years or more.

I get it. I’m no squash expert, but if squash shows no inbreeding depression, that’s probably because it doesn’t have many hidden recessives left. My guess would be that this is the result of a long period of strong selection and inbreeding by plant breeders, which would produce exactly that outcome – eventually, anyway. So what? It’s still true that selfing would very likely have a very hard time invading a population that has been outcrossing for thousands of generations, and therefore has had time to accumulate lots of hidden recessives. This is the case with many wild plants, but not all.

Let’s address the case most like the mutations which supposedly depress intelligence. Let’s say we have an organism with alleles Aa, with the heterozygous state having only minimal and subtle effects on fitness, and homozygous recessive having massively negative effects, even lethality. Let’s do a repeated self-crossing.

First generation: 1/4 AA, 1/2 Aa, 1/4 aa – the latter of which dies or fails to reproduce. Of the survivors, it’s 1/3 AA and 2/3 Aa, and there’s a loss of 25% of offspring. Next generation: the AAs are truebreeding, while Aa produces 1/3 AA and 2/3 Aa, so it’s 5/9 AA + 4/9 Aa, loss of ~17% of offspring. Third generation: 19/27 AA + 8/27 Aa. Do I need to continue this, or is the trend obvious?
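The recursion above can be iterated exactly. This sketch follows the same accounting as the series in the comment (surviving families weighted equally, aa lethal):

```python
# Repeated selfing with a lethal recessive: Aa parents leave surviving
# offspring in ratio 1/3 AA : 2/3 Aa; AA parents breed true.
from fractions import Fraction

AA, Aa = Fraction(0), Fraction(1)          # founder is a single Aa selfer
for gen in range(1, 4):
    lost = Aa / 4                          # dead aa as a share of all offspring
    AA, Aa = AA + Aa / 3, Aa * Fraction(2, 3)
    print(gen, AA, Aa, float(lost))
# gen 1: 1/3 AA, 2/3 Aa, 25% lost
# gen 2: 5/9 AA, 4/9 Aa, ~17% lost
# gen 3: 19/27 AA, 8/27 Aa, ~11% lost
```

The heterozygote fraction shrinks by a factor of 2/3 each generation, and the mortality cost shrinks with it.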

Compare this to obligatory out-crossing. The math takes up a lot more space, but half of the mutant organism’s next-generation offspring will (on average) carry the gene. There’s no counter-selection until a carrier breeds with another carrier, and 25% of their potential offspring will die/fail and ~66% of their surviving offspring will carry the trait. The trait will slowly spread until the rate of removal from homozygous pairings matches the rate of transmission, at which point a stable equilibrium is reached and the trait sticks around at low levels, probably. So there’s a constant low-level cost to the population in terms of fitness, because some poor schmucks are occasionally going to lose the genetic lottery and get aa.

If the environment ever changes – either externally, or internally – so that aa or Aa are useful, the allele a is present in the population and the species can benefit. IF they’re strongly outbreeding. If they’re strongly inbreeding? Allele a was eliminated in favor of A a very long time ago.

Cool, melendwyr: you’ve assumed selection for A, and then you’ve done some math to show that A has increased in frequency. Am I supposed to be impressed by this? What do you want me to learn, here?

As I’ve now said many times: you and I both recognize (I think!) that selfing increases homozygosity and therefore exposes rare deleterious recessives to selection. So selection will reduce the allele more quickly. Is this what you are trying to show me?

“If the environment ever changes – either externally, or internally – so that aa or Aa are useful, the allele a is present in the population and the species can benefit. IF they’re strongly outbreeding. If they’re strongly inbreeding? Allele a was eliminated in favor of A a very long time ago.”
Cool story. So what are you arguing? That selfing is disfavored for the effects it will have many generations from now?

No. I’m arguing that, over the evolutionary history of our world, natural selection has tended to eliminate species that settled on selfing-heavy strategies. I am noting a probable reason why that elimination has taken place; additionally, I note that the idea that purging rare recessives leads to fitness increases in anything other than a trivially-short timeframe is not compatible with that probable explanation.

“The trait will slowly spread until the rate of removal from homozygous pairings matches the rate of transmission, at which point a stable equilibrium is reached and the trait sticks around at low levels, probably.”
Yes, I’ve done these models, as has anyone who has studied basic population genetics. Selection always acts to reduce the rare recessive, but very weakly – so it’s not correct to say that “The trait will slowly spread” after introduction. Of course mutation very weakly increases the frequency in the population (by reintroducing the allele anew). So an equilibrium occurs, at p = sqrt(m/s), where m is mutation rate and s is selection coefficient against recessives, if I remember correctly. Of course the trait will only be expressed at frequency m/s. This is called mutation-selection balance. I understand it better than you do.

Okay, now explain why the strategy that purges mutations quite rapidly, leading to a state that Cochran and you suggest is obviously superior, is rare to nonexistent, while the strategy that leads to an accumulation of harmful mutations and a purported global erosion of fitness is ubiquitous.
It should be easy for you, since you understand so much better than I.

“Okay, now explain why the strategy that purges mutations quite rapidly, leading to a state that Cochran and you suggest is obviously superior, is rare to nonexistent”
If you’re referring to selfing, I never claimed that as a long-term strategy, selfing would be obviously superior. I (and Cochran) have only said that an individual today with only the common allele at each locus would probably be much more “functional” (say, fit, smart, whatever), than the average person, in today’s environment. This is not inconsistent with the obviously true observation that a population with less genetic diversity is less able to evolve to new environments.

As to why selfing is generally rare in the world (though common in some wild taxa): you’re asking about the evolutionary dynamics of selfing. You’re asking “why don’t novel alleles that cause selfing invade and replace alternative alleles in outbreeding populations?” Again, I’m no expert, but a strong reason is inbreeding depression, which is the most damaging when the allele first arises. If the upfront selection coefficient of selfing is negative, then it will have a very hard time invading. There are mathematical models of this process. Look them up. None of them require assumptions about keeping bad alleles around for a rainy day.

“No. I’m arguing that, over the evolutionary history of our world, natural selection has tended to eliminate species that settled on selfing-heavy strategies. I am noting a probable reason why that elimination has taken place; additionally, I note that the idea that purging rare recessives leads to fitness increases in anything other than a trivially-short timeframe is not compatible with that probable explanation.”

Again I’ll note that apparently most annuals are mostly selfers, and this probably isn’t some transient thing – selection favors it, and there are good ideas as to why. But, sure, it’s rare in the grand scheme of things. I’m perfectly happy to entertain long-term lineage selection effects, as I did at the beginning of our conversation. E.g. that a sexually reproducing population will generally be able to evolve and adapt faster than an asexually reproducing one (due to recombination), and therefore we might expect them to replace the others in the long term. Is this what you’ve been after all along?

By the way, we’ve been talking this whole time as if selfing reduces variation. But as I’ve explained, it doesn’t reduce allelic variation – it just puts alleles into homozygote genotypes, without changing allele frequency. As a consequence, yes, it’s better at removing rare deleterious recessives. But it’s also better at allowing rare beneficial recessives to evolve. In fact, in a perfectly outbreeding large population, rare beneficial recessives can’t invade, because at the beginning they exist only in heterozygote form. They have to get lucky and drift up to the point where they start meeting each other – inbreeding helps that process a lot. So this is a case where selfing actually allows the population to evolve a beneficial allele faster.
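The point about beneficial recessives can be illustrated with a deterministic recursion in which genotype frequencies follow Wright's inbreeding coefficient F (F = 0 is random mating; higher F stands in for more selfing). This is a sketch with arbitrary parameters, not a full model of a selfing life cycle:

```python
# Spread of a beneficial recessive: aa has fitness 1 + s; genotype
# frequencies under inbreeding are p^2 + Fpq, 2pq(1-F), q^2 + Fpq.
def generations_to_half(s=0.1, F=0.0, q=0.01, cap=1_000_000):
    """Generations for the beneficial recessive to climb from q to 0.5."""
    gens = 0
    while q < 0.5 and gens < cap:
        p = 1 - q
        f_aa = q * q + F * p * q       # inbreeding inflates homozygotes
        f_Aa = 2 * p * q * (1 - F)
        w_bar = 1 + s * f_aa           # mean fitness
        q = (f_aa * (1 + s) + f_Aa / 2) / w_bar
        gens += 1
    return gens

print(generations_to_half(F=0.0))      # outbred: slow, allele mostly hidden
print(generations_to_half(F=0.5))      # inbred: exposed to selection, much faster
```

With inbreeding the rare allele meets itself far sooner, so selection sees it from the start.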

Now consider the scenario you’ve been talking about, where an allele starts disfavored and then becomes favored. Consider the first stage. In an outbreeding population, the equilibrium frequency is sqrt(m/s), but the alleles are only “exposed” as homozygotes, which are of frequency m/s. In a selfing population, it turns out that the equilibrium is also m/s. So the alleles are visible to the same degree with regard to selection. Now when the environment changes and the rare allele is favored, selection will operate on both populations just as effectively.

So, sure, the outbreeding population is “better” at keeping bad recessives around – but that’s only because it “hides” the vast majority of them. It’s not actually any better at responding to selection if the environment does change. So… I don’t think your idea is really well conceived in the first place, with regard to selfing.

“If you eliminate everything that isn’t maximally adaptive at ONE point, you’re screwed when conditions change.”

I get the point, and I’m probably in the middle in terms of what the scale of editing ought to be. However, it seems to me that once population size reaches a certain level, you need some degree of artificial selection.

This conversation makes me wonder how much higher in IQ an identical twin would be if one was left normal and the other cleaned of all rare variants via CRISPR. It sounds like a science fiction story for now, but it could actually happen in the not-too-distant future in a country like China.

Talk is cheap; I want to know if this works and how well it works. If it leads to pitched battles trying to keep the “cleaned Chinese” out of our elite colleges, I for one would be amused. I might just dig into my pocketbook so that my grandkids or great-grandkids get cleaned as well.

The first generation gets cleaned of obvious genetic load. The second generation gets cleaned plus selection of preferred human-wide alleles. The third generation has novel alleles that make us even smarter. The fourth generation leaves.

I am convinced that Cochran is right, that genetic load is the primary cause of the large variation in human intelligence. I am further convinced that cleaning rare alleles out of a genome would result in a person out on the right-hand tail of the bell-shaped curve of human intelligence. I doubt that the result would be a super genius, just a very sharp human being. In any case there will be pervasive irrational resistance to any attempt to push scientific research in this direction in the west.

Wouldn’t it be nice if a few billionaires decided their kids were rich enough and pushed their philanthropy towards solving the problem of human misery caused by human stupidity, rather than pushing the problem onto the next generation by making sure we have more dumbshits reproducing.

Of course this is a mean thing to say. Bill Gates, if you feed the desperately poor third world, if you keep them from dropping dead from horrible diseases then what are you really accomplishing? Aren’t you just helping to create a bigger shit hole with more people stuck in worse human misery for the next generation?

You can’t out-talk the melendwyrs of the world; don’t waste your time trying. They will always be there. If this works, it works; if it doesn’t, it doesn’t. As I mentioned earlier, you clean one identical twin of genetic load via CRISPR and leave the other one alone. See what happens.

You don’t have to worry. For California I have compared the performance of the children to that of their parents, both with respect to their relevant peers. The top few cities where the children substantially outperform their parents are mostly white, e.g. Chenango Forks (NY), Ross, Atherton, Westlake Village, etc., compared to the mostly East Asian American town of Arcadia. I don’t know which town South Asian Americans are dominant in. Cupertino is about 60% East and South Asian American, and they do not do that well – they perform below their demographic percentages. On average, while the South Asian immigrants are mostly elites, those from East Asia are not really the best. More analysis later.

I think it far more likely that actual genetic engineering will first be implemented in China rather than in the west. But if it works to increase intelligence, then no amount of laws or shrieks of abomination will stop it in the west for long.

Neurodiversity is prominent because the force of regression to the mean is very, very weak. Like the appealing concepts of thermodynamic equilibrium and efficient-market theory, it is seldom observed in practice; for example, South Asian Americans are not going to regress back to the level of their home countries any time soon.

Although the IQ data for children and parents are not available, the effect should be observable from the comparative performance of children and parents with respect to their peers, i.e. WobegonNdx = NmsNdx/EduNdx, where NmsNdx (for the children) is the percentage of National Merit Scholarship semifinalists (education level at the top 1%) relative to the state average, and EduNdx (for the parents) is the ratio of the city population with degrees to the state average.

The data below show that for California, at the high cognitive-ability and socio-economic levels where the force of regression to the mean should be strongest, all of the cities at that level were surging forward rather than regressing back to the overall population mean. In the also-ran fraction, instead of regressing upward towards the population mean, most of them were dropping like lead balloons: on average the children perform worse than their parents.

The transition region is very tight. When NmsNdx is greater than 1.10, in all the associated cities the children on average were performing better than their parents (WobegonNdx > 1). For example, in the city of Atherton the socio-economic percentile is 99.99%, the median income is $250K, and EduNdx is 2.44 times the state average; it would seem very hard for the children to perform much better than that, but still the children’s NmsNdx is 49.08, giving WobegonNdx = NmsNdx/EduNdx = 20.11. For Cupertino, home base of Apple Inc. and many other IT companies in Silicon Valley, SESpctl = 98.72%, IncK = $154.13K, and WobegonNdx = 16.82/2.22 = 7.59; on average the children there are still able to outperform their parents.

However, not all such cities are at high SESpctl. For example, the city with the highest WobegonNdx I have found so far is the town of Chenango Forks in NY state, where SESpctl = 39.07%, IncK = $65.95K, and WobegonNdx = 18.31/0.43 = 42.7. This is only a rough comparison, since different states have different cutoff scores for the NmsSF. Incidentally, Lake Wobegon, MN is allegedly in Stearns County, and the nearest notable city is Sartell, which has a dismal WobegonNdx = 0.73.

At NmsNdx values between 1.00 and 1.10 there is a mixture of both types. At NmsNdx less than 1.00, in all the also-ran cities the children on average perform worse than their parents, except for the one town of Patterson, which has only 1 NmsSF and is thus an outlier. For a city like Los Angeles, SESpctl = 38.0%, IncK = $55.22K, and WobegonNdx = 0.09/0.86 = 0.11.

There are no indicators for the challenged fraction, so they drop off the scope. Nevertheless, if they really were regressing to the mean (i.e. NmsNdx = 1) they should appear in the analysis, but they would be facing a strong headwind, since the trend for NmsNdx less than 1 is, on average, for the children to perform worse than their parents.

If these forces are mild, they might fatten both tails of the unimodal IQ bell curve. If they are strong, they might rip the simple bell curve into a multi-modal distribution with significant dips between the clusters, as pointed out by Murray.

The distribution of mean state NAEP scores can be treated as a distribution of sample means, which by the central limit theorem should be normal even when the parent distribution is not. Yet the dip tests for the distributions of mean state NAEP scores for whites and for blacks separately (thus removing race effects) are significant, rejecting the null hypothesis of unimodality, i.e. each is multi-modal. For example, the test for NAEP whites,

The result is the same for the distribution of mean national PISA scores. Thus what Murray asserted is true. However, there are no simple statistical tests for multi-modal distributions, and thus most further analyses assume that normality can be approximated.

These results, if true at the finer-grained level, run against the general narrative of regression to the mean; they look more like ‘assortative runaway from the mean’, though the latter might still hold at the macro level in some situations. This trend is also consistent with the continuous improvement of PISA results for Singapore: they are already at the top, and there is no higher population mean for them to regress upward to.

These results, if true, would require a new interpretation of the breeder’s equation R = h^2 S, i.e. how to explain a narrow-sense heritability h^2 greater than 1. Alternatively, in the modified problem I just formulated, the relationship is non-linear and the variables are ratios rather than differences; thus h^2 does not have to be greater than 1.

log10(NmsNdx) = w h^2 DegNdx + c

where w is the Wobegon factor (not the WobegonNdx), and the product w h^2 takes its value from the regression eqn

Hmm. It is problematic trying to visually separate massively overlapping points. Anyway, a clustering program sorted out the regions:

NmsNdx ≤ 0.9892: WobegonDrop; cases = 52

0.9892 < NmsNdx ≤ 2.1907: WobegonMix; cases = 35

NmsNdx > 2.1907: WobegonSurge; cases = 125

There are another 891 CA towns/cities which did not produce any NmsSF and are presumably in the WobegonDrop region. Thus regression to the mean occurs in only 35 of 1,103 towns/cities, while the rest are assortative runaways from the mean.

The scatter plot reminds me of charts from chaos theory. The transition to the next generation resembles the mapping of points to the next iteration. The WobegonMix region resembles those containing ‘strange attractors’ or limit cycles, while the other two regions have ‘strange expellers’ – or rather valleys of ‘strange saddle points’ going upwards (expelling) – since the points there are clustered around the regression line.

In the WobegonDrop and WobegonSurge regions the mapping (regression) lines are respectively below and above the reflection line, and thus iteratively expel the points away from the population mean. In the WobegonMix region the log-linear assumption is not appropriate, as the distribution of the scattered points resembles the Chinese Yin and Yang symbol, which produces the ‘strange attractors’ or limit cycles.

Now the question is: is this process that produces the neurodiversity an open or a closed system? The runaway process does not seem to have reached a plateau yet.