
Physicist, Startup Founder, Blogger, Dad

Sunday, February 16, 2014

Genetic architecture of intelligence (video with improved sound)

This is video of a talk given to cognitive scientists about a year ago (slides). The original video had poor sound quality, but I think I've improved it using Google's video editing tool. I found that the earlier version was OK if you used headphones, but I think they are not necessary for this one.

I don't have any update on the BGI Cognitive Genomics study. We are currently held up by the transition from Illumina to Complete Genomics technology. (Complete Genomics was acquired by BGI last year and they are moving from the Illumina platform to an improved CG platform.) This is highly frustrating but I will just leave it at that.

On one of the last slides I mention Compressed Sensing (L1 penalized convex optimization) as a method for extracting genomic information from GWAS data. The relevant results are in this paper.
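As a toy illustration of the compressed sensing idea (not the paper's actual pipeline — the sizes, penalty value, and noise level here are purely illustrative), the sketch below recovers a sparse "effect size" vector from fewer noisy linear measurements than unknowns, by solving the L1-penalized least squares (Lasso) problem with coordinate descent and soft-thresholding:

```python
# Toy compressed-sensing recovery via L1-penalized regression (Lasso).
# All sizes and parameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
p, n, s = 200, 80, 5                      # unknowns, measurements, nonzeros
X = rng.standard_normal((n, p)) / np.sqrt(n)   # columns have norm ~ 1
beta_true = np.zeros(p)
beta_true[rng.choice(p, s, replace=False)] = rng.choice([-1.0, 1.0], s)
y = X @ beta_true + 0.01 * rng.standard_normal(n)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                          # maintains r = y - X b
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * b[j]           # remove coordinate j's contribution
            z = X[:, j] @ r
            # soft-threshold: shrinks small correlations to exactly zero
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

beta_hat = lasso_cd(X, y, lam=0.05)
recovered = np.flatnonzero(np.abs(beta_hat) > 0.25)   # estimated support
```

Despite having only 80 measurements for 200 unknowns, the L1 penalty exploits sparsity to pick out the 5 true nonzeros; this underdetermined-but-sparse regime is the point of the compressed sensing framing.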

15 comments:

Jon Claerbout said...

I'm in seismic imaging. I steer away from L1 penalties and instead use a hyperbolic penalty: it's asymptotically L1 for large residuals and L2 for small ones. An interesting aspect of it is that you get to pick the threshold. Conjugate-direction coding is a slight variation on L2.
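One common parameterization of such a hyperbolic penalty (sometimes called pseudo-Huber; the exact form the commenter uses may differ) makes the L2/L1 crossover explicit via a threshold delta:

```python
import numpy as np

def hyperbolic_penalty(r, delta):
    """Hyperbolic (pseudo-Huber) penalty on residual r.

    ~ r^2 / 2       for |r| << delta  (L2-like, smooth at zero)
    ~ delta * |r|   for |r| >> delta  (L1-like, robust to outliers)
    The user-chosen delta sets the crossover threshold.
    """
    return delta**2 * (np.sqrt(1.0 + (r / delta)**2) - 1.0)

# The penalty is smooth everywhere, so plain gradient methods apply,
# unlike the non-differentiable kink of pure L1 at zero.
residuals = np.linspace(-3.0, 3.0, 7)
values = hyperbolic_penalty(residuals, delta=1.0)
```

This is one way to get L1's outlier robustness without giving up L2's smoothness near the origin.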

There may be improvements one can make on L1. Elastic Net uses both L1 and L2 penalties. What is nice about L1 is the theoretical results concerning phase transition behavior (Donoho-Tanner) and other performance guarantees, along with the existence of fast computational methods (coordinate descent). My main interest is in providing an upper bound on the amount of data required to select the full set of causal variants for a complex trait of given sparsity. This threshold is probably of order 1 million individuals for g, but with better methods one could perhaps improve it somewhat.
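A back-of-envelope version of that threshold, assuming the commonly cited compressed-sensing scaling n ≈ C·s·log(p/s) — where the constant C, the number of causal variants s, and the number of SNPs p below are all illustrative assumptions, not figures from the talk:

```python
import math

# Illustrative assumptions (not from the talk or paper):
C = 30          # method-dependent oversampling constant (assumed)
s = 10_000      # assumed number of causal variants for a highly polygenic trait
p = 1_000_000   # assumed number of SNPs on the genotyping array

# Compressed-sensing style sample-size scaling for exact support recovery
n_required = C * s * math.log(p / s)
print(f"rough required sample size: {n_required:.2e}")
```

With these (assumed) numbers the estimate comes out around 1.4 million individuals, i.e. the same "order 1 million" regime mentioned above; the point of the sketch is the scaling in s and log(p/s), not the particular constant.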

At the end of your talk, you mentioned the Gattaca scenario of choosing offspring. Is the time estimate for being able to distinguish zygote IQs also within 5 years? Could you share what company or institution is doing this?

No one can do this yet. You probably need data on roughly 1 million individuals (genotype + phenotype) to extract a good predictive model for IQ. This could be possible in 5 years but it is hard to predict.

I know next to nothing about biology apart from what I read in the newspapers, but I admit to being very interested in the BGI cognitive genomics study. I have, however, one question. I agree that it will probably be inevitable that one day parents will be able to select their baby's intelligence, but would it be possible to do the same thing for adults? One of the main reasons why so many people find the idea so controversial, if you ask me, is because they themselves cannot use it. Creating an elite class of humans who are significantly smarter than their peers is a horrible thought for many people when they themselves cannot be part of this elite.

How fruitful do you think efforts would be to locate intelligence loci using the loci of other traits correlated with intelligence, such as schizophrenia? This approach uses validated SNPs that have been demonstrated to show a phenotypic effect for their respective traits, and it would be reasonable to assume that those SNPs are more likely to influence intelligence than a random SNP on an array. At the least, this would reduce the number of variants that need to be surveyed, since one does not need to examine the entire genome.

steve's and the behavioral geneticists' dreams will never come true. they're chasing ghosts, and just like ghost chasers their chase results from a fundamental misunderstanding.

as joel fuhrman has said, "many accept that disease is the result of genetics or luck...but nutrition, exercise, and environment simply overwhelm genetics."

all steve can say to the heritability of bmi, blood pressure, etc. is, "i think these are qualitatively different." but given that iq is a psychological trait, one should expect it to be MORE mutable, not less.

one can find very deleterious alleles. so what? binet developed his test to identify the retarded. apparently he thought this was a natural kind and not merely the far left of the bell curve. and he was right. in order to save the iq bs steve hypothesized essentially that everyone but people like him are a little retarded. that is, the continuum is due to varying numbers of rare and deleterious alleles only.