
Late last year, the excellent Neurobonkers blog covered a case of 'Profiteering from anxiety'. It seems one Nader Amir has applied for a patent on the psychological technique of 'Attentional Retraining', a method designed to treat anxiety and other emotional problems by conditioning the mind to unconsciously pay more attention to positive things and ignore unpleasant stuff. For just $139.99, you can have a crack at modifying your unconscious with the help of Amir's Cognitive Retraining Technologies.

It's a clever idea... but hardly a new one. As Neurobonkers said, research on these kinds of methods had been going on for years before Amir came on the scene. In a comment, Prof. Colin MacLeod (who's been researching this stuff for over 20 years) argued that "I do not believe that a US patent granted to Prof Amir for the attentional bias modification approach would withstand challenge."

Well, in an interesting turn of events, Amir has just issued Corrections (1, 2) to two of his papers. Both of the articles reported that retraining was an effective treatment for anxiety; but in both cases he now reveals that there was an error:

"...in the article a disclosure should have been noted that Nader Amir is the co-founder of a company that markets anxiety relief products."

Omitting to declare a conflict of interest... how unfortunate. Still, it's an easy mistake to make: when you're focused on doing unbiased, objective, original research, as Amir doubtless was, such mundane matters are the last thing you tend to pay attention to.

Amir, N., and Taylor, C. (2013). Correction to Amir and Taylor (2012). Journal of Consulting and Clinical Psychology, 81 (1), 74. DOI: 10.1037/a0031156

Amir, N., Taylor, C., and Donohue, M. (2013). Correction to Amir et al. (2011). Journal of Consulting and Clinical Psychology, 81 (1), 112. DOI: 10.1037/a0031157


It's not been a good few weeks for Adrian Owen and his team of Canadian neurologists.

Over the past few years, Owen's made numerous waves, thanks to his claim that some patients thought to be in a vegetative state may, in fact, be at least somewhat conscious, and able to respond to commands. Remarkable if true, but not everyone's convinced. A few weeks ago, Owen et al were criticized over their appearance in a British TV program about their use of fMRI to measure brain activity in coma patients. Now, they're under fire from a second group of critics over a different project.

The new bone of contention is a paper published in 2011 called Bedside detection of awareness in the vegetative state. In this report, Owen and colleagues presented EEG results that, they said, show that some vegetative patients are able to understand speech. In this study, healthy controls and patients were asked to imagine performing two different actions: moving their hand, or their toe. Owen et al found that it was possible to distinguish between the 'hand'- and 'toe'-related patterns of brain electrical activity. This was true of most healthy control subjects, as expected, but also of some - not all - patients in a 'vegetative' state.

The skeptics aren't convinced, however. They reanalyzed the raw EEG data and claim that it just doesn't prove anything. This image shows that in a healthy control, EEG activity was "clean" and generally normal. However, in the coma patient, the data's a mess. It's dominated by large slow delta waves - in healthy people, you only see those during deep sleep - and there's also a lot of muscle artefacts, which can be seen as 'thickening' of the lines. These don't come from the brain at all; they're just muscle twitches. Crucially, the location and power of these twitches varied over time (as muscle spikes often do).

This wouldn't necessarily be a problem, the critics say, except that the statistics used by Owen et al didn't control for slow variations over time, i.e. for correlations between consecutive trials (non-independence). If you do take account of these, there's no statistically significant evidence that you can distinguish the EEG associated with 'hand' vs 'toe' in any patients.

However, in their reply, Owen's team say that:

"their reanalysis only pushes two of our three positive patients to just beyond the widely accepted p=0.05 threshold for significance - to p=0.06 and p=0.09, respectively. To dismiss the third patient, whose data remain significant, they state that the statistical threshold for accepting command-following should be adjusted for multiple comparisons... but we know of no groups in this field who routinely use such a conservative correction with patient data, including the critics themselves."

I have to say that, statistical arguments aside, the EEGs from the patients just don't look very reliable, largely because of those pesky muscle spikes. However, a new method for removing these artifacts has just been proposed. I wonder if that could help settle this?

Goldfine, A., Bardin, J., Noirhomme, Q., Fins, J., Schiff, N., and Victor, J. (2013). Reanalysis of "Bedside detection of awareness in the vegetative state: a cohort study". The Lancet, 381 (9863), 289-291. DOI: 10.1016/S0140-6736(13)60125-7
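The critics' statistical point - that slow drift makes consecutive trials non-independent, invalidating tests that assume independence - is easy to demonstrate. Here's a toy simulation of my own (not the Goldfine et al analysis): two 'conditions' with no true difference are run in alternating blocks on top of a slowly wandering baseline, and a naive t-test, which assumes independent trials, cries 'significant' far more than 5% of the time.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(drift_sd, n_sims=2000, n_trials=100):
    """Fraction of null simulations in which a naive t-test gives p < 0.05."""
    # Condition labels in four alternating blocks of 25 trials: AAA... BBB... etc.
    labels = np.repeat([0, 1, 0, 1], n_trials // 4)
    hits = 0
    for _ in range(n_sims):
        drift = np.cumsum(rng.normal(0, drift_sd, n_trials))  # slowly wandering baseline
        data = drift + rng.normal(0, 1, n_trials)              # no true condition effect
        a, b = data[labels == 0], data[labels == 1]
        # Welch t statistic; |t| > 1.98 approximates p < 0.05 at these sample sizes
        t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        hits += abs(t) > 1.98
    return hits / n_sims

print("independent trials:", false_positive_rate(drift_sd=0.0))  # ≈ 0.05, as intended
print("drifting trials:   ", false_positive_rate(drift_sd=0.3))  # far above 0.05
```

The remedy is to account for the dependence between trials - which is essentially what the reanalysis did, and what pushed the patients' results back towards non-significance.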


We know quite a bit about how long-term memory is formed in the brain - it's all about strengthening of synaptic connections between neurons. But what about remembering something over the course of just a few seconds? Like how you (hopefully) still recall what that last sentence was about?

Short-term memory is formed and lost far too quickly for it to be explained by any (known) kind of synaptic plasticity. So how does it work? British mathematicians Samuel Johnson and colleagues say they have the answer: Robust Short-Term Memory without Synaptic Learning. They write:

"The mechanism, which we call Cluster Reverberation (CR), is very simple. If neurons in a group are more densely connected to each other than to the rest of the network, either because they form a module or because the network is significantly clustered, they will tend to retain the activity of the group: when they are all initially firing, they each continue to receive many action potentials and so go on firing."

The idea is that a neural network will naturally exhibit short-term memory - i.e. a pattern of electrical activity will tend to be maintained over time - so long as neurons are wired up in the form of clusters of cells mostly connected to their neighbours. The cells within a cluster (or module) are all connected to each other, so once a module becomes active, it will stay active as the cells stimulate each other.

Why, you might ask, are the clusters necessary? Couldn't each individual cell have a memory - a tendency for its activity level to be 'sticky' over time, so that it kept firing even after it had stopped receiving input? The authors say that even 'sticky' cells couldn't store memory effectively, because we know that the firing pattern of any individual cell is subject to a lot of random variation. If all of the cells were interconnected, this noise would quickly erase the signal. Clustering overcomes this problem.

But how could a neural clustering system develop in the first place? And how would the brain ensure that the clusters were 'useful' groups, rather than just being a bunch of different neurons doing entirely different things? Here's the clever bit:

"If an initially homogeneous (i.e., neither modular nor clustered) area of brain tissue were repeatedly stimulated with different patterns... then synaptic plasticity mechanisms might be expected to alter the network structure in such a way that synapses within each of the imposed modules would all tend to become strengthened."

In other words, even if the brain started out life with a random pattern of connections, everyday experience (e.g. sensory input) could create a modular structure of just the right kind to allow short-term memory. Incidentally, such a 'modular' network would also be one of those famous small-world networks.

It strikes me as a very elegant model. But it is just a model, and neuroscience has a lot of those; as always, it awaits experimental proof. One possible implication of this idea, it seems to me, is that short-term memory ought to be pretty conservative, in the sense that it could only store reactivations of existing neural circuits, rather than entirely new patterns of activity. Might it be possible to test that...?

Johnson, S., Marro, J., and Torres, J.J. (2013). Robust Short-Term Memory without Synaptic Learning. PLoS ONE, 8 (1). PMID: 23349664
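The Cluster Reverberation intuition is simple enough to sketch in code. Below is a toy version of my own (not the authors' model, which is a more sophisticated stochastic network): binary neurons obey a majority rule, firing when more than half of their neighbours fired on the previous step. In a modular network a stimulated cluster keeps itself alive; in a random network of the same overall density, the same initial pattern dissolves.

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 60, 20  # 60 neurons in 3 clusters of 20

def symmetric(A):
    """Keep the upper triangle and mirror it, so the graph is undirected."""
    A = np.triu(A, 1)
    return A + A.T

# Clustered network: dense within each module (p=0.9), sparse between (p=0.05).
A_clust = (rng.random((N, N)) < 0.05).astype(float)
for c in range(3):
    blk = slice(c * K, (c + 1) * K)
    A_clust[blk, blk] = rng.random((K, K)) < 0.9
A_clust = symmetric(A_clust)

# Random control network matched for overall connection density.
p = A_clust.sum() / (N * (N - 1))
A_rand = symmetric((rng.random((N, N)) < p).astype(float))

def run(A, steps=30):
    """Activate the first cluster, then apply a majority rule repeatedly."""
    s = np.zeros(N)
    s[:K] = 1
    deg = A.sum(axis=1)
    for _ in range(steps):
        s = (A @ s > 0.5 * deg).astype(float)  # fire if >half of neighbours fired
    return s

s_clust, s_rand = run(A_clust), run(A_rand)
print("clustered: fraction of stimulated module still active =", s_clust[:K].mean())
print("random:    fraction of whole network active           =", s_rand.mean())
```

The module persists because each of its cells gets most of its input from other module members; in the random graph, each cell's input is diluted across the whole (mostly silent) network, so the pattern falls below threshold and dies.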


A couple of months ago, the BBC TV show Panorama covered the work of a team of neurologists (led by Prof. Adrian Owen) who are pioneering the use of fMRI scanning to measure brain activity in coma patients. The startling claim is that some people who have been considered entirely unconscious for years are actually able to understand speech and respond to requests - not by body movements, but purely on the level of brain activation.

However, not everyone was impressed. A group of doctors swiftly wrote a critical response, published in the British Medical Journal as fMRI for vegetative and minimally conscious states: A more balanced perspective:

"The Panorama programme... failed to distinguish clearly between vegetative vs. minimally conscious states, and gave the impression that 20% of patients in a vegetative state show cognitive responses on fMRI. There are important differences between the two states. Patients in a vegetative state have no discernible awareness of self and no cognitive interaction with their environment. Patients in a minimally conscious state show evidence of interaction through behaviours... The programme presented two patients said to be in a "vegetative state" who showed evidence of cognitive interaction on assessment using fMRI but the clinical methods used for the original diagnosis were not stated. In both cases, family members clearly reported that the patient made positive but inconsistent behavioural responses to questions... one of these patients was filmed responding to a question from his mother by raising his thumb and the other seemed to turn his head purposefully."

So Panorama stands accused of passing off patients who were really minimally conscious as being in a vegetative state. To see signs of understanding on brain scans from the latter would be truly amazing, because it would be the first evidence that they weren't, well, vegetative. However, if they were 'merely' minimally conscious patients, it's not as interesting, because we already knew they were capable of making responses.

Now the Panorama team - and Professor Owen - have replied in a BMJ piece of their own. Given that they're charged with misleading journalism and sloppy medicine, they're understandably a bit snarky:

"Just by viewing this one hour documentary the authors felt able to discern that both the patients "said to be in a vegetative state" are "probably" minimally conscious... One of these patients, Scott, has had the same neurologist for more than a decade. Professor Young, who appeared in the film, made it clear that Scott had appeared vegetative in every assessment... The fact that these authors took Scott's fleeting movement, shown in the programme, to indicate a purposeful ("minimally conscious") response shows why it is so important that the diagnosis is made in person, by an experienced neurologist, using internationally agreed criteria."

In other words: the patients were vegetative, and the critics who said otherwise, on the basis of some TV footage, were being silly.

In other words... it's on.

Turner-Stokes, L., Kitzinger, J., Gill-Thwaites, H., Playford, E.D., Wade, D., Allanson, J., Pickard, J., & Royal College of Physicians' Prolonged Disorders of Consciousness Guidelines Development Group (2012). fMRI for vegetative and minimally conscious states. BMJ, 345. PMID: 23190911

Walsh, F., Simmonds, F., Young, G.B., & Owen, A.M. (2013). Panorama responds to editorial on fMRI for vegetative and minimally conscious states. BMJ, 346. PMID: 23298817


A form of brain abnormality long regarded as permanent is, in fact, sometimes reversible, according to an unassuming little paper with big implications.

Here's the key data: some rats were given a lot of alcohol for four days (the "binge"), and then allowed to sober up for a week. Before, during and after their rodent Spring Break, they had brain scans. And these revealed something remarkable - the size of the rats' lateral ventricles increased during the binge, but later returned to normal. Control rats, given lots of sugar instead of alcohol, did not show these changes.

This is really pretty surprising. The ventricles are simply fluid-filled holes in the brain. Increased ventricular size is generally regarded as a sign that the brain is shrinking - less brain, bigger holes - and if the brain is shrinking, that must be because cells are dying or at least getting smaller. So bigger ventricles is bad.

Or so we thought... but this study shows that it might not always be true: alcohol reversibly increases ventricular volume over a timescale of days. It does so, the authors say, essentially by drying brain tissue out; like most things, if you dry the brain out, it gets smaller (and the ventricles get bigger), but when the water comes back to the tissues, it expands again. As you can see here in Figure 2...

Maybe. I admit that just eyeballing this, it looks more like the ventricles are getting brighter, rather than bigger, but I'm not familiar with the details of water scanning. Maybe some readers will know more about it.

If it's true, this is big - maybe it's not just high doses of alcohol that do this. Maybe other drugs or factors can shrink or expand the ventricles, or even other areas, purely by acting on tissue water regulation, rather than by anything more 'interesting'. Take the various claims that some psychiatric drugs boost brain volume while others decrease it, just for starters... could they be headed for a watery grave?

Of course, this is in rats - and it might not translate to humans... we need to find out, and I for one am keen to apply for a grant. Here's my draft:

Participants: 8 healthy-livered neuroscientists.
Materials: 1 MRI scanner, 1 crate Jack Daniels.
Methods: Subjects will confer to pick a Designated Operator, who will remain sober. If no volunteers for this role are forthcoming, selection will be randomized by Bottle Spinning. All other participants will consume Jack Daniels ad libitum, and take turns being scanned. Once all Jack Daniels is depleted, participants will continue to be scanned until fully sobered up (defined as when they can successfully spell "amygdalohippocampal").
Instructions to Participants: i) What happens in the magnet, stays in the magnet. ii) If you 'dirty' the scanner, you clean it up. iii) Bottle caps are not MRI safe!

Er... seriously though, someone should check.

Zahr, N.M., Mayer, D., Rohlfing, T., Orduna, J., Luong, R., Sullivan, E.V., and Pfefferbaum, A. (2013). A mechanism of rapidly reversible cerebral ventricular enlargement independent of tissue atrophy. Neuropsychopharmacology. PMID: 23306181


In psychiatry, "a rose is a rose is a rose", as Gertrude Stein put it. That's according to an editorial in the American Journal of Psychiatry called: The Initial Field Trials of DSM-5: New Blooms and Old Thorns.

Like the authors, I was searching for some petal-based puns to start this piece off, but then I found this "flower with an uncanny resemblance to a MONKEY", which I think does the job quite nicely. Anyway, the editorial is about the upcoming, controversial fifth revision to the Diagnostic and Statistical Manual (DSM) of the American Psychiatric Association (APA). A great deal has been written about the DSM-5 over the past few years, as "the rough beast, its hour come round at last / Slouches towards Bethlehem to be born" (see, I can reference early-20th-century poetry too).

But now the talk has moved into a new phase, because the results of the DSM-5 'field trials' are finally out. In these studies, the reliability of the new diagnostic criteria for different psychiatric disorders was measured. The new editorial is a summary and discussion of the field trial data. Two different psychiatrists assessed each patient, and the agreement between their diagnoses was calculated as the kappa statistic, where 0 indicates no agreement beyond chance and 1 is perfect agreement.

It turns out that the reliabilities of most DSM-5 disorders were not very good. The majority were around 0.5, which is at best mediocre. These included such pillars of psychiatric diagnosis as schizophrenia, bipolar disorder, and alcoholism. Others were worse. Depression had a frankly crap kappa of 0.28, and the new 'Mixed Anxiety-Depressive Disorder' came in at -0.004 (sic). It was completely meaningless.

The American Journal editorial was written by a group of senior DSM-5 team members. I'm sure they wanted to write a triumphant presentation of their work, but in fact the tone is subdued, even apologetic in places:

"As for most new endeavours, the end results are mixed, with both positive and disappointing findings... Experienced clinicians have severe reservations about the proposed research diagnostic scheme for personality disorder... like its predecessors, DSM-5 does not accomplish all that it intended, but it marks continued progress for many patients for whom the benefits of diagnoses and treatment were previously unrealized."

Remember: this is the journal published by the organization responsible for the DSM, and even they don't much like it. But the real story is even worse. The previous editions of the DSM also conducted field trials, and those trials had a system for describing different kappa values: for example, 0.6-0.8 was 'satisfactory'. However, the new DSM-5 studies used a different, lower threshold. They simply moved the goalposts, deeming lower kappa values to be good. At one point, they wrote that values above 0.8 would be 'miraculous' and above 0.6 a 'cause for celebration', yet this wasn't the view of previous DSM developers. The indispensable 1boringoldman blog has a nice graphic showing the results of the DSM-5 trials, with the kappas graded according to the old vs. the new criteria. As you can see, the grass is greener on the new side.

The fact is that the DSM-5 field trial results are worse than the results from DSM-III, the 1980 version that's served mostly unchanged for 30 years (DSM-IV made fairly modest changes). The reliabilities have got worse - despite the editorial's claims of 'continued progress'. It's true that the DSM-5 field trials were a lot bigger and conducted rather differently, but still, it's a serious warning sign. Finally, there was great variability in the results between different hospitals - in other words, the reliability scores were not, themselves, reliable. Some institutions achieved much higher kappa values than others, but it's anyone's guess how they managed to do so.

Still, there's great news: the DSM-5 is just a piece of paper (well, a big stack of them). Any psychiatrist is free to ignore it - as the creator of the more reliable DSM-III is now urging them to do.

Freedman, R., Lewis, D.A., Michels, R., Pine, D.S., Schultz, S.K., Tamminga, C.A., Gabbard, G.O., Gau, S.S., Javitt, D.C., Oquendo, M.A., Shrout, P.E., Vieta, E., and Yager, J. (2013). The Initial Field Trials of DSM-5: New Blooms and Old Thorns. The American Journal of Psychiatry, 170 (1), 1-5. PMID: 23288382
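For reference, the kappa statistic mentioned above (Cohen's kappa) is the observed agreement rate corrected for the agreement two raters would reach by chance from their marginal rates alone. A minimal implementation, with a hypothetical 2x2 table of diagnoses from two psychiatrists (the counts are invented for illustration, not taken from the field trials):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: rater 1, cols: rater 2)."""
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
    p_chance = sum(  # agreement expected from the marginal rates alone
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_obs - p_chance) / (1 - p_chance)

# 100 patients: both raters say 'depressed' for 40, both say 'not' for 35.
table = [[40, 10],
         [15, 35]]
print(round(cohens_kappa(table), 3))  # 0.5: raw agreement is 75%, but chance alone gives 50%
```

This is why a kappa of 0.28 is so poor: most of the raw agreement behind it is what you'd expect from two raters diagnosing at their base rates without ever seeing the same patient.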


The questionable validity of self-report measures in psychiatry has been the topic of a few recent posts here at Neuroskeptic. Now an interesting new study looks at the issue from a new angle, asking: what kind of people report feeling more or less depressed? Korean researchers Kim and colleagues found that intelligence and personality variables were both linked to the tendency to self-rate depression more severely.

The study involved 100 patients who'd previously suffered from an episode of depression or mania and who, according to their psychiatrist, had now recovered and were back to normal. Kim et al looked to see what the patients thought about their mood, by getting them to complete the Beck Depression Inventory (BDI) self-report questionnaire. This was compared to the clinician-administered HAMD scale (another Neuroskeptic favourite), which is meant to be independent of self-report.

It turns out that the BDI and HAMD scores were only weakly correlated - with a coefficient of just r=0.32. That's really not very good considering that, in theory, they both measure the same thing: 'depression'. Many people reported being considerably depressed when their clinicians rated them as fine. But more interestingly, certain characteristics of the patients were correlated with their self-report/clinician-rating discrepancy. Specifically, patients with a lower IQ, who were more impulsive, and less conscientious, tended to self-report more severe depression.

Now, the uncharitable interpretation of these people is that they were just too sloppy to complete the form properly... the uncharitable interpretation of the psychiatrists is that it's their fault for underestimating depression in people less inclined to express themselves in 'the right way'. There's no way to know.

Either way, it's a serious problem, because it shows that self-report and observer-report measures of depression aren't just poorly correlated, they're actually measuring different things for different people. It could be even worse than it appears, because the HAMD, although supposedly not a self-report measure, does in fact heavily rely on the patient's cooperation. So a 100% clinician-rated scale might be even further removed from self-report.

Kim, E.Y., Hwang, S.S., Lee, N.Y., Kim, S.H., Lee, H.J., Kim, Y.S., and Ahn, Y.M. (2012). Intelligence, temperament, and personality are related to over- or under-reporting of affective symptoms by patients with euthymic mood disorder. Journal of Affective Disorders. PMID: 23270973


I recently wrote about anti-NMDA receptor encephalitis, a neurological disorder that often manifests with psychiatric symptoms, such as depression and hallucinations. The latest American Journal of Psychiatry features a strange series of four drawings made by a 15-year-old girl during an episode of the disease, which presented as psychotic symptoms but later progressed to severe insomnia and epilepsy before it was diagnosed and treated.

"As she gradually recovered we asked her to draw something. She did not know what to draw, so we suggested an animal, such as a dog, but she did not know how to start. When we told her that a dog has four legs, a tail, two ears, two eyes, and a mouth, she drew an abstract figure that consisted of a head with four legs (A). Her next drawing, of a cat, looked exactly the same, apparently since they share the same basic features. Two weeks later the dog now looked more recognizable but like a human, standing upright, with two arms and four legs... All body parts were listed beneath the figure in the same color as they were drawn (B). Two months after the patient was transferred to a local rehabilitation center, the cat was catlike for the first time; it had four legs, was normally proportioned, and was correctly positioned. Colors were used adequately. However, this drawing still looked like one by a primary school child instead of a 15-year-old girl (C). Finally, after 5 months of rehabilitation her drawing had a normal composition. She still had the urge to write down what she drew, but she did not encircle the figures anymore (D)."

Esseveld, M.M., van de Riet, E.H., Cuypers, L., and Schieveld, J.N. (2013). Drawings During Neuropsychiatric Recovery From Anti-NMDA Receptor Encephalitis. The American Journal of Psychiatry, 170 (1), 21-2. PMID: 23288386


A popular method for detecting abnormalities in the shape and size of individual brains is seriously flawed, and is almost guaranteed to find 'differences' even in normal people. So say Italian neuroscientists Scarpazza and colleagues in an important new report: Very high false positive rates in single case Voxel Based Morphometry.

Voxel Based Morphometry (VBM) is a way of analyzing brain scans to detect structural differences. It's most commonly used to compare groups of brains to find average differences, but some neuroscientists have started using VBM to check for abnormalities in a single brain. Scarpazza et al list 34 pieces of research about that, including 13 since 2010. So it would suck if there were a problem with individual VBM... but there is.

This pic tells the tale. The authors took 200 normal brains and compared each one of them in turn to a control group of 16 normal brains. Because all of them were healthy, the comparisons ought to show no significant differences. The technique was set up so that, in theory, only 5% of the brains should have been wrongly labelled as containing an abnormality. But in fact, a full 93.5% of the normal brains gave at least one false positive. So 5% is more like the rate of not being wrong. Oops.

The image shows that in some brain areas, almost 25% of the normal brains were branded as 'abnormal' just in that region alone - the hotter the colour, the higher the proportion of false 'hits'. The top row is for false reports of brain volume increases, while the bottom row is decreases; false 'increases' were more common.

So what's going wrong? It's not entirely clear, and several factors are probably at play, but the authors say that the main issue is that VBM makes an assumption of statistical normality which doesn't in fact hold.

Either way, it's a serious problem, and Scarpazza et al point to one especially worrying implication: some people have proposed using single-subject VBM in a legal context, to reinforce insanity pleas by showing subtle 'brain abnormalities' not obvious to the naked eye. Yet if this paper's right, such evidence could be entirely meaningless, almost guaranteed to give a positive result.

P.S. Last time I posted about this kind of analysis flaw, the internet went crazy because they didn't understand it. So just to be clear: this is not a problem for clinical scans - the kind you'd get to check whether you have a brain tumour.

Scarpazza, C., Sartori, G., De Simone, M., and Mechelli, A. (2013). When the single matters more than the group: Very high false positive rates in single case Voxel Based Morphometry. NeuroImage. DOI: 10.1016/j.neuroimage.2012.12.045
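The near-certainty of finding 'something' is the familiar multiple comparisons problem. If each of many roughly independent regional tests has a 5% false positive rate, the chance of at least one hit per brain grows fast: around 50 effectively independent comparisons is already enough to reproduce a ~93% per-brain rate. A back-of-envelope sketch (the figure of 50 is my illustrative assumption, not a number from the paper):

```python
import random

random.seed(1)
alpha, n_tests = 0.05, 50   # 50 effectively independent regional tests per brain
n_brains = 10_000

# Analytic family-wise error rate: P(at least one false positive per brain).
fwer = 1 - (1 - alpha) ** n_tests
print(f"analytic:  {fwer:.3f}")  # ≈ 0.923

# Simulate: a brain is flagged if any of its tests spuriously comes up 'significant'.
flagged = sum(
    any(random.random() < alpha for _ in range(n_tests))
    for _ in range(n_brains)
)
print(f"simulated: {flagged / n_brains:.3f}")
```

VBM analyses do apply corrections for the number of voxels tested, so this isn't the whole story - but it shows how little residual leakage it takes, per brain, to make 'at least one abnormality' almost inevitable.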


The idea of an 'autism epidemic' has a lot of people very worried. No-one disputes that diagnosed rates of autism have increased enormously over the past 15 years or so, around the world. Some think this reflects a genuine rise in the disorder; other people write it off as essentially a cultural phenomenon: we're getting better at detecting the disorder and more willing to label kids as having it.

I subscribe to the latter view, but there's very little hard evidence for it. To prove that diagnostic changes have occurred, rather than a true increase in autism, you'd have to know what would have happened to today's kids, say, 20 years ago. Would they have been diagnosed? We have no way of knowing. At least not until someone invents a time machine. However, a new study just out offers a valuable new perspective on the debate: Spatial clusters of autism births and diagnoses point to contextual drivers of increased prevalence.

According to authors Soumya Mazumdar and colleagues, there's a zone of high autism prevalence in California - areas where kids aged 0-4 years old are more likely to be diagnosed with the condition. The epicentre is L.A.; there are actually three overlapping hotspots, centred on Santa Monica, Alhambra and North Hollywood. In these clusters, autism rates are between 2 and 6 times higher than in the rest of the state. An interesting thing about these areas is that they're rich in paediatricians, autism advocacy organizations, and money. In other words, there's better access to health services and probably more awareness of autism. This is suggestive evidence that the reason lots of kids get diagnosed here is about diagnosis, not autism per se.

But the blockbuster result is that children born outside the cluster, who later moved home into one, had a higher chance of getting a diagnosis than those who stayed out. The effect was smaller than for kids born inside the hot zone, but it was significant. That's also consistent with the idea that the clusters are clusters of diagnosis, not autism.

It's not proof. You could argue that there's some toxic chemical, say, present in the rich parts of L.A. that causes autism, even if you move into the toxic area only at age 3 or 4, and that's been getting worse recently, leading to rising rates. But it seems a stretch. What's the chemical? And why hypothesize one, when the diagnostic services hypothesis nicely accounts for these findings? As the authors say:

"The findings reported in this article do not fully reject the possibility that environmental toxicants drive some of the risk of autism... since there are a plethora of possible toxicants, it is impossible to falsify all hypotheses that researchers have started to explore."

Mazumdar, S., Winter, A., Liu, K.Y., and Bearman, P. (2012). Spatial clusters of autism births and diagnoses point to contextual drivers of increased prevalence. Social Science & Medicine. PMID: 23267775


As if on cue, a major study about the relationship (if any) between mental disorder and crime has appeared just when everyone's talking about that. Although, having said that, people seem to be interested in that issue most of the time nowadays, in the UK at any rate, with schizophrenia topping the list of supposedly scary syndromes. So - should we be worried?

The new research, from Australian team Morgan et al, surveyed everyone born in the state of Western Australia between 1955 and 1969. About 1.6 million people lived there over the course of the study, so this was a big project. By linking local records of arrests over the period 1985 to 1996 to the database of psychiatric diagnoses, the researchers were able to examine disorder-crime correlations in the entire population - meaning that there was no possibility of sampling bias.

So what happened? Here are some highlights:

- 32% of psychiatric patients had been arrested at least once. Unfortunately, it's not clear what the rate was in the general population, but that falls into the range of overall arrest rates in most countries.
- 11% of those arrested had a psychiatric diagnosis. This rose to 20% of arrests for violent offences.
- 0.8% of suspects had schizophrenia, rising to 1.7% for violent offences.
- The number of arrests in people without a disorder fell over the period 1985-1996, reflecting the well-known fact that people commit fewer crimes as they get older. However, in psychiatric patients, there was no change over time.
- For murder, 30% of suspects had a psychiatric history, while 3% had a diagnosis of schizophrenia.
- Both substance abuse and personality disorders were associated with higher arrest rates than schizophrenia, but schizophrenia in turn was higher than depression, anxiety, and other miscellaneous disorders.
- Although only 1.7% of violent offenders had schizophrenia, violent offences committed by people with the disorder were somewhat more likely to involve strangers and to take place in public places, and less likely to target family and partners.

Overall, this confirms that the great majority of crimes, including violent ones, are not committed by people with mental illness, and that your chance of getting 'murdered by a lunatic' is incredibly low. This strikes me as the only statistic that matters to most people. There's a long-standing debate over whether people with various disorders are more likely to commit crimes than they would be if they didn't have one - the relative risk. While interesting, this is a purely academic question. What the rest of us need to know is the absolute risk, and this is low.

Morgan, V.A., Morgan, F., Valuri, G., Ferrante, A., Castle, D., and Jablensky, A. (2012). A whole-of-population study of the prevalence and patterns of criminal offending in people with schizophrenia and other mental illness. Psychological Medicine, 1-12. PMID: 23234722


People turn to religion after natural disasters - but it doesn't actually provide much solace. So say researchers Sibley and Bulbulia, who examined the population of Christchurch, New Zealand, before and after the 2011 earthquake, in which 185 people died and many city landmarks were damaged. Their paper, Faith after an Earthquake, opens with a Biblical quote.

Sibley and Bulbulia took advantage of the fact that a longitudinal study of the 'health and values' of New Zealanders was already underway when the quake struck, and the survey included questionnaires about religious beliefs. They found that, compared to before the event, residents of the affected Canterbury region were more likely to report becoming religious (8.6%) than losing their faith (5.3%); in the rest of the country religion declined from 2009 to 2011, so the earthquake-hit area was exceptional.

The authors say:

Philosophers have plausibly argued that natural disasters such as the Christchurch earthquake are rationally incompatible with the existence of an all-powerful, all-loving God, because natural disasters cause pointless suffering to innocents... though faith eroded elsewhere in New Zealand, there was a significant upturn in religious faith among those who experienced the misery of New Zealand's most lethal natural disaster in eighty years.

But did faith help people cope with the disaster? No - believers reported no better subjective well-being than the non-religious, either before or after the earthquake, although those who both lost their faith during the period (apostates) and were personally affected suffered a decline.

What's rather odd about this, however, is that other results showed that apart from the apostates, well-being wasn't affected by the earthquake at all. So it's no surprise that the religious coped no better: the irreligious already coped very well, so there was no room for improvement.

Sibley, C., & Bulbulia, J. (2012). Faith after an Earthquake: A Longitudinal Study of Religion and Perceived Health before and after the 2011 Christchurch New Zealand Earthquake. PLoS ONE, 7(12) DOI: 10.1371/journal.pone.0049648


"If your IQ is somewhere around 60 then you are probably a carrot'', according to a British spokesman for high-IQ club Mensa. IQ's in the news at the moment thanks to a paper called Fractionating Human Intelligence from Canadian psychologists Adam Hampshire and colleagues. Some say it 'debunks the IQ myth' - but does it?The study started out with a huge online IQ test...Behavioral data were collected via the Internet between September and December 2010. The experiment URL was originally advertised in a New Scientist feature, on the Discovery Channel web site, in the Daily Telegraph, and on social networking web sites including Facebook and Twitter.The test involved 12 different cognitive tasks, based on the usual IQ test kind of things, and they got a huge 45,000 usable responses.However, the main part of the study used functional MRI (fMRI) to measure brain activity caused by each of the 12 tasks. There were only 16 volunteers in the brain scan study, which is pretty small.The key finding was that although each of the 12 tasks made a different pattern of brain regions light up, there were two main components of this: one lit up mostly in response to tasks requiring short-term memory, and the other was associated with reasoning and logic:They did various other analyses that confirmed this, and they also found evidence for a third network responsible for language (verbal) skill.Finally, the killer conclusion was that there was no reason to introduce the imfamous 'g factor' - a number representing general intelligence affecting performance on all tasks. Although there was a 'g factor' statistically, it was explained by the fact that tasks required both the memory and the logic networks (although to different degrees).g is the most controversial aspect of IQ testing, because if it exists, that means that some people are just smarter than others across the board - not just better at a particular kind of thing. So has this study killed g?Well, not by itself. 
There's a huge literature on IQ and g, going back almost 100 years. This stuff is not based on brain imaging, but just on IQ test scores, and it's a complex topic. I don't think one brain study with 16 people can really overturn that, although it does lend weight to the anti-g camp who have been arguing against g for decades.There's a sense, though, in which it doesn't matter. If all tasks require both memory and reasoning (and all did in this study), then the sum of someone's memory and reasoning ability is in effect a g score, because it will affect performance in all tasks.If so, it's academic whether this g score is 'really' monolithic or not. Imagine that in order to be good at basketball, you need to be both tall, and agile. In that case you could measure someone's basketball aptitude, even though it's not really one single 'thing'...Hampshire, A., Highfield, R., Parkin, B., and Owen, A. (2012). Fractionating Human Intelligence Neuron, 76 (6), 1225-1237 DOI: 10.1016/j.neuron.2012.06.022... Read more »


There's a theory that 'psychiatric diseases' like depression and schizophrenia aren't diseases, because they're not diagnosed on the basis of any kind of biological abnormality, but purely on symptoms - unlike 'real' diseases like cancer and AIDS.

Now, in my view there's quite a bit of truth in that - but there's also a serious flaw in the argument. Sometimes, disorders diagnosed on the basis of psychiatric symptoms do turn out to have had a clear biological cause. So the original diagnosis of a psychiatric disease was correct: there was indeed a disease. This is happening more and more often now because of biomedical advances.

A group of German neurologists and psychiatrists recently wrote about the case of a man diagnosed with bipolar disorder:

In February 2009, a 28-year-old presented to our clinic with a first episode of depression. He reported depressed mood, anhedonia, decreased drive, reduced alertness and concentration. The symptoms responded well to quetiapine 100 mg. Fourteen months later, a first manic episode with logorrhea [excessive speech], aggressive and disinhibited behavior occurred... it completely remitted after treatment with quetiapine 1000 mg. A diagnosis of bipolar I disorder was made. Two months later, the patient presented with another depressive episode... Despite treatment with quetiapine, aripiprazole, lithium, valproate and escitalopram, the patient did not improve...

So far, this seems like a fairly typical case of bipolar. However, it turned out that...

Neurological examination was remarkable for extrapyramidal symptoms with left-sided rigor and bradykinesia [slowed movements]. On initial and concurrent magnetic resonance imaging (MRI), numerous subcortical lesions in the frontal lobes were detected... Screening for autoimmune antibodies detected NMDAR antibodies.

It turned out the guy had autoimmune encephalitis: his body was generating antibodies that blocked the brain's key NMDA receptors (the drug ketamine does that too). Treatment with immunosuppressant drugs was started and he recovered fairly quickly. For a first-hand account of the disease, in which it was also initially diagnosed as a psychiatric disorder, see the recent book Brain On Fire.

Now, let's imagine that this had happened in 1960. What would the guy's story have been then? He'd have been seen by a psychiatrist and diagnosed with bipolar, just as he was today. Depending on how severe the depression was, and whether or not he had any more episodes, he might well have ended up in a psychiatric hospital.

But he probably wouldn't have been diagnosed with a neurological disorder. He'd have tested negative for all the neurological diseases known at the time. No-one tested for NMDA antibodies back then, because NMDA receptors weren't even discovered until 1981. It's true that his neurological exam showed a movement disorder (left-sided rigor and bradykinesia), but this might well have been written off as a side effect of the high-dose antipsychotics he was taking, which cause similar movement disorders.

50 years ago this guy, and many others like him, could well have ended up committed to an asylum. 100 years ago, I think it's almost certain he'd have been deemed 'insane' and locked up at some point. If so, some of the people in psychiatric hospitals 50 or 100 years ago will have had this disease - or others. And if we didn't know about anti-NMDA encephalitis until recently, who's to say what we'll discover next?

Choe CU, Karamatskos E, Schattling B, Leypoldt F, Liuzzi G, Gerloff C, Friese MA, & Mulert C (2012). A clinical and neurobiological case of IgM NMDA receptor antibody associated encephalitis mimicking bipolar disorder. Psychiatry Research PMID: 23246244


There's a lot of interest in the idea that ketamine provides unparalleled rapid, powerful antidepressant effects, even in people who haven't responded to conventional antidepressants. Earlier this year, I asked: Ketamine - Magic Antidepressant, or Expensive Illusion?

There have now been several studies finding dramatic antidepressant effects of ketamine, the "club drug" aka "horse tranquilizer". Great news? If you believe it. But hold your, er, horses... there's a problem. My concern was that although depressed patients certainly do report feeling better after an injection of ketamine, compared to people given placebo, that doesn't prove the drug is actually an antidepressant. Rather, patients might be experiencing an enhanced, 'active' placebo effect, because ketamine causes subjectively powerful hallucinogenic experiences. So the placebo-controlled trials weren't really blinded.

To settle the question, I suggested a three-way trial comparing ketamine both to an inert placebo and to some other hallucinogen; if ketamine has a specific antidepressant effect, it should produce more improvement than the comparison drug. This has never been done.

Given this background, a new trial from NIMH's Ketamine King, Carlos Zarate, makes interesting reading: A Randomized Trial of a Low-Trapping Nonselective N-Methyl-D-Aspartate Channel Blocker in Major Depression.

Zarate et al tried a novel drug, AZD6765, in depressed people. AZD6765 works much like ketamine in that it blocks brain NMDA receptors. But it is a less powerful 'trapping' blocker than ketamine, meaning that AZD6765 causes less dramatic effects on the target receptors, in some respects. In practice, this makes AZD6765 much less hallucinogenic than ketamine.

So it's interesting that, compared to placebo, the new drug only produced small benefits. On the MADRS depression symptom scale, patients felt a little better on AZD6765, but the boost only lasted a few hours. The effect was far smaller than in an earlier ketamine trial, as my crudely-mashed-up graph shows (although note that the patient populations were somewhat different - one bipolar and one unipolar depression - though their baseline severity was the same).

While on ketamine people experienced significant subjective effects, on AZD6765 they didn't, and couldn't tell whether they got drug or placebo. Is that why they got a smaller benefit? This is what we'd see if NMDA blockers do have a modest antidepressant effect but the dramatic improvements seen on ketamine are largely active placebo phenomena. Then again, it's also consistent with ketamine being a powerful antidepressant and AZD6765 just being less effective because it's a milder blocker of NMDA - effectively, a low dose of ketamine.

To tell the difference, we need... an active placebo controlled trial, like I've been banging on about for ages. But I wasn't the first one to suggest it - that was none other than Carlos Zarate et al in 2006.

Zarate CA Jr, Mathews D, Ibrahim L, Chaves JF, Marquardt C, Ukoh I, Jolkovsky L, Brutsche NE, Smith MA, & Luckenbaugh DA (2012). A Randomized Trial of a Low-Trapping Nonselective N-Methyl-D-Aspartate Channel Blocker in Major Depression. Biological Psychiatry PMID: 23206319


US states with more Google searches for suicide-related terms actually have higher suicide rates, according to a study just out. Researchers Gunn and Lester write that, across the 50 US states:

The association between suicide rates and the search volume for "commit suicide" was significant and positive [r=0.31, p=0.01]... "how to suicide" was marginally significant and positive [r=0.21, p=0.07]... Finally, "suicide prevention" was significant and positive [r=0.61, p=0.001].

This seems pretty convincing, although it's hard to know whether this represents suicidal people making the searches, as opposed to people searching in response to local suicides that had already happened. The fact that "suicide prevention" was the most strongly correlated with suicides, while "how to suicide" was the weakest, makes the latter explanation seem more plausible to me.

Previous suicide-search research has given mixed findings:

Sueki (2011) looked at variations in the volume of Google searches about suicide and depression in Japan by month from 2004-2012 and found that the monthly search volume for "suicide" and "suicide method" was not significantly correlated with the monthly suicide rate. However, searches for "depression" were positively associated with the monthly suicide rate, especially with a time lag of 1-3 months.

Over the past couple of years there's been a flurry of studies based on analyzing Google and Twitter trends. What's interesting to me is that we're really in the early days of this, when you think about likely future technologies. What will happen when everyone's wearing a computer 24/7 that records their every word and move, and even what they see? Eventually, psychology and sociology might evolve (or degenerate) into no more than the analysis of such data...

Gunn III, J., & Lester, D. (2012). Using Google searches on the internet to monitor suicidal behavior. Journal of Affective Disorders DOI: 10.1016/j.jad.2012.11.004


Neither medication nor psychotherapy is effective in improving the prognosis for youngsters considered to be at high risk of developing psychosis, according to a major study just published.

The idea of identifying and treating young people at risk of becoming psychotic - because of a family history of schizophrenia, or because they're showing some mild symptoms - has become very fashionable lately. But can we really do anything to pre-empt the disorder?

In this trial, 115 "ultra-high risk" Australian subjects were randomized to three different treatment conditions; those who didn't agree to treatment were simply followed up to see what happened. The treatments didn't work. Here's the smoking gun: all four of the subject groups did pretty much the same in terms of their likelihood of becoming psychotic. Neither cognitive therapy nor the antipsychotic drug risperidone (at a low dose) had any effect: those given 'supportive therapy' (basically, sympathetic chats) and a placebo pill did just as well.

There probably wasn't even a placebo effect: none of the three treatment groups did better than the people who got no treatment at all (the monitoring group) - although people weren't randomly assigned to that group, so that's a little less clear.

Is this a surprise? Yes, if you believed the early studies to examine this question, which claimed great things for drugs and therapy. But the current findings are no shock if you've been following the (much larger) recent trials - for example the British one from earlier in the year, which found zero benefit of cognitive therapy. Early small trials have a nasty habit of not working out in the long run.

The other lesson here is that even "ultra-high risk" folks usually don't become psychotic: only about 10-20% of them, in fact, became ill in the first two years of this study; the British results I mentioned are very similar. So is this really "ultra high"? Relatively, yes: even a 10% risk is far higher than the chance that a random person on the street would have. But in absolute terms, perhaps not.

A concern here is that rounding these folks up, labelling them and 'treating' them might make their lives worse, or even increase the risk of psychosis. That's not just my opinion: that's what the very cognitive therapists who eagerly run these trials believe (or ought to, if they're being consistent with their own theories). One of the key ideas in cognitive accounts of psychosis is that the belief and fear that one is 'going crazy', or otherwise abnormal, is itself a major source of stress that actually leads to a worsening of symptoms. What could be scarier than being told you're at "ultra high risk"?

Preventing psychosis is a great idea in theory. But most bad ideas are.

McGorry, P., Nelson, B., Phillips, L., Yuen, H., Francey, S., Thampi, A., Berger, G., Amminger, G., Simmons, M., Kelly, D., Thompson, A., & Yung, A. (2012). Randomized Controlled Trial of Interventions for Young People at Ultra-High Risk of Psychosis. The Journal of Clinical Psychiatry DOI: 10.4088/JCP.12m07785


Collembola, or "springtails", are a common group of bugs found all over the world - technically not insects, although much like them. There's no evidence that these critters parasitize humans - except for one strange scientific report claiming to have found Collembola body parts in skin scrapings from people diagnosed with delusional parasitosis, a psychiatric disorder characterised by the belief that one is infested with parasites. According to said 2004 paper by Altschuler et al, these patients are not delusional after all. The paper has been popular in the delusional parasitosis community.

However, insect expert Matan Shelomi says that Altschuler et al's best photo of the so-called springtails was probably Photoshopped. He explains that in the only picture to clearly show anything resembling a 'bug' (there were many others, but none look convincing), the raw microscope image shows nothing but a blurry blob. Altschuler et al claimed to have enhanced the contrast, but when Shelomi did that, there was still no visible critter. In the published image, however, a rather sinister bug is clearly seen. How did it get there? Either the image contrast was somehow selectively enhanced just for the 'bug' part - which, of course, presumes that the bug was there, and is quite invalid - or, more likely:

The level of detail present in Altschuler et al.'s enhanced image, particularly in the areas of the legs and a very odd pair of stripes along the abdomen, does not appear when contrast is applied equally. Such detail, however, can easily be created using functions such as Burn, Dodge, and Colorize on Photoshop®, when applied to select portions of the image manually as if via paintbrush.

However, Shelomi says, even if such fraud were proven, there may be nothing anyone can do: the journal the original paper was published in has since folded, so it would be impossible to retract it, and the author runs an independent non-profit and is hence not subject to scientific misconduct regulations.

Thanks very much to @benmeg for sending me a copy of this paper.

Shelomi M (2012). Evidence of Photo Manipulation in a Delusional Parasitosis Paper. The Journal of Parasitology PMID: 23198757


There's a belief that the colours we associate with the genders - pink for girls and blue for boys - used to be the other way around. About 100 years ago, we're told, boys wore pink, but during the early 20th century it flipped over. This is often used as an example of how arbitrary gender stereotypes are.

However, according to psychologist Marco Del Giudice, the whole "pink-blue reversal" is an 'urban legend'. He argues that there's really only anecdotal evidence for the existence of an earlier blue-girls, pink-boys association. The exceptions are four magazine articles - quoted in the paper that started the whole debate - which seem to provide documentary evidence. These associate girls with blue, but Del Giudice says that two of them might have been accidental typos that swapped 'pink' with 'blue', while the other two may have represented a sneaky campaign by early feminists to subvert the blue-pink patriarchy.

Hmm. I don't really buy that. However, Del Giudice has a stronger argument: according to Google Ngram, a searchable database of over 5 million books, there are lots of instances of the terms "blue for boys" and "pink for girls" going back to 1890, but none for the reverse at any point in time.

Fair enough. However, it's a big step from that to the idea that

the pink-blue convention may ultimately depend on innate perceptual biases toward different regions of the color spectrum in the two sexes (Hurlbert & Ling, 2007)

Del Giudice, M. (2012). The Twentieth Century Reversal of Pink-Blue Gender Coding: A Scientific Urban Legend? Archives of Sexual Behavior, 41(6), 1321-1323 DOI: 10.1007/s10508-012-0002-z


There's been lots of interest in the idea that ADHD meds reduce crime rates. No doubt, even as we speak, worried pundits are writing of how this is a worrying Orwellian scenario and yadda yadda. But what's really going on?

The research is from Sweden and published in the New England Journal of Medicine: Medication for Attention Deficit-Hyperactivity Disorder and Criminality. The first thing to note is that the study is not about giving medication in order to prevent crime; it was purely looking at what happened to people given ADHD medication for their ADHD.

In a nutshell, the authors found that people diagnosed with ADHD were about 10% less likely to be convicted of a crime during periods when they were on medication for the disorder. This was true of both men and women, and the effect was greater for the more serious offences. It was a huge study, with over 25,000 ADHD patients, and the data cover pretty much everyone in Sweden over the relevant period, so in that respect it's a very good study - although, speaking of Orwellian, such studies are only possible because of the Scandinavian tendency to keep national registers of everything.

Now, the big criticism here is that it's just a correlation; it doesn't prove that the meds were what prevented crime. It might be that ADHD meds have no effect on crime, but that people are less likely to commit crimes during periods when they have their lives sorted out (when they're 'on the rails'), one marker of which is that they're seeking treatment for their ADHD.

However, the authors found that periods of SSRI antidepressant use were not associated with changes in conviction rates. This is quite good evidence against the 'on the rails' critique, assuming that being prescribed SSRIs is as much a marker of being on the rails as being prescribed Ritalin is.

So, in my view, this is pretty good work - as good as any observational, non-randomized study. But remember: this is just about treating ADHD, not drugging criminals to stop crime.

Lichtenstein P, Halldner L, Zetterqvist J, Sjölander A, Serlachius E, Fazel S, Långström N, & Larsson H (2012). Medication for attention deficit-hyperactivity disorder and criminality. The New England Journal of Medicine, 367(21), 2006-14 PMID: 23171097
