Wednesday, 10 August 2016

Reading what other people are feeling is an important skill that helps us navigate conflicts, deepen relationships, and negotiate effectively. So what’s the best way to approach this? New research published in the Journal of Personality and Social Psychology suggests that most of us believe that the best approach is to trust our instincts. But the paper goes on to show that, on the contrary, accurate empathy comes from operating deliberately and analytically.

Christine Ma-Kellams of the University of La Verne and Jennifer Lerner of Harvard began by asking participants recruited online to consider a simple choice: whether to advise a struggling subordinate to take a more analytical approach or a more intuitive one. When the subordinate’s problems were framed generically, participants didn’t on balance prefer one answer over the other. But when the problem was to “become better at inferring the feelings of other people”, the advice to go on instinct was three times as popular as the analysis option.

The participants’ intuition isn’t so crazy. After all, other people’s emotions provoke an emotional reaction in us: consider the way infants mimic the expressions of their mothers, and the immediate visceral reaction we experience when we witness strong emotions in others. It’s also known that an over-analytical approach can be counterproductive in other contexts – for instance, verbally describing the details of a face can lead to poorer recognition of that face later on. But is it true in this case – is gut feeling really the better strategy?

To find out, Ma-Kellams and Lerner asked 72 managers and professionals on an executive training programme to participate in a mock interview, either as the interviewee or interviewer, and then to rate their own emotions, as well as those of their counterpart. Participants also demonstrated their tendency towards deliberate analysis by completing a cognitive reflection test, a set of problems where the intuitive answer is misleading. One problem asks: “a bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?” A 10c ball comes readily to mind, but a little mental labour shows the true answer to be five cents. Crucially, the better that participants scored on this measure, the higher their accuracy in rating their partner’s feelings.
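The arithmetic behind the bat-and-ball item takes only a couple of lines to verify – a quick sketch, not anything from the study itself:

```python
# Bat-and-ball: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting gives (ball + 1.00) + ball = 1.10, so 2 * ball = 0.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

assert abs((bat + ball) - 1.10) < 1e-9   # the pair totals $1.10
assert abs((bat - ball) - 1.00) < 1e-9   # the bat costs $1.00 more

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

The intuitive "10 cents" fails the second constraint: a $1.00 bat plus a $0.10 ball differ by only 90 cents.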

In a larger sample (n=449) from the training programme, the researchers found a similar association between cognitive reflection and accuracy in judging emotions from photographs, using the standardised Reading the Mind in the Eyes Test, in which participants select the best fitting emotion from a menu for each face viewed. In this case, the participants also completed an intelligence test, as the researchers thought higher intelligence could be driving scores in both the cognitive reflection test and the empathic task. But controlling for intelligence, the association between cognitive reflection and emotional accuracy remained.

In a final study, Ma-Kellams and Lerner showed that experimentally inducing a more reflective, analytic mindset produced better emotion-reading skills. Seventy-four participants from the same executive course were randomly allocated to one of two conditions: to write about either a situation in which following instincts led in the right direction, or one where reasoning through the issue led to the right outcome. Participants then took part in interviews as before, and those in the second group were significantly better at reading the emotions of their counterpart.

Why is the popular belief at odds with the reality? There is a strong association between the concepts of instinct and emotion generally, and popular culture often depicts rational people as confounded by emotional displays. In addition, under some conditions instinct may give us all we need, as when an emotional display is very pronounced and unmasked. But more often, what we get is subtle or ambiguous, and it can take work to read it. Whether a weak smile is an attempt at warmth or an expression of regret depends upon the details, and requires us to pull together the available clues. After all, the realm of human emotion is rarely an open and shut case.

Tuesday, 9 August 2016

This is Episode Seven of PsychCrunch, the podcast from the British Psychological Society's Research Digest, sponsored by Routledge Psychology.

Can psychology give you a competitive edge in sport? Our presenter Christian Jarrett learns about the importance of having the right competitive mindset, and how to use self-talk and positive imagery to boost your sporting performance.

Being watched encourages us to be nicer people – what psychologists call behaving “pro-socially”. Recent evidence has suggested this effect can even be driven by artificial surveillance cues, such as eyes pictured on-screen or painted on a donations jar. If true, this would offer up some simple ways to reduce low-level crime and, well, to encourage us all to treat each other a little better. But unfortunately, a new article in Evolution and Human Behavior calls this into question.

The research background on this topic is a mix of positive and negative findings. Also, while some studies purport to explain the inconsistent results, such as surveillance cues being effective only when few other people are present to apply social pressure, other studies have found the opposite effect. All this raises the question of whether an effect exists at all.

The new article consists of two meta-analyses, which look at the big picture by combining results from past work. Stephanie Northover and her colleagues decided to focus on the effects of surveillance cues on people’s generosity – the behaviour that’s been most studied in the past (other studied behaviours include littering and handwashing).

One meta-analysis (combining results from 26 experiments involving 2,700 participants) looked at the effect of artificial surveillance cues like watching faces or eye-like symbols upon the mean amount given in generosity tasks; the other (27 experiments, nearly 20,000 participants) looked at whether people at least give something rather than nothing. Based on all these data, there was no effect of surveillance cues on people’s generosity (in statistical terms the effect sizes were .03 and .13 respectively). For anyone hoping to increase good behaviour by sticking a pair of eyes on the wall, the results are not promising. “Skepticism is warranted,” the researchers said.
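The pooling at the heart of a meta-analysis like this is simple to sketch: each study's effect size is averaged, weighted by its precision. Here is a minimal fixed-effect, inverse-variance version; the (effect, standard error) pairs are invented for illustration and are not Northover et al.'s data:

```python
# Fixed-effect inverse-variance pooling of standardised effect sizes.
# Each study contributes (d, se); weights are 1/se^2, so precise
# studies count for more. The numbers below are made up.
studies = [(0.10, 0.20), (-0.05, 0.15), (0.08, 0.25), (0.00, 0.10)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled d = {pooled:.3f} (SE {pooled_se:.3f})")
```

With small, conflicting effects like these the weighted average lands near zero, which is essentially the pattern the paper reports for surveillance cues.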

Monday, 8 August 2016

When viewing a film genre that supposedly “matches” our gender, we build up stronger memory “schema”

Psychology research has shown that men and women usually remember things differently. For instance, women on average are better at recalling emotional and social stimuli, whereas men are better at remembering episodes of violence and recognising artificial objects such as cars. The usual explanation is that men and women have different interests and motivations.

Now a study in Applied Cognitive Psychology has added to this literature by testing whether men and women differ in how much they remember of clips from rom-com movies and action films.

Across two experiments, the researchers Peter Wühr and Sascha Schwarz recruited hundreds of German men and women, mainly students, to watch a thirty-minute clip of a genre classic. In the first experiment, this was either Die Hard with a Vengeance’s cat-and-mouse game between terrorist and embattled everymen, or the will-they-won’t-they love affair of Notting Hill. The second experiment featured clips from German versions of the French romance Amelie or the action flick Gomez & Tavares.

After watching the film clip, participants answered questions about character attributes, storyline, visual details, events, and places – essentially a film trivia quiz – and said how much they liked the film. Men in both studies showed relatively better recall for the action movie than the romance, whereas women remembered more of the romantic comedy than the action film. The analysis controlled for previous familiarity with the film, so this didn’t appear to be driving the effect.

One potential explanation for the gender difference is that women pay more attention to romantic comedies, and men to action films. Wühr and Schwarz used the ratings of film liking to judge how engrossed participants were in the clips. Although women showed a relative preference for Notting Hill over Die Hard, there was no female preference for Amelie in the second experiment, and in any case, in both experiments, how much a participant said they liked a film was not correlated with how much of it they remembered.

Wühr and Schwarz suggest an alternative explanation: that when viewing a film genre that supposedly “matches” our gender, we build up stronger memory “schema” – that is, plot/event skeletons of “what’s supposed to happen” that we can then easily fill in, whether we enjoy the specific instance or not. They said this could be due to higher past exposure to films that match our gender, or to greater mental effort while watching a gender-matched film.

Whatever the reason, the research suggests that our minds are conditioned in a gendered way to process different kinds of stories differently. This is consistent with the idea that our existing interests and history prepare us to capture and remember personally relevant information more easily. This is unsurprising, perhaps, for technical areas – that a chess master could easily recreate a move sequence that novices could not – but it’s more remarkable that this applies even within mainstream, supposedly fully accessible activities like watching blockbuster films.

Saturday, 6 August 2016

Our editor's pick of the 10 best psychology and neuroscience links from the last week or so:

How To Talk So That People Listen
At the recent Latitude Festival Psychologist magazine editor Jon Sutton was in conversation with Elizabeth Stokoe, Professor of Social Interaction at Loughborough University – follow the link for a transcript of the event (a related podcast will be available soon).

Friday, 5 August 2016

According to Hollywood stereotypes, there are the clever, nerdy young people who spend most of their time sitting around thinking and reading, and then there are the jocks – the sporty, athletic lot who prefer to do as little thinking and studying as possible. This seems like a gross over-simplification and yet a new study in the Journal of Health Psychology suggests there may be a kernel of truth to it.

The researchers, led by Todd McElroy at Florida Gulf Coast University, gave an online test of "Need For Cognition" to lots of students, to find 30 who expressed a particularly strong desire to think a lot and 30 others with a strong preference to avoid anything too mentally taxing. This test has been around for over three decades and it involves people rating how strongly they agree with items like "I really enjoy a task that involves coming up with new solutions to problems" and "I only think as hard as I have to". The 30 thinkers and 30 non-thinkers then wore an accelerometer on their wrist for 7 days, to provide a constant measure of how physically active they were during that time.

The thinkers were "far less active" Monday to Friday than the non-thinkers – a difference that the researchers described as "robust" and "highly significant" in statistical terms. At the weekends there was no difference in activity levels between the groups.

The weekday result makes sense in light of past research from the 90s that showed non-thinkers are more prone to boredom than thinkers, and find boredom more aversive. Perhaps non-thinkers resort to physical activity as a way to escape their inner worlds.

Remember this research featured just 60 people, and the results might not generalise to non-students or to other cultures. The lack of an effect over the weekend also awaits explanation. Nonetheless, given the adverse health effects of a sedentary life, McElroy and his colleagues said that more cerebral folk might want to take note, adding that a quick visit to the gym is not enough – you need to raise your overall activity levels, perhaps by investing in a standing desk.

"Ultimately, an important factor that may help more thoughtful individuals combat their lower average activity levels is awareness," the researchers said. "Awareness of their tendency to be less active, coupled with an awareness of the cost associated with inactivity, more thoughtful people may then choose to become more active throughout the day."

Thursday, 4 August 2016

The message from recent surveys is that it's not just people with a diagnosis of schizophrenia who hear voices in their heads – many people considered mentally well do too. This revelation may have a welcome de-stigmatising effect in terms of how people think about some of the symptoms associated with a diagnosis of schizophrenia, but a new study published in Psychosis asks us to hang on a minute – to say that one "hears voices" can mean different things to different people. You might assume that "hears voices" means that a person has a hallucinated auditory experience, just as if someone were talking to them. But what about hearing an inner voice that is experienced like an out-of-control thought rather than an external voice? Or a heard voice that's not like either a thought or an external voice?

Our knowledge of the experience of voice hearing among patients has been limited by the fact that a lot of psychiatric research in this area (though not all) has been categorical in nature. For instance, a typical psychiatric scale used in research or the clinic includes a vague item like "[Patient] Reports voices that no one else hears" and a tick here can conceal a huge range of different experiences.

For the new research, Nev Jones and Tanya Luhrmann conducted in-depth interviews with 80 people diagnosed with schizophrenia in the US, India and Ghana about their first-hand experiences of hearing voices. There was great variety between participants in their descriptions of voice hearing and also within each individual participant's own descriptions. Perhaps most importantly, while 79 per cent of the participants reported at least some limited experience of the hallucinated sound of external voices – as if someone was audibly speaking to them – a much smaller proportion (17.5 per cent) said this was their dominant experience of "voice hearing".

The most common dominant experience for the participants was to say they had a mixture of auditory and thought-like voices – 29 per cent reported having this. Another 15 per cent said their dominant experience was to hear thought-like voices that seemed "foreign and alien" but clearly "non-auditory". Other categories were "in-between", being neither auditory nor thought-like; "limited auditory" (an auditory experience but just simple words or sounds); "transformed", which is when real voices or sounds were misheard as saying something different; and "multi-sensory", involving visual experiences as much as or more than auditory – for example, one participant described how "the voice will show me all my enemies".

Intriguingly, some participants described how a voice could begin as thought-like, but if they ignored it, then it became more auditory: "If I try ignoring them inside my brain, like they come out. They start telling me things."

The researchers said the interviews showed there is huge variety in the ways that people diagnosed with schizophrenia hear voices, and that it is very difficult for both patients and clinicians to find ways to accurately describe these experiences. Another thing – many of the participants said that what they found most difficult about "hearing voices" was the disruption to their thoughts, rather than the sensory aspect of the experience. Indeed, by emphasising the normality of the auditory aspect of hearing voices, the researchers said there was a risk of mental health professionals presenting a "misleading view of what at least some patients are in fact struggling with".

Wednesday, 3 August 2016

Just as Paul Simon sang about 50 ways to leave your lover, a new study published in the Journal of Applied Psychology suggests there are seven ways to leave your employer. This research is the first to give us a map of "resignation styles," as well as their causes and their impact on others.

The researchers Anthony Klotz and Mark Bolino generated their taxonomy of resignation styles from several sources, including a survey of 53 students who described how they had resigned from a previous full-time job, and 423 more people surveyed online who had resigned from a full-time job in the past year (38 per cent women, average age 38). Here are the seven styles of saying “I quit” that the researchers identified:

The most common (31 per cent) was By The Book, involving a face to face conversation accompanied by a resignation letter, openness about the reason for departure, and a standard notice period.

Twenty-nine per cent took a Perfunctory approach – going through the motions above, but in a clinical fashion, without elaborating on their reasons for leaving.

Grateful Goodbyes (nine per cent) were positive and involved willingness to make the departure as painless as possible for the supervisor and team.

In eight per cent of cases, the supervisor was kept In The Loop, with the resignation the culmination of a process that was out in the open, such as applications and acceptance to graduate school.

Nine per cent of cases were Avoidant, minimising contact with the boss through a third party like HR, or by sending a message over the weekend to break the news.

Impulsive Quitting, a style that has been described in past research, occurred in four per cent of cases, and typically involved a precipitating incident or building frustrations reaching breaking point. In these cases, the quitting conversation was the end of the relationship: “she screamed and cussed at me and hung up the phone ... I left and never picked up the phone for her again.”

Finally, ten per cent were Bridge Burners – one short and sweet description being “Told my boss to —— off.” In such cases notice periods were, unsurprisingly, short.

Klotz and Bolino found that people who burned bridges or impulsively quit reported higher levels of abuse from their supervisors, and a perception that they were treated unfairly, compared with those who left more gracefully.

In a final study, the researchers were interested in the impact of these different types of departure on managers and supervisors. Nearly five hundred adults were asked, in an online experiment, to reflect on the resignations they had received (median two experiences) and then to read a hypothetical scenario in which a subordinate resigned in one of the seven styles, before reporting their levels of positive and negative emotion.

Overall, participants on the receiving end of a resignation reported more negative emotions than positive ones, but their positive emotions were higher, and their negative ones lower, when the approach used was by the book, grateful, or involved being kept in the loop (in which case positive emotions actually outweighed the negative). The researchers call attention to perfunctory resignations, which were far more common than the overtly hostile styles but led to similarly negative reactions; at least in this experimental context, a clinical but formally impeccable departure can still be upsetting.

When research looks at the impact of different workplace practices, a key measure is often employees’ ‘intention to leave’. The current work reminds us that once you decide to leave, you also have a choice in how to leave, with implications for supervisors, team members, and the organisation more broadly. Klotz and Bolino open their paper with a particularly dramatic example of these ripple effects – banking executive Greg Smith’s NY Times article, published after his resignation, deriding former employer Goldman Sachs in a way that shocked customers and rocked the company. In addition, organisational turnover can show contagion-like effects, and by differentiating different forms of turnover, this work helps us understand when resignations are likely to matter most.
_________________________________

Friday, 29 July 2016

In a sense we're all amateur psychologists – we've got our own first-hand experience at being human, and we've spent years observing how we and others behave in different situations. This intuition fuels a "folk psychology" that sometimes overlaps with findings from scientific psychology, but often does not. Some erroneous psychological intuitions are particularly widely believed among the public and are stubbornly persistent. This post is about 10 of these myths or misconceptions. It's important to challenge these myths, not just to set the record straight, but also because their existence can contribute to stigma and stereotypes and to misinformed public policies in areas like education and policing.

1. We learn more effectively when taught via our preferred "learning style"
This is the idea that we each learn better when we're taught via our own favoured modality, such as through visual materials, listening or doing. A recent survey of British teachers found that over 96 per cent believed in this principle. In fact, psychology research shows consistently that people do not learn better when taught via their preferred modality, and that instead the most effective modality for teaching usually varies according to the nature of the material under study. There are also issues around defining learning styles and how to measure them. Most published scales for measuring learning styles are unreliable (they produce different results on each testing), and they often fail to correlate with people's actual learning performance.

2. Human memory is like a recording of what happened
The metaphor of memory as a recording is inappropriate because it implies an unrealistic level of accuracy and permanence. Our memories actually represent a distorted version of what happened, and they change over time. And yet a survey of nearly 2000 people from a few years ago found that 63 per cent believed "memory works like a video camera". This misunderstanding fuels related misconceptions, for example around the trustworthiness of eye-witness testimony. For example, many judges and police believe that the more confident a witness is in their memory, the more accurate they are likely to be, even though psychology research shows that confidence and accuracy are not correlated or only weakly correlated.

3. Violent offenders usually have a diagnosis of mental illness
When people with mental health problems commit violent crimes, the media takes a disproportionate interest. No wonder that surveys show that most of the public believe that people with mental illness are inherently violent. In fact, as Scott Lilienfeld and his colleagues explain in 50 Great Myths of Popular Psychology, the evidence suggests that at least 90 per cent of people with mental illness do not commit violent acts, and the overwhelming majority of violent offenders are not mentally ill. Some patients with specific conditions (such as command-based hallucinations "telling them" to commit acts) are at increased risk, but actual acts of violence are rare. A telling meta-analysis from 2011 concluded that 35,000 high-risk patients with a diagnosis of schizophrenia would need to be permanently watched or incarcerated to prevent one killing of a stranger by a patient.

4. Crowds turn people stupid and dangerous
After a mass emergency, it's typical for reports to describe the crowd as "stampeding" in blind panic. There's an implication that when we're in a large group, we lose our senses and it's everyone for themselves. This characterisation is refuted by psychology research on crowd behaviour that's shown panic is rare and people frequently stop to help one another. Cooperation is particularly likely when people feel a shared sense of identity. Psychologist John Drury made this finding based partly on his interviews with people caught up in real-life emergencies, such as the overcrowding that occurred at a Fatboy Slim concert on Brighton beach in 2002. Drury and his colleagues argue this has implications for the handling by authorities of emergency situations: "Crowds in emergencies can be trusted to behave in more social ways than previously expected by some involved in emergency planning," they wrote.

5. Autism is caused by "broken" mirror neurons (and numerous other autism myths)
Writing in 2011, the famous Californian neuroscientist V.S. Ramachandran stated "the main cause of autism is a disturbed mirror neuron system". Mirror neurons are cells that respond when we perform an action or see someone else perform that action. The "broken mirror" autism hypothesis is a catchy idea that attracts plenty of coverage and is frequently recycled by popular science writers (for example, writing in the Daily Mail, Rita Carter said "autistic people often lack empathy and have been found to show less mirror-neuron activity"). However, a review published in 2013 of 25 relevant studies found no evidence to support the hypothesis, and just this month another study provided yet more counter-evidence. This is just one misconception about autism – others are that it is caused by vaccines and that everyone with autism has a rare gift.

6. Vision depends on signals emitted from the eyes
In reality, human vision depends on light rays hitting the retina at the back of the eye. Yet the ancient and wrong idea that it works the opposite way – with rays coming out of the eyes into the world – is still believed by many people, at least according to surveys conducted in the 1990s and 2000s. For example, roughly a third of university students were found to believe that something comes out of the eyes when we see things. Quite why this misconception remains so stubborn is unknown, but we can speculate that it is because, from a subjective perspective, things appear "out there" and also because of the widespread experience people have of "feeling" that they are being stared at. In fact, controlled experiments have shown that while many people clearly do think they've felt someone's stare, they can't actually detect whether someone is staring at their back or not.

7. The Stanford Prison Experiment shows how the wrong situation can turn anyone bad
One of the most infamous studies in psychology, the Stanford Prison Experiment conducted in 1971, involved student participants being allocated to the role of prisoner or guard, and it had to be aborted when the guards became abusive. Philip Zimbardo who led the study said it showed how certain situational dynamics can turn any of us bad, and this meme of "bad barrels" rather than "bad apples" has entered the public consciousness. Zimbardo even acted as an expert witness for the defence in the real-life trial of one of the abusive guards at Abu Ghraib. But the Stanford Experiment was highly flawed and has been misinterpreted. Later research, such as the BBC Prison Experiment, has shown how the same situation can lead to cooperative behaviour rather than tyranny, depending on whether and how different people identify with each other. Unfortunately, many modern psychology textbooks continue to spread a simplistic, uncritical account of the Stanford Experiment.

8. The overwhelming majority of acts of domestic violence are committed by men

A British survey published in 2014 found that over 65 per cent believed it was probably or definitely true that domestic violence is overwhelmingly committed by men. It's easy to understand why – being bigger and stronger, on average, men are seen as more of a threat. Yet official statistics (cited by Scarduzio et al, this year) show that violence against men by women is also a major problem. For example, The National Intimate Partner and Sexual Violence Survey in the US found that one in four men had experienced physical violence, rape, and/or stalking from a partner (compared with one in three women) and that 83 per cent of the violence inflicted on men by partners was perpetrated by women. This is not to diminish the seriousness or scale of the problem of partner abuse by men toward women, but to recognise that there is also a significant, lesser known, issue of women being violent toward men.

9. Neurolinguistic Programming is scientific
It's true that a minority of psychologists are trained in neurolinguistic programming (NLP) and advocate its use, but it is a serious error to think that NLP is grounded in scientific findings in either psychology or neuroscience. In fact the system – which is usually marketed as a way of achieving greater personal success – was developed by two self-help gurus in the 1970s who simply made up their own psychological principles after watching psychotherapists working with their clients. NLP is full of false claims that sound scientific-ish, such as that we each have a preferred "representational system" for thinking about the world, and that the best way to influence someone is to mirror their preferred system. A forensic trawl through all the claims made in NLP programmes found that the overwhelming majority are piffle. In many contexts, this may be harmless, but in 2013 a charity was brought to book for offering NLP-based therapy to traumatised war veterans.

10. Mental illness is caused by a chemical imbalance in the brain
One survey in the US from a few years ago found that over 80 per cent of people believed that mental illness is caused by a chemical imbalance in the brain. In fact, ask any psychiatrist or neurologist and if they're honest they'll tell you that no one knows what the "correct" balance of chemicals in the brain should be. Part of the support for the imbalance idea comes from the fact that anti-depressant medication alters levels of neurochemicals in the brain, but of course that doesn't mean that a chemical imbalance causes the problems in the first place (any more than a headache is caused by a lack of paracetamol). The myth is actually endorsed by many people with mental health problems and by some mental health campaigners, partly because they believe it lends a medical legitimacy to conditions like depression and anxiety. However, research has shown that biological accounts of mental illness (including the chemical imbalance theory) can increase stigma, for example – by encouraging the idea that mental health problems are permanent.

Thursday, 28 July 2016

In the 1950s, the American psychologist Harry Harlow famously showed that infant rhesus monkeys would rather cling to a surrogate mother covered in cosy cloth than to a bare wire one that provided milk. A loving touch is more important even than food, the findings seemed to show. Around the same time, the British psychoanalyst John Bowlby documented how human children deprived of motherly contact often go on to develop psychological problems. Now this line of research has entered the neuroscience era with a study in Cerebral Cortex claiming that children with more tactile mothers tend to have more developed social brains.

Jens Brauer and his colleagues videoed 43 mum-child dyads as they sat together on a couch and played with a Playmobil Farm. The mothers knew they were being filmed but didn't know the aims of the study. There were 24 boys and 19 girls and their average age was 5.5 years. Coders then watched back the videos and counted every instance that the mothers touched their child or vice versa. Finally and within the next two weeks, the researchers scanned each child's brain while they lay as still as possible looking at a lava lamp screensaver (a brain imaging technique known as a resting-state scan).

The researchers were particularly interested in levels of resting activity in the children's brains in a network of areas known to be involved in functions such as empathy and thinking about other people's mental states – sometimes referred to as the "social brain". They found that the children who were touched more by their mother in the ten-minute play session tended to have more resting activity in the social brain, especially the right superior temporal sulcus (STS). Children who received more touch also showed more resting connectivity between different functional nodes within their social brain, such as between the STS and the inferior frontal gyrus and the left insula.

Children touched more by their mother also usually touched their mothers more, but the links between mothers' touch and the children's neural activity were still significant after factoring this out.
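Statistically, "factoring out" the child's own touching amounts to a partial correlation: the association between mother-touch and neural activity after removing what both share with child-touch. A minimal sketch – the numbers are made up for illustration and are not the study's data:

```python
import math

def pearson(a, b):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    """Correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# Hypothetical per-dyad counts and a stand-in activity measure.
mother_touch = [12, 8, 15, 5, 20, 9, 14]
child_touch  = [10, 7, 13, 6, 18, 8, 12]
activity     = [0.42, 0.35, 0.50, 0.30, 0.55, 0.33, 0.47]

print(round(partial_corr(mother_touch, activity, child_touch), 3))
```

If the partial correlation stays sizeable (and significant) once child-touch is controlled for, the mother-touch/brain link is not merely a reflection of how much the child reciprocated – which is the check the researchers report.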

Previous research has found that greater resting activity in a person's social brain is linked with their social and emotional abilities, such as being able to take other people's perspective. Based on this, the researchers said "one may speculate that children with more touch more readily engage the mentalizing component of the 'social brain' and that, perhaps, their interest in others' mental states is greater than that of children with less touch."

The research has some serious limitations, most obviously – and as the researchers acknowledged – that the results are correlational, so it's possible unknown factors are driving differences in amounts of motherly touch and in the children's brain development. For example, perhaps some mothers are more engaged on many levels, including talking to their children more. Such mothers might be more tactile, but it could be, for instance, the way they talk to their children that is responsible for the brain differences. Another major factor, not mentioned by the researchers, is potential genetic effects. The same genes driving tactile behaviour in mothers might be passed down to their children, influencing their brain development. It's also worth noting that it remains to be seen if similar results would be found for levels of touch from a father or other caregiver.

These issues aside, Brauer and his colleagues ask us to consider their results in light of animal research that is able to experimentally control how much motherly touch different individual animals are exposed to. This has shown that greater maternal touch is associated with important brain changes in rats, for example in the way their brains respond to stress, and that rats raised with more touch go on to be more tactile towards their own offspring. "On the backdrop of this work then, it is not unreasonable to suspect a potential causal role of touch for human development," the researchers said.