Archive for the ‘Thinking about psychiatry’ Category

Speaking generally, in its approach to the study and treatment of mental disorders, Western psychiatry has tended to ignore socio-cultural factors, preferring instead to conceptualize the illnesses with which it is concerned as having a biological basis and a single aetiology and presentation. In this view, the mental disorders recognised by the West are universal, while those found elsewhere are marginalized and considered culture bound. All cultures do recognise human behavioural breakdowns – sustained anomalous behaviours judged negatively and regarded as disruptive to organized social life – but this does not mean that there is a single ‘true’ psychiatry.

Whilst there are likely to be some universal – biological – processes involved in the aetiology of mental disorders, to conceive of human behavioural disturbance simply in terms of chemical processes is simplistic. The causes of mental disorders are multi-determined, with socio-cultural factors playing a crucial role. Human beings and their cultures are not separable but interdependent and reflective of one another. The culture of individuals will interact with biological, psychological and environmental variables to determine the causes and manifestations of mental disorder. The symptoms, meanings and appropriate treatments of mental disorders are then likely to vary across cultures. That the dividing line between the sane and the insane is culturally determined is clear, as it is constantly being readjusted even in Western medicine; homosexuality, for instance, crossed over from mental disorder to normal behaviour in 1974*. A behavioural disturbance seen in another culture may resemble one identified by Western medicine, and a Western treatment may even be of assistance in its resolution, but to disregard the local viewpoint and impose a Western one risks medical imperialism.

Concepts of mental illness in non-Western cultures can be markedly different. People in non-Western cultures, for instance, often appear to emphasize somatic symptoms when presenting with a depression-like illness, perhaps because of beliefs about the integration of body and mind. Furthermore, emotional states that appear quite fundamental from the perspective of an English speaker are not always mirrored in the lives and languages of other cultures. It is often very difficult to find words or phrases for ‘depression’ in non-Western lexicons. Such difference is even plainer when we consider that whilst the English language contains over 2,000 emotion words, most languages contain fewer than 200.

Of course the reality is much more nuanced than this ‘West vs. the rest’ scenario, and in any one place or culture the explanatory models used to account for disease and illness often vary between different social groups occupying the same location at the same point in time. For example, there may be differences in explanatory models between townsfolk, traditional healers and Western-trained professional elites. Western ideas, however, have great influence, not least in scientific inquiry, where much cross-cultural psychiatric research has been undertaken by academic psychiatrists using Western psychiatric concepts to explain the behaviour of non-Western people. Such activity often uses the definitions of mental illness stated in the DSM and ICD-10 manuals. These are supposedly objective accounts, but are in fact themselves culture-bound documents, representing the attempts of one particular group of people to make sense of human behavioural breakdowns. It is social anthropologists rather than psychiatrists who have been interested in exploring concepts of normality and abnormality in different cultures.

Ideas of the universality of the Western psychiatric model are extremely powerful, but we cannot assume that because Western mental phenomena can be identified in non-Western settings, they mean the same there as they do in the Western world. This is an important topic, as the WHO has said that, within a decade, depression will globally be second only to cardiovascular disease in terms of disease burden. A more culturally relativistic approach would find this concerning, as ‘depression’ is merely a descriptive syndrome, highly heterogeneous and socially shaped. It is therefore unsuitable to be regarded as a universally valid mental health disorder. Framing people’s difficulties as belonging to the realm of mental health raises a familiar concern: that to act in this way is to draw attention away from other causes of their distress, for instance poverty or lack of rights.

Consider then that despite its prominence, Western psychiatry is simply one of a number of ethno-psychiatries. It possesses however one important difference: it is the only psychiatric paradigm with the power to project its conclusions onto the rest of the world.

* According to the American Psychiatric Association, which decided to remove homosexuality from the DSM following a vote

The idea of ‘self’ is difficult to define but represents a set of ideas, representations and beliefs that are held about what it is to be a person. As psychiatry is a subject concerned with thoughts, feelings, behaviours and relationships, how people view themselves and the accompanying attitudes of psychiatrists to this (our ‘gaze’) are central to its execution. In Western cultures, we overwhelmingly choose to define ourselves in terms of our individual direction and achievements. This orientation is often portrayed as an objective truth, but is in fact simply an extremely powerful cultural construction.

Construction or not, individualistic ideology has had a substantial influence on thinking about mental distress. Psychiatrists have based much of their work on individualistic notions, which as a consequence assume that emotional problems can be studied and understood separately from any other context. When seeking to diagnose an individual as having a mental health disorder, current classification systems, when rigidly interpreted, require no consideration to be given to circumstances beyond a patient’s psychopathology. Forms of emotional distress are then defined in terms of disordered individual experience, with social and cultural factors seen as secondary and taken into account only optionally.

This approach sits uneasily alongside patient experience. The lives of people with mental health problems have often been very eventful, and normally not in a good way. The message that life stories are largely irrelevant is then not always popular. Gail A. Hornstein writes in OpenMind on this subject:

Many patients feel deeply wounded by the assumption that madness has no link to life experience. As Jacqui Dillon, Chair of the National Hearing Voices Network, England, said at a recent conference, “Pathologising the experience of people like me, who have suffered terrible trauma, only adds insult to injury and protects those who have abused us. Instead of asking, what’s wrong with you? people should ask, what’s happened to you?”

Our individualistic beliefs are understandable. They are welcomed by some patients, as they allow entry to the sick role, and it can be comforting to regard suffering as something separable from the self which can be passed over to an expert for amelioration. It would also be strange if psychiatry had been immune to this central tenet of capitalist societies, and the approach also proves expedient for research, where individual phenomena can be captured by way of surveys and rating scales.

As a profession, however, I would hope that we could collectively be more ‘self-aware’ in this regard. This is not to suggest that mental health professionals are deliberately ignoring patients’ stories, that they are bad people, or even that mental health systems have been purposely set up to ignore the needs of vulnerable groups, but it is interesting how dominant and rarely questioned ideas and discourses can work to render us blind to systemic inconsistencies and inadequacies.

The current paradigm allows the social and ideological origins of distress to be ignored and their implications side-stepped. Our helpful – but not too helpful – approach makes possible the propagation of mental health services, which are in turn sustained by a fragmented and individualistic society.

In order to be truly transformative, mental health services would need to be honest about the social, political and ideological conditions that often lead to mental distress. Alas, even if this were to magically happen, the message would be lost unless there were a corresponding move in greater society toward a value system where people seek satisfaction more from helping others than from pursuing private advantage.

When constructing the self, the child internalizes historically and culturally determined values. It is therefore possible that the self as known to people of the past may have been quite different from the self as known to people living in the modern world. Roy Baumeister has argued that for medieval Europeans, the self was relatively transparent, and was equated with visible manifestations and actions. As life on earth was, at that time, believed to be a preamble to eternal bliss, there was no need to search for self-fulfilment. In modern Western societies, in contrast, the self is often viewed as a hidden territory that can only be known with difficulty, but which must be explored (perhaps with the technical assistance of a psychotherapist) if its special talents are to be fostered and self-actualization achieved.

The medical model of mental illness has facilitated the move towards greater restriction by cloaking it under the mantle of treatment. This process of medicalisation of deviant behaviour conceals complex political issues about the tolerance of diversity, the control of disruptive behaviour and the management of dependency. It enables a society that professes liberal values and individualism to impose and reinforce conformity. It disguises the economics of a system in which human labour is valued only for the profit it can generate, marginalising all those who are not fit or not willing to be so exploited.

Well before the rise and fall of Michael Jackson, aka ‘Wacko Jacko’, the idea that the gift of exceptional creativity or ‘genius’ all too often comes packaged with mental ill health has long had cultural currency. If someone mentions this within our earshot at a party, should we mercilessly expose their naivety, or is there substance to it?*

There’s an immediate problem with definition: ‘exceptional creativity’ or ‘genius’, ‘madness’ or ‘mental disorder’ are in themselves difficult to define exactly, and a full examination of their meanings would amount to a weighty tome. All these terms are in fact more or less vague, and at best we can try to offer them a degree of precision by anchoring them within a set of terms we hope are more exactly understood. There is no agreed definition of mental ill health, and indeed the various ways in which psychiatric problems as a category are referred to – mental ill health/disorder/disease are three – are not synonyms. Equally, the Oxford English Dictionary offers eight meanings for ‘genius’, the most relevant of which for this purpose is ‘native intellectual power of an exalted type, such as is attributed to those who are esteemed greatest in any department of art, speculation, or practice; instinctive and extraordinary capacity for imaginative creation, original thought, invention, or discovery’.

With this difficulty noted, what can studies tell us? Looking to the past, the mood disorders of poets born in Britain and Ireland in the hundred years 1705–1805 have been investigated. This period includes esteemed figures such as Lord Byron, Samuel Johnson, William Blake and William Wordsworth. A high rate of mood disorders was found, with this group 30 times more likely to suffer bipolar disorder and five times as likely to commit suicide. These results are striking, but problematic. It can be difficult enough to determine whether someone you are directly interviewing is mentally disordered, so the reliability of a diagnosis made across the passage of centuries from biographical data is seriously in question. Furthermore, we have really no idea what minds were like in the past, and in diagnosing historical figures with mental disorders characterised well after their deaths, we must recognise that we project ourselves onto them.

Looking at living people avoids some of these difficulties. One study interviewed a group of 47 eminent British writers and artists and found that 38% had been treated for mood disorder. The poets involved were particularly unfortunate: half had needed hospitalization.* In line with speculation that bipolar patients are particularly creative, many of the subjects reported changes in mood, cognition and behaviour either preceding or coinciding with the creative process. In a similar study on the other side of the Atlantic, a group of 30 creative writers living in Iowa was interviewed. The researcher was expecting to find a correlation between creativity and schizophrenia, but no such correlation was seen. There were, however, abnormally high levels of mood disorder in both the writers and their relatives; 80% of the sample had experienced at least one episode of major depression, hypomania or mania, compared with 30% of the control group. The group was followed for the next 15 years, and it was found that 43% had bipolar disorder compared to only 10% of the control group and 1% of the general population.

Two further studies seem to confirm these findings. In Denmark, bipolar patients and their relatives were interviewed about their lives and their responses evaluated using a standard measure of lifetime creative achievement. The patients and their relatives both scored higher than the control group. A Stanford University study found that people with bipolar disorder and creative-discipline controls scored significantly higher than healthy controls on a measure of creativity called the Barron-Welsh Art Scale.

As well as mood disorders, at least one study has suggested that schizophrenia may also be implicated. An investigation of the occupations of the relatives of Icelandic patients with schizophrenia found evidence of high levels of creativity. Do psychosis and creativity, then, have common genetic roots?

I haven’t looked closely at these studies of living patients, but their findings are consistent enough to suggest that the correlation between creativity and mental ill health cannot be dismissed. It is interesting that the creative process does not appear to be restricted to a single category of mental ill health; this may mean either that the distinctions we make between different mental states are overconfident, or that it is the altered state that is important, not its precise nature. The studies are still relatively few, however, and the numbers of patients included appear limited. Their definition of creativity is also narrow, being restricted to the arts, and such a one-dimensional view of creativity may reflect a familiar prejudice against the merits of scientific disciplines. It seems unlikely that a person who is successful in science, business or politics can avoid showing creative thinking. There is also no discussion of the direction of causation: those with mental health problems may choose to work in creative areas because these do not demand the discipline required for full-time employment. Equally, it is possible that the isolation, rumination and mental effort required for the act of artistic creation will themselves have an effect on mental health. Note also that if there is a connection between mental disorder and exceptional creativity, the two need not appear in the same individual; there could be an excess of mental disorder within the family of a creative individual who is him- or herself largely unaffected.

Yet even if the studies were uniformly unsupportive, I think the idea of madness and genius being co-dependent would persist. The creative process is generally romanticized, a phenomenon in itself unremarkable, as this maintains privilege, impresses patrons, and recruits muses. Perhaps there is something mysterious and unexplainable about the creative process, such that we feel it requires something equally mysterious and unexplained – mental illness – to account for it; or do we feel that dramatic works must necessarily have dramatic conceptions? Or, in order to soothe the doubts we have about our own achievements, do we wish to see talented artists as in some way ‘other’? Another advantage of mental ill health and creativity being in some way connected – and one that makes it more likely that a possibly spurious correlation is paraded as fact – is that it allows something positive to come from mental illness. Note also that the idea of ‘genius’ is itself culturally dependent, resting as it does on the Western individualistic notion that genius exists within a single person, a great man or woman without whom society would not move forward. A discussion of the good fortune that led to their recognition is not generally undertaken. What constitutes either genius or madness is of course highly subjective and hostage to the gaze we bring, and to the assumptions and values implicit in that gaze.

Presumably, if the association is genuine, mental ill health must at some level help with the creative process. A creative person may differ from others in that he or she is more open to experience, more exploratory, more inclined to take risks, and more tolerant of ambiguity. A particularly creative person may experience the order and structure that others find comforting as inhibiting, and may feel the need to confront norms and conventions. Such traits may make him or her more perceptive but also more vulnerable to emotional turmoil. It does seem likely that artistic creativity will benefit from a variety of experiences, and perhaps the struggle to come to terms with personal emotional extremes supports the process, as certain thoughts may only be accessible to us when in certain states of mind. Times of mental health could draw on times of mental ill health for inspiration, as Lewis Wolpert has commented. Depression could help put into perspective thoughts and feelings generated during a more manic phase, and in this way could take an editorial role.

Mental health is, however, also necessary for great work, as this requires concentration, discipline and great effort. Mental ill health is clearly neither necessary nor sufficient for genius, given that not every creative person has a mental health problem. There does seem to be something to ‘madness and genius’, but how strong the correlation is remains unclear, and is likely to remain hostage to where we choose to draw our lines in the sand.

* None of the speakers was particularly perceptive, alas. It’s available as a podcast, so no need to take my word for this.

** Comedians are classically seen as depressives. Oliver James certainly thinks so. In their book The Naked Jape, Jimmy Carr and Lucy Greeves discuss the ‘sad clown’ stereotype and basically disagree with it. They quote a 1992 study by psychologist James Rotton which found that comedians were actually no more prone to suicidal depression than any other group and there was no difference between the life expectancy of a comedian and any other sort of entertainer.

Assuming that we buy the line that childhood trauma or hardship can, in some cases, spur individuals on to high-profile achievements, it’s not surprising that many successful and famous jokers have less than Walton-esque family backgrounds. But would you find any fewer damaged individuals if you were to look at rock musicians, or actors, or any other deeply competitive profession where the stakes are high, your personality is exposed to harsh public criticism and you have a bit too much time on your hands?

Lucy Greeves was kind enough to reply to my emails and said that, in her experience, the trait that most characterises comedians is competitiveness rather than melancholy.

I think the thing that strikes a lot of people as odd when they first realise it is how serious most professional comics are in “real life”. I’m not sure why this surprises us, though. We don’t expect opera singers to converse in arias. But because a really good comedian’s trick is to convince his audience that he’s not using a script, we buy into that illusion that he’s just a really hilarious guy who has agreed to be our mate for the evening. Imagine our disappointment when he doesn’t say funny stuff all the time – perhaps he’s depressed?

‘If we say that we are working to develop user-centred services, training and research programmes then it is simply unethical to carry on as if the user movement did not exist’.

True to this insight, during my time as a psychiatric trainee I’ve had very little to do with user organisations, and they have therefore had little or no impact on my thinking or clinical practice*. Just as Bracken says, for me they have not existed. I am unable to say whether this is the experience of all psychiatric trainees, or whether my training establishment is particularly indifferent, but I fear that I am not a unique case. This must be a regrettable oversight. Any sensible commercial entity (to which health services are increasingly compared) listens to people who take the time to lodge a concern, knowing that if it does not, not only will the disgruntled customer tell others of their dissatisfaction, but an opportunity to improve will have been missed. Within psychiatry, patients can make complaints and are sometimes asked to participate, but they act predominantly as advisors and expertise still resides with professionals.

Why, you might say, does this matter, and why should we single out psychiatry? Perhaps we should not; I have seen from working in other medical specialties that psychiatry’s reluctance to engage with user groups is shared by other branches of medicine, which harbour doctors who are very unwilling to engage with patients. Many people return from a stay on a hospital medical or surgical ward with reports of offhand medical staff, having been so uninvolved in their care that they are barely aware of what has happened to them. However, whilst psychiatric disorders resemble those of physical medicine in many ways, their formulation cannot easily be captured with the same lexicon, and the interaction between psychiatrists and their patients is different. You can, at least in theory, treat a patient’s coronary arteries without so much as exchanging the time of day with them. A cardiologist who takes into account their patients’ community role and psychological well-being may have more satisfied patients, but it is not their primary business. Psychiatry, on the other hand, deals with thoughts, feelings and behaviours and is entirely situated in the social world. Our outcomes are less mechanical and more nuanced than those of other parts of medicine. We have power to define normality, to bestow stigmatizing labels and to take away freedoms where we think fit**.

Psychiatric disease is often chronic, so a good relationship between doctors and patients can only be of mutual benefit. A fuller dialogue with patients and with user groups could lead us to devise services that genuinely engage people with mental health problems, and inform our theories as to the nature and boundaries of psychiatric illness. Such engagement will bring responsibilities for our patients too; they, as well as the wider public, will need to be understanding over the particular areas of difficulty in our practice, such as the use of the Mental Health Act. Recognition will also be needed of the fact that user groups do not speak with one voice and may carry contradictory messages.

If you have worked with user groups in any capacity, please leave a comment below and tell of your experience.

***

* Criticism of psychiatry from former users is, of course, not new. In 1620, for instance, the House of Lords received the ‘Petition of the Poor Distracted People in the House of Bedlam’, a complaint against the inhumane treatment of the inmates of the Bedlam asylum.

** Not that I was there, but this transcript of a 2006 debate organised by the James Naylor Trust gives an idea of how upset some people are with psychiatrists.

Today I saw a female patient who has problems with her use of multiple recreational drugs and alcohol. I was the first psychiatrist she had ever seen; for the past two years, however, she had been taking mirtazapine – an antidepressant – prescribed by her hospital physician. I almost never prescribe medications outside a psychiatric remit, yet antidepressants are regularly prescribed by doctors whose area of expertise is not psychiatry. GPs, ITUs and stroke wards often start their patients on these medications, and hospital physicians can also be very fond of them.

The notion that there is a very common disease called ‘depression’ that can be addressed with antidepressants is very prevalent in our society, and although psychiatrists are the ‘experts’ in it, the general abandon doctors show in prescribing antidepressants would suggest that its treatment is something on which all doctors have purchase, and not just the preserve of shrinks. Yet can this be a good idea? Many doctors’ insight into this area may be no more nuanced than that gleaned from their teaching at medical school, which from my recollection was simplistic and dogmatic. Is low mood such a problem that we cannot afford not to have all doctors tackling it, or has the diagnosis gone feral, in need of taming by expert tamers with chairs and whips?

In truth, ‘depression’ is a very difficult thing to define, and any doctor who says that they can reliably differentiate it from sadness is deluding themselves. Our current best shot at a definition, or at least the one most people agree on, is the vague aggregation of symptoms offered by DSM-IV and ICD-10. These definitions are so broad, however, that they stand accused of pathologizing everyday sadness, and have in part led to the ridiculous notion, useful to some, that one in four of our population suffers from a disorder of their mental health.

Setting aside whether the widely used criteria are worthy, most doctors – including psychiatrists – pay little heed to operational criteria anyway; simply going to a doctor once or twice and stating that you’re ‘not quite yourself’ is most often sufficient for a prescription of antidepressants, which amounts to a de facto diagnosis of depression. It is often illuminating to ask people who say that they are ‘depressed’ what meaning they attach to this; the responses I have had range from those equating to mild dysphoria to those expressing unremitting misery. It is also not unusual for a question about someone’s supposed mental distress to be answered in more concrete terms: ‘I’ve got a lot of trouble with my housing’ being an unfortunate favourite. If the first doctor won’t provide you with antidepressants, the second surely will. Doctors feel they must help, and antidepressants allow them to avoid admitting the limits of their efficacy.

Thus a patient who entered the consulting room simply sad, and often unfortunate, leaves anointed as ‘depressed’, now possessing a stigmatizing mental health disorder; and as this is a disease that sits independent of any life narrative, other avenues of relief which might otherwise have been explored are tacitly discouraged. Now take the patient we started with. Anyone standing next to you at a bus stop could tell you that if someone is already taking four psychoactive substances on a daily basis, then addressing these might be the first place to start. This is what I’d have said to her, but in this rights-based society, if I think this and a patient thinks differently, who’s right?

You might think, then, that this is a call for psychiatrists to act as gatekeepers to the prescribing of antidepressants. Actually, no: depression and antidepressants are one of the stories of our age, which means that they affect everybody and everyone has a part to play in their sensible use. I’m not going to go so far as to say that there is no such thing as ‘mood disorder’, but in recent years we have all reimagined humans as intensely vulnerable beings, which inevitably means that people will view themselves in this light. As the prominence of religion in European communities fades and market capitalism continues to propagate the excluded, medicine has become the place to turn for suffering of all kinds – social, physical and mental – but this is no substitute for a supportive community. They don’t teach us at medical school how to know the limits of our business, so we’ve simply been blundering on. If all doctors can prescribe antidepressants, then all doctors should be part of the conversation about when we’ve gone too far, and we should tell people that they’re a lot tougher than they think.

It has been estimated that approximately 1,000,000 people die of suicide worldwide each year. Whilst most studies indicate that people who commit suicide have a disturbance of mental functioning, this does not exclude a relatively small number of people who, for whatever reason, might express the wish for an early death and yet lack any state that might impair their mental function. For these people, the paternalistic approach applied to many with a desire for suicide appears less appropriate, and this has led to the notion of a ‘rational suicide’. Many people feel strongly that this option for rational thinkers to end their lives should be available, and argue that there is historical precedent: it was in reference to the manner of Socrates’ death that Compassion and Choices, an American euthanasia pressure group, was initially called the Hemlock Society.

The emergence of rational suicide as a concept has happened within a framework of contemporary cultural, technological and philosophical shifts, where individualistic attitudes lead people to treat their own goals and desires as paramount whilst advances in medical treatment have led to increased lifespan. At the end of life we are therefore both encouraged, and afforded more opportunity, to contemplate the manner of our own passing. Judgement of suicide has simultaneously moved away from regarding a completed suicide as a moral or religious failure towards a view in which most suicides are seen as the result of a disturbance of mind.

Werth and others have suggested criteria under which a rational suicide should be allowed. That these are notably circumscribed reflects the negative value that suicide generally holds and the concerns others have with this approach. It is proposed that, for a suicide to be considered rational, the person in question must have an unremittingly hopeless condition, must make their decision as a free choice, and must have engaged in a sound decision-making process, including assessment by a mental health professional.

Despite the face validity of this line, analysis of what is meant by ‘rational suicide’ and its implications reveals a more nuanced situation than the casual inquirer might anticipate. From the definition of the word ‘suicide’ – from the Latin sui, ‘of oneself’, and cidium, ‘to slay or kill’ – and that of ‘rational’ – an act characterized by reason, or which is intelligible, sensible, or can be understood – one can surmise that ‘rational suicide’ is self-slaying that is characterized by reason or ‘makes sense’ to others. The arguments in favour of rational suicide generally come in two flavours. The first emphasizes the need to respect an individual’s autonomy, the modern meaning of which was developed by the philosopher Kant. In common usage it implies ‘being one’s own person, or being able to act according to one’s beliefs or desires without interference’. Kant expressed it as a respect for persons, and wrote that to violate a person’s autonomy is to treat them as a means rather than as an end in themselves. The ‘right to die’ is then an expression of the most extreme form of autonomy: the right to choose the time and manner of one’s passing. The second argument in support of rational suicide involves the ability of an individual to make a rational assessment of the utility, or ‘good’, gained by ending their life; here proponents argue that suicide can provide freedom from painful and hopeless disease. In this argument the consideration an individual has for their quality of life is of paramount importance.

However the concepts of autonomy, utility and rationality alone are inadequate arguments for the acceptance of rational suicide as none are ever identifiable in so pure a form as to be considered a philosophical trump card. Werth’s guidelines are first and foremost pragmatic and with an irreversible decision at stake the standards of rationality must of necessity be high. To come to a conclusion that an act or intention of suicide is reasonable is not a straightforward matter.

We must also recognize that the components informing a decision for rational suicide are culturally determined, introducing considerable subjectivity and the possibility of external disagreement. Furthermore, if the decision to end one’s life is informed by persistent suffering, then it is unlikely to be made on entirely non-emotional grounds and is likely to be subject to cognitive distortions. It is a curious position to seek to solve a problem in life by ending the life itself; those intending a rational suicide would presumably prefer to be alive, just not under their current circumstances, indicating the presence of significant ambivalence regarding their decision.

There are few people who would argue that autonomy for a patient, at any stage of care, is unimportant. However, when we respect autonomy we are respecting a person’s right to make independent decisions about their life, and these decisions will be made on the basis of considerations consistent with that person’s moral values or personal code. These values would ideally be independently derived; this, however, is not possible, as people are heavily influenced by such things as their culture, parents and friends. Thus the sense of autonomy as the exercise of independent thought is compromised.

Alternatively, if one wishes to frame rational suicide as the outcome of an audit of a life’s merits and demerits, a pertinent question is what the continuation of this life is to be weighed against. If the decision is to be truly informed, it should involve gaining all possible facts and imagining all consequences. However, since the experience of being dead is entirely unknown, it is questionable whether one can adequately foresee the outcome of one’s actions in this regard.

These concerns suggest that it may be difficult to reach a satisfactory conclusion that rational suicide is possible. The concept of a suicide being ‘understandable’ is probably more meaningful and suitable, although it may not carry the same weight.

One of the constant criticisms of psychiatry is that it is heavily influenced by pharmaceutical companies, in whose interest it is to encourage as many people as possible to take medication. This is not to say that the benefit to society from the products of drug companies has not been massive, but we should not, simply on this basis, assume that the interests of drug companies and the desires of doctors and patients are identical. There are significant overlaps, but in one fundamental respect they come into conflict: pharmaceutical companies are answerable to their shareholders and thus above all need to maximize profit and market share. Put another way, human beings can survive without endless drugs to cure every possible ill, but the companies that provide them cannot.

Psychiatric prescribing has offered particularly rich pickings for pharmaceutical companies. A large proportion of the global drugs spend is on psychoactive drugs, and in the UK between 1991 and 2001 prescriptions of antidepressants rose by 173%. Partly off the back of this, drug companies have become some of the most profitable organizations in the world. In 2002 the combined profits of the ten drug companies in the Fortune 500 ($35.9bn) were more than the profits of the other 490 listed businesses put together ($33.7bn). As their profits have increased, so has the amount governments and individuals have spent on their products. In the UK, per-person government health care spending rose from $84 in 1960 (3.9% GDP) to $977 in 1980 (5.6% GDP) and reached $2160 in 2002 (7.7% GDP; all figures adjusted for inflation). The global spend on drugs increased from $20bn in 1972 to $500bn in 2004 (not adjusted for inflation).

Drugs are central to modern psychiatric practice and also to thinking about the nature and aetiology of mental disorders. Arguably the primacy of concepts such as depression, social phobia and attention deficit hyperactivity disorder owes much to pharmaceutical company intervention. Psychiatric disorders provide opportunities for increasing product sales because, being poorly understood, they allow scope for expanding definitions of sickness to include more and more areas of social and personal difficulty not previously within the medical realm. In addition, the influence that the pharmaceutical industry wields has helped to create and reinforce a narrow biological approach to the explanation and treatment of mental disorders, to the exclusion of alternative explanatory paradigms. Furthermore, the coercive function of psychiatry has been strengthened by the continued promotion of the idea that psychiatric disorders are akin to medical conditions and therefore amenable to technical solutions in the form of drugs, the benefits of which we have a duty to impose.

Pharmaceutical companies spend a vast amount of money on marketing, an activity often aimed at doctors, on whom they must rely in large part for the distribution of their products. Sponsorship is given to academic meetings, providing access to the leading doctors of the future. Although a representative’s gifts may be relatively small, even ones of negligible value can influence the behaviour of the recipient in ways the recipient does not always realize. There is also disquiet about various aspects of research and the licensing process. Drug companies have repeatedly been shown to bury unflattering data, and even drug trials can be considered a form of marketing, as they introduce physicians to drugs early.

Any invective on this subject should not of course be levelled only at psychiatrists, as the problem affects all branches of medicine, but psychiatry has arguably been one of the most vulnerable specialties. We are rapidly becoming, or indeed have become, a society that seeks a ‘pill for every ill’ and one that looks for simplistic, technical solutions to complex social problems. This helps to divert attention away from the profound social and political changes that have occurred during the last few decades. But for our part as psychiatrists and doctors we should, whilst recognizing the contribution that pharmaceutical companies make, seek not to collude with practices that are against the best interests of patients and of the wider society.

For further information on this subject the following are excellent:

Is Psychiatry for Sale? – the astute reader will see that I’ve pinched some of her arguments. It’s a short, closely argued polemic. But be warned – I once presented it in a journal club and found few supporters.

Bad Science by Ben Goldacre. Everyone’s favourite exposer of folly has a chapter on this in his book. I note with some envy that it’s the 13th best seller on Amazon. I’d be lucky to get 13 comments on this blog in a month. One can but dream….

Up until the end of the 18th century mental disorders were divided into roughly four categories: idiocy (congenital intellectual impairment), dementia (acquired intellectual impairment), mania (insanity associated with many delusions and disturbed behaviour), and melancholia (insanity associated with circumscribed delusions and social withdrawal).

Morel in France was one of the first to put forward the view that mental disorders could in fact be further separated and classified. In 1852 he coined the name démence précoce for a disorder which he described as starting in adolescence and leading first to withdrawal, odd mannerisms and self-neglect, and eventually to intellectual deterioration.

Kraepelin, working in the late nineteenth century, took inspiration from general paralysis of the insane – a disease with unity of cause, course and outcome – and argued that there were a discrete and discoverable number of psychiatric disorders. He sought to distinguish between ‘dementia praecox’ and affective psychosis. Dementia praecox described patients with a global disruption of perceptual and cognitive processes (dementia) together with early onset (praecox). Affective psychosis, by contrast, was marked by relatively intact thinking, later onset and the episodic nature of the illness.

It was Bleuler who first used the term ‘schizophrenia’. It is commonly thought that this means ‘split personality’, but Bleuler actually meant the name to reflect the ‘loosening of the associations’, which he thought the essence of the disease. He described four fundamental symptoms which he deemed essential for the diagnosis: loosened associations (between different functions of the mind, so that thoughts become disconnected and co-ordination between emotional and volitional processes becomes weaker), ambivalence (the presence of conflicting emotions and desires), incongruous affect (e.g. vacuous giggling on hearing sad news), and autism (active withdrawal from reality in order to live in an inner world of fantasy).
Unlike Kraepelin, Bleuler felt that affective psychosis and schizophrenia were not strictly delineated but lay on a continuum. He also demoted hallucinations and delusions, which to Kraepelin were central, to ‘secondary symptoms’.

More recently, working in the 1950s, Kurt Schneider took a fundamentally pragmatic approach. He leant on the earlier work of Karl Jaspers – a philosopher-psychiatrist who had concentrated on the phenomenology of mental disorders, in particular the un-understandability of psychotic delusions – and aimed to identify characteristics that were peculiar to schizophrenia and which would therefore provide the best guide to the practising clinician. He identified eleven first-rank symptoms of the disorder, all of which were forms of hallucination, delusion or passivity experience.

We can see from this brief summary of the evolution of schizophrenia as an idea that what is central to the diagnosis has altered significantly as it has passed through the hands of these thinkers. For Kraepelin the crucial features were intellectual, for Bleuler cognitive and emotional, whereas Schneider pinpointed hallucinations and delusions. Their ideas are still important, as the DSM-IV and ICD-10 criteria for schizophrenia are a patchwork of all three. Therefore, although operationalized criteria have improved the reliability of the schizophrenia diagnosis, and outside psychiatry it is considered a crystallized entity, not only does there remain no firm aetiology or diagnostic test for schizophrenia, but its very character is still open to question.

The history of the last two hundred years of humankind is the history of the city. There are now more than 90 cities in the world with populations in excess of 3 million people, and 19 megacities with populations over 10 million. By contrast, two thousand years ago, when the world population was approximately 200 million, there were only 40 cities with more than 50 000 inhabitants. The population density of central London is now in excess of 10 000 people per square kilometre.

The invention of the city is relatively recent. Initially we humans lived as hunter-gatherers, living off nuts and berries at a roomy population density of one person per square kilometre. Then, seventy thousand years ago, we began to migrate from the African plains, and ten thousand years ago nomadic societies began to give way to those which were settled and agricultural. These agriculturalists were advantaged in that they could feed greater numbers of people and support a higher density of population. The downside was that their diet was less varied and that they were a sickly bunch: with people now living in close proximity to their domesticated animals, many diseases like influenza jumped the species barrier.

Cities have provided their inhabitants with an enormous number of benefits. There are improved opportunities for jobs, education, housing and transportation. Universities have been founded and specialised health centres made possible. The breadth of entertainments can satisfy every whim. Urban areas can also have much more diverse social communities, allowing people to find others to whom they relate and whom they might not be able to meet in rural areas. In fact it is hard to imagine that many of the things we regard as everyday parts of modern life would exist if people had not been able to live in the close proximity that city life makes possible.

But the story of cities is not only the story of the people they serve ably. Life in a shanty town on the edges of São Paulo or on a sink estate on the outskirts of Manchester is unlikely to offer any of these advantages. For many people, especially those in less developed countries, greater urbanization is likely to bring only poverty and disease. Even for people not so far down Maslow’s hierarchy, problems can abound: social bonds are often much looser and more fluid in cities than in smaller rural communities, and rather than fit into pre-existing networks, city dwellers are forced to build their own. Furthermore, modern social forces, mostly city based, have led to an increasingly flexible employment market with more reliance on short-term contracts and part-time positions. This breeds uncertainty and stress, fuels competition and encourages us to see our colleagues as rivals and potential threats.

Thus, for almost anyone, cities place complex demands on their inhabitants, with concomitant stress. These circumstances appear to affect the proportion of the population suffering from mental illness. This urban effect is most acutely observed for schizophrenia, a disorder which occurs more commonly in cities. There are two competing hypotheses as to why this should be so. The ‘drift’ hypothesis suggests that urban environments attract the selective migration of pre-schizophrenic individuals. The ‘breeder’ hypothesis, on the other hand, suggests that cities precipitate psychosis in genetically vulnerable people through the stresses of social isolation and the complex cognitive demands that characterise inner-city life. Ultimately both are likely to contribute, and mental illness may be either a cause or a consequence of social isolation. A 2004 survey of all Swedes between the ages of 25 and 64 revealed that people living in the most densely populated areas had almost twice the rate of psychosis of those in the least densely populated areas.

Cities also tend to be the home of migrants. In 2001, 4.9 million people in the UK were born overseas, twice the 2.1 million of 1951. The decade 1991 to 2001 saw the biggest leap in immigration to the UK – 1.1 million – since before the Second World War. Migrants suffer the travails of city living, only more so: the upheaval of being uprooted from their homeland, having to cope with a strange new culture, learning a new language. Studies in London, Nottingham and Bristol found that schizophrenia is nine times as common in African Caribbean people and six times as prevalent in black Africans as in the white British population. Non-migrant African Caribbeans and Africans do not have similarly elevated rates of illness, and misdiagnosis by racist doctors has mostly been discounted as the cause of this difference. Soberingly, the UN Global Commission on International Migration notes that:

Migrants are often viewed with suspicion by other members of society. In parts of the world certain politicians and media outlets have found it easy to mobilize support by means of populist and xenophobic campaigns that project systemically negative images of migrants…first generation migrants suffer disproportionately from physical, mental and reproductive health problems…they have lower educational attainments than nationals and generally live in poorer quality accommodation. Migrants also tend to occupy low-wage and low-status jobs and are more likely to suffer from long-term unemployment than other members of society (chapter 4)

Our species, Homo sapiens, is thought to have originated 200 000 years ago. Full behavioural modernity, including language and music, is thought to have emerged 50 000 years ago. Thus, compared to the age of our species, the city as a place to live is a relative newcomer, and it is perhaps small wonder that we have yet to work out how to organise city living to everyone’s benefit, and that the project as a whole is still causing problems. The connection of mental illness to city dwelling suggests that we will be unable to fully address this problem until we have addressed the wider issues of poverty.

***

An interesting fact (gleaned from a Robin Murray lecture): the incidence of schizophrenia in a particular area is predicted by the proportion of the population who vote in a general election. The thinking is that if you live in an area where there is a sense of community and cohesiveness, then a higher percentage of people generally vote and the incidence of schizophrenia is lower. In a disorganised area, where nobody votes and nobody knows their neighbours, there is lower ‘social capital’ and a higher rate of schizophrenia. (See comment below for clarification on this.)

There was a page advert in the Metro this week for a three-day seminar with TV hypnotist Paul McKenna and pals which promised to ‘Change your life in three days’.

In just 3 days, you will learn quick and lasting techniques to change your life, allowing you to:

Let go of your old emotional baggage
Have supreme self-confidence
Move past the obstacles to change
Supercharge your creativity
Deal with the past, once and for all
Become the person you’ve always wanted to be

If these claims could be substantiated, this would be pretty impressive work. As someone who tries to address at least some of these problems with my patients on a daily basis, changing someone’s life permanently in three days would for me be nothing short of a miracle. Is the self-help industry really helping people, or merely offering over-simplified solutions to complicated personal and social problems?

The wise woman in the cave has been providing guidance since the beginning of time, but the pioneer of the modern self-help movement was Dale Carnegie, who published his book How to Win Friends and Influence People in 1936. It sold 15 million copies and continues to sell in an updated version today. The industry it spawned is estimated to be worth $11bn. These riches are not surprising; in a world where dissatisfaction is rife and those without personal contentment or possessions are deemed failures, the best self-help materials make the keys to a successful life appear within reach and the world deliciously simple. ‘Self-help’ and the reward it promises therefore represent an undoubtedly attractive proposition.

Closer scrutiny reveals a more mixed picture. In order to promote their product, the marketing for self-help materials must necessarily engender, or at least encourage, a sense of personal deficit within potential clients which the product then promises to remedy. Not only is this (like most advertising) socially corrosive, it also represents a circular proposition. Furthermore, the self-help industry is the product of, but also fans the flames of, the West’s culture of individualism. It thereby discourages people from acting as part of collective movements to solve their problems, encouraging them instead to see those problems as individually based. Attention and scrutiny are drawn away from gross social inequalities, and the myth is promoted that health and wealth are largely self-generated.

The initial and perhaps laudable aim that one should reach one’s full potential has been replaced by a continuing demand that one maximize oneself as ‘human capital’. Self-help can be considered as colluding in a cultural trap whereby people are caught in endless cycles of self-invention and overwork as they struggle to stay ahead of the rapidly restructuring economic order. The tacit assumption that people need help, until proven otherwise, is also troubling and can only exacerbate our burgeoning culture of victimization.

In terms of content, self-help can be seen as equally wanting, with a tendency to present ancient clichés as revolutionary new strategies. The majority of the wisdom self-help books provide is actually repackaged common sense and platitudes. This repackaging is necessary, as no author can lay claim to ideas that might be considered common property; with ownership in mind they therefore seek sophistication via the appearance of scientific method and intellectual rigour. Anthony Robbins has, for instance, called one of his most successful books Unlimited Power: The New Science of Personal Achievement. As well as the message, the medium is also vital: with relatively little insight to impart, theatrics help (just as a brightly coloured sugar pill acts as a better placebo). If your friend tells you something you might not listen; if you have paid £400 for the honour then the situation may be quite different, and large, expensive personal development seminars are widespread. Anthony Robbins is famed for using firewalking demonstrations to illustrate that the main quality shared by those who achieve greatness is the ability to take action.

When the dust settles, my concern about self-help overlaps with my concerns about the worst excesses of psychological and psychiatric practice. It promises much more than it can ultimately deliver, the financial motivation behind it is rarely scrutinized, and its individualistic focus draws attention away from socially based solutions that might ultimately provide more permanent succour. I would however concede that it is far preferable that people spend their money on McKenna’s show than on, say, a three-day booze- and drug-fuelled bender – an altogether more dysfunctional coping strategy.