The current preference is for emphasizing that psychiatry is ‘just another branch of medicine’ like cardiology or oncology. In part this is to try to make psychiatry properly respectable by highlighting its scientific credentials, its commitment to precise diagnoses and evidence-based treatments, increasing its status within medicine and in society generally. It is also to reduce the stigma which has always been associated with mental illnesses. Stressing that these are illnesses like any other illness (‘mental illnesses are brain diseases’) should reduce the prejudice experienced by sufferers and the sense of responsibility and shame felt by so many patients and families. We don’t feel ashamed or blame ourselves if a family member develops arthritis, so why do we if they become depressed? It is against this backdrop of unnecessary additional suffering that the medical legitimacy of psychiatry is, quite rightly, stressed. But it is not that simple. Psychiatry is different. Even those of us who work in it are treated as different. Psychiatry can also inspire fear. It is, after all, the only branch of medicine which can force treatment on individuals. Special laws exist in all developed countries, both to protect the mentally ill from punishment and to force them to have treatment. There appears to be a remarkable consensus about the reality and importance of mental illnesses.
Open to abuse
There is a growing unease about the relationship of the medical profession with the companies which research, manufacture and sell the drugs we use. The cost of developing a prescription drug in the US is estimated at $800 million, so the pharmaceutical industry is increasingly concentrated in a small group of immensely powerful multinational businesses. The statistics are staggering. It takes, on average, 10 years from isolating and patenting a molecule, through tests and trials, to its first routine prescription to patients. Only 1 per cent of new molecules make it from test tube to prescription. The research and development budget is consequently enormous. That of Pfizer (the largest pharmaceutical company in 2005) is greater than the whole national research budget of some European states.
Not surprising then that the marketing of these drugs is ruthless. The financial relationships between doctors and these companies are murky. Over half of all educational meetings for psychiatrists in the US are funded by pharmaceutical companies, and luxurious hospitality and travel are routinely offered to doctors as a barely concealed inducement for them to prescribe. Until recently psychiatry was immune from this as our drugs cost so little. However, the new generation antipsychotics and antidepressants are vastly more expensive (newer ‘atypical’ antipsychotics cost $2,000–$3,000 a year per patient in the US compared to $100–$200 or less for the older drugs; newer antidepressants also cost several hundred dollars a year as opposed to ‘pennies’ for the old antidepressants such as Tofranil and amitriptyline). The patent on a new drug is strictly time-limited and the companies have to recoup all their development costs, usually within 10–15 years of launch. With the financial muscle of the pharmaceutical companies brought to bear on the profession, it is hardly surprising that social and psychological interventions (which have no such financial backing) have a lower profile.
‘Big Pharma’ has been accused of stretching the boundaries of what counts as a treatable psychiatric disorder to increase the sales of its drugs. It has been accused of creating a need for its drugs rather than developing drugs for existing needs. The enormous success of Prozac has led to an expansion of the concept of clinical depression. Milder and milder cases get treated. Prozac’s iconic status has helped reduce the stigma attached to depression but has also turned it into a ‘lifestyle’ drug. Most university students will know classmates on antidepressants – inconceivable only a generation ago. Diagnostic patterns have changed in response to the marketing of these drugs. There has been a striking increase in the diagnosis of disorders such as PTSD (post-traumatic stress disorder) and social phobia (a disorder which some would consider just extreme shyness) since drugs have been licensed to treat them.
Even more worrying is the massive growth in psychiatric prescribing for children. Once a rarity, the prescription of psychotropic drugs for children is now a regular part of child psychiatry. The most dramatic increase has been in the diagnosis and treatment of ADHD (attention deficit hyperactivity disorder): 7 per cent of US schoolchildren are diagnosed with ADHD (one in ten boys, as they are three times more likely to be diagnosed), with half of these on stimulant drugs.
The prescribing of Ritalin (methylphenidate) increased sixfold in the 1990s in the US, which accounts for 85 per cent of world prescriptions, but Europe is rapidly catching up (150,000 prescriptions in the UK in 2002). Child psychiatrists insist that the diagnosis is made carefully and that drugs are used only after psychological treatments have been tried, but the figures simply don’t stack up. Irrespective of the controversy about the legitimacy of the diagnosis, there can be little doubt that this is an example of psychiatric practice being pushed along by commercial agendas. Before leaving the pharmaceutical industry we need to acknowledge its very positive contribution to human health and welfare. It would be naive to ignore the financial imperatives that flow from such staggering R&D costs and to profess surprise at the marketing practices. The dramatic increase in both its scale and power, however, raises ethical problems which are not restricted to psychiatry. They include the exploitation of poorer countries for research, where ethical standards may be less strict and where the patients in the trials may never have the resources to benefit from the drugs developed. The temptation to create spurious health needs to sell products is particularly potent in the psychological sphere, as almost everyone would like to ‘feel a bit better’. Honest debate and tighter guidelines are needed.
Reliability versus validity
Diagnosis in psychiatry has moved towards a criterion-based system. The traditional approach of pattern recognition and reflective empathy, informed by extensive familiarity with normal and abnormal behaviours, has been replaced by a process of carefully listing the features of the disorder that are present. The change was a response to unacceptable variations in diagnostic practice. The new diagnostic system (laid out in the Diagnostic and Statistical Manual – DSM III, now DSM IV) also strove to avoid relying on the psychiatric theories which had caused such conflicts in the past. Whether one really can have an entirely ‘atheoretical’ diagnostic system is, of course, open to debate.
The new system emphasizes reliability (i.e. ensuring that different psychiatrists faced with the same symptoms will always come to the same diagnosis) more than it does validity (i.e. ensuring that patients with a particular diagnosis will have similar outcomes and responses to treatment). The goal would be, of course, maximal reliability and maximal validity. Good reliability does not, however, necessarily guarantee good validity. The fact that we all agree on the defining characteristics doesn’t mean it really is ‘something’. For example, 17th-century witch-finders were very reliable - they all agreed on the tell-tale signs and so consistently agreed on who was a witch before they burnt her. We would not now say that they had really ‘identified’ a witch, because we don’t believe in them, but their methods were certainly very reliable.
Reliability can mistakenly imply validity so that a condition gets accorded the status of a diagnosis essentially because psychiatrists can agree on how to define and recognize it. I have already mentioned a couple of these controversial diagnoses – social phobia and ADHD – but there are several more which really stretch credibility. Nicotine and caffeine ‘use disorders’ are now both official psychiatric disorders, but few of us would consider these mental illnesses. Similarly there is a range of behavioural patterns which have acquired the highly questionable status of a diagnosis (and therefore may receive ‘treatment’). An example is adolescent ‘oppositional defiant disorder’, which is suspiciously close to the description of a difficult teenager who simply refuses to do what his parents want.
Psychiatrists on the whole are trusting souls. They tend to take their patients’ stories at face value. This was vividly demonstrated by the psychologist David L. Rosenhan’s famous study, ‘On Being Sane in Insane Places’. In 1973 he got eight volunteers to go to emergency rooms in America complaining of a voice in their head which said ‘empty’, ‘hollow’, or ‘thud’. All eight were admitted to psychiatric units, where they then behaved absolutely normally. The most amazing thing was that they stayed in hospital for an average of just under three weeks each before they were discharged. Even worse, most of them got a diagnosis on discharge of ‘schizophrenia in remission’. Not surprising then that there is such a call for reliability. So there are several forces acting on psychiatry (including the natural curiosity of researchers) which drive its continued expansion. Whether this is a desirable development is a question that should not be left to the profession alone to decide but requires debate within the broader society (i.e. you).
Personality problems and addictions
Psychiatrists have always dealt with the consequences of drug and alcohol addictions. They have also always recognized that there are groups of individuals whose personalities are markedly abnormal and can cause endless problems. The degree of human misery associated with these problems is beyond dispute, and such individuals are found in large numbers in mental health services. There are, however, strong arguments for and against whether these are primarily psychiatric disorders and whether psychiatrists should be responsible for treating them. This is no mere academic dispute in which each side can simply act as it sees fit: people with these problems may be, and are, treated against their wishes.
Coercion in psychiatry
Compulsory treatment is permitted in psychiatry in every society - including Western societies whose very founding principles are respect for individual liberty before the law. This very striking exception stems from the observation that during periods of illness an individual’s judgement is impaired and they are not able to make rational decisions; mental illnesses often involve a ‘break’ with normal functioning and a change that estranges the patient from their normal self. Unlike, for instance, a learning disability where the individual may also not be able to make informed and rational decisions because they have never developed the capacity, the striking characteristic of mental illnesses is the change. Most societies have sanctioned a paternalistic provision for coercive treatment from a humane desire to protect an individual who is clearly ‘not themselves’. This resolve is strengthened by the repeated observation that patients recover and express the same concerns as the rest of us about their behaviour when unwell. Many are even grateful that they were forcibly treated.
Lawyers find these areas difficult. The standard assessment of ‘capacity’ to make treatment decisions (the ability to understand the information, the ability to trust the individual giving it, and the ability to retain the information and make a decision based on it) works well for children, the learning disabled, and those with dementia. However, it doesn’t work well where the problem is one of judgment and mood rather than intellectual ability. Imposing treatment against a patient’s will rests ultimately on the psychiatrist’s conclusion that the patient is suffering from a mental illness such that their current decisions are not those they would usually express. Note that this involves the psychiatrist making a judgment on what he believes the patient would usually do or want when well. Compulsion is also sometimes used as a brief safety measure with people who are ‘temporarily unbalanced’ – a terrified individual in a strange place or young people attempting to kill or harm themselves in despair after a relationship break-up.
Severe personality disorders
Psychiatry’s attitude to psychopathic and antisocial personality disorder, usually in men, and borderline personality disorder, usually in women, presents ethical and conceptual concerns. Psychopaths are cold, callous individuals who lack empathy for others and consequently can commit awful crimes. They give no thought to the consequences for others and show no remorse afterwards. They are often recognizable early on (killing pets, arson, etc.). Being self-centered and not caring about others’ feelings, they can be extremely successful; it is jokingly proposed that mild psychopathy is essential for being a successful politician. Psychopaths are often lumped together with explosive and violent individuals as antisocial personality disorder. This group is a massive problem for the prisons and criminal justice system. In some countries psychiatrists detain these individuals under the same conditions as the mentally ill, and this has been criticized as an abuse of power. Compulsory treatment is justified mainly by the belief that the patient is not making the decisions that they would normally make and which they will make again after recovery. To warrant coercion, the condition should usually be time-limited and there should be reasonable confidence that treatment will speed recovery. None of these conditions is met for severe personality disorders. Their behaviour reflects their personality – their real identity; their decisions are not aberrant or temporary, and to date there is no convincing evidence that forced treatment will significantly change them.
Such people pose profound challenges for society. They have often committed serious sexual and violent crimes and it is obvious to prison staff that, as little has changed, they will offend again. In England they are labeled as having a dangerous severe personality disorder (DSPD) and highly staffed new units have been built to treat them. But is their potentially indefinite detention by psychiatrists (as opposed to a prison sentence when they break the law) any less an abuse than the detention of political prisoners in the Soviet system was? The humanitarian sentiments of those involved do not remove the ethical dilemma.
The Western world has experienced an upsurge in chaotic self-damaging behaviour in young women. Overdosing and cutting have become common features of female inmates of mental hospitals and prisons. Patients seem out of control, are clearly distressed, and damage themselves in what often seems like a mixture of anger and a desperate plea for help. Psychiatrists feel responsible but impotent and often try to ‘contain’ the situation by keeping the patient compulsorily on a ward, offering supervision and support. Unfortunately things may go from bad to worse – the patient self-harms more and the psychiatrist increases the restrictions to control the situation. A vociferous pressure group argues that what these women do to their bodies is their own affair and that psychiatry is overstepping the mark in treating them against their will. They point to the cultural precedents for self-mutilation (religious and ritual scarring are common in many societies) and underline how medicine, and psychiatry in particular, has consistently denied women’s self-determination over their own bodies.
Drug and alcohol abuse
A similar set of arguments holds for drug and alcohol abuse. Both can be associated with mental illness and both can also cause mental illnesses. Fine for psychiatry to be involved then. But are drug or alcohol abuse mental illnesses in themselves? The rebranding of addictions as illnesses in the 1940s, after the founding of Alcoholics Anonymous in 1939, was a humanitarian impulse, intended to provide help to detoxify addicted individuals and to support sobriety. The world’s largest self-help groups (Alcoholics Anonymous, AA, and Narcotics Anonymous, NA) both consider addiction a lifelong illness, although they rely on personal and spiritual support rather than medical treatment.
AA and NA view the addict as fundamentally different from other individuals, never able to use drugs or alcohol sensibly. Within psychiatry, however, there are divided views. Many view addiction as an illness to be treated like any other mental disorder. Others see drug and alcohol abuse as dangerous habits that can lead to mental illnesses but are not themselves illnesses, and ultimately are the responsibility of the individuals themselves. The medicalization of substance abuse is criticized as a distraction from effective public health measures such as raising the price and restricting access. Both of the latter have been shown to reduce drinking and drink-related illnesses and deaths. Offering help such as prescribing medicines to cope with withdrawal and support to build up a sober lifestyle is uncontroversial. Concerns arise from the use of compulsion, which is common to a limited degree in most countries. In much of Scandinavia, Eastern Europe, and Russia, however, there has been extensive use of specialized mental hospitals for longer-term detention and treatment of alcoholics and drug addicts. Can this be justified? The consequences of heavy drinking or drug abuse can undoubtedly be disastrous, even fatal. But many of us make foolish decisions and suffer the consequences – smoking is probably more dangerous than drinking but we don’t compulsorily treat smokers. The confused thinking and poor judgment when intoxicated are also a questionable justification for psychiatric intervention, as the express purpose of becoming intoxicated is to alter judgments by blurring an unattractive reality.
Increasing sophistication in genetics and epidemiology has helped identify those who are at greater risk of alcoholism and drug abuse. There are well-recognized ethnic variations in the ability to tolerate and metabolize alcohol. These findings strengthen the contention that these are not simply personal choices but disorders, in much the same way that schizophrenia is a disorder – we just don’t yet know as much about them as we do about schizophrenia. Some even propose that self-destructive drinking and drug use must be the result of a mental illness. Clearly the issue is still open, and psychiatric engagement with drug- and alcohol-abusing patients will continue to attract some controversy.
The insanity defense
The coercion controversies in psychiatry are about unfairly depriving individuals of their rights. An important motive in early mental health legislation, however, was to protect patients from being punished for crimes they committed when unwell. Society has always felt uncomfortable about such punishments. The crime of infanticide was distinguished from murder because 19th-century juries refused to convict and send to the hangman mothers who killed their babies while suffering post-partum psychoses. The importance of establishing criminal intent (‘mens rea’ or ‘guilty mind’) has guaranteed a long and tortured relationship between psychiatrists and the courts. Agreeing whether or not someone was insane at the time of the crime (i.e. unable to judge the significance of their acts and realize that they were wrong) has in principle been fairly straightforward. However, it is often far from easy in the individual case. Similarly, floridly ill patients, unable fully to understand what is going on in court, may be judged unfit to plead and admitted to hospital for treatment. Most countries will accept a decision of unfit to plead on the basis of a psychiatric assessment or will return a not-guilty verdict on the grounds of insanity.
The real problems in court concern diminished responsibility on the grounds of mental illness – particularly where the criminal behaviour itself is the clearest manifestation of the disorder. It is less of a problem with a grossly disturbed individual whose crime is just one among many signs of the illness (such as a manic patient in court for reckless driving who also at that time is not sleeping, is dressing in outrageous clothes, and is spending all his money). Proposing personality disorder as a defense (i.e. because a psychopath does not notice or care about the distress caused) strikes at the concept of free will and personal responsibility that is the very foundation of criminal justice systems. Most criminals have had dreadful childhoods. Many have been abused. Few have skilled jobs or stable families to fall back upon. So it is not surprising that we may temper justice with mercy. But is there not circularity in citing the very qualities that give rise to the crime as an excuse for a reduced punishment? This ethical dilemma is particularly sharp in individuals with Asperger’s syndrome (a mild form of autism) who cannot see the world from the other’s perspective and cannot interpret others’ motives even though they may desperately want to. In practice, the more serious the crime and the greater the risk, the easier the decision. Where the alternative to a guilty verdict and prison is hospital care (and sometimes secure hospital care), courts and juries feel more comfortable making the allowance. In lesser cases, where punishment is not so severe and might just deter a repetition, it is argued that a psychiatric defense is unjustified and probably does the individual no favours in the long term. Thomas Szasz insists that the psychiatric defense is a denial of the fundamental rights and obligations of the individual. A psychiatric defense is generally accepted for individuals where the disorder is plainly there for all to see.
Sometimes the only evidence of a disorder is the crime. There have been several high-profile cases of murder where the perpetrator denies any memory, claiming it occurred during an ‘automatism’ (a dream-like or dissociated state). In even more extreme cases ‘multiple personalities’ have been proposed, where a single individual has several fully developed identities, each completely independent of the others. This is a very attractive concept which captures the popular imagination (e.g. Robert Louis Stevenson’s 1886 novel Dr Jekyll and Mr Hyde, and the 1957 film The Three Faces of Eve).
The postulated mechanism is that some mental functioning is so successfully repressed that it is only accessible through deep psychotherapy or ‘triggered’ in highly specific situations. This is of enormous psychiatric and legal significance in cases of alleged childhood sexual abuse. The extent to which children are exposed to sexual abuse by family members has long been controversial in psychiatry. The pendulum has swung back and forth between considering it a common trauma that causes neuroses and the alternative belief that it is rare and that most reports are ‘false memories’ arising from current distress and confusion. Currently the presumption is in favour of believing the adult who complains of child sexual abuse. This has resulted in high-profile cases splitting families when ‘recovered memories’ have been unearthed. Psychiatrists appear on both sides of the case, stressing either the damaging impact of abuse, repressed over many years, or, conversely, the patient’s suggestibility in over-enthusiastic therapy.
Psychiatry: a controversial practice
Psychiatric practice will probably always be risky and controversial. Many psychiatrists argue for a more limited approach, restricting it to clearly identifiable and agreed mental illnesses: ‘We should stick to treating diagnosed illnesses, schizophrenia, anorexia nervosa, depression and accept that there are many other causes of human distress beyond mental illness.’ ‘We should leave the social policy and ethics to the politicians and philosophers.’ This is an attractive argument. The history of psychiatry is full of examples of overstepping the mark. But as we have seen in this text it is not simply up to the psychiatrists - there are other stakeholders and powerful forces at play with broad ethical issues and significant potential benefits in the balance.
Scientific developments are expanding what we can do; families and patients have steadily rising expectations of us; governments and the pharmaceutical industry challenge us with new demands, inducements, and opportunities. We could only avoid controversy and the risk of mistakes by turning our back on progress and innovation. But that would mean fulfilling neither psychiatry’s promise nor its obligations. Straddling hard science and the field of human behaviour and ambitions, it is simply impossible for psychiatry to be uncontroversial.
Into the 21st century
New technologies and old dilemmas
It doesn’t matter what he does, he’ll never amount to anything.
(Albert Einstein’s teacher, 1895)
Computers in future may weigh no more than 1.5 tons.
(Popular Mechanics, 1949)
We don’t like their sound, and guitar music is on the way out.
(Decca Records on the Beatles, 1962)
It is risky to make predictions. Psychiatry at the beginning of the 21st century is very different to that of just a few decades ago. Who could have imagined that we would be able to visualise not only the living brain’s structure in minute detail but even watch as different regions light up with specific emotions or during hallucinations? Could we have foreseen diagnoses derived by computers, or psychotherapies on the web with neither psychiatrist nor psychologist involved? Psychiatry is changing, and the rate of change intensifies the dilemmas raised above.
Optimists believe these will fade away as the scientific base of psychiatry becomes firmer, but there is little evidence for this yet. The areas of concern may shift but they seem, if anything, as pressing.
Improvements in brain science
We will continue to experience an acceleration in our understanding of brain functioning. Neurosciences have become the ‘hot topic’ in biomedical research, driven by increasingly powerful tools for visualising and measuring brain functioning. For so long the brain was a mysterious, apparently inert organ with no moving parts. Modern imaging techniques reveal its dynamism, allowing us to observe activity spread through it, with individual areas activating sequentially in response to stimulation. And not just responding to external stimuli, but solving mathematical problems or even discriminating between pictures of people we like or don’t like.
The big leap forward came with imaging. The structure of the brain has been studied in great detail by anatomists for over a century. They identified the function of areas by examining the brains of people who had had strokes and lost different functions. Shrinkage or damage to the brain could be demonstrated in post-mortems and by x-ray techniques. These later involved either injecting dye into the bloodstream or air into the ventricles (fluid-filled cavities that exist normally in the brain). Visualisation of the body’s structures took a great leap forward with CAT scanning, which uses x-rays, and then MRI scanning, which uses powerful magnetic fields. Both produce amazingly detailed images of ‘slices’ through any part of the body (including the brain) that can be used to construct 3D pictures. While these techniques were useful in diagnosing brain tumours and demonstrating dementia, they were of little help in most psychiatric disorders. Indeed, the term ‘functional disorder’ has long been used as shorthand for psychiatric illnesses precisely because no structural abnormalities could be shown.
Functional imaging has been a further advance for psychiatry. There are already three different types – measuring increased blood flow, measuring cell metabolism using labelled chemicals, and now even directly measuring the electrical activity of nerve cells. We can now show that thinking and feeling are reflected in activity in different parts of the brain, and that the same parts of the brain are active when patients hallucinate as when we hear real voices. Functional imaging has confirmed the complexity and interconnectedness of brain activity.
Have these imaging techniques changed psychiatry yet? They have certainly increased knowledge and helped us understand the biochemical systems in the brain associated with disorders, and this has improved drug research. There are, as yet, no major advances in clinical practice as a direct result. Some early experiments are under way in Parkinson’s disease to transplant brain cells into areas where activity has been shown to be deficient. There has even been a trial of inserting minuscule ‘batteries’ into the brains of some people with chronic depression to see if they stimulate an increased release of transmitter substances and alleviate the depression.
This is a long way from the ‘Cyborg’ fantasy of so many films where small computer chips are inserted into the brain and control behaviour. There is a noticeable reluctance among neuroscientists to develop brain interventions for mental illnesses. Interfering directly with an individual’s consciousness and taking it out of their control generates strong resistance in scientists, as in the rest of us.
This is in contrast, however, to surgery and cell transplantation in brain diseases such as Parkinson’s disease, which do not have the same implications for selfhood.
The human genome and genetic research
Ever since Crick and Watson clarified the double helix structure of DNA in 1953, genetic research has been in overdrive. Genetics used to be the territory of plant and animal breeders applying Mendel’s laws, and of medical researchers tracking familial diseases such as haemophilia and Huntington’s disease. Now it has blossomed into a programme which has mapped the very chromosomes and genes themselves. Previously geneticists could only inform patients about their statistical chances of passing on disorders. Now they can know for certain in some cases whether the patient is carrying the gene for a disorder and even (as in the case of Huntington’s disease, a rare, distressing disorder with both psychiatric and movement manifestations) predict whether an at-risk individual will develop it years in the future.
Not many psychiatric disorders, however, have simple ‘Mendelian’ genetic patterns where half (dominant) or a quarter (recessive) of the offspring are destined to have the illness. Most of the major disorders (e.g. schizophrenia, bipolar disorder) do run in families and have an undeniable genetic component but there are almost certainly several genes involved and they have been very difficult to identify. There have been many false dawns. Currently, the likeliest candidate is Neuregulin 1 (a gene identified in schizophrenia families in Iceland and the West of Scotland). However, this gene is more widespread than is schizophrenia - possibly up to 30 per cent of the population may carry it. Neuregulin 1 appears to be necessary for developing schizophrenia, but not sufficient. Some life experiences (or perhaps a combination with other genes) are also required. So an interaction between nature and nurture is indicated. This may explain why the issue has been so resistant to the ‘either-or’ solutions of the past. It holds out hope that even in those with the genes for the disorder it may be possible to prevent schizophrenia developing.
While genetic research has yet to have a major clinical impact, in practice it has certainly stirred up thinking. What level of risk are we willing to take if we know that we have a higher than average chance of our child developing schizophrenia or depression? Would it be ethically acceptable to start screening for these disorders once the genes have been confidently identified? What if we were also able to identify genes for being clever – would it be OK to screen for that? Screening implies selection. It is usually only done if the individual wants to know whether to start or continue with a pregnancy.
These problems already confront some families with schizophrenia. An Australian service for treating young people with schizophrenia as early as possible has successfully begun to identify individuals at very high risk of developing the illness. These are usually adolescents in families with schizophrenic members who are themselves ‘odd’ or ‘withdrawn’ and who report unusual, but not clearly psychotic, experiences. Should the team, being fairly certain that the young person is likely to become ill, offer treatment with antipsychotic drugs? They have conducted a trial in which one half received drugs and the other half placebo. Those given drugs developed fewer psychoses. However, not all of those who went without drugs became ill (i.e. had it not been a trial, some would have been given the drugs unnecessarily). The implications for young people at such a sensitive stage of their development are clearly enormous. This is just one example of the sort of decision we will increasingly face as technologies improve.
Specificity of gene identification in mental illness is still a long way off. Any large-scale genetic screening to avoid psychiatric disorders would inevitably mean a steady reduction in the rich variety of human behaviour. How happy would we be with that - a world without Van Gogh or without Schumann?
Brainwashing and thought control
Most of the scarier fantasies about psychiatry have centred on its ‘awesome powers’. Ever since the term ‘brainwashing’ was first used during the Korean War in the 1950s, these fears have condensed around it. In truth, psychiatrists have little extra knowledge of such procedures beyond what is well known in cognitive and group psychology. Psychiatrists and psychologists do advise governments and the military on how to persuade people, but their techniques are hardly more advanced than (if, indeed, as advanced as) those of successful advertising agencies.
Milgram’s famous experiment is mistakenly quoted as an example of this power. He used actors to demonstrate that normal people were prepared to deliver severe, even life-threatening, electric shocks to other people if told it was part of a psychology experiment. The study did not demonstrate the awesome power of psychology, but rather the scarier, though commonplace, propensity we all have to surrender our judgement to ‘authorities’.

Where some of the earlier ‘science fiction’ fantasies have proved right is in the pervasive use of mood-altering drugs. ‘Soma’ in Aldous Huxley’s novel Brave New World was a drug to keep the masses contented and submissive in a totalitarian state. How far are we from that when 30 per cent of the adult population of France were taking psychiatric drugs in the late 1990s, and when 10 per cent (and rising) of US schoolboys are taking Ritalin? There is an increasing availability of drugs which both treat mental illness and enhance how healthy people feel. ‘Better than well’ is how many describe the effects of these ‘designer’ drugs. People have always self-medicated with recreational drugs, but now prescribed drugs are widely used to deal with normal life stresses.

The risk that psychiatry may invade all aspects of our lives and ‘medicalize’ the human condition is increased by the emphasis on the simpler, more reliable, but democratically negotiated approach to diagnosis discussed above. The size of the psychiatric population used to be restricted by psychiatrists only giving a diagnosis when the patient’s experience and behaviour were felt to be fundamentally ‘different’. If a diagnosis follows automatically from a series of complaints (without being filtered through such a judgement), then there is little restriction on expansion. We increasingly encourage self-disclosure and attention to our feelings - hence, perhaps, the rapidly rising number of people who consider themselves depressed or anxious.
Most of us welcome this more accepting, open approach to human experience. Equally, most of us support a more balanced relationship embodied in a psychiatric consultation which takes the patient’s symptoms more seriously than the psychiatrist’s preoccupations. But are we happy with the consequences as ever-increasing segments of our lives become labelled as a psychiatric disorder?
Despite all this, we enter the 21st century with dilemmas remarkably similar to those with which we entered the 20th. Compulsion in psychiatry has not gone away - if anything it has increased somewhat. Similarly, the fear that psychiatry may trivialise individual differences and treat people as objects remains just as strong. This conflict may now be played out between ‘evidence-based medicine’ and ‘post-modern individualism’, where once it was between the crushing uniformity of the large asylums and the dignity of the patient. Society and psychiatry will always have (and probably should always have) an uneasy relationship, balancing the duty to patients against the duty to society in the social control of a small number of potentially dangerous individuals. The very durability of these debates reveals that they are not simply technical problems. They reflect the tensions and paradoxes inherent in psychiatry as a discipline, with which we started these texts.
Will psychiatry survive the 21st century?
The imminent demise of psychiatry has been predicted for most of its history. Optimists (particularly those engaged in medical and biological research) anticipate dramatic breakthroughs that will tame mental illnesses in the way that antibiotics defeated tuberculosis or vaccination eradicated smallpox. The mental hygiene movement likewise hoped that rational child care, reduced alcohol consumption, and improved social conditions would make the analyst and psychotherapist redundant. It hasn’t happened so far. The very success of modern medicine has brought with it the challenges of an ageing population, with increased depression and Alzheimer’s disease. Greater openness and respect for individual feelings have resulted in an enormous increase in the demand for counselling and psychotherapy. The number of psychiatrists and mental health professionals has risen inexorably across the world. On a simple head-count of staff and the mounting demand for its services, then, yes - psychiatry should continue to flourish.
But will it survive as it is now? Certainly, things are changing. Might the psychological and psychotherapeutic treatments separate from the more traditionally medical treatment of the psychoses? In many parts of the world, psychiatry has only recently gained its independence from neurology, yet we now hear strong calls to reunite them as the logical development of a more powerful medical psychiatry for the future. Many psychiatrists already call themselves ‘neuropsychiatrists’; this is especially the case in many Germanic systems. There are several health care systems in which psychiatrists deal with diagnosis and inpatient care, emphasising a highly scientific medical model, while long-term outpatient care of disabled patients is managed by social workers and psychologists/psychotherapists using a more interpersonal approach. These pressures are not new. What is new is the wide availability of highly trained clinical psychologists, nurses, and social workers with the necessary skills. Responsibilities and power structures are shifting, and radically different practices are evolving.
There is logic to such developments. Increased knowledge drives specialisation, so some fragmentation of psychiatry is inevitable. Despite this, psychiatry is becoming more, not less, entrenched as a discipline. Establishing departments of psychiatry independent of neurology or internal medicine is still seen as a marker of progress. Similarly, when people can choose, they still seem to want that mixture of medical expertise (or is it authority?) and psychological and emotional sensitivity traditional to psychiatry. Psychiatry’s medical pedigree gives reassurance, yet few of us believe that it is really just a branch of medicine.
The mind is not the same as the brain. The defining characteristic of mental illnesses (and consequently psychiatry) remains their impact on our sense of self and on our closest relationships. Working with these is the hallmark of psychiatry and there is no evidence of society losing interest in it.