Neurotic to the Bone

Ed Hagen recently wrote a paper outlining his objections to the classification of major depression as a “brain disorder,” on the grounds that, in sum: the diagnosis is designed to distinguish depression from other conditions, not from “normal” persons; the symptoms of what is called depression tend to remit within weeks or months and to occur at points in life where some amount of sorrow would be expected; and depression is continuously distributed, so any cut-off for disordered behaviour is arbitrary. Although I cannot disagree with any of this, I think it misidentifies the problem, a problem which is now potentially avoidable thanks to recent advances in genetics.

Namely: psychiatric diagnoses are based upon symptoms and not upon their genetic origins, evolutionary history, or adaptive value.

By definition, natural selection only allows adaptive or neutral alleles to stay around; an allele can avoid extinction only if its carriers reproduce at least as fast as the carriers of competing variants. Thus, when an organism displays maladaptive behaviours, the explanation falls into one of three categories: deleterious mutations (the occasional fuck-up in DNA’s copying process), pathogens, and gene-environment mismatch. The last of these refers to situations in which genes that were adaptive in some past environment are still present after the environment has changed – they simply have not had time to be selected out yet.

As Hagen notes, the symptoms of depression largely overlap with the expressions of neuroticism – a personality trait which remains fairly constant throughout the lifespan and governs responsiveness to aversive situations such as the death of a first-degree relative. However, it is not as though there is no variation in trait neuroticism itself; some people and groups are known to be higher in it than others, e.g. women. How much of that variation is “normal,” in adaptive terms? Very little, I suspect.

Some of it is obviously gene-environment mismatch. We have not had planes or parachutes for that long, which is why most people are more scared of skydiving than driving a car despite the fact that the latter is demonstrably more dangerous. Equally, sex differences are generally a sign of different (historical) adaptive challenges for the sexes, which may be why women are more neurotic than men by ~0.4 standard deviations, roughly equivalent to two subpopulations of males with a mean height difference of 1¼ inches – think of the English vs. the Spanish. Barely noticeable at the mean, but very much so at the tails. But, on the whole, I doubt that most variation in neuroticism is adaptive.
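To put rough numbers on the mean-versus-tails point above, here is a minimal sketch; the normality assumption and the male height SD of ~2.9 inches are my own illustrative figures, not taken from any source cited here:

```python
from math import erf, sqrt

def fraction_above(threshold, mean=0.0, sd=1.0):
    # Fraction of a normal(mean, sd) population lying above `threshold`
    # (all quantities in SD units of the reference group).
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# A 0.4 SD mean difference expressed in height units, assuming a male height SD
# of roughly 2.9 inches (an assumed figure, for illustration only).
print(0.4 * 2.9)  # ~1.16, i.e. about an inch and a quarter

# The same 0.4 SD shift seen from the right tail: beyond +2 SD of the lower group,
# members of the higher group are more than twice as common.
print(fraction_above(2.0, mean=0.4) / fraction_above(2.0))  # ~2.4
```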

The genetic architecture of personality traits looks similar to that of intelligence in that both are massively polygenic and only a small chunk of the variance is eaten up by the “common” alleles. In the case of personality, it might not even be as much as 10%. The rest, according to this paper, is due to “rare variant effects and/or a combination of dominance and epistasis.” The common (freq. > ~1%) variants are in a kind of equilibrium because each has reproductive costs and benefits; otherwise it would be impossible for them all to be common. For a personality trait such as agreeableness, it may be, for example, that genes which inculcate high agreeableness make one less attractive at the outset, especially as a male, but more fecund in the long run because agreeable people are more willing to have more kids, and so on. The rest of the variance lies in individually rare (freq. ≤ ~1%) alleles, which will be deleterious, hence their rarity.

A deleterious allele can accumulate in the population until it reaches its equilibrium frequency, the point at which further accumulation through recurrent mutation is counterbalanced by selection. The equilibrium frequency for a given allele is, roughly, the mutation rate at its locus divided by the allele’s reduction in fitness relative to the population average, e.g. if the population’s average birth rate is 2.0 and the allele knocks carriers down to 1.98, that is a fitness loss of 1%. For an allele with a mutation rate of 0.0001, this gives an effective “maximum” frequency of 1%. Given the number of loci involved in building and running the brain, there are apparently a lot of these, almost everyone is carrying some, and the unluckiest, at the right tail of mutational load, could be carrying orders of magnitude more than the average.
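A worked version of the back-of-the-envelope calculation above, as a minimal sketch; it uses the standard mutation-selection balance approximation q ≈ μ/s, which strictly speaking applies to dominant or additive deleterious alleles (fully recessive ones equilibrate nearer √(μ/s)):

```python
def equilibrium_frequency(mutation_rate, fitness_cost):
    # Mutation-selection balance: recurrent mutation pushes the allele's frequency up,
    # selection pushes it down; the two cancel at roughly mu / s.
    return mutation_rate / fitness_cost

# The example from the text: population mean birth rate 2.0, carriers at 1.98.
s = (2.0 - 1.98) / 2.0                 # ~0.01, a 1% fitness loss
print(equilibrium_frequency(1e-4, s))  # ~0.01, i.e. an equilibrium frequency of about 1%
```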

Since the behavioural correlates of neuroticism are not neutral and tend heavily towards the maladaptive (references: 1, 2, 3), one has to question how much of “normal” sorrow, grief, and anxiety is really normal. Common, sure, but nonetheless aberrant. Natural evolution does not offer a straightforward means to eliminate it in toto, but that need not make it impossible.

All of this points to yet more problems with the popular usage of the word “disorder.” Perhaps it is time to abandon the word altogether.

A World of Trauma – Civilizational Psychosadomasochism and Emptiness

According to Google’s vast textual corpora, there was nary an instance of the term “trauma” in its psychiatric sense, or of its distinctly psychiatric derivative “traumatized,” in written English prior to the 1880s. The first usage of “trauma” is documented in the 1690s, at which point it referred to physical wounding only. Its “psychic wound” sense, now far more familiar to us than the original, did not pick up until the tail end of the 19th century. Exactly what took root in the world between then and now? The standard narrative is that the medical profession became wiser, but what of the wisdom embedded in our species’ genetic history? Note that even most doctors and biomedical lab technicians know little of basic genetics or, one has to assume, of evolutionary reasoning. I recall being sneeringly told by one, on introducing her to the concept, that she was only interested in “proper science.” This is about when it set in that even many “grunt-work” scientists are basically morons. She certainly was.

Applying the principles of natural selection (i.e. evolutionary reasoning) to find the aetiology of disease tends to yield different answers from those that are now fashionable. In a 2000 paper, “Infectious Causation of Disease: An Evolutionary Perspective,” the authors compellingly argue that a huge number of supposedly mysterious illnesses are in fact caused by pathogens – bacteria or viruses. The argument is simple: any genetic endowment which essentially zeroes fitness (reproductive potential) can be maintained in a population’s genes only at the basal rate of errors, i.e. mutations, in the genetic code, with the apparently sole exception of heterozygote advantage for protection against malaria. Thus, anything so destructive which rises above a certain threshold of prevalence should arouse suspicion that a pathogen is to blame. This would include schizophrenia, an alleged evolutionary “paradox,” with a prevalence of ~0.5%, especially since, unlike “psychopathy,” schizophrenia has low twin-concordance, low heritability, and is discontinuous with normal personality. At present, direct evidence of the pathogen is scant, but that is to be expected: viruses are tricksy. No other explanation is plausible.
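For a rough sense of the arithmetic behind that suspicion, a sketch only; the mutation rate and the assumption that fitness is essentially zeroed are illustrative figures, not numbers taken from the paper:

```python
# If a condition all but eliminates reproduction, mutation-selection balance keeps
# each causal allele at roughly the mutation rate itself.
mu = 1e-4     # a generous per-locus mutation rate (assumed)
s = 1.0       # fitness essentially zeroed (assumed, per the argument)
q_eq = mu / s # ~0.0001, i.e. 0.01%

observed_prevalence = 0.005          # ~0.5% for schizophrenia, as given in the text
print(observed_prevalence / q_eq)    # ~50: prevalence far exceeds what mutation pressure alone sustains
```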

What, then, when one turns the evolutionary lens towards “trauma”? What is commonly called psychological trauma can helpfully be divided into two categories: non-chronic and chronic. The former is what most people would call distress. It is adaptive to have unpleasant memories of situations that could kill you or otherwise incur significant reproductive costs, which is why everyone feels this: it is good to have unpleasant memories of putting one’s hand on an electric fence. It is bad, and certainly not evolutionarily adaptive, for the memory to continually torture you for years after the fact. I have it on good authority that this does nothing to attract mates, for example.

In light of this, it becomes clearer what may be behind the apparent explosion of mental “traumas” in our psychiatry-obsessed world. One may observe, for instance, that there is no record of anything remotely resembling PTSD in the premodern world. It emerged in the 20th century, either as a result of new weapons inflicting new kinds of damage (brain injuries), or from psychiatrists’ egging people on, or both. If the received narrative about it were true, then all of Cambodia ought to have gone completely insane in recent times. It did not happen. Likewise with rape. One struggles to find any mention of long-term trauma from rape for most of human history. The ancients were not very chatty about it. Of course, they saw it as wrong, as is rather easy to do, but their notions about it were not ours. Rape does impose reproductive costs, but so does cuckoldry, and being cuckolded does not cause chronic trauma. Nor would claiming that it had done so to you do much for your social status. Sadly, exactly one person in existence has the balls to comment on this rationally. Many of these problems seem to originate from something more diffuse, something about the cultural zeitgeist of our age, rather than a particular field or bureaucracy.

It is generally agreed upon in the modern West that sexual activity before late adolescence, especially with older individuals, is liable to cause trauma of the chronic kind. This alone should give one pause, since “adolescence” is a linguistic abstraction with only very recent historical precedent, and many of the biopsychological processes conventionally attributed uniquely to it begin earlier and persist long after. The onset of stable, memorable sexual desire and ideation occurs at the age of ~10 (it was certainly present in me by age 11), coinciding with gonadarche, and it is almost universally present by age 12-13. The reason these desires arise with gonadarche is simple: they exist to facilitate reproduction. It would make little biological sense, in any species other than humans, to experience sexual desire and yet also undergo some strange latency period of 1-8 years (depending on the country) during which acting upon those desires causes inconsolable soul-destruction. Any time something seems completely unique to humans, one has to wonder whether it has something to do with uniquely human cultural phenomena such as taboos. It is even more obvious when one observes human cultures which lacked these taboos, e.g. Classical Greece. When the Greeks married their daughters off at age 13-14, they were concerned chiefly with whether the groom could provide a stable living for the bride and her children. But they were not concerned about soul-destruction. At least, I’m fairly sure of that. For the record: this is not an endorsement of lowering the age of consent. I am decidedly neutral on that question, but I do not believe Mexico’s answer is any less correct than California’s or vice versa.

It is wrong to say that psychiatrists, or therapists, have a superpower of changing people’s phenotypes. This is impossible, as any such change they could impart would be genetically confounded, i.e. it is a genetically non-random sample of the population who turn out to be “successful” subjects of their interventions. So it seems fair to assume that a lot of mental health problems are explicable in this way rather than through straight-up iatrogenesis, and that their prevalence is inflated somewhat through media hype and social media shenanigans. However, an interesting question is: how much of an evolutionarily novel phenomenon is the field of psychiatry? Are our minds equipped to deal with it? Well, not everyone’s. It seems possible to confect illnesses out of thin air if you subject the right person to the right conditioning, as is the case with the probably purely iatrogenic “dissociative identity disorder.”

Masses of people these days shell out large chunks of their finances on “therapy,” a form of psychiatric intervention which has shown itself to be of, at best, mixed efficacy. Many long-running randomised controlled trials of its effects turn up jack shit, which ought not to be shocking given what is known about the non-effects of education, extensively documented by Bryan Caplan and others: for either to work as advertised, it would have to change the brain in some dramatic way. Still lingering, though, is the question of whether therapy may in fact make matters worse. Many social commentators have taken notice of the way in which mental illness, especially “depression,” seems to be afforded a kind of bizarre social status in some circles, such as within university culture in Canada. Even more galling is that it is not even clear whether “depression” of the garden variety is a disorder; it may be an adaptation that evolved to ward people off hopeless pursuits. Status is a powerful motivator, so this weird grievance culture cannot help, but encouraging people to make their living from talking to such people and consoling them with soothing words cannot be great either, since it is likely to induce the kind of institutional inertia on which the pointless continuance of America’s “drug war” is sometimes (correctly) blamed.

Legalising drugs and investing more energy into high-precision “super-drugs,” e.g. powerful mood-enrichers with no side effects, would do more for the true chronic depressives who have literally never known what it means to be happy – a malady probably induced by rare mutations, if it exists at all – than anything on offer today. Drugs are the only guaranteed way to do profound psychological re-engineering without gene-editing. It is not clear, though, whether the psychiatric industry as it currently exists would be happy to see such problems vanish.

Diseases, Disorders and Illnesses, Oh My

This will serve as an addendum of sorts to my article, The Harmless Psychopaths.

Medicine as a science is a modern phenomenon. It was not all that long ago that going to a doctor was more likely to hurt than to help, a state of affairs which persisted, some think, until as late as the 1930s. Medical researchers tend to just keep plugging away at their specialist interests, unconcerned with what to them seem like instrumentally useless philosophical minutiae. Moral philosophers might argue about meta-ethics, the nature of moral statements, but such debates do not seem a necessary prerequisite to a relatively harmonious social order. One might just as well ask what is the use of “meta-medicine,” of wondering at the underlying assumptions of medical diagnoses, when scientists are quite happy getting on with finding cures for cancer and whatever else.

Unfortunately, medicine is as subject to such human frailties as status-seeking and fashion as anything else. It became unfashionable in the 20th century to look for pathogenic causes of disease, thanks to the then-nascent science of genetics, which is why it was not accepted as common knowledge that bacteria cause peptic ulcers until the 1980s, despite this having been suspected, on good evidence, for well over a hundred years. Note that most cancers are only weakly heritable, in contrast with, say, autism, and have no clear Mendelian inheritance pattern. (Fill in the blank.)

Medicine is an applied science and so obviously has a prescriptive dimension to it, i.e. what is worth treating? Call this the meta-medical question, if one likes. The answer is not so complicated when dealing with physical disorders which glaringly go against the sufferer’s interests and those of peers, such as the flu, atherosclerosis, whatever. But what about disorders of the mind? Surely a meaningful concept, but surely far more prone to spurious theorising and fashion-biases in answering the meta-medical question, owing to the diversity of moral viewpoints about what counts as “disordered” behaviour. For the purposes of this post, I use the terms disease, disorder, and illness interchangeably – which they more or less are in everyday usage.

This is how the DSM-IV defines mental disorder:

A. a clinically significant behavioral or psychological syndrome or pattern that occurs in an individual

B. is associated with present distress (e.g., a painful symptom) or disability (i.e., impairment in one or more important areas of functioning) or with a significantly increased risk of suffering death, pain, disability, or an important loss of freedom

C. must not be merely an expectable and culturally sanctioned response to a particular event, for example, the death of a loved one

D. a manifestation of a behavioral, psychological, or biological dysfunction in the individual

E. neither deviant behavior (e.g., political, religious, or sexual) nor conflicts that are primarily between the individual and society are mental disorders unless the deviance or conflict is a symptom of a dysfunction in the individual

The inadequacies of this are manifold and painfully obvious. Childbirth seems to fit quite snugly with condition B. It is also generally unhelpful to include a word or its synonyms in its own definition, as in D. with “dysfunction.” E. seems to take it as read that the distinction between biological dysfunction and normal deviance is obvious, yet it apparently is not to most psychiatrists. Hence the traits branded “psychopathy,” for example: they are continuously distributed in the population and usually harmless, yet there exists an arbitrarily defined cut-off at the right tail of the distribution beyond which they are conveniently labelled a “disorder,” and the convenience is relative only to the interests of whoever finds these traits unappealing or lacks the theory of mind to understand them. See also: ADHD, and teachers.

How to get around this arbitrariness? If ADHD and psychopathy are not useful to us WEIRDos, who or what are they useful to? Well, they are adaptations: they have a fitness benefit, i.e. a reproductive edge, in at least some environments, even if they are unpalatable to individual persons. This evolutionary view is what tempts some to propose a purely Darwinian definition of disease in which disease is conceived as any embodied phenomenon that is counter-adaptive across all environments. This would make homosexuality a disease, but “Asperger’s syndrome,” “ADHD,” and “psychopathy” not. This could certainly be illuminating from a solely descriptive angle where the only interest is to scientifically describe the causes of disease, but it is useless to practitioners of medicine and psychiatry, for whom the relevant question is “What ought to be treated?” If one asks doctors what the problem is with flu, they are unlikely to say anything about how it affects one’s reproductive chances, and, well, it doesn’t. Not much.

However distasteful one may find it to mention as a dispassionate intellectual, it is a fact that the word “diseased” in popular usage carries a certain moral valence, even when applied to activities that one does not think morally important. To say that “behaviour X is a disease” is not simply to say that it is evolutionarily maladaptive, but that it is wrong. This would seem an unhelpful confusion.

For the application of medicine, I tentatively suggest that what I think is the best formulation of the conventional usage of “disorder” be merged with the Darwinian definition: anything, internally generated (which may be another bone of contention, but that is a separate topic), which leads to non-trivial suffering in the individual and also has no conceivable fitness benefit. As for the descriptive-only theorists and researchers, the Darwinian definition is fine on its own, although perhaps it is worthwhile to find a word other than “disease.”
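Expressed as bare decision rules, the two definitions look something like the following; this is an illustrative sketch only, and the predicate names are hypothetical:

```python
def is_treatable_disorder(internally_generated: bool,
                          nontrivial_suffering: bool,
                          any_conceivable_fitness_benefit: bool) -> bool:
    # Merged, practitioner-facing definition proposed above: internally generated,
    # causes non-trivial suffering, and offers no conceivable fitness benefit.
    return internally_generated and nontrivial_suffering and not any_conceivable_fitness_benefit

def is_darwinian_disease(counter_adaptive_in_all_environments: bool) -> bool:
    # Descriptive-only definition: counter-adaptive across all environments.
    return counter_adaptive_in_all_environments

# E.g. "psychopathy" as characterised in this post: internally generated, little suffering
# in the carrier, and adaptive in at least some environments, so it fails both tests.
print(is_treatable_disorder(True, False, True))  # False
print(is_darwinian_disease(False))               # False
```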