On Clothing and Beautification Norms

Left-wing social critics of the modern tradition describe social norms as arbitrary, especially as they relate to gender. They (henceforth, “leftists”) have an equally noticeable tendency to be wrong, and I bet the matters described herein are no exception.

First, the word “arbitrary” denotes that which happens without reason. With leftists, it means, “For reasons that I should not be expected to care about,” so obviously I will not bother to argue against that. I will give some possible reasons, and then people’s moral proclivities be what they may.

Men and women have different clothing norms almost everywhere, with exceptions, just as there are exceptions to the proposition that nodding of the head up and down is an affirmative gesture. The latter is overwhelmingly common in a vast range of cultures, but it does not follow from its being less than totally universal that it is arbitrary. There is some reason that the tendency exists, even if it is just as simple as copying a high-status group.

Likewise: clothes. Dresses and skirts are strongly associated with femaleness and trousers only weakly with maleness, to such a degree that a skirt on a male is sometimes seen as an aberration and trousers on women not. Some have tried, without lasting success, to change this.

It is questionable how common the clothing styles now associated with women have been on men for much of history. The kinds of “skirt-like” or “dress-like” attire that have been somewhat common among men at various points include pulpit robes and cassocks, which have specialised uses; they are not everyday attire for the wearers, and many are relics of the Roman tradition, which I will get to later. Indeed, trousers were apparently worn only by men in medieval England, and certainly among the Germanic tribes (Diodorus Siculus, V, 30) and Persians. Charlemagne (Laver, pp. 52–3) donned the tunic, which is dress-like enough, for ceremonial reasons and otherwise wore trousers.

Nevertheless, tunics and kirtles were commonly seen on both sexes when they were popular. So one could easily get the impression that trousers and trouser-like garments have become less gendered through time, whereas skirt-like and dress-like garments have become more so. What changed?

The reasons people wore particular types of clothing were different in the past, when practicality, cost, class division, and social conservatism weighed far more heavily. Setting aside the thorny question of how knee-length vestments and tights became high-status among upper-class European males, one can look at the Romans. Braccae (woollen trousers) were associated with barbarians and seldom worn by Romans; nevertheless, soldiers did wear them when practical, especially in the cold regions of the empire. Rome held its clothing conventions in high enough esteem that, after the Empire’s collapse, they were still followed for reasons of tradition qua tradition, and status.

In many professions, especially in the industrial era, skirts and dresses would have proven impractical for a lot of work, most conspicuously among working-class men, and to the extent that this was true, wearing those garments would have been associated with impracticality and therefore low status. The examples are fairly obvious, but this is of questionable relevance to the present day when the majority of work is in the tertiary and quaternary sectors.

It looks almost as though, before the 1800s, some men wore trousers but far fewer women did, and trousers did not settle in as the default male fashion choice until the nineteenth century.

Finally, perhaps most saliently, clothing for purely ornamental purposes was rare historically outside the elite. Aristocrats, especially the French, and the Georgian-era gentry were known for it, but never the average person. This changed in the 20th century, especially its second half. Interestingly, trousers started to become fashionable among women in the West at about the same time that short skirts did. For trousers, I would guess it was a matter of practicality and, secondarily, dissociation from any historic tradition, e.g. that of the Romans or the Catholic Church; for short skirts, changing sexual and social mores (hence, miniskirts). The trouser’s association with work, an essential part of the male sex role, also came to embody a kind of archetype which some women sought to copy as the 20th century went along.

The association of long (e.g. ankle-length) skirts and dresses with chastity was probably not as strong in the early 20th century as later, because there was less in public life with which to contrast it, i.e. the miniskirt-wearers were barely present.

Today, almost the only reasons to be choosy about what one wears are aesthetic, whether sexual or not. Thus, the vast majority of men do not even wear shorts unless 1) the weather is unbearably hot or 2) they are performing some activity that necessitates it or makes it easier, e.g. running and swimming. No one is interested in seeing men’s legs per se except homosexual men. By contrast, women’s legs are objects of intense desire and adoration for legions. So, in the present, a man who wears a long skirt or dress is giving off signals of chastity or sexual innocence, which is ridiculous in men. If he wears a short skirt or dress, he is giving off signals of sexual attractiveness, which, again, is absurd; the visual advertisement of these qualities is nearly meaningless in men: especially for chastity, but even for attractiveness unless he is profoundly physically attractive.

Other social changes have followed the same course for comparable reasons, such as the practice of leg-shaving, which is far more common in women than in men; in men it typically serves to highlight musculature (athletes, swimmers, models, etc.).

One finds oneself suspicious of anyone claiming that a social trend emerged from the aether simply because of marketing or propaganda. The evidence that propaganda, after controlling for confounding factors, affects public opinion is thin. It is not even true of Hitler’s speeches. There are always confounds: some economic, some endogenous and innate. It is sometimes claimed that the preference for shaven legs came about in the early 20th century in response to specific ad campaigns, which explains why one sees loads of old paintings of women with visibly hairy legs. At least, it would explain that if it were true.

Few women have their legs shaved the year round unless they live in a climate wherein they can expect to have their legs bare on any given day. In the past, when women seldom used their legs as sexual ornaments, it is reasonable to deduce that shaving was even rarer but became common once they did commonly use them for that purpose. Are we really to believe that this is a coincidence?

Women have sparser leg hair than men to begin with, a neotenous trait along with lack of facial hair, lower height, paedomorphism in facial structure, etc., all of which are considered highly attractive in women. Since relative lack of hirsutism is a sex-typical trait in females, and the heightened neoteny that shaving projects is attractive, women who frequently have their legs bared shave them. This is descriptive, not to say that anyone of either sex is ethically obliged to be attractive. However, what constitutes an attractive feature is fairly universal.

Tangentially, something similar occurred to cause the gradual skin-lightening of Europeans. Women almost universally have lighter skin than men, and more sex-typical features are preferred in mates. Europe is thought to have had a female-skewed sex ratio for much of its prehistory, thus increasing competition among females for mates and upwardly modulating selection upon elements of female sexual attractiveness, many of which spilled over into males either as byproducts or due to bidirectional sexual selection. This is one reason among many why Europeans are the most attractive race.

All this could be obvious. Much of it may have been once. Alas, few have any interest in finding the knowledge themselves.

Neurotic to the Bone

Ed Hagen recently wrote a paper outlining his objections to the classification of major depression as a “brain disorder,” on the grounds that, in sum: the diagnosis is made to distinguish it from other conditions, not from “normal” persons; symptoms of what is called depression tend to remit within weeks or months, and occur at points in life where some amount of sorrow would be expected; and depression is a continuous distribution, so the cut-off for disordered behaviour is arbitrary. Although I cannot disagree with any of this, I think it misidentifies the problem, which is now potentially avoidable thanks to recent advances in genetics.

Namely: psychiatric diagnoses are based upon symptoms and not their genetic place of origin, evolution, or adaptive value.

By definition, natural selection only allows adaptive or neutral alleles to stay around; alleles can only avoid extinction if they keep up with the rate at which competing variants are reproduced. Thus, when an organism displays maladaptive behaviours, the explanation for this falls into one of three categories: deleterious mutations (the occasional fuck-up in DNA’s copying process), pathogens, and gene-environment mismatch. The last of these refers to situations in which genes that are adaptive in some environment are still present as the environment changes – they simply have not had time to be selected out yet.

As Hagen notes, the symptoms of depression largely coincide with the expressions of neuroticism – a personality trait which remains fairly constant throughout the lifespan and determines responsiveness to aversive situations such as the death of a first-degree relative. However, it is not as though there is no variation in trait neuroticism itself; some people and groups are known to be higher in it than others, e.g. women. How much of that variation is “normal,” in adaptive terms? Perhaps very little, I suspect.

Some of it is obviously gene-environment mismatch. We have not had planes or parachutes for that long, which is why most people are more scared of skydiving than driving a car despite the fact that the latter is demonstrably more dangerous. Equally, sex differences are generally a sign of different (historical) adaptive challenges for the sexes, which may be why women are more neurotic than men by ~0.4 standard deviations, roughly equivalent to two subpopulations of males with a mean height difference of 1¼ inches – think of the English vs. the Spanish. Barely noticeable at the mean, but very much so at the tails. But, on the whole, I doubt that most variation in neuroticism is adaptive.
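The tail effect of a modest mean difference can be sketched numerically. This is a generic illustration of the ~0.4 SD gap mentioned above, assuming two normal distributions of equal variance; the figures are for intuition, not data:

```python
from statistics import NormalDist

# Two normal distributions whose means differ by 0.4 SD, equal variance
# (an assumption for illustration; cf. the neuroticism gap cited above).
d = 0.4
high = NormalDist(mu=d, sigma=1.0)   # higher-mean group
low = NormalDist(mu=0.0, sigma=1.0)  # lower-mean group

for cutoff in (1.0, 2.0, 3.0):  # thresholds in SDs above the lower mean
    ratio = (1 - high.cdf(cutoff)) / (1 - low.cdf(cutoff))
    print(f"{cutoff:.0f} SD cutoff: higher-mean group over-represented {ratio:.1f}x")
```

Barely noticeable near the mean, as the text says, but the over-representation compounds in the tails: roughly 1.7× beyond 1 SD, rising to about 3.5× beyond 3 SD.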

The genetic architecture of personality traits looks similar to that of intelligence in that both are massively polygenic and only a small chunk of the variance is eaten up by the “common” neutral alleles. In the case of personality, it might not even be as much as 10%. The rest, according to this paper, is due to “rare variant effects and/or a combination of dominance and epistasis.” These common (freq. > ~1%) variants are in a kind of equilibrium because each has reproductive costs and benefits; otherwise it would be impossible for them all to be common. For a personality trait such as agreeableness, it may be, for example, that genes which inculcate high agreeableness make one less attractive at the outset, especially as a male, but more fecund in the long run because agreeable people are more willing to have more kids, etc. The rest of the variance, in the individually rare (≤ 1%) alleles, will be deleterious, hence their rarity.

A deleterious allele can accumulate in the population until it reaches equilibrium frequency, the point at which further accumulation is counterbalanced by selection. The equilibrium frequency for a given allele is generally just the mutation rate at its locus divided by its reduction in fitness relative to the population average, e.g. if the population’s average birth rate is 2.0 and the allele knocks carriers down to 1.98, that is a fitness loss of 1%. For an allele with a mutation rate of 0.0001, this gives you an effective “maximum” frequency of 1%. Given the number of variants involved in the brain, there are apparently a lot of these, almost everyone is carrying some, and the unluckiest, at the right tail of mutational load, could be carrying quantities orders of magnitude more than the average.
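The worked example above can be checked directly. The q ≈ μ/s approximation below is the standard one for an allele whose fitness cost is expressed in carriers; for a fully recessive allele the equilibrium would be closer to √(μ/s):

```python
def equilibrium_freq(mu: float, s: float) -> float:
    """Mutation-selection balance: equilibrium frequency ~= mu / s."""
    return mu / s

# The figures from the passage above:
mean_births, carrier_births = 2.0, 1.98
s = (mean_births - carrier_births) / mean_births  # 1% fitness loss
mu = 0.0001                                       # per-locus mutation rate
q = equilibrium_freq(mu, s)
print(f"selection coefficient s = {s:.2%}, equilibrium frequency = {q:.1%}")
```

This reproduces the ~1% effective “maximum” frequency quoted above: halving the fitness cost would double the ceiling, and vice versa.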

Since the behavioural correlates of neuroticism are not neutral and tend heavily towards the maladaptive (references 1–3), one has to question how much of “normal” sorrow, grief, and anxiety is really normal. Common, sure, but nonetheless aberrant. Natural evolution does not offer a straightforward means to eliminate it in toto, but that need not make it impossible.

This shows yet more problems with the popular usage of the word “disorder.” Perhaps it is time to abandon the word altogether.

Ethnocentrism, or How I Learned to Stop Caring and Abandon Eternally Unreciprocated Altruism

Like many or perhaps most people with a verbally slanted intelligence profile, I used to invest a lot of energy in heady academic philosophy, especially ethics. The distant vision of a cognitively post-human species and a suffering-free world governed by utilitarian moral codes probably stimulated my endogenous opioid system. In retrospect, it seems to have been little more than an indulgence, which is my impression of credentialled ethicists generally – particularly utilitarians. They are content to spend eternity arguing over the merits of classical vs. negative utilitarianism, or whether the balance of hedonic and dystonic impulses that sentient life experiences can be said to make it “worth living or reproducing,” etc.

For what it is worth, my meta-ethical views are anti-realist: I do not believe that any moral system is the “correct one.” Utilitarianism may be useful insomuch as most beings do value happiness and disvalue suffering. But the complexity of moral values in humans is such that these categories will never be quantifiable to anyone’s satisfaction, and prescriptive antinatalism is a dead end for reasons I have discussed before. Non-human life, though, is not such a tough call. People who romanticise the beauty of the natural world are admiring the décor of a planet-sized torture chamber. It is, as Dawkins observed, “beyond all decent contemplation,” and I would have no compunction in painlessly culling most of it if I and my kind were the governors of the known universe.

If. Herein lies a cautionary tale against the distinctly European tendency of embracing blind universalism. There is no coming future of global utilitarian welfarism, at least not one with internally coherent goals or loyalties, because for most of the peoples of this Earth, the only relevant loyalty is to their group, their moral community, their kith and kin – whatever that looks like. It need not be racial, although it usually is in part. Recent computer modelling has shown how an ethnocentric orientation fares in competition with three alternative group strategies: humanitarianism (English liberals, effectively), egoism, and treachery (SJWs). The last of these is found to be the least effective strategy by far, which is why in the long run there is little reason to worry about traitorous progressive ideologues. Eventually they will learn the hard way. Ethnocentrism wins the day against humanitarianism too, by exploiting the generosity of humanitarians. This sheds light on how ethnocentrism evolved without invoking impossibilities such as group selection in humans, and how this strategy seems to predominate, quietly, even in relatively humanitarian populations. It also illustrates why calling immigration leftist or anti-tribal is such a farce. Far too many white people see it as such because they are humanitarians – optimised to believe that their group-blind, gene-blind preferences are completely generalisable.
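The exploitation dynamic can be made concrete with a toy donation game. This is a sketch in the spirit of tag-based cooperation models such as Hammond and Axelrod’s, not the cited simulation itself; the payoff values, population mix, and single-round structure are all assumptions for illustration:

```python
import random

# Each strategy is a pair of rules: (cooperate with in-group?,
# cooperate with out-group?), matching the four types described above.
STRATEGIES = {
    "ethnocentric": (True, False),
    "humanitarian": (True, True),
    "egoist":       (False, False),
    "traitorous":   (False, True),
}
BENEFIT, COST = 3.0, 1.0  # donation-game payoffs (assumed values)

def play(pop, groups, rounds=50_000, seed=0):
    """Average payoff per strategy from random pairwise encounters."""
    rng = random.Random(seed)
    totals = {s: 0.0 for s in STRATEGIES}
    counts = {s: 0 for s in STRATEGIES}
    agents = list(zip(pop, groups))
    for _ in range(rounds):
        (s1, g1), (s2, g2) = rng.sample(agents, 2)
        c1 = STRATEGIES[s1][0] if g1 == g2 else STRATEGIES[s1][1]
        c2 = STRATEGIES[s2][0] if g1 == g2 else STRATEGIES[s2][1]
        totals[s1] += (BENEFIT if c2 else 0.0) - (COST if c1 else 0.0)
        totals[s2] += (BENEFIT if c1 else 0.0) - (COST if c2 else 0.0)
        counts[s1] += 1
        counts[s2] += 1
    return {s: totals[s] / max(counts[s], 1) for s in STRATEGIES}

# Ethnocentrists (group A) mixed with humanitarians (group B):
pop    = ["ethnocentric"] * 50 + ["humanitarian"] * 50
groups = ["A"] * 50 + ["B"] * 50
scores = play(pop, groups)
```

Ethnocentrists pay the cost of cooperation only within their own group while collecting the humanitarians’ indiscriminate donations, so their average payoff comes out well ahead, which is the exploitation mechanism described above in miniature.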

There is no indication that moral preferences can ever transcend genetic conflict. If they could, an awful lot of things in human history would have taken a different course. Political preferences are certainly heritable, and although it may be impractical to seek twin-study data on ethical systems, I would be willing to bet that there is also a common genetic architecture to utilitarians, humanitarians, libertarians, and advocates of “effective altruism.” One aspect of their shared phenotype is distinctly visible whenever large numbers of them congregate, as many have observed. Some even seem to understand, perhaps unconsciously, just how rarefied their values are, and endorse global government as a solution. But would all the opportunities for abuse in such a system be worth it? Would anyone other than its own architects, presumably Westerners, consent to its rulership? Indeed, if these are the measures necessary to bring the world into value-alignment, why not push for genetic imperialism, i.e. diluting the phenotypic diversity of the human species through genetic engineering so as to render everyone effective clones of Will MacAskill and Diana Fleischman?

Don’t laugh. Some people actually take this seriously as a solution. Their profile is predictable: white, middle class, educated, and living in a country in which the problems that were absent before multiracialism are getting increasingly hard to ignore – although it is a relatively safe assumption that they live nowhere near those problems themselves. They understand that group differences, even in something as simple as IQ, are enough to cause resentment (that is why racial affirmative action exists), and so advocate post-racialism via genetic enhancement. It sounds nice enough until you realise that it would give hostile aliens yet more reasons to immigrate to the West, with our finite resources, while there is no reciprocal allowance for whites to move into Africa and Asia and take them over. Furthermore, given that there is more to group conflict than IQ, who is to say that this would ameliorate group tensions enough to be worth the risk? In the end, what we are talking about is a literal horror-movie replacement scenario in which POC are converted into high-class whites who just happen to have brown bodies. How many would willingly subject themselves to this, and how many of those who rejected it would continue to fan the flames of ethnic conflict?

The reason the appraisal of group differences is important to begin with is that nearly all human altruism is based on either reciprocity or sociogenetic closeness. Thus, a universal basic income within a homogeneous society is at least justifiable on the grounds that it is helping to maintain the social order, reducing economic inequality, and the recipients, if they do something destructive with the money, are only going to self-destruct. In a country like the United States, those paying through their taxes, whites, would have legitimate cause to wonder: “Why is money being forcefully extracted from me and given to people who often explicitly say they want me destroyed?” This, too, spurs on conflict. Foreign aid has yet more pitfalls. Since Africans mostly lack the intelligence to build civilisation alone, they rely on foreign donors, their population explodes, and then masses of them die in Malthusian collapses.

Some humanitarians are prepared to accept that we will probably never get to explore much of the universe, for a host of pragmatic reasons. It seems all the more implausible these days. With two interstellar objects passing through the solar system in as many years, it is looking as though interstellar space is chock-full of fast-moving debris, and since no one actually wants to live on Mars (I don’t), we may well be stuck on this planet. Yet, the same people are unwilling to acknowledge the barriers to moral unity within the human species, and the power dynamics inherent to any attempts at “conversion.” It has already been tried by colonialists. Didn’t work. Why would it this time?

Ultimately, idealists must accept that the only effective way for their ideas to propagate is for their moral community to propagate its genes. The most effective way to do that is for the idealists to secure for themselves exclusive territories, build artificial uteri, and use them to create more copies of themselves. After all, natural human reproduction is much less efficient than that of, say, some fishes, which can produce a million offspring in a year. Technology offers a way around this, such that we can all be Genghis. The result? An endless supply of persons who share your values and interests, not to mention all the possibilities of genetic and cybernetic enhancement. Not everyone will be welcome in your territory. But I promise, the sooner you can get over that, the happier and saner you, and everyone else, will be.

The Brick Wall of Washing Machines

People probably make too much fuss about defining biological sex in terms of its organic components. The term “chromosomes” gets thrown about, maybe because it is commonly used in basic biology education and is consequently a bit more accessible than “gametes,” although gametes are in fact the heart of the matter. Several different chromosomal combinations exist in humans (as abnormalities) besides XX and XY, but gametes come in only two forms – sperm and ova, the component factors of sexual reproduction.

But why does sexual reproduction itself exist, and by extension, why do the two sexes? It is not a given across all species. Quite a few species of plants and some unicellular organisms practise autogamous fertilisation, effectively a slightly modified form of cloning in which the trappings of sex are applied to an otherwise identical genetic template. Others, like the New Mexico whiptail, are parthenogenetic, meaning that females can produce more females (clones) with no fertilisation at all. Most often this manifests as a fail-safe, in species such as the Komodo monitor, for environments with a shortage of males. Obligate parthenogens are rare. When obligate parthenogenesis does arise, it tends to be the result of an unusually torpid environment combined with some kind of recent fuck-up. In the case of the obligately parthenogenetic New Mexico whiptail: it lives primarily in the desert and owes its existence to cross-breeding between two parent lizard species which cannot produce viable males. If its environment changes too much, it is fucked: cloning and autogamy place a hard limit on gene recombination, and therefore adaptation, which is why autogamy really only exists in plants and invertebrates, and the dominant presentation of cloning is as a “failure mode” in otherwise sexually reproducing species. Either is only “practical” in species with extraordinarily high reproductive potential, short gestation periods, sedate or undemanding environments, low metabolic needs, or high mutation rates.

Given this, it is not hard to see where males and females came from. Think of The Sexes™ as a strategy of gene propagation, and then secondary sex differences, in morphology and psychology, as strategies which reflect the different selective pressures the sexes were subjected to and/or subjected each other to (dimorphism). Viewed through this lens, females represent the “default” strategy which began with the oldest organisms (e.g. asexual bacteria): the “incubators,” reproducing through cloning and self-fertilisation, whereas males, the “fertilisers,” are a comparatively recent innovation. The degree of “sex-differentiation load” that falls upon males varies by species according to the aforestated variables in selection. Since females are, as is often noted, the gatekeepers of reproduction, the selection pressures that act primarily on females tend to be similar across species and relate, directly or obliquely, to their ability to bear offspring. For males, the story revolves around the conditions of access to females, which is why the male sex “morph” (form) differentiates itself from the female in completely different ways across species.

Sometimes male and female are barely distinguishable from one another. This is the case for many monogamous avians, whose environments, for whatever reason, do not lend themselves to significant sexual differentiation, which reduces female choosiness, which limits dimorphism: it is a negative feedback system. Other birds, like the crested auklet, engage in a kind of mutually eliminative sexual selection, whereby each sex vets the other for organically expensive sexual ornaments for reasons that are not well understood. In elephant seals, the degree of sex differentiation, just in size, borders on the absurd, although their (relative to humans) feeble brains mean that the possible scope of behavioural differentiation is not all that striking most of the time. Exactly where humans “fit” on these continua of male sex differentiation is something of a relative judgement call, but we are obviously not auklets or crows.

Sexual dimorphism and monomorphism have special behavioural correlates, most of which are obvious. Monomorphic species tend to be monogamous, with fairly equal parental investment in offspring and low variance in male reproductive success. Dimorphic species tend towards, well, the opposite of those traits. Humans also have a lengthier pre-reproductive schedule than most animals, largely because of how long it takes the human brain to develop, which probably limits sex differentiation in e.g. aggression compared with some species that practise effective polygyny, and different normative mating systems between human societies will also affect it, notwithstanding other forces such as judicially enforced genetic pacification. There is also considerable variation in these “life history traits” through time: from a time when “childhood” was seldom acknowledged as its own entity and children were expected to be responsible, to the point of execution, for criminal wrongdoing from an extremely young age, to … whatever you would call the situation we have now. Certain kinds of change may be inevitable in this respect. Other things are remarkably changeless even in the face of new environments.

Human sexual dimorphism is an example of this changelessness. If aliens had observed the human sexes 100 years ago and observed them again now, they would note stability in a range of male and female responses to exogenous stimuli, and note the differences in underlying strategy. Males are the strategy of high risk, aggression, dominance, status-seeking, agency, and systems orientation; females are the strategy of low risk, passive aggression, emotional dominance, comfort-seeking, agency by proxy, and social orientation. (A good example of the agency/agency-by-proxy distinction can be seen in sex-specific antisocial behaviours such as psychopathy in males and Briquet’s syndrome in females.) They would note that human females are the limiting factor in reproduction, but human males are the limiting factor in just about everything else (obligatory Paglia quote about living in grass huts, etc.). Intelligence is probably not a sexually selected trait in humans, or at least, there is little good evidence for it, and sex differences in intelligence per se are trivial. The sex difference is in application. Human brain complexity and its antecedents mean that the domain of activities germane to preserving one’s genetic line is rather more elaborate than usual, and since females are the “selector” sex, those tasks, and selection for assiduous task-doing, fall upon the males.

There is no real sense in which human beings can “escape” natural selection, because natural selection is the reason behind everything that we are, including the desire (of some) to “overcome” natural selection, whatever that means. However, natural selection has also given us moral instincts and reasoning abilities which, combined with the technologies born mostly of male ingenuity, could allow us to divert evolutionary selection pressures in a way that could never happen without our technology. The crapshoot of genetic recombination, by the lights of human morality, is just that: a crapshoot. At some point, artificial gametogenesis could allow humans to become effective hermaphrodites, even if we keep the old equipment. CRISPR, and eventually full genome synthesis, could render natural recombination processes obsolete, and with them sexual reproduction itself. Childhood will increasingly resemble adulthood as we produce children of extremely superior intelligence and thus reduce the need for high parental investment. Male breadwinning roles will run into a brick wall of automation, or perhaps the cloning of the 99.999th-percentile most workaholic and intelligent workers. Female homemaking roles will (or have?) run into a brick wall of washing machines. As technology outpaces our obsolescent biological hardware, one seriously has to wonder: how much of the human intersexual dynamic, i.e. behavioural sexual dimorphism, is worth preserving? Maybe we could do with being more like the monomorphic crows.

Alternatively, perhaps one imagines a world of nearly infinite morphological freedom where individuals can modify their own physiology and psychology with ease, unconstrained by sex, like character profiles in an RPG, and where sex and gender, insomuch as they exist, amount to little more than fashion. One may dream.

On Agency and Accountability

Is it morally permissible for a 14-year-old to be enlisted in the military under any circumstance? The impulse this question elicits is one of disgust, founded on a historically and geographically local set of assumptions about moral agency and its relationship to age. The “true” existence of agency itself is contestable if taken to mean the free exercise of an internal will. In light of a determinist view of causality, it may be merely a legal heuristic – a means by which to differentiate the degrees of illusory freedom to action in developed and developing brains. The human brain does not mature until the mid-to-late 20s, yet one can safely assume that no one is prepared to have all their freedoms withheld until age 25, since no such standard exists anywhere. With that off the table as a viable standard, we are left with varying degrees of randomness, stupidity, and (often hollow) virtue-signalling.

It goes without saying that reason postcedes rather than precedes morals – humans have moral emotions, which they justify through consequentialist reasoning only if such an expectation is placed upon them. Hence, discourse of the kind that follows here is rare.

One may ask, “What is true of a 14-year-old which, if true of an 18-year-old, would render the 18-year-old unfit for military service?” Intelligence is an untenable response, because intelligence does not scale linearly with age, and there are (and have been) plenty of legally adult military personnel with IQs in the 80s. The problem would be resolvable, perhaps, if age were a quantifiable trait, as intelligence is, as opposed to a numerical series of demographic cohorts, each with large individual variation in mental profiles. The reason it is illegal for anyone with an IQ below 83 to be inducted into the US military is not even “moral,” by the way. It is that such people are useless to the military. The moral argument, if any, is retroactive. No one seems to care how hard it is for them, really. Perhaps the “child” is a sacred demographic category, one which, under current conditions, must be extolled as a nexus of antediluvian bliss and innocence. By contrast, even acknowledging the unintelligent as a group with distinct needs is to be treated with indifference at best, or more often, suspicion.

It is legal in the UK to join the armed forces at age 16. Few who object to this seem interested in learning how many of these 16-year-olds regret the decision in hindsight, which would seem a good test of how “impulsively” the choice is made. It is well known that the military offers a kind of “escape hatch” of civic duty for teenagers with little hope of succeeding elsewhere, and many of them surely wanted to join from younger ages. The military, then, removes them from the morass of indecision and wastefulness that they would otherwise carry with them. “Impulsivity” is hard to measure. Maybe it can be defined in qualitative terms: a tendency to make decisions with low future-orientation. But if 14-year-olds are blighted by that, many more adults are, especially among the stupid.

Note also that the moral importance of emotions such as “regret” is not uniform. I argue that there are cases in which it can safely be called delusional or self-inflicted, as any emotion can be. In other cases, it simply does not strike onlookers as significant enough to be morally salient – the regrets of child actors about their profession, voiced once they become adults, being something few care to acknowledge.

Even the knee-jerk harm-reduction case against the military in the context of this argument has complications, because in most years, military deaths are low – probably lower than in construction work or industrial fishing. Yet no one cares for the lowly construction worker, despite his 50% chance of having an IQ south of 92 and his comparatively alarming risk of fatality in any given year, not just in years of ongoing war. Sudden, high-density death is more morally weighty to the average person than slow, diffuse, “accidental” death.

The hard determinism of brain-states, combined with knowledge of those states’ evolution through age, may have relevance to legal degrees of agentic “freedom.” If one compares the brain at different stages of development, the complexity and variety of its interactions with the world (“decisions”) at one stage may differ from those at another on aggregate, although exactly how morally salient the difference is will vary from case to case and may sometimes come down to a subjective judgement call.

The flip side of agency (or freedom) is accountability, a concept once as ubiquitous as that of “freedom” is today. The extent of brutal judicial punishment in premodern England was remarked upon by contemporaneous authors and looks absurd in retrospect. I suspect much of it stemmed from resource scarcity and technological deprivation, and the social tinderbox those conditions created: lawmakers may have felt they could not but disincentivise antisocial behaviour in the clearest way possible, because whatever antisocials destroyed could not be rebuilt as quickly as it can be today. They took no chances on recidivism. Another example would be the harsh punishments for sex crimes, in a world swimming with sexually transmitted pathogens but lacking effective medicine. This thesis could be empirically examined: are harsh punishments more common in deprived regions? Is clemency more common among the privileged classes? And so on. It also seems to dovetail with the assertion that our modern obsession with rights and freedoms is a product of technologically generated luxury, without which the social order prioritises duties.

Old criminal justice may have been disproportionate and cruel. Nonetheless, if ever it were possible to quantify how well behaviour X at a given age predicts outcome Y at a later date, why deny agency, or accountability, in cases where behaviour is indeed predictive? God knows who is qualified to make such calculations; probably no human, since humans are all preoccupied with sending virtue signals of endless freedom and protection. But, in principle, it ought not to be difficult to tell whether a child who murders is likely to do so again in adulthood, just as it is possible to build an algorithm that predicts recidivism within adulthood. In which case, why not hang the little shit? (Comedic exaggeration, of course. I do not endorse the death penalty.) After all, many traits are stable throughout the lifespan. What is now called psychopathy is usually one of them. I have no “solution” to age-dependent social norms, nor much hope that a “science” of agency and accountability will ever come to pass. The relation between age and agency is nevertheless a conundrum of interest to thinking people.

It is doubly unlikely that such a science will emerge as social norms from high-status societies, such as the West, spread across the planet memetically. Eventually it will get to the point where people cannot distinguish the signal from reality, and we shall all pretend that the move in this direction was “scientific,” and “progressive,” as with child labour laws. A less pernicious example of exactly this is the recent trend of Arabs turning away from religion. By far the biggest predictive factor in (ir)religiosity at the national level is IQ, and this change is not due to the Arabs’ having gained IQ points, so they are probably just copying what they see as their social betters, in the West.

The arbitrariness of it all came to mind again when I saw someone on Twitter being called a “true sociopath” and an “enabler of child-rape” for apparently endorsing a change in the age of sexual consent to 15. The same was debated in Britain recently. Since the legal age is 16 here, presumably Britons are enablers of child-rape as far as the average American is concerned. It never occurs to them that the concept of the post-pubescent “child” may be socially contingent in the extreme, and that no one can really agree on what it is, let alone what its rights or liabilities should be.

Such faux pas are ever-present, though, because extraordinarily few people have a picture of reality in their heads that integrates anything beyond the whimsy of the here and now. Many people struggle to wrap their heads around how different public opinion was on their political hobby-horse as recently as 20 years ago, never mind any further back.

And that’s not counting the surprising number of stupid and ahistorical things which even political dissidents believe.

PMS City

As with labels such as “schizophrenia,” and many besides, premenstrual syndrome is a symptomatological diagnosis – a category formed not on the basis of any known cause but on a loosely associated set of symptoms, which are many and vary in severity.

And none of them should exist. Evolution has had a near-eternity to refine the reproductive system, and normal bodily processes that induce pain or debility ought to be selected against unless there is some obvious adaptive trade-off (the only example that comes to mind is giving birth). Furthermore, since these problems are experienced by only a subset of women, they are not an inevitable result of hormonal changes.

An alternative explanation is pathogens. During the luteal phase of the menstrual cycle, the immune system is weakened so as not to destroy a new embryo, leaving women vulnerable to infectious agents. Empirically confirmed associations of PMS symptoms with pathogens include chlamydia and Trichomonas vaginalis, but there could easily be others that have either evaded precise investigation or been ignored.

The psychic pain brought on by menstruation is well documented. Hippocrates spoke of it, but he was clearly talking about the “madness” that could come as an effect of the physical symptoms such as dysmenorrhoea, not an independent mania or irrationalism brought on by what we now call “being hormonal,” whatever that means.

About one-quarter of women report clinical symptoms of PMS, which are plausibly pathogenic in origin, but a fairly decent percentage of them, with or without the disease symptoms, report other problems such as killing people, screaming Love Island-tier insults at household objects, crying incontinently, losing the ability to turn-take in conversation, psychotic paranoia, and wasting other people’s money.

Lots of physiological processes happen all the time which, theoretically, could have a noticeable impact on mood. Levels of cortisol, the “stress hormone,” shift throughout the day, peaking in the hours just after waking, and drinking alcohol has a far more dramatic impact on nearly all aspects of brain function than anything menstruation does. Yet the Morning Cortisol Rage has yet to enter the popular lexicon, and the effects of alcohol are closer to being psychosomatic than is ordinarily assumed: it is not a human universal that drinking causes chimpanzee-like states of aggression and disinhibition, as it does in Britain. So PMS looks like another anomaly of our time and place, a thing that exists because people want it to. The same probably holds true for – well, lots of things.

Childhood was a fun time. Maybe it’s not surprising that people love an excuse to return to it; some periodically, others pretty much all the time.