On Clothing and Beautification Norms

Left-wing social critics of the modern tradition describe social norms as arbitrary, especially as they relate to gender. They (henceforth, "leftists") have an equally noticeable tendency to be wrong, and I would bet the matters described herein are no exception.

First, the word “arbitrary” denotes that which happens without reason. With leftists, it means, “For reasons that I should not be expected to care about,” so obviously I will not bother to argue against that. I will give some possible reasons, and then people’s moral proclivities be what they may.

Men and women have different clothing norms almost everywhere, with exceptions, just as there are exceptions to the proposition that nodding the head up and down is an affirmative gesture. The latter is overwhelmingly common in a vast range of cultures, but it does not follow from its being less than totally universal that it is arbitrary. There is some reason that the tendency exists, even if it is something as simple as copying a high-status group.

Likewise: clothes. Dresses and skirts are strongly associated with femaleness, and trousers only weakly with maleness, to such a degree that a skirt on a male is sometimes seen as an aberration while trousers on a woman are not. Some have tried, without lasting success, to change this.

It is questionable how common the clothing styles now associated with women have been on men for much of history. The kinds of "skirt-like" or "dress-like" attire that have been somewhat common among men at various points, such as pulpit robes and cassocks, have specialised uses; they are not everyday attire for their wearers, and many are relics of the Roman tradition, which I will get to later. Indeed, in medieval England trousers were apparently worn only by men, as was certainly the case among the Germanic tribes (Diodorus Siculus, V, section 30) and the Persians. Charlemagne (Laver, pp. 52-3) donned the tunic, which is dress-like enough, for ceremonial reasons and otherwise wore trousers.

Nevertheless, tunics and kirtles were commonly seen on both sexes when they were in fashion. So one could easily get the impression that trousers and trouser-like garments have become less gendered over time, whereas skirt-like and dress-like garments have become more so. What changed?

The reasons people wore particular types of clothing were different in the past: practicality, cost, class division, and social conservatism all weighed far more heavily. Setting aside the thorny question of how knee-length vestments and tights became high-status among upper-class European males, one can look at the Romans. Braccae (woollen trousers) were associated with barbarians, and trousers were seldom worn by Romans; nevertheless, soldiers did wear them when practical, especially in the cold regions of the empire. Rome held its clothing conventions in high enough esteem that, after the Empire's collapse, they were still followed for reasons of status and of tradition qua tradition.

In many professions, especially in the industrial era, skirts and dresses would have proven impractical for much work, most conspicuously among working-class men, and to the extent that this was true, wearing those garments would have been associated with impracticality and therefore with low status. The examples are fairly obvious, but this is of questionable relevance to the present day, when the majority of work is in the tertiary and quaternary sectors.

It looks almost as though, before the 1800s, some men wore trousers but far fewer women did, and that trousers only became the default male fashion choice during that century.

Finally, and perhaps most saliently, clothing worn for purely ornamental purposes was historically rare outside the elite. Aristocrats, especially the French, and the Georgian-era gentry were known for it, but never the average person. This changed in the 20th century, especially its second half. Interestingly, trousers started to become fashionable among Western women at about the same time that short skirts did. For trousers, I would guess it was a matter of practicality and, secondarily, of dissociation from any historic tradition, e.g. that of the Romans or the Catholic Church. For short skirts, it was a matter of changing sexual mores (hence, miniskirts) and social mores; as for the trouser's association with work, an essential part of the male sex role, it came to embody a kind of archetype which some women sought to copy as the 20th century went along.

The association of long (e.g. ankle-length) skirts and dresses with chastity was probably not as strong in the early 20th century as it became later, because there was less in public life with which to contrast it, i.e. miniskirt-wearers were barely present.

Today, almost the only reasons to be choosy about what one wears are aesthetic, whether sexual or not. Thus, the vast majority of men do not even wear shorts unless 1) the weather is unbearably hot or 2) they are performing some activity that necessitates them or makes it easier, e.g. running or swimming. No one is interested in seeing men's legs per se except homosexual men. By contrast, women's legs are objects of intense desire and adoration for legions. So, in the present, a man who wears a long skirt or dress is giving off signals of chastity or sexual innocence, which is ridiculous in men. If he wears a short skirt or dress, he is giving off signals of sexual attractiveness, which, again, is absurd; the visual advertisement of these qualities is nearly meaningless in men, especially for chastity, but even for attractiveness unless he is profoundly physically attractive.

Other social changes have followed the same course for comparable reasons, such as the practice of leg-shaving, which is far more common in women than in men; in men it typically serves to highlight musculature, as with athletes, swimmers, and models.

One finds oneself suspicious of anyone claiming that a social trend emerged from the aether simply because of marketing or propaganda. The evidence that propaganda affects public opinion, after controlling for confounding factors, is thin; it is not even true of Hitler's speeches. There are always confounds: some economic, some endogenous and innate. It is sometimes claimed that the preference for shaven legs came about in the early 20th century in response to specific ad campaigns, which explains why one sees loads of old paintings of women with visibly hairy legs. At least, it would explain that if it were true.

Few women have their legs shaved year-round unless they live in a climate wherein they can expect to have their legs bare on any given day. In the past, when women seldom used their legs as sexual ornaments, it is reasonable to infer that shaving was rarer still, and that it became common once they did commonly use them for that purpose. Are we really to believe that this is a coincidence?

Women have sparser leg hair than men to begin with, a neotenous trait along with lack of facial hair, lower height, paedomorphism in facial structure, etc., all of which are considered highly attractive in women. Since relative lack of hirsutism is a sex-typical trait in females, and the heightened neoteny that shaving projects is attractive, women who frequently have their legs bared shave them. This is descriptive; it is not to say that anyone of either sex is ethically obliged to be attractive. However, what constitutes an attractive feature is fairly universal.

Tangentially, something similar occurred to cause the gradual skin-lightening of Europeans. Women almost universally have lighter skin than men, and more sex-typical features are preferred in mates. Europe is thought to have had a female-skewed sex ratio for much of its prehistory, thus increasing competition among females for mates and upwardly modulating selection upon elements of female sexual attractiveness, many of which spilled over into males either as byproducts or due to bidirectional sexual selection. This is one reason among many why Europeans are the most attractive race.

All this could be obvious. Much of it may have been once. Alas, few have any interest in finding the knowledge themselves.

Neurotic to the Bone

Ed Hagen recently wrote a paper outlining his objections to the classification of major depression as a "brain disorder," on the grounds that, in sum: the diagnostic criteria distinguish it from other conditions, not from "normal" persons; the symptoms of what is called depression tend to remit within weeks or months, and occur at points in life where some amount of sorrow would be expected; and depression is continuously distributed, so any cut-off for disordered behaviour is arbitrary. Although I cannot disagree with any of this, I think it misidentifies the problem, which is now potentially avoidable thanks to recent advances in genetics.

Namely: psychiatric diagnoses are based upon symptoms, not upon their genetic origins, evolutionary history, or adaptive value.

In the long run, natural selection allows only adaptive or neutral alleles to stay around; an allele avoids extinction only if it reproduces at least as fast as its competing variants. Thus, when an organism displays maladaptive behaviours, the explanation falls into one of three categories: deleterious mutations (the occasional fuck-up in DNA's copying process), pathogens, and gene-environment mismatch. The last of these refers to situations in which genes that were adaptive in one environment are still present after the environment changes; they simply have not had time to be selected out yet.
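To make "not had time to be selected out yet" concrete, here is a minimal sketch, a toy haploid model of my own devising rather than anything from the sources above, with hypothetical numbers for the starting frequency and selection coefficient:

```python
# Toy haploid model (my illustration): an allele that was neutral in the old
# environment becomes deleterious in the new one, with selection coefficient s.
# Each generation its frequency moves as
#   q' = q * (1 - s) / (1 - s * q),
# i.e. it shrinks by roughly a factor of (1 - s) per generation while rare.

def generations_to_halve(q0: float, s: float) -> int:
    """Generations until the allele's frequency falls to half its start."""
    q, gens = q0, 0
    while q > q0 / 2:
        q = q * (1 - s) / (1 - s * q)
        gens += 1
    return gens

# A hypothetical mismatch allele at 10% frequency with a 1% fitness cost:
print(generations_to_halve(0.10, 0.01))  # 75 generations
```

At a 1% fitness cost, merely halving the allele's frequency takes about 75 generations, roughly two millennia at 25-30 years per generation; a mismatched allele can stay common long after the environment has moved on.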

As Hagen notes, the symptoms of depression largely coincide with the symptoms of neuroticism, a personality variable which remains fairly constant throughout the lifespan and determines responsiveness to aversive situations such as the death of a first-degree relative. However, it is not as though there is no variation in trait neuroticism itself; some people and groups, e.g. women, are known to be higher in it than others. How much of that variation is "normal," in adaptive terms? Very little, I suspect.

Some of it is obviously gene-environment mismatch. We have not had planes or parachutes for very long, which is why most people are more scared of skydiving than of driving a car, despite the fact that the latter is demonstrably more dangerous. Equally, sex differences are generally a sign of different (historical) adaptive challenges for the sexes, which may be why women are more neurotic than men by ~0.4 standard deviations, roughly equivalent to two subpopulations of males whose mean heights differ by 1¼ inches (think of the English vs. the Spanish): barely noticeable at the mean, but very much so at the tails. But, on the whole, I doubt that most variation in neuroticism is adaptive.
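The tails claim is easy to verify with a normal approximation; the sketch below is my own illustration, assuming two unit-variance normal distributions whose means differ by 0.4 SD:

```python
# A quick check of the "tails" point: shift one normal distribution up by
# 0.4 SD and compare the share of each group beyond a few cut-offs.
# statistics.NormalDist is in the Python standard library.
from statistics import NormalDist

lower = NormalDist(mu=0.0, sigma=1.0)   # e.g. men
upper = NormalDist(mu=0.4, sigma=1.0)   # e.g. women, +0.4 SD in neuroticism

for cutoff in (1.0, 2.0, 3.0):
    p_low = 1 - lower.cdf(cutoff)   # share of the lower group past the cut-off
    p_high = 1 - upper.cdf(cutoff)  # share of the shifted group past it
    print(f"beyond +{cutoff:.0f} SD: {p_high / p_low:.1f}x over-representation")
```

A gap barely perceptible at the mean yields roughly 1.7-fold over-representation beyond +1 SD, 2.4-fold beyond +2 SD, and 3.5-fold beyond +3 SD.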

The genetic architecture of personality traits looks similar to that of intelligence, in that both are massively polygenic and only a small chunk of the variance is accounted for by the "common" alleles. In the case of personality, it might not even be as much as 10%. The rest, according to this paper, is due to "rare variant effects and/or a combination of dominance and epistasis." The common (freq. > ~1%) variants are in a kind of equilibrium because each has reproductive costs and benefits; otherwise they would either have swept to fixation or been purged, and it would be impossible for them all to be common. For a personality trait such as agreeableness, it may be, for example, that genes which inculcate high agreeableness make one less attractive at the outset, especially as a male, but more fecund in the long run, because agreeable people are more willing to have more kids, and so on. The remaining variance, carried by the individually rare (freq. ≤ ~1%) alleles, will be deleterious, hence their rarity.
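To show what such an equilibrium can look like, here is a toy sketch of my own (not the cited paper's model) of the textbook case, balancing selection via heterozygote advantage, with made-up fitness costs:

```python
# Balancing selection via heterozygote advantage (my illustration; the
# fitness costs are hypothetical). Fitnesses:
#   w(AA) = 1 - s1,  w(Aa) = 1,  w(aa) = 1 - s2
# The 'a' allele settles at s1 / (s1 + s2) regardless of where it starts.
s1, s2 = 0.02, 0.03  # made-up costs of the two homozygotes

q = 0.5  # starting frequency of a
for _ in range(10_000):
    p = 1 - q
    w_bar = p*p*(1 - s1) + 2*p*q + q*q*(1 - s2)  # mean fitness
    q = (p*q + q*q*(1 - s2)) / w_bar             # freq of a after selection
print(round(q, 4), s1 / (s1 + s2))  # both 0.4
```

A variant whose costs and benefits cancel in this way parks at an intermediate frequency indefinitely, which is what being both common and non-neutral requires.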

A deleterious allele can accumulate in the population until it reaches its equilibrium frequency, the point at which further accumulation is counterbalanced by selection. The equilibrium frequency for a given allele is generally just the mutation rate at its locus divided by its reduction in fitness relative to the population average: e.g. if the population's average birth rate is 2.0 and the allele knocks carriers down to 1.98, that is a fitness loss of 1%. For an allele with a mutation rate of 0.0001, this gives an effective "maximum" frequency of 1%. Given the number of variants involved in the brain, there are apparently a lot of these; almost everyone is carrying some, and the unluckiest, at the right tail of mutational load, could be carrying quantities orders of magnitude greater than the average.
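That arithmetic, in the standard mutation-selection-balance approximation for a non-recessive allele (a fully recessive one instead equilibrates near the square root of the same ratio), can be sketched with the text's own numbers:

```python
# Mutation-selection balance: recurrent mutation adds copies at rate mu per
# generation, selection removes them at a per-copy rate of ~s, and the two
# cancel at an equilibrium frequency of roughly mu / s.
mu = 1e-4  # per-generation mutation rate at the locus (text's figure)
s = 0.01   # fitness cost: 1.98 births for carriers vs. a 2.0 average

print(mu / s)  # 0.01, the ~1% "maximum" frequency given in the text

# Iterating the full dynamics (haploid sketch: selection, then mutation)
# lands in the same place:
q = 0.0
for _ in range(100_000):
    q = (1 - mu) * q * (1 - s) / (1 - s * q) + mu
print(round(q, 4))  # ~0.0099
```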

Since the behavioural correlates of neuroticism are not neutral and tend heavily towards the maladaptive (references: 1, 2, 3), one has to question how much of "normal" sorrow, grief, and anxiety is really normal. Common, sure, but nonetheless aberrant. Natural evolution does not offer a straightforward means of eliminating it in toto, but that need not make it impossible.

This shows yet more problems with the popular usage of the word “disorder.” Perhaps it is time to abandon the word altogether.