Neurotic to the Bone

Ed Hagen recently wrote a paper outlining his objections to the classification of major depression as a “brain disorder.” His grounds, in sum: the diagnosis is made to distinguish depression from other conditions rather than from “normal” persons; the symptoms of what is called depression tend to remit within weeks or months and occur at points in life where some amount of sorrow would be expected; and depression is continuously distributed, so any cut-off for disordered behaviour is arbitrary. Although I cannot disagree with any of this, I think it misidentifies the problem, which is now potentially avoidable thanks to recent advances in genetics.

Namely: psychiatric diagnoses are based upon symptoms, not upon their genetic origins, evolutionary history, or adaptive value.

In the long run, natural selection allows only adaptive or neutral alleles to persist at any appreciable frequency; an allele avoids extinction only if it keeps up with the rate at which competing variants reproduce. Thus, when an organism displays maladaptive behaviours, the explanation falls into one of three categories: deleterious mutations (the occasional fuck-up in DNA’s copying process), pathogens, and gene-environment mismatch. The last of these refers to situations in which genes that were adaptive in a past environment are still present after the environment changes – they simply have not had time to be selected out yet.

As Hagen notes, the symptoms of depression are largely synonymous with the symptoms of neuroticism – a personality trait which remains fairly constant throughout the lifespan and determines responsiveness to aversive situations such as the death of a first-degree relative. However, it is not as though there is no variation in trait neuroticism itself; some people and groups are known to be higher in it than others, e.g. women. How much of that variation is “normal,” in adaptive terms? Very little, I suspect.

Some of it is obviously gene-environment mismatch. We have not had planes or parachutes for that long, which is why most people are more scared of skydiving than driving a car despite the fact that the latter is demonstrably more dangerous. Equally, sex differences are generally a sign of different (historical) adaptive challenges for the sexes, which may be why women are more neurotic than men by ~0.4 standard deviations, roughly equivalent to two subpopulations of males with a mean height difference of 1¼ inches – think of the English vs. the Spanish. Barely noticeable at the mean, but very much so at the tails. But, on the whole, I doubt that most variation in neuroticism is adaptive.
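To make the “mean vs. tails” point concrete, here is a minimal sketch – my own illustration, not anything from the original – comparing two equal-variance normal distributions whose means differ by 0.4 standard deviations; the cut-offs and the normality assumption are purely illustrative.

    # How a 0.4 SD mean difference plays out at the mean vs. the tails
    # of two equal-variance normal distributions (illustrative assumptions).
    from scipy.stats import norm

    d = 0.4  # assumed standardised mean difference between the two groups

    for z in (0, 1, 2, 3):  # cut-offs, in SDs above the lower-mean group's average
        p_low = norm.sf(z)       # share of the lower-mean group above the cut-off
        p_high = norm.sf(z - d)  # share of the higher-mean group above the cut-off
        print(f"{z} SD cut-off: {p_high / p_low:.2f}x over-representation")

Under these assumptions the higher-mean group is over-represented by a factor of only ~1.3 at the mean, but by ~2.4 at two standard deviations and ~3.5 at three.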

The genetic architecture of personality traits looks similar to that of intelligence in that both are massively polygenic and only a small chunk of the variance is captured by common additive variants. In the case of personality, it might not even be as much as 10%. The rest, according to this paper, is due to “rare variant effects and/or a combination of dominance and epistasis.” The common (freq. > ~1%) variants are in a kind of equilibrium because each has reproductive costs and benefits; otherwise it would be impossible for them all to be common. For a personality trait such as agreeableness, it may be, for example, that genes which inculcate high agreeableness make one less attractive at the outset, especially as a male, but more fecund in the long run, because agreeable people are more willing to have kids, and so on. The rest of the variance comes from individually rare (≤ 1%) alleles, which will be deleterious – hence their rarity.

A deleterious allele can accumulate in the population until it reaches its equilibrium frequency, the point at which the input of new mutations is counterbalanced by their removal through selection. The equilibrium frequency for a given allele is, roughly, the mutation rate at its locus divided by its reduction in fitness relative to the population average: e.g. if the population’s average birth rate is 2.0 and the allele knocks carriers down to 1.98, that is a fitness loss of 1%; for an allele with a mutation rate of 0.0001, this gives an effective “maximum” frequency of 1%. Given the number of variants involved in the brain, there are apparently a lot of these, almost everyone is carrying some, and the unluckiest, at the right tail of mutational load, could be carrying a great deal more than the average.
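For concreteness, a minimal sketch of the arithmetic in that example – my own, assuming the standard approximation for a deleterious allele whose effects are visible in heterozygotes (a fully recessive allele would instead equilibrate near √(μ/s)):

    # Mutation-selection balance (dominant case): q_hat ≈ mu / s, where mu is the
    # per-locus mutation rate and s is the proportional fitness loss in carriers.

    def selection_coefficient(mean_fitness: float, carrier_fitness: float) -> float:
        """Proportional fitness reduction of carriers relative to the population mean."""
        return (mean_fitness - carrier_fitness) / mean_fitness

    def equilibrium_frequency(mu: float, s: float) -> float:
        """Approximate frequency at which new mutations are balanced by selection."""
        return mu / s

    s = selection_coefficient(2.0, 1.98)   # the 1% fitness loss from the example
    print(equilibrium_frequency(1e-4, s))  # ≈ 0.01, i.e. the ~1% "maximum" in the text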

Since the behavioural correlates of neuroticism are not neutral and tend heavily towards the maladaptive (references: 1, 2, 3), one has to question how much of “normal” sorrow, grief, and anxiety is really normal. Common, sure, but nonetheless aberrant. Natural selection does not offer a straightforward means of eliminating it in toto, but that need not make it impossible.

All of this points to yet more problems with the popular usage of the word “disorder.” Perhaps it is time to abandon the word altogether.

Ethnocentrism, or How I Learned to Stop Caring and Abandon Eternally Unreciprocated Altruism

Like many or perhaps most people with a verbally slanted intelligence profile, I used to invest a lot of energy into heady academic philosophy, especially ethics. The distant vision of a cognitively post-human species and a suffering-free world governed by utilitarian moral codes probably stimulated my endogenous opioid system. In retrospect, it seems to have been little more than an indulgence, which is my impression of credentialled ethicists generally – particularly utilitarians. They are content to spend eternity arguing over the merits of classical vs. negative utilitarianism, or whether the balance of hedonic and dystonic impulses that sentient life experiences can be said to make it “worth living or reproducing,” etc. For what it is worth, my meta-ethical views are anti-realist: I do not believe that any moral system is the “correct one.” Utilitarianism may be useful insomuch as most beings do value happiness and disvalue suffering. But the complexity of moral values in humans is such that these categories will never be quantifiable to anyone’s satisfaction, and prescriptive antinatalism is a dead end for reasons I have discussed before. Although, non-human life is not such a tough call. People who romanticise the beauty of the natural world are admiring the décor of a planet-sized torture chamber. It is, as Dawkins observed, “beyond all decent contemplation,” and I would have no compunction in painlessly culling most of it if I and my kind were the governors of the known universe.

If. Herein lies a cautionary tale against the distinctly European tendency of embracing blind universalism. There is no coming future of global utilitarian welfarism, at least not one with internally coherent goals or loyalties, because for most of the peoples of this Earth, the only relevant loyalty is to their group, their moral community, their kith and kin – whatever that looks like. It need not be racial, although it usually is in part. Recent computer model data have shown how ethnocentric orientation fares in competition with three alternative group strategies: humanitarianism (English liberals, effectively), egoism, and traitors (SJWs). The last of these is found to be the least effective strategy by far, which is why in the long run there is little reason to worry about traitorous progressive ideologues. Eventually they will learn the hard way. Ethnocentrism wins the day against humanitarianism too, by exploiting the generosity of humanitarians. This sheds light on how ethnocentrism evolved without invoking impossibilities such as group selection in humans, and how it is that this strategy seems to predominate, quietly, even in relatively humanitarian populations. It also illustrates why calling immigration leftist or anti-tribal is such a farce. Far too many white people see it as such because they are humanitarians – optimised to believe that their group-blind, gene-blind preferences are completely generalisable.

There is no indication that moral preferences can ever transcend genetic conflict. If it could, an awful lot of things in human history would have taken a different course. Political preferences are certainly heritable, and although it may be impractical to seek twin-study data on ethical systems, I would be willing to bet that there is also a common genetic architecture to utilitarians, humanitarians, libertarians, and advocates of “effective altruism.” One aspect of their shared phenotype is distinctly visible whenever large numbers of them congregate, as many have observed. Some even seem to understand, perhaps unconsciously, just how rarefied their values are and endorse global government as a solution. But would all the opportunities for abuse in such a system be worth it? Would anyone other than its own architects, presumably Westerners, consent to its rulership? Indeed, if these are the measures necessary to bring the world into value-alignment, why not push for genetic imperialism, i.e. diluting the phenotypic diversity of the human species through genetic engineering so as to render everyone effective clones of Will MacAskill and Diana Fleischman?

Don’t laugh. Some people actually take this seriously as a solution. Their profile is predictable: white, middle class, educated, and living in a country in which the problems that were not there before multiracialism are getting increasingly hard to ignore – although it is a relatively safe assumption that they live nowhere near it themselves. They understand that group differences, even in something as simple as IQ, are enough to cause resentment (that is why racial affirmative action exists), and advocate post-racialism via genetic enhancement. It sounds nice enough until you realise that it would give hostile aliens yet more reasons to emigrate to the West, with our finite resources, but there is no reciprocal allowance for whites to move into Africa and Asia and take them over. Furthermore, given that there is more to group conflict than IQ, who is to say that this would ameliorate group tensions enough to be worth the risk? In the end, what we are talking about is a literal horror movie replacement scenario in which POC are converted into high-class whites who just happen to have brown bodies. How many would willingly subject themselves to this, and how many of those who rejected it would continue to fan the flames of ethnic conflict?

The reason the appraisal of group differences is important to begin with is that nearly all human altruism is based on either reciprocity or sociogenetic closeness. Thus, a universal basic income within a homogeneous society is at least justifiable on the grounds that it is helping to maintain the social order, reducing economic inequality, and the recipients, if they do something destructive with the money, are only going to self-destruct. In a country like the United States, those paying through their taxes, whites, would have legitimate cause to wonder: “Why is money being forcefully extracted from me and given to people who often explicitly say they want me destroyed?” This, too, spurs on conflict. Foreign aid has yet more pitfalls. Since Africans mostly lack the intelligence to build civilisation alone, they rely on foreign donors, their population explodes, and then masses of them die in Malthusian collapses.

Some humanitarians are prepared to accept that we will probably never get to explore much of the universe, for a host of pragmatic reasons. It seems all the more implausible these days. With two interstellar objects passing through the solar system in as many years, it is looking as though interstellar space is chock-full of fast-moving debris, and since no one actually wants to live on Mars (I don’t), we may well be stuck on this planet. Yet, the same people are unwilling to acknowledge the barriers to moral unity within the human species, and the power dynamics inherent to any attempts at “conversion.” It has already been tried by colonialists. Didn’t work. Why would it this time?

Ultimately, idealists must accept that the only effective way for their ideas to propagate is for their moral community to propagate its genes. The most effective way to do that is for the idealists to secure for themselves exclusive territories, build artificial uteri, and use them to create more copies of themselves. After all, natural human reproduction is much less efficient than that of some e.g. fishes, which can produce a million offspring in a year. Technology offers a way around this, such that we can all be Genghis. The result? An endless supply of persons who share your values and interests, not to mention all the possibilities of genetic and cybernetic enhancement. Not everyone will be welcome to your territory. But I promise, the sooner you can get over that, the happier and saner you, and everyone, will be.

The Brick Wall of Washing Machines

People probably make too much fuss about defining biological sex in terms of its organic components. The term “chromosomes” gets thrown about, maybe because it is commonly used in basic biology education and is consequently a bit more accessible than “gametes,” although gametes are in fact the heart of the matter. Several different chromosomal combinations exist in humans (as abnormalities) besides XX and XY, but gametes come in only two forms – sperm and ova, the component factors of sexual reproduction.

But why does sexual reproduction itself exist, and by extension, why do the two sexes exist at all? It is not a given across all species. Quite a few species of plants and some unicellular organisms practise autogamous fertilisation, effectively a slightly modified form of cloning in which the machinery of sex is applied to an otherwise identical genetic template. Others, like the New Mexico whiptail, are parthenogenetic, meaning that females can produce more females (clones) with no fertilisation at all. Most often this manifests as a “fail-safe,” in species such as the Komodo monitor, for environments with a shortage of males. Obligate parthenogens are rare. Where obligate parthenogenesis does arise, it tends to be the result of an unusually torpid environment combined with some kind of recent fuck-up. In the case of the obligately parthenogenetic New Mexico whiptail: it lives primarily in the desert and owes its existence to cross-breeding between two parent lizard species which cannot produce viable males. If its environment changes too much, it is fucked: cloning and autogamy place a hard limit on gene recombination, and therefore on adaptation, which is why autogamy really only persists in plants and invertebrates, and the dominant presentation of cloning is as a “failure mode” in otherwise sexually reproducing species. Either is only “practical” in species with extraordinarily high reproductive potential, short gestation periods, sedate or undemanding environments, low metabolic needs, or high mutation rates.

Given this, it is not hard to see where males and females came from. Think of The Sexes™ as a strategy of gene propagation, and then secondary sex differences, in morphology and psychology, as strategies which reflect the different selective pressures the sexes were subjected to and/or subjected each other to (dimorphism). Viewed through this lens, females represent the “default” strategy which began with the oldest organisms (e.g. asexual bacteria): the “incubators,” reproducing through cloning and self-fertilisation, whereas males, the “fertilisers,” are a comparatively recent innovation. The degree of “sex-differentiation load” that falls upon males varies by species according to the aforestated variables in selection. Since females are, as is often noted, the gatekeepers of reproduction, the selection pressures that act primarily on females tend to be similar across species and relate, directly or obliquely, to their ability to bear offspring. For males, the story revolves around the conditions of access to females, which is why the male sex “morph” (form) differentiates itself from the female in completely different ways across species.

Sometimes male and female are barely distinguishable from one another. This is the case for many monogamous avians, whose environments, for whatever reason, do not lend themselves to significant sexual differentiation, which reduces female choosiness, which limits dimorphism: it is a negative feedback system. Other birds, like the crested auklet, engage in a kind of mutually eliminative sexual selection, whereby each sex vets the other for organically expensive sexual ornaments for reasons that are not well understood. In elephant seals, the degree of sex differentiation, just in size, borders on the absurd, although their (relative to humans) feeble brains mean that the possible scope of behavioural differentiation is not all that striking most of the time. Exactly where humans “fit” on these continua of male sex differentiation is something of a relative judgement call, but we are obviously not auklets or crows.

Sexual dimorphism and monomorphism have characteristic behavioural correlates, most of which are obvious. Monomorphic species tend to be monogamous, with fairly equal parental investment in offspring and low variance in male reproductive success. Dimorphic species tend towards, well, the opposite of those traits. Humans also have a lengthier period of offspring dependency and parental investment than most animals, largely because of how long the human brain takes to develop, which probably limits sex differentiation in, e.g., aggression compared with species that practise effective polygyny; different normative mating systems between human societies will also affect it, notwithstanding other forces such as judicially enforced genetic pacification. There is also considerable variation in these “life history traits” through time: from a time when “childhood” was seldom acknowledged as its own entity and children were held responsible, to the point of execution, for criminal wrongdoing from an extremely young age, to … whatever you would call the situation we have now. Certain kinds of change may be inevitable, in this respect. Other things are remarkably changeless even in the face of new environments.

Human sexual dimorphism is an example of this changelessness. If aliens were to observe the human sexes 100 years ago and now, they would note stability in a range of male and female responses to exogenous stimuli, and note the differences in underlying strategy. Males are the strategy of high risk, aggression, dominance, status-seeking, agency and systems orientation; females are the strategy of low risk, passive aggression, emotional dominance, comfort-seeking, agency by proxy, and social orientation. (A great example of the agency/agency-by-proxy distinction can be seen in sex-specific antisocial behaviours such as psychopathy in males and Briquet’s syndrome in females.) They would note that human females are the limiting factor in reproduction, but human males are the limiting factor in just about everything else (obligatory Paglia quote about living in grass huts, etc.). Intelligence is probably not a sexually selected trait in humans, or at least there is little good evidence that it is, and sex differences in intelligence per se are trivial. The sex difference is in application. Human brain complexity and its antecedents mean that the domain of activities germane to preserving one’s genetic line is rather more elaborate than usual, and since females are the “selector” sex, those tasks, and selection for assiduous task-doing, fall upon the males.

There is no real sense in which human beings can “escape” natural selection, because natural selection is the reason behind everything that we are, including the desire (of some) to “overcome” natural selection, whatever that means. However, natural selection has also given us moral instincts and reasoning abilities which, combined with the technologies born mostly of male ingenuity, could allow us to divert evolutionary selection pressures in a way that could never happen without our technology. The crapshoot of genetic recombination, by the lights of human morality, is just that: a crapshoot. At some point, artificial gametogenesis could allow humans to become effective hermaphrodites, even if we retain the old equipment. CRISPR, and eventually full genome synthesis, could render natural recombination, and therefore sexual reproduction itself, obsolete. Childhood will increasingly resemble adulthood as we produce children of extremely superior intelligence and thus reduce the need for high investment. Male breadwinning roles will run into a brick wall of automation, or perhaps the cloning of the 99.999th-percentile most workaholic and intelligent workers. Female homemaking roles will run (or already have run) into a brick wall of washing machines. As technology outpaces our obsolescent biological hardware, one seriously has to wonder: how much of the human intersexual dynamic, i.e. behavioural sexual dimorphism, is worth preserving? Maybe we could do with being more like the monomorphic crows.

Alternatively, perhaps one imagines a world of nearly infinite morphological freedom where individuals can modify their own physiology and psychology with ease, unconstrained by sex, like character profiles in an RPG, and where sex and gender, insomuch as they exist, amount to little more than fashion. One may dream.

On Agency and Accountability

Is it morally permissible for a 14-year-old to be enlisted in the military under any circumstance? The impulse this question elicits is one of disgust, founded on a historically and geographically local set of assumptions about moral agency and its relationship to age. The “true” existence of agency itself is contestable if taken to mean the free exercise of an internal will. In light of a determinist view of causality, it may be merely a legal heuristic – a means by which to differentiate the degrees of illusory freedom to action in developed and developing brains. The human brain does not mature until the mid-late 20s, and one can safely assume that no one is up for all their freedoms to be forfeited until age 25, since no such standard exists. With that off the table as a viable standard, we are left with varying degrees of randomness, stupidity, and (often hollow) virtue-signalling.

It goes without saying that reason postcedes rather than precedes morals – humans have moral emotions, which they justify through consequentialist reasoning only if such an expectation is placed upon them. Hence, discourse of the kind that follows here is rare.

One may ask, “What is true of a 14-year-old which, if true of an 18-year-old, would render the 18-year-old unfit for military service?” Intelligence is an untenable response, because intelligence does not scale linearly with age, and there are (and have been) plenty of legally adult military personnel with IQs in the 80s. The problem would be resolvable, perhaps, if age were a quantifiable trait, as intelligence is, as opposed to a numerical series of demographic cohorts, each with large individual variation in mental profiles. The reason it is illegal for anyone with an IQ below 83 to be inducted into the US military is not even “moral,” by the way. It is that such people are useless to the military. The moral argument, if any, is retroactive. No one seems to care how hard it is for them, really. Perhaps the “child” is a sacred demographic category, one which, under current conditions, must be extolled as a nexus of antediluvian bliss and innocence. By contrast, even acknowledging the unintelligent as a group with distinct needs is to be treated with indifference at best, or more often, suspicion.

It is legal in the UK to join the armed forces at age 16. Few who object to this seem interested in learning how many of these 16-year-olds regret the decision in hindsight, which would seem a good test of how “impulsively” the choice is made. It is well known that the military offers a kind of “escape hatch” of civic duty for teenagers with little hope of succeeding elsewhere, and many of them surely wanted to join from younger ages. The military, then, removes them from the morass of indecision and wastefulness that they would otherwise carry with them. “Impulsivity” is hard to measure. Maybe it can be defined in qualitative terms: a tendency to make decisions with low future-orientation. But if 14-year-olds are blighted by that, many more adults are, especially among the stupid.

Note, concomitantly, that the moral importance of emotions such as “regret” is not uniform. I argue that there are cases in which it can safely be called delusional or self-inflicted, as all emotions can be. In some cases, it simply does not feel significant enough to onlookers for it to be considered morally salient, such as the regrets of child actors about their profession as they become adults – something that few care to acknowledge.

Even the knee-jerk harm-reduction case against the military in the context of this argument has complications, because in most years, military deaths are low – probably lower than construction work or industrial fishing. Yet, no one cares for the lowly construction worker despite the fact that he has a 50% chance of having an IQ south of 92, and a, relatively speaking, alarming risk of fatality in any given year, not just those in which there is an ongoing war. Sudden, high-density death is more morally weighty to the average person than slow, diffuse, “accidental” death.

The hard determinism of brain-states, combined with knowledge of those states’ evolution through age, may have relevance to legal degrees of agentic “freedom.” If one compares the brain at different stages of development, the complexity and variety of its interactions with the world (“decisions”) at one stage may differ from those at another on aggregate, although exactly how morally salient the difference is will vary from individual case to case and may sometimes be doomed to subjective judgement call.

The flip side of agency (or freedom) is accountability, a concept which was once as ubiquitous as that of “freedom” is today. The extent of brutal judicial punishment in premodern England was remarked upon by contemporaneous authors and looks absurd in retrospect. I suspect a large part of it was an environment of resource scarcity and technological deprivation, and the social tinderbox those conditions gave rise to: lawmakers may have felt that they had no choice but to disincentivise antisocial behaviour in the clearest way possible, because whatever antisocials destroyed could not be rebuilt as quickly as it can be today. They took no chances on recidivism. Another example would be the harsh punishments for sex crimes, in a world swimming with sexually transmitted pathogens but lacking effective medicine. This thesis could be examined empirically: are harsh punishments more common in deprived regions? Is clemency more common among the privileged classes? And so on. It also seems to dovetail with the assertion that our modern obsession with rights and freedoms is a product of technologically generated luxury, without which the social order prioritises duties.

Old criminal justice may have been disproportionate and cruel. Nonetheless, if ever it were possible to quantify how well behaviour X at {age} predicts outcome Y at a later date, why deny agency, or accountability, in cases where it is indeed predictive? God knows who is qualified to make such calculations; probably no human, since humans are all preoccupied with sending virtue signals of endless freedom and protection. But, in principle, it ought not to be difficult to tell whether a child who murders is likely to do so again in adulthood, just as it is possible to make an algorithm that predicts recidivism within adulthood. In which case, why not hang the little shit? (Comedic exaggeration, of course. I do not endorse the death penalty.) After all, many traits are stable throughout the lifespan. What is now called psychopathy is usually one of them. I have no “solution” to dealing with age-dependent social norms, nor much hope of a “science” of agency and accountability ever coming to pass. Age and agency is nevertheless a conundrum of interest to thinking people.

It is doubly unlikely that such a science will emerge as social norms from high-status societies, such as the West, spread across the planet memetically. Eventually it will get to the point where people cannot distinguish the signal from reality, and we shall all pretend that the move in this direction was “scientific,” and “progressive,” as with child labour laws. A less pernicious example of exactly this is the recent trend of Arabs turning away from religion. By far the biggest predictive factor in (ir)religiosity at the national level is IQ, and this change is not due to the Arabs’ having gained IQ points, so they are probably just copying what they see as their social betters, in the West.

The arbitrariness of it all came to mind again when I saw someone on Twitter being called a “true sociopath” and “enabler of child-rape” because he was apparently endorsing changing the age of sexual consent to 15. The same was debated in Britain recently. Since the legal age is 16 here, presumably Britons are enablers of child-rape as far as the average American is concerned. It never occurs to them: maybe the concept of the post-pubescent “child” is highly socially fungible, and no one can even really agree on what it is, let alone what its rights or liabilities should be.

Such faux pas are ever-present, though, because extraordinarily few people have a picture of reality in their heads that integrates anything beyond the whimsy of the here and now. Many people struggle to wrap their heads around how different public opinion was on their political hobby-horse as recently as 20 years ago, never mind any further back.

And that’s not counting the surprising number of stupid and ahistorical things which even political dissidents believe.

Pinksheet Yang

A couple of months ago, when an avalanche of Yang memes seemed to appear out of nowhere, Hunter Wallace pointed out (his YouTube channel has been deleted, so I can't link to it) that this wasn't organic, and that Yang was clearly getting a “boost” from somewhere. Wallace was certainly correct about that. It was clearly a coordinated, professional op, but by whom? I have some ideas about who was directing it and what the reasons were, but it doesn't matter; it's all speculation. It's also hard to tell to what extent anything that originates from places like 4chan is even real anymore, or to what extent it ever was. Many of Yang's policies were good, and if nothing else, $1000 a month is $1000 a month – nothing else mattered. Yang's candidacy was propelled by what was essentially a “pump and dump” scheme, similar to those used in the seedy world of pink-sheet and penny-stock promotion. With that thought, how appropriate the “pink hats” were.

None of that was Yang's fault, though. He did, of course, make a strategic error in failing to embrace his new “supporters” and capitalize on the momentum gifted to him by the powers that be. Many people were disappointed by this and quickly abandoned the yacht. Still, part of me found it kind of admirable that Yang insisted on being true to himself, “math” and all, rather than latch onto some fleeting, trendy meme campaign and pretend to be an obnoxious shitlord.

Yang did make some real blunders, though. His first error was the decision to announce some new policy every day (I can't remember whether it was for 30 or 60 days). Many of these proposals seemed to have been pulled out of his ass, or to be the result of poor advice. Things such as “lowering the voting age to 16” were totally unnecessary and alienated a lot of potential supporters. He failed to take his own advice and “focus on the money.” His big selling point was the $1000 per month; that is all he should have been talking about, with the exception of a few other common-sense stances on important issues of the day to show he was a serious, well-rounded candidate. Yang's other serious error was his over-the-top pandering to SJWs and Russia-conspiracy airheads. There is no way that someone as smart as Yang really buys into all that nonsense. The same criticism I applied to Trump years ago applies to Yang. Intelligent candidates are at their best when they boldly articulate what they believe in their hearts rather than tell people what they think (or have been advised) voters want to hear. Even if it seems unpopular or like a bad move politically, you have to take the heat and press forward, confident that you will be vindicated. Lead the people where you want them to go.

Finally, I didn't watch the debates, but from every indication, Yang's performance was a disaster. He squandered what little airtime he received making statements like “Russia is hacking our democracy.” Yang clearly does not understand where his potential pool of support lies. There was a niche available to him which he has been too clueless to recognize and exploit. Look, I like Yang. I wrote three lengthy essays and made a YouTube video expressing enthusiastic (by my standards, anyway) support for him. There's still a long way to go in the election. If he's really good at math, maybe he can learn from his mistakes like a sophisticated computer. At this point, though, I don't believe Yang has what it takes. $YANG stock has tanked. Don't be left holding this bag.

A Brief Look at “Incel” Hysteria

Individuals of the libertine persuasion – those who take delight in the kind of bland sex-positive advocacy for which there are now countless figureheads on the internet – have suggested to me many times that the availability of internet pornography is behind the fall in sexual violence that began roughly in the early 90s. I do not think that is abjectly insane, but it is hard to miss some inconvenient background details: homicide, and many other kinds of crime, also fell in the 90s, which casts immediate doubt on pornography as the cause. No one quite knows why it happened, but there are several hypotheses, some spicier than others.

It is important to mention that this period of declining violence (~ 1990-2005) is a mere blip within a trend which has been going on far longer. Technology, broadly construed, may be responsible for some recent declines, and before that, the aggressive genetic selection against criminality that took place in Europe over several centuries.

One wonders: when was the last time that rape was a viable reproductive strategy – that is, more likely to result in descendants than to result in imprisonment and ignominy? World War Two seems a good recent example: the Red Army managed to leave hundreds of thousands of descendants through the rape of foreign women of the nation they were at war with, but that is quite different in terms of its social consequences, i.e. has much weaker selective force acting on it, than raping women in one’s social in-group or clan. All this is amplified, too, by the availability of abortion, which renders births from rape in the First World basically non-existent at this point.

So, the type of rapist who rapes the enemy’s women in war has a distinct psychological profile from, and is probably far more common than, the type who rapes at home. In the modern West, the latter type makes up almost all “rape data” after controlling for immigration. Exactly what do we know about those people?

Genes do not seem to contribute much to the variance of propensity to rape adults in Sweden. Then again, possible confound: Sweden. However, I would not be surprised if the pattern holds true even in undiversified regions. For instance, it could be that the heritability of within-group rape has declined over time because its “adaptive” function, if it ever had one, is now a dead end. So now, it (within-group rape) survives because the genes that increase the likelihood of doing it are normally implicated in other, more reproductively successful behaviours.

All of this should cast doubt on the idea that the incel “phenomenon” will trigger a rape epidemic. Men who rape (again, excluding wartime rape) have different brains from men who do not, regardless of whether they are celibate. And anyone saying that men need more “sexual outlets” to ward off the incoming Incel Rape Army is full of shit. There has essentially never been a society where long-term relationships (or even short-term liaisons) are men’s only chance at sexual access, even where alternative means are “banned.” Pornography and prostitution are banned in South Korea, yet approximately 23% of men there have visited prostitutes.

The average age at marriage for males in western Europe has been 26-28 for centuries, and a goodly portion of the population never married, even back when marriage was far more of an idealised social norm. Current trends are really not all that terrifying.