The Brick Wall of Washing Machines

People probably make too much fuss about defining biological sex in terms of its organic components. The term “chromosomes” gets thrown about, maybe because it is commonly used in basic biology education and is consequently a bit more accessible than “gametes,” although gametes are in fact the heart of the matter. Several different chromosomal combinations exist in humans (as abnormalities) besides XX and XY, but gametes come in only two forms – sperm and ova, the component factors of sexual reproduction.

But why does sexual reproduction itself exist, and by extension, why do the two sexes exist at all? It is not a given across all species. Quite a few species of plants and some unicellular organisms practise autogamous fertilisation, effectively a slightly modified form of cloning in which the mechanics of sex are applied to an otherwise identical genetic template. Others, like the New Mexico whiptail, are parthenogenetic, meaning that females can produce more females (clones) with no fertilisation at all. Most often this manifests as a “fail-safe,” in species such as the Komodo monitor, for environments with a shortage of males. Obligate parthenogens are rare. When obligate parthenogenesis does occur, it tends to be the result of an unusually torpid environment combined with some kind of recent fuck-up. In the case of the obligately parthenogenetic New Mexico whiptail: it lives primarily in the desert and owes its existence to cross-breeding between two parent lizard species, a hybridisation that yields no viable males. If its environment changes too much, it is fucked: cloning and autogamy place a hard limit on gene recombination, and therefore adaptation, which is why autogamy really only exists in plants and invertebrates, and the dominant presentation of cloning is as a “failure mode” in otherwise sexually reproducing species. Either is only “practical” in species with extraordinarily high reproductive potential, short gestation periods, sedate or undemanding environments, low metabolic needs, or high mutation rates.

Given this, it is not hard to see where males and females came from. Think of The Sexes™ as strategies of gene propagation, and then secondary sex differences, in morphology and psychology, as strategies which reflect the different selective pressures the sexes were subjected to and/or subjected each other to (dimorphism). Viewed through this lens, females represent the “default” strategy which began with the oldest organisms (e.g. asexual bacteria): the “incubators,” reproducing through cloning and self-fertilisation, whereas males, the “fertilisers,” are a comparatively recent innovation. The degree of “sex-differentiation load” that falls upon males varies by species according to the aforementioned variables in selection. Since females are, as is often noted, the gatekeepers of reproduction, the selection pressures that act primarily on females tend to be similar across species and relate, directly or obliquely, to their ability to bear offspring. For males, the story revolves around the conditions of access to females, which is why the male sex “morph” (form) differentiates itself from the female in completely different ways across species.

Sometimes male and female are barely distinguishable from one another. This is the case for many monogamous avians, whose environments, for whatever reason, do not lend themselves to significant sexual differentiation, which reduces female choosiness, which limits dimorphism: it is a negative feedback system. Other birds, like the crested auklet, engage in a kind of mutually eliminative sexual selection, whereby each sex vets the other for organically expensive sexual ornaments for reasons that are not well understood. In elephant seals, the degree of sex differentiation, just in size, borders on the absurd, although their (relative to humans) feeble brains mean that the possible scope of behavioural differentiation is not all that striking most of the time. Exactly where humans “fit” on these continua of male sex differentiation is something of a relative judgement call, but we are obviously not auklets or crows.

Sexual dimorphism and monomorphism have special behavioural correlates, most of which are obvious. Monomorphic species tend to be monogamous, with fairly equal parental investment in offspring and low variance in male reproductive success. Dimorphic species tend towards, well, the opposite of those traits. Humans also have a lengthier post-reproductive schedule than most animals, largely because of how long it takes the human brain to develop, which probably limits sex differentiation in e.g. aggression compared with some species that practise effective polygyny. Different normative mating systems between human societies will also affect it, notwithstanding other forces such as judicially enforced genetic pacification. There is also considerable variation in these “life history traits” through time: from a time when “childhood” was seldom acknowledged as its own entity and children were held responsible, to the point of execution, for criminal wrongdoing from an extremely young age, to … whatever you would call the situation we have now. Certain kinds of change may be inevitable, in this respect. Other things are remarkably changeless even in the face of new environments.

Human sexual dimorphism is an example of this changelessness. If aliens were to observe the human sexes 100 years ago and now, they would note stability in a range of male and female responses to exogenous stimuli, and note the differences in underlying strategy. Males are the strategy of high risk, aggression, dominance, status-seeking, agency and systems orientation; females are the strategy of low risk, passive aggression, emotional dominance, comfort-seeking, agency by proxy, and social orientation. (A great example of the agency/agency-by-proxy distinction can be seen in sex-specific antisocial behaviours such as psychopathy in males and Briquet’s syndrome in females.) They would note that human females are the limiting factor in reproduction, but human males are the limiting factor in just about everything else (obligatory Paglia quote about living in grass huts, etc.). Intelligence is probably not a sexually selected trait in humans, or at least, there is little good evidence for it, and sex differences in intelligence per se are trivial. The sex difference is in application. Human brain complexity and its antecedents mean that the domain of activities germane to preserving one’s genetic line is rather more elaborate than usual, and since females are the “selector” sex, those tasks, and selection for assiduous task-doing, fall upon the males.

There is no real sense in which human beings can “escape” natural selection, because natural selection is the reason behind everything that we are, including the desire (of some) to “overcome” natural selection, whatever that means. However, natural selection has also given us moral instincts and reasoning abilities which, combined with the technologies born mostly of male ingenuity, could allow us to divert evolutionary selection pressures in a way that could never happen without our technology. The crapshoot of genetic recombination, by the lights of human morality, is just that: a crapshoot. At some point, artificial gametogenesis could allow humans to become effective hermaphrodites, even if we still have the old equipment. CRISPR, and eventually full genome synthesis, could render natural recombination, and therefore sexual reproduction itself, obsolete. Childhood will increasingly resemble adulthood as we produce children of extremely superior intelligence and thus reduce the need for high parental investment. Male breadwinning roles will run into a brick wall of automation, or perhaps the cloning of the 99.999th-percentile most workaholic and intelligent workers. Female homemaking roles will (or already have) run into a brick wall of washing machines. As technology outpaces our obsolescent biological hardware, one seriously has to wonder: how much of the human intersexual dynamic, i.e. behavioural sexual dimorphism, is worth preserving? Maybe we could do with being more like the monomorphic crows.

Alternatively, perhaps one imagines a world of nearly infinite morphological freedom, where individuals can modify their own physiology and psychology with ease, unconstrained by sex, like character profiles in an RPG, and where sex and gender, insofar as they exist, amount to little more than fashion. One may dream.

On Agency and Accountability

Is it morally permissible for a 14-year-old to be enlisted in the military under any circumstance? The impulse this question elicits is one of disgust, founded on a historically and geographically local set of assumptions about moral agency and its relationship to age. The “true” existence of agency itself is contestable if taken to mean the free exercise of an internal will. In light of a determinist view of causality, it may be merely a legal heuristic – a means by which to differentiate the degrees of illusory freedom to act available to developed and developing brains. The human brain does not finish maturing until the mid-to-late 20s, yet one can safely assume that no one is prepared to have all their freedoms withheld until age 25, since no such standard exists anywhere. With that off the table as a viable standard, we are left with varying degrees of randomness, stupidity, and (often hollow) virtue-signalling.

It goes without saying that reason postcedes rather than precedes morals – humans have moral emotions, which they justify through consequentialist reasoning only if such an expectation is placed upon them. Hence, discourse of the kind that follows here is rare.

One may ask, “What is true of a 14-year-old which, if true of an 18-year-old, would render the 18-year-old unfit for military service?” Intelligence is an untenable response, because intelligence does not scale linearly with age, and there are (and have been) plenty of legally adult military personnel with IQs in the 80s. The problem might be resolvable if age were a measurable psychological trait, as intelligence is, rather than a numerical series of demographic cohorts, each containing large individual variation in mental profiles. The reason it is illegal for anyone with an IQ below 83 to be inducted into the US military is not even “moral,” by the way. It is that such people are useless to the military. The moral argument, if any, is retroactive. No one seems to care how hard it is for them, really. Perhaps the “child” is a sacred demographic category, one which, under current conditions, must be extolled as a nexus of antediluvian bliss and innocence. By contrast, even acknowledging the unintelligent as a group with distinct needs is met with indifference at best, or, more often, suspicion.

It is legal in the UK to join the armed forces at age 16. Few who object to this seem interested in learning how many of these 16-year-olds regret the decision in hindsight, which would seem a good test of how “impulsively” the choice is made. It is well known that the military offers a kind of “escape hatch” of civic duty for teenagers with little hope of succeeding elsewhere, and many of them surely wanted to join from younger ages. The military, then, removes them from the morass of indecision and wastefulness that they would otherwise carry with them. “Impulsivity” is hard to measure. Maybe it can be defined in qualitative terms: a tendency to make decisions with low future-orientation. But if 14-year-olds are blighted by that, many more adults are, especially among the stupid.

Note, relatedly, that the moral importance of emotions such as “regret” is not uniform. I argue that there are cases in which it can safely be called delusional or self-inflicted, as all emotions can be. In other cases, it simply does not feel significant enough to onlookers to be considered morally salient, as with the regrets of child actors about their profession once they become adults – something few care to acknowledge.

Even the knee-jerk harm-reduction case against the military has complications in the context of this argument, because in most years military deaths are few – the fatality rate is probably lower than in construction work or industrial fishing. Yet no one cares for the lowly construction worker, despite the fact that he has a 50% chance of having an IQ south of 92 and a comparatively alarming risk of dying in any given year, not just those in which there is an ongoing war. Sudden, high-density death is more morally weighty to the average person than slow, diffuse, “accidental” death.

The hard determinism of brain-states, combined with knowledge of how those states evolve with age, may have relevance to legal degrees of agentic “freedom.” If one compares the brain at different stages of development, the complexity and variety of its interactions with the world (“decisions”) at one stage may differ, on aggregate, from those at another, although exactly how morally salient the difference is will vary from case to case and may sometimes be doomed to remain a subjective judgement call.

The flip side of agency (or freedom) is accountability, a concept which was once as ubiquitous as that of “freedom” is today. The extent of brutal judicial punishment in premodern England was remarked upon by contemporaneous authors and looks absurd in retrospect. I suspect a large part of it was down to resource scarcity and technological deprivation, and the social tinderbox those conditions created: lawmakers may have felt they had no choice but to disincentivise antisocial behaviour in the clearest way possible, because whatever antisocials destroyed could not be rebuilt as quickly as it can be today. They took no chances on recidivism. Another example would be the harsh punishments for sex crimes, in a world swimming with sexually transmitted pathogens but lacking effective medicine. This thesis could be empirically examined: are harsh punishments more common in deprived regions? Is clemency more common among the privileged classes? Etc. It also seems to dovetail with the assertion that our modern obsession with rights and freedoms is a product of technologically generated luxury, without which the social order prioritises duties.

Old criminal justice may have been disproportionate and cruel. Nonetheless, if it were ever possible to quantify how well behaviour X at a given age predicts outcome Y at a later date, why deny agency, or accountability, in cases where it is indeed predictive? God knows who is qualified to make such calculations; probably no human, since humans are all preoccupied with sending virtue signals of endless freedom and protection. But, in principle, it ought not to be difficult to tell whether a child who murders is likely to do so again in adulthood, just as it is possible to build an algorithm that predicts recidivism within adulthood (a minimal sketch of the idea follows below). In which case, why not hang the little shit? (Comedic exaggeration, of course. I do not endorse the death penalty.) After all, many traits are stable throughout the lifespan. What is now called psychopathy is usually one of them. I have no “solution” to the problem of age-dependent social norms, nor much hope of a “science” of agency and accountability ever coming to pass. The relationship between age and agency is nevertheless a conundrum of interest to thinking people.
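As an aside on feasibility: “an algorithm that predicts recidivism” need not be anything exotic; in its barest form it is just a statistical classifier fit to past cases. The sketch below is purely illustrative – the feature names are invented and the data are synthetic; it is not any real or deployed system, only the minimal shape such a predictor takes.

```python
# Minimal, purely illustrative sketch of a recidivism-style predictor.
# Feature names and data are invented; this is not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical predictors: age at first offence, prior offence count,
# and a crude "impulsivity" score (all synthetic).
age_first_offence = rng.integers(10, 30, n)
prior_offences = rng.poisson(1.5, n)
impulsivity = rng.normal(0, 1, n)
X = np.column_stack([age_first_offence, prior_offences, impulsivity])

# Toy generative assumption: earlier onset, more priors and higher
# impulsivity raise the probability of reoffending.
logit = -1.0 - 0.1 * (age_first_offence - 18) + 0.5 * prior_offences + 0.8 * impulsivity
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The hard part, of course, is not the fitting but deciding which outcomes and features a society is willing to measure and act on.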

It is doubly unlikely that such a science will emerge as social norms from high-status societies, such as the West, spread across the planet memetically. Eventually it will get to the point where people cannot distinguish the signal from reality, and we shall all pretend that the move in this direction was “scientific,” and “progressive,” as with child labour laws. A less pernicious example of exactly this is the recent trend of Arabs turning away from religion. By far the biggest predictive factor in (ir)religiosity at the national level is IQ, and this change is not due to the Arabs’ having gained IQ points, so they are probably just copying what they see as their social betters, in the West.

The arbitrariness of it all came to mind again when I saw someone on Twitter being called a “true sociopath” and an “enabler of child-rape” because he had apparently endorsed lowering the age of sexual consent to 15. The same change was debated in Britain recently. Since the legal age is 16 here, presumably Britons are enablers of child-rape as far as the average American is concerned. It never occurs to them that the concept of the post-pubescent “child” may be highly socially malleable, and that no one can really agree on what it is, let alone what its rights or liabilities should be.

Such faux pas are ever-present, though, because extraordinarily few people have a picture of reality in their heads that integrates anything beyond the whimsy of the here and now. Many struggle to grasp how different public opinion on their political hobby-horse was as recently as 20 years ago, never mind any further back.

And that’s not counting the surprising number of stupid and ahistorical things which even political dissidents believe.

A World of Trauma – Civilizational Psychosadomasochism and Emptiness

According to Google’s vast textual corpora, there was nary an instance of the term “trauma,” or its distinctly psychiatric derivative “traumatized,” in written English prior to the 1880s. The first documented usage of “trauma” dates to the 1690s, at which point it referred to physical wounding only. Its “psychic wound” sense did not take off until the tail end of the 19th century, and that sense is now far more familiar to us than the original. Exactly what took root in the world between then and now? The standard narrative is that the medical profession became wiser, but what of the wisdom embedded in our species’ genetic history? Note that even most doctors and biomedical lab technicians know little of basic genetics or, one has to assume, of evolutionary reasoning. I recall being sneeringly told by one, on introducing her to the concept, that she was only interested in “proper science.” This is about when it set in that even many “grunt-work” scientists are basically morons. She certainly was.
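The corpus claim above is easy enough to check for oneself. Here is a minimal sketch, assuming a locally downloaded 1-gram file from the public Google Books Ngram dataset in the older four-column export format (tab-separated: ngram, year, match_count, volume_count); the filename is hypothetical.

```python
# Sketch: tally yearly counts of "trauma" from a Google Books 1-gram file.
# Assumes the older four-column TSV export, downloaded and decompressed
# locally; the path below is hypothetical.
from collections import defaultdict

counts = defaultdict(int)
with open("googlebooks-eng-1gram-trauma.tsv", encoding="utf-8") as f:
    for line in f:
        ngram, year, match_count, _volume_count = line.rstrip("\n").split("\t")
        if ngram.lower() == "trauma":  # ignore POS-tagged variants
            counts[int(year)] += int(match_count)

for year in sorted(counts):
    if year >= 1800:
        print(year, counts[year])
```

Normalising by total words per year (the dataset ships a totals file for this) gives the familiar near-flat line until the late 19th century, then the climb.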

Applying the principles of natural selection (i.e. evolutionary reasoning) to find the aetiology of disease tends to yield different answers from those that are now fashionable. In a 2000 paper, “Infectious Causation of Disease: An Evolutionary Perspective,” the authors compellingly argue that a huge number of supposedly mysterious illnesses are in fact caused by pathogens – bacteria or viruses. The argument is simple: any genetic endowment which essentially zeroes fitness (reproductive potential) can be maintained in a population’s gene pool only at the basal rate of errors, i.e. mutations, in the genetic code, with the apparently sole exception of heterozygote advantage, as in the protection against malaria. Thus, anything so destructive that rises above a certain threshold of prevalence should arouse suspicion that a pathogen is to blame (some rough numbers follow below). This would include schizophrenia, an alleged evolutionary “paradox” with a prevalence of ~0.5%, especially since, unlike “psychopathy,” schizophrenia has low twin-concordance, low heritability, and is discontinuous with normal personality. At present, direct evidence of the pathogen is scant, but that is to be expected: viruses are tricksy. No other explanation is plausible.
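To put rough numbers on the “basal rate of errors” point – my own back-of-the-envelope figures, not ones taken from the paper – standard mutation–selection balance looks like this:

```latex
% Back-of-the-envelope mutation-selection balance (illustrative values only).
% For a deleterious allele with per-locus mutation rate \mu and selection
% coefficient s, the equilibrium allele frequency \hat{q} is approximately:
\[
  \hat{q} \approx \frac{\mu}{s} \ \text{(dominant effect)},
  \qquad
  \hat{q} \approx \sqrt{\mu / s} \ \text{(recessive effect)}.
\]
% With s \approx 1 (fitness essentially zeroed) and a generous
% \mu \approx 10^{-5}, affected prevalence is roughly
% 2\hat{q} \approx 2 \times 10^{-5} (dominant) or
% \hat{q}^{2} = \mu / s \approx 10^{-5} (recessive):
% thousandths of a percent, orders of magnitude below ~0.5%.
```

Polygenic architectures and balancing selection complicate the arithmetic, but the basic point the paper leans on stands: per-locus mutation pressure alone cannot hold a fitness-zeroing variant anywhere near 0.5%.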

What, then, when one turns the evolutionary lens towards “trauma”? What is commonly called psychological trauma can helpfully be divided into two categories: non-chronic and chronic. The former is what most people would call distress. It is adaptive to have unpleasant memories of situations that could kill you or otherwise incur significant reproductive costs, which is why everyone feels this. It is good to have unpleasant memories of putting one’s hand on an electric fence for this reason. It is bad, and certainly not evolutionarily adaptive, for the memory to continually torture you for years after the fact. I have it on good authority that this does nothing to attract mates, for example.

In light of this, it becomes clearer what may be behind the apparent explosion of mental “traumas” in our psychiatry-obsessed world. One may observe, for instance, that there is no record of anything remotely resembling PTSD in the premodern world. It emerged in the 20th century, either as a result of new weapons inflicting new kinds of damage (brain injuries), or from psychiatrists’ egging people on, or both. If the received narrative about it were true, then all of Cambodia ought to have gone completely insane in recent times. It did not happen. Likewise with rape. One struggles to find any mention of long-term trauma from rape for most of human history. The ancients were not very chatty about it. Of course, they saw it as wrong, as is rather easy to do, but their notions about it were not ours. Rape does impose reproductive costs, but so does cuckoldry, and being cuckolded does not cause chronic trauma. Nor would claiming that it had done so to you do much for your social status. Sadly, exactly one person in existence has the balls to comment on this rationally. Many of these problems seem to originate from something more diffuse, something about the cultural zeitgeist of our age, rather than a particular field or bureaucracy.

It is generally agreed upon in the modern West that sexual activity before late adolescence, especially with older individuals, is liable to cause trauma of the chronic kind. This alone should give one pause, since “adolescence” is a linguistic abstraction with only very recent historical precedent, and many of the biopsychological processes conventionally attributed uniquely to it begin earlier and persist long after. The onset of stable, memorable sexual desire and ideation occurs at around age 10 (it was certainly present in me by age 11), commensurate with gonadarche, and is almost universally present by age 12-13. The reason these desires arise with gonadarche is simple: they exist to facilitate reproduction. It would make little biological sense, in any species other than humans, to experience sexual desire but also undergo some strange latency period of 1-8 years (depending on the country) during which any acting upon those desires causes inconsolable soul-destruction. Any time something seems completely unique to humans, one has to wonder whether it has something to do with uniquely human cultural phenomena such as taboos. It is even more obvious when one observes human cultures which lacked these taboos, e.g. Classical Greece. When they married their daughters off at age 13-14, they were concerned chiefly with whether the groom could provide her and her children with a stable living. But they were not concerned about soul-destruction. At least, I’m fairly sure of that. For the record: this is not an endorsement of lowering the age of consent. I am decidedly neutral on that question, but I do not believe Mexico’s answer is any less correct than California’s, or vice versa.

It is wrong to say that psychiatrists, or therapists, have a superpower of changing people’s phenotypes. This is impossible, as any such change they could impart would be genetically confounded: it is a genetically non-random sample of the population who are “successful” subjects of their interventions. So it seems fair to assume that a lot of mental health problems are explicable in this way rather than through straight-up iatrogenesis, and that their prevalence is inflated somewhat through media hype and social media shenanigans. However, an interesting question is: how evolutionarily novel a phenomenon is the field of psychiatry? Are our minds equipped to deal with it? Well, not everyone’s. It seems possible to confect illnesses out of thin air if you subject the right person to the right conditioning, as is the case with the probably purely iatrogenic “dissociative identity disorder.”

Masses of people these days shell out large chunks of their income on “therapy,” a form of psychiatric intervention which has shown itself to be of, at best, mixed efficacy. Many long-running randomised controlled trials of its effects turn up jack shit, which ought not to be shocking given what is known about the non-effects of education, extensively documented by Bryan Caplan and others: to durably alter someone, you have to change the brain in a rather dramatic way. Still lingering, though, is the question of whether therapy may in fact make matters worse. Many social commentators have taken notice of the way in which mental illness, especially “depression,” seems to be afforded a kind of bizarre social status in some circles, such as within university culture in Canada. More galling still, it is not clear whether “depression” of the garden variety is even a disorder; it may be an adaptation that evolved to ward people off hopeless pursuits. Status is a powerful motivator, so this weird grievance culture cannot help; but encouraging people to make their living from talking to the afflicted and consoling them with soothing words cannot be great either, since it is likely to induce the kind of institutional inertia on which the pointless continuance of America’s “drug war” is sometimes (correctly) blamed.

Legalising drugs and investing more energy into high-precision “super-drugs,” e.g. powerful mood-enrichers with no side effects, would do more for the true chronic depressives who have literally never known what it means to be happy – a malady probably induced by rare mutations, if it exists at all – than anything on offer today. Drugs are the only guaranteed way to do profound psychological re-engineering without gene-editing. It is not clear, though, whether the psychiatric industry as it currently exists would be happy to see such problems vanish.

The Nail in the Coffin

When I saw that JF Gariepy was releasing a book related to genetics, I assumed it would be another boring race/IQ/HBD volume that I would have little interest in (given the redundancy of the topic). However, it turns out that JF instead devotes his book, The Revolutionary Phenotype, to the subject of gene editing. In it, he argues against messing around with the technology, making the case that such modifications will lead to the end of our species. He claims, essentially, that the new and improved lifeforms resulting from this process will eventually replace humanity. For reasons not entirely clear to me, JF just seems to take for granted that such a development would be a bad thing. In fact, the entire point of the book becomes moot if the reader disagrees with JF’s premise. JF’s arguments for why gene editing may inevitably spell the end for humans may be astonishingly persuasive, airtight and what have you, but if one doesn’t think that the prospect of humans being replaced by a related, “superior” organism would be a negative outcome, then it’s merely an academic question. The merit of the arguments put forth doesn’t matter one way or the other. The people who will be most receptive to JF’s premise are those already vehemently opposed to gene editing and transhumanism on religious and moralistic grounds (“humans shouldn’t be ‘playing God,’” etc.). They don’t give a rat’s ass in a room full of cats about the scientific arguments for its being bad, except to the extent that such arguments can be used to reinforce their pre-existing religious beliefs (if JF had written a scientifically persuasive book in favor of gene editing, these same people would have dismissed it irrespective of the soundness of its arguments).

I have to admit that I find JF’s opposition to gene editing disappointing. It strikes me as similar to Jordan Peterson warning people of the dangers of identity politics. As Ryan Faulk has pointed out, Jordan Peterson’s audience is primarily white, and the likely effect of his crusade against identity politics will be to make white people (the least overtly ethnocentric group) less likely to engage in it, while other groups continue to use it to their advantage. Identity politics may change in form as new identities emerge, but it isn’t going away. Since non-white groups are unlikely to take Peterson’s advice and abandon group identity, Peterson ultimately serves to convince whites to further handicap themselves and become less ethnocentric than they already are.

What JF does here is strikingly similar. Gene editing is absolutely going to happen. The “genie” (so to speak) is out of the bottle. Even if nations don’t officially sanction it at the public level, there will be scientists who continue this research privately, and as a practical matter it will be unstoppable. JF’s own thesis backs up this assertion, since he argues that these newly concocted, revolutionary beings will replace us if they are created. Since scientists in Asia or who knows where will continue to move forward with gene editing, these beings will come to be sooner or later. Since JF’s audience is primarily AltRight and “pro-white” types and not rogue Asian scientists, the effect of this book will be to convince the AltRight to simply cede this bio-technological frontier to someone else, even though, as humans, we won’t be shielded from the effects of others embracing it anyway. Babies are going to be genetically modified. So we can either decide to be at the forefront and help direct this process toward something in our image, or sit passively as others enthusiastically explore this technology and render us irrelevant. The West has typically been at the forefront of technological progress, which is why it was so easily able to dominate large parts of the world, where natives (who could easily outbreed Europeans) held vastly superior numbers. Even with the most aggressive pro-natalist policies, Europeans are not going to outhump the third world. Why then would Europeans want to deprive themselves of one of the few tools which could offer them some kind of advantage?

Also, the idea that beings which result from experiments in gene-editing will lead to “our” extinction strikes me as a matter of interpretation. Sure, maybe technically such organisms would not be our direct progeny, but just because a baby didn’t pop out of some lady’s vagina does not mean it isn’t our descendant for all intents and purposes. If the result of gene editing is that something is created which improves upon and replaces humanity, I don’t see what the problem is, since these supposedly “superior” beings would still be a product of our creation. If not literally, then figuratively they would be our children… (and sometimes children do grow up to rebel and take our place). That seems to me a more remarkable achievement than two overweight, reality-TV-watching human beasts taking a trip to bonetown and making some disgusting babies. Any idiot with functioning reproductive organs can do that.

This is of course to say nothing of the misanthropic objections to JF’s premise. Anyone who has ever worked retail on Black Friday probably wouldn’t clutch their pearls at the idea of humanity going extinct and being replaced by something better. A few hours overhearing people’s conversations on public transit, or an afternoon reading the hundreds of thousands of replies to a typical Ariana Grande tweet, and I might volunteer to push the button myself.

One thing JF’s book has managed to do is act as the proverbial “nail in the coffin” of my own relationship with AltRight ideas. JF’s faction was probably one of the few remaining that I could still relate to on any level. His laid-back persona, high-profile guest lineup, cogent debate style, and Pink Panther-esque delivery make for what is, in my mind, probably the only substantive and watchable AltRight program. There are no compelling factions or attractive political movements left to be enthusiastic about. People like me are withdrawing and moving toward an abstract, post-political future. I, for one, am ready for whatever comes next.


The Drab Gab

Gab needs to stop marketing itself as a right-leaning haven for nutjobs. It should present itself as a fun, entertaining social media site that just so happens not to ban people as readily as other sites. One of the things I really dislike about Gab is how difficult it is to find people with interests or even opinions outside the realm of basic-bitch AltRight/AltLite/MAGA politics. Ideally, I want a place where I can view entertaining content and discuss topics earnestly, but one which doesn’t punish people for PC indiscretions. Sites like Gab should aim to attract users with apolitical entertainment, with the idea that people will go there for that but have to tolerate some uncomfortable political speech as the price – just as when people watch football or some funny cooking vid on youtube, they have to sit through the annoying political diatribe or cheesy social justice commercial. Kind of like how youtube has its own shows: Gab needs exclusive, (mostly) non-political content which will draw in ordinary people. The “exclusive” streams and shows available there now are all just Alex Jones-style, “MAGA”-oriented material. They need things like cooking shows, makeup tutorials and animated series. As it currently stands, Gab’s appeal seems to be along the lines of “Come to our site where you can discuss ‘pizzagate,’ ‘false flags’ and other wild conspiracy theories, free of censorship.” It’s no surprise what kind of demographic that ultimately attracts. As a result, discussion on Gab is dominated by insufferable lunatics and surly cranks. Simply saying “we’re a free speech site and everyone is welcome” isn’t enough. You have to actually offer the kind of content that people from a variety of ideological, non-ideological and social spheres will be interested in.

Of course, I don’t believe Gab is to blame for the fact that one of its users (allegedly a man named Robert Bowers) committed the shooting at a Pittsburgh synagogue. A social media site or forum can’t be held responsible for the offline behavior of its users. It simply isn’t their responsibility. There are too many crazy people out there. There have been crimes and violent attacks committed by users of every major social media site.

However, what is the point of suspending his (or any other perpetrator’s) account after the fact? Just leave it up; otherwise it looks like you’re trying to conceal what he posted to avoid damage to your reputation. There’s no point in destroying the public record of someone’s posts just because they happened to commit a crime. Twitter and FB do the same thing, and it’s annoying. People are interested in reading the old posts on these kinds of accounts because they offer insight into the person’s mindset and motivations. I’d prefer to read them myself and draw my own conclusions rather than take the word of some media outlet’s second- or third-hand interpretation.

During a particularly censorious period on Twitter a few years ago, I contemplated using a spare domain name I had obtained to build a small-scale social media site called “Wand” (which was intended to fill the void Gab has since occupied). Ultimately, I decided the potential legal liabilities would be a hassle I just wasn’t equipped to deal with. Once you make the decision to start hosting other people’s edgy content and images on your site, there’s a hell of a lot of shit that can go wrong. Maybe I’m just a tad too misanthropic to be willing to “take one for the team.” I just don’t care about these issues enough.

I’m grateful that Gab exists, but a site which seems designed specifically to attract pond scum has built-in experiential limitations.

Brandon Adamson is the author of Skytrain to Nowhere