“Where Microaggressions Come From”: Some (Gently Triggering) Reflections

Just finished reading some extracts of Campbell and Manning’s 2014 paper “Microaggression and Moral Cultures”, by way of Jonathan Haidt’s website.

Haidt, who is best known for penning The Righteous Mind and The Coddling of the American Mind, and most recently for helming initiatives aimed at supporting ideological diversity on college campuses, finds much to praise in Campbell and Manning’s piece, which neatly partitions cultures into those based on “Honor”, “Dignity”, and (most recently) “Victimhood”.

This three-part taxonomy has since become somewhat popular among those seeking to understand the strange expansion of political correctness in recent years. To summarize: “Honor” cultures depend on the individual’s ability to react decisively to even minor personal offenses in order to avoid personal shame, while “Dignity” cultures instead prize self-restraint and informal, self-reliant mediation, with appeal to official systems of justice as a last resort. “Victimhood” cultures, by contrast, are a new modality in which the emphasis is on maximizing one’s appearance of grievance and helplessness from the get-go, in hopes of immediately attracting a powerful authority to punish the (alleged) source of the grievance.

In some ways, even if it is over-simple and even if “Victimhood culture” is really anything but new (see Karpman Drama Triangle), it’s tempting to accept this taxonomy as a helpful description of a disturbing trend of emotional ultra-fragility and accusation-proneness in the younger generations. For my part, it certainly fits some of my own encounters with the new, ostentatious “victimhood”—such as a very recent occasion when one of my table-mates complained of feeling “triggered” three or four times in the span of several minutes, mainly whenever environmental problems or possible differences between the sexes were brought up. (It is interesting, by the way, to experience firsthand how such “triggering” claims actually function as threats.)

But in another sense these exercises in taxonomy, with their air of studious neutrality, are themselves symptomatic of a much wider problem, namely the wholesale relativism and quietism that have come to permeate the intellectual, and particularly the academic, sensibility of the times—and which arguably enable the rise of Victimhood culture in the first place. It is as if the very people who would clear away the phenomenon of “microaggression” are themselves deathly afraid of being accused of—microaggression!

For instance, given the kind of world-creating magic potency that is nowadays vested in the word “culture”, the very packaging of Victimhood as a “culture” could easily legitimize or immunize it. After all, as long as one agrees (as Campbell and Manning seemingly do) that different “cultures” are incommensurable, separate realities, “lived experiences” with inherently equal validity, there can really be no justification for opposing the rise of a “Victimhood culture”… or any other type of “culture”, for that matter. Under the relativist modus operandi, any criticism of Victimhood Culture may easily be chalked up to chauvinism, personal distaste, misunderstanding, outright malice, and so forth—and so itself become fodder for more grievance.

Absurd as this all may sound, one can already find places in Haidt’s comments section where this same reflexive, self-paralytic relativism, under the guise of sober neutrality, wells up. One does not have to look long and hard to find an attitude along the lines of: “who are we to judge Victimhood Culture, since we are ourselves soaked in/tainted by the assumptions of Dignity Culture? Let us instead non-judgmentally study Victimhood Culture as the fascinating human artifact it is!”

So unless there is the willingness to take off our relativist-anthropologists’ hats (& kid-gloves) and say outright that a “Victimhood Culture” is indeed foolish, destructive, and yes, objectively bad, there is a danger in today’s climate that well-meaning discussions of this sort will boomerang and end up buttressing and “dignifying” Victimhood Culture as culture… perhaps up to the point of cordoning it off as a marvelous new object of study, or of deeding it special protections and privileges. This would be the very opposite of what is needed, which is, of course, to actively oppose and then dispose of it.

Laissez-Con

The alliance between conservatives and laissez-faire advocates is a fundamentally self-destructive one for conservatives, simply because free-market capitalism is, in fact, the most radically anti-conservative social force at work in the world today, short of perhaps communism or wokesterism (though note how seamlessly these increasingly mesh with capitalist structures). This is basically why the program of unfettered capitalism is called “economic liberalism”, not “economic conservatism”.

Capitalism, in its most unrestrained form, means the establishment of material enrichment and self-centered profit optimization as the value above all other values. Those “other values”, when not purposefully eliminated, are left adrift in the de-emphasized non-economic areas of society, where they gradually cool, lose influence and even dissipate.

Even granting that certain moral or pseudo-moral elements are deliberately preserved by capitalism—such as “work ethic”, the ability to cooperate in a team, obedience to a superior, or the importance of keeping one’s agreements with exactitude—no sensible person can believe that these are more than remnants of the morality that previously held sway, or that they would seem particularly “moral” or “traditional” except by comparison with the few even more degenerative social trends extant, such as, again, communism/wokesterism.

Capitalism is thus, at heart, an almost nihilistic force, as it aggressively attacks and destroys all prior, more traditional, more nuanced aspects of morality, meaning, and polity and replaces them with the pursuit of personal profit and material productivity. “All that is solid melts into air”, Marx infamously said. He was referring to the universal corrosive effect, not yet of communism, but of capitalism upon all established forms and beliefs in society—an effect that was already highly evident when he wrote in the mid-19th century.

So let us make no mistake: free-market capitalism, almost as much as “social justice”, is through and through a progressive project, committed to reinventing the world on ever more universalist, ever more interchangeable, and ever more materialistic lines, at the expense of all social, emotional, and spiritual aspects of life that do not quantitatively enhance profit or throughput.

For “conservatives” or “values voters” to think, then—or ever to have thought—that “free markets” would be their ally in the long run shows only that either 1) they have been unpardonably, unbelievably deluded; 2) the “inevitable arc of history” arguments typical of progressivism had already seeped so thoroughly into their subconscious as to make them throw in with a “lesser of two progressive evils” rather than an actual good; or 3) secretly, “conservatives” have, for roughly the past half-century at least, really been little more than progressive-nihilists-in-disguise.

Were capitalism and markets viewed–as Tucker Carlson proposed in a recent, surprisingly controversial monologue–as tools, rather than ends-in-themselves, it might be possible to maintain the perspective and balance necessary to couple material productivity with healthy polities possessed of vigorous boundaries and strong, distinct cultures and traditions, coexisting more or less in peace and mutual respect—aided, but not defined, by trade. But once capitalism had made the jump from being an instrument of gross utility—ultimately second or third in importance in the life of nations and peoples—to being the universalizing super-weapon in a pseudo-religious mission of globalist homogenization, the die had been cast. And we have our dear “conservatives” to thank for an enormous portion of that.

Political Awokeness: The Next Phase of Digital Population Control

Many on the left (and also some on the center-right) are lately penning critiques of the disturbing phenomenon of “surveillance capitalism”—for example here and here.

Many of these critiques are perceptive, very well-written and much-needed. But I cannot help wondering how many of these brave and principled souls would still demur at the involuntary “moralization” of our surreptitiously collected personal data, if the algorithmic ranking of citizens by credit or criminal scores, currently at issue, were instead replaced or even simply supplemented… by measures of individual wokeness…?

One has only to imagine something like the already tremendously spooky mylife.com—which features a “reputation” gauge that literally judges for you, on the basis of criminal and financial data, whether a given citizen-of-interest is “good” or “bad”—but with the addition of an extra weighting factor. Let us call this the “political-awokeness” factor (or some other euphemism to taste; it doesn’t matter).

This “political-awokeness” factor would collate all the person’s available correspondence, public statements, social media, and lifestyle/purchasing decisions, subject it to an opaque (and probably poorly-validated) machine-learning rubric, and return a value indicating the person’s degree of concordance with and advocacy for leftist-SJW orthodoxies (or, in the event of a Fascist instead of Marxist takeover, rightist-racialist orthodoxies; technology is quite flexible and indifferent to such fine points of ideology).
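
To see how little machinery such a scorer actually requires, here is a deliberately naive, purely hypothetical sketch in Python. Everything in it (the lexicon, the weights, the function name) is invented for illustration; the opaque machine-learning rubric imagined above is stood in for by a hand-built keyword list, which if anything overstates how auditable the real thing would be.

```python
# Purely hypothetical sketch of a "political-awokeness" scorer.
# The lexicon and weights are invented; a real system would presumably
# use an opaque machine-learning model, not an auditable keyword list.

import re
from collections import Counter

# Hypothetical lexicon: term -> weight signalling concordance with the
# reigning orthodoxy (positive) or deviation from it (negative).
ORTHODOXY_LEXICON = {
    "equity": 2.0,
    "problematic": 1.5,
    "privilege": 1.5,
    "liberty": -1.0,
    "tradition": -1.5,
}

def awokeness_score(documents):
    """Collate a person's collected text and return a single number.

    Tokenize, count, weight, normalize: the score that emerges carries
    an air of precision with no validated meaning behind it.
    """
    counts = Counter()
    total_tokens = 0
    for doc in documents:
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(tokens)
        total_tokens += len(tokens)
    if total_tokens == 0:
        return 0.0
    raw = sum(w * counts[term] for term, w in ORTHODOXY_LEXICON.items())
    return raw / total_tokens  # per-token, so mere verbosity doesn't inflate it

if __name__ == "__main__":
    dossier = [
        "We must address privilege and demand equity in every institution.",
        "Some still cling to liberty and tradition.",
    ]
    print(f"awokeness score: {awokeness_score(dossier):+.3f}")
```

The asymmetry is the point of the sketch: the subject would see neither the lexicon, nor the weights, nor the resulting score.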

How many of those critics of “surveillance capitalism”, who decry most articulately and justly the digital invasion and ranking of our lives and souls in the name of capital, would nonetheless be perfectly happy to see citizens ranked by “political-awokeness”? I cannot say with certainty, but we will soon know: for the experiment is almost surely underway already.

Such a scale based on political amenability is no more far-fetched than one based on finances, criminal records, or any other personal data; such things have been in the works for a long, long time.

We enjoy pointing at the Chinese “social credit score” and feeling a flush of moral pride (or schadenfreude) that we in the West, at least, still appreciate freedom and privacy and due process before judgment(!)

But in fact, even if very slightly behind, Western organizations are hardly less obsessed than Chinese ones with developing such tech for themselves—indeed, they are largely in sync with China on this.

To take just one example, consider the “incivility detector”, announced several months ago, which combs the Internet for signs of rudeness or even, perish the thought, outright meanness. Or, for a more thorough sampling, we might want to consult such scholarly tomes as “Social, Cultural, and Behavioral Modeling”, a periodic collection of research proceedings whose recent annals include such juicy titles as “Detecting Betrayers in Online Environments Using Active Indicators”; “Learning Latent Representations of Users Using Their Social-Media Activities”; “Multi-Agent Accumulator-Based Decision-Making Model of Incivility (MADI)”; “Exploring Fake News and Satirical Responses During the Black Panther Movie Event”; “#metoo Through the Lens of Social Media”; and my own totalitarian favorite, “Fine-Scale Prediction of People’s Home Location Using Social Media Footprints”.

Interestingly, these investigations are nearly always approached from a leftist or social-justiciar perspective, not a financially-focused “capitalist”—let alone rightist—one. (Surely, these authors’ political-awokeness scores must be astronomical!)

* * *

Glancing over the deluge of such developments, which seem to meet less and less resistance even as their flow becomes greater and greater, it is hard not to suspect that we are actually approaching the point where it will be completely useless to refer to things as “Orwellian”—partly because, due to sweeping intellectual illiteracy, no one will really know who Orwell was or grasp what point he was even making, but mostly just because everything will be Orwellian. There will simply be nothing meaningful to compare Orwell’s forecasts against; at that point, the word will have been wholly exhausted of shock-value.

But even so, one has to ask: why such a rush on the part of elites around the globe to out-Orwell themselves? Are there simply no other technological frontiers left to fight on?

This last is a distinct possibility, actually, as technological stagnation is far further advanced (pardon the mixed metaphor) than is generally discussed or admitted. Still, it’s almost as if the elite are anticipating a period of exceptionally hard times not far ahead, in which it will be necessary to deploy much stronger and more innovative means of control and persuasion to keep the population from eating each other—and, of course, the elites themselves.

“Just Animals”

My thoughts keep returning to an exchange I recently had online with someone who, almost from the outset of the discussion, angrily declared:

“We’re all just animals; humans are animals no different from the rest of the animal kingdom; the only difference is that we stupidly think we are better and act all conceited. If we weren’t just animals, we would behave better to each other and seek peace and quiet…” etc., etc.

Sensing a certain heatedness in the above, I first thought of saying something flippant to lighten the tension. But suddenly I felt the view expressed was really so grim and dangerous that it would not help to mince words. So I instead said what I really felt:

“I think this is an overconfidently reductionist and also highly fashionable view, one I don’t think I can share.”

Interestingly, the person gave no arguments, but simply ended the conversation at once, and I have not heard from them since. (Perhaps I was too strong, but then the immediate rejection and silencing of those who do not affirm one’s beliefs is also, as we have seen all too clearly and all too many times, quite standard procedure now.)

Really what my sometime interlocutor does not see—because she is too busy congratulating herself for quoting scientistic bromides and therefore, by a sort of crackpot modus ponens typical of the lumpen-intellectuals of today, being “smart”—is that the view she expressed is monstrous, not only because it allows one to degrade humans without limit, but because it actually allows one, as a first step, to degrade animals without limit.

The slip, I believe, was in her saying animals are “just animals”—meaning biochemical machines with no “higher” potentials or capacities. This is, after all, pop-scientistic boilerplate now: all life is merely “moist robots”.

But if one feels no moral pang in taking apart a vacuum cleaner or smashing a TV with a hammer, why, if an animal is “just” an animal, should one feel any compunction about doing the same to it? And thus, if humans are no better, to a human being…?

The implications of the whole view, it is not too hard to see, are actually horrifying: they admit no limit of brutality, for they deny all moral basis for distinguishing brutality from taking apart a watch.

In fact the situation was even worse than this, for my friend’s tone betrayed an actual hatred of the human, a desire to place it not equal to, but below the animal. Why? Most likely, as she hinted, as punishment for humanity’s not “behaving better to each other”, and so failing to live up to those selfsame “higher” potentials which she loudly denies–and ultimately, for disappointing her.

If this interpretation is right, we see here an extraordinary phenomenon–out of a reflexive spite or a personal sense of disappointment, she has committed to deny the existence of these potentials—in order to get back at them!

Here is the irony: in their need for revenge for personal failings and slights, the progressive-minded of today must admit, albeit backhandedly, to the very things they are driven to deny—much as the postmodernist must concede the existence of truth in order to deny it, when he says, “truly, there is no truth”!

As with all nihilistic drives built around rage, revenge and resentment, we find self-negation at the core of this kind of “progressive enlightenment”. For is not my friend herself a human? What are we to make of this conflict? Does she then yearn actually to be lesser than what she is? Would she like, perhaps, even to be degraded in some way, perhaps even be forced to be a “moist robot”–all to relieve her of this pain of being human?

But self-contradiction inevitably breeds restlessness; and so, as long as it is never resolved at its source, it can feed on itself indefinitely, continually expanding the perimeter of its grievances, and continually degrading that which it claims to restore or heal.

This very line of thought might provide, in fact, a most useful definition of evil: Evil is that which begins by promising everything and ends by sparing nothing.

Flynn’s Retreat and Academic Idiocracy

I recently came across some remarkable news: the Flynn effect, the mysterious trend of ever-rising IQs that was documented through most of the early and middle 20th century and has been the source of much technocratic/laissez-faire optimism, appears to have actually begun reversing in many countries. We are, it seems, now measurably becoming “dumbed down”. This is claimed to be due to “environmental factors”.

My own experience may be relevant here as far as these intellectual “environmental factors” go, for not too long ago I was involved with proctoring a statistics-based course, focusing on applications in public health and medical research. This was graduate level, at a pretty high-ranking research university.

Here are some of the interesting features of how the course was designed:

1) extra-credit quizzes, worth 5% of the total course grade;

2) extra credit questions on both the midterm and final exam, together totaling an entire grade-point (B to A);

3) lowest homework quiz score is dropped;

4) most exam questions do not require showing any of one’s calculations;

5) finally, most amazingly, an extra-credit “make-up” exam, where you get to “redo” questions you got wrong on an exam for credit (I helped get this particular foolishness blocked at least, and got a good dose of student flak for it).
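
Before the results, a rough bit of arithmetic shows how items 1) and 2) alone compound. The raw score below is invented for illustration; the bonus values follow the list above, reading the “entire grade-point” as roughly ten percentage points:

```python
# Back-of-the-envelope sketch of how the extra credit compounds.
# raw_score is a hypothetical student; the two bonuses follow items 1)
# and 2) above, with the "entire grade-point" read as ~10 points.

raw_score = 0.88   # a solid B+ on merit (invented for illustration)
quiz_extra = 0.05  # item 1): extra-credit quizzes, 5% of the course grade
exam_extra = 0.10  # item 2): midterm + final extra credit, about one grade-point

final = raw_score + quiz_extra + exam_extra
print(f"final course score: {final:.0%}")  # 103%: above 100%, before item 5) even enters
```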

Final result, across all students and even without 5):

• about four-fifths got an A or A- (with about half of these scoring above 100%);

• one-fifth got some kind of B;

• one student got an F (because they didn’t even show up for the last two exams).

This is now considered “successful teaching”; indeed the professor who designed the course and rubric and gave the lectures was cited for “outstanding contributions” to teaching.

Of course the real shock, after larding the course with this much extra credit and other fudges, is that anyone still got less than an A. In reality, belying the grades, I would say perhaps a bit under half the students understood the material at a functional level by the end. However, one quickly discovers that:

• If a student gets anything less than an A-, they will complain because this stops them from doing “capstone research”.

• If a student gets anything less than a B-, they will complain because, in a misguided attempt to battle grade inflation, programs have moved towards making B- the minimum passing grade–thus pressuring instructors to simply inflate their grades even more.

These students will go on, one assumes, to fairly responsible positions in management, tech, and perhaps clinical research. We already are beginning to see how that works out. But if the university is just a business and students are just customers, who are we to deny them what they paid for?

Degree mills, everybody: it’s what the People (& college administrators) have spoken for! If IQs are indeed sliding, this kind of inflation of coursework will be partly to blame, yet also increasingly demanded—thus completing the vicious circle.

Minitrue, 40 Years Delayed

We are now seeing an accelerating rollout of censorship and high-precision thought-management across the most dominant services of the Internet. In a recent video, “Dilbert” creator Scott Adams quite chillingly explains that there are countless persons and subjects that one can no longer even name in a YouTube video without having the video’s comments blocked or it being automatically “demonetized” (the resemblance of this word to “demonized” is, surely, a blameless if wondrous coincidence).

Moreover, considering that social media 1) has more than enough following and influence within the electorate to swing election results on a national scale, 2) is now equipped with more powerful persuasion-managing algorithms than ever before, and 3) is in the hands of conglomerates whose leadership has strongly apparent (leftward) political biases, Adams comes to the stunning but plausible conclusion that the free democratic process in the United States has likely already completely ceased to exist.

Henceforth, we and our elections are to be largely pawns of the censoring, content-micromanaging conglomerates, suggests Adams—raw behaviorist material to be guided, instructed, and shaped not only for profit but for intellectual and political hygiene. And for the most part, since the whole thing is proprietary, we will not even be able to recognize when it is being done.

As on YouTube, a spate of “deplatformings” of high-profile but controversial figures has occurred on Twitter and Facebook in a short span of time, giving the impression of a coordinated crackdown. Such bans are usually summary and Kafkaesque, with at most a vague explanation along the lines of having “violated content guidelines”, and no process of appeal.

Those banned range from absurdist provocateurs to real alt-right ideologues to tellers of what used to be called “offensive”, “off-color” or “distasteful” jokes. But from what data is available, there is a clear pattern: the censorship appears to be overwhelmingly biased against right-wing personalities and opinions.

On the left, meanwhile, even advocates for violent groups such as Antifa remain largely untouched; in an interesting twist, a researcher who found evidence that many Antifa leaders are being actively courted by journalists was himself recently suspended by Twitter, with no reasons given.

In this new climate of “repressive tolerance”, it seems, only rightists can be “extreme”—a fine touch of Marcusean theory in action.

One can debate the merits of the banned individuals’ ideas and contributions, many of which are mean-spirited and a few grotesque, but the momentum points unmistakably towards something larger, for which these are but outliers and test cases.

But what is that larger thing? Nothing less than consolidated control of thought and expression and political will, all under the auspices of preventing “hate speech”—a dangerously nebulous concept all too easily remodeled, and now actually being remodeled, into the expansively Orwellian demand that nothing upsetting or offensive be said concerning anyone We like.

It has long been a truism that the way we handle offensive speech is a kind of bellwether for the fate of all speech. But it now seems we are very quickly leaving such early-modern sentimentality behind.

Evelyn Hall’s famously idealistic cry, “I disapprove of what you say, but I will defend to the death your right to say it”, or the left-leaning ACLU’s defense of the right even of neo-Nazis to demonstrate and speak, gives testimony to an era when upholding the principles of free speech and assembly to the very letter was understood to be far preferable to uncorking the genie of centralized censorship (or now, “deplatforming”), even if in order to score victories against the most repellent ideology. This was also an era that had the courage and clarity to recognize that these two approaches are, in the end, mutually exclusive.

But for all the dangers we now see to free thought and its expression, here is the deeper calamity: that we allowed our national political life to become so pitifully dependent on the Internet and on these three companies, that “freedom of speech” and “participation in the political commons” are now viewed as functionally indistinguishable from “access to social media platforms”.

* * *

Let me say it straight up: the Internet is now mostly an unmitigated disaster, exceeded only by the lemming-like enabling behavior with which billions have greeted it. Driven by a long-inculcated and ingenuous faith in technology as a moral good, these billions walked right up to the Internet somewhere in the late ‘aughts and, without understanding or caring quite what they were doing, as quickly as they could handed over nearly our entire social commons and civic life to what just so happens to be the most atomizing, delusion-breeding, monopolistic, emotionally toxic and conformity-inducing technology ever created.

And so now a huge and forlorn midsection of this country and others finds itself not only quite addicted to this digital crack (as was intended all along), but largely unable to remember or care how older generations ever made friends, formed communities, carried out politics, or pursued ideas and knowledge without it (even though every indication is that we did all of these things considerably better pre-Internet).

We have lost the physical world; we have lost our own efficacy. But this is not all. For when it became broadly apparent that this Internet, this beloved new manifestation of immersive techno-escapism, might in fact be a cult generator that continually buries truth and amity under foetid tides of rhetorical sludge (leading, to a much greater extent than the alleged Russian collusion ever did, to the election of our current president), our wise and virtuous elites somehow concluded that therefore, massive social-media monopolies such as Facebook, Twitter and Google should correct (read, censor) the Internet as they pleased—thereby rescuing us all from our deplorable selves!

The upshot is that the Internet is rapidly transitioning from a place of mob-based mudslinging, one at least limited by being chaotic and decentralized, to a monolithic system of hyper-efficient eavesdropping and technocratically-curated falsehood. Now there’s a capital improvement!

The promise of the Internet as a tool of “liberation” is and always was a fool’s promise, as a few writers farsightedly grasped. While it can serve to enable dissident organization, its overall course tends and always has tended, by its very nature, towards the dissolving of real-world social bonds in favor of the consolidation of remote social control. Indeed, real-world social bonds can be seen as a kind of coarse-grained and dangerously unpredictable rival to the power of the fine-grained algorithmic panjandrums, hence to be replaced by atomized conformity with all dispatch.

* * *

If it were the case that the censorship and thought-control were largely confined to the social media realm—and I have already made no bones about the latter’s perniciousness—then the situation would be less concerning. It might even be salutary, by driving more people away from these stultifying and addictive media and back into the far healthier, if now quixotic channels of original-source research, long-form discourse, and in-person interaction.

Alas, the evidence has been coming in for years now that this great “closing of the online mind” is not just a matter of social media or even of the Internet generally, but is instead rapidly developing at the far more worrying level of government-abetted censorship, and in countries with a long tradition of free speech and liberalism, no less: in Australia, whistleblowers can now face life sentences and whole news networks can be raided—with carte blanche to “add, copy, delete or alter” information—with scarcely a shrug. In the UK, the government has now banned any advertisements containing gender stereotypes deemed “harmful” (like housewives doing chores, or masculine men), and police appear to be actually criminalizing opinion.

This real-life Minitrue carefully monitors social media accounts for any signs of thoughtcrime, ready to pounce with the threat of actual imprisonment. It even declares, portentously: “we take all reports of malicious communication seriously”. All reports!

Speech on religious matters is far from immune either. In many parts of the European Union, criticism of Islam and its founder in ways that show it in an unflattering light can now lead to outright legal penalties. Even in daily life, even limited face-to-face disagreement with new orthodoxies on gender identity and gay marriage, to name the biggest examples, has rapidly become potentially career- or friendship-ending.

* * *

The First Amendment buys time in the USA against the governmental censorship seen elsewhere, by spelling out “free speech” in big capital letters, so to speak, on the doorstep of the nation. But if we take seriously the warnings of Snowden and other whistleblowers—now already 5 years behind the times and the tech—eventually the First Amendment will be swamped, or simply redefined out of existence. For the Amendment, already a somewhat impressionistic and porous barrier by virtue of its very generality and simplicity, is now charged with holding back two huge floods from opposite directions: from the “private” sphere of the social media masterminds as well as from the “public” sphere of government.

This double attack is a consequence, not only of the oligarchic-fascistic merging between government and corporate power that has been underway for decades in the USA, but also of the fact that the ranks of new government officials ultimately flow from academia, which since the 1960s has been increasingly dominated by admirers of just such Orwellian doublespeak doctrines as “repressive tolerance”.

(If it seems unfair to describe only the left as Orwellian here, bear in mind, firstly, that institutional and philosophical legitimacy is overwhelmingly being accorded to even quite extreme leftist theories over rightist ones, while the latter are, as we have seen, disproportionately censored; and secondly, that Orwell chose to name the dominant political party in 1984 “Ingsoc”, or English Socialism, for very definite and pressing historical reasons.)

* * *

Speech has power, both to bind and to disintegrate. This always has been so; it is why rulers have sought continually to restrict it, to varying degrees. And with this power of speech has inevitably come the prospect of causing emotional distress or embarrassment.

But this risk has always been thought a very tolerable price to pay, because the project of free thought, accountability of power to truth, and ultimately individuality itself depends profoundly on the ability of individuals to independently call things as they see them, even at the risk that they may be in error or end up disliked.

This applies not just to the subjective-narcissistic “my truth”, now so lauded under the fork-tongued modern sense of “inclusivity” but, much more importantly, to actual truth and reality—to that which, as Philip K. Dick put it, “when you stop believing in it, doesn’t go away”.

Instead, the days of “sticks and stones will break my bones but words will never harm me” have given ground to the “word that wounds”, and wounds endlessly; with the license of subjectivism, insult and offense have been smuggled into the domain of “violence”. Caught in a great and growing mire of pain, obsession, and resentment, the whole project of the open society founders. The remedy? Suppression of all strong and sincere feeling, unless backed by a sufficiently powerful identity group (or corporation).

* * *

What will happen now? The same thing that always has to happen when power attempts to commandeer history, thought and opinion, but a determined minority is unwilling to accept such: alternative, covert or semi-covert channels of information and organization will have to percolate and spread. Some of these already exist; for instance smaller, anonymized, encrypted, or more libertarian communication methods such as Telegram or the “Dark Web” may be options. Blogs remain relatively untouched, but there are indications that WordPress is beginning to test the censorship waters as well.

But the problem remains that these are all still online tools, ultimately dependent upon gigantic server infrastructures maintainable only by governments and large corporations, and hence susceptible to the strange collectivist-yet-top-down control inherent in any highly networked yet centralized information system—and the flexing of those muscles of control is exactly what is at issue.

The only likely solution is to collectively, drastically cut our usage of the Internet and, as much and as soon as possible, prize it from its stranglehold over the social metabolism of our lives, our thoughts, and our nation.

Perhaps there will begin to be a trade in thumb-drives, or other physical media, or even—Heaven forbid, what atavistic blasphemy! Dare one even say it?—actual meetings and interactions of groups of real people, in actual places, to discuss matters that concern them and form actual interpersonal bonds and initiatives aimed towards the addressing of those matters. (Indeed, I suspect that we will soon see put to the test the extent to which that other essential component of the First Amendment—freedom of association—still lives.)

In past ages, freedom was won and maintained through the vigorous pursuit of the written word, and through direct personal meeting of actual humans, not tweets or flash videos. If we are not able to find the will to recover something of that tradition and that skill, which has been essentially left for dead in the lust for false progress, then we will soon find ourselves living under genuinely totalitarian conditions. And Orwell will then prove (rather as I think Malthus will, but that is another story) to have been not so much wrong as late.

Unfortunately, this may already be inevitable. Aside from the widespread addiction to social media that has already rendered it compulsory to much of the adult and most of the youth population, the greatest risk is that the vast majority simply will not see or care enough to extricate themselves. For totalitarianism is like swimming in a fast-moving current: so long as one obeys it, one does not even feel that it is there.

A Depressing Ambiguity

I recently read an Atlantic article on the fiasco surrounding 5-HTTLPR, as well as psychiatrist Scott Alexander’s blog post on the topic, and am still straining to grasp all the implications.

To summarize, it now appears that over the last 25 years, anywhere from several hundred to over a thousand scientific papers were published, in reputable, peer-reviewed journals, based around presumed “genes for depression” (5-HTTLPR being perhaps the best-known) that are now thought to be completely bogus. The dramatic and abrupt discrediting of these genetic linkages is mostly due to an immense, 600,000-subject study by Border et al., released just last March, which investigated a collection of the most famous “depression genes” and came away finding no statistical support for any of them.

Alexander’s discussion of this, the scientific equivalent of a 500-car pileup, is especially punchy and concentrated; he is, understandably, much shaken that such a huge body of seemingly reputable confirmatory research on 5-HTTLPR could have turned out, apparently, to be pure phantasm. He throws the absurdity of the whole situation—where perhaps hundreds of academic research groups all managed to convince each other for decades of the rock-solid validity of a host of nonexistent effects—into sharp relief, using a barrage of vivid analogies like the following:

“…This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.”

Yet even this evocative comparison doesn’t quite capture the bizarreness of the “depression genes” situation, for what we see here is less like one explorer, than an entire corps of hundreds of explorers, all going to the Orient and all coming back claiming they saw the same collection of extraordinary, unicorn-themed hijinks.

One obvious possibility, to which Alexander gives perhaps too little credence, is that the allegedly dispositive 600,000-subject study, despite being larger, broader, and more modern than all the previous ones put together, may have nonetheless missed something in dismissing the other results. Certainly it seems easier to believe that one study, however large, might be inaccurate, than that hundreds of smaller but independent ones might be. That said, no flaw is yet apparent in the new study, and as Alexander points out, it is not the first work to cast serious doubt on 5-HTTLPR.

But what really haunts me while reading about this latest scientific mess is a wilder, more Sheldrakean possibility: could it be that 5-HTTLPR and the other gene variants actually were associated with depression for a decade or two—during which these hundreds of studies simply reflected reality—but then simply ceased to be associated with depression, which was then also correctly reported by the new study? If one pauses to consider, it’s not obvious that this possibility is much crazier than the notion that 1,000 studies were carried out, written up, and accepted, year after year, about a completely nonexistent effect.

These are desperate epistemic times, indeed.

* * *

Such snowballing interpretations and re-interpretations, often involving hypotheses of increasingly surreal strangeness, are suggestive of a far more sinister epistemological breakdown at the heart of at least some branches of science.

The issues are by no means limited to 5-HTTLPR. Consider, for example, another of Alexander’s posts, where he reflects on the growing difficulties in establishing reliable scientific truth through research. Strikingly, he recounts findings indicating that parapsychology—the study of such problematic, nay “unscientific” phenomena as clairvoyance, telepathy, telekinesis, etc.—actually now manages to justify its results at a level of rigor equivalent to that required of “normal” scientific publications, and at about the same rate as normal scientific fields do. He sums the situation up this way:

“…with enough energy focused on a subject, you can always produce ‘experimental evidence’ for it that meets the usual scientific standards.”

This is a remarkable statement. Of course by it Alexander means to drive home something like, “real science’s evidentiary standards are in trouble, because even the parapsychologists, whom we know produce only unscientific nonsense, are now about equally able to meet these same standards”.

Yet as with 5-HTTLPR, we find that Alexander has again inadvertently set us face-to-face with still more Sheldrakean alternate interpretations. First and most obvious of these is that parapsychology’s success in meeting scientific standards of knowledge may actually imply that it is not wholly unscientific nonsense after all, and hence that Alexander’s working epistemic assumptions about it are no more solid than the mainstream wisdom about 5-HTTLPR apparently was. But secondly, and even more strikingly, Alexander’s remark aimed at dismissing parapsychology seems in itself to concede a kind of parapsychological effect: to wit, that “focused energy”, in the form of mass intentions and expectations, can in some way directly influence, even reverse, scientific outcomes, thus fanning the flames of the replication crisis.

* * *

At any rate, even if we do not dare propound such dangerous notions as a fundamental ambiguity in the distinction between parapsychology and “real” psychology, or a direct effect of mass mentality on the aetiology of psychological conditions, we may at least point out certain other, no less remarkable aspects of the current situation.

In particular, at the core of the 5-HTTLPR disaster—standing over its spent body, one might even say—is the now-ascendant omnigenic hypothesis, which asserts, based on a growing number of big-data GWAS studies with huge sample sizes, much like Border et al., that almost none of the variability in most kinds of “complex” traits can be explained by single gene variants, or even by small clusters of genes or metabolic pathways. Instead, it appears that human characteristics of such obvious importance as height, intelligence, temperament, mental illness, and even skin color all have an important genetic component, but that this component can only be reckoned as a summation of extremely tiny effects exerted by thousands or tens of thousands of genetic variants.

So then the failure of the “depression genes” hypothesis, from an omnigenic point of view, is “simply to be expected”. In his 5-HTTLPR post, Alexander himself indulges in a bout of Whiggish, retroactive rationalization on this score, approvingly pointing to 5-HTTLPR’s downfall as reaffirming the strength of scientific progress. In science, he tells us, what was once real to us we later discover to be “really” the silly ignorance of our past selves; in the present case, we now see clearly that the whole idea of “depression genes” could never really have made sense, because it would conflict with omnigenics.

Yet why such self-congratulation or epic retconning should increase our confidence in science and truth in the context of escalating failures of knowledge is, to put it mildly, hard to understand. What makes the modern revision more solid or stable than the old one we now mock, and what makes our present selves fundamentally different from the past selves that were so hilariously taken in, when the whole pattern really suggests our present views, too, may soon need similar retconning?

Much like his oddly parapsychological argument against parapsychology, this appears to be another case of Alexander inadvertently strengthening a point he means to persuade against (and in this we admittedly take him as somewhat emblematic of the scientific mindset). One is reminded very much of McGilchrist’s view of the left brain run amok, continually spinning stories, howsoever fabulous, to reassure us of the world’s tractability and our own powers of control. I would submit that 5-HTTLPR was one such story; the assurance that “failures of science actually are proof of its strength” is another. This is not to say that some stories may not have more objective validity than others, but that a situation like that now increasingly manifest in science—a rapid succession of stories without a clear way of judging whether there has been an actual improvement in explanatory or causal understanding from one to the next—is a sign of breakdown.

In light of this, the most disquieting aspect of the omnigenic model is not that it cannot be shown to explain more trait variance than models with fewer gene variants; it is that the gene variants found to account for most traits have, usually, no evident functional relation to each other. They typically do not segregate strongly by metabolic pathway, or chromosomal, physiological or cellular location, for instance. Often, the variants are not even in regions that code for protein. In this sense, the omnigenic model might best be characterized as the absence of any model, a limit on our understanding as much as it is an advance.

Consider an omnigenic accounting of depression risk. Here, the likelihood of an individual developing depression will be governed by some kind of weighted linear combination of tens of thousands of gene variants. Yet what makes this combination the “depression combination” rather than some other, equally random-seeming combination out of the trillions upon trillions possible in combinatorial space? In short, what is so special about it? In the omnigenic world, these questions have no answer—or else “answering” them will require models containing so many variables and assumptions that they give at most a hugely contingent and relatively weak statistical account, not a mechanistic one.
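
To make the form of this accounting concrete, here is a minimal sketch of such a weighted linear combination (a polygenic score), with effect weights and genotypes simulated purely for illustration; only the structure of the computation follows the omnigenic picture described above.

```python
# Minimal sketch of a polygenic score: a weighted linear combination of
# tiny per-variant effects. All numbers here are simulated for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_variants = 20_000
betas = rng.normal(0.0, 0.001, n_variants)  # tiny effect weight per variant
genotype = rng.integers(0, 3, n_variants)   # 0, 1, or 2 copies of each allele

score = float(betas @ genotype)             # the entire "model": one dot product
print(f"polygenic depression score: {score:+.3f}")

# The text's point: nothing in this sum picks out a pathway or mechanism.
# Shuffling the weights onto different variants yields an equally
# plausible-looking "combination", with nothing in the mathematics to say
# why one is the "depression combination" and the other is not.
shuffled_score = float(rng.permutation(betas) @ genotype)
print(f"shuffled-weights score:     {shuffled_score:+.3f}")
```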

Yet assuming omnigenics is nonetheless “the case”, we are brought to the following fascinating idea: features that are salient and even indispensably important from the perspective of the mental, such as depression and intelligence, can yet have, from the perspective of the physical (or genetic), absolutely no discernable salience whatsoever.

Omnigenics makes much of the idea that these thousands of variants, once properly weighted and accounted for, “explain” the “missing heritability” of complex traits. Yet this depends very much on what one means by “explain”. While the variants may collectively lead to better statistical estimates than those made with any one gene or few genes, they give no intuition about what is going on, and no suggestion of how to proceed towards such intuition. We must simply take the numbers and do what we can with them.

Omnigenics therefore represents a kind of abdication of causation—since the interactions between the tens of thousands of tiny genetic effects, although they may lead to a profoundly significant and totally undeniable mental consequence (depression), cannot be traced to any pathway or even to any discrete sequence of events. Complexity has not only swallowed up explanation; it has digested it.

This places us in the throes of a strange new sort of dualism. The omnigenic model suggests that the world as seen from the mental or intentional realm, as well as most of the other complex traits of life that we single out so intuitively within the realm of our perceptions, are in fact causally near-complete strangers as far as the realm of the material is concerned. For, as far as we can see, the mappings between the two, while capable of being hacked out by brute statistics using huge sample sizes, are curiously arbitrary, in the sense of being governed by no determinate, comprehensible, or known law or causal factor. Analogously to the hyper-complex representations inside a trained artificial neural network, they are conceptually random; they give us no conceptual sense of how a linear combination of 20,000 genes strongly influences the development of the given mental trait, but only stipulate that it does. And if these mappings really are conceptually random/arbitrary, with no comprehensible sense of causation in which to ground them, then nothing much stands to argue against their being much more freely and rapidly changeable than evolutionary theory or mutation rates might suggest. Through omnigenics, Sheldrake (or possibly a weird dualist-postmodernist kind of irrealism) may have the last laugh.

Reflections: Mass-Crystallization and the New Theodicy

The formation of a mass-conformist hive-mentality out of an individualistic, free-inquiry-based culture is like a kind of symmetry-breaking: it is considerably analogous to the sudden crystallization of a supercooled liquid. Above a certain social “temperature” (for instance, a certain rate of innovation and change, or a certain average level of personal prosperity), the individual phase is stablest; people, like molecules in a liquid, then move and think with considerable, though never quite total, independence. Below that temperature, the collective yearning for defined social patterns and fixed ideas becomes increasingly overwhelming, but is not initially able to hit upon a new configuration to build around (a nucleus).

I suspect we have passed below that temperature, or are just doing so; as for the nucleus for the new configuration, it looks more and more likely that some combination of leftist narratives, self-adulatory memes, and identity-group tribalism will serve the purpose.

* * *

On the one hand one finds, particularly among environmentalist progressives, the pervasive idea that human beings are a kind of nature-devouring and ultimately self-annihilating blight upon the earth (and they may yet be proved right). Yet these same people tend to think of humans’ individual self-conceptions and motivations as superb and sacrosanct. Here is a profound tension: how can it be possible simultaneously to condemn a species as deeply destructive and at the same time think whatever its members wish to believe about themselves is the greatest thing possible? Probably because those who subscribe to both views at once don’t really support “everything” an individual could want, but only a certain quite circumscribed range of approved desires—mostly those based on hedonism, sentimentality, collectivism, or resentfulness. (The existence of “collectivist narcissists”, similarly, isn’t a contradiction, but the very condition of atomized conformity.)

* * *

The Wokester’s Theodicy. — The existence and provenance of hatred, in the purview of a philosophy where all sincerely-felt emotions of the individual are supposed to be pure and wonderful and worthy of unconditional acceptance, cannot help being closely similar to the traditional idea of the Fall, and of “sin entering into the world”. Just as the religious person is bitten by anxiety when she wonders “how can an all-good God have allowed evil in the world”, so is the postmodernist when he wonders, “if ‘my truth’ is necessarily sacrosanct, what happens if my hatred of some group is also my truth?”

The usual resolution here is to say it all depends on which group one’s hatred is directed at. Does one’s truth include hatred exclusively of the Oppressor? Then the hatred is fine. This is equivalent to saying that evil deeds are actually good so long as they are directed only against “bad people”—but of course this has the unintended consequence of tacitly legitimating the evil and taking a mulligan on the real moral puzzle, i.e., the definition of “bad people”.

* * *

Unfettered capitalism is, the more one looks at it, quite obviously an incubator for postmodernism and the regime of “my truth”. Consider the parallels—

Capitalist script: “…if it can make you richer and expand the economy, don’t hesitate—do it!”

Wokester script: “…if it can increase your political power and advance your narrative against oppressors, don’t hesitate—do it!”

Capitalist script: “…if this product seems to give you pleasure and convenience, don’t hesitate—buy it!”

Wokester script: “…if this story seems to give you pleasure and help you feel empowered and self-actualized, don’t hesitate—believe it!”

And on it goes.

The Violence in the Virtual

If the process instigated by Nietzsche and carried forth in postmodernism is indeed the obliteration of any cogent distinction between simulation and reality—as in the ‘Aura of the Digital’, for instance—then this process must obtain not just for matter, or money, or social mores, but for violence as well: the distinction between virtual and physical violence, too, must wither away, and the two become increasingly interchangeable.

This interchangeability is now readily seen, in the physical-to-virtual direction, in the steadily growing list of mild insults and disagreements that are classified as “violence”, such as “microaggressions” or “victimizations” or “damaged self-esteem” or “being made to feel unsafe”. But it is the other, virtual-to-physical direction that is even more alarming, for there lies the possibility that visualizations and simulations of violence–the most pitiless and realistic of which already saturate our entertainment and popular culture–will cease to be even distinguishable from physical carnage, so that, being already inured to, accepting of, and indeed amused by the one, we will find no credible grounds for rejecting the other.

To repeat, “physical violence” can only be seriously considered more objectionable than “simulated violence” so long as there remains a trusted demarcation between “the physical” and “the simulated”. And yet this demarcation has already been mightily breached: it has become a cliché by now in popular discussions of physics to speak of “the universe as simulation”, the “holographic principle”, and so forth, while even at the heart of physics there has been an undeniable evolution towards sheer mathematical abstraction, exemplified in “quantum wave functions”, “metric tensors”, “string theories”, and even in much older notions like “action at a distance”. As for the side of simulation moving to become physics, one has only to skim the endless encomiums in the media to the coming “Internet of Things”, “augmented reality”, or “the mirrorworld” to see this complementary prong of the attack gleefully underway.

In sum, we have de-realized the universe from under our feet–and seem still hungry to carry the process to its every last logical conclusion. And so what sheer credulity is it to think that violence, alone, will somehow remain exempt? That it will keep its place, or content itself with mere gestures, sentiments, images? To think, as some do, that violence will only ever continue to move in the first direction, from physicality into virtuality, is hopeless–not simply because there is no compelling reason (let alone law) for that motion not to reverse but, even more problematically, because the direction itself necessarily becomes arbitrary once the real/virtual distinction is lost. The only thing that can stop the disintegration is a faith in the difference, faith in the Real—though that, too, has its problems.

The Blind That Lead the Woke

The goal in our time is erasure and indeed prohibition of all forms of difference as “forms of oppression”… excepting those differences that are wholly self-avowed. These latter—so long as they are not pre-designated as “oppressor”—are instead to be celebrated without exception, being invariably described as neither chosen nor coerced, but as arising from a deep inner, personal source or animistic essence: “my truth”. This “essence”, curiously, is seen as in no way constructed and both it and its needs as absolutely non-negotiable, even though it is in practice often promulgated for entirely external, consensual reasons, such as increased approval and status within the “woke” herd.

This also implies that the racial, ethnic & cultural differences that are so front-and-center in “social justice” movements are really apprehended only in a severely reductive and bowdlerized form–essentially as an individual-centric spiritual “flavor” that may, as if by happenstance, exist across a group, and not as a free-standing, deeply interwoven cultural structure with its own intricate world of meanings and demands that actually breathes its life into a group. These various traditions and ways-of-life, existing wholly outside of the SJW’s field of vision, are thus treated with words of gushy, conspicuous reverence, but with deeds that are profoundly patronizing (such as the hijab-donning politicians in New Zealand).

Thus is born “diversity”: a potpourri of meaningless, essentially narcissistic, pre-approved “flavors” of individual, all dissociated (atomized) from any of the sustaining meanings of their original cultures (let alone deities), all displaying with proud resentment the sacred brand of victimhood, and all merging (conforming) together under a single emergent culture of the debauched Self. This atomized conformity is, indeed, a “melting pot” of the highest intensity; but it is also one with no shape or mold into which to solidify, since “assimilation” is here deemed equivalent to chauvinism, and truth to prejudice.

We may say the greatest danger of the social-justiciars, then, is their complete blindness, despite (or even better, because of) their ideology of “social constructedness”, to the real meaning, value and richness of a functioning culture, of group identification—for these are things, bluntly, that none of them have ever seen or experienced. This blindness leads them to a vision of an “inclusive” world on the model of a chaotic mass, heaped together from the individual level with at most the aid of technical organization, but without any inspiration concerning the living structure, customs and principles that must intervene between the level of the individual and the whole in order for there to be culture, let alone civilization.

We see here that the fatal limitation of SJW—even if to be considered as nothing less than an emerging successor-culture to the decaying West—is the way in which it continually mistakes collectivism for culture (much in the way that, in other fields, derivation has been mistaken for truth). There is the collective, and within it there are, rattling around, the woke individuals with zheir inviolable inner essences and identities; but there is nothing to mediate the gulf between these levels, except technology and power. Thus the rule of SJW, if it is achieved, will defer, out of sheer lack of anything else, increasingly to these and these alone.