Flynn’s Retreat and Academic Idiocracy

I recently came across some remarkable news: the Flynn effect, the mysterious trend of ever-rising IQs that was documented through most of the early and middle 20th century and has been the source of much technocratic/laissez-faire optimism, appears to have actually begun reversing in many countries. We are, it seems, now measurably becoming “dumbed down”. This is claimed to be due to “environmental factors”.

My own experience may be relevant here as far as these intellectual “environmental factors” go, for not too long ago I was involved with proctoring a statistics-based course, focusing on applications in public health and medical research. This was graduate level, at a pretty high-ranking research university.

Here are some of the interesting features of how the course was designed:

1) extra-credit quizzes, worth 5% of the total course grade;

2) extra-credit questions on both the midterm and final exam, together totaling an entire letter grade (B to A);

3) lowest homework quiz score is dropped;

4) most exam questions do not require showing any of one’s calculations;

5) finally, most amazingly, an extra-credit “make-up” exam, where you get to “redo” questions you got wrong on an exam for credit (I helped get this particular foolishness blocked at least, and got a good dose of student flak for it).
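Taken together, these policies compound arithmetically. A minimal sketch (the percentages below are hypothetical figures of my own, not the actual rubric) shows how a middling raw score gets pushed into A territory, and a strong one past 100%:

```python
# Hypothetical illustration of stacked extra credit; all numbers are
# invented for the sketch, not taken from the actual course rubric.

def final_score(raw_pct, quiz_bonus=5.0, exam_bonus=10.0):
    """Raw course percentage plus the two extra-credit pools:
    quiz_bonus: extra-credit quizzes (up to 5% of the grade);
    exam_bonus: midterm + final extra-credit questions, assumed here
                to be worth about one letter grade (~10%)."""
    return raw_pct + quiz_bonus + exam_bonus

print(final_score(85.0))  # a middling B student: 100.0, a solid "A"
print(final_score(92.0))  # a strong student: 107.0, above 100%
```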

Final result, for all students, and even without 5):

• about four-fifths got an A or A- (with about half of these scoring above 100%);

• one-fifth got some kind of B;

• one student got an F (because they didn’t even show up for the last two exams).

This is now considered “successful teaching”; indeed the professor who designed the course and rubric and gave the lectures was cited for “outstanding contributions” to teaching.

Of course the real shock, after larding the course with this much extra credit and other fudges, is that anyone still got less than an A. In reality, belying the grades, I would say perhaps a bit under half the students understood the material at a functional level by the end. However, one quickly discovers that:

• If a student gets anything less than an A-, they will complain because this stops them from doing “capstone research”.

• If a student gets anything less than a B-, they will complain because, in a misguided attempt to battle grade inflation, programs have moved towards making B- the minimum passing grade, thus pressuring instructors to simply inflate their grades even more.

These students will go on, one assumes, to fairly responsible positions in management, tech, and perhaps clinical research. We already are beginning to see how that works out. But if the university is just a business and students are just customers, who are we to deny them what they paid for?

Degree mills, everybody: it’s what the People (& college administrators) have spoken for! If IQs are indeed sliding, this kind of coursework inflation will be partly to blame, yet also increasingly demanded—thus completing the vicious circle.


Minitrue, 40 Years Delayed

We are now seeing an accelerating rollout of censorship and high-precision thought-management across the most dominant services of the Internet. In a recent video, “Dilbert” creator Scott Adams quite chillingly explains that there are countless persons and subjects that one can no longer even name in a YouTube video without having the video’s comments blocked or it being automatically “demonetized” (the resemblance of this word to “demonized” is, surely, a blameless if wondrous coincidence).

Moreover, considering that social media 1) has more than enough following and influence within the electorate to swing election results on a national scale, 2) is now equipped with more powerful persuasion-managing algorithms than ever before, and 3) is controlled by conglomerates whose leadership has strongly apparent (leftward) political biases, Adams comes to the stunning but plausible conclusion that the free democratic process in the United States has likely already completely ceased to exist.

Henceforth, we and our elections are to be largely pawns of the censoring, content-micromanaging conglomerates, suggests Adams—raw behaviorist material to be guided, instructed, and shaped not only for profit but for intellectual and political hygiene. And for the most part, since the whole thing is proprietary, we will not even be able to recognize when it is being done.

As on YouTube, a spate of “deplatformings” of high-profile but controversial figures has occurred on Twitter and Facebook in a short span of time, giving the impression of a coordinated crackdown. Such bans are usually summary and Kafkaesque, with at most a vague explanation along the lines of having “violated content guidelines”, and no process of appeal.

Those banned range from absurdist provocateurs to real alt-right ideologues to tellers of what used to be called “offensive”, “off-color” or “distasteful” jokes. But from what data is available, there is a clear pattern: the censorship appears to be overwhelmingly biased against right-wing personalities and opinions.

On the left, meanwhile, even advocates for violent groups such as Antifa remain largely untouched; in an interesting twist, a researcher who found evidence that many Antifa leaders are being actively courted by journalists was himself recently suspended by Twitter, with no reasons given.

In this new climate of “repressive tolerance”, it seems, only rightists can be “extreme”—a fine touch of Marcusean theory in action.

One can debate the merits of the banned individuals’ ideas and contributions, many of which are mean-spirited and a few grotesque, but the momentum points unmistakably towards something larger, for which these are but outliers and test cases.

But what is that larger thing? Nothing less than consolidated control of thought and expression and political will, all under the auspices of preventing “hate speech”—a dangerously nebulous concept all too easily remodeled, and now actually being remodeled, into the expansively Orwellian demand that nothing upsetting or offensive be said concerning anyone We like.

It has long been a truism that the way we handle offensive speech is a kind of bellwether for the fate of all speech. But it now seems we are very quickly leaving such early-modern sentimentality behind.

Evelyn Hall’s famously idealistic cry, “I disapprove of what you say, but I will defend to the death your right to say it”, or the left-leaning ACLU’s defense of the right even of neo-Nazis to demonstrate and speak, bear witness to an era when upholding the principles of free speech and assembly to the very letter was understood to be far preferable to uncorking the genie of centralized censorship (or now, “deplatforming”), even if the latter promised victories against the most repellent ideology. This was also an era that had the courage and clarity to recognize that these two approaches are, in the end, mutually exclusive.

But for all the dangers we now see to free thought and its expression, here is the deeper calamity: that we allowed our national political life to become so pitifully dependent on the Internet and on these three companies that “freedom of speech” and “participation in the political commons” are now viewed as functionally indistinguishable from “access to social media platforms”.

***

Let me say it straight up: the Internet is now mostly an unmitigated disaster, exceeded only by the lemming-like enabling behavior with which billions have greeted it. Driven by a long-inculcated and ingenuous faith in technology as a moral good, these billions walked right up to the Internet somewhere in the late ’aughts and, without understanding or caring quite what they were doing, as quickly as they could handed over nearly our entire social commons and civic life to what just so happens to be the most atomizing, delusion-breeding, monopolistic, emotionally toxic and conformity-inducing technology ever created.

And so now a huge and forlorn midsection of this country and others finds itself not only quite addicted to this digital crack (as was intended all along), but largely unable to remember or care how older generations ever made friends, formed communities, carried out politics, or pursued ideas and knowledge without it (even though every indication is that we did all of these things considerably better pre-Internet).

We have lost the physical world; we have lost our own efficacy. But this is not all. For when it became broadly apparent that this Internet, this beloved new manifestation of immersive techno-escapism, might in fact be a cult generator that continually buries truth and amity under foetid tides of rhetorical sludge (leading, to a much greater extent than the alleged Russian collusion ever did, to the election of our current president), our wise and virtuous elites somehow concluded that therefore, massive social-media monopolies such as Facebook, Twitter and Google should correct (read, censor) the Internet as they pleased—thereby rescuing us all from our deplorable selves!

The upshot is that the Internet is rapidly transitioning from a place of mob-based mudslinging that is at least limited by being chaotic and decentralized, to a monolithic system of hyper-efficient eavesdropping and technocratically curated falsehood. Now there’s a capital improvement!

The promise of the Internet as a tool of “liberation” is, and always was, a fool’s promise, as a few writers farsightedly grasped. While it can serve to enable dissident organization, its overall course tends, and always has tended, by its very nature, towards the dissolving of real-world social bonds in favor of the consolidation of remote social control. Indeed, real-world social bonds can be seen as a kind of coarse-grained and dangerously unpredictable rival to the power of the fine-grained algorithmic panjandrums, hence to be replaced by atomized conformity with all dispatch.

* * *

If it were the case that the censorship and thought-control were largely confined to the social media realm—and I have already made no bones about the latter’s perniciousness—then the situation would be less concerning. It might even be salutary, by driving more people away from these stultifying and addictive media and back into the far healthier, if now quixotic channels of original-source research, long-form discourse, and in-person interaction.

Alas, the evidence has been coming in for years now that this great “closing of the online mind” is not just a matter of social media or even of the Internet generally, but is instead rapidly developing at the far more worrying level of government-abetted censorship, and in countries with a long tradition of free speech and liberalism no less: in Australia, whistleblowers can now face life sentences and whole news networks can be raided—with carte blanche to “add, copy, delete or alter” information—with scarcely a shrug. In the UK, the government has now banned any advertisements containing gender stereotypes deemed “harmful” (like housewives doing chores, or masculine men), and police appear to be actually criminalizing opinion.

This real-life Minitrue carefully monitors social media accounts for any signs of thoughtcrime, ready to pounce with the threat of actual imprisonment. It even declares, portentously: “we take all reports of malicious communication seriously”. All reports!

Speech on religious matters is far from immune either. In many parts of the European Union, criticism of Islam or its founder that shows them in an unflattering light can now lead to outright legal penalties. Even in daily life, even limited face-to-face disagreement with new orthodoxies on gender identity and gay marriage, to name the biggest examples, has rapidly become potentially career- or friendship-ending.

* * *

The First Amendment buys time in the USA against the governmental censorship seen elsewhere, by spelling out “free speech” in big capital letters, so to speak, on the doorstep of the nation. But if we take seriously the warnings of Snowden and other whistleblowers—now already five years behind the times and the tech—eventually the First Amendment will be swamped, or simply redefined out of existence. For the Amendment, already a somewhat impressionistic and porous barrier by virtue of its very generality and simplicity, is now charged with holding back two huge floods from opposite directions: from the “private” sphere of the social media masterminds as well as from the “public” sphere of government.

This double attack is a consequence, not only of the oligarchic-fascistic merging between government and corporate power that has been underway for decades in the USA, but also of the fact that the ranks of new government officials ultimately flow from academia, which since the 1960s has been increasingly dominated by admirers of just such Orwellian doublespeak doctrines as “repressive tolerance”.

(If it seems unfair to describe only the left as Orwellian here, bear in mind, firstly, that institutional and philosophical legitimacy is overwhelmingly being accorded to even quite extreme leftist theories over rightist ones, while the latter are, as we have seen, disproportionately censored; and secondly, that Orwell chose to name the dominant political party in 1984 “Ingsoc”, or English Socialism, for very definite and pressing historical reasons.)

* * *

Speech has power, both to bind and to disintegrate. This always has been so; it is why rulers have sought continually to restrict it, to varying degrees. And with this power of speech has inevitably come the prospect of causing emotional distress or embarrassment.

But this risk has always been thought a very tolerable price to pay, because the project of free thought, accountability of power to truth, and ultimately individuality itself depends profoundly on the ability of individuals to independently call things as they see them, even at the risk that they may be in error or end up disliked.

This applies not just to the subjective-narcissistic “my truth”, now so lauded under the fork-tongued modern sense of “inclusivity” but, much more importantly, to actual truth and reality—to that which, as Philip K. Dick put it, “when you stop believing in it, doesn’t go away”.

The days of “sticks and stones will break my bones but words will never harm me” have given ground to the “word that wounds”, and wounds endlessly; with the license of subjectivism, insult and offense have been smuggled into the domain of “violence”. Caught in a great and growing mire of pain, obsession, and resentment, the whole project of the open society founders. The remedy? Suppression of all strong and sincere feeling, unless backed by a sufficiently powerful identity group (or corporation).

* * *

What will happen now? The same thing that always has to happen when power attempts to commandeer history, thought and opinion, but a determined minority is unwilling to accept such: alternative, covert or semi-covert channels of information and organization will have to percolate and spread. Some of these already exist; for instance smaller, anonymized, encrypted, or more libertarian communication methods such as Telegram or the “Dark Web” may be options. Blogs remain relatively untouched, but there are indications that WordPress is beginning to test the censorship waters as well.

But the problem remains that these are all still online tools, ultimately dependent upon gigantic server infrastructures maintainable only by governments and large corporations, and hence susceptible to the strange collectivist-yet-top-down control inherent in any highly networked yet centralized information system—and the flexing of those muscles of control is exactly what is at issue.

The only likely solution is to collectively, drastically cut our usage of the Internet and, as much and as soon as possible, break its stranglehold over the social metabolism of our lives, our thoughts, and our nation.

Perhaps there will begin to be a trade in thumb-drives, or other physical media, or even—Heaven forbid, what atavistic blasphemy! Dare one even say it?—actual meetings and interactions of groups of real people, in actual places, to discuss matters that concern them and form actual interpersonal bonds and initiatives aimed towards the addressing of those matters. (Indeed, I suspect that we will soon see put to the test the extent to which that other essential component of the First Amendment—freedom of association—still lives.)

In past ages, freedom was won and maintained through the vigorous pursuit of the written word, and through direct personal meeting of actual humans, not tweets or flash videos. If we are not able to find the will to recover something of that tradition and that skill, which has been essentially left for dead in the lust for false progress, then we will soon find ourselves living under genuinely totalitarian conditions. And Orwell will then prove (rather as I think Malthus will, but that is another story) to have been not so much wrong, as late.

Unfortunately, this may already be inevitable. Aside from the widespread addiction to social media that has already rendered it compulsory to much of the adult and most of the youth population, the greatest risk is that the vast majority simply will not see or care enough to extricate themselves. For totalitarianism is like swimming in a fast-moving current: so long as one obeys it, one does not even feel that it is there.

A Depressing Ambiguity

I recently read an Atlantic article on the fiasco surrounding 5-HTTLPR, as well as psychiatrist Scott Alexander’s blog post on the topic, and am still straining to grasp all the implications.

To summarize, it now appears that over the last 25 years, anywhere from several hundred to over a thousand scientific papers were published in reputable, peer-reviewed journals, based around presumed “genes for depression” (5-HTTLPR being perhaps the best-known) whose associations with depression are now thought to be completely bogus. The dramatic and abrupt discrediting of these genetic linkages is mostly due to an immense, 600,000-subject study by Border et al., released just last March, which investigated a collection of the most famous “depression genes” and came away finding no statistical support for any of them.

Alexander’s discussion of this, the scientific equivalent of a 500-car pileup, is especially punchy and concentrated; he is, understandably, much shaken that such a huge body of seemingly reputable confirmatory research on 5-HTTLPR could have turned out, apparently, to be pure phantasm. He throws the absurdity of the whole situation—where perhaps hundreds of academic research groups all managed to convince each other for decades of the rock-solid validity of a host of nonexistent effects—into sharp relief, using a barrage of vivid analogies like the following:

“…This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.”

Yet even this evocative comparison doesn’t quite capture the bizarreness of the “depression genes” situation, for what we see here is less like one explorer, than an entire corps of hundreds of explorers, all going to the Orient and all coming back claiming they saw the same collection of extraordinary, unicorn-themed hijinks.

One obvious possibility, which Alexander gives perhaps too little credence, is that the allegedly dispositive 600,000-subject study, despite being larger, broader, and more modern than all the previous ones put together, may have nonetheless missed something in dismissing the other results. Certainly it seems easier to believe that one study, however large, might be inaccurate, than that hundreds of smaller but independent ones might be. That said, no flaw is yet apparent in the new study, and as Alexander points out, it is not the first work to cast serious doubt on 5-HTTLPR.
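There is also a mundane mechanism that can produce exactly this pattern: small samples plus selective publication. A toy simulation (my own sketch, not drawn from Alexander or from Border et al.) makes the point: run many small studies on pure noise, then “publish” only those reaching p < 0.05, and a literature of mutually “confirming” papers about nothing accumulates on its own.

```python
# Toy simulation of publication bias: thousands of small studies of a
# NONEXISTENT effect, of which only the "significant" ones get published.
import random
import statistics
from math import erf, sqrt

random.seed(42)

def small_study(n=40):
    """Compare two groups drawn from the SAME distribution (no real
    effect); return an approximate two-sided p-value via a crude
    z-test on the difference of means."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = abs(diff) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # normal approximation

results = [small_study() for _ in range(5000)]
published = [p for p in results if p < 0.05]
print(f"{len(published)} of {len(results)} pure-noise studies came out 'significant'")
```

With these settings roughly one study in twenty clears the significance bar by chance alone; a field that files away the rest can thus assemble hundreds of apparent confirmations of a phantasm, no Sheldrakean mechanism required.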

But what really haunts me while reading about this latest scientific mess is a wilder, more Sheldrakean possibility: could it be that 5-HTTLPR and the other gene variants actually were associated with depression for a decade or two—during which these hundreds of studies simply reflected reality—but then simply ceased to be associated with depression, which was then also correctly reported by the new study? If one pauses to consider, it’s not obvious that this possibility is much crazier than the notion that 1,000 studies were carried out, written up, and accepted, year after year, about a completely nonexistent effect.

These are desperate epistemic times, indeed.

* * *

Such snowballing interpretations and re-interpretations, often involving hypotheses of increasingly surreal strangeness, are suggestive of a far more sinister epistemological breakdown at the heart of at least some branches of science.

The issues are by no means limited to 5-HTTLPR. In another of Alexander’s posts, for example, he reflects on the growing difficulty of establishing reliable scientific truth through research. Strikingly, he recounts findings indicating that parapsychology—the study of such problematic, nay “unscientific” phenomena as clairvoyance, telepathy, telekinesis, etc.—actually now manages to justify its results at a level of rigor equivalent to that required of “normal” scientific publications, and at about the same rate as normal scientific fields do. He sums the situation up this way:

“…with enough energy focused on a subject, you can always produce ‘experimental evidence’ for it that meets the usual scientific standards.”

This is a remarkable statement. Of course by it Alexander means to drive home something like, “real science’s evidentiary standards are in trouble, because even the parapsychologists, whom we know produce only unscientific nonsense, are now about equally able to meet these same standards”.

Yet as with 5-HTTLPR, we find that Alexander has again inadvertently set us face-to-face with still more Sheldrakean alternate interpretations. First and most obvious of these is that parapsychology’s success in meeting scientific standards of knowledge may actually imply that it is not wholly unscientific nonsense after all, and hence that Alexander’s working epistemic assumptions about it are no more solid than the mainstream wisdom about 5-HTTLPR apparently was. But secondly, and even more strikingly, Alexander’s remark aimed at dismissing parapsychology seems in itself to concede a kind of parapsychological effect: to wit, that “focused energy”, in the form of mass intentions and expectations, can in some way directly influence, even reverse, scientific outcomes, thus fanning the flames of the replication crisis.

* * *

At any rate, even if we do not dare propound such dangerous notions as a fundamental ambiguity in the distinction between parapsychology and “real” psychology, or a direct effect of mass mentality on the aetiology of psychological conditions, we may at least point out certain other, no less remarkable aspects of the current situation.

In particular, at the core of the 5-HTTLPR disaster—standing over its spent body, one might even say—is the now-ascendant omnigenic hypothesis, which asserts, based on a growing number of big-data GWAS studies with huge sample sizes much like Border et al., that almost none of the variability in most kinds of “complex” traits can be explained by single gene variants, or even by small clusters of genes or metabolic pathways. Instead, it appears that human characteristics of such obvious importance as height, intelligence, temperament, mental illness, and even skin color all have an important genetic component, but that this component can only be reckoned as a summation of extremely tiny effects exerted by thousands or tens of thousands of genetic variants.

So then the failure of the “depression genes” hypothesis, from an omnigenic point of view, is “simply to be expected”. In his 5-HTTLPR post, Alexander himself indulges in a bout of Whiggish, retroactive rationalization on this score, approvingly pointing to 5-HTTLPR’s downfall as reaffirming the strength of scientific progress. In science, he tells us, what was once real to us we later discover to be “really” the silly ignorance of our past selves; in the present case, we now see clearly that the whole idea of “depression genes” could never really have made sense, because it would conflict with omnigenics.

Yet why such self-congratulation or epic retconning should increase our confidence in science and truth in the context of escalating failures of knowledge is, to put it mildly, hard to understand. What makes the modern revision more solid or stable than the old one we now mock, and what makes our present selves fundamentally different from the past selves that were so hilariously taken in, when the whole pattern really suggests that our present views, too, may soon need similar retconning?

Much like his oddly parapsychological argument against parapsychology, this appears to be another case of Alexander inadvertently strengthening a point he means to argue against (and in this we admittedly take him as somewhat emblematic of the scientific mindset). One is reminded rather of McGilchrist’s view of the left brain run amok, continually spinning stories, howsoever fabulous, to reassure us of the world’s tractability and our own powers of control. I would submit that 5-HTTLPR was one such story; the assurance that “failures of science actually are proof of its strength” is another. This is not to say that some stories may not have more objective validity than others, but that a situation like that now increasingly manifest in science—a rapid succession of stories without a clear way of judging whether there has been an actual improvement in explanatory or causal understanding from one to the next—is a sign of breakdown.

In light of this, the most disquieting aspect of the omnigenic model is not that it cannot be shown to explain more trait variance than models with fewer gene variants; it is that the gene variants found to account for most traits have, usually, no evident functional relation to each other. They typically do not segregate strongly by metabolic pathway, or chromosomal, physiological or cellular location, for instance. Often, the variants are not even in regions that code for protein. In this sense, the omnigenic model might best be characterized as the absence of any model, a limit on our understanding as much as it is an advance.

Consider an omnigenic accounting of depression risk. Here, the likelihood of an individual developing depression will be governed by some kind of weighted linear combination of tens of thousands of gene variants. Yet what makes this combination the “depression combination” rather than some other, equally random-seeming combination out of the trillions upon trillions possible in combinatorial space? In short, what is so special about it? In the omnigenic world, these questions have no answer—or else “answering” them will require models containing so many variables and assumptions that they give at most a hugely contingent and relatively weak statistical account, not a mechanistic one.

Yet assuming omnigenics is nonetheless “the case”, we are brought to the following fascinating idea: features that are salient and even indispensably important from the perspective of the mental, such as depression and intelligence, can yet have, from the perspective of the physical (or genetic), absolutely no discernible salience whatsoever.

Omnigenics makes much of the idea that these thousands of variants, once properly weighted and accounted for, “explain” the “missing heritability” of complex traits. Yet this depends very much on what one means by “explain”. While the variants may collectively lead to better statistical estimates than those made with any one gene or few genes, they give no intuition about what is going on, and no suggestion of how to proceed towards such intuition. We must simply take the numbers and do what we can with them.
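A minimal sketch (with invented weights and genotypes, not real GWAS output) makes the character of such an “explanation” concrete: a polygenic score is simply a weighted sum over tens of thousands of variants, each individually negligible.

```python
# Sketch of a polygenic score: all weights and genotypes here are
# randomly invented for illustration, not drawn from any real study.
import random

random.seed(0)
N_VARIANTS = 20_000

# Per-variant effect sizes: each tiny and individually meaningless.
weights = [random.gauss(0, 0.001) for _ in range(N_VARIANTS)]

def polygenic_score(genotype):
    """genotype[i] = 0, 1, or 2 copies of the risk allele at variant i."""
    return sum(w * g for w, g in zip(weights, genotype))

person = [random.choice([0, 1, 2]) for _ in range(N_VARIANTS)]
score = polygenic_score(person)
print(f"risk score: {score:.3f}")
```

The resulting number may predict, statistically; but nothing in the weight vector tells you *why*: no pathway, no mechanism, just 20,000 small nudges summed up.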

Omnigenics therefore represents a kind of abdication of causation—since the interactions between the tens of thousands of tiny genetic effects, although they may lead to a profoundly significant and totally undeniable mental consequence (depression), cannot be traced to any pathway or even to any discrete sequence of events. Complexity has not only swallowed up explanation; it has digested it.

This places us in the throes of a strange new sort of dualism. The omnigenic model suggests that the world as seen from the mental or intentional realm, as well as most of the other complex traits of life that we single out so intuitively within the realm of our perceptions, are in fact causally near-complete strangers as far as the realm of the material is concerned. For, as far as we can see, the mappings between the two, while capable of being hacked out by brute statistics using huge sample sizes, are curiously arbitrary, in the sense of being governed by no determinate, comprehensible, or known law or causal factor. Analogously to the hyper-complex representations inside a trained artificial neural network, they are conceptually random; they give us no conceptual sense of how a linear combination of 20,000 genes strongly influences the development of a given mental trait, but only stipulate that it does. And if these mappings really are conceptually random and arbitrary, with no comprehensible sense of causation in which to ground them, then nothing much stands to argue against their being much more freely and rapidly changeable than evolutionary theory or mutation rates might suggest. Through omnigenics, Sheldrake (or possibly a weird dualist-postmodernist kind of irrealism) may have the last laugh.

Reflections: Mass-Crystallization and the New Theodicy

The formation of a mass-conformist hive-mentality out of an individualistic, free-inquiry-based culture is like a kind of symmetry-breaking: it is closely analogous to the sudden crystallization of a supercooled liquid. Above a certain social “temperature” (for instance, a certain rate of innovation and change, or a certain average level of personal prosperity), the individual phase is the most stable; people, like molecules in a liquid, then move and think with considerable, though never quite total, independence. Below that temperature, the collective yearning for defined social patterns and fixed ideas becomes increasingly overwhelming, but is not initially able to hit upon a new configuration to build around (a nucleus).

I suspect we have passed below that temperature, or are just doing so; as for the nucleus for the new configuration, it looks more and more likely that some combination of leftist narratives, self-adulatory memes, and identity-group tribalism will serve the purpose.

* * *

On the one hand one finds, particularly among environmentalist progressives, the pervasive idea that human beings are a kind of nature-devouring and ultimately self-annihilating blight upon the earth (and they may yet be proved right). Yet these same people tend to think of humans’ individual self-conceptions and motivations as superb and sacrosanct. Here is a profound tension: how can it be possible simultaneously to condemn a species as deeply destructive and at the same time think whatever its members wish to believe about themselves is the greatest thing possible? Probably because those who subscribe to both views at once don’t really support “everything” an individual could want, but only a certain quite circumscribed range of approved desires: mostly those based on hedonism, sentimentality, collectivism, or resentfulness. (The existence of “collectivist narcissists”, similarly, isn’t a contradiction, but the very condition of atomized conformity.)

* * *

The Wokester’s Theodicy. — The existence and provenance of hatred, in the purview of a philosophy where all sincerely-felt emotions of the individual are supposed to be pure and wonderful and worthy of unconditional acceptance, cannot help being closely similar to the traditional idea of the Fall, and of “sin entering into the world”. Just as the religious person is bitten by anxiety when she wonders “how can an all-good God have allowed evil in the world”, so is the postmodernist when he wonders, “if ‘my truth’ is necessarily sacrosanct, what happens if my hatred of some group is also my truth?”

The usual resolution here is to say it all depends on which group one’s hatred is directed at. Does one’s truth include hatred exclusively of the Oppressor? Then the hatred is fine. This is equivalent to saying that evil deeds are actually good so long as they are directed only against “bad people”—but of course this has the unintended consequence of tacitly legitimating the evil and taking a mulligan on the real moral puzzle, i.e., the definition of “bad people”.

* * *

Unfettered capitalism is, the more one looks at it, quite obviously an incubator for postmodernism and the regime of “my truth”. Consider the parallels—

Capitalist script: “…if it can make you richer and expand the economy, don’t hesitate—do it!”

Wokester script: “…if it can increase your political power and advance your narrative against oppressors, don’t hesitate—do it!”

Capitalist script: “…if this product seems to give you pleasure and convenience, don’t hesitate—buy it!”

Wokester script: “…if this story seems to give you pleasure and help you feel empowered and self-actualized, don’t hesitate—believe it!”

And on it goes.

The Violence in the Virtual

If the process instigated by Nietzsche and carried forth in postmodernism is indeed the obliteration of any cogent distinction between simulation and reality—as in the ‘Aura of the Digital’, for instance—then this process must obtain not just for matter, or money, or social mores, but for violence as well: the distinction between virtual and physical violence, too, must wither away, and the two become increasingly interchangeable.

This interchangeability is now readily seen, in the physical-to-virtual direction, in the steadily growing list of mild insults and disagreements that are classified as “violence”, such as “microaggressions” or “victimizations” or “damaged self-esteem” or “being made to feel unsafe”. But it is the other, virtual-to-physical direction that is even more alarming, for there lies the possibility that visualizations and simulations of violence–the most pitiless and realistic of which already saturate our entertainment and popular culture–will cease to be even distinguishable from physical carnage, so that, being already inured to, accepting of, and indeed amused by the one, we will find no credible grounds for rejecting the other.

To repeat, “physical violence” can only be seriously considered more objectionable than “simulated violence” so long as there remains a trusted demarcation between “the physical” and “the simulated”. And yet this demarcation has already been mightily breached: it has become a cliché by now in popular discussions of physics to speak of “the universe as simulation”, the “holographic principle”, and so forth, while even at the heart of physics there has been an undeniable evolution towards sheer mathematical abstraction, exemplified in “quantum wave functions”, “metric tensors”, “string theories”, and even in much older notions like “action at a distance”. As for the other side, that of simulation moving to become physical, one has only to skim the endless encomiums in the media to the coming “Internet of Things”, “augmented reality”, or “the mirrorworld” to see this complementary prong of the attack gleefully underway.

In sum, we have de-realized the universe from under our feet–and seem still hungry to carry the process to its every last logical conclusion. And so what sheer credulity is it to think that violence, alone, will somehow remain exempt? That it will keep its place, or content itself with mere gestures, sentiments, images? To think, as some do, that violence will only ever continue to move in the first direction, from physicality into virtuality, is hopeless–not simply because there is no compelling reason (let alone law) for that motion not to reverse but, even more problematically, because the direction itself necessarily becomes arbitrary once the real/virtual distinction is lost. The only thing that can stop the disintegration is a faith in the difference, faith in the Real—though that, too, has its problems.

The Blind That Lead the Woke

The goal in our time is erasure and indeed prohibition of all forms of difference as “forms of oppression”… excepting those differences that are wholly self-avowed. These latter—so long as they are not pre-designated as “oppressor”—are instead to be celebrated without exception, being invariably described as neither chosen nor coerced, but as arising from a deep inner, personal source or animistic essence: “my truth”. This “essence”, curiously, is seen as in no way constructed, and both it and its needs as absolutely non-negotiable, even though it is in practice often promulgated for entirely external, consensual reasons, such as increased approval and status within the “woke” herd.

This also implies that the racial, ethnic & cultural differences that are so front-and-center in “social justice” movements are really apprehended only in a severely reductive and bowdlerized form–essentially as an individual-centric spiritual “flavor” that may, as if by happenstance, exist across a group, and not as a free-standing, deeply interwoven cultural structure with its own intricate world of meanings and demands that actually breathes its life into a group. These various traditions and ways-of-life, existing wholly outside of the SJW’s field of vision, are thus treated with words of gushy, conspicuous reverence, but with deeds that are profoundly patronizing (such as the hijab-donning politicians in New Zealand).

Thus is born “diversity”: a potpourri of meaningless, essentially narcissistic, pre-approved “flavors” of individual, all dissociated (atomized) from any of the sustaining meanings of their original cultures (let alone deities), all displaying with proud resentment the sacred brand of victimhood, and all merging (conforming) together under a single emergent culture of the debauched Self. This atomized conformity is, indeed, a “melting pot” of the highest intensity; but it is also one with no shape or mold into which to solidify, since “assimilation” is here deemed equivalent to chauvinism, and truth to prejudice.

We may say the greatest danger of the social-justiciars, then, is their complete blindness, despite (or even better, because of) their ideology of “social constructedness”, to the real meaning, value and richness of a functioning culture, of group identification—for these are things, bluntly, that none of them have ever seen or experienced. This blindness leads them to a vision of an “inclusive” world on the model of a chaotic mass, heaped together from the individual level with at most the aid of technical organization, but without any inspiration concerning the living structure, customs and principles that must intervene between the level of the individual and the whole in order for there to be culture, let alone civilization.

We see here that the fatal limitation of SJW—even if to be considered as nothing less than an emerging successor-culture to the decaying West—is the way in which it continually mistakes collectivism for culture (much in the way that, in other fields, derivation has been mistaken for truth). There is the collective, and within it there are, rattling around, the woke individuals with zheir inviolable inner essences and identities; but there is nothing to mediate the gulf between these levels, except technology and power. Thus the rule of SJW, if it is achieved, will defer, out of sheer lack of anything else, increasingly to these and these alone.

Unity and Multiplicity: Notes & Fragments

There are plenty of people to be found who will freely acknowledge that the predicament of the modern, liberal, late-stage-capitalist world comes from corporatist-growthist-bred kleptocracy, inequality and resource degradation. By contrast, there are also plenty to be found who can freely acknowledge that our predicament comes from social chaos wrought by the ever-faster dismantling of all aspects of localism, independence, tradition, spirituality, and distinctiveness. (There are also a few—the hard-boiled libertarians, for example, and the Panglosses who deny there is a predicament—who disagree with both sides.)

Yet much as one may look, there is hardly a soul who can bear to entertain that our predicament actually involves both of these tendencies together, in concert. Again and again one hears: “it must be one or the other—one or the other! Damn the one, but for Heaven’s sake keep the other!”

This intransigent “either/or” begets a tribal ferocity, between two visions of progress and liberalism: one side sees itself the defender of enterprise, universal free trade, innovation, efficiency, growth, and production; the other proclaims the rule of universal human rights and equality, harmony between all persons and peoples, reification of personal preferences, technically-assisted control of reproduction, sexual laissez-faire, “inclusiveness” towards all physical persons (though not necessarily of ideas or speech), and in general the systematic minimization (via both technology and policy) of all forms of strain, competition, discord or physical or emotional threat.

And so the usual exponents line up on their respective sides of the Progress-divide, often calling themselves “Right” or “Left”, “capitalist” or “socialist”, “individualist” or “collectivist”, and turn to excoriate their counterparts, or at least to call for their gradual phasing-out. There seems to be an unwritten rule of demarcation at work between these two, a dangerous fault-line that tenses violently whenever the ill effects of economic progressivism are pitted against the ill effects of cultural progressivism, and whereupon we hear from each side the desperate hope, again and again: “surely one of us is culpable; but even if we should fall, then let the other turn out blameless”! But note well: on both sides is progressivism, and only progressivism!

* * *

We cannot then help but ask: why then should this demarcation exist at all, let alone be defended so fiercely? Why, in other words, might it be so hard to conceive that a dismantling of civilizational values might go hand-in-glove with pervasive greed, gluttony, corruption and short-sightedness—especially when these are all typically done under the same banners of technology, standardization, futurity, progress? Why so hard to see that the compulsions of rapacious profit-seeking and technical standardization and optimization might, by steadily dismantling traditional and individual differences—or digesting them into functionally interchangeable and disconnected units ready for exploitation—both serve and be served by the interchangeable “freedoms” espoused by cultural liberalism? And so, finally, why might it be so hard to admit these trends might all point back to the same underlying pathology—representing, in essence, opposing jaws of the same beast?

We might go further in saying that not only are both sides expressions of the same “beast”, but that this “beast”, moreover, is already known to both sides of the divide by a panoply of names, each reflecting, so to speak, a particular scale of its armor: disenchantment, nihilism, scientism, liberalization, globalization, the Reign of Quantity, “standing-reserve”, technocracy, interchangeability. There are many more such “scales”; let us choose, however, to describe the Beast to which they are attached by the term, “Simplification”, which we take to mean the aggressive deconstruction of all individual and cultural standards and differences, as well as all non-material aspects of human existence, in the service of the fundamentally technological subjugation of all life.

“Everything solid melts into air”, Marx wrote, intending it to refer to the continual upheavals wrought within capitalist societies through technological revolutions in the means of production, leading in turn to complete destruction and abandonment of ways of life and ancient beliefs and principles. Yet this expression applies to far more than just the “capitalist” or “socialist” “side” of the divide: as such it is hardly so much a motto of capitalism alone, as of Simplification in general. We might just as well restate the motto as: “everything meaningful melts into nihilism”, or “everything subtle melts into formulas”.

* * *

On reflection, it seems increasingly incredible that many of the people on both sides of this economic-cultural pseudo-divide within progressivism have not already reached these questions and this impasse—have not already gathered in their unconscious minds the facts necessary to see that both sides not only play their indispensable part in giving rise to the predicament but, moreover, draw their power from that same source, in the drive towards mass-Simplification. We would propose that they indeed have accumulated this necessary knowledge, and it is for this very reason that these “either-or” partisans dread to think one step further and bring the connection into consciousness. For to do so would reveal their great Battle of Cultural Opposites to be the internecine turf-war that it really is—an emotionally versatile instance of the narcissism of small differences. It also would leave them with no side to hang on to in their lives, no vestige of progression in which to put their future hopes (howsoever Simplified it may be).

Moreover, were the pseudo-partisans on either side to see their connectedness to the ostentatiously loathed other side—see that both “their” side and the “other” side are just tools of Simplification, helpmates to a form of ecological and spiritual mass-destruction—they suddenly would be forced to see themselves as not only no better than those they rail against, but in a covert sense in league with them, having been co-parties in all but name to the exact same mischief, and differing only in their preferences in fig-leaves. (Hence the only-apparent hypocrisy of so many who inveigh against environmental woes yet happily pursue profit and change not one speck of their comfortable lifestyles; or inversely, of those who advocate for “minimal government”, “efficiency”, and “competitiveness”, yet happily go along with intrusive restrictions on speech, political disagreement, personal autonomy, and private belief wherever these might lead to real consequences.)

* * *

In sum, the “divide” amounts to a classic instance of psychological repression: the two pieces of the conundrum must never be joined in their minds, not because their joining would not yield an answer, but exactly because it would yield an answer—one too terrible, too shameful to bear, that of mass-Simplification’s overwhelming predominance in deed and thought.

And so for now the division remains: either advocate the wholesale decomposition of values, differences and traditions into the most minimal and insipid possible forms (under the names “diversity” and “inclusion”) or endorse the subordination of all life, human and nonhuman, to the demands of unchained greed and short-term efficacy (under the names of “prosperity” and “growth”).

And yet, while this pseudo-division does persist, it can by no means be said to be stable. For the repression on which it is based is surely destined, year upon year, to grow more exhausting, more onerous to maintain, especially as the contradictions and dissatisfactions of Simplification engender more and more discord, misfortune, and ressentiment. When at last the repression gives way under this pressure, revealing the full ugliness of both sides, there will be a mad rush for alternate viewpoints—ones that promise to return the hope and fullness that Simplification so deliberately excluded. Yet most of these, as in previous examples of mass ideological intoxication, will prove at best underdeveloped and ineffectual, and at worst grossly defective.

* * *

Surely related to the apparent divide between the two “jaws” of Simplification is the fact that, more and more in the acceptable range of political debate, we find ourselves forced to commit to the faith that each culture and even each personal preference holds a truth that is uniquely valid and valuable, incommensurable with any other, and which must be accepted and respected on its own terms—while at the same moment, we are just as strongly exhorted to believe, in usually no less righteous terms, that we are all one humanity.

Both of these views are elements of progressivism. We are torn, in other words, between visions of unity and multiplicity, and forced to hold both visions in mind at the same time. Nature is constituted by one supreme set of laws, yet the laws are a matter of cultural construction and moreover vary from “paradigm” to “paradigm” (Kuhn); all human beings are fundamentally the same and peace will only result from teaching them such, yet their cultural differences are of supreme importance and a diversity of them is an unquestionable good; and finally, while the “arc of history” (bending towards justice) is sure to weld the human family together forevermore in a monolithic, reason-based system of universal rights, truth, trade, and tech, at the very same time history has no meaning or direction other than that defined by power, mostly exerted through the arbitrary “creative” configuring of information that has now reached its consummation in the digital.

These two groups of ideals, if taken as impassioned absolutes, are of course incompatible. One cannot look forward to a world composed of completely incommensurable, distinctive traditions, yet happily subordinate them under values that claim absolute universality, without destroying something of their original sincerity—nor inversely. Either the traditions are sideshows, heirlooms, museum-pieces kept for the sake of color and curiosity—as a way to unwind from the serious business of homogenizing the universe into a porridge of atomized conformity—or the universal development must bend its knee to the local and traditional, without irony. Therefore whoever claims to hold to both of these visions, to keep both faiths, must be less than steadfast in their commitment to at least one of them.

Considering the Enlightenment roots of modern political liberalism, Allan Bloom observed that the conception suffers from:

“…two contradictory understandings of what counts for man. One tells us that what is important is what all men have in common; the other that what men have in common is low, while what they have from separate cultures gives them their depth and their interest.” [COTAM, 191]

We naturally must ask, then: “which is it to be”? Are we most precious in our diversity—real diversity, that volatile, rebarbative, mischief-prone thing, that challenges and asserts and draws lines and in short, stands for things—or are we to be melded together into a smooth, monolithic system? Multiplicity as difference, or overarching order and commensurability? Inclusiveness of inner, independent conceptions, or inclusiveness of externalized, group identity? Or have these categories themselves already succumbed to Simplification, so that we are choosing, yet again, between opposing jaws of the same beast? Is there an alternative—either a view of Progress that does not demand these violent internal ruptures and contradictions, or else a path leading outside of progressivism altogether?

* * *

A fundamental oddity of the liberal mindset (both cultural and economic) is the belief that while people’s economic and ideological behavior can be rationally shaped ad libitum by creating the right system of incentives, certain other kinds of behavior—notably sexuality or drug abuse or criminality—are immune or even off limits to such incentives, to the extent that any talk of normative standards in these areas must be denounced as intolerant and misguided. Thus, paradoxically, that which is to be controlled and subject to norms (economy, ideology), is given the status of “choice”, while that which is wholly self-defining and free (sexuality, identity, drug use, crime), is stipulated to be in no way a matter of choice. (Whatever is left over, in general, is classed as “society’s” fault: but this misalignment is to be remedied through the plasticity of the economic and the ideological.)

* * *

Under nihilism, any principles—no matter how seemingly commonsensical—by definition cannot hold in the end, for they have been “revealed” as no more than quaint (or despised) shibboleths. Principles are no more than an imposture of habit; as habits, they will inevitably be slowly digested by the palpable comforts of self-deceit and local expediency, until only atomized personal “feelings” remain—radical subjectivity, “personal truth”, identity politics, etc.

This becomes even more clear once it becomes typical to see principles purely positivistically—that is, as conventional propositions only, anchored neither through transcendental verities nor by virtue of being integral parts of a kind of living whole. In the positivist perspective, all propositions must seem arbitrary, hovering in space, needing no further explanation; like a row of switches, they appear unproblematically separable, mere functional inputs that invite us to customize—to toggle them “on” or “off” as “self-expression”, explanatory paradigm, or even personal taste dictates. Retaining the forms of truth and value, while underneath taking its orders from nihilism, the positivistic opens its doorway onto the postmodern.

* * *

Self-negation is as powerful a force as self-inflation, but more irreversible, more likely to maim; like a diamond-blade that numbs and cauterizes on contact, it can take from us without our even noticing what was lost, and this is the danger. Just as free will comes into being precisely in our positing it—and dies in our rejecting it, leaving ourselves invisibly diminished and poorer—so nobility, greatness, fineness, depth and beauty exist well and truly, but only so long as we acknowledge and honor them, without playing at extracting and isolating their “causes”. But once we declare them figments merely because this search for their causes has failed—or worse, because we believe we have found a cause, one that turns out to be thoroughly instrumental or mechanical—we are wont to banish them… whereupon they might not be got back again, even should we retain enough consciousness to recognize the void left by them and to plead for their return. The “cause” here therefore functions, deviously, as the very opposite of a cause—it is not the origin of the quality at all, but the thing that dispels, destroys it. (Not all that disappears upon dissection is illusory.)

* * *

The real wonder, then, is that the much-derided “bourgeois” mores, and also the conventions of mathematics and casual reasoning that form the backbone of daily life in most of the West, have taken this long to completely deliquesce: the onset of nihilism, after all, had become widely acknowledged by the late 1800s. Yet these mores seem to have continued well into the modern era as a sort of autonomic pattern, a muscle-memory born of sheer habit and ironclad utility. It was for this autonomic, spiritless quality, and this weed-like tenacity, that they were so long and so ostentatiously detested by the intellectual classes—but whoever could suspect (despite the Nietzschean blasts against the solidity of truth) that they would eventually seem to us almost colorfully antiquated, as willfully archaic as the Latin Mass?

* * *

“…spiritual power is in no way based on numbers, whose law is that of matter.” —René Guénon

Mathematics is frequently put forward as one of the few remaining sources of indestructible truth, a bulwark against the arbitrariness, chaos, schism, and contradiction that afflict us. Yet we cannot help but question where the mathematical arises and, if it is truly “indestructible”, how it stands in relation to the problems that plague us, such as Simplification.

It is easy, for instance, to imagine that mathematics is born of our encounter with separate objects in our experience that strike us as similar—of repetition, essentially, from which comes counting. For this repetition to be possible, we extend this vague impression of similarity into the world of abstraction by inventing, analogously, standards defined to be identical and countable—units, or standard identicals. By the imaginative leap of superimposing these standard identicals upon objects in our experience, followed by application of counting to these identicals, we arrive at the process of measurement. Thus, encountering a stone in our path, we imagine a “cubic centimeter”—a standard identical of “volume”—and then imagine filling the stone with these so no room is left, then count the units.

Prior to this process, even the impression of magnitude—of this stone being strikingly bigger or smaller than another—is, like the impression of similarity, not inherently mathematical. But after the process, we find that the character of magnitude itself has subtly but drastically changed; we have made it subject to quantification, placed it at quantification’s disposal. By this trick not only is mathematics set on its feet, but more and more of our experience is then locked in subjection to it—a phenomenon often given the name “progress”. The units, actually re-digestions of experience, become reified as essences—the essences then go forth to control, and cover up, the world as it was originally experienced.

Math, in such a view, does not come from the Platonic realm; rather, if anything (assuming math is part of the Platonic), the Platonic realm is configured by a mathematical demand. (The Platonic “essence” of the identical unit is posited after-the-fact, in order to escape the problem of how two such identicals can be absolutely the same, yet at the same time distinct—the problem of “the equal” which Socrates saw clearly in the Phaedo. The answer becomes that the different instances of the unit all partake in the same “Unity”. This is question-begging in the highest spiritual garb.)

In this view, multiplicity becomes distinct from unity, and hence mathematics comes into being, only through the prior demand for these “standard identicals”. Without this demand, there would be no conception of “things to count” in the first place. But this is really no different from the demand for interchangeability: for the treatment of objects as identical without respect to their subtle (or ill-understood) differences.

Note also that counting itself is absolutely progressive—once the interchangeability of things-to-be-counted is posited, a single sequence of numbers is sufficient to count any collection of any type of thing. The numerical embodies values not only in the sense of magnitude or the “unique properties” of a certain number, but in the determination to count in the first place, and in the motivation to progress in an absolutely determinate way.

It is not then the world that begins in the methodical unfolding of Number, nor even the “Platonic” world of Truth, but merely Simplification, standing both in front of and as the world.

* * *

Money is another kind of standard identical—in economic parlance, the “universal equivalent”. The raw magnitude that it mathematizes is, roughly speaking, desirability and so also value itself.

Note that money belongs, as it must, to the world of values (both numerical and moral), and hence cannot be immune to the depredations of nihilism and relativism—even if it is considered as the very basest form of value/morality, the most mindless or “materialistic”, still money’s operations must rely on certain principles and standards, moreover on relationships and attitudes, whose erosion in the stream of solipsistic emotion must in time rob it of its articulations. Imperceptibly at first, money falls victim to the very “base”, “materialist” urges it is meant to embody, channel and facilitate (financialization, bubbles, snowballing debt, astronomical inequality result).

In itself, the erratic, chaotic nature of stock markets and of economic figures in general—their constant jostling over immense ranges in short periods, their relentless pursuit of bubbles, panics and other deceptions—gives the lie to the idea of rational agents: if markets were indeed “rational”, i.e. responded to purely objective “truths” accessible and plain to all these marvelous minds, they would instead show a tendency for all their participants to soon converge, as one might hope to “converge on the truth” through some kind of physical investigation. Instead, markets can only be seen as aggregated irrationality—that is, as dynamical, superficially mathematized speculation in different value-systems (albeit confined to ones of a roughly capitalistic flavor).

* * *

Reading John Gray’s latest book, “Seven Types of Atheism”, brings one to the sense that there must be an inseparable millenarian aspect to progressive beliefs: for, even having attained “perfect” social and economic justice, where then does one go? Number continues endlessly, and so, we assume, does time; yet the vision of an “end time” is common to both religious and secular faiths. Then, there must be some means by which Progress is discarded as aggressively as it was once taken up, and some timeless perfection stills the engines of time.

There is no real consideration of the absurdity and difficulty of maintaining such a condition, in suspenso, in the real world, for the rest of eternity–of preventing any movement in any other direction, ever–in short of maintaining a supremely anti-progressive regime. Rather the implication must be that this is an end-point, whereupon the progressives will have simply “won” and the whole game ends. Presumably then the progressive Faithful will be “raptured” off into a transcendental state of history-less perfection… in this they are more apocalyptic than the average mainline Christian… indeed they are more Christian too in some sense, as Gray points out. (One incidentally cannot help marveling at the coincidence that these people are called “millennials”.)

Bloom was aware of this tension, noting that

“Engels had a divination of what is needed when he said that the classless society would last, if not forever, a very long time. This reminds us of Dottore Dulcamara in The Elixir of Love, who says that he is known throughout the whole universe—and elsewhere. All one has to do is forget about eternity or blur the distinction between it and temporality; then the most intractable of man’s problems will have been resolved.” (COTAM, 230)

But one also sees this blurring in more “establishment” progressive writers like Pinker and Rosling: extolling advances against hunger and poverty and many diseases, they fail to note that these have been accompanied by exponential increases in resource consumption and pollution—the “Great Acceleration”. They cannot imagine that the very “progress” they write about very plausibly makes a worse crisis more likely down the road; for them, world-time can only go in one direction, so that nothing, once gained, can ever be lost—or at least, they mutter into their sleeves, not for “a very long time”. How can this strike us as anything but an unforgivable shortsightedness, if not duplicity?

* * *

The journalist and social critic Chris Hedges recently noted,

‘In “The Postmodern Condition” the philosopher Jean-François Lyotard painted a picture of the future neoliberal order as one in which “the temporary contract” supplants “permanent institutions in the professional, emotional, sexual, cultural, family and international domains, as well as in political affairs.” This temporal relationship to people, things, institutions and the natural world ensures collective self-annihilation.’

Hedges generally is known as a strong, if not radical leftist/progressive/pacifist. But doesn’t this kind of wistful talk about the loss of “permanent institutions” and the “annihilation” this would bring sound almost like an inchoate traditionalism? Here is another glaring signpost of our confusions: even the fiercest progressives have begun to sneak wistful gazes at the distant past. (This tends to take the form of a pseudo-scientific exaltation of primitive tribal societies, which they would see as lower-tech kibbutzes, or perhaps as “hippie communes done right”.)

* * *

It may be only a platitude born of modern complacency that tells us gods are impossibly distant and uninvolved. Indeed, for the person or nation that reaches out imploringly enough—in extremis, let us say, or from the chaos of nihilism and self-contradiction—some god or other will almost surely turn up close at hand—for good or ill, depending on the god. Yet the modern man’s flatlander vision—which sees only the Simplified “material” (whatever that is), and views life largely as a switchboard of customizable, countable propositions operating within the isolated, information-laden “self”—makes him unable to appreciate these dangers. The demands of his top-heavy, painfully individualized consciousness drag on him like shackles that he would in a heartbeat do away with; yet these very shackles are none other than the restraining (often “bourgeois”) habits left by his departed former truths. Thus when the god does come, his intoxication is more massive, more compulsive, more thorough than that of old.

We would do well to hope, then, that whatever god presents itself to us in the times ahead has an indulgent nature where we are concerned—very indulgent indeed.

Progression and Simplification

For all the high praise and fawning devotion the founding philosophy of the United States of America customarily receives, and for all the attempts that have been made (albeit from oft-sycophantic motives) to imitate it, I find the more I look at it the more I notice something coldly impersonal, mechanistic, nondescript in it—in short, I find it captivated by a typically Enlightenment-style emphasis on universalized/standardized/abstracted form and procedure, completely at the expense of the needs of place, belonging, spirit, and above all heart.

(This faceless, rote mood incidentally even matches the name of the country, “USA”, which like so many other modern contrivances offers not a distinctive proper place-noun—”American” is after all claimed by all denizens of the hemisphere—but instead something almost willfully placeless, a chain of terminology mashed into an unpronounceable acronym.)

While the idea of a short list of rights such as we see in the first ten Amendments is indeed a useful minimal insurance policy for thinkers and nonconformists (that is, the real individuals) against the eventuality of degenerate and incompetent rulership, or mob justice, even in this capacity it is best understood only as a buying of time. Ultimately, the USA’s—and liberal democracy’s—founding Weltanschauung is all but tailored to favor a deepening descent into technical regimentation and bureaucratic, anomic facelessness, alongside which the advent of absolute majoritarian tyranny will eventually seem vanilla, an anticlimactic fait accompli. At that time, assuming anyone bothers to look back, they will wonder how these two things—bureaucracy and majoritarianism—could ever have been thought incompatible. (The situation may even grow dire enough that, if any honest human passion remains to be found in the land, people may begin to pine for the good old days of the Articles of Confederation.)

* * *

At its best, “Enlightenment” means nothing more than this: it is the tool whereby one may consider radically different worlds and values, without however being consumed by them. It thereby serves as a proper and necessary foil to the relativism unleashed by Nietzsche, not by refuting it, but by coordinating it, by maintaining its suspension and tension—to “stand outside” a system, at least for a time, and see and credit other ways of thought.

Almost any expansion of the meaning of “Enlightenment” beyond this sense (and let us freely admit, pretense) of calm, detached evaluation (not excluding, however, the possibility of eventual judgment), constitutes a usurpation—it leads to rampaging reductionism, to overweening mechanism, to secular fanaticism—to what J. R. Saul calls the Dictatorship of Reason.

* * *

Diversity is prized in our age—we hear this continually. Yet the closer we look the more we find this diversity’s aim is not actually to celebrate difference, but to underscore and enforce a fundamental sameness (interchangeability) among everyone. This is the sameness of atomized conformity.

Put another way, our culture relishes diversity in the same way a foundry “relishes” a fresh shipment of pig iron: as raw material to be cast into the same few standardized, mechanical shapes. (Here, indeed, is the real sense behind the storied American “melting pot”!) That we have so easily ceded ground to identity politics and abandoned any notion of “common culture” only shows that our commitment to culture of any kind has long since become vestigial, rededicated in almost all respects to the demands of technocracy.

Of culture and morality, essentially nothing remains, except some bare-bones imperatives designed to be easily graspable even for the most mediocre of mind:

1) avoidance of all physical, and increasingly also emotional, harm;

2) avoidance of conflict, including disagreement over any substantive issue;

3) reflexive, absolute equalism (an aspirational commitment to “equality” of both opportunity and outcome, subject to neither reason nor appeal nor circumstance);

4) routinized hedonism, through consumption, self-numbing and distraction;

5) an instinctual, exploitative self-interestedness, oftentimes misconstrued as “rationality”.

Even these are scarcely worthy of being seen as moral principles, although when challenged they can certainly evoke prodigies of moral posturing in many individuals; at root they are less normative than instrumental, indeed mechanical in conception, essentially intended as the “programming” of the smaller automata so as best to ensure optimal functioning of the wider machinery of government and market.

Indeed, the “Age of Reason” could just as well be called the “Age of Mass Simplification”. Complexity has risen spectacularly, it is true, in the means of purely technical production and organization, but there only; in all other areas of life, including morality, philosophy and imaginative power, a relentless smoothening, narrowing and coarsening of human vistas has been unmistakable since at least the early-mid 1800s.

* * *

Progressivism and Enlightenment have always had special appeal to, and derived most of their political momentum from, the young—but not because the young are more open-minded, as is now admiringly said, but more because the young tend to be arrogant, ignorant, impetuous, and usually resentful when told by elders that they do not understand something or are not yet ready to undertake it. Far more than their elders, they are eager to accept fast and easy simplifications in place of those things that are difficult, or arcane, or subtle, in order that they may match the power of these elders and throw them off. The enterprising progressive or philosophe, in his own gambit for power over society, capitalizes upon this perennial resentment of the young towards the old, finding the most receptive ears again and again among those who know the least about their own civilization (and for that matter, about their own natures). These young then grow up—pleased with themselves for having cast off as ancient bunk the understanding of their forebears (though without having ever really grasped it for themselves); feeling a debt of sorts to progressivism as the supplier of that power by which they came into their own; and with a softened heart towards such rejectionism in their own children. (From this last there develops in progressive movements and states a tendency to lionize the young at the expense of the more experienced: “don’t trust anyone over thirty”.)

The overall result is a generation-by-generation simplification that continually picks up speed, stripping away information, intuition, detail, differentiation, even culture itself, until only the most coarse-grained, overt and mediocre aspects of human experience—the areas of technique, and the bare individual—stand forth alone, in commanding relief. In this regard societies where the progressive urge has taken hold can be said not to age in the normal sense, but to age backwards—till a point of complete infancy (and entropy) is attained.

* * *

The democratic citizen says to his leaders, “we hire you to rule us”—but of course, such an arrangement is not really rule at all, but a kind of circular codependency. It is even less truly rulership than the capitalist employer’s miserly and mostly cowering “rule” over his employees. The techno-democratic ideal, inflamed with equalism and constantly craving simplification, naturally resents and attacks any concept of a ruling class, and chases it forthwith from its sight. The result is that, instead of power, superiority, or destiny, ruling becomes anchored in the possession of knowledge—for the most part, knowledge of a deliberately arcane and unproductive kind. The natural democratic (or socialist) ruler is thus not a demagogue but an indecipherable and obscure bureaucrat, because this is the only form in which ruling authority can still be hidden from the masses. And: the more indecipherable the bureaucrat, the more indispensable (s)he becomes!

This is what J. R. Saul is referring to when he talks about the secrecy and private languages of the technocratic class; what he steadfastly cannot admit, though, is that this development is a product of democracy, even if it later rears up as an obstruction to the popular understanding and will.

Thoughts on the Wired article on Karl Friston

There’s a lengthy article at Wired about professor Karl Friston, one of the more recent superstars in the popular-science pantheon. Notably, the article is far more interesting for its psychological or philosophical aspects—for its striking window on the motivations behind Friston’s general worldview, and that of a great many scientists working today—than it is for evaluating the actual meaning, utility, or novelty of Friston’s theories. (We may flatter ourselves to imagine that Friston, himself a psychiatrist by training, would not begrudge our focusing on inner motives.)

Perhaps the most striking tendency in Professor Friston, far more pronounced even than in most other scientific reductionists, is his predisposition to a kind of overpowering univocality: everything that exists must be the absolutely deterministic unfolding of a simple, completely unambiguous code, which cannot be seen as in any way provisional or open to growth or disagreement. While Professor Friston himself is undoubtedly a charming and brilliant individual, this urge to univocality, at times, attains to such intensity and such unreflectiveness that the effect seems even monstrous.

We are introduced, for instance, to Friston’s “obsession, dating back to childhood, with finding ways to integrate, unify, and make simple the apparent noise of the world.” As a tangible example, we are given a recounting of one of Friston’s most cherished moments—his childhood conclusion that wood-lice on a suddenly-upturned log do not move faster in order to seek the shade, but simply run faster when they feel the sun. Friston deems this

“…his first scientific insight, a moment when ‘all these contrived, anthropomorphized explanations of purpose and survival and the like all seemed to just peel away,’ he says. ‘And the thing you were observing just was. In the sense that it could be no other way.'”

Yet there is something odd here, a sort of ghost at the feast. For although we are meant to see this as an object-lesson in young genius triumphant, the article actually quietly mentions that Friston’s conclusion is, in fact, still unproven. It may well be untrue—yet this simply does not matter, because the explanation was embraced first and foremost to satisfy a pre-existing demand for absolute simplicity and absolute determinacy. “It could be no other way”—that, one senses, is the true motive from which all the rest flows; it derives not from the actual world, nor even from data about that world, but from an emotional kernel—a psychological preference of Friston’s, not a deduction. Here is another example:

“When Friston was in his mid-teens, he had another wood-lice moment. He had just come up to his bedroom from watching TV and noticed the cherry trees in bloom outside the window. He suddenly became possessed by a thought that has never let go of him since. ‘There must be a way of understanding everything by starting from nothing,’ he thought. ‘If I’m only allowed to start off with one point in the entire universe, can I derive everything else I need from that?'”

In a way, there is nothing new at all here. We have the solipsistic dream, quite common in physics and science generally, of “deriving” everything about life and reality from a single principle (or in Friston’s case, from nothing at all)—what Nietzsche pinpointed as the Socratic urge to “correct existence”.

Yet there is something especially chilly in this moment, in the way that Friston, possessed by his univocality daemon, completely disregards the cherry trees for themselves and simply subordinates them—along with the whole universe—under a matrix of assumed, abstract formalisms, to be created by himself alone. It is at this point that one feels one is in the presence not just of a need to simplify (or perhaps oversimplify) reality, nor yet to “correct” it, but of a kind of all-consuming demand that seeks to crush reality down to whatever level of simplicity will allow it to be controlled or contained.

This is where we sense the monstrous element in Friston’s psychology—the realm that William Blake called “Single vision, and Newton’s sleep”, and also the realm of the totalitarian, for whom there simply must be a framework, a simplification, that eliminates all things that are ambiguous, changing, that cannot be formulated or controlled.

* * *

In light of all this, it gives one special pause to consider that Friston first made his mark in the refinement of brain imaging—a suite of techniques that, it now turns out, have unleashed a deluge of underpowered, irreproducible, or simply misleading but highly fashionable “findings” and theories-du-jour about the brain, which are often treated as if practically dispositive. It is as if we here see, in actual research practice, the proliferation (and fruits) of that compulsion so exemplified in Friston: make the theory, then jam everything else in the world into it. Indeed, nothing better sums up this mentality than Friston’s own words:

‘“We sample the world […] to ensure our predictions become a self-fulfilling prophecy.”’

Self-fulfilling prophecy, as the basis for a new science of mind! Again it seems undeniable that an inward, psychological drive or need has been projected onto the outside world, something like: “since self-fulfilling prophecies are all we produce or care to see, they must form the entire world of thinking in general”. We also here see the sway of unreflectiveness—hardly Friston’s alone but pronounced in every quarter—in that what is set up as “genius” increasingly is codified as the exclusion of contradicting possibilities from consciousness. We shoehorn Nature into the theories that give us the most thrill or prestige, and jam our fists in our ears to keep out the rest. To which one can only say: if such a mindset really is the great new hope of neuroscience, then neuroscience is yet due for a great deal more meandering and mishap—however fashionable.

(Incidentally, one sees similar tendencies even farther advanced in physics, specifically with string theory, where theorists now rather blatantly choose tribal loyalty, and loyalty to a project of univocal “unification”, even in the face of empirical disconfirmation.)

This is the territory of dueling university press-releases, of the thrilling, jargon-y sound-bites of science journalism and popular physics bestsellers, of that nihilistic awe where, without knowing what we are being asked to believe in, we are nonetheless enjoined to marvel in it. It is the territory that John Horgan, in his ever more prophetic-seeming 1996 work The End of Science, called ironic science: scientific-sounding theorizing that furnishes a sense of mystery, beauty, and grandeur, while lacking in testability, sublimity, or often even minimal comprehensibility. We have uneventfully slipped into the era of science according to the Three Wise Monkeys.

* * *

Friston’s rise to wider scientific stardom over recent years stems, however, not from his contributions to brain-imaging methods, but from his ostensible magnum opus, the Free-Energy Principle, which proposes to reduce all life and cognition to a minimization of free-energy—essentially analogous to “surprise”, or entropy. Again, much like string theory, in every quarter where it is discussed, the Free-Energy Principle is not so much noted for its difficulty and abstruseness as renowned for them. For example, the article in question recounts, with a kind of admiration, how whole workshops of high-end physicists and engineers have failed to come to grips with the idea.

Faced with such accounts, which all seem to take great pains to establish the gorgeously incomprehensible profundity of Friston’s Principle, a mischievous thought occurs. Might the reality be the exact opposite—that the Free-Energy Principle is actually too simple, so that the “thought-leaders” and such who extol it and so valiantly pursue it must convince themselves of its awe-inspiring difficulty and depth in order to get the required narcissistic reward from pursuing it? After all, no one in a cognitive elite worthy of the name could truly pride themselves on understanding something that was merely simple or intuitive. And yet, on the face of it, the idea of “minimizing quantity X, in a system separated by boundary B, using gradient descent of an information measure Y” seems extraordinarily unoriginal; it is, rather, a trope, emblematic of that computational flavor of reductionism that is so favored in today’s most widely-disseminated “explanations” of the nature of reality—the “holographic universe”, the “universe-as-simulation”, and so forth.

There is a risk in critiquing, even in broad outline, a theory one does not understand in every detail (though in this incomplete understanding, it appears I am joined by nearly everyone in the world, possibly including Friston himself). At any rate, given the many unsettlingly totalizing and circular tendencies implied in so many of Friston’s remarks, motives, and experiences, and the strange celebration of abstruseness and evasion of simple testability that beshroud his Free-Energy Principle (which surely make it a prime candidate for “ironic science”), it may be no great surprise that equally disturbing questions come to mind when we contemplate the Free-Energy Principle’s implications.

For instance: does not the idea of explaining life as seeking an ultimate minimum of anything, free energy or otherwise, imply also a tendency towards eventual convergence and stoppage at that minimum? What happens if, somehow, that goal is achieved?

If the quantity being minimized is surprise, in particular, then the Free-Energy Principle suddenly stands forth as an uber-totalizing kind of intellectual heat-death, as all minds eventually coalesce into a trap of their own perpetually self-fulfilling expectations. But a war against Surprise is a war against wonder, against renewal—both things we all know living things, at their healthiest, actively seek. In other words: Friston’s Principle is the exact embodiment of the viewpoints indicated above in his own remarks—of the overpowering urge to simplify down to a simple, absolute, final state of belief, beginning from a single and invariable point of view, after which all further thought and experience becomes unnecessary. It is hardly neutral or objective at all.

If this is so, we should beware that the Free Energy Principle may be far less a theory of life, or thought, than its exact opposite—a theory of how to make these things dead.

Fascist Intimations–in the Deep Mainstream

At a party over the holidays, I was treated to a round of a new parlor-game that’s sweeping the nation, winning awards for thoughtful game design, bringing innocent delight to households great and small: “Secret Hitler”!

As the name suggests, one player is designated secretly to be “Hitler”, and three others secretly to be “fascists”. The remaining “liberals” must try to figure out the latter’s identities before it’s too late. As the gameplay unfolds, paranoia abounds and accusations fly ever faster, but calm deduction will avail you precious little.

I took away four key impressions from this experience:

1) Americans, more than ever, can be relied on to mindlessly “gameify” (or inane-ify) simply anything. This process is almost a reflex at this point, providing almost our sole foil to that other, much-loved, but more strenuous coping-mechanism, the Moral Outrage Sweepstakes.

Interestingly, just as Moral Outrage in our day seems to subsist on incidents of an increasingly minuscule sort—the occasional wrong pronoun, say, or culturally-appropriative sombreros at Halloween, or even someone failing to completely agree on a crucial fine point of your post-Marxist critical narrative theory of heteronormative subtexts in bonobo enclaves—so, inversely, this trivializing impulse of gameification seems increasingly to rejoice in refashioning into banal amusements topics that really, really should be kept serious (such as, well, Hitler).

Incidentally, this pattern—this loss of solemnity and proportion over every scale of life—is a further sign of the now-overwhelming predominance of the Last-Men, those glib “discoverers of happiness” out of Zarathustra, who “make everything small”. For the Last-Man, real seriousness is foreign and unbearable (carrying, as it does, the potential confronting of tragedy); whereas fake outrage and light amusement are both always-welcome salves to his gnawing inner emptiness.

2) The USA’s morbid fascination with fascism—a political system that to all outward appearances has been buried for nearly 80 years—has now extended into the pastimes of those very bien-pensant Last-Men who now, in the age of Trump, so excitedly style themselves as guardians (albeit mostly gameified ones) against fascism. (Also, yes, that this post is itself discussing fascism does play into my point–if you noticed that, give yourself a star!)

As living memory of it has dwindled, World War II has gotten less and less real and more and more fantastical, more virtual. It now serves almost as modern-day Americans’ creation myth, a grand adventure-epic in which, out of the ashes of Europe’s final, comic-book-like Götterdämmerung, our nation swept in just in time to save the West, establish the Free World, and glean unending superpowerdom and moral supremacy in the process; it is often the farthest back in history we bother knowing about, even in popularized form, because the universe did not exist before then, at least as far as we care.

Meanwhile, in keeping with the nigh-cosmogonic importance of the War he launched, Hitler seems to have stealthily risen in the everyday American (and Western) imagination to a new office as our de facto god of the underworld, which, in an essentially post-moral society that prides itself on having done away with the silliness of most moral absolutes, has made him a quasi-Mephistophelian locus of boundlessly titillating horror and hypnotic fascinations. The mass-popular re-imaginings just keep coming, and seem to plumb ever-new depths of flippancy: from “Hunting Hitler”, “Iron Sky”, and “Look Who’s Back”, to the endless “Downfall” parodies that form virtually a whole separate genre on YouTube. In the publishing business, meanwhile, there are few more sure-fire recipes for a best-seller than accounts of the Nazi era and its leader; in the marketplace, as well as popular imagination, the Hitler vortex deepens apace.

This complex fascination, which seems only to grow the further we get from the days of actually-existing fascism, tells of something deeper and possibly more dire going on, though it has been going on for a long time, and goes well beyond pop-culture. In The Closing of the American Mind—now over 30 years ago—Allan Bloom in fact warned of certain striking parallels between the state of the US intelligentsia and that of the Weimar Republic.

In particular, he noted, the USA’s intelligentsia and subsequently popular culture had imbibed, with a stunning gusto and equally stunning obliviousness, the same fateful brew of German philosophy as had captivated the doomed Republic in the 1920s: Weber, Freud, Nietzsche and Heidegger, principally. In America, this philosophical invasion was disguised by the national instinct to cheerfully trivialize everything serious, to ignore deeper currents—to gameify, in short. As Bloom nicely puts it,

“…the new American life-style has become a Disneyland version of the Weimar Republic for the whole family.” (147)

Surely the concept of “Secret Hitler”, though a small addition to the heap, fits with this overall mood most uncannily.

The irony, naturally, is that the very fascination with fascism helps pattern an actual resurgence: it is a prefiguration, ideation, suggestive of a mental pregnancy, or a subconscious planning-in-advance. One good economic shove (or a well-coaxed national security threat), one senses, combined with the right sort of demagogue, is all it could take.

3) Another interesting detail: on the cards used to play “Secret Hitler”, the fascists are depicted never as people, but as ugly, reptile-like creatures. Now—depicting members of a disliked political group purely as inhuman monster-caricatures: is this not itself a rather fascist habit? And inversely, perhaps a tad complacent as well?

4) The game was not very fun to play, since there is no way to guess what is really going on—one simply yells, and points, and makes shot-in-the-dark accusations. But I suppose this, too, is a match for our time and place.

* * *

The intellectual and foreign-policy elite, meanwhile, of “adults-in-the-room” fame, seems to offer anything but a clear-sighted bastion amid the general fascist-fascination. For example, notice the careful terminological tip-toeing in this recent, much-discussed essay: (https://www.cato-unbound.org/2018/12/10/stephen-davies/great-realignment-understanding-politics-today)

Stunningly, the author, Mr. Davies, discerns a “realignment” underway in Western politics, where one newly emergent pole of the political spectrum will be what he awkwardly terms “national collectivists”. What does this term remind us of?

Of course, the more natural and well-worn term to use here would actually be not “collectivists”, but “socialists”. For obvious historical reasons, however, to use that term would be too impossibly fraught, especially for the august annals of the Cato Institute—even though the resulting compound, “national socialists”, indeed sheds much more light on the true structure and urgency of our situation.

But were this term “national-socialist” to be substituted anyway in Mr. Davies’ argument—as a thought-experiment, let us say, by some utterly tasteless individual who knows nothing of the finer mores of discourse—it would begin to usefully expose the frightening naïvety inherent in several of Mr. Davies’ positions: firstly, in his assertion that the “realignment” now underway is indeed “normal”; and secondly, in his thinking this realignment will duly lead to some new “stable equilibrium” between whatever two major blocs eventually coalesce out of it.

The absurdity of the latter expectation—stability—becomes even more plain when we consider what Mr. Davies posits as the likeliest main opposition to these ravening national collecto-socialists: none other than “radical leftists”!

So we are then supposed to have, in effect, national socialism versus radical leftism as the newly-dominant axis of political ideology in the Western world—and stability is to follow from this! Through what illimitable genius of self-delusion can one seriously imagine such a configuration as either “normal” or “stable”? Can anyone be bothered to recall how this arrangement played out the last few times it was tried? (For a hint, think back to Bloom.) “Sanguine” hardly begins to sum up Mr. Davies’ attitude here; “somnambulistic” may be nearer the truth.

* * *

This kind of severe misperception likely stems from one of the gravest blindnesses of received political wisdom in the postwar period, one that, like the gameifying tendency, has only deepened with time—namely, that Germany’s national socialist episode was purely a mad fluke, essentially limited in spread by certain repressed, insular, grim-minded peculiarities unique to the German psyche–and also, implicitly, that Hitler was purely a moronic, one-time madman whose like we need not really worry about encountering again (except, that is, when mining the Internet for comedic gold).

In fact, as Davies has just unintentionally demonstrated, the basic complexation of nationalism with socialism (or collectivism) is in itself not some moon-shot Teutonic lunacy, but an extremely general political possibility, translatable to a wide variety of societies given certain combinations of popular mood and stress. This generality comes from the way in which nationalism and socialism can represent, concisely, the two main sides of tribalism, that red-in-tooth vade mecum of virtually all geopolitical organization up to the present day. We may call these two aspects of tribalism the “outward-looking”, and the “inward-looking”, respectively. In sum:

Nationalism represents outward-looking tribalism: “we define ourselves as a single empowered entity, as a People, the Nation-tribe, in distinction from, even opposition to, all other nation-tribes”;

Socialism represents inward-looking tribalism: “we take very good care of each other within the tribe, because each tribe member is a precious part of the Nation”.

Put together, these two amount to possibly the most brazen, direct, bread-and-circuses, red-meat rabble-rousing political strategy in the book, and also one of the most seductive: its offerings include group pride and glory; cooperatively assured security; crisply- yet generously-delineated enemies, ready for your hating pleasure; and of course, loads of goodies from the government.

Now, can anyone name at least one country that currently seems to be experiencing (albeit by seemingly separate factions) a profound upsurge in both aspects of tribalism? It so happens that here in the USA we currently have a rather unusually jingoist, proudly nationalist chief of state who is proposing to declare an open-ended "national emergency" over a perceived threat to the tribal organization of the country (in the form of a string of overhyped but not wholly imaginary disasters along the southern border). We also have, simultaneously, an equally unusual formation of Morally Outraged cultural collectivists/socialists, who seem to have an intense interest in enlisting the state to guarantee every aspect of personal well-being, down to the level of policing any language even imagined to be offensive.

What baleful hybrids may yet come of this; what rough beast slouches towards Washington to be born? Remember, the Left and Right are not opposites, so much as two parts of a process; as Bloom recounts, summarizing Nietzsche:

“the Left, socialism, is not the opposite of the special kind of Right that is capitalism, but is its fulfillment.” (143, my emphasis)

So finally here is a warning—to the country; to erudite quietist fools such as Mr. Davies; to the blackshirt-bewitched game designers happy to turn a buck regardless of meaning or message; to the moiling millions of bien-pensants who, from the safety of their spacious suburban foyers, vaguely thrill at imagining “secret fascists” all around them; and to the legions of rheumy-eyed, Weltschmerz-drenched YouTubers who find re-enacted tantrums of Der Führer a delightfully edgy diversion from their 21st-century mal du siècle—to all of these and more I say, if you do not wake up to the real ramifications of what you are normalizing, and fetishizing, and gameifying, we will eventually see far more of “stable realignments” than any of us know what to do with!