The Shame-Joy of the Savant Class (Part II)

No less questionable than the motivations of those searching for genetic contributions to general intelligence and other complex traits—their rhetorical slip-knots, their cocooned assumptions of science’s moral purity, which we have discussed previously—is their level of certainty regarding the results themselves. Charles Murray, for instance, in his interview with Sam Harris, implies, much like Reich, that the codification of indisputable genetic determinants of general intelligence is absolutely inevitable and just around the corner.

Yet if anything, behavioral genetics gives every impression of having been—and continuing to be—a humiliating disappointment, as hundreds of genome-wide association studies (GWAS) have failed to converge on single-gene or even few-gene causes for complex qualities like intelligence and mental illness. The general finding, instead, has been that there are significant contributions from dozens if not hundreds if not thousands of genes, each producing a truly minuscule effect; embarrassingly, however, the lists of such predisposing genes produced by different research groups have often failed to converge or to replicate experimentally. In one of the most notorious pronouncements on the subject, the editors of the journal Behavior Genetics in 2012 called for all but starting from scratch, claiming that:

“…the psychiatric and behavior genetics literature has become confusing and it now seems likely that many of the published findings of the last decade are wrong or misleading and have not contributed to real advances in knowledge” (Behavior Genetics, 2012).

But, unwilling to abandon their optimism (and funding) for genetically formulatizing complex human qualities once and for all, scientists have continued on in the “bigger is better” mindset, piling on larger datasets, larger study sizes, and more computational power, hoping this will finally yield a definitive answer. This has now led to studies of complex traits involving tens or hundreds of thousands of participants; for the nonce it seems the list of “causative” genes for schizophrenia, for example, has steadily grown and now stands at 145 or so, while for intelligence it is at least 500. For the most part, however, the plight of behavioral genetics has not improved greatly since the chastening realizations of 2012, and the results of the search for “smart genes” in particular have been similarly underwhelming. Even very recent articles, such as Reich’s, that portentously describe an imminent age of intellectual classification by genetic testing—cue Brave New World‘s castes of “alphas and epsilons”—admit at the same time that while the heritability of intelligence is certainly substantial, only small fractions of the total variation are explained.

One therefore has to wonder: how and why are people like Murray, Reich, and others reaching such sudden, urgent certainty about the looming mastery of intelligence by genetic research? Where are the stupendous results, the impenetrable and narrowing cordon that will delimit the genetic basis of intelligence with such precision that it would indeed be somehow socially remiss not to act upon it, not to shout it from the rooftops?

At the heart of the new furore seems to be a recent review, “The New Genetics of Intelligence” by Robert Plomin, one of the foremost researchers on the genetics of general intelligence. Plomin himself seemed to underscore rather than diminish the problems of behavioral genetics of intelligence when he wrote in 2016 that:

“Recent studies of hundreds of thousands of individuals have found genes that explain about 5 percent of the differences among people in intelligence”

But in his more recent review, Plomin is expansive, confident that the genetic conquest of complex traits—and hence the explanation of group differences—is almost a fait accompli. He admits that the change of outlook is very, very recent, and that “From the 1990s until 2017, no replicable associations were found”; but, he continues, the ultimate discovery of genetic bases of intelligence could not have been in doubt, since, arguing from twin studies, it was known that “inherited differences in DNA sequence account for about half of the variance in measures of intelligence”.

Half the variance is a considerable amount, but hardly enough to usefully “predict” any individual—and this assumes we have no problems with twin studies, an assumption that, as Plomin admits only halfway through the review, is by no means trivial. But let us pass over that. So the (alleged) watershed is very, very recent indeed, according to Plomin—almost entirely down to the last 6-12 months. Moreover, we learn the advance rests almost entirely upon two developments: first, the undertaking of even larger GWAS studies—whose aggregated results are dubbed genome-wide polygenic scores (GPSs)—involving not tens but hundreds of thousands or even millions of participants, thus increasing the possibility of detecting incredibly minute effects; second, the discovery of a statistically useful proxy for intelligence, namely “educational attainment”.
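To make the mechanics concrete: a genome-wide polygenic score is, at bottom, nothing more than a weighted sum of a person’s trait-associated alleles, the weights being taken from some prior GWAS. The sketch below is purely illustrative—the SNP identifiers, effect sizes, and genotype are invented, not drawn from Plomin’s or anyone else’s data—but it shows how little machinery is actually involved:

```python
# Illustrative sketch of a genome-wide polygenic score (GPS).
# The SNP identifiers, effect sizes, and genotype below are invented;
# a real score sums over hundreds of thousands of GWAS-estimated weights.

effect_sizes = {          # per-allele weights from a (hypothetical) GWAS
    "rs0000001":  0.013,
    "rs0000002": -0.008,
    "rs0000003":  0.021,
}

genotype = {              # one individual's count (0, 1, or 2) of each scored allele
    "rs0000001": 2,
    "rs0000002": 0,
    "rs0000003": 1,
}

# The score is simply the weighted sum of allele counts across all scored variants.
gps = sum(weight * genotype.get(snp, 0) for snp, weight in effect_sizes.items())
print(f"polygenic score: {gps:+.3f}")   # 0.013*2 + (-0.008)*0 + 0.021*1 = +0.047
```

The thing to hold onto is that each individual weight is minuscule; only the accumulation over enormous numbers of variants—and hence the enormous samples needed to estimate them—yields any predictive signal at all.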

On the first point—the use of absolutely enormous sample sizes—the resulting predictive power of these studies seems, frankly, proportionate to the puniness of the individual effects. One GPS study from 2016, with 125,000 participants, was able to account for a mere 4% of the variance in intelligence. Another, with a whopping 280,000 participants, also accounted for 4%. Still another, now in progress with more than a million participants, is incomplete but is thought likely to account for over 10%. (What, one wonders, is the value of science that requires screening a sizable fraction of a whole population to develop even such weak baseline hypotheses?)

On the second point—that of the new proxy—it seems rather remarkable that much of the recent sound and fury about intelligence, race and genetics owes to the less-than-shocking “discovery” that “years of education is highly correlated phenotypically (0.50) and genetically (0.65) with intelligence”.

One problem with this story (besides that intelligence itself, already somewhat fraught as a concept, is not even being directly measured) is that, although Plomin et al. adopt with little circumspection the explanation “the smarter you are, the more school you go in for,” alternative explanations for these correlations suggest themselves easily. For instance, students with a very poor grasp of material may be kept in school longer, or take more time to attain the same level; extremely brilliant ones may leave in frustration or prefer to self-educate; or perhaps they may show a more practical-minded intelligence by declining to take on vast educational debt in order to stay longer.

Another problem is that “years of education” inherently does not lend itself to distinguishing at the very high or low ends. One tends to finish a PhD in a certain standard amount of time, for example, regardless of intelligence; variation among PhD achievers will therefore be ignored by such a rubric. Plomin et al. admit this proxy is “largely bimodal”, indicating mostly just whether an individual completed university or not. Surely this bimodality ensures that much subtler and more important gradations of mental ability are largely washed out by the college/no-college distinction.
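A toy simulation makes the washing-out concrete. The numbers below are invented for illustration only—a latent trait correlated about 0.5 with a continuous years-of-education variable, then collapsed into a college/no-college indicator—and are not taken from the actual studies:

```python
# Toy simulation (illustrative only): how a largely bimodal proxy erases gradations.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

iq = rng.standard_normal(n)                                     # latent "intelligence", standardized
years_edu = 0.5 * iq + np.sqrt(0.75) * rng.standard_normal(n)   # continuous proxy, r ~ 0.5
college = (years_edu > 0.5).astype(float)                       # collapse to completed-university-or-not

print("r(iq, years of education): %.2f" % np.corrcoef(iq, years_edu)[0, 1])
print("r(iq, college indicator):  %.2f" % np.corrcoef(iq, college)[0, 1])

# Within the "college" group the indicator is constant, so it can say nothing further
# about ability -- yet the latent trait there still varies almost as widely as before.
print("SD of iq within the college group: %.2f" % iq[college == 1].std())
```

The binary proxy not only correlates more weakly with the latent trait than the continuous measure does; within either of its two categories it is silent, while the underlying ability goes on varying nearly as widely as in the population at large.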

(It also is worth considering that to equate intelligence in any way with sheer amount of time in the educational system is, inadvertently, to pay an immense and ill-warranted compliment to the educational system as is—one of many implicit homages to the social status quo buried in the work of Plomin and others.)

As for the IQ-affecting genetic variants that result from employing these unprecedentedly gigantic experimental groups—these variants whose discovery causes such sudden delight and excitement and feeds hopeful talk of an inevitable ethno-psychometric partitioning of society—they turn out to be unbelievably weak, each on average explaining about 0.02% of the variance in IQ. Many of the variants are not in genes at all, as had been originally hoped in the early days of GWAS studies, but appear in tiny regions of DNA between genes that apparently have minuscule, indirect regulatory effects.
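A back-of-envelope check—assuming, optimistically, that the individual contributions simply add—shows how the figures quoted above hang together:

\[
\underbrace{500}_{\text{variants}} \times \underbrace{0.0002}_{\text{variance each}} \;\approx\; 0.10,
\]

that is, roughly the “over 10%” of the variance hoped for from the million-participant study: an aggregate built entirely out of effects no one of which does any perceptible work on its own.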

And so let us trace the rickety path of reasoning and wishing behind the new exultation of Plomin and others: we have gone from dismissing doubts about IQ as a true measure of genius (versus mere talent)… to still-questionable twin-study estimates which, even if accepted, say genetics covers only half of the variance in IQ… to the realization that there are no strong single genetic predictors of IQ but rather a basket of minuscule ones… to the use not of actual IQ but of a partly obvious and partly dubious proxy measure of IQ… to the finding that all these still predict far less than half the variance promised by said (partly questionable) twin studies, and thereby… to, veni, vidi, vici, the brave assurance that the links between intelligence and genetics are all but solved!

Given that the scientific uncertainty about the nature of genetics and intelligence seems to be underplayed, and that the sheer size of the undertakings required to garner these “game-changing” results resembles, from a modest distance, not so much triumph as dogged persistence against diminishing returns, we cannot help but wonder whether they are worth the trouble—and what motivations might be driving them on despite such headwinds. Plomin tells us that “GPSs for intelligence will open new avenues for research into the causes and consequences of intelligence”; even more astonishingly, we learn that “heritability is an index of equality of opportunity and meritocracy”. Apparently it is scientifically uncontroversial to equate hereditary wealth with meritocracy, or to suggest that the more prosperity stays confined within family lines, the more just the society is—and, apparently, no other serious alternative explanations are imaginable.

Finally, heritability of intelligence as assessed by the twin studies Plomin references seems to increase with age. Here we see another example of choosing the preferred rationalization: usually, strong early-life correlation is taken as a sign that an effect is present even before the environment has been able to influence it, hence is more innate—indeed, elite IQ-themed academies such as the Davidson Institute are founded on this premise of assessing young children for “intellectual gifts” at an early age. But Plomin et al. find no difficulty in reversing the argument as it suits them, and seeing the greater correlation later in life instead as an uncovering of an innate tendency, a gradual throwing off of environmentally-imposed restraints: “DNA variants increasingly have an impact on intelligence as individuals select environments corre­lated with their genetic propensities”.

***

Of course, Plomin et al. do supply a dose of the obligatory sober-faced Reichian concern about the need to “acknowledge the risks of discrimination”—yet if intelligence tests ought, as they strongly advocate, to be much more widely deployed as a condition for gaining employment and other societal perquisites, they do not make at all clear what the point of such tests would then be other than to discriminate. Such a clarification becomes all the more critical in light of their comments that seem to lend “scientific” endorsement to the siloing of wealth within family lines.

What, really, can be the motivation here? At most, if we gather even more data—perhaps from tens of millions of individuals this time—we may one day create a panel of genetic variants that will explain fully 50% of the difference between any person’s IQ and the mean. But what possible use is that? Are we planning to begin job screening by genetic test? Even in that case, if intelligence is the target, giving an actual IQ test would surely be more useful.

Of course there is also the prospect of using genetic testing purely for personal edification, say to learn one’s “genetic susceptibility” for high or low intelligence, and many people have already done so through online services like GenePlaza, DNA Land, and 23andMe—of which all but the last have already gone ahead with offering “genetic IQ” tests. But this seems like little more than a frivolity. Without the promise of categorizing and pre-determining people’s potentials and futures, and of legitimating existing inequalities, scant uses for such testing suggest themselves and, perhaps not incidentally, Plomin et al. propose very few. It is perhaps telling that they “cannot resist” the double-entendre between the “GPS” abbreviation and the Global Positioning System—a monolithic arrangement for localizing and tracking individuals or, in their case, creating “profiles of genetic strengths and weaknesses for individuals who could be targets for personalized prediction, prevention and intervention”. (The “intervention” part, to say the least, is unsettling.)

The ossification of socioeconomic reality into a rigid underclass and upper class throughout the world is already a fact in our time, having been growing dramatically for the last forty years; the question at hand is the scientific legitimation of such circumstances by way of the concepts of psychometrics and genetics. What more potent defense of the new hyper-unequal status quo could there be than to say that the existing arrangement of superiority and inferiority is for the most part natural, just, and even optimal? Plomin and his colleagues’ vision seems to be one of human optimization through self-segregation, occasionally reinforced by stringent psychometric testing and, partly in its stead, genetic profiling; Taylorism, it seems, is to be put to work on our thoughts, our cells, our very genes.

Once again, the secret glee in ratifying and even intensifying the divisions of power that already exist in society, the barely-contained longing to expand the prestige such scientists and intellectuals already enjoy due to their IQ—is all too evident to ignore. Through the crack under the laboratory door, the shadow of schadenfreude extends and plays out its now-familiar arabesques.

***

As this picture takes shape before us, one sees more and more of Chomsky’s point that there is not much interesting science here—albeit perhaps an interesting agenda. After all is said and done, Plomin’s talk about understanding new causes of intelligence is mostly vacuous, since the variants being found in the studies he extols are ubiquitous in the genome, contribute almost no predictive power individually, and show no mechanistic connection with neurological development, metabolism or any other specific biological sub-system.

One is tempted to say that these variants are not so much “found” as “dredged”. There is no understanding here, no attempt to explain why any given genetic variant has any effect on IQ whatsoever—a hope which has effectively vanished with the explosion of contributing factors, their minuscule size, and their almost limitlessly tangled interactions—there is just the gathering of more and more data to get better and better predictive statistics which can be used to classify and control people, for underlying reasons that are never made overt. Such an activity has little to distinguish it, scientifically, from many other correlative “big data” investigations that have rapidly become fashionable in science, such as the correlation of SNPs with finger-length ratio or of geography with musical taste. (Chomsky uses intelligence and height as another example of such plausible-yet-nugatory correlations.)

Indeed, this pattern of a Pyrrhic victory that wins little or no understanding after offering up a colossal body of raw information is coming to be startlingly typical across the biological sciences, so much so that it may ironically in itself represent a much more interesting sort of meta-level discovery about the nature of living things. Recent studies at Stanford, using data from large GWAS and other genomic datasets, have run with this idea—and found that not only intelligence, but many if not most complex traits and diseases such as height, schizophrenia and rheumatoid arthritis do not have anything like a tractable number of “core” causes, but instead are influenced by literally hundreds of thousands of genetic variants, most having extremely weak effects. Even more remarkably, these influences are dispersed almost equally across different cellular systems, almost indifferent to the cellular system most affected in a given trait/disease. In other words, complex phenotypes like intelligence may have no cause as such at all, instead being influenced by nearly all parts of the genome at once—a picture the Stanford authors describe as the “omnigenic” model.

It is hard (though not impossible, with enough advanced training) not to notice that such a model, however fascinating in itself, represents a kind of death-knell for the hope of reductionistic, mechanistic “explanation” of disease or of traits like intelligence. It is equally hard not to notice that the same will almost surely be true of the prospect of interventions to change these qualities; it is exceedingly (and increasingly) difficult to drug even one target successfully—but 100,000 at once? And while one could still imagine deriving some scientific insight in trying to discover what brain attributes seem essential to produce higher general intelligence, the way to pursue this likely involves the study, not of scattershot genetic variants that contribute minuscule and untraceable effects, but of brains themselves, through functional imaging studies—though these, too, have immense reproducibility problems of their own.

In an omnigenic world, alas, contradictory results seem to be almost part of the territory. As biologist Robert Weinberg put it, referring to analogous attempts to find key explanatory genes for cancer using big data:

“The gaping distance between these data sets and a true understanding of cancer biology is illustrated by the amusing fact that two distinct expression array analyses of cells in breast cancers have been found to be equally useful in predicting future clinical behavior of these tumors but contain almost no genes and thus proteins in common.” (Weinberg)

Let us not underestimate the scientistic-economic optimism that insists such traits will turn out to be manipulable anyway, surely for the good of society. Where the prospect of controlling large numbers of one’s perceived inferiors is at stake—and data and numbers stand ready to assist—the sense of mission may be almost as irresistible as the assumption that one’s cause, simply for being scientific, must be just.

***

If there is any potentially positive aspect to the whole matter of genetically significant group differences, one might say it is that it constitutes a serious blow against interchangeabilism—the view, increasingly pervasive in progressive and even “individualistic” countries, that all human individuals are fundamentally the same and therefore can and should be treated like fungible tokens in a gigantic social machine, much like individual dollars in a globalized economy or 1’s and 0’s within a digital processor.

In this vision (which has gained traction to a large degree through the wholesale adoption of digital devices, interactions, and metaphors), the individual’s role in society is to be exchanged, transferred, and utilized as the needs of the system dictate: to respond as trained, to stay put until despatched elsewhere, to produce as demanded, to create no discord; not to question, not to reason, not to dream, and certainly not to revolt. The goal becomes, on the one hand, to produce sufficiently standardized individuals as to allow them to be interchangeable, thus increasing the efficiency of production, and on the other, to construct a societal “machine” ingenious enough both to maintain itself against any challenges or anomalies and make optimal use of the tokens (aka people) placed at its disposal. (This formula is essentially equivalent to turning humanity into what Heidegger called “standing-reserve”.)

Interchangeabilism is itself the unifying principle between economic and social liberalism as commonly practiced, as well as in the most modern socially liberal conceptions of justice. One can spot this connection in many places; for instance, in the work of the arch-liberal political philosopher John Rawls, who proposes that a just society must be designed so as to be acceptable in advance by featureless, quintessentially interchangeable “reasonable citizens”, unmistakable kith-and-kin to that equally interchangeable (and increasingly untenable) cipher, the Homo economicus of liberal economics. Yet these two liberal aspects, the social and the economic, are popularly taken to be implacable opposites, sharing the term “liberal”, it is assumed, only through some infelicitous coincidence; and indeed they have come to be associated with very different imagery. The economic “liberal”, for example, evokes corporate-financial functionaries in glass office towers, single-mindedly and often ruthlessly strategizing to achieve maximal returns and the wholesale expansion of technical and material activity on all fronts. The social “liberal”, on the other hand, brings with him an appearance—an aura, perhaps?—of looseness, accommodation, permissiveness, often a rage at injustices that can seem to him as all-pervading as the air. His stated prime goal is not profit, but the loosening of all operative cultural, national, ethnic, religious, and sexual differences, norms and restrictions (which he almost reflexively equates with injustices), with the simultaneous promotion of a kind of carefully de-fanged “diversity” whereby differences are lavished with praise and even made sacrosanct.

Yet for all these seeming differences between the unflinching, tide-like expansion of capital and the yearning to rupture all forms and customs to create a perfectly just “diverse” society, and for all they may malign each other, the two sides commonly work in tandem (not forgetting that they may, as is increasingly common, coexist within the same person). Here is the general scheme: first, the economic liberal creates and disseminates the standardized distractions and luxuries that the cultural liberal craves as a touchstone of his identity, and also—through the great and essentially nihilistic power of the market principle’s focus on profit, increase, and mass-production for their own sake—delivers the first undermining blows against the various institutions of any newly-encountered culture. The cultural liberal, in turn, does not resist but actually furthers the homogenization process begun by economic liberalism (here is his great conceit), refining newly acquired people and institutions by discharging their differences as injustices to be fought and erased: “indiscriminateness is a moral imperative because its opposite is discrimination”, in Allan Bloom’s formulation. The cultural liberal thus strives to warmly and seemingly unconditionally welcome the newcomer into the market-herd, typically while combining this welcome with economic incentives that foster dependency on the market.

In general, the cultural liberal facilitates the destruction of active cultural difference and individual idiosyncrasy, by gently transmuting them into quaint and much-desired museum-pieces or commodities. Excluding those elements of a prospective culture that cannot but be viewed as extreme barbarisms and so must be forbidden for legal or public relations reasons, there are three main paths for any incoming novel element of difference. Firstly, museumization, whereby the elements are shorn of their functioning, living context and reposed in designated places of what one might call “instrumental reverence”, to be preserved, admired superficially, obligatorily and perhaps academically, and thus made part of the standardizing process as educational artifact, instead of a possible active obstacle to the pursuit of interchangeability. Secondly, the differences may themselves simply be fetishized into products—not just museum-pieces then, but exotic objects of desire or distraction, likewise shorn of their original cultural function and therefore safe, ready for commodification and digitization. Thirdly, differences may be converted into political power-tokens through the now-familiar machinery of identity-politics, which too resembles commodification in practice, incorporating as it does various and sundry groups into interchangeable instances of the “oppressed” ready for mobilization by the relevant political elites.

These paths are not always mutually exclusive, but in any case, the cultural liberal, by praising “diversity” while actually dismembering, leveraging, and museumizing it, corrals these rebarbative (or even barbaric) non-interchangeable cultural elements away from any potential confrontation with the mechanisms of capital formation or the overall nihilism of the system; if possible, (s)he will go one better and actually make them marketable. Or, to quote Bloom again: “…in attacking ethnocentrism, what they actually do is to assert unawares the superiority of their scientific understanding and the inferiority of the other cultures” (COTAM, 29).

At this point, the cultural liberal once again hands off to the economic liberal, who puts the newly interchangeable human tokens to use in the ceaseless expansion of the market-principle (most likely as wage-labor), and takes any new commodities devised out of the acquired and digested culture as a perquisite. The requirements of interchangeability are fully met now; both sides have done their work; society emerges larger, richer, busier, in some sense “more diverse”… and yet paradoxically even more homogeneous and anonymous than it was before the acquisition. And so liberalism here has been turned upon itself, producing an increasingly illiberal conformity and a blinkered materialism that seem to bear little resemblance to the original and laudable liberal goal of the free unfolding of individual thoughts, actions, and rights.

How does this arrangement connect to the question of genetic group or racial differences in intelligence, or any other complex trait of personality or behavior? Simply because genetic discoveries of such differences—if they were somehow shown to be totally objective and reliable, totally free of schadenfreude and other invidious contexts—would indicate that there are differences not only on the individual scale but also on group and hence even culture-sustaining scales, of a kind that cannot be simply pounded out of existence by market forces or other tools of homogenization-through-decontextualized-diversity. Were there a mere handful of “master genes” for intelligence or other qualities, then it would have been conceivable, though still very difficult, to “cure” those with lower IQs, or with “difficult” personalities and so on, and eventually conform them to the system as standardized units of labor; instead, the finding that tens or hundreds of thousands of variants are responsible makes such efforts impossible. In however repellent a way, this obstruction would put a kind of hard limit on the drive towards complete interchangeability, the complete homogenization and formulatizing of culture and thought. But again, if this is a sort of victory against interchangeabilism it is an unsatisfying one, because we see in it not a true grasping of individual and group possibility and power—of, for want of a better word, spirit or soul—but a matrix of missteps tainted by greed, by arrogance, by narrow instrumentalism, by biologism, and of course by schadenfreude.

Admittedly, even among those who work with utmost devotion toward the goal of interchangeability there must be, however grudgingly, acknowledgement of difference. Certainly the differences in economic power between different individuals and classes have been enlarging over time the world over, as already attested. Also, certain specializations demand an advanced or deep understanding of complex or obscure topics, or require the development of highly precise skills—or, more bluntly, may demand sheer wealth in order to finance this or that venture. So in these areas, indeed, non-interchangeability—that is, idiosyncrasy and uniqueness—seems to keep a foothold.

But note that this foothold is even narrower than it already seems, and in important ways. First, most obviously, it mostly amounts to a dispensation or indulgence of difference only for special (powerful) individuals; the most important professionals and oligarchs are permitted their eccentricities or their unusual aptitudes, and are accepted as such because their power and indispensability buys them that freedom. This is just a dreary matter of “might makes right”. But also, and more insidiously, even under these special elite conditions, difference insofar as it is acknowledged is not given free rein or appreciated in itself, but is mostly reckoned purely in terms of the statistical distribution of some simple variable(s)—not as a truly unique phenomenon that points toward something deeply non-systematizable about both individuals and groups, but on the contrary, as essentially interchangeability plus an information-free error term that can be compensated for and then dispensed with. Thus, we have simply more Taylorism; to the extent that non-interchangeability is conceded, it is also constrained. Individual differences are reckoned not as an immense gamut of human possibility, an invitation to adventure, but in terms of single metrics such as achievement scores, degrees earned, net worth—and of course IQ, a paragon of such a coldly probabilistic conception of difference, and the very one which Murray, Plomin, Reich and others eagerly foresee underwriting a kind of virtuous inequality in the (no-longer-so-new) “knowledge economy”.

It is at this last point that the abandonment of soul for intelligence as the mark of humanity bites. For even when souls are taken to be equal—as they are before God, say, in most Christian tradition—they are never taken to be identical or interchangeable, but indeed utterly unique and free. The soul—not only of individuals, but of whole cultures and ethnicities—is a going concern, something that cannot be torn away precisely because it cannot be reckoned; a collection of numbers, on the other hand, whatever their statistical distribution, readily admits of ranking, rationalization, utilization, and in the end commodification.

And so we realize that by the time we have reached the IQ-difference-based “genetic meritocracy” idolized by Plomin and others, the problem of interchangeabilism has already worked its way into the cake, so to speak—indeed is as entrenched as ever—because the focus of the difference is on the wrong thing: not on individuality in all its possibility, but on individuality as it appears when compressed and desaturated through the lens of the metric. The result of this fixation on single values—which, despite “big data”’s high-dimensional trappings, shares with it the same thinking and goals, and the same flaws and narrowness—is a thoroughly modern fetish one might call “metrical blindness”.

As noted, since the differences turn out to depend on such a huge and complex set of genetic changes, each individually almost negligible, the likelihood of genetically mass-engineering humans to standardize and eliminate complex-trait differences (so as to make them into interchangeable, standardized units) is remote. But that the complexity of the genetic causes is surely intractable for purposes of active social control is scant comfort. For from the workings of metrical blindness comes the greatest irony of all: through the insistence on the statistical, the dominion of mediocrity is already established in the very form in which differences are first articulated, in the terms and structure of the game, in the very attempt to mediocritize exceptionalness (for to make interchangeable is inevitably to mediocritize). All this has happened through the structure of the approach, not just in the metrical but the statistical; for the statistical takes mediocrity as a starting-point, even before the implicitly desired, genetically-inspired policy-based “clean up work” could even begin to produce a truly difference-less, uniformly excellent kind of human, a fungible unit of workforce-productivity, one perfectly and uncomplainingly suited for mass integration into market technocracy.

“Children are not yet fools,” the psychiatrist R. D. Laing once wrote, “but we shall turn them into imbeciles like ourselves, with high I.Q.’s if possible.” He might have done still better to have said “mediocrities” instead of “imbeciles”, for those dismissed as imbeciles at least have the potential, now and then, of doing surprising and even unpopular things. Looking back, we see this mediocritization acutely in the use of “educational attainment” as a proxy of intelligence so celebrated by Plomin et al. It may be a reasonable first approximation that smarter people tend to enjoy learning more, and so go in for more education, and inversely for less intelligent people. But it is a bizarre error to carry this rough correlation beyond the most hopelessly broad conclusions; the far extremes of intelligence may in fact show trends opposite to those of the middling ranges, and there is much interesting evidence through the years that this is the case.

Meanwhile, the famous Flynn effect—the observation that IQ scores have risen globally by about 3 points per decade since 1951, or on the order of half a standard deviation per generation—suggests the “g” is not really the hard ineffable truth some would want it to be; that how one scores is pliable in some way we don’t understand, that one can get it “from the air”. Looking afield of psychometrics, and taking the risk of viewing anecdotal evidence seriously, there are many indications that people of earlier generations were anything but “less bright” than today’s average, and may in fact have been better educated and/or more widely knowledgeable and imaginative. (Note also Randall Jarrell’s essay “The Schools of Yesteryear” and various interesting if fragmentary reflections on how far higher educational standards obtained in past generations, despite their supposedly pitifully lower IQs.)
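To spell out the arithmetic behind that figure, taking the conventional IQ scaling (a standard deviation of 15 points) and a generation of roughly 30 years:

\[
3\ \tfrac{\text{points}}{\text{decade}} \times 3\ \tfrac{\text{decades}}{\text{generation}} \;\approx\; 9\ \text{points} \;\approx\; \tfrac{9}{15}\,\sigma \;\approx\; 0.6\,\sigma\ \text{per generation.}
\]

Carried over a mere century, such a drift amounts to some two full standard deviations—hardly the behavior of a fixed, innate quantity.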

Again, one suspects that part of the problem with the scheme stems from the meaninglessness of measuring intelligences that are too rare to be statistically encapsulated—not just the very high and the very low, but the off-the-scale altogether—all of which might humble the system and are therefore actually interesting. And so we may begin to suspect that what is evident in the Flynn effect and elsewhere, and with the genetic profiling of individuals, groups and races, is not so much a great brightening as a great narrowing: a mass-cultural homing-in on certain specialized kinds of “formal operational” intelligence, paired with a deepening neglect or even obliviousness to other aspects (with which the formal-operational may be significantly but far from perfectly correlated).

The very fact that intelligence is turning out to be intractably complex and multifactorial in its causes inevitably suggests that general intelligence itself may be extremely multifactorial, and that what we call “g” owes not to a singular force, but to an extremely complex and specific coincidence of events, even if on average a single number is a reasonable predictor of socioeconomic excellence. Plomin et al. are of course sure this cannot be the case, averring for instance that “extremely high intelligence is only quantitatively, not qualitatively, different genetically from the normal distribution”. Since no small subset of genes exists that strongly affects intelligence test results, it follows for them that all forms of intelligence are purely gradations of some underlying, inviolate, quasi-Platonic essence. That this claim can be made in a high-level journal when no statistical test is capable of quantifying truly extraordinary (i.e., statistically un-analyzable) forms of intelligence, alongside data that admittedly “explains” at best 10% of the variance, all while offering zero mechanistic clues (an “atheoretical approach”), can only be seen as a paragon of the rapt dedication to scientism that has become fashionable among our increasingly desperate technocrat-savants.

In the end, by necessarily leaving out the truly extraordinary mentalities—those whose measure cannot be meaningfully taken in a scale designed around what is already graspable and commonplace, even if to a chosen few, or by genetic decomposition into simple causes—what is really being enforced is just the mediocre, disguised as higher- or lower-functioning versions of it, Plomin’s “purely quantitative differences”, which are all that IQ is competent to assess. But it is the exception that is the mystery and the germ of change. As the philosopher Kierkegaard put it,

“Over time, one tires of the interminable chatter about the universal and the universal, which is repeated until it becomes boring and vapid. There are exceptions. If one cannot explain them, then neither can one explain the universal. One generally fails to notice this, because one does not normally grasp the universal passionately, but only superficially. The exception, on the other hand, grasps the universal with intense passion.

When one does this, a new order of precedence emerges, and the poor exception, if it is ever any good, appears again, as the poor step-daughter in the fairy tale, restored to a position of honour.” (Repetition, p. 78)

One could, in earlier situations, similarly hope that, though the universalizing (or mediocritizing) tendency was ascendant, at least “the exception” could, with sufficient passion, win through and even teach the universal a thing or two. But one is no longer so sure with broad-brush, schadenfreude-fueled “big data” approaches, whose essence is to leave aside the understanding of mechanisms or the imagining of new possible explanations and fields, and instead to institute rule by the statistical, by the view of minds as simply another aspect of standing-reserve. Regrettably, that such rule inevitably produces closure of thought and self-fulfilling prophecies aplenty does not seem to have yet crossed the (doubtlessly high-IQ) minds of the mandarins of psychometrics.

***

In closing it might be useful to make a sort of brief, impressionistic effort to put all these developments in a more general historical-political context.

Not long ago, one would not have thought such things imaginable in open public discussion. But as time carries us further and further away from the geopolitical eruption of WWII—out of whose volcanic madness there crystallized so much of the economic, philosophical, and social consensus that we now take as almost tantamount to civilized life—so too the last living and visceral memory of the terrors of scientific racism that were so integral to that conflict has begun to fade.

In the immediate postwar era, a quite legitimate horror at the atrocities committed under the cover of racially-based reasoning during the conflict led to a concerted movement in the opposite direction, to a critique of society based entirely on social construction and on human interchangeability (which also, as it happened, chimed well with certain aspirations to the unhampered, global exchange of labor and capital). This critique included a categorical rejection of any possible notion of innate and heritable differences between sub-populations of H. sapiens as even a coherent concept, except for certain obvious matters of appearance or simple genetic traits. The result, perhaps most famously condensed in the UN’s 1950 declaration on “the race question”, is what author Kenan Malik dubs “UNESCO man”: a standardized vision of human dignity and diversity, rejecting at the same time any claim of non-cultural differences between individuals or groups.

In retrospect, this approach amounted, in large part, to simply punting on the problems of racism, ethnic chauvinism, and revanchism that had led up to WWII. Clearly, after the war, there was a need to dismantle or radically revise these perspectives in order to head off future atrocities and violence. Yet difference, problematic or not, finds its way out, and preoccupies the minds of people and nations; to be different is to be born, to stand independent, to articulate a new vision, to feel a different heartbeat that one knows is not another’s. The urge to differentiate one’s self or one’s group, to proclaim a deep, physical and non-contingent uniqueness, is often the whole work of a lifetime, or of a people; it is as fundamental and implacable an urge among men and women as its opposite, of seeking common ground and union. By declaring the problematic (and oft-monstrous) views of difference that had led into the war to be null and void by moral fiat while replacing them with essentially nothing—by offering no alternative, dignified way of thinking of differences as anything but environmental and cultural fictions or quaint tokens of exoticism, the UN assured that “UNESCO man” would be brittle, requiring a precarious quietism on group differences research of the kind proposed by Chomsky, Horgan and others in order to endure.

At the same time, the notion of interchangeabilism within the “UNESCO man” doctrine gained vast approval, instituting with it the mediocritizing, statistical view of man (which had already taken hold in industry, education, and science) as the inadvertent new descriptor of all human difference. Non-quantifiable but perhaps much more ennobling ideas on the source of true uniqueness and merit, such as the spirit or shared heritage of peoples and groups, or the ineffable essence of an individual mind, were dissolved; only numbers, with IQ prominent among them, retained legitimacy. Interchangeabilism thus paradoxically planted the seed for a resurgence of statistical classification (and stratification) of groups and the birth of a new, perhaps even more dehumanizing interpretation of difference.

And so we now face strange new things—some of which, however, turn out to be revivals of very old phenomena, whose “newness” is only due to the blithe ignorance of history that plagues our increasingly attention-deficit, web-addicted civilization. The inexorable force of forgetting, the loss of vigorous, living, one might even say spiritual memory that our own technological egoism has encouraged—especially of the lessons of the totalitarian 1930s and ’40s—is now combining with the flawed postwar reaction of asserting interchangeability and cultural relativism to open a secret door to the very things we supposedly find anathema. Monsters are becoming thinkable, venturing out tentatively upon legs that were long ago thought broken for good. We are witness to a resurgence of racism, of a narrative of differentiation in a fork-tongued form, with interchangeabilism and its insatiable appetite for mediocrity intact and even in full accord with it.

Such times present grand possibilities for re-imagining the future, in ways that may be either rejuvenating or wretched and cataclysmic. Intelligence, despite its undoubted importance, is likely to turn out to be a test case: the use of non-genetic data to observe, manipulate, and constrain people’s lives, and in particular to create a scientifically legitimated conformity in tranches, is already proceeding at a spectacular rate. Already many or most potential new employees are subject to seemingly arbitrary psychometric testing and drug screens, so that both mind and body can be freely scoured by employers and governments. If “big data” can be used to construct homogenized group classifications around intellectual potential on the basis of genetic information too, then rest assured that such information will soon come to be used in countless other ways as well, some currently unimaginable, others all too much so.


The Shame-Joy of the Savant Class (Part I)

The discussion of race and intelligence, and what modern genetic science may say about their interconnection, has been heating up. Long studied quietly in recondite realms of cognitive psychology, the issue emerged into the US public consciousness in meteoric fashion back in 1994, when Herrnstein and Murray’s The Bell Curve notoriously brought up the black-white IQ gap in the US, did not rule out that there might be a genetic factor involved in the difference, and also offered policy perspectives that struck many as, if nothing else, imperiously social-Darwinian (“…the people we now refer to as the underclass are in that condition through no fault of their own but because of inherent shortcomings about which little can be done”; p. 523). After simmering again out of sight for a while, the subject re-entered public prominence during the late 2000s and early 2010s. In 2007 James Watson, co-discoverer of the DNA double-helix, made remarks to the effect that the perennial economic difficulties of sub-Saharan Africa could largely be due to genetic differences in intelligence. In 2013 there was the matter of Jason Richwine, who argued in his Harvard thesis and in subsequent work at the Heritage Foundation that a) more recent immigrants to the USA tended to have lower intelligence scores, that b) this was likely linked to genetic factors, and that c) therefore lower-skilled immigration ought to be restricted. The resulting uproar was such that Richwine was obliged to resign from his post at Heritage.

Now the tangled, uneasy nexus of genetics and race is being stirred once again, this time by another Harvard researcher, geneticist David Reich, with his NYT op-ed on intelligence and genetics (01). Reich comes bearing a weighty warning: a new age of unprecedentedly precise genetic studies is dawning all around us, studies so indisputable and so powerful that there will be no ignoring their implications, no going back. And these studies, like it or not, are finding clear and significant new genetic differences between the traditional racial groupings—such as Caucasians, Northeast Asians, and sub-Saharan Africans.

The key distinction of the new studies is that they infer genetic effects by pinpointing specific causative DNA sequences, rather than from patterns of inheritance, as studies of identical twins do, for instance. In the new age of hyper-resolution genetic profiling, Reich solemnly warns us, it will no longer be possible to invoke non-genetic factors like upbringing or environment or oppressive social constructions as possible explanations for the differences between groups (which are already acknowledged among professionals). As the knowledge of predictive genetics accumulates for countless traits and propensities, and as cheap genetic testing becomes more pervasive, we will have to accommodate ourselves to the new reality. More and more, the genes will make the man.

Reich’s performance as the semi-rueful bearer of fateful news is finely calibrated, and his piece is nothing if not carefully hedged—for instance, he avoids making any explicit suggestion that group or racial genetic differences affect intelligence, and even goes so far as to discount Watson’s earlier claims. Most of the denunciations of his claims have taken the form of fists-in-ears denial or tortuous evasions and self-contradictions; the London Guardian, for instance, decried the idea that genetic variations could predict problem-solving abilities at all as “a tall claim”, then conceded there is “an undoubted genetic basis to individual difference”, all in a single rhetorical huff.

But for all his caution, and for all the excesses of the opposition, Reich’s message nonetheless is filled with what seem like crocodile tears, a too-rueful-by-half wringing of the hands that, being necessary to avoid real censure (and not just the ignorable Guardian or Vox variety), cannot be taken to carry any useful information as to the author’s inner feelings on the matter. He tells us matter-of-factly that after all, none of the news about population differences should be that surprising to us: “The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work.” One nods: a fair enough point—why should we all be identical after so much time apart, after all? Variety is the spice of life, no? Surely even more touching is Reich’s concern that his and others’ work on the subject could be misused “to provide rhetorical cover for hateful ideas and old racist canards”. Indeed, this is most fortunate, since if actual racists were to seize the terms of the discussion on the racial differences (or as we are supposed to say in dog-whistlese, group differences) that Reich and others have so painstakingly prepared, validated, and now publicized, the consequences could indeed be a disastrous upswing of racist beliefs and racist unrest. But thankfully, as long as scientists are in charge of the discussion, only sunshine and light will issue: “…knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public” and “leave a vacuum that gets filled by pseudoscience”. Again we nod gratefully. What good fortune that the scientists so busily laying out potential validation for racists everywhere are themselves inherently rational, and therefore non-racist! What good news, as well, to learn that scientists apparently still have “the trust of the public” to begin with!

Reich’s rhetorical pirouettes are, as I have said, of the most fluent and impeccable kind. But through his seemingly virtuous pleadings, one senses a warm heart of doublespeak. Instead of objecting to racist stereotyping per se, Reich and his colleagues seem mainly loath to see the wrong racist stereotypes assumed—that is, stereotypes lacking proper empirical ratification by them, the virtuous scientists (who, again, by definition cannot be racist, only faultlessly objective and reasonable). Above all, one comes away with a distinct feeling that what really nettles Reich and his fellow hereditarians more than anything is a lack of priority, of due deference to their right to control the discussion of the researches they have labored over and now claim are inevitable. Fascinatingly for ones supposedly so rational and knowledgeable, these luminaries seem content to profess complete innocence as to the inherent consequences, possible uses, and possible underlying motivations of their findings—all of which are evident enough and potentially brutal—and to assume that the only possible complication has to do, not with their content, but with the possibility that the wrong kind of people—deplorables, possibly?—might repeat and use them for their own agendas. Towards this end, they seem to hang their hopes on a kind of historical miracle: namely that for once, the shrieking gyres of Internet opinion will fall silent and yield pride of place to academic experts, if only those experts are so good as to speak up first.

And so in the end, Reich’s magnificent hedging hardly saves him; one feels a coyness underneath the solemn and earnest mask. It is surely a fine last rhetorical touch of his that the very conclusions that he warns might be drawn, even if left tastefully half-unsaid (for everything Reich writes is half-unsaid), are themselves disseminated by his ostensible warning about them. So let us drop this half-pretense and spell them out: if we know racially- and ethnically-defined groups exhibit distinctive genetic hallmarks, and we know also that there are distinctive genetic hallmarks that strongly correlate with IQ, then it stands to reason that a number of the genetic hallmarks that differ between racial groups could also be ones that determine IQ. Also, there appear to be substantial and persistent and heritable average IQ differences between races. Then the implied next step, half-unsaid or not, seems to be that on average, certain groups or races are demonstrably, genetically “smarter” than others.

There is no avoiding this logical possibility, as Reich pretends to want to do, even as he so carefully prepares us to hear and think exactly that, in the manner of the famous paradox that tells us not to think of a pink elephant. Again and again he seems to want to have his cake and eat it: displaying his concerned even-mindedness, while preparing us for the inevitability of a certain conclusion. He seems to tell us in one breath, “I’m not saying this is so—that would be wrong—but get ready for lots of scientists just like me to argue that it’s so very soon. There’s no stopping it; moreover it’s our duty to uncover it; and you should calmly embrace whatever we say… because otherwise extremists will control the discussion!”

* * *

Now, what do I mean by saying “pretends to want”? Just that in all of this discussion and controversy, one can hardly help noticing an absolutely crucial psychological factor running through and through—one which, though it has nothing to do with psychometrics or genetics per se and has gone almost totally unremarked in the debate, cannot be disentangled from it if we look at it seriously. There is nothing new about this factor, and in fact there is a fittingly toothsome German word for it: schadenfreude, literally “harm-joy”.

“In the misfortunes of our friends,” said François de La Rochefoucauld, “there is always something not entirely unpleasing.” This often-unconscious rejoicing in relative superiority over others, and particularly their misfortune, is the essence of schadenfreude. In this same vein, as a general proposition, I would suggest that whenever a person who holds a superior position seems to profess their heartfelt regret at their superiority, then we would do well to view the whole situation, at the very least, as a possible locus of schadenfreude, even if the person in question has the most sterling record of self-restraint and egalitarian feeling. Who, after all, can humanly say, “I am better than those people“—full well believing it true—and not feel somewhere inside them a certain sly relish at this superiority, or at least a relief at not being in the inferior position? Moreover, how much does such a person really gird themselves against this guiltiest of pleasures merely by adding the disclaimer: “…but I only believe I am better because science forces me to“?

For the most part, then, underlying most talk about IQ, as with any other discussion involving superiority and inferiority, it is impossible not to detect the aroma of schadenfreude, that strangely piquant and luxuriant mixture of self-reproach and egoistic preening on the part of the privileged (in this case, those whose IQ scores turned out high enough to ensure an unimpeachable place in the psychometric pecking-order). This type of privilege must surely apply to Harvard professors and alumni with national media platforms, ones just like Reich (and Richwine, and Murray, and Watson), who after all function in our society as something like “high priests of smartness”. Yet schadenfreude applies equally well to many who will argue vehemently against the findings, against the half-hidden half-conclusion—those who shout that the conclusion cannot really be scientific (because they assume science, as the guarantor of progressivism’s intellectual superiority, must perforce be an ally of equality), and at the same time shout it with the instinctive aim of setting themselves up on the delightfully ennobling “rescuer” side of Karpman’s triangle. And so we find ourselves hemmed in by schadenfreude on all sides. This is why we must say that Reich and others in this discussion “pretend to want” to prevent racist interpretations of their work: because their actions in pursuing and widely publicizing such findings, as well as their own egoic motivations, strongly point in the opposite direction.

What, then, are the stakes in this debate, in outline? What are the consequences of saying one group is “smarter” than another, as opposed to differing in some other complex trait? The prospect that is opened up, if we are honest, is none other than the open granting of a scientific imprimatur for deeming quite specific groups of people generally inferior and others generally superior, based largely on proxy information.

Others will object to various aspects of this. They may deny that intelligence contains baked-in connotations of superiority and inferiority, of ranking—a denial that can be nicely disposed of by asking when they last called someone “dumb” and meant it as a compliment, or called someone “brilliant” and meant it as an insult.

They may also insist, by way of consolation—as Reich does, and also Charles Murray and Sam Harris, whose lengthy podcast on the “Bell Curve” author’s research is relevant listening—that differences between individuals vary far more widely than differences between populations. Or finally, they may seek to assure us that always and forever the race-linked findings of the population geneticists will be of purely medical interest, without any intimations of power or superiority dynamics—for the targeted treatment of illnesses, say. In this domain, Reich gives the example of a newly-detected genetic basis for higher prostate cancer rates in black men. Yet rest assured, no individuals will be harmed in the process! Rest assured, reasonableness will continue to govern all our personal judgments and actions, and most surely in science! (As if this has ever been the case to begin with. What credit we give ourselves, we reasonable ilk of the enlightened age…)

This storied reasonableness of our general behavior should not detain us for long; it has long been an empty signifier, complete with its own identity group, with professional “rationalists” such as Harris only the most recent to hoist its flag as a source of ego-validation, social approval, and large book deals. And so, in keeping with their mixed provenance and no less mixed motivations, the most reasonable reassurances about observing individual differences only and stoically ignoring the group ones show themselves, after only a little thought, to be little more than empty truism. For one thing, any large population distribution will have individuals far out at the extremes—and assuming a bell-curve, the tails decline super-exponentially, so that even small changes in the difference between the means will drastically affect the proportion of outliers. (This fact is well noted in climate science: a single degree change in the average can double or triple the likelihood of the most extreme weather events.) For another, if we only have two or three group populations that we are comparing with regard to a single emergent effect like IQ scores (as opposed to, say, sequences at a set of thousands of genetic loci, which however may have little or no functional relationship), the idea of a “distribution” over such a small number of groups is nearly meaningless—there are too few groups to build a distribution with. Using this fraught idea of “difference between groups” for comparison with the more soundly determined variance within each group, then, makes a red herring of the reassurance that “only the individual is judged”.
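To make the tails point concrete, here is a minimal sketch in Python. The numbers are purely illustrative choices of my own (two normal curves with means 100 and 105, a shared spread of 15, and an arbitrary “extreme” cutoff of 145); the point is the arithmetic, not any actual dataset:

```python
# Two normal distributions with the same spread but means five points apart,
# and the proportion of each lying beyond a fixed "extreme" cutoff. All the
# numbers here are illustrative, not drawn from any real data.
import math

def normal_tail(cutoff, mean, sd):
    """P(X > cutoff) for a normal distribution, via the complementary error function."""
    return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2)))

sd, cutoff = 15.0, 145.0
p_low  = normal_tail(cutoff, mean=100.0, sd=sd)
p_high = normal_tail(cutoff, mean=105.0, sd=sd)

print(f"tail beyond {cutoff:.0f} with mean 100: {p_low:.5f}")
print(f"tail beyond {cutoff:.0f} with mean 105: {p_high:.5f}")
print(f"ratio: {p_high / p_low:.1f}x")
```

A shift of a third of a standard deviation in the mean roughly triples the share of the population beyond the cutoff—which is exactly why reassurances pitched at the level of averages say so little about the extremes.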

Yet Reich’s and others’ claims to “respect the merit of the individual”, as though this can be surgically separated from the treatment of the group itself, bump up against other obvious absurdities in the domain of practical, actual human behavior—the treacherous land where factors like schadenfreude are constantly in play. Here is a simple thought experiment: imagine some attribute X that differs between two groups, and moreover let us imagine that X has strongly desirable implications of prestige, functionality, rank, etc. Let us imagine that an “open, reasonable discussion” about this difference in X ensues between someone from a group rated “hereditarily high in X” and yourself, who happens to belong to the “hereditarily lower in X” group. Casually, our friend from the “high X” group tells you:

  “Of course I judge you purely as an individual; I would never dream of doing otherwise. It’s just that your kind of people are on average clearly far below-par in terms of X, and that’s immutable (i.e., hereditary) reality. It’s just science.”

Whether they admit it or not, what Reich and the others are arguing is that we should entertain that the above sort of remark (albeit perhaps phrased less obnoxiously) could become perfectly reasonable, value-neutral, part of the discussion, “just science”. Leaving completely aside whether the science in question turns out to be correct and immutable, or whether theoretically the precise rules of logic allow one to systematically devalue an entire group while treating the individuals of that group pristinely, can anyone pretend with a straight face—as Reich and many others too-conspicuously try—that this will not have vast and often extremely unpleasant consequences on racism in practice?

Whether or not it is so intended (and the element of schadenfreude suggests it is more intended than will ever be admitted), unless the consensus reverses direction in a way that has not happened at all since 1994, these studies will flow, imperceptibly and in most cases only semi-consciously, towards nothing less than the gradual rehabilitation of the word “racism” (or more likely in the short term, some equivalent but jargon-clothed euphemism). For if racial differences are in fact genetic, and linked to traits associated with rank and desirability, then whether scientists “lay out a rational framework”, as Reich so upstandingly puts it, will make no difference in the basic syllogism above, or the troubling case of “attribute X”. For to be clear, both are fundamentally expressions of racism; it does not matter who says them, scientist or skinhead. From there, fortified with the almost-priestly character accorded to scientific authority in our technics-obsessed culture, such expressions cannot help but gain a popular legitimacy that they would not have had if the subject had been left alone or banned, in the manner of human reproductive cloning and so many millions of imaginable medical experiments that might indeed “expand our knowledge” but at terrible moral cost.

Yet one cannot now easily declare the findings untrue or insist that no one look at them, though this has been proposed, with predictable failure—for instance by Noam Chomsky, who in his 1972 article in Ramparts deemed the subject of racial-genetic factors in IQ to be of “very little scientific importance [and] no social importance, except under the assumptions of a racist society” (17), and by others, like the journalist John Horgan (18), whose positions echo Chomsky’s. The message has been: leave it alone; don’t even study it. Well-motivated as these recommendations have often been, one can hardly miss in them a certain sense of dread at what might be found, as though they already are sure of it despite themselves. And much as the prurient interest of scandal tends to double ticket sales for movie stars and lead to packed halls for politicians, of course the suggestion not to study something only triples the interest level in the subject. Of course bans have indeed been possible for things like human reproductive cloning—things which do not involve anything so dangerous as the stealthy rehabilitation of racist thought in scholarly circles after its long, post-WWII submersion—but none of these things have the power of elite schadenfreude stacked against them. This, maybe more than anything, is the distinction that makes such research officially “inevitable” instead of “unethical”.

As a practical matter then, if scientific findings on race and genetics of intelligence continue to be pursued and are made socially acceptable—as introductory gestures like Reich’s go a long way towards achieving—they have the potential to tip over an infinite cauldron of resentment that will make the racial rancor of the past few decades appear tame.

If such findings are accepted, one cannot expect racially-based changes in general behavior (and policy) not to issue from them as they sink in. For instance there is Murray, who argues that such findings show the destructiveness of affirmative action, which promotes minorities into colleges and jobs where they harrowingly find themselves unable to compete on their own merit. As discussed, the findings will inevitably change the expectations that are routinely applied by one large group of people towards another, giving these new and inevitably judgmental expectations the fertilizer of mainstream acceptance; and despite the i-dotting protestations of the researchers in question, that is a profound thing.

* * *

What is Man without his prized intelligence? The question still frightens; it cuts to the quick. Where once Man counted on the numinous force of his soul to hold himself safely apart from the indignity and doom of animal life, to sustain a hope of transcending his own obvious creatureliness, the demolition of religious faith in recent centuries forced him to adopt a substitute distinction: the faculty of reason, or intellect. Descartes helped begin the shift by holding that the faculty of reason and language—the res cogitans—was the soul, precisely the divine gift that distinguished man from animals, which were purely mindless automata. Through this elevation of the abstract rational capacity there developed, against Descartes’ intentions to preserve it under a new, more Enlightenment-friendly definition, the now-widespread “physicalist” practice of seeing the spiritual aspect of the human as superfluous if not a quaint error, a shibboleth to be cast off as another stride along the bright march of progress. In this way intellect truly became, in Spengler’s wonderful phrase, “the petrifact of extinct soul” (Decline of the West, 354).

While we have become far more accepting of the notion that animals, too, have experiences and mentality of some sort, intelligence has by no means given up its pride of place as the premium metric of human value (and it is odd of Murray and others to suppose that it is undervalued simply because, for instance, not enough job applications are decided on the grounds of something as blatant as IQ scores). The brahmins of our time, the opinion-making secular priesthood populated by characters such as Watson, Reich, Richwine, Murray, and many others, are defined almost exclusively in terms of intellectual capacity—or at least academic capacity, which is wordlessly taken as a proxy for the same. While beauty and athletic prowess still captivate (and as such are squeezed for every last drop of potential profit), and once in a while some account of valor or even wisdom or artistry penetrates the fog of the news, the highest and most solemn praises, the heart’s inmost hope and the hack’s uttermost hype, now are consecrated to the scientific, technological and mathematical geniuses.

In the popular culture, the “nerd” has become a kind of style statement, a stock entertainment, a ubiquitous object of striving, a pleading for the almighty Intellect’s absolution from those who likely cannot master or even do not care for its content. For much of our youth, nothing is so important as proving one’s smarts—or at least the social-ceremonial equivalent of it, usually by piling on years if not decades of undergraduate then postgraduate education, ideally after acceptance into one of the more prestigious institutions where, it is assumed, the reliquaries of Intelligence are reverentially maintained. As for the workplace, by Murray’s own account, our employment system has become obsessed with the status conferred by these trappings of high intelligence (if not always with intelligence as explicitly measured), so that for him and many other psychometric researchers, “life is an intelligence test”—though note that there are also some who very pointedly disagree with this view.

In day-to-day life, the effects of so much as carrying on one’s person the right signifier of one of the exalted and anointed topics can be striking, even in circles not traditionally thought “intellectual”. A friend once brought a book of mathematics into a bar with us; as we sat down, the bartender glanced at the book, then gazed at my friend with an odd reverence and said to him, with a serious nod and trembling lip, “God bless you”. “God bless you”! As though by carrying the right sort of book—and one filled with dry derivations no less—one could assume the mantle of some saintly hope! In years past it would have been a Bible, or possibly in more select circles poetry, or ancient Greek epics, that won such a reaction; now it is, singularly and solely, the power to calculate, categorize and abstract that inspires. What more telling expression can there be than this remark, when it comes to the confused and tormented results of substituting humanity’s intellectual and particularly computational efforts for the exiled but still-gnawing need for transcendence, aesthetics, and tribal meaning? Intellect has become, as Spengler suggested, the Ersatz for a spiritual existence that mankind is no longer allowed to think about (and disallowed, it should be noted, by the very same priesthood that now steps out from its tower of genetic calculations to explain the ineffable laws of intellectual inheritability).

All this fevered and prayerful glorification of human intellect (though crucially only a certain narrowly mechanical and control-obsessed formulation of it, and despite its embarrassing history of gruesome failures and missteps) cannot help but have a shadow, and that shadow is this: to have less intellect is to have less humanity, less of the crucial element that differentiates the human from the animal or mineral essences. In the world of the spirit, it could be imagined that the rich man was as likely to enter heaven as a camel to pass through the eye of a needle, and that the poor in spirit were blessed. But what of the “poor in intellect”? In the world where spirit still was imaginable, or even to a lesser degree in Descartes’ dualism, the measured intellectual ability of a man or woman could have roughly as much bearing on the question of their transcendental merit as whether they had calluses on their feet. But in a world where the crucial human attribute is exhausted in psychometrics, there cannot but be underclasses and masters and moreover an endorsement of such, a mixture of Social Darwinism and “Brave New World”-style intellectual partitioning. If the Calvinists took the first steps on this path by grimly propounding personal wealth as the sign of divine favor and poverty as indicating moral turpitude, so our new era will suggest its own secular Calvinism, with high IQ as its priestly virtue and low IQ as its sinfulness.

So let us be clear: the discussion of differences in intelligence is fundamentally different from the discussion of other possibly genetic differences such as a slightly higher propensity to get a certain kind of cancer versus another, or a metabolic tendency to put on weight in one’s belly or one’s thighs, or to attain an adult height a few inches above or below the average. For intelligence, unlike any of these, has light and shadows, and extremely sharp ones; it is predicated on superiority and inferiority, and in extremely dire terms—not just on what we mean by the word “merit”, or on many of the most fundamental things that our culture has of late decided to deem valuable in a person, but on what we mean when we say we are human as opposed to something else. The very structure of our aspirations for humanity—or what remains of those aspirations, anyway—has been hung on the stout, square, utilitarian peg of intellectual ability; all other exits from the intolerability of existence have been sealed, disavowed, or forgotten.

And so when we say that one man has less intelligence than another man, in the coolly quantitative way that the scientists have now formulated “intelligence”, we are saying something uniquely forceful given our time and circumstances, and saying it in a way that hits an unusually sensitive spot—we are saying that this less intelligent man is closer to animal life than the other. When such is said about a group of men rather than an individual, then if anything the force of the assertion and the power of its implications are even stronger—not weaker, as Reich and others coyly pretend. To demote an isolated individual to a more animal-like status is done quite often, and the demotion often stays isolated to the instance (as, say, with deserved punishment, or bad character). But to demote a whole group, with meretricious talk of “individual exceptions” or not, is to assert the supremacy of a whole form of life and the relative animality of another; it is a distancing of the latter from the circle of human distinction. Such distancing is a deeply political move; it simply cannot be made otherwise, no matter how sincere the ignorance that declares it apolitical, and no matter how “good” the science that is used to argue for it.

* * *

I remain no more convinced of the IQ score as an iron-clad indicator of what we really mean when we say “intelligence”—and particularly of creativity, depth, insightfulness—than I am, on the other extreme, that the IQ score measures nothing whatsoever of importance.

Granted, there seems to be a vague notion of “mental horsepower” that we use intuitively all the time, and to which these tests are able to give an operationally-defined number, expressed as “g” or “general intelligence”. And so other things being roughly equal, people who generally are quick at solving problems, grasping concepts, and above all at gathering things under their control, will indeed tend to have higher scores of this “g” than those who are slow at such tasks or even never master them at all. So far, so good.
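As a rough illustration of what “operationally-defined” means here—and only an illustration, with synthetic scores and a hypothetical three-test battery of my own invention, not any real instrument—one can see how a single summary number falls out of a set of correlated test scores as their first principal component:

```python
# Synthetic illustration: a single "g"-like score defined operationally as the
# first principal component of several correlated test scores. All numbers are
# made up; the "tests" simply share a common latent ingredient plus noise.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)                      # hidden common ability (hypothetical)
tests = np.column_stack([
    0.7 * latent + 0.7 * rng.normal(size=n)      # three noisy tests, each partly
    for _ in range(3)                            # reflecting the latent ingredient
])

z = (tests - tests.mean(axis=0)) / tests.std(axis=0)   # standardize each test
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
g_scores = z @ eigvecs[:, -1]                    # projection onto the top component

print("share of variance captured by the 'g' component:",
      round(eigvals[-1] / eigvals.sum(), 2))
```

The number so obtained is perfectly well-defined; whether it deserves the metaphysical weight later placed upon it is, of course, the question of this whole section.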

Yet above a certain score range the power of the test to predict the public lifelong achievements (nothing is said of the private sort) that we would associate with “brilliance” or “genius” falls off considerably. How, after all, would we accurately measure the intelligence of someone with a mind beyond our own ability to conceive? Would we give them math problems we can’t crack, ask them to coordinate a large industrial supply chain, see how many pages of textbooks they can memorize verbatim, challenge them to build a computer out of paperclips? (These demands, and others like them, tend to strike one as entertainingly clever and yet strangely superficial or pointless.) Is it not a fallacy, in general, to apply statistical reasoning to a type of person whose uniqueness, whose potential for attaining what the rest of us cannot yet describe, is precisely what is supposedly interesting about them?

Surely many of us have had the personal experience of dealing with high-IQ individuals who are happy to flaunt their scores but show themselves to be curiously conventional and un-insightful, or who devote their intelligence to backstopping bizarre hobbies and hateful beliefs, or who deploy intellectual ability purely in subservience to the values of power and money (all matters which Chomsky very rightly cautions against).

When the culture of psychometrics holds sway in education, as in certain attempts to carefully isolate children based on high IQ scores and groom them as some new vanguard of human attainment, it is hard to avoid the impression that while these children may be intelligent, they are rarely quite as exceptional as the scores would suggest, and not much more so than those passed over as having too low a score (as, for instance, in the “Genetic Studies of Genius”, which embarrassingly rejected two future Nobel laureates).

It seems equally plausible that such academies, being designed around a single metric, tend to produce subtly narrow thinkers—very high-functioning functionaries, sleek systematizers or derivers, but essentially narrow and subservient in their thought, and possibly more so than they would have been in other circumstances. (Though as anywhere, there will always be a few who find their own way whether they are specially groomed or not.)

From the website of the Davidson Institute, one such experimental IQ-silo for the young and gifted (maintained at public expense), one reads, in bold: “Profoundly gifted students are those who score in the 99.9th percentile on IQ and achievement tests.” Is this to mean that no person even a few percentage points below that percentile can be “profoundly gifted” at something? (Here perhaps we see hints of that subtle narrowness of thought.)

None of these considerations appear to have stood in the way of a widespread temptation to view general intelligence not as an approximate operational construct but as a pseudo-Platonic entity, distinct and inviolate. And so it seems fitting here to repeat Schopenhauer’s superb remark about the difference between genius and talent:

“Talent hits a target no one else can hit; genius hits a target no one else can see.”

In sum, by predicating intelligence on the ability to reach a score that no one else can reach, rather than to observe ideas and assertions that no one else can see—a gift which by definition cannot be tested for—the obsession with IQ risks the unforgivable mistake of confusing genius with talent. It is not that IQ is meaningless or useless, but the undue fascination it exerts over many who, just coincidentally, do well at it makes it suspect; it has all too much of the victor’s schadenfreude in it for us to regard it, as it would have us do, as tidily agenda-free.

(to be continued)

Three Tiny Essays: Last Men, Judge Doom, and the Ennui of Physics

I just came across a video clip “explaining” Nietzsche’s concept of the Last Man, which it depicts with the example of a fat man on a couch, looking fairly decrepit and watching television.

While certainly this scene is mindless and dispiriting enough, it is absolutely not what the concept of the Last Man is about, any more than cowboy hats and gumbo describe everything south of the Mason-Dixon line.

In actuality, the Last Man is every bit as likely to be outgoing, good-looking, clean-cut, keen on fitness and careful about his diet, dedicated to his job, a productive and enthusiastic worker, someone “going places”, a pillar of the community. For the problem of the Last Man is not that on the outside he is fat, but that on the inside he is frozen. He cannot imagine, challenge or risk anything outside of the lines (and values) he has been given, and so is given over to a secret inner passivity, which he scrupulously ignores but which creates in him a constant nagging discontent. Out of the instinct to relieve this discontent, he finds nothing worth seeking but comforts and numbing distractions; he bases his life on “metrics”; he blinks his way into success and love—or rather, into the shadow of these things, which he ever mistakes for them.

* * *

Not too long ago, I re-watched “Who Framed Roger Rabbit”. The splicing together of animation and live-action is well-done—especially for 30 years ago—but really all I could think was, Christopher Lloyd did one hell of a number as Judge Doom. It’s almost too good; it overpowers the rest of the movie. I remember as a kid it scared me so much I couldn’t watch the whole thing to the end—and apparently this kind of reaction was reasonably common.

Even now I regard the backstory of the Doom character as by far the biggest and most interesting unresolved question mark of the movie. No morally bearable explanation could be given, I suspect. A cartoon who learns to pass for human, then rises to become a worldly, evil mastermind—now here is a “self-made man”, someone who overcame their humble origins, with all that that really entails! He’s also the most prophetic if you think in terms of his cherished vision of a thruway-based suburban hell in southern California, which completely came to pass (anyone who has seen LA knows the last 10 minutes of the movie is sheer wistful revisionism; in our universe, Doom obviously won).

This led me to wonder: is it possible for a being to be so intensely mirthful, so completely unapologetic in their fun and so free of the camouflage of irony (which inevitably indicates a drearily respectable modicum of self-reproach), that no other ordinary being can grasp how momentously funny and original they really are? Their colorfulness is so intense, so to speak, as to be off the spectrum, invisible to normal eyes. One might see Judge Doom this way: his black outfit is not really black, but ultraviolet.

* * *

Many scientists and philosophers, like Eugene Wigner in his famous essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, say that the very fact that physics is possible—in the sense that observed events pantomime strict mathematical regularities, or vice versa—is a “wonderful gift”. But could we not just as well describe it as a curse? What, after all, does it mean? Physics essentially presents us with a form of “eternal recurrence of the same”: it posits that the same kinds of events, and only those kinds, will keep happening for all time—without reinvention, without reprieve, only conceptual tinkering round the edges, and even that only when desperately necessary. (The great French mathematician Poincaré even proved a recurrence theorem reminiscent of this, though not quite the same.) Possibility, then, becomes miserably circumscribed, hyperconservative; everything soon becomes mundane, without depth. What is possible we quickly codify in theories and formulae, and once these reach a certain precision there is nothing more one need ever be sure of. From that point, reality itself seems to suffer from a kind of exhaustion.

What can the physicist’s fixation upon “reproducibility” be construed as if not a desire for repetition, for recurrence, a perpetual re-visiting of the same limited compass of well-behaved experiences? (An ascetic tendency, most likely.) And because existence, after physics, has been left with no depth, one can never even hope to grasp why this monotony, why this frozen sameness, nor see an end to it! The quantum level tells us the underlying reality is of the most peculiar, foggy, insubstantial, incomprehensible strangeness; this is not un-mysterious, but rather mysterious in a way that still offers no hope. This is what it means to feel that the universe has been rendered as something dead, as somehow a husk. One only hopes it is wrong, wrong, wrong—or else one waits longingly for the Nietzschean Overman, the one who overcomes his origins, who somehow will joyfully endure it all with a humor we cannot comprehend!

Keeping the Fourth

Now and then, and for some reason especially of late, I find myself stuck on (not necessarily in) the fourth dimension. (By “4D” I mean Euclidean 4-space, not spacetime which is also four-dimensional but very different). Thinking about it is a fun time-killer actually, if you’re a bit visuospatially inclined—even in the inevitable failure to really picture the tesseract or the Clifford torus, say, there is a fascination in the notion that what we see in front of us may not be all there is even to its appearance. When an added invisible dimension comes into play, even such simple familiar things as rotation seem deceptive: they may, in fact, produce changes that violate everything we intuitively know about how “real” shapes are supposed to act.

There is a kind of lush, baroque appeal to the thought of an unseen continuation of the space that we think we know so well, a continuation which may contain who knows what wonders or terrors. The idea of additional dimensions almost perfectly captures the sense of mysterious “beyondness”, the urge to expand or to reach for new depth and ultimately for infinity, that particularly entrances the imagination of the West, as evident in so much of its religion, art, science and technology. It is therefore hardly surprising that the concept of the fourth dimension has inspired reams of literature, notably in the 19th and early 20th century, and has even become something of a trope, appearing in such works as Abbott’s 1884 “Flatland”, H.G. Wells’ “The Time Machine”, Madeleine L’Engle’s “A Wrinkle In Time”, H.P. Lovecraft’s “The Colour Out of Space”, and Nelson Bond’s “The Monster from Nowhere”. In the visual arts, the most famous beneficiary of four-dimensionality might be Cubism, which aimed to represent all sides of a 3D object simultaneously, much as a 4D being might see them.

It is interesting that most such works have undertones, not only of mathematics or science-fiction or pure entertainment, but also often the occult—of secret or privileged knowledge. Indeed an additional dimension would be “occult” in the most literal sense, as the word is originally from the Latin occultus, for “secret”, or “unseen”. Hidden yet inescapable, the fourth dimension, even if it does not exist, offers a powerful modern metaphor for the occult, both in its possibility of shocking new truths and powers and in its implicit suggestion that all that one needs to gain such powers is to change one’s sight, to turn one’s gaze in a direction that had been there all along but was simply neglected by the ignorant masses. Higher hidden dimensions can also be a metaphor for other phenomena that nevertheless have the flavor of divination-by-other-means, for instance in Schopenhauer’s dimensionally-suggestive remark about the nature of genius (“talent hits a target no one else can hit; genius hits a target no one else can see“), or the very common descriptions of artistic inspiration as “otherworldly” in origin or “from another plane”.

Furthermore, it is commonly argued or implied in literary instances of the fourth dimension that four-dimensional beings would possess powers and understanding of our world that would appear to us as nearly indistinguishable from the godlike. In our three-dimensional world, they could appear and disappear at will out of apparent nothingness; they could read the insides of sealed objects as we read the pages of a book; they could achieve seeming-impossible transformations of matter and structure. This only adds to the curiously mystical aura which naturally accretes around thoughts of higher dimensions. In short, it is as though four-dimensional space affords the inmates of our scientific, rationalism-bound culture an intellectually respectable venue for belief in the supernatural (or supranatural)—after all, the fourth and higher dimensions are, quite by definition, out of this world.

And yet, to a mathematician, the concept of four-dimensional or much higher or even infinite dimensional spaces is a completely hard-headed and even prosaic one, and the mystic accretions are of little interest, mere “woo”. The problem with this matter-of-fact attitude seems to be that the now-prosaic step of combining a fourth spatial dimension with the three we “know” did not become a widely used or understood procedure in mathematics until sometime in the mid-nineteenth century. Was this perfectly prosaic device of the fourth dimension and higher spaces then “discovered”, or “invented”—or something else?

***

No less fascinating than the creative or pseudo-mystical possibilities opened up by thinking about the fourth dimension is the way it exercises our philosophical attitudes about the dimensions we know—or think we know. For if we seriously consider the matter, and make an honest attempt to understand how we would convert our intuitive conception of space to include this extra dimension, the extreme difficulty of doing so starts to erode even the mathematical confidence of what “space” is—that it is really equivalent to the Cartesian product of straight lines at right angles to each other, and so on. Instead the mood turns increasingly Kantian, as it dawns that space as we know it may not be the mathematician’s space at all, but an irreducible, inaccessible category of our understanding, for which certain mathematical formalisms only happen to give a very good model. Those formalisms also happen to allow many fruitful conceptual extensions, as mathematical concepts so frequently do—but nothing more; in Kant’s words, space “…is nothing but the form of all appearances of outer sense. It is the subjective condition of sensibility, under which alone outer intuition is possible for us” (COPR, 71). Space “…is not a discursive or, as we say, general concept of relations of things in general, but a pure intuition” (COPR, 69).

To the extent that I can tell from my own experience and what I glean of others’, the interesting truth is that we do not ever really think in terms of “three” dimensions as we are going about our business in the world—this is an abstraction, a model, after the fact. The process of identifying space with a Cartesian grid made of orthogonal left-right, up-down, and in-out directions (or spherical, or cylindrical, or any other triplet of coordinates) requires considerable abstraction, if not training in geometry. And so when we face the fourth dimension, and try to picture 3D space “as we know it”, except now with a specifically added direction, the notion appears intuitively bizarre and arbitrary, despite the fact that the concept seems coherent and the math works.

This is not to say that the arising of the number “three” when we apply said abstraction to our experience is not fascinatingly mysterious—only that direct intuition refuses to underwrite the extensions and variations that the mathematics, charging ahead, insists are completely “natural” and even “real”. Reams of bewildered questions spill forth. What is it, this extra dimension, taken outside of mathematics? Why is it, why should it “be” there, as part of actual space, even in our imagination? What does it even mean for a dimension to “be” or “not be”? What is this arbitrary “adding” process supposed to be, if the Cartesian product of a line with our familiar 3-space fails to jibe with anything our intuition can call space? And even if I do successfully “add” another dimension, what is to tell me that I added it in the right way—is there some way that a completely new direction would not be at a right angle to the others? Should it be a small looped dimension or an infinite straight one, and how do I tell other than just assigning it to be one or the other? Where, after all, is this other dimension? How do we make it “attach” to the other three—and if it fails to attach, “where” does it go?

Telling oneself instead something like “and now in addition to this 3D, let there be another thing that I can vary continuously” allows a certain amount of intuition to seep in, but it is emphatically not a spatial understanding in the sense of space as we daily and constantly perceive it, since this again makes one axis the “odd man out”, an “other thing”, when the lived experience of “space” implies complete integrality, free rotation and movement—again, with no choice of or even attention usually given to axes, coordinates, etc.

Alternatively, one can try to “paint” a kind of intuitive spatial quale, a raw feeling of “closeness” or “farness”, onto a 3D region to indicate different hyper-depths, but this still has the strange effect of singling out the fourth dimension, making it an add-on instead of fully equivalent to the other three or, what is really desired, intuitively not a specific “dimension” at all, but a fully natural percept of “four-space”. In particular, this approach fails when it comes to rotation—if we think of our added fourth dimension as something like “temperature”, say, we realize as soon as we try to turn our heads to face “temperature-wise” that there is a problem, as trying to rotate a real object in real space into the temperature axis is completely nonsensical (even to the greatest synesthete). Yet mathematically this can be accomplished easily—in many kinds of data analysis, PCA for instance, we choose new axes that mix the original ones without much concern for their original units.
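For whatever it is worth, here is a minimal numerical sketch of that indifference. The fourth coordinate is labelled “temperature” purely as an illustration of my own; the algebra neither knows nor cares what the axes mean, and happily performs a perfectly rigid 4D rotation that mixes metres with degrees:

```python
# A rotation in the (x, temperature) plane of a 4D "space" whose axes have
# incompatible physical units. The labels are illustrative only; to the
# mathematics this is just an ordinary orthogonal transformation.
import numpy as np

theta = np.pi / 6                       # rotate 30 degrees in the (x, temperature) plane
R = np.eye(4)
R[0, 0], R[0, 3] = np.cos(theta), -np.sin(theta)
R[3, 0], R[3, 3] = np.sin(theta),  np.cos(theta)

point = np.array([2.0, 1.0, 0.5, 300.0])    # (x, y, z, temperature) -- mixed units
rotated = R @ point

print("before:", point)
print("after: ", np.round(rotated, 3))       # x and "temperature" are now blended
print("4D length preserved:", np.isclose(np.linalg.norm(point), np.linalg.norm(rotated)))
```

The “length” that the rotation so dutifully preserves is, of course, physically meaningless—which is exactly the gap between the formal operation and anything our spatial intuition could countenance.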

What is worse, as soon as we begin to ask these questions about the fourth dimension using the mathematical imprimatur, the way is open to begin decomposing and doubting our own natural space, which becomes perforce “3D”. We are haunted now not only by the aforementioned mystery of “why three dimensions (and not some other number)”, but also by the question of how the x, y, and z axes “fit” together and whether there is some kind of spatial “glue” that keeps the parallel planes from sliding over each other and creating a mess, like graphite. Appeals to lower-dimensional analogy in general, as in the famous 2D “flatland” stories, are typically presented as the road to proper understanding in the direction of increasing dimension, but in the direction of decreasing dimension we find they do not really get us where they are intended to at all. For, “Flatland” aside, “2D life” is in truth as inconceivable and artificial an idea to us as 4D (or, for that matter, 3D) life. To exist on a plane of zero thickness and perfect flatness is, as far as we can tell, almost as nonsensical a concept as “turning to face temperature”—and we cannot in fact “picture” anything of zero or infinitesimal thickness—so what are we supposed to gain by the analogy?

Maybe the most interesting thing about attempts to intuitively visualize four-space is the way that the experience starts to resemble that of pondering many of the ancient conundrums of philosophy. One begins to feel one has made progress, only to realize that one has been assuming the answer all along, or else making blatant mistakes. Could the fourth dimension, like these intractable philosophical riddles, be nothing more than one of the linguistically-created “spirits”, born of “a tendency to sublime the logic of our language”, which Wittgenstein warned us about? Or is the difficulty we face in intuiting such higher spaces simply an example of how, to quote Fodor in The Modularity of Mind, “it is surely in the cards that there should be some problems whose structure the mind has no computational resources for coping with”?

Chomsky, often a fellow-traveler of Fodor’s in these kinds of arguments, likes to invoke a little tableau involving a Martian who, watching human philosophers fight endlessly about the nature of free will, is astonished that the humans simply never hit upon the way to answer the question—an answer which, given the Martian’s distinctive mental structure, is perfectly obvious. Visualization of the fourth dimension perhaps is analogous, in which case the fault is in our brains. But what is difficult about this response is that it implies a kind of Platonism—for it insinuates that the “problems”, with their solutions, must exist in some sense even when there is no one around with the right brain modules to conceive of them. The truth about free will, or 4D visualization, is out there, waiting, so that other beings than us, endowed with the right mental capacities, can catch hold of it. In that case, however, what distinguishes the 4D space that already “exists” out there, waiting to be experienced—say by some lucky Martian—from the 4D space that appears not to exist, based on the best evidence we have from the world around us?

Faced with this barrage of questions, the semi-mystic function of “higher spaces” or “the beyond” as a placeholder and metaphor for a missing understanding serves us all too well; for to the question of where these wondrous Platonic verities might be found, we may as well suggest, however saucily: perhaps they themselves are in another dimension!

***

For those interested in experiencing the unique feeling of 4-dimensional bewilderment for themselves (in case the above wasn’t enough), it’s hard to beat the iPhone app “Rotation4D”—which, for a puny 2.6 megabytes of memory space, lets you watch very nicely rendered color projections of the 4-dimensional regular polytopes (the four-dimensional equivalent of the Platonic solids) and set them rolling through various kinds of higher-dimensional rotations at the flick of a finger. There is a hypnotic beauty in seeing the different constituent shapes or “cells” of the polytope materialize out of the “hidden” fourth dimension, twirl into view, and then slowly flatten as they tilt out of our 3D “hyperplane” and fade back out of sight—and it’s probably as good a way as any to approach an intuitive understanding of 4D.
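For anyone who would rather tinker than download, the sketch below captures the same basic idea in a few lines of Python. The choice of rotation plane and the simple perspective projection are my own arbitrary choices for illustration, not a description of how the app actually works: generate the sixteen vertices of a tesseract, rotate them in a plane involving the hidden fourth axis, and flatten the result into 3D.

```python
# A toy version of what such a viewer does: rotate the 16 vertices of a
# tesseract in the (x, w) plane, then project into 3D with a simple perspective
# division along w. Both choices are arbitrary and purely illustrative.
import itertools
import numpy as np

vertices = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))   # (16, 4) array

def rotate_xw(points, theta):
    """Rotate 4D points by angle theta in the plane spanned by the x and w axes."""
    R = np.eye(4)
    R[0, 0], R[0, 3] = np.cos(theta), -np.sin(theta)
    R[3, 0], R[3, 3] = np.sin(theta),  np.cos(theta)
    return points @ R.T

def project_to_3d(points, viewer_w=3.0):
    """Perspective projection: scale (x, y, z) by the viewer's distance along w."""
    scale = viewer_w / (viewer_w - points[:, 3])
    return points[:, :3] * scale[:, None]

for theta in (0.0, 0.4, 0.8):
    shadow = project_to_3d(rotate_xw(vertices, theta))
    print(f"theta={theta:.1f}: first projected vertex at {np.round(shadow[0], 3)}")
```

Plot the projected points for a sweep of angles and you get precisely the materializing-and-fading “cells” described above, swelling as they tilt toward us along the fourth axis and shrinking as they tilt away.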

How To Avoid Seeing The Amish

I highly recommend this recent NBC segment on the Amish and their ever-evolving stance towards mainstream (or as they call it, “English”) technology. As much as any such short piece can do, it serves as a master class in propaganda as it is really carried out. Here we see propaganda not as brightly-highlighted howlers from officially acknowledged evildoers—as in certain selections of Pravda or Völkischer Beobachter that students are taught to look at in smug incredulity, with the purpose of reassuring us that propaganda as such will always announce itself as laughable absurdity to astute and balanced citizens like ourselves. Instead, here we have propaganda in its natural form, subtle and perfectly workaday—the art, first and foremost, of presenting and building up, as innocuously as possible, unspoken assumptions.

In the case in question, it’s fascinating to watch the ubiquitous narrative of “breakneck technological progress”, of our own cultural superiority through connectedness and virtuality, of the inevitability of these things, as if governed by natural laws—here represented by the scripting of the voiceover and by the reporter—as it crashes headlong into a culture that has found remarkable success and fulfillment, in large part, precisely by being skeptical of this narrative.

At the outset, we hear the motto, on voiceover: “Like it or not, this is a technological world“, says a woman from the Mennonite Order. Note that this Order, though related to the Amish Orders, is in fact distinct and typically far more permissive on technological usage. But this vital difference is not mentioned, immediately giving the impression that the Amish as a whole have already bought into the motto, and that now, slowly but surely like a child making first steps, they will begin to “come to their senses”; in short, to be more like us. The world is, and must be, a technological one only; no other conceptualization is valid, no other direction is possible.

Again, with the telltale assumptiveness of propaganda, the announcer pronounces the Amish to be “stuck” in the 1850s. Stuck! Pity the poor souls—one thinks of a lame horse “stuck” in the barn, or a genetically ill child with withered legs “stuck” in a wheelchair—victims of fate, needing our enlightened help! Yet this characterization in fact says far more about the announcer’s ignorance or deception than about the Amish themselves, as the Amish restriction of certain technologies is, on the one hand, very much a conscious and deliberate choice—made with full awareness of the gadgetry available among the “English”—and on the other, is not and never has been anywhere near an absolute rejection, within or across the Orders. This wide variation in technological usage also renders the segment’s title, “Amish opening up to modern tech in some communities”, almost as empty as saying “Iowans opening up to soybeans in some communities”.

“In our lives, it can feel like technological change is inexorable,” continues the narrator, “There’s always more and more, and more is always better. But here’s this community that says ‘no, we’re going to take this, but not that.’ And I wonder if we don’t all crave that kind of control sometimes.” Here the mood seems charitable—as if the writers of the segment were willing, for a moment, to entertain the heterodox idea that the Amish may have things to teach us. But this would be to miss the subtext of tongue-in-cheek parental forbearance: “if only we did not have to be so grown up in our wholesale subjection to the technological; if only we could be picky about it, like these sentimental anachronists in their sweet little mouse-holes!”

“Is the boundary between Amish and not-Amish kind of blurring?” is the closing (and leading) question. Let us leave that question aside to ponder the next time we see the sober black horse-drawn buggy being drawn along the roadside. Ask then: Have they added a turbocharger yet? GPS? Are the children in back watching Surprise Egg videos all the way home? Does this look like an “inexorable” caving to an exclusively “technological world”?

Most of all though, media phenomena such as this suggest how the triumphal parade of Faustian progress—the “march towards infinity” that dominates our own world-view and expectations—has begun to clash with the perceived reality of our experience. Notwithstanding the many believers who will reverentially look at an iPhone X and swear it is as far removed from an iPhone 4 as the iPhone 4 was removed from a rotary phone, the reality over the last few decades seems to be that life has not changed very much for the better for those under the sway of the technological faith. The cataract of wonderments has become a trickle. Exactly because of this, the vision of breakneck progress must all the more be maintained in thought as obligatory slogans and propaganda; at the very least, this drowns out the anxiety of having to consider that there may be equally trenchant alternative points of view. A more penetrating and serious journalist, then, whether or not s/he agreed with every aspect of Amish culture, would strive to see in the Amish a quite sincerely held instance of such points of view, and one that has done quite well for centuries—not a quaint case of delusion whose demise can safely be timed to the arrival of 5G connectivity.

But this is part and parcel of a not-too-subtly condescending evangelism of our technocentric way of life. For it is the peculiar fate of the West that this way of life, this continual demanding of new accelerants and stimulants, is the only thing we widely put faith or hope in now—and so it must be defended with all the delusionality of those fundamentalists we love to scorn. Here arises the superb overarching irony of the segment, missed by both sides: the irony of watching one fundamentalism squint across at another, while at least one (guess which) denies it is a fundamentalism at all.

In his classic study “When Prophecy Fails”, Leon Festinger recounted in detail the arc of a UFO cult that became convinced that a great disaster was imminent and that true believers would be evacuated beforehand by a flying saucer. When the apocalypse failed to occur, the members of the group shifted from self-contained self-assurance to aggressive attempts to gain publicity and converts. Festinger saw this shift to the goal of conversion as a way of managing the cognitive dissonance of the failed prophecy: after all, if I can convince others to join in my discredited belief, then maybe I was still right to believe in it.

In a similar way, as the technological prophecy of unbounded progress and unlimited wonders through science and machinery falls conspicuously shy of its (undeniably immense) past accomplishments, and particularly as the environmental and social side-effects of this prophecy become more dire, the Faustian peoples’ need to believe will have to be compensated more and more not just by hyping so-so or disappointing technologies, but by a perceived longing for conversion on the part of outsiders.

It is true that in the past the West became a focus for the aspirations of much of the rest of the world, and that these huge populations often eagerly embraced Western technologies, techniques and artifacts. But their motives were never what we told ourselves: what these peoples saw was not the Western (and especially American) creed of infinity-seeking, or even a world of ever more intricate and invasive gadgetry, but simply abundant food, fast rides, entertainments, sexy pictures—and above all wealth, copious, fulsome, glutting, undreamt-of wealth, the kind a man might slaver after for a lifetime and never tire of. Now, we see this wealth has been to a great degree transmitted already, and with that, the prestige of the West has entered a slow dive. The Amish double our cognitive dissonance over this, for not only do they not care a whit for the infinity-seeking creed (at least in this world), but they tend to eye with a half-bored skepticism even the wealth-accumulating offspring of that creed. After all, their very ethic emphasizes an entirely different kind of wealth—the wealth of the community, of physical heartiness, of simplicity, of satiety, of justice to family, God, and fellow men. To think that propaganda would spare them for that blasphemy is to think naively.

Four Poems

// 1
The farther out,
the more contingent, more illusory:
Society bobs and rises. Smiles all around.
(A handshake, while we finish these documents.)

The closer,
the more empty, and the clearer too,
a demonic clearness, tautology’s looking-glass.
(Alphabets, lookup-tables, and just-because.)

Where lies the teacher
of the perfect world? Where is his army
sundering the deep, O mystic column?

He has won many battles—too many—
and so, full of goodness, retires to his academy;
so now let him rest, in that endless middle distance.

// 2
Bethany, wildwood, barrier islands:
People are drawn to haunting, floating places,
Waiting half-naked in the sun, as if possessed
By a taming, or a brave covering-up,
Projections of a play-acted life.

In the quiet cumuli of years
Thoughts of here keep turning, piling up.
Underfoot I see white shores,
Sea-foam, sea-creatures, clear as lenses,
Dredged-up epithets from the primordial;
I recall joy, aging friends, simmering shrimp,
Dinners amidst faces drawn tight, keen as a syringe-tip.

I remember madness
In the mirror:
A me who was not me, the still ocean
In the window calling us down to its edge
With visions of a parallel time, new selves,

And out there, past the little hill, the sand
That spoke crisply of all things burning:
Of passions lost
and caught midair,
But mostly of life, rigged out
In all its solitary beauty,

Like the sail
Of some coast-hugging ship,
Tacking carefully, yet almost lost, all but plunging
Full tilt into a blue infinity.

// 3
In the winter’s dark sanctum, I see
Parts returning of me.
A heavy black snow is falling on everything.
Where now is the secret fire?
She has left everything in a pile,
In these ashes at my feet.

Truly, the nightmare begins
When we cease dreaming
That we share the same dream.

// 4
Mathematics
Is nothing but the gift
Of painting with rules:
Peeking through the symbols, suckling-faced, the new cosmos beams.

Governance
Is a powerful magic fife:
Well-played, it soothes man’s madness,
Ill-played, it dances him towards death.

But engrossed in canvas,
Wrestling a verse,
Or lip curled in thought
Over the keyboard:
The artist’s distant look
Is like the child humanity,
Straining to descry
Some truer calling.

The New Prosperity: Aristocrats and Corporatists

The economy and the market are back in good times—so the tribunes and augurs have been singing. Except, that is, for Friday and especially Monday, when the Dow dropped as much as 1,500 points and ended down over 1,100 points, both records in its history. Even so we are already being told eyes forward, keep on moving—and that “the fundamentals are strong”.

What fundamentals, exactly? Everything but how people are doing, it seems.

We just learned the US life expectancy dropped, for a second year in a row. We’ve also got epochal levels of global debt, falling energy quality (lower EROI due to our much-touted reliance on fracking), a return of great-power geopolitical rivalry, declining global power (“post-primacy”), severe legislative gridlock, totally dysfunctional healthcare (showy moves by Buffett and Bezos notwithstanding), stagnating innovation, basic infrastructure disasters like Flint, New Orleans and Puerto Rico, opiates flooding the streets, and climate instability causing record costs.

Maybe most significant among these “non-fundamental” issues is the continuing explosion of inequality; as recently (and repeatedly) documented by Oxfam, nearly all gains of the post-crash “recovery” have gone to a sliver of the world’s population, while the rest have stood still or backtracked. In the US, for example, real household income has remained stagnant since the 1970s, and a large majority of Americans have less than $1000 in savings.

Make no mistake, America and the world are standing still in their own trash, perhaps even edging backwards.

What luck that none of it is fundamental!

***

What apparently is fundamental is the grand fact that we are now immersed in a new kind of economy—one where the concept of earning an honest profit has gradually been engineered out of existence. All the globe is awash in shadowmoney. Even wildly unprofitable companies like Netflix and Tesla, whose debt is rated “junk” (assuming the ratings agencies are trustworthy), continue to be able to issue debt at rates not much higher than the Fed’s. This last fact tells us either that these companies’ debt is wildly underpriced for its risk, or that the Federal Reserve’s is, or both.

Politically too, nothing of the situation in this country is normal any more. From Antifa riots and white supremacists feeling the first thrilling stabs of power and visibility, to final, total regulatory capture within the government, to the pre-Alzheimer’s tweeting habits of President Donald J. Trump, the masterpiece perpetual-motion markets have been content to treat anything and everything as normal-by-fiat, or normal-by-groupthink. The fundamentals are strong.

Elsewhere we read that personal indebtedness in the country has for the first time surpassed what it was on the eve of the financial crisis… but that this is really a good sign. Prudence, saving, and self-sufficiency, we are told, are destructive to American growth, and if these illiberal character flaws taint the public mind too much it may even be time to punish the reactionaries responsible with bail-ins. So far, the public has been most obliging, binging again and again on cheap credit to buy new cars and (now) largely useless higher education.

In Bitcoin and the cryptos, meanwhile, we have an especially pure example of the obsession of the Age of Virtuality: an entity that has no net merit, that lacks even the fiat currency’s backing by a powerful nation-state—and that is in fact highly parasitic, given the huge waste in electricity and hence CO2 production it brings—but can nonetheless become a symbol of futurity.

That symbolism is all that is needed, for it promises a complete escape into the Virtual, now tacitly our civilization’s only goal and hope. Through such tokenistic thinking, many now believe that there need be no decline in the markets, ever; after all, correction and price discovery depend on there being some observer-independent standard of correctness, itself equipped with some concept of limitation or scarcity. These concepts are absent in Virtuality, or can be treated as infinitely plastic, or dismissed as outdated, or ignored altogether. Lead us not into solvency, but let there be bubbles in everything, is the new credo.

Even so, as in the stock market recently, a few drops’ worth of cold reality at least seems to have leaked through the cracks of the cryptos. For now these virtual parasite-currencies are collapsing mercilessly across the board, and have become so unstable and proved so difficult to use that even attendees of cryptocurrency conferences are still required to pay in national currencies. (Here again we face an example of the nagging concept which many in our culture seem to have been educated specifically not to understand: that a thing is new is no assurance that it is better.)

Another interesting sign of the times can be found when we ask: what is it that most think triggered the recent flash-crashes in the Dow? Along with fears that the central banks may begin to charge interest on loans at a rate even half of what was normal just 10 years ago, the “wake-up call” took the form of nothing less shocking than reports of a modest increase in wages.

It’s hard to think of a more clear-cut example of how the imperatives of instinctive class enmity are at work in the decisions being made in this country: working people’s wages briefly edge up to a slightly less than starvation level, and the markets immediately panic.

***

For a long time now, the central question about this stock market, fueled as it is by central bank cash infusions as far as the eye can see, has seemed to be: is it a market at all, or an engineered aristocracy, a system contrived to generate mad money and infinite security for the investor class, and cement the rest of the world in debt-serfdom?

If it is such, then reality may take a long, long time indeed to leak in. Much as many doomers might yearn to see the captains of finance finally get the messy downfall they deserve for (redoing) what happened in 2008—to have the whole thing implode under the weight of its own wishful thinking and corruption—the glum truth of history is that such systems can go on for centuries. Or, given the madness already on full view in our latest presidential cycle, the knives could come out next year in a crash or a revolution. Whether we are convinced of perpetual-motion or not, we fool ourselves if we think that such designs can have predictable endings, especially once such high stakes are gambled so wildly.

In hindsight—though it’s giving the viziers of finance far too much credit to say it was planned as such—the Financial Crisis appears more and more to have been a perhaps inadvertent, de facto coup by the financial sector and large corporations in general. For the “bailouts” and near-decade of free-money “quantitative easing” policy that followed the Crisis marked not a return to true economic health, but an unprecedented merging of the interests of big companies and the agencies supposed to restrain them into one big, self-stimulating, noncompetitive blob—true proto-fascistic corporatism. Top it off with a billionaire, proto-fascistic president from a dynasty of big money and the framing is almost complete.

Imagine something like that plot device in some heist movies, where the ingenious bank robbers splice a repeating loop of “nothing happening” into the security camera feed, then make their move unseen. But in this case, the effect of the “heist” was to allow the full replacement of market systems, complete with their nasty tantrums and hard lessons even for the rich investor, by a new kind of wired-in, risk-free aristocracy, an investor/ownership class of “Architects” that creates limitlessly redeemable bubbles of virtuality for itself while securing the immiseration, through mass surveillance and debt serfdom, of the poorer classes, their natural enemies.

To push the analogy further: in 2008, it appears, the tape-splicing was botched somewhere. Some guard saw the same fly buzz past the lens over and over, and that was the tipoff for calamity. The triggers malfunctioned, some Architects actually got burned financially, and so the bubble abruptly became apparent to all and the crisis took hold.

Ironically, the problem with 2008 was that too many of the old, non-virtual forms of market accountability still worked, forcing ugly (if curtailed) consequences, though mostly on the rest of us. This time the Architects are confident they will not fail. 

Now the coup is quietly put aside, and the Crisis itself tacitly treated as if it did not happen. Certainly as far as mainstream economic thought goes this is the case—the sooner 2008 is forgotten or waved off as a no-fault “fluke”, the better. Here is more false progress: we shall act as though problems have been solved, when in reality we have simply decided to force the problems into unconsciousness and declare victory. “The fundamentals are strong!”

2008 ought to have been made into a harsh cautionary example for the elites of this nation against their outrages of corruption and cronyism, and strict laws should have restored the sanity and “boringness” of the banking sector—but because of the no-fault approach of the already pitifully-indentured government, the teaching moment ended up being twisted into its opposite, a $60 trillion (and how much more?) bill for inverse-reforming an already disastrous system into something worse. If an aristocracy is too soon to call, call the result the “band-aid” economy, or the “see no evil” economy: the handling of the crisis amounted to a solemn oath to the financial sector that there really would be no limits, that going forward any behavior however insane would be accommodated and excused.

Like so many issues we face, the problem is a moral one. For, despite plenty of hand-wringing ten years ago about the “moral hazard” of TBTF and the grumbling about letting the perpetrators get off scot-free (even granting that the TBTF banks are now better capitalized), we see now that moral hazard is nigh-unmentionable—and it’s also the only game in town.

With that elimination of “moral hazard” has gone, characteristically, any reasonable pretense of “free markets” or “animal spirits”. We have slid, over the course of the past 40 years, from a mixed economy with at least some market-based accountability into a corporatism redolent of the Fascist era, according to Edmund S. Phelps. Patronage has replaced competition, he observes, clogging the arteries of the old capitalist spirits (though he also continues to believe, much in keeping with the faith of the Age of Virtuality, that the dearth of major innovations since the 1960s is merely due to sociological incentive rather than hard scientific limits, a matter I have addressed elsewhere). In this way the market has, at least for a while longer, wound up without any teeth at all: those corporate entities too big to fail know they will, with a wink and a nod, be refinanced at lovely terms by their comrades in the Treasury and Federal Reserve. They know it, and everyone invested in their heavily-repurchased stocks knows it.

***

What we see so far, though, is not yet enough like the aristocracies of old to stabilize itself for long as such, for those aristocracies were at least steadied by generations of social custom, family ties and often feudally-based loyalties, all of which are highly attenuated or absent in the US. If instead the direction is towards proto-fascist corporatism, then given the outcome of the fascist governments known to history the prospects of long-term stability are worse still.

What we have so far seen is instead a gargantuan levitation, a contrivance against gravity and reality, mainly achieved by tossing the weight of 90-95% of the people right out the airlock. In such a situation, where principles and accountability have been suspended, there is also a levitation in time itself, in that there is no deeper history to correct a certain pattern of conduct (many traders now in the markets are already too young to remember a time when declines were understood to be part and parcel of capitalism). And so the direction of drift will invariably be towards increasing excesses, and increasing excuses to dismiss any problem as “not fundamental”. The result will be moral, and likely financial, ruin. Why should any citizen bother making honest profits when corporations and financiers can borrow (or burrow) their money for free, indefinitely?

But as with climate change, no one gives a damn, for now, as long as there is still gas in the tank and the cheap money doesn’t turn into inflation and wages don’t actually rise. And so the ride may yet go on, possibly for quite a while, until reality seeps in, in that surprising way it does, corroding through the system in places where we least expect it. There is no reason to expect a “correction” or two, however spectacular, to accomplish that.

And yet deep down people also do give a damn. For much of what is holding up today’s market is actually a terror deeper than that of 2008—for the debts and chicanery are only more vast now, the bailouts and free money have created only a tepid recovery in the real economy, and less-than-nil in quality of life.

As Phelps notes, there have been no recent developments or discoveries to open grand new areas of the economy, only dubious re-modelings of old ideas like fracking, vac-trains, ride-sharing and indistinguishably-fancier smartphones that people are starting to get sort of bored of anyway. Behind this, in turn, lie certain unbearable truths about the dire condition of the scientific enterprise, out of which, like a huge Christmas-stocking, our economy has over the decades gotten used to pulling one magic money-making toy after another, to slake our unblinking need for novelty.

Yet these ideas must be thrust into unconsciousness, or waved off as not “fundamental” enough—for if the reality of the current Everything Bubble is so much as allowed to be acknowledged, this time the resulting swan dive will mean there will not even be a hope of fake recovery. Even the ruse of capitalism, like the absent king whose image is still enshrined over every door in the land, would fall, never to be revived. What, then, would we believe in?

***

Above all the show must go on—meaning, in the larger scope, the ever-accelerating consumption of a finite material base, disguised temporarily by addictive virtual tokening. And so it will. “Needs” of the most arbitrary and artificial type, designed on high and then pushed on a population made dejected and impressionable by false dreams and false education, will continue to be manufactured and dressed monotonously as innovation. For however absurd this prospect seems, it is the only stability our system now knows; and the very poverty of thought that it has created in us can no longer produce its own antidote, only intensifications and repetitions.

It is a sad fact that most of the time human beings prefer not to think, then act, but to act, then rationalize. What cannot be rationalized, in turn, is excluded from awareness, added to the pile of things too bothersome to waste time on. Such reflexive mental triage is sometimes necessary in order not to be crippled by self-doubt; yet in the amounts that have become habitual in our economic system and other parts of our national (and global) life, it is the telltale of a sickly and spreading unconsciousness. For consciousness is nothing at all without the ability to recognize, and step outside of, one’s prior ways of thought.

So it is natural that our Homo Ignoramus economists will go on to say after this record drop, and perhaps many more to come, that the “fundamentals” are fine. But what fundamentals are they even conscious of anymore?

Sarewitz’s New Science

Just got through reading an interesting assessment of the problems facing contemporary science, by Daniel Sarewitz, at The New Atlantis.

The article is both spot-on and frustrating. Spot-on, in that Sarewitz very nicely gathers together a list of the forces behind why science is failing—particularly, diminishing returns due to crippling complexity and ambiguity of the remaining problems. He sees such problems as bedeviled by what he calls “trans-scientific” issues that fundamentally are too messy to ever be decided by science. This creates a world where scientists can “research” a problem indefinitely without creating any stable or useful gain in knowledge—which, it bears noting, sounds exactly like the “ironic science” that John Horgan was warning of over 20 years ago.

But then, having thus placed himself on the verge of a powerful but unpleasant conclusion—that science as a vital, vanguard-progressive enterprise is destined to transform largely into an archival, practical, and often sophistic one—Sarewitz seems to blink and change tack, deciding to argue that Progress could surely be restarted if only science became more command-and-control or goal-oriented, and specifically more focused on technological deliverables. His model is the Pentagon’s supposedly no-nonsense results-oriented approach to improving jet engine efficiency and information technology.

The credulous “as if” assumptions Sarewitz must make to sustain this belief are so numerous as to be hard even to list. As if research labs don’t clamor to snap up the newest technologies as soon as they are able! As if there aren’t currently legions of scientists very much searching and desiring to cure breast cancer! As if Einstein in his patent office or Fleming in his lab only performed their wonders by being set strict objectives by a somehow all-knowing boss! As if the answer to a problem of wicked complexity is to introduce an additional layer of managers and deadlines! And most of all, as if there have not been legions of failed and wasteful but very much “problem-solving” focused initiatives undertaken by engineers and defense agencies, even in the heyday of such initiatives!

That the examples Sarewitz cites—a cancer vaccine initiative that hasn’t found anything yet and a, ahem, woodpecker preservation initiative—are (with all due respect to our feathered friends) almost the opposite of compelling, or that the very trans-scientific nature of the problems crippling science would be just as intractable under any management style or incentive system, seems not to occur to him.

In the end, though his searingly clear-sighted assessment richly describes academic science’s abysmal if not terminal condition, Sarewitz’s remedy boils down to a mere ignoring of his own conclusions, and a contrived faith in can-doism. He can see the fatal contradictions in today’s research culture, and he can see their tracks leading up to the doors of Technology and Big Defense, but he cannot countenance that these precious bastions, too, have been blighted. Thus the necessary reckoning with the limits of our deepest assumptions is put off, and the conditioned reflexive belief in the eternal technological fix rears up, wearily and tediously, once again.

Some Thoughts on the Economy-as-Organism

Watching the markets and cryptocurrencies explode upwards lately for little or no apparent reason, it seems more and more as if it may be in the nature of money to eventually be driven to hallucinatory status by speculation, fantasy, regulatory capture and limitless debt. It is invented out of nowhere by the trillions of dollars for the benefit of the ultra-rich and the ultra-large companies.

Thermodynamics, or energy-based accounting, is unbiased in principle, having as it does some sort of grounding in physical laws, and it seems to offer an alternative to the money-centered view; but it is essentially impossible to apply straightforwardly to a process as complicated as a whole economy.

So what is real? What do we look at as an indicator of societal robustness? Of diminishing returns? Perhaps none such exists; as long as there is new stuff to dig out of the earth and new things to covet, the growth continues, and money flows are simply a rationalization or fig leaf for the deeper trend.

Given these bafflements, it may be that the only system we know of that is complex enough to be usefully comparable to an economy, or any sort of guide to it, is the metabolic/genetic life of a growing organism (probably not ecosystems, as these don’t grow and multiply or have distinct pathways comparable to economic sectors). Call it the econo-organism.

In this view, money is not the source of change nor the energy that propels it, only a kind of anabolic hormone that tells the econo-organism, “make more stuff”. (There is no catabolic hormone, or if there is, it appears to be a Thing That Must Not Be Spoken Of.)

Moreover, because many resources are essential to the econo-organism (non-substitutable), higher prices are not a given if such prices threaten the organism. For the resource MUST continue to be produced “economically” for the economy to survive.

So the price will be kept low, if necessary by money infusions to the producers. But by that point money has, again, become meaningless, or at least now reflects reality in a completely unintentional way. The signal has become pathological, rather like runaway inflammation.

Growth is a reflexive urge of the econo-organism. Only when every last pore of ground has been scoured will the econo-organism tip from triumphal log-growth into profound illness or quiescence.

Or equivalently, there may be a hidden threshold where a given resource becomes too dilute for even new technology to exploit it to net advantage, even with invisible subsidy. Then something like a vital cofactor deficiency will take hold, despite adjustments by the system.

For example, copper, an indispensable and largely non-substitutable element for the econo-organism, is now mined from porphyry deposits that may contain as little as 0.15% copper. The first deposits to be exploited, on the other hand, were up to 10-30% copper, but have since been depleted.

It is impressive on the one hand that the econo-organism has evolved or deployed a way of getting enough copper from deposits that seem rather dilute.

But on the other hand, that also means it now has to move and crush and chemically treat about 660 parts rock to get at 1 part copper (probably more since extraction isn’t totally efficient), versus 3-10 parts rock in the old days. At what point does diluteness, with its costs in energy and environmental damage, overcome the combined powers of cleverness and increased scale?
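
To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch, a toy calculation of my own, assuming the rock-to-metal ratio is simply the reciprocal of the ore grade and ignoring recovery losses (which, as noted, would push the real figure higher):

```python
# Toy calculation: parts of rock that must be processed per part of copper,
# assuming the ratio is simply 1 / (ore grade) and ignoring recovery losses.

def rock_per_part_metal(ore_grade: float) -> float:
    """Parts of ore handled per part of contained metal, for a fractional grade."""
    return 1.0 / ore_grade

# Early, rich deposits: roughly 10-30% copper.
print(rock_per_part_metal(0.30))    # ~3.3 parts rock per part copper
print(rock_per_part_metal(0.10))    # 10 parts rock per part copper

# Today's porphyry deposits: as little as ~0.15% copper.
print(rock_per_part_metal(0.0015))  # ~667 parts rock per part copper
```

On these assumptions, the shift from a 10-30% ore to a 0.15% ore multiplies the rock handled per unit of copper by a factor of roughly 70 to 200, before counting the extra energy spent on moving, crushing and chemically treating it.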

(Similarly with fracking and wind power: the econo-organism manages to keep up with its energy needs, but at the expense of much larger land use.)

This is analogous to an organism that begins to express higher levels of a high-affinity transporter to absorb a trace mineral it’s not been getting enough of through the usual lower-affinity one (or perhaps a goiter where the iodine-absorbing organ becomes disproportionately expanded due to iodine deprivation). The organism may switch on pathways that let it conserve and reuse more of the mineral. Or there may even be a mutation (innovation) that increases the transporter affinity even more.

These are all clever and helpful responses on the part of the (econo)organism. Yet eventually, if there is just no mineral at all, or not enough to sustain the pathway it is needed for, the organism will die–or at least stop growing.
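
To make the transporter analogy a little more concrete, here is a purely illustrative sketch using the textbook Michaelis-Menten uptake rate, v = Vmax * S / (Km + S); the numbers and the choice of this particular kinetic form are my own assumptions for illustration, not anything drawn from the economic discussion above:

```python
# Illustrative sketch of the transporter analogy using Michaelis-Menten kinetics:
# uptake rate v = Vmax * S / (Km + S). Lower Km means higher affinity;
# higher Vmax means more transporter expressed. All numbers are arbitrary.

def uptake_rate(s: float, vmax: float, km: float) -> float:
    """Uptake rate of a nutrient present at concentration s."""
    return vmax * s / (km + s)

trace = 0.01  # a now-dilute "mineral", in arbitrary units

# Usual low-affinity transporter: very little uptake at trace levels.
print(uptake_rate(trace, vmax=1.0, km=1.0))    # ~0.01

# Adaptation 1: express more of the same transporter (raise Vmax).
print(uptake_rate(trace, vmax=5.0, km=1.0))    # ~0.05

# Adaptation 2: switch to, or mutate toward, a high-affinity transporter (lower Km).
print(uptake_rate(trace, vmax=1.0, km=0.01))   # 0.5

# But if the mineral is effectively absent, no adaptation helps.
print(uptake_rate(0.0, vmax=5.0, km=0.01))     # 0.0
```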

The earth is mostly dilute, but it is also quite big. That is what makes it hard to tell whether there is a big problem, or a minor problem, or no problem at all. However the econo-organism is quite different from a natural organism not only in lacking a catabolic signal, but in having apparently no ability to switch to a quiescent state. Either it grows, or it begins to fall apart.

Reading List Roundup: Haroun and the Sea of Stories, by Salman Rushdie

Young Haroun lives in a sad city—one “so ruinously sad”, in fact, that it’s forgotten its name. His father Rashid is a prodigious storyteller, celebrated for his apparently limitless ability to spin witty yarns on the spot, which puts him much in demand in Sad City and wins him nicknames like “the Ocean of Notions”. But then Haroun’s mother gets fed up with her husband’s airy happy-go-luckiness and takes off with his exact antithesis: the oily, conniving and bureaucratic Mr. Sengupta, “a skinny, scrawny, measly, weaselly, snivelling clerical type”. “What’s the use of stories that aren’t even true?” she grouses before flying the coop—a refrain which becomes a kind of leitmotif for weary adult pragmatism throughout the book.

And so begins an avalanche of increasingly wild and surreal events that turns out to be no less than a voyage into the center of the storytelling psyche—the Wellspring of the Sea of Stories.

“Haroun and the Sea of Stories” is of course rich in imagination—at times almost defiantly so—and it teems with strange and memorable characters and images that testify to a virtuosic creative mind at work. Within a few pages of the start, it plunges the reader into a fantasy world of such whack-a-mole vibrancy and surreally intricate texture as to have an almost synesthetic quality, as smells and colors and sounds seem to blur and merge and emerge with new-born intensities and meanings. Neologisms and odd poetic rhythms explode across the page, some annoying and some unforgettable (such as “P2C2E”, or Process Too Complicated To Explain, which becomes another leitmotif of sorts, this time for the inscrutability of technical-bureaucratic thinking). It also has the distinct advantage of having at least three characters named “Butt”.

Given this defiant high-spiritedness, its childlike directness and audacity, and its too-real-to-be-real vividness, I found myself wondering if “Haroun” might have originated as a treatment for an animated or Pixar movie. But that much is impossible: “Haroun” was written five years before Toy Story launched the now-ubiquitous phenomenon of digital feature films.

But the connection with Pixar feels apt in another way, as probably the harshest criticism one could level against “Haroun” is that, in its very florid imaginativeness, it sometimes tips into a kind of literary version of the “uncanny valley”—a term for digital animations where the rendering is detailed enough for the characters to closely resemble real human beings, yet not detailed enough to make their motions and expressions appear lifelike. The result is something that the mind sees neither as cartoon nor person, but more like a zombie (see Zemeckis’s “The Polar Express” for some shudderingly creepy examples).

There are places in “Haroun”, then, where the reader will have a hard time deciding whether they’re enjoying a light Saturday morning cartoon, or a coded vision of madness and menace. Examples range from creepy rhyming fish to manic robotic birds to a prince-rescues-princess story that goes grotesquely wrong to an evil black ship of darkness that begins to nightmarishly melt around our protagonists, not to mention the oddly sickening description of the “Disconnector Tool”, which plays a key role in the plot: “…it had the general outline of a wrench, but it was somehow more fluid than solid, and was made up of thousands of little veins flowing with differently coloured liquids…”

This uncanniness extends to most of “Haroun”’s characters too. For all their number, color and antic fancy, most are paper-thin confections, robotic and often eerily repetitive in speech, giving no hint of development or subjective awareness.

All this may be another way of saying that “Haroun” is a deeper and darker work than it itself wants to be. So there is darkness and the macabre aplenty in these story-waters, something like the books of Dr. Seuss, whose dizzying unsteady landscapes and grotesque, pained-looking creatures I always shied away from as a kid.

But then, maybe conjuring uncanniness was at least partly Mr. Rushdie’s intention. Part of the “uncanny valley” in “Haroun” might have to do with the fact that the book seems to hit its deepest themes and reflections in considering (however fancifully) the relation of people to their own “shadows” (in this world, even stories have shadowy “anti-stories”). In “Haroun”, the shadows can even dominate:

“…in the Land of Chup, a Shadow very often has a stronger personality than the Person, or Self, or Substance to whom or to which it is joined! So often the Shadow leads, and it is the Person or Self or Substance that follows”.

As any Jungian would tell you, the Shadow is an archetype, representing the repressed negative contents of the personality—the “dark side” of ourselves that we don’t want to face. “Everyone carries a shadow, and the less it is embodied in the individual’s conscious life, the blacker and denser it is”, said Jung. And while Jung thought it crucial to face the shadow and own up to its contents, thus integrating the psyche, the super-villain of the story, Khattam-Shud, has done the opposite, to an incredible extreme: “…he has done what no other Chupwala has ever dreamt of—that is, he has separated himself from his Shadow!”

In this light, it might be that this creepy uncanniness in “Haroun” is a kind of masterstroke, as it forms the “shadow” to the book’s otherwise blindingly illuminated surfaces.

Of course, this journey into darkness is a brief and vigorous one, winning straight through to a happy ending that, without spoiling too much, is so unabashedly formulaic that it somehow miraculously ends up being note-perfect.

And so, putting it together, “Haroun” is a madcap, ripping yarn—a manically irreverent, shimmery-shiny, somewhat unkempt (and proud of it, thank you), multi-billion-colored nose-thumbing at Disenchantment, Bureaucracy and Conformity in all their creeping forms. If you’re looking for an inoculant against the smug instrumental mundanity of our own times, against the little Mr. Sengupta in all of us that tries, now and then, to obstruct (or deconstruct) our own Sea of Stories, you could do far worse than to spend a few hours with this little volume.