The Shame-Joy of the Savant Class (PART II)

No less questionable than the motivations of those searching for genetic contributions to general intelligence and other complex traits—their rhetorical slip-knots, their cocooned assumptions of science’s moral purity, which we have discussed previously—is their level of certainty regarding the results themselves. Charles Murray, for instance, in his interview with Sam Harris, implies, much like Reich, that the codification of indisputable genetic determinants of general intelligence is absolutely inevitable, and just around the corner.

Yet if anything, behavioral genetics gives every impression of having been—and continuing to be—a humiliating disappointment, as hundreds of genome-wide association studies (GWAS) have failed to converge on single-gene or even few-gene causes for complex qualities like intelligence and mental illness. Typically, the general finding has been that there are significant contributions from dozens if not hundreds if not thousands of genes, each producing a truly minuscule effect; embarrassingly, however, the lists of such predisposing genes produced by different research groups have often failed to converge or replicate experimentally. In one of the most notorious pronouncements on the subject, the editors of the journal Behavior Genetics in 2012 called for all but starting from scratch, claiming that:

“…the psychiatric and behavior genetics literature has become confusing and it now seems likely that many of the published findings of the last decade are wrong or misleading and have not contributed to real advances in knowledge” (Behavior Genetics, 2012).

But, unwilling to abandon their optimism (and funding) for genetically formulatizing complex human qualities once and for all, scientists have continued on in the mindset of “bigger is better”, piling on larger datasets and study sizes and more computational power, hoping this will finally yield a definitive answer. This has now led to studies of complex traits involving tens or hundreds of thousands of participants; for the nonce it seems the list of “causative” genes for schizophrenia, for example, has steadily grown and now stands at 145 or so, while for intelligence it is at least 500. However, for the most part, it seems that the plight of behavioral genetics has not improved greatly since the chastening realizations of 2012, and results of the search for “smart genes” in particular have been similarly underwhelming. Even very recent articles, such as Reich’s, that portentously describe an imminent age of intellectual classification by genetic testing—cue Brave New World‘s castes of “alphas and epsilons”—admit at the same time that while the heritability of intelligence is certainly substantial, only small fractions of the total variation are explained.

One therefore has to wonder: how and why are people like Murray, Reich, and others reaching such sudden, urgent certainty about the looming mastery of intelligence by genetic research? Where are the stupendous results, the impenetrable and narrowing cordon that will delimit the genetic basis of intelligence to such a precision that it would be indeed somehow socially remiss not to act upon it, not to shout it from the rooftops?

At the heart of the new furore seems to be a recent review, “The New Genetics of Intelligence” by Robert Plomin, one of the foremost researchers on the genetics of general intelligence. Plomin himself seemed to underscore rather than diminish the problems of behavioral genetics of intelligence when he wrote in 2016 that:

“Recent studies of hundreds of thousands of individuals have found genes that explain about 5 percent of the differences among people in intelligence”

But in his more recent review, Plomin is expansive, confident that the genetic conquest of complex traits—and hence the explanation of group differences—is almost a fait accompli. He admits that the change of outlook is very, very recent, and that “From the 1990s until 2017, no replicable associations were found”; but, he continues, the ultimate discovery of genetic bases of intelligence could not have been in doubt, since, arguing from twin studies, it was known that “inherited differences in DNA sequence account for about half of the variance in measures of intelligence”.
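(For the record, the arithmetic behind such twin-study figures is simple enough to state. The classical estimator, Falconer’s formula, doubles the gap between the trait correlations of identical (MZ) and fraternal (DZ) twin pairs:

$$h^2 = 2(r_{MZ} - r_{DZ})$$

With illustrative values of $r_{MZ} = 0.75$ and $r_{DZ} = 0.50$, chosen here purely for arithmetic convenience, one obtains $h^2 = 2(0.75 - 0.50) = 0.50$, the familiar “about half”. The formula stands or falls, of course, with assumptions such as equal environments for both kinds of twin pair.)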

Half the variance is a considerable amount, but hardly enough to usefully “predict” any individual—and this assumes we have no problems with twin studies, an assumption that Plomin only admits, halfway through the review, is by no means trivial. But let us pass over that. So the (alleged) watershed is very, very recent indeed, according to Plomin—almost entirely down to the last 6–12 months. Moreover, we learn the advance rests almost entirely upon two developments: first, the undertaking of even larger GWAS studies—yielding so-called genome-wide polygenic scores (GPSs)—involving not tens but hundreds of thousands or even millions of participants, thus increasing the possibility of detecting incredibly minute effects; second, the discovery of a statistically useful proxy for intelligence, namely “educational attainment”.
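(For readers unfamiliar with the machinery: a genome-wide polygenic score is, at bottom, nothing more exotic than a weighted sum of genotypes. The sketch below is a minimal illustration; the SNP names and effect sizes are hypothetical, standing in for the hundreds of thousands of real variants, each weighted by its GWAS regression coefficient.)

```python
# Hypothetical GWAS effect sizes (regression betas) for a handful of SNPs.
# Real GPSs sum over hundreds of thousands of variants, each contributing
# a vanishingly small amount.
EFFECT_SIZES = {"rs0000001": 0.021, "rs0000002": -0.013, "rs0000003": 0.008}

def polygenic_score(genotype):
    """Weighted sum of allele dosages (0, 1, or 2 copies of the scored allele)."""
    return sum(beta * genotype.get(snp, 0) for snp, beta in EFFECT_SIZES.items())

# One hypothetical individual, given as allele dosages at each scored SNP.
person = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}
print(polygenic_score(person))  # raw score; in practice standardized against a cohort
```

The score is then standardized across a cohort and correlated with the measured trait; the square of that correlation is the “variance explained” figure that recurs below.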

On the first point, the use of absolutely enormous sample sizes: the resulting predictive power of these studies seems, frankly, appropriate to the puniness of the individual effects. One GPS study from 2016, with 125,000 participants, was able to account for a mere 4% of the variance in intelligence. Another, with a whopping 280,000 participants, also accounted for 4%. Still another, now in progress with more than a million participants, is incomplete but, it is thought, may account for over 10%. (What, one wonders, is the value of science that requires screening a sizable fraction of a whole population to develop even such weak baseline hypotheses?)

On the second point—that of the new proxy—it seems rather remarkable that much of the recent sound and fury about intelligence, race and genetics owes to the less-than-shocking “discovery” that “years of education is highly correlated phenotypically (0.50) and genetically (0.65) with intelligence”.
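(A reminder of what such coefficients amount to in variance terms: squaring a correlation gives the share of variance two measures hold in common,

$$r = 0.50 \implies r^2 = 0.25,$$

so the celebrated phenotypic correlation means that years of education and measured intelligence share about a quarter of their variance; the genetic correlation of 0.65 corresponds, likewise, to roughly 42%.)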

One problem with this story (besides that intelligence itself, already somewhat fraught as a concept, is not even being directly measured) is that, although Plomin et al. adopt with little circumspection the explanation “the smarter you are, the more you go in for school,” alternative possible explanations for these correlations suggest themselves easily. For instance, students with a very poor grasp of material may be kept in school for longer, or take more time to attain the same level; extremely brilliant ones may leave in frustration or prefer to self-educate; or perhaps they may show a more practical-minded intelligence by choosing not to take on vast educational debt by staying longer.

Another problem is that “years of education” inherently does not lend itself to distinguishing at the very high or low ends. One tends to finish a PhD in a certain standard amount of time, for example, regardless of intelligence; therefore variation among PhD achievers will be ignored by such a rubric. Plomin et al. admit this proxy is “largely bimodal”, indicating mostly just whether an individual completed university or not. Surely this bimodality will ensure that much subtler and more important gradations of mental ability are largely washed out by the college/no-college distinction.
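(The washing-out is easy to demonstrate with a toy simulation; the numbers below are made up purely for illustration, not taken from any study. Generate a continuous trait, derive a noisy “years of education” measure from it, collapse that into a college/no-college indicator, and compare correlations.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up toy model: a continuous trait and a noisy "years of education" proxy.
iq = rng.normal(100, 15, n)
years = 12 + 0.12 * (iq - 100) + rng.normal(0, 2, n)
college = (years >= 16).astype(float)  # collapse to completed-university-or-not

print(round(np.corrcoef(iq, years)[0, 1], 2))    # continuous proxy: ~0.67 here
print(round(np.corrcoef(iq, college)[0, 1], 2))  # binary proxy: markedly lower

# All gradation above the threshold is discarded: a PhD and a bachelor's
# holder receive the identical proxy value.
```

The binary indicator necessarily correlates less with the underlying trait than the continuous measure it was cut from, and everything above the cut is flattened into a single value.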

(It also is worth considering that to equate intelligence in any way with sheer amount of time in the educational system is, inadvertently, to pay an immense and ill-warranted compliment to the educational system as is—one of many implicit homages to the social status quo buried in the work of Plomin and others.)

As for the IQ-affecting genetic variants that result from employing these unprecedentedly gigantic experimental groups—these variants whose discovery causes such sudden delight and excitement and feeds hopeful talk of an inevitable ethno-psychometric partitioning of society—they turn out to be unbelievably weak, each on average explaining about 0.02% of the variance in IQ. Many of the variants are not in genes at all, as had been originally hoped in the early days of GWAS studies, but appear in tiny regions of DNA between genes that apparently have minuscule, indirect regulatory effects.
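(Taking these figures at face value, the arithmetic is worth spelling out:

$$500 \text{ variants} \times 0.02\% \approx 10\% \text{ of variance,}$$

which is just the ceiling the million-participant study hopes to approach; even the full basket of “hits” would leave roughly 90% of the variation in IQ unaccounted for.)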

And so let us trace the rickety path of reasoning and wishing behind the new exultation of Plomin and others: we have gone from dismissing doubts about IQ as a true measure of genius (versus mere talent)… to still-questionable twin-study estimates that, even if accepted, say genetics only covers half of the variance in IQ… to the realization that there are no strong single genetic predictors of IQ but rather a basket of minuscule ones… to the use not of actual IQ but a partly obvious and partly dubious proxy measure of it… to the finding that all these still only predict far less than half the variance suggested by said (partly questionable) twin studies, and thereby… to, veni, vidi, vici, the brave assurance that the links between intelligence and genetics are all but solved!

Given that the scientific uncertainty about the nature of genetics and intelligence seems to be being underplayed, and that the sheer size of the undertakings required to garner these “game-changing” results resembles, from a modest distance, not so much triumph as dogged persistence against diminishing returns, we cannot help but wonder whether they are worth the trouble—and what motivations might be driving them on despite such headwinds. Plomin tells us that “GPSs for intelligence will open new avenues for research into the causes and consequences of intelligence”; even more astonishingly, we learn that “heritability is an index of equality of opportunity and meritocracy”. Apparently it is scientifically uncontroversial to equate hereditary wealth with meritocracy, or to suggest that the more prosperity stays confined within family lines, the more just the society is—and, apparently, no other serious alternative explanations are imaginable.

Finally, heritability of intelligence as assessed by the twin studies Plomin references seems to increase with age. Here we see another example of choosing the preferred rationalization: usually, strong early-life correlation is taken as a sign that an effect is present even before the environment has been able to influence it, hence is more innate—indeed, elite IQ-themed academies such as the Davidson Institute are founded on this premise of assessing young children for “intellectual gifts” at an early age. But Plomin et al. find no difficulty in reversing the argument as it suits them, and seeing the greater correlation later in life instead as an uncovering of an innate tendency, a gradual throwing off of environmentally-imposed restraints: “DNA variants increasingly have an impact on intelligence as individuals select environments correlated with their genetic propensities”.

***

Of course, Plomin et al. do supply a dose of the obligatory sober-faced Reichian concern about the need to “acknowledge the risks of discrimination”—yet if, as they strongly advocate, intelligence tests ought to be much more widely deployed as a condition for gaining employment and other societal perquisites, they do not make at all clear what the point of such tests would then be other than to discriminate. Such a clarification becomes all the more critical in light of their comments that seem to lend “scientific” endorsement for the siloing of wealth within family lines.

What, really, can be the motivation here? At most, if we gather even more data—perhaps from tens of millions of individuals this time—we may one day assemble a panel of genetic variants that explains fully 50% of the difference between any person’s IQ and the mean. But what possible use is that? Are we planning to begin job screening by genetic test? Even in that case, if intelligence is the target, giving an actual IQ test would surely be more useful.
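(Even granting that eventual 50%, the predictive payoff deserves to be quantified. With IQ scaled to a standard deviation of 15 points, a predictor explaining half the variance leaves a residual scatter of

$$15 \times \sqrt{1 - 0.5} \approx 10.6 \text{ points,}$$

so a typical genetic “prediction” would still miss the actual score by some two-thirds of a standard deviation.)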

Of course there is also the prospect of using genetic testing purely for personal edification, say to learn one’s “genetic susceptibility” for high or low intelligence, and many people have already done so through online services like GenePlaza, DNA Land, and 23andMe—of which all but the last have already gone ahead with offering “genetic IQ” tests. But this seems like little more than a frivolity. Without the promise of categorizing and pre-determining people’s potentials and futures, and of legitimating existing inequalities, scant uses for such testing suggest themselves and, perhaps not incidentally, Plomin et al. propose very few. It is perhaps telling that they “cannot resist” the double-entendre between the “GPS” abbreviation and the Global Positioning System—a monolithic arrangement for localizing and tracking individuals or, in their case, creating “profiles of genetic strengths and weaknesses for individuals who could be targets for personalized prediction, prevention and intervention”. (The “intervention” part, to say the least, is unsettling.)

The ossification of socioeconomic reality into a rigid underclass and upper class throughout the world is already a fact in our time, having been growing dramatically for the last forty years; the question at hand is the scientific legitimation of such circumstances by way of the concepts of psychometrics and genetics. What more potent defense of the new hyper-unequal status quo could there be than to say that the existing arrangement of superiority and inferiority is for the most part natural, just, and even optimal? Plomin and his colleagues’ vision seems to be one of human optimization through self-segregation, occasionally reinforced by stringent psychometric testing and, partly in its stead, genetic profiling; Taylorism, it seems, is to be put to work on our thoughts, our cells, our very genes.

Once again, the secret glee in ratifying and even intensifying the divisions of power that already exist in society, the barely-contained longing to expand the prestige such scientists and intellectuals already enjoy due to their IQ—is all too evident to ignore. Through the crack under the laboratory door, the shadow of schadenfreude extends and plays out its now-familiar arabesques.

***

As this picture takes shape before us, one sees more and more of Chomsky’s point that there is not much interesting science here—albeit perhaps an interesting agenda. After all is said and done, Plomin’s talk about understanding new causes of intelligence is mostly vacuous, since the variants being found in the studies he extols are ubiquitous in the genome, contribute almost no predictive power individually, and show no mechanistic connection with neurological development, metabolism or any other specific biological sub-system.

One is tempted to say that these variants are not so much “found” as “dredged”. There is no understanding here, no attempt to explain why any given genetic variant has any effect on IQ whatsoever—a hope which has effectively vanished with the explosion of contributing factors, their minuscule size and their almost limitlessly tangled interactions—there is just the gathering of more and more data to get better and better predictive statistics which can be used to classify and control people, for underlying reasons that are never made overt. Such an activity has little to distinguish it, scientifically, from many other correlative “big data” investigations that have rapidly become fashionable in science, such as the correlation of SNPs with finger length ratio or of geography with musical taste. (Chomsky uses intelligence and height as another example of such plausible-yet-nugatory correlations.)

Indeed, this pattern of a Pyrrhic victory that wins little or no understanding after offering up a colossal body of raw information is coming to be startlingly typical across the biological sciences, so much so that it may ironically in itself represent a much more interesting sort of meta-level discovery about the nature of living things. Recent studies at Stanford, using data from large GWAS and other genomic datasets, have run with this idea—and found that not only intelligence, but many if not most complex traits and diseases, such as height, schizophrenia and rheumatoid arthritis, do not have anything like a tractable number of “core” causes, but instead are influenced by literally hundreds of thousands of genetic variants, most having extremely weak effects. Even more remarkably, these influences are dispersed almost equally across different cellular systems, with little regard for the system most obviously affected in a given trait or disease. In other words, complex phenotypes like intelligence may have no cause as such at all, instead being influenced by nearly all parts of the genome at once—a picture the Stanford authors describe as the “omnigenic” model.

It is hard (though not impossible, with enough advanced training) not to notice that such a model, however fascinating in itself, represents a kind of death-knell for the hope of reductionistic, mechanistic “explanation” of disease or of traits like intelligence. It is equally hard not to notice that the same will almost surely be true of the prospect of interventions to change these qualities; it is exceedingly (and increasingly) difficult to drug even one target successfully, but 100,000 at once? And while one could still imagine deriving some scientific insight from trying to discover what brain attributes seem essential to produce higher general intelligence, the way to pursue this likely involves the study, not of scattershot genetic variants that contribute minuscule and untraceable effects, but of brains themselves, through functional imaging studies—though these, too, have immense reproducibility problems of their own.

In an omnigenic world, alas, contradictory results seem to be almost part of the territory. As biologist Robert Weinberg put it, referring to analogous attempts to find key explanatory genes for cancer using big data:

“The gaping distance between these data sets and a true understanding of cancer biology is illustrated by the amusing fact that two distinct expression array analyses of cells in breast cancers have been found to be equally useful in predicting future clinical behavior of these tumors but contain almost no genes and thus proteins in common.” (Weinberg)

Let us not underestimate the scientistic-economic optimism that insists such traits will turn out to be manipulable anyway, surely for the good of society. Where the prospect of controlling large numbers of one’s perceived inferiors is at stake—and data and numbers stand ready to assist—the sense of mission may be almost as irresistible as the assumption that one’s cause, simply for being scientific, must be just.

***

If there is any potentially positive aspect to the whole matter of genetically significant group differences, one might say it is that it constitutes a serious blow against interchangeabilism—the view, increasingly pervasive in progressive and even “individualistic” countries, that all human individuals are fundamentally the same and therefore can and should be treated like fungible tokens in a gigantic social machine, much like individual dollars in a globalized economy or 1’s and 0’s within a digital processor.

In this vision (which has gained traction to a large degree through the wholesale adoption of digital devices, interactions, and metaphors), the individual’s role in society is to be exchanged, transferred, and utilized as the needs of the system dictate: to respond as trained, to stay put until despatched elsewhere, to produce as demanded, to create no discord; not to question, not to reason, not to dream, and certainly not to revolt. The goal becomes, on the one hand, to produce sufficiently standardized individuals as to allow them to be interchangeable, thus increasing the efficiency of production, and on the other, to construct a societal “machine” ingenious enough both to maintain itself against any challenges or anomalies and make optimal use of the tokens (aka people) placed at its disposal. (This formula is essentially equivalent to turning humanity into what Heidegger called “standing-reserve”.)

Interchangeabilism is itself the unifying principle between economic and social liberalism as commonly practiced, as well as in the most modern socially liberal conceptions of justice. One can spot this connection in many places; for instance, in the work of the arch-liberal political philosopher John Rawls, who proposes that a just society must be designed so as to be acceptable in advance by featureless, quintessentially interchangeable “reasonable citizens”, unmistakable kith-and-kin to that equally interchangeable (and increasingly untenable) cipher, the Homo economicus of liberal economics. Yet these two liberal aspects, the social and the economic, are popularly taken to be implacable opposites, sharing the term “liberal”, it is assumed, only through some infelicitous coincidence; and indeed they have come to be associated with very different imagery. The economic “liberal”, for example, evokes corporate-financial functionaries in glass office towers, single-mindedly and often ruthlessly strategizing to achieve maximal returns and the wholesale expansion of technical and material activity on all fronts. The social “liberal”, on the other hand, brings with him an appearance—an aura, perhaps?—of looseness, accommodation, permissiveness, often a rage at injustices that can seem to him as all-pervading as the air. His stated prime goal is not profit, but the loosening of all operative cultural, national, ethnic, religious, and sexual differences, norms and restrictions (which he almost reflexively equates with injustices), with the simultaneous promotion of a kind of carefully de-fanged “diversity” whereby differences are lavished with praise and even made sacrosanct.

Yet for all these seeming differences between the unflinching, tide-like expansion of capital and the yearning to rupture all forms and customs to create a perfectly just “diverse” society, and for all they may malign each other, the two sides commonly work in tandem (not forgetting that they may, as is increasingly common, coexist within the same person). Here is the general scheme: first, the economic liberal creates and disseminates the standardized distractions and luxuries that the cultural liberal craves as a touchstone of his identity, and also—through the great and essentially nihilistic power of the market principle’s focus on profit, increase, and mass-production for their own sake—delivers the first undermining blows against the various institutions of any newly-encountered culture. The cultural liberal, in turn, does not resist but actually furthers the homogenization process begun by economic liberalism (here is his great conceit), refining newly acquired people and institutions by discharging their differences as injustices to be fought and erased: “indiscriminateness is a moral imperative because its opposite is discrimination”, in Allan Bloom’s formulation. The cultural liberal thus strives to warmly and seemingly unconditionally welcome the newcomer into the market-herd, typically while combining this welcome with economic incentives that foster dependency on the market.

In general, the cultural liberal facilitates the destruction of active cultural difference and individual idiosyncrasy, by gently transmuting them into quaint and much-desired museum-pieces or commodities. Excluding those elements of a prospective culture that cannot but be viewed as extreme barbarisms and so must be forbidden for legal or public relations reasons, there are three main paths for any incoming novel element of difference. Firstly, museumization, whereby the elements are shorn of their functioning, living context and reposed in designated places of what one might call “instrumental reverence”, to be preserved, admired superficially, obligatorily and perhaps academically, and thus made part of the standardizing process as educational artifact, instead of a possible active obstacle to the pursuit of interchangeability. Secondly, the differences may themselves simply be fetishized into products—not just museum-pieces then, but exotic objects of desire or distraction, likewise shorn of their original cultural function and therefore safe, ready for commodification and digitization. Thirdly, differences may be converted into political power-tokens through the now-familiar machinery of identity-politics, which too resembles commodification in practice, incorporating as it does various and sundry groups into interchangeable instances of the “oppressed” ready for mobilization by the relevant political elites.

These paths are not always mutually exclusive, but in any case, the cultural liberal, by praising “diversity” while actually dismembering, leveraging, and museumizing it, corrals these rebarbative (or even barbaric) non-interchangeable cultural elements away from any potential confrontation with the mechanisms of capital formation or the overall nihilism of the system; if possible, (s)he will go one better and actually make them marketable. Or, to quote Bloom again: “…in attacking ethnocentrism, what they actually do is to assert unawares the superiority of their scientific understanding and the inferiority of the other cultures” (COTAM, 29).

At this point, the cultural liberal once again hands off to the economic liberal, who puts the newly interchangeable human tokens to use in the ceaseless expansion of the market-principle (most likely as wage-labor), and takes any new commodities devised out of the acquired and digested culture as a perquisite. The requirements of interchangeability are fully met now; both sides have done their work; society emerges larger, richer, busier, in some sense “more diverse”… and yet paradoxically even more homogeneous and anonymous than it was before the acquisition. And so liberalism here has been turned upon itself, producing an increasingly illiberal conformity and a blinkered materialism that seem to bear little resemblance to the original and laudable liberal goal of the free unfolding of individual thoughts, actions, and rights.

How does this arrangement connect to the question of genetic group or racial differences in intelligence, or any other complex trait of personality or behavior? Simply because genetic discoveries of such differences—if they were somehow shown to be totally objective and reliable, totally free of schadenfreude and other invidious contexts—would indicate that there are differences not only on the individual, but also on group and hence even culture-sustaining scales, of a kind that cannot be simply pounded out of existence by market forces or other tools of homogenization-through-decontextualized-diversity. Were there a mere handful of “master genes” for intelligence or other qualities, then it would have been conceivable, though still very difficult, to “cure” those with lower IQs, or with “difficult” personalities and so on, and eventually conform them to the system as standardized units of labor; instead, the finding that tens or hundreds of thousands of variants are responsible makes such efforts impossible. In however repellent a way, this obstruction would put a kind of hard limit on the drive towards complete interchangeability, the complete homogenization and formulatizing of culture and thought. But again, if this is a sort of victory against interchangeabilism, it is an unsatisfying one, because we see in it not a true grasping of individual and group possibility and power (of, for want of a better word, spirit or soul), but a matrix of missteps tainted by greed, by arrogance, by narrow instrumentalism, by biologism, and of course by schadenfreude.

Admittedly, even among those who work with utmost devotion toward the goal of interchangeability there must be, however grudgingly, acknowledgement of difference. Certainly the differences in economic power between different individuals and classes have been enlarging over time the world over, as already attested. Also, certain specializations demand an advanced or deep understanding of complex or obscure topics, or require the development of highly precise skills—or, more bluntly, may demand sheer wealth in order to finance this or that venture. So in these areas, indeed, non-interchangeability—that is, idiosyncrasy and uniqueness—seems to keep a foothold.

But note that this foothold is even narrower than it seems, and in important ways. First, most obviously, it mostly amounts to a dispensation or indulgence of difference only for special (powerful) individuals; the most important professionals and oligarchs are permitted their eccentricities or their unusual aptitudes, and are accepted as such because their power and indispensability buys them that freedom. This is just a dreary matter of “might makes right”. But also, and more insidiously, even under these special elite conditions, difference, insofar as it is acknowledged, is not given free rein or appreciated in itself, but is mostly reckoned purely in terms of the statistical distribution of some simple variable(s)—not as a truly unique phenomenon that points toward something deeply non-systematizable about both individuals and groups, but on the contrary, as essentially interchangeability plus an information-free error term that can be compensated for and then dispensed with. Thus, we have simply more Taylorism; to the extent that non-interchangeability is conceded, it is also constrained. Individual differences are reckoned not as an immense gamut of human possibility, an invitation to adventure, but in terms of single metrics such as achievement scores, degrees earned, net worth—and of course IQ, a paragon of such a coldly probabilistic conception of difference, and the very one which Murray, Plomin, Reich and others eagerly foresee underwriting a kind of virtuous inequality in the (no-longer-so-new) “knowledge economy”.

It is at this last point that the abandonment of soul for intelligence as the mark of humanity bites. For even when souls are taken to be equal—as they are before God, say, in most Christian traditions—they are never taken to be identical or interchangeable, but indeed utterly unique and free. The soul—not only of individuals, but of whole cultures and ethnicities—is a going concern, something that cannot be torn away precisely because it cannot be reckoned; a collection of numbers, on the other hand, whatever their statistical distribution, readily admits of ranking, rationalization, utilization, and in the end commodification.

And so we realize that by the time we have reached the IQ-difference-based “genetic meritocracy” idolized by Plomin and others, the problem of interchangeabilism has already worked its way into the cake, so to speak, indeed is as entrenched as ever, because the focus of the difference is on the wrong thing—not on individuality in all its possibility, but on individuality as it appears when compressed and desaturated through the lens of the metric. The result of this fixation on single values—which shares with “big data” (despite the latter’s high-dimensional trappings) the same thinking and goals, and the same flaws and narrowness—is a thoroughly modern fetish one might call “metrical blindness”.

As noted, since the differences turn out to depend on such a huge and complex set of genetic changes, each individually almost negligible, the likelihood of genetically mass-engineering humans to standardize and eliminate complex-trait differences (so as to make them into interchangeable, standardized units) is remote. But that the complexity of the genetic causes is surely intractable for purposes of active social control is scant comfort. For from the workings of metrical blindness comes the greatest irony of all: through the insistence on the statistical, the dominion of mediocrity is already established in the very form in which differences are first articulated, in the terms and structure of the game, in the very attempt to mediocritize exceptionalness (for to make interchangeable is inevitably to mediocritize). All this has happened through the structure of the approach, not just the metrical but the statistical; for the statistical takes mediocrity as its starting-point, even before the implicitly desired, genetically-inspired, policy-based “clean-up work” could begin to produce a truly difference-less, uniformly excellent kind of human, a fungible unit of workforce-productivity, one perfectly and uncomplainingly suited for mass integration into market technocracy.

“Children are not yet fools,” the psychiatrist R. D. Laing once wrote, “but we shall turn them into imbeciles like ourselves, with high I.Q.’s if possible.” He might have done still better to have said “mediocrities” instead of “imbeciles”, for those dismissed as imbeciles at least have the potential, now and then, of doing surprising and even unpopular things. Looking back, we see this mediocritization acutely in the use of “educational attainment” as a proxy for intelligence, so celebrated by Plomin et al. It may be a reasonable first approximation that smarter people tend to enjoy learning more, and so go in for more education, and inversely for less intelligent people. But it is a bizarre error to carry this rough correlation beyond the most hopelessly broad conclusions; the far extremes of intelligence, in fact, may show trends opposite to those of the middle ranges, and there is much interesting evidence through the years that this is the case.

Meanwhile, the famous Flynn effect—the observation that IQ scores have risen globally by about 3 points per decade since 1951, a full standard deviation in roughly fifty years—suggests that “g” is not really the hard, ineffable truth some would want it to be; that how one scores is pliable in some way we don’t understand, that one can get it “from the air”. Looking afield of psychometrics, and taking the risk of viewing anecdotal evidence seriously, there are many indications that people of earlier generations were anything but “less bright” than today’s average, and may in fact have been better educated and/or more widely knowledgeable and imaginative. (Note also Randall Jarrell’s essay “The Schools of Yesteryear” and various interesting if fragmentary reflections on the far higher educational standards that obtained in past generations, despite their supposedly pitifully lower IQs.)

Again, one suspects that part of the problem with the scheme stems from the meaninglessness of measuring intelligences that are too rare to be statistically encapsulated—not just the very high and the very low, but the off-the-scale altogether—all of which might humble the system and are therefore actually interesting. And so we may begin to suspect that what is evident in the Flynn effect and elsewhere, and with the genetic profiling of individuals, groups and races, is not so much a great brightening as a great narrowing: a mass-cultural homing-in on certain specialized kinds of “formal operational” intelligence, paired with a deepening neglect or even obliviousness to other aspects (with which the formal-operational may be significantly but far from perfectly correlated).

The very fact that intelligence is turning out to be intractably complex and multifactorial in its causes inevitably suggests that general intelligence itself may be extremely multifactorial, and that what we call “g” owes not to a singular force, but to an extremely complex and specific coincidence of events, even if on average a single number is a reasonable predictor of socioeconomic excellence. Plomin et al. of course are sure this cannot be the case, averring for instance that “extremely high intelligence is only quantitatively, not qualitatively, different genetically from the normal distribution”. Since no small subset of genes exists that strongly affects intelligence test results, it follows for them that all forms of intelligence are purely gradations of some underlying, inviolate, quasi-Platonic essence. That this claim can be made in a high-level journal when no statistical test is capable of quantifying truly extraordinary (i.e., statistically un-analyzable) forms of intelligence, alongside data that admittedly “explains” at best 10% of the variance, all while offering zero mechanistic clues (an “atheoretical approach”), can only be seen as a paragon of the rapt dedication to scientism that has become fashionable among our increasingly desperate technocrat-savants.

In the end, by necessarily leaving out the truly extraordinary mentalities—those whose measure cannot be meaningfully taken in a scale designed around what is already graspable and commonplace, even if to a chosen few, or by genetic decomposition into simple causes—what is really being enforced is just the mediocre, disguised as higher- or lower-functioning versions of it, Plomin’s “purely quantitative differences”, which are all that IQ is competent to assess. But it is the exception that is the mystery and the germ of change. As the philosopher Kierkegaard put it,

“Over time, one tires of the interminable chatter about the universal and the universal, which is repeated until it becomes boring and vapid. There are exceptions. If one cannot explain them, then neither can one explain the universal. One generally fails to notice this, because one does not normally grasp the universal passionately, but only superficially. The exception, on the other hand, grasps the universal with intense passion.

“When one does this, a new order of precedence emerges, and the poor exception, if it is ever any good, appears again, as the poor step-daughter in the fairy tale, restored to a position of honour.” (Repetition, p. 78)

One could, in earlier situations, similarly hope that, though the universalizing (or mediocritizing) tendency was ascendant, at least “the exception” could, with sufficient passion, win through and even teach the universal a thing or two. But one is no longer so sure with broad-brush, schadenfreude-fueled “big data” approaches, whose essence is to leave aside the understanding of mechanisms or the imagining of new possible explanations and fields, and instead to institute rule by the statistical, by the view of minds as simply another aspect of standing-reserve. Regrettably, that such rule inevitably produces closure of thought and self-fulfilling prophecies aplenty does not seem to have yet crossed the (doubtlessly high-IQ) minds of the mandarins of psychometrics.

***

In closing it might be useful to make a sort of brief, impressionistic effort to put all these developments in a more general historical-political context.

Not long ago, one would not have thought such things imaginable in open public discussion. But as time carries us further and further away from the geopolitical eruption of WWII—out of whose volcanic madness there crystallized so much of the economic, philosophical, and social consensus that we now take as almost tantamount to civilized life—so too has the last living and visceral memory of the terrors of the scientific racism so integral to that conflict begun to fade.

In the immediate postwar era, a quite legitimate horror at the atrocities committed under the cover of racially-based reasoning during the conflict led to a concerted movement in the opposite direction, to a critique of society based entirely on social construction and on human interchangeability (which also, as it happened, chimed well with certain aspirations to the unhampered, global exchange of labor and capital). This critique included a categorical rejection of any possible notion of innate and heritable differences between sub-populations of H. sapiens as even a coherent concept, except for certain obvious matters of appearance or simple genetic traits. The result, perhaps most famously condensed in the UN’s 1950 declaration on “the race question”, is what author Kenan Malik dubs “UNESCO man”: a standardized vision of human dignity and diversity, rejecting at the same time any claim of non-cultural differences between individuals or groups.

In retrospect, this approach amounted, in large part, to simply punting on the problems of racism, ethnic chauvinism, and revanchism that had led up to WWII. Clearly, after the war, there was a need to dismantle or radically revise these perspectives in order to head off future atrocities and violence. Yet difference, problematic or not, finds its way out, and preoccupies the minds of people and nations; to be different is to be born, to stand independent, to articulate a new vision, to feel a different heartbeat that one knows is not another’s. The urge to differentiate one’s self or one’s group, to proclaim a deep, physical and non-contingent uniqueness, is often the whole work of a lifetime, or of a people; it is as fundamental and implacable an urge among men and women as its opposite, the seeking of common ground and union. By declaring the problematic (and oft-monstrous) views of difference that had led into the war null and void by moral fiat while replacing them with essentially nothing—by offering no alternative, dignified way of thinking of differences as anything but environmental and cultural fictions or quaint tokens of exoticism—the UN assured that “UNESCO man” would be brittle, requiring a precarious quietism on group-differences research, of the kind proposed by Chomsky, Horgan and others, in order to endure.

At the same time, the notion of interchangeabilism within the “UNESCO man” doctrine gained vast approval, instituting with it the mediocritizing, statistical view of man (which had already taken hold in industry, education, and science) as the inadvertent new descriptor of all human difference. Non-quantifiable but perhaps much more ennobling ideas on the source of true uniqueness and merit, such as the spirit or shared heritage of peoples and groups, or the ineffable essence of an individual mind, were dissolved; only numbers, with IQ prominent among them, retained legitimacy. Interchangeabilism thus paradoxically planted the seed for a resurgence of the statistical classification (and stratification) of groups and the rebirth of a new, maybe even more dehumanizing interpretation of difference.

And so we now face strange new things—some of which, however, turn out to be revivals of very old phenomena, whose “newness” is only due to the blithe ignorance of history that plagues our increasingly attention-deficit, web-addicted civilization. The inexorable force of forgetting, the loss of vigorous, living, one might even say spiritual memory that our own technological egoism has encouraged—especially of the lessons of the totalitarian 1930s and ’40s—is now combining with the flawed postwar reaction of asserting interchangeability and cultural relativism to open a secret door to the very things we supposedly find anathema. Monsters are becoming thinkable, venturing out tentatively upon legs that were long ago thought broken for good. We are witness to a resurgence of racism, of a narrative of differentiation in a fork-tongued form, with interchangeabilism and its insatiable appetite for mediocrity intact and even in full accord with it.

Such times present grand possibilities for re-imagining the future, in ways that may be either rejuvenating or wretched and cataclysmic. Intelligence, despite its undoubted importance, is likely to turn out to be a test case: the use of non-genetic data to observe, manipulate, and constrain people’s lives, and in particular to create a scientifically legitimated conformity in tranches, is already proceeding at a spectacular rate. Already many or most potential new employees are subject to seemingly arbitrary psychometric testing and drug screens, so that both mind and body can be freely scoured by employers and governments. If “big data” can be used to construct homogenized group classifications around intellectual potential on the basis of genetic information too, then rest assured that such information will soon come to be used in countless other ways as well, some currently unimaginable, others all too much so.
