The Shadow of Knowing-All


The inscrutability of neural networks is yet another interesting example of the vertiginous struggle we constantly face in reconciling contrasting scales of the same objects. (This is itself likely a side-effect of our basically Faustian world-view, with its preoccupation with breaking down scales and limits in search of constant expansion). Everywhere we see pieces that are not like the whole: and the more we pursue understanding through reduction and through concepts, the more we notice simple small pieces combining into much larger entities that in turn seem blissfully indifferent to the character of their constituents. Indeed we are waylaid by this same surprised, amazed uneasiness—often given the name “emergence”—in the guise of countless diverse objects and topics, from fractals and photomontages to economics, psychology and molecular biology.

In one sense, this quality of neural networks is greatly liberating and exciting, for it gives us a clue that the insistence that our concepts, reasons, and above all our words must exhaust all of reality may itself be mistaken. Every component of the network is rigorously rule-based—everywhere there is mere computation by simple and wholly determined parts—yet for all that we can make no more sense of the larger outcome than we can of a person who says, “I just like it!” There is no “explanation” of the data to be found in the model, beyond the model itself (much as there is no “explanation” of an object to be found from its imprint in silly putty); it simply is what it was trained to be.
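The point can be made concrete with a toy network small enough to print every parameter. What follows is only an illustrative sketch (the 2-4-1 architecture, the XOR task, and all the names are choices of mine, not anything canonical): every line is plain multiply-add arithmetic, wholly determined, and yet the trained weights, inspected individually, explain nothing.

```python
import numpy as np

# Every step below is "mere computation by simple and wholly determined
# parts" -- multiplies, adds, and a squashing function -- yet the trained
# weights carry no explanation beyond themselves.

rng = np.random.default_rng(0)

# XOR: the classic task no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny 2-4-1 network: small enough to inspect every parameter.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def loss(p):
    return float(np.mean((p - y) ** 2))

_, p0 = forward(X)
initial_loss = loss(p0)

lr = 0.5
for _ in range(5000):
    h, p = forward(X)
    # Backpropagation: each update is itself a simple, determined rule.
    dp = 2 * (p - y) / len(y) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, p = forward(X)
final_loss = loss(p)
print(initial_loss, final_loss)

# The learning has happened -- yet printing W1 and W2 reveals only
# numbers. The "reason" the network computes what it computes is
# nowhere to be found in any single part.
print(W1)
print(W2)
```

Running this, the loss falls as training proceeds, but the final weight matrices are exactly the kind of object the essay describes: they *are* the model's account of the data, and there is no further account behind them.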

At the same time, even considering this liberating quality, neural networks may also be a dangerous example of statistical thinking as a totalizing or prejudicial doctrine, or more broadly of how the very tools we use to “understand” can by their nature blind us to anything outside of their scope. Through this doctrine, the concept of “emergence” remains rooted in a reductive understanding and so is viewed always with suspicion if not embarrassment, as if it could be banished if only we were smart enough, or if we but found new and better words. Whereas what really cries out to be discovered here are not simply “things that are too messy to reduce”, but things for which reduction cannot even be applied in principle—things wholly outside its scope, things for which “words fail us”.

One comment

  1. I think you may be being overly negative about the term “emergence”. I have never perceived the suspicion or embarrassment you describe. I see it as simply the acceptance that some behaviours arise out of complexity and do not lend themselves to reductionist analysis.

    We understand the law of gravity perfectly, but we know that we can never predict the behaviour of three similarly sized bodies orbiting each other (though this is generally termed chaotic rather than emergent behaviour). We cannot predict the decisions of a neural network by studying the actions of its logic gates – we would need some bigger, faster neural net to model it as a subroutine (while perhaps at the same time giving us some insights as to its internal machinations).

    I think the big question, as the Large Hadron Collider leads us to the edge of a seeming desert for new discoveries, is how many of the Universe's still-hidden secrets remain open to reductionist analysis, and whether some phenomena are simply unknowable emergents.

    Unfortunately, we can then never predict the total behaviour of the Universe (the ultimate emergent behaviour) because, as someone once put it, the Universe is its own fastest simulator.

