A weekly letter from the founding editor of The Browser. Topics may vary. Correspondence and criticism welcome: robert@thebrowser.com This week: The inner lives of machines and animals
Thank you for your emails responding to last week's letter about the capacity of machines to produce text in general and literature in particular. Two claims in the piece drew most comment. The first was that machines would come to dominate production and use of the English language (I tip my hat here to AC, SC, GS, RD, SF, PM, RD). The second was that machines might serve as interpreters between humans and animals (I tip my hat here to JC, CS, RJR). I made both claims almost in passing, but your comments encouraged me to read and think more about them, and I find through my reading that they are, in fact, usefully related.

I wish I had known before writing last week of the distinction which linguists now make, following Noam Chomsky, between "internal language", the capacity for producing language that exists within one's mind; and "external language", language produced for consumption by others in the form of speech and writing. By "internal language" Chomsky seems to have meant, at least primarily, something which was not quite language itself, but rather the capacity to produce or imagine language, language which would exist as language only when it was externally expressed. But I see other linguists using the term more loosely to include language which is produced and retained within the mind, and this is also my preferred usage, since I often produce language in my mind which I have no occasion or cause to express, and which I call "thought".

I realise now that when I talked of machines coming to dominate the production and use of language, swamping human-generated language by the sheer scale of their output, I was talking about external language — "e-language", as linguists might now say. A machine-takeover of external language might be a matter of concern. But it would pale beside the possibilities of pushing the scenario a step further, into the domain of internal language, or "i-language".
If machines take effective control of our external language, for example the English language on which you and I are relying now for expressing our thoughts, will the machines also, by some sort of mirroring effect, take control of our "i-language", the internal language in which I, at least, imagine my thoughts to be formed? I rather think they will, as did George Orwell, more or less, in 1984.
The second point, the possibility of interpretation and dialogue between humans and animals, turns on the question of whether non-human animals have languages that machines can learn. This, too, gains clarity from making a distinction between external language and internal language. The general insistence among scientists that humans and only humans have language is primarily a claim about external language. We recognise that most non-human species do a certain amount of signalling among themselves, and that a few species can be taught to recognise a few human words; but this we count as "communication" rather than "language". For animals to have language, on a scientific view, would imply that they were capable of expressing and sharing original and complex thoughts — of which scientists see no evidence. But does an apparent absence of external language prove an absence of internal language? I think not. If we allow that animals can think, then probably we must allow that they have some sort of internal language in which to form their thoughts. And if we allow for animals to have internal language, can an internal language exist which requires and finds no expression in external language? I find this hard to imagine. It seems to me a much greater likelihood that we do not recognise the external languages of animals, than that animals have no such languages.
The distinction between external language and internal language also seems relevant to language-generating machines. Thanks to ChatGPT we can say that machines are now able to produce external language which is indistinguishable from human-made external language. We might just as confidently say that machines have no internal language, and no need for internal language, on the grounds that they have no minds and thus no thoughts. A week ago that was my view; but the more I think about the possible relations between internal and external language, the less confident I become. The machines, the algorithms, the large language models, call them what you will, are doing something in order to generate their external language. I hesitate to characterise that "something" in any more detail, not least because even the computer scientists themselves profess not to know exactly how their machines go about their work. The scientists build the machines according to abstract theories of information processing. They improve the machines' general practical performance by trial and error. But how a machine arrives at a particular outcome in a particular case, what the machine actually does in response to any given prompt, is not something which can be predicted in every detail by anybody including the machine's creators. Is not this "something" which the machine does, this process by which it generates text or pictures, at least functionally equivalent to the internal language of the human mind? If so, if we allow that a machine has its own version of "internal language", then we are surely close to saying, in effect, that the machine has "thoughts". For if there is a difference between internal language and thoughts, then I myself cannot say what that difference is. To the extent that I can inspect my own thinking, my thoughts seem to consist entirely of internal language, and vice versa. 
On that basis I now find it defensible to say that machines can have "thoughts", though I would still hesitate to say that machines can "think". In either case I do not believe it necessarily follows that machines can "feel", in ways that should condition our behaviour towards them; still less that they might be "conscious", which, in our current usage, would mean that they were "alive". But I also fear, at this point, that words are failing us. We have no hard and fast definitions of words such as "conscious" or "alive" or "thought", because their meanings have always seemed at once both obvious and ineffable when applied to the human mind. We will need a far more precise and structured vocabulary for defining and describing mind-like behaviour if we ever hope to discuss usefully how machines can do what humans do, without being as humans are.
We fear machines. I do not think there is a single instance in the entirety of literature of a machine as a romantic hero (although I have to admit an ignorance of the Marvel Universe). Animals, on the other hand (wolves and snakes excepted), are almost always worthy of love, and capable of heroism. Even so, animals in literature generally go in fear of their lives, because we and they know in our hearts that nature is cruel and that humankind is crueller still. I started to re-read both Black Beauty and Watership Down this past week, as part of my inquiries into the animal mind, but stopped after a few pages in each case because I found myself overwhelmed by fear of the sadness to come. I did, on the other hand, complete my reading of a much-cited essay that had been on my to-do list for decades, and which I finally approached with high excitement, hoping to find in it plausible and interesting claims about the inner lives, and perhaps the internal languages, of animals. The essay is called What Is It Like To Be A Bat?; it was published by Thomas Nagel in the Philosophical Review of October 1974; and in his preamble Nagel introduces his mode of inquiry as follows: It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat.
Nagel's essay is relatively short and highly readable. If I found it disappointing, that may have been because I hoped to find in it speculations about the thought and inner language of bats, whereas in fact I found almost none. Nagel scarcely even mentions language. The word "language" occurs only twice in the text of the essay and twice in the footnotes. The occurrences in the main text both concern the limitations of human language, not the possibilities of bat language: Reflection on what it is like to be a bat seems to lead us to the conclusion that there are facts that do not consist in the truth of propositions expressible in a human language.
The fact that we cannot expect ever to accommodate in our language a detailed description of Martian or bat phenomenology should not lead us to dismiss as meaningless the claim that bats and Martians have experiences fully comparable in richness of detail to our own.
Of the footnoted references to "language", one is in a book title; the other suggests that bats might have consciousness but not language or thought, asserting that "experience is present in animals lacking language and thought, who have no beliefs at all about their experiences". Whatever it is "like" to be a bat, and if bats have minds, if bats have "experiences", can bats possibly have no thoughts, no language, no beliefs? Nagel does not argue or insist on this point categorically in his essay; he simply does not discuss it at all. Yet he does find time, despite his previous disclaimer, to discuss at considerable length a bat's echolocation system, as though this must occupy a main part of a bat's conscious mind. Given that operational awareness of the physical self is a largely unconscious affair in the human mind, I cannot follow Nagel here, and I fear that, in place of his declared inquiry, he is merely repurposing such information about bats as happens to be commonly available.
I came away from Nagel's essay admiring his prose style but regretting his title. He might better have called his essay, Why We Cannot Know What It Is Like To Be A Bat. Nagel does see this criticism coming — it is clear quite early in his essay that he knows he has nothing very original and substantive to say about being a bat — and he tries to pre-empt it, but only by tying himself in something of a knot. He claims on the one hand that we cannot exclude entirely the possibility of some day knowing what it is to be a bat, while on the other hand making clear that he himself has no great enthusiasm for making any serious effort in that direction, since the result of doing so will, he assumes, almost necessarily be imperfect: My point is not that we cannot know what it is like to be a bat. I am not raising that epistemological problem. My point is rather that even to form a conception of what it is like to be a bat (and a fortiori to know what it is like to be a bat) one must take up the bat's point of view. If one can take it up roughly, or partially, then one's conception will also be rough or partial.
This sounds to me more like laziness than anything else. It is armchair philosophy being performed where experimental philosophy would be possible. I would dearly love to see Nagel in dialogue with Charles Foster, author of Being A Beast, a book in which Foster describes how he has lived among, and in the manner of, a variety of wild animals, precisely in order to develop his capacity to empathise with them, to see the world as they do, to be as like them as a person can possibly be. Where Nagel dismisses almost arbitrarily the imitation of animal behaviour as a way into the animal mind, Foster sees it as the most obvious of experimental methods: I want to know what it is like to be a wild thing. It may be possible to know. Neuroscience helps; so does a bit of philosophy and a lot of the poetry of John Clare. But most of all it involves inching dangerously down the evolutionary tree and into a hole in a Welsh hillside, and under the rocks in a Devon river, and learning about weightlessness, the shape of the wind, boredom, mulch in the nose and the shudder and crack of dying things ...
When I’m being a badger, I live in a hole and eat earthworms. When I’m being an otter, I try to catch fish with my teeth.
Where Nagel seems to assume until proven otherwise that humans and animals are incapable of mutual comprehension, Foster assumes the contrary: Because of our close evolutionary cousinhood, I am, at least in terms of the battery of sense receptors we all bear, quite close to most of the animals in this book. And when I’m not, it is generally possible to describe and (roughly) to quantify the differences.
I am only sorry to say that Foster does not, in this book at least, try to live like a bat — though he is aware of Nagel's essay, and mentions it in his introduction. His models are limited to badgers, otters, foxes, deer, and swifts. Had he included bats, had he spent time hanging upside down in barns, catching insects with his tongue, and navigating by echolocation, then I am sure he would have returned with some contribution towards that "rough or partial" idea of a bat's point of view which, for Nagel, was both necessary to any understanding and yet too bothersome to obtain. I find Foster's approach by far the more satisfying, not only as literature but also as philosophy, if "philosophy" is the right word for the sort of conjecture which populates Nagel's essay. Foster renews the natural philosophy from which science was formed in the 16th to the 18th centuries, but in a more extreme form (emulation rather than observation), and incorporating the biological and psychological advances of recent times. Here, for example, is Foster ruminating on the inner life of badgers: I don’t doubt for a moment that badgers have some sort of consciousness. One of the reasons is that I’ve seen them sleeping. There’s plainly something going on in their heads when they’re asleep. They paddle, yip and snarl; the full repertoire of expressions plays out on their faces. There is some sort of story being enacted. And what can the central character be but the badger’s self? If badgers aren’t conscious in a sense comparable to us, their sleeping smiles and winces are more inscrutable than consciousness itself. I prefer the lesser mystery.
Foster's writing has many virtues, but what I most admire in it, I think, is his honesty. I never doubt the factual truth of what Foster is telling me. I appreciate his willingness to think aloud when he is not quite certain about something. In the course of reading him I come to trust even his intuitions — for example, about birdsong: I bought audio-books of bird calls and realised that I could tell a lot about the personality of a bird and the details of its life by hearing the noise it made. Without knowing what it was, I knew somehow that a whitethroat danced fearfully in deciduous summer shadows, looking for death from above, and picked insects with a beak like the finest surgical forceps, and fluffed and fussed and went south early.
There is something shamanic in this paragraph, is there not? A mere trace of the animal — a feather, a drop of blood, in this case a cry — allows the adept to enter into the animal itself. In secular terms we might say that we are apprehending the life of the animal poetically. I dream that machine learning may give us the prose version of this revelation — an account of every nuance in the song of a bird, an account of everything that birdsong might be saying to other birds that spend their lives immersed in that same song and can respond to signals that humans do not hear, with instincts that humans do not have.
Perhaps the entirety of my thoughts in this letter are contained in — could be inferred from — Wittgenstein's remark that "If a lion could speak, we could not understand him". I suspect this proposition is so often cited because there are so many ways of disagreeing with it. For example, since Wittgenstein can speak, and often I cannot understand what Wittgenstein says, I do not see why we need the lion at all. I wonder, too, how much of the force of Wittgenstein's remark derives from his choice of animal. The power and beauty of a lion is such that we can easily imagine that whatever a lion might say should be worth hearing, and that it would be our loss if we could not understand. There is also the undertow of danger, which always raises the stakes. If we attempt conversation with a lion we will stand a fair chance of being killed and perhaps being eaten (though humans are not a first-choice food for lions). I suspect there was enough of the performance artist in Wittgenstein for him to know that he would not have made nearly such an impression by saying, "If a mosquito could speak, we would not understand her". I believe that lions can speak, and that it is realistic to hope we may some day understand them. But, at the risk of repeating my conclusion of last week, I do worry what the lion might have to say. Will it be a kill-or-be-killed account of nature, appealing to the most aggressive and regressive instincts in human nature? Will it be a j'accuse account of humans' supplanting nature and proving crueller still? I am not ready for either. If Anna Sewell's imagined account of Black Beauty's hardships can move me to tears, I tremble to think how I might feel on hearing something similar from the horse's, or the lion's, mouth. — Robert