ON THE MEDIA: Are humans evolving beyond the need to tell stories?

November 28, 2016

By Will Self, www.theguardian.com, November 25th, 2016

Neuroscientists who insist technology is changing our brains may have it wrong. What if we are switching from books to digital entertainment because of a change in our need to communicate?

A few years ago I gave a lecture in Oxford that was reprinted in the Guardian under the heading: “The novel is dead (this time it’s for real)”. In it I argued that the novel was losing its cultural centrality due to the digitisation of print: we are entering a new era, one with a radically different form of knowledge technology, and while those of us who have what Marshall McLuhan termed “Gutenberg minds” may find it hard to comprehend – such was our sense of the solidity of the literary world – without the necessity for the physical book itself, there’s no clear requirement for the art forms it gave rise to. I never actually argued that the novel was dead, nor that narrative itself was imperilled, yet whenever I discuss these matters with bookish folk they all exclaim: “But we need stories – people will always need stories.” As if that were an end to the matter.

Non-coincidentally, in line with this shift from print to digital there’s been an increase in the number of scientific studies of narrative forms and our cognitive responses to them. There’s a nice symmetry here: just as the technology arrives to convert the actual into the virtual, so other technologies arise, making it possible for us to look inside the brain and see its actual response to the virtual worlds we fabulate and confabulate. In truth, I find much of this research – which marries arty anxiety with techno-assuredness – to be self-serving, reflecting an ability to win the grants available for modish interdisciplinary studies, rather than some new physical paradigm with which to explain highly complex mental phenomena. Really, neuroscience has taken on the sexy mantle once draped round the shoulders of genetics. A few years ago, each day seemed to bring forth a new gene for this or that. Such “discoveries” rested on a very simplistic view of how the DNA of the human genotype is expressed in us poor, individual phenotypes – and I suspect many of the current discoveries, which link alterations in our highly plastic brains to cognitive functions we can observe using sophisticated equipment, will prove to be equally ill-founded.

The neuroscientist Susan Greenfield has been prominent in arguing that our new digital lives are profoundly altering the structure of our brains. This is undoubtedly the case – but then all human activities impact upon the individual brain as they’re happening; this by no means implies a permanent alteration, let alone a heritable one. After all, so far as we can tell the gross neural anatomy of the human has remained unchanged for hundreds of millennia, while the age of bi-directional digital media only properly dates – in my view – from the inception of wireless broadband in the early 2000s, hardly enough time for natural selection to get to work on the adaptive advantages of … tweeting. Nevertheless, pioneering studies have long since shown that licensed London cab drivers, who’ve completed the exhaustive “Knowledge” (which consists of memorising every street and notable building within a six-mile radius of Charing Cross), have considerably enlarged posterior hippocampi.

This is the part of the brain concerned with way-finding, but it’s also strongly implicated in memory formation; neuroscientists are now discovering that at the cognitive level all three abilities – memory, location and narration – are intimately bound up. This, too, is hardly surprising: key for humans, throughout their long pre-history as hunter-gatherers, has been the ability to find food, remember where food is and tell the others about it. It’s strange, of course, to think of Pride and Prejudice or Ulysses as simply elaborations upon our biologically determined inclination to give people directions – but then it’s perhaps stranger still to realise that sustained use of satellite navigation, combined with absorbing all our narrative requirements in pictorial rather than written form, may transform us into miserable and disoriented amnesiacs.

When he lectured on literature in the 1950s, Vladimir Nabokov would draw a map on the blackboard at the beginning of each session, depicting, for example, the floor plan of Austen’s Mansfield Park, or the “two ways” of Proust’s Combray. What Nabokov seems to have understood intuitively is what neuroscience is now proving: reading fiction enables a deeply memorable engagement with our sense of space and place. What the master was perhaps less aware of – because, as yet, this phenomenon was inchoate – was that throughout the 20th century the editing techniques employed in Hollywood films were being increasingly refined. This is the so-called “tyranny of film”: editing methods that compel our attention, rather than leaving us free to absorb the narrative in our own way. Anyone now in middle age will have an intuitive understanding of this: shots are shorter nowadays, and almost all transitions are effected by crosscutting, whereby two ongoing scenes are intercut in order to force upon the viewer the idea of their synchrony. It’s in large part this tyranny that makes contemporary films something of a headache for older viewers, to whom they can seem like a hypnotic swirl of action.

It will come as no surprise to Gutenberg minds to learn that reading is a better means of forming memory than watching films, as is listening to afternoon drama on Radio 4. This is the so-called “visualisation hypothesis”, which proposes that people – and children in particular – find it harder not only to remember filmed as against spoken or written narratives, but also to come up with novel responses to them, because the amount of information they’re given, together with its determinate nature, forecloses imaginative response.

Almost all contemporary parents – and especially those of us who class ourselves as “readers” – have engaged in the Great Battle of Screen: attempting to limit our children’s consumption of films, videos, computer games and phone-based social media. We feel intuitively that it can’t be doing our kids any good – they seem mentally distracted as well as physically fidgety: unable to concentrate as they often look from one handheld screen to a second freestanding one, alternating between tweezering some images on a touchscreen and manipulating others using a remote control. Far from admonishing my younger children to “read the classics” – an utterly forlorn hope – I often find myself simply wishing they’d put their phones down long enough to have their attention compelled by the film we’re watching.

The Great Battle of Screen … a teenager triple-screening. Photograph: U Baumgarten via Getty

If we take seriously the conclusions of these recent neuroscientific studies, one fact is indisputable: whatever the figures for book sales (either in print or digital form), reading for pleasure has been in serious decline for over a decade. That this form of narrative absorption (if you’ll forgive the coinage) is closely correlated with high attainment and wellbeing may tell us nothing about the underlying causation, but the studies do demonstrate that the suite of cognitive aptitudes needed to decipher text and turn it into living, breathing, visible and tangible worlds seems to wither away once we stop turning the pages and start goggling at virtual tales.

Of course, the sidelining of reading narrative (and along with it the semi-retirement of all those narrative forms we love) is small potatoes compared with the loss of our capacity for episodic memory: would we be quite so quick to post those fantastic holiday photographs on Facebook if we knew that in so doing we’d imperil our ability to recall unaided our walk along the perfect crescent of sand, and our first ecstatic kiss? You might’ve thought that as a novelist who depends on fully attuned Gutenberg minds to read his increasingly complex and confusing texts I’d be dismayed by this craven new couch-based world; and, as a novelist, I am.

I began writing my books on a manual typewriter at around the same time wireless broadband became ubiquitous, sensing it was inimical not only to the act of writing, but to that of reading as well: a novel should be a self-contained and self-explanatory world (at least, that’s how the form has evolved), and it needs to be created in the same cognitive mode as it’s consumed: the writer hunkering down into his own episodic memories, and using his own canonical knowledge, while imagining all the things he’s describing, rather than Googling them to see what someone else thinks they look like. I also sense the decline in committed reading among the young that these studies claim: true, the number of those who’ve ever been inclined “to get up in the morning in the fullness of youth”, as Nietzsche so eloquently put it, “and open a book” has always been small; but then it’s worth recalling the sting in the tail of his remark: “now that’s what I call vicious”.

And there is something vicious about all that book learning, especially when it had to be done by rote. There’s something vicious as well about the baby boomer generation, which, not content to dominate the cultural landscape, also demands that everyone younger than us survey it in the same way. For the past five years I’ve been working on a trilogy of novels that aim to map the connections between technological change, warfare and human psychopathology, so obviously I’m attempting to respond to the zeitgeist using this increasingly obsolete art form. My view is that we’re deluded if we think new technologies come into existence because of clearly defined human objectives – let alone benevolent ones – and it’s this that should shape our response to them. No, the history of the 20th century – and now the 21st – is replete with examples of technologies that were developed purely in order to facilitate the killing of people at a distance, of which the internet is only the most egregious example. Our era is also replete with the mental illnesses occasioned by such technologies – sometimes I think our obsession with viewing violent and horrific imagery is some sort of collective post-traumatic stress disorder.

So, it may be that our instinctive desire to kill at a distance is a stronger determinant of our cognitive abilities than our need to tell other humans where the food is. Which would certainly explain why poring over a facsimile of Shakespeare’s first folio is being supplanted by first-person shooters. I’ve referred throughout this piece to Gutenberg minds, and I do indeed believe that each successive knowledge technology brings with it a different form of human being. It’s worrying that our young seem distracted and often depressed, and sad for those of us who have invested so much of our belief and our effort in print technology, that it – and the modes of being associated with it – appear to be in decline. But it may be the case that our children are in the larval stage of a new form of human being, one which no longer depends on their ability to tell the others where the food is. Why? Because, of course, they know where it is already, due to the absolute fluidity and ubiquity of bi-directional digital media. Indeed, there may not be any need to tell the others where the food is in the future, because in an important sense there are no others.

The so-called “singularity” proposed by tech gurus, whereby humans hybridise with machine intelligence, and form a new genotype, subject to evolution by natural selection, may not begin with a cosmic bang; rather, the whimpering of our children as they shoot at their virtual enemies, or are defriended, may be the signal that it’s begun already. Richard Brautigan, the great hippy writer, envisaged a “cybernetic meadow” in which “mammals and computers live together in mutually programmed harmony”. It sounds to me an awful lot like our own current state of storytelling, without, of course, the need for anyone to read poetry, which is the form within which Brautigan did his visualising, and we received his rather optimistic vision.

This is an edited version of a lecture delivered by Will Self as part of Scottish Book Week. His new novel, Phone, will be published by Penguin next year.
