Elements of surprise

Reading Vera Tobin’s Elements of Surprise for New Scientist, 5 May 2018

How do characters and events in fiction differ from those in real life? And what is it about our experience of life that fiction exaggerates, omits or captures to achieve its effects?

Effective fiction is Vera Tobin’s subject. And as a cognitive scientist, she knows how pervasive and seductive it can be, even in – or perhaps especially in – the controlled environment of an experimental psychology lab.

Suppose, for instance, you want to know which parts of the brain are active when we form moral judgements, or reason about false beliefs. Research in these fields and others rests on fMRI brain scans. Volunteers are given short story prompts containing information about outcomes or character intentions and, while their brains are scanned, must judge what other characters ought to know or do.

“As a consequence,” writes Tobin in her new book Elements of Surprise, “much research that is putatively about how people think about other humans… tells us just as much, if not more, about how study participants think about characters in constructed narratives.”

Tobin is weary of economists banging on about the “flaws” in our cognitive apparatus. “The science on this phenomenon has tended to focus on cataloguing errors people make in solving problems or making decisions,” writes Tobin, “but… its place and status in storytelling, sense-making, and aesthetic pleasure deserve much more attention.”

Tobin shows how two major “flaws” in our thinking are in fact the necessary and desirable consequence of our capacity for social interaction. First, we wildly underestimate our differences. We model each other in our heads and have to assume this model is accurate, even while we’re revising it, moment to moment. At the same time, we have to assume no one else has any problem performing this task – which is why we’re continually mortified to discover other people have no idea who we really are.

Similarly, we find it hard to model the mental states of people, including our past selves, who know less about something than we do. This is largely because we forget how we came to that privileged knowledge.

“Tobin is weary of economists banging on about the ‘flaws’ in our cognitive apparatus”

There are implications for autism, too. It is, Tobin says, unlikely that many people with autism “lack” an understanding that others think differently – known as “theory of mind”. It is more likely they have difficulty inhibiting their knowledge when modelling others’ mental states.

And what about Emma, titular heroine of Jane Austen’s novel? She “is all too ready to presume that her intentions are unambiguous to others and has great difficulty imagining, once she has arrived at an interpretation of events, that others might believe something different”, says Tobin. Austen’s brilliance was to fashion a plot in which Emma experiences revelations that confront the consequences of her “curse of knowledge” – the cognitive bias that makes us assume any person with whom we communicate has the background knowledge to understand what is being said.

Just as we assume others know what we’re thinking, we assume our past selves thought as we do now. Detective stories exploit this foible. Mildred Pierce, Michael Curtiz’s 1945 film, begins at the end, as it were, depicting the story’s climactic murder. We are fairly certain we know who did it, but we flash back to the past and work forward to the present only to find that we have misinterpreted everything.

I confess I was underwhelmed on finishing this excellent book. But then I remembered Sherlock Holmes’s complaint (mentioned by Tobin) that once he reveals the reasoning behind his deductions, people are no longer impressed by his singular skill. Tobin reveals valuable truths about the stories we tell to entertain each other, and those we tell ourselves to get by, and how they are related. Like any good magic trick, it is obvious once it has been explained.

Pushing the boundaries

Rounding up some cosmological pop-sci for New Scientist, 24 March 2018

IN 1872, the physicist Ludwig Boltzmann developed a theory of gases that confirmed the second law of thermodynamics, more or less proved the existence of atoms and established the asymmetry of time. He went on to describe temperature, and how it governed chemical change. Yet in 1906, this extraordinary man killed himself.

Boltzmann is the kindly if gloomy spirit hovering over Peter Atkins’s new book, Conjuring the Universe: The origins of the laws of nature. It is a cheerful, often self-deprecating account of how most physical laws can be unpacked from virtually nothing, and how some constants (the peculiarly precise and finite speed of light, for example) are not nearly as arbitrary as they sound.

Atkins dreams of a final theory of everything to explain a more-or-less clockwork universe. But rather than wave his hands about, he prefers to clarify what can be clarified, clear his readers’ minds of any pre-existing muddles or misinterpretations, and leave them, 168 succinct pages later, with a rather charming image of him tearing his hair out over the fact that the universe did not, after all, pop out of nothing.

It is thanks to Atkins that the ideas Boltzmann pioneered, at least in essence, can be grasped by us poor schlubs. Popular science writing has always been vital to science’s development. We ignore it at our peril and we owe it to ourselves and to those chipping away at the coalface of research to hold popular accounts of their work to the highest standards.

Enter Brian Clegg. He is such a prolific writer of popular science, it is easy to forget how good he is. Icon Books is keeping him busy writing short, sweet accounts for its Hot Science series. His latest is Gravitational Waves: How Einstein’s spacetime ripples reveal the secrets of the universe.

Clegg delivers an impressive double punch: he transforms a frustrating, century-long tale of disappointment into a gripping human drama, affording us a vivid glimpse into the uncanny, depersonalised and sometimes downright demoralising operations of big science. And readers still come away wishing they were physicists.

Less polished, and at times uncomfortably unctuous, Catching Stardust: Comets, asteroids and the birth of the solar system is nevertheless a promising debut from space scientist and commentator Natalie Starkey. Her description of how, from the most indirect evidence, a coherent history of our solar system was assembled, is astonishing, as are the details of the mind-bogglingly complex Rosetta mission to rendezvous with comet 67P/Churyumov-Gerasimenko – a mission in which she was directly involved.

It is possible to live one’s whole life within the realms of science and discovery. Plenty of us do. So it is always disconcerting to be reminded that longer-lasting civilisations than ours have done very well without science or formal logic, even. And who are we to say they afforded less happiness and fulfilment than our own?

Nor can we tut-tut at the way ignorant people today ride science’s coat-tails – not now antibiotics are failing and the sixth extinction is chewing its way through the food chain.

Physicists, especially, find such thinking well-nigh unbearable, and Alan Lightman speaks for them in his memoir Searching for Stars on an Island in Maine. He wants science to rule the physical realm and spirituality to rule “everything else”. Lightman is an elegant, sensitive writer, and he has written a delightful book about one man’s attempt to hold the world in his head.

But he is wrong. Human culture is so rich, diverse, engaging and significant, it is more than possible for people who don’t give a fig for science or even rational thinking to live lives that are meaningful to themselves and valuable to the rest of us.

“Consilience” was biologist E.O. Wilson’s word for the much-longed-for marriage of human enquiry. Lightman’s inadvertent achievement is to show that the task is more than just difficult, it is absurd.

Writing about knowing

Reading John Brockman’s anthology This Idea Is Brilliant: Lost, overlooked, and underappreciated scientific concepts everyone should know for New Scientist, 24 February 2018 

Literary agent and provocateur John Brockman has turned popular science into a sort of modern shamanism, packaged non-fiction into gobbets of smart thinking, made stars of unlikely writers and continues to direct, deepen and contribute to some of the most hotly contested conversations in civic life.

This Idea Is Brilliant is the latest of Brockman’s annual anthologies drawn from edge.org, his website and shop window. It is one of the stronger books in the series. It is also one of the more troubling, addressing, informing and entertaining a public that has recently become extraordinarily confused about truth and falsehood, fact and knowledge.

Edge.org’s purpose has always been to collide scientists, business people and public intellectuals in fruitful ways. This year, the mix in the anthology leans towards the cognitive sciences, philosophy and the “freakonomic” end of the non-fiction bookshelf. It is a good time to return to basics: to ask how we know what we know, what role rationality plays in knowing, what tech does to help and hinder that knowing, and, frankly, whether in our hunger to democratise knowledge we have built a primrose-lined digital path straight to post-truth perdition.

Many contributors, biting the bullet, reckon so. Measuring the decline in the art of conversation against the rise of social media, anthropologist Nina Jablonski fears that “people are opting for leaner modes of communication because they’ve been socialized inadequately in richer ones”.

Meanwhile, an applied mathematician, Coco Krumme, turning the pages of Jorge Luis Borges’s short story The Lottery in Babylon, conceptualises the way our relationship with local and national government is being automated to the point where fixing wayward algorithms involves the application of yet more algorithms. In this way, civic life becomes opaque and arbitrary: a lottery. “To combat digital distraction, they’d throttle email on Sundays and build apps for meditation,” Krumme writes. “Instead of recommender systems that reveal what you most want to hear, they’d inject a set of countervailing views. The irony is that these manufactured gestures only intensify the hold of a Babylonian lottery.”

Of course, IT wasn’t created on a whim. It is a cognitive prosthesis for significant shortfalls in the way we think. Psychologist Adam Waytz cuts to the heart of this in his essay “The illusion of explanatory depth” – a phrase describing how people “feel they understand the world with far greater detail, coherence and depth than they really do”.

Humility is a watchword here. If our thinking has holes in it, if we forget, misconstrue, misinterpret or persist in false belief, if we care more for the social consequences of our beliefs than their accuracy, and if we suppress our appetite for innovation in times of crisis (all subjects of separate essays here), there are consequences. Why on earth would we imagine we can build machines that don’t reflect our own biases, or don’t – in a ham-fisted effort to correct for them – create ones of their own we can barely spot, let alone fix?

Neuroscientist Sam Harris is one of several here who, searching for a solution to the “truthiness” crisis, simply appeals to basic decency. We must, he argues, be willing to be seen to change our minds: “Wherever we look, we find otherwise sane men and women making extraordinary efforts to avoid changing [them].”

He has a point. Though our cognitive biases, shortfalls and the like make us less than ideal rational agents, evolution has equipped us with social capacities that, smartly handled, run rings round the “cleverest” algorithm.

Let psychologist Abigail Marsh have the last word: “We have our flaws… but we can also claim to be the species shaped by evolution to possess the most open hearts and the greatest proclivity for caring on Earth.” This may, when all’s said and done, have to be enough.

What’s the Russian for Eastbourne?

Reading Davies and Kent’s Red Atlas for the Telegraph, 13 January 2018

This is a journey through an exotic world conjured into being by the Military Topographic Directorate of the General Staff of the Soviet Army. Tasked by Stalin during the Second World War to accurately and secretly map the Soviet Union, its Eastern European allies, its Western adversaries, and the rest of the world, the Directorate embarked on the largest mapping effort in history. Too many maps have been lost for us to be entirely sure what coverage was attained, but it must have been massive. Considering the UK alone: if there are detailed street plans of the market town of Gainsborough in Lincolnshire, we can be reasonably sure there were once maps of Carlisle and Hull.

From internal evidence (serial numbers and suchlike) we know there were well in excess of 1 million maps produced. Only a few survive today, and the best preserved of them, the most beautiful, the most peculiar, the most chilling, are reproduced here. The accompanying text, by cartographers John Davies and Alexander Kent, is rich in detail, and it needs to be. Soviet intelligence maps boast a level of detail that puts our own handsome Ordnance Survey to shame — a point the authors demonstrate by putting OS maps alongside their Soviet counterparts. Not only can you see my road on one of these Soviet maps: you can also see how tall the surrounding buildings are. You can read the height of a nearby bridge above water, its dimensions, its load capacity, and what it is made of. As for the river, I now know its width, its flow, its depth, and whether it has a viscous bed (it hasn’t).

This is not a violent tale. There is little evidence that the mapmakers had invasion on their mind. What would have been the point? By the time Russian tanks were rolling down the A14 (Cambridge, UK, 1:10,000 City Plan of 1998), nuclear exchanges would have obliterated most of these exquisite details, carefully garnered from aerial reconnaissance, archival research, Zenit satellite footage and, yes, wonderfully, non-descript men dawdling outside factory gates and police stations. Maybe the maps were for them and their successors. Placenames are rendered phonetically: HEJSTYNZ for Hastings and “ISBON” for Eastbourne on one Polish map. This would have been useful if you were asking directions, but useless if you were in a tank speeding through hostile territory, trying to read the road signs. The Directorate’s city maps came with guides. Some of the details recorded here are sinister enough: Cambridgeshire clay “becomes waterlogged and severely impedes off-road movement of mechanized transport.” Its high hedges “significantly impede observation of the countryside”. But what are we to make of the same guide’s wistful description of the city itself? “The bank of the river Cam is lined with ivy-clad buildings of the colleges of the university, with ridged roofs and turrets… The lodging-houses with their lecture-halls are reminiscent of monasteries or ancient castles.”

Though deployed on an industrial scale, the Soviet mapmakers were artisans, who tried very hard to understand a world they would never have any opportunity to see. They did a tremendous job: why else would their maps have informed the US invasion of Afghanistan, water resource management in Armenia, or oil exploration in India? Now and again their cultural assumptions led them down strange paths. Ecclesiastical buildings lost all significance in the Republic of Ireland, whose landscape became dotted with disused watermills. In England, Beeching’s cull of the railways was incomprehensible to Russian mapmakers, for whom railways were engines of revolution. A 1971 map of Leeds not only shows lines closed in the 1960s; it also depicts and names the Wellington terminus station, adjacent to City station, which had closed in 1938.

The story of the Soviets’ mapping and remapping, particularly of the UK, is an eerie one, and though their effort seems impossibly Heath-Robinson now, the reader is never tempted into complacency. Cartography remains an ambiguous art. For evidence, go to Google Maps and type in “Burghfield”. It’s a village near Reading, home to a decommissioned research station of the Atomic Weapons Establishment. Interestingly, the authors claim that though the site is visible in detail through Google Earth, for some reason Google Maps has left the site blank and unlabelled.

This claim is only partly true. The label is there, though it appears at only the smallest available scale of the map. Add the word “atomic” to your search string, and you are taken to an image that, if not particularly informative, is still adequate for a visit.

Two thoughts followed hard on this non-discovery of mine. First, that I should let this go: my idea of “adequate” mapping is likely to be a lot less rigorous than the authors’; anyway it is more than possible that this corner of Google Maps has been updated since the book went to press. Second, that my idle fact-checking placed me in a new world — or at any rate, one barely out of its adolescence. (Keyhole, the company responsible for what became Google Earth, was founded in 2001.)

Today anyone with a broadband connection can drill down to information once considered the prerogative of government analysts. Visit Google Earth’s Russia, and you can find traces of the forest belts planted as part of Stalin’s Great Transformation of Nature in 1948. You can see how industrial combines worked their way up the Volga, building hydroelectric plants that drowned an area the size of France with unplanned swamps. There’s some chauvinistic glee to be had from this, but in truth, intelligence has become simply another digital commodity: stuff to be mined, filleted, mashed up, repackaged. Open-source intelligence: OSINT. There are conferences about it. Workshops. Artworks.

The Red Atlas is not about endings. It is about beginnings. The Cold War, far from being over, has simply subsumed our civic life. Everyone is in the intelligence business now.

Future by design

The Second Digital Turn: Design beyond intelligence
Mario Carpo
MIT Press

THE Polish futurist Stanislaw Lem once wrote: “A scientist wants an algorithm, whereas the technologist is more like a gardener who plants a tree, picks apples, and is not bothered about ‘how the tree did it’.”

For Lem, the future belongs to technologists, not scientists. If Mario Carpo is right and the “second digital turn” described in his extraordinary new book comes to term, then Lem’s playful, “imitological” future, where analysis must be abandoned in favour of creative activity, will be upon us in a decade or two. Never mind our human practice of science: science itself will no longer exist, and our cultural life will consist of storytelling, gesture and species of magical thinking.

Carpo studies architecture. Five years ago, he edited The Digital Turn in Architecture 1992-2012, a book capturing the curvilinear, parametric spirit of digital architecture. Think Frank Gehry’s Guggenheim Museum in Bilbao – a sort of deconstructed metal fish head – and you are halfway there.

Such is the rate of change that five years later, Carpo has had to write another book (the urgency of his prose is palpable and thrilling) about an entirely different kind of design. This is a generative design powered by artificial intelligence, with its ability to thug through digital simulations (effectively, breaking things on screen until something turns up that can’t be broken) and arrive at solutions that humans and their science cannot better.

This kind of design has no need of casts, stamps, moulds or dies. No costs need be amortised. Everything can be a one-off at the same unit cost.

Beyond the built environment, it is the spiritual consequences of this shift that matter, for by its light Carpo shows all cultural history to be a gargantuan exercise in information compression.

Unlike their AIs, human beings cannot hold much information at any one time. Hence, for example, the Roman alphabet: a marvel of compression, approximating all possible vocalisations with just 26 characters. Now that we can type and distribute any glyph at the touch of a button, is it any wonder emojis are supplementing our tidy 26-letter communications?

Science itself is simply a series of computational strategies to draw the maximum inference from the smallest number of precedents. Reduce the world to rules and there is no need for those precedents. We have done this for so long and so well some of us have forgotten that “rules” aren’t “real” rules, they are just generalisations.

AIs simply gather or model as many precedents as they wish. Left to collect data according to their own strengths, they are, Carpo says, “postscientific”. They aren’t doing science we recognise: they are just thugging.

“Carpo shows all cultural history to be a gargantuan exercise in information compression”

Carpo foresees the “separation of the minds of the thinkers from the tools of computation”. But in that alienation, I think, lies our reason to go on. Because humans cannot handle very much data at any one time, sorting is vital, which means we have to assign meaning. Sorting is therefore the process whereby we turn data into knowledge. Our inability to do what computers can do has a name already: consciousness.

Carpo’s succinctly argued future has us return to a tradition of orality and gesture, where these forms of communication need no reduction or compression since our tech will be able to record, notate, transmit, process and search them, making all cultural technologies developed to handle these tasks “equally unnecessary”. This will be neither advance nor regression. Evolution, remember, is maddeningly valueless.

Could we ever have evolved into Spock-like hyper-rationality? I doubt it. Carpo’s sincerity, wit and mischief show that Prospero is more the human style. Or Peter Pan, who observed: “You can have anything in life, if you will sacrifice everything else for it.”


Stalin’s meteorologist

I reviewed Olivier Rolin’s new book for The Daily Telegraph

750,000 shot. This figure is exact; the Soviet secret police, the NKVD, kept meticulous records relating to their activities during Stalin’s Great Purge. How is anyone to encompass in words this horror, barely 80 years old? Some writers find the one to stand for the all: an Everyman to focus the reader’s horror and pity. Olivier Rolin found his when he was shown drawings and watercolours made by Alexey Wangenheim, an inmate of the Solovki prison camp in Russia’s Arctic north. He made them for his daughter, and they are reproduced as touching miniatures in this slim, devastating book, part travelogue, part transliteration of Wangenheim’s few letters home.

While many undesirables were labelled by national or racial identity, a huge number were betrayed by their accomplishments. Before he was denounced by a jealous colleague, Wangenheim ran a pan-Soviet weather service. He was not an exceptional scientist: more an efficient bureaucrat. He cannot even be relied on “to give colourful descriptions of the glories of nature” before setting sail, with over a thousand others, for a secret destination, not far outside the town of Medvezhegorsk. There, some time around October 1937, a single NKVD officer dispatched the lot of them, though he had help with the cudgelling, the transport, the grave-digging. While he went to work with his Nagant pistol, others were washing blood and brains off the trucks and tarpaulins.

Right to the bitter end, Wangenheim is a boring correspondent, always banging on about the Party. “My faith in the Soviet authorities has in no way been shaken,” he says. “Has Comrade Stalin received my letter?” And again: “I have battled in my heart not to allow myself to think ill of the Soviet authorities or of the leaders”. Rolin makes gold of such monotony, exploiting the degree to which French lends itself to lists and repeated figures, and his translator Ros Schwartz has rendered these into English that is not just palatable, but often thrilling and always freighted with dread.

When Wangenheim is not reassuring his wife about the Bolshevik project, he is making mosaics out of stone chippings and brick dust: meticulous little portraits of — of all people — Stalin. Rolin openly struggles to understand his subject’s motivation: “In any case, blinkeredness or pathetic cunning, there is something sinister about seeing this man, this scholar, making of his own volition the portrait of the man in whose name he is being crucified.”

That Rolin finds a mystery here is of a piece with his awkward nostalgia for the promise of the Bolshevik revolution. Hovering like a miasma over some pages (though Rolin is too smart to succumb utterly) is that hoary old meme, “the revolution betrayed”. So let us be clear: the revolution was not betrayed. The revolution panned out exactly the way it was always going to pan out, whether Stalin was at the helm or not. It is also exactly the way the French revolution panned out, and for exactly the same reason.

Both French and Socialist revolutions sought to reinvent politics to reflect the imminent unification of all branches of human knowledge, and consequently, their radical simplification. By Marx’s day this idea, under the label “scientism”, had become yawningly conventional: also wrong.

Certainly by the time of the Bolshevik revolution, scientists better than Wangenheim — physicists, most famously — knew that the universe would not brook such simplification, neither under Marx nor under any other totalising system. Rationality remains a superb tool with which to investigate the world. But as a working model of the world, guiding political action, it leads only to terror.

To understand Wangenheim’s mosaic-making, we have to look past his work, diligently centralising and simplifying his own meteorological science to the point where a jealous colleague, deprived of his sinecure, denounced him. We need to look at the human consequences of this attempt at scientific government, and particularly at what radical simplification does to the human psyche. To order and simplify life is to bureaucratise it, and to bureaucratise human beings is to make them behave like machines. Rolin says Wangenheim clung to the party for the sake of his own sanity. I don’t doubt it. But to cling to any human institution, or to any such removed and fortressed individual, is the act, not of a suffering human being but of a malfunctioning machine.

At the end of his 1940 film The Great Dictator Charles Chaplin, dressed in Adolf Hitler’s motley, broke the fourth wall to declare war on the “machine men with machine minds” that were then riding roughshod across his world. Regardless of Hitler’s defeat, this was a war we assuredly lost. To be sure the bureaucratic infection, like all infections, has adapted to ensure its own survival, and it is not so virulent as it was. The pleasures of bureaucracy are more evident now; its damages, though still very real, are less evident. “Disruption” has replaced the Purge. The Twitter user has replaced the police informant.

But let us be explicit here, where Rolin has been admirably artful and quietly insidious: the pleasures of bureaucracy in both eras are exactly the same. Wangenheim’s murderers lived in a world that had been made radically simple for them. In Utopia, all you have to do is your job (though if you don’t, Utopia falls apart). These men weren’t deprived of humanity: they were relieved of it. They experienced exactly what you or I feel when the burden of life’s ambiguities is lifted of a sudden from our shoulders: contentment, bordering on joy.

A kind of “symbol knitting”

Reviewing new books by Paul Lockhart and Ian Stewart for The Spectator 

It’s odd, when you think about it, that mathematics ever got going. We have no innate genius for numbers. Drop five stones on the ground, and most of us will see five stones without counting. Six stones are a challenge. Presented with seven stones, we will have to start grouping, tallying and making patterns.

This is arithmetic, ‘a kind of “symbol knitting”’ according to the maths researcher and sometime teacher Paul Lockhart, whose Arithmetic explains how counting systems evolved to facilitate communication and trade, and ended up watering (by no very obvious route) the metaphysical gardens of mathematics.

Lockhart shamelessly (and successfully) supplements the archaeological record with invented number systems of his own. His three fictitious early peoples have decided to group numbers differently: in fours, in fives, and in sevens. Now watch as they try to communicate. It’s a charming conceit.
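Lockhart's conceit is, in effect, positional notation in different bases; a minimal sketch of the idea (the pile of 23 stones is my own invention, purely for illustration):

```python
def to_base(n, base):
    """Express n as a list of digits in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % base)  # the size of the current remainder group
        n //= base               # carry the rest up to the next grouping level
    return digits[::-1]

# The same pile of 23 stones, grouped the three tribes' ways:
for base in (4, 5, 7):
    print(base, to_base(23, base))  # 4 → [1, 1, 3], 5 → [4, 3], 7 → [3, 2]
```

The pile is the same; only the bookkeeping differs, which is exactly the communication problem Lockhart dramatises.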

Arithmetic is supposed to be easy, acquired through play and practice rather than through the kind of pseudo-theoretical ponderings that blighted my 1970s-era state education. Lockhart has a lot of time for Roman numerals, an effortlessly simple base-ten system which features subgroup symbols like V (5), L (50) and D (500) to smooth things along. From glorified tallying systems like this, it’s but a short leap to the abacus.
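As a sketch of how few symbols such a system needs, here is a converter into the familiar modern Roman form. (The subtractive shortcuts like IV and XC are a later refinement of the plain tallying-with-subgroups Lockhart praises; the table below includes them only because they are what a modern reader expects.)

```python
# Value-symbol pairs, largest first; CM, XC, IV etc. are the subtractive shortcuts.
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    """Tally n against each subgroup in turn, greedily, largest value first."""
    out = []
    for value, symbol in ROMAN:
        count, n = divmod(n, value)  # how many of this subgroup fit, and what's left
        out.append(symbol * count)
    return "".join(out)

print(to_roman(1998))  # MCMXCVIII
```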

It took an eye-watering six centuries for Hindu-Arabic numbers to catch on in Europe (via Fibonacci’s Liber Abaci of 1202). For most of us, abandoning intuitive tally marks and bead positions for a set of nine exotic squiggles and a dot (the forerunner of zero) is a lot of cost for an impossibly distant benefit. ‘You can get good at it if you want to,’ says Lockhart, in a fit of under-selling, ‘but it is no big deal either way.’

It took another four centuries for calculation to become a career, as sea-going powers of the late 18th century wrestled with the problems of navigation. In an effort to improve the accuracy of their logarithmic tables, French mathematicians broke the necessary calculations down into simple steps involving only addition and subtraction, assigning each step to human ‘computers’.
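That division of labour rested on the method of differences: once a function is approximated by a polynomial, its table of values can be extended using nothing but addition. A toy version, with squares standing in for the logarithmic and trigonometric functions the real human "computers" tabulated:

```python
# Tabulate f(n) = n*n using only additions.
# Start from f(0) = 0, with first difference 1 and constant second difference 2.
value, first_diff = 0, 1
table = []
for _ in range(10):
    table.append(value)
    value += first_diff  # addition only: next square
    first_diff += 2      # addition only: next odd number
print(table)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Each human computer needed to perform only one of these additions per entry, which is what made the scheme workable at scale.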

What was there about navigation that involved such effortful calculation? Blame a round earth: the moment we pass from figures bounded by straight lines or flat surfaces, we run slap into all the problems of continuity and the mazes of irrational numbers. Pi, the ratio of a circle’s circumference to its diameter, is ugly enough in base 10 (3.1415…). But calculate pi in any base, and it churns out digits forever. It cannot be expressed as a ratio of two whole numbers. Mathematics began when practical thinkers like Archimedes decided to ignore naysayers like Zeno (whose paradoxes were meant to bury mathematics, not to praise it) and deal with nonsenses like pi and the square root of 2.

How do such monstrosities yield such sensible results? Because mathematics is magical. Deal with it.

Ian Stewart deals with it rather well in Significant Figures, his hagiographical compendium of 25 great mathematicians’ lives. It’s easy to quibble. One of the criteria for Stewart’s selection was, he tells us, diversity. Like everybody else, he wants to have written Tom Stoppard’s Arcadia, championing (if necessary, inventing) some unsung heroine to enliven a male-dominated field. So he relegates Charles Babbage to Ada King’s little helper, then repents by quoting the opinion of Babbage’s biographer Anthony Hyman (perfectly justified, so far as I know) that ‘there is not a scrap of evidence that Ada ever attempted original mathematical work’. Well, that’s fashion for you.

In general, Stewart is the least modish of writers, delivering new scholarship on ancient Chinese and Indian mathematics to supplement a well-rehearsed body of knowledge about the western tradition. A prolific writer himself, Stewart is good at identifying the audiences for mathematics at different periods. The first recognisable algebra book, by Al-Khwarizmi, written in the first half of the 9th century, was commissioned for a popular audience. Western examples of the popular form include Cardano’s Book on Games of Chance, published in 1663. It was the discipline’s first foray into probability.

As a subject for writers, mathematics sits somewhere between physics and classical music. Like physics, it requires that readers acquire a theoretical minimum, without which nothing will make much sense. (Unmathematical readers should not start with Significant Figures; it is far too compressed.) At the same time, like classical music, mathematics will not stand too much radical reinterpretation, so that biography ends up playing a disconcertingly large role in the scholarship.

In his potted biographies Stewart supplements but makes no attempt to supersede Eric Temple Bell, whose history Men of Mathematics of 1937 remains canonical. This is wise: you wouldn’t remake Civilisation by ignoring Kenneth Clark. At the same time, one can’t help regretting the degree to which a Scottish-born mathematician and science fiction writer born in 1945 has had his limits set by the work of a Scottish-born mathematician and science fiction writer born in 1883. It can’t be helped. Mathematical results are not superseded. When the ancient Babylonians worked out how to solve quadratic equations, their result never became obsolete.

This is, I suspect, why Lockhart and Stewart have each ended up writing good books about territories adjacent to the meat of mathematics. The difference is that Lockhart did this deliberately. Stewart simply ran out of room.

Stanisław Lem: The man with the future inside him


From the 1950s, science fiction writer Stanisław Lem began firing out prescient explorations of our present and far beyond. His vision is proving unparalleled.
For New Scientist, 16 November 2016

“POSTED everywhere on street corners, the idiot irresponsibles twitter supersonic approval, repeating slogans, giggling, dancing…” So it goes in William Burroughs’s novel The Soft Machine (1961). Did he predict social media? If so, he joins a large and mostly deplorable crowd of lucky guessers. Did you know that in Robert Heinlein’s 1948 novel Space Cadet, he invented microwave food? Do you care?

There’s more to futurology than guesswork, of course, and not all predictions are facile. Writing in the 1950s, Ray Bradbury predicted earbud headphones and elevator muzak, and foresaw the creeping eeriness of today’s media-saturated shopping mall culture. But even Bradbury’s guesses – almost everyone’s guesses, in fact – tended to exaggerate the contemporary moment. More TV! More suburbia! Videophones and cars with no need of roads. The powerful, topical visions of writers like Frederik Pohl and Arthur C. Clarke are visions of what the world would be like if the 1950s (the 1960s, the 1970s…) went on forever.

And that is why Stanisław Lem, the Polish satirist, essayist, science fiction writer and futurologist, had no time for them. “Meaningful prediction,” he wrote, “does not lie in serving up the present larded with startling improvements or revelations in lieu of the future.” He wanted more: to grasp the human adventure in all its promise, tragedy and grandeur. He devised whole new chapters of the human story, not happy endings.

And, as far as I can tell, Lem got everything – everything – right. Less than a year before Russia and the US played their game of nuclear chicken over Cuba, he nailed the rational madness of cold-war policy in his book Memoirs Found in a Bathtub (1961). And while his contemporaries were churning out dystopias in the Orwellian mould, supposing that information would be tightly controlled in the future, Lem was conjuring with the internet (which did not then exist), and imagining futures in which important facts are carried away on a flood of falsehoods, and our civic freedoms along with them. Twenty years before the term “virtual reality” appeared, Lem was already writing about its likely educational and cultural effects. He also coined a better name for it: “phantomatics”. The books on genetic engineering passing my desk for review this year have, at best, simply reframed ethical questions Lem set out in Summa Technologiae back in 1964 (though, shockingly, the book was not translated into English until 2013). He dreamed up all the usual nanotechnological fantasies, from spider silk space-elevator cables to catastrophic “grey goo”, decades before they entered the public consciousness. He wrote about the technological singularity – the idea that artificial superintelligence would spark runaway technological growth – before Gordon Moore had even had the chance to cook up his “law” about the exponential growth of computing power. Not every prediction was serious. Lem coined the phrase “Theory of Everything”, but only so he could point at it and laugh.

He was born on 12 September 1921 in Lwów, Poland (now Lviv in Ukraine). His abiding concern was the way people use reason as a white stick as they steer blindly through a world dominated by chance and accident. This perspective was acquired early, while he was being pressed up against a wall by the muzzle of a Nazi machine gun – just one of several narrow escapes. “The difference between life and death depended upon… whether one went to visit a friend at 1 o’clock or 20 minutes later,” he recalled.

Though a keen engineer and inventor – in school he dreamed up the differential gear and was disappointed to find it already existed – Lem’s true gift lay in understanding systems. His finest childhood invention was a complete state bureaucracy, with internal passports and an impenetrable central office.

He found the world he had been born into absurd enough to power his first novel (Hospital of the Transfiguration, 1955), and might never have turned to science fiction had he not needed to lean heavily on metaphor to evade the attentions of Stalin’s literary censors. He did not become really productive until 1956, when Poland enjoyed a post-Stalinist thaw, and in the 12 years following he wrote 17 books, among them Solaris (1961), the work for which he is best known by English speakers.

Solaris is the story of a team of distraught experts in orbit around an inscrutable and apparently sentient planet, trying to come to terms with its cruel gift-giving (it insists on “resurrecting” their dead). Solaris reflects Lem’s pessimistic attitude to the search for extraterrestrial intelligence. It’s not that alien intelligences aren’t out there, Lem says, because they almost certainly are. But they won’t be our sort of intelligences. In the struggle for control over their environment they may as easily have chosen to ignore communication as respond to it; they might have decided to live in a fantastical simulation rather than take their chances any longer in the physical realm; they may have solved the problems of their existence to the point at which they can dispense with intelligence entirely; they may be stoned out of their heads. And so on ad infinitum. Because the universe is so much bigger than all of us, no matter how rigorously we test our vaunted gift of reason against it, that reason is still something we made – an artefact, a crutch. As Lem made explicit in one of his last novels, Fiasco (1986), extraterrestrial versions of reason and reasonableness may look very different to our own.

Lem understood the importance of history as no other futurologist ever has. What has been learned cannot be unlearned; certain paths, once taken, cannot be retraced. Working in the chill of the cold war, Lem feared that our violent and genocidal impulses were historically constant, while our technical capacity for destruction would only grow.

Should we find a way to survive our own urge to destruction, the challenge will be to handle our success. The more complex the social machine, the more prone it will be to malfunction. In his hard-boiled postmodern detective story The Chain of Chance (1975), Lem imagines a very near future that is crossing a threshold of complexity beyond which forms of government begin to look increasingly impotent (and yes, if we’re still counting, it’s here that he makes yet another on-the-money prediction by describing the marriage of instantly accessible media and global terrorism).

Say we make it. Say we become the masters of the universe, able to shape the material world at will: what then? Eventually, our technology will take over completely from slow-moving natural selection, allowing us to re-engineer our planet and our bodies. We will no longer need to borrow from nature, and will no longer feel any need to copy it.

At the extreme limit of his futurological vision, Lem imagines us abandoning the attempt to understand our current reality in favour of building an entirely new one. Yet even then we will live in thrall to the contingencies of history and accident. In Lem’s “review” of the fictitious Professor Dobb’s book Non Serviam, Dobb, the creator, may be forced to destroy the artificial universe he has created – one full of life, beauty and intelligence – because his university can no longer afford the electricity bills. Let’s hope we’re not living in such a simulation.

Most futurologists are secret utopians: they want history to end. They want time to come to a stop; to author a happy ending. Lem was better than that. He wanted to see what was next, and what would come after that, and after that, a thousand, ten thousand years into the future. Having felt its sharp end, he knew that history was real, that the cause of problems is solutions, and that there is no perfect world, neither in our past nor in our future, assuming that we have one.

By the time he died in 2006, this acerbic, difficult, impatient writer who gave no quarter to anyone – least of all his readers – had sold close to 40 million books in more than 40 languages, and earned praise from futurologists such as Alvin Toffler of Future Shock fame, scientists from Carl Sagan to Douglas Hofstadter, and philosophers from Daniel Dennett to Nicholas Rescher.

“Our situation, I would say,” Lem once wrote, “is analogous to that of a savage who, having discovered the catapult, thought that he was already close to space travel.” Be realistic, is what this most fantastical of writers advises us. Be patient. Be as smart as you can possibly be. It’s a big world out there, and you have barely begun.


Just how much does the world follow laws?


How the Zebra Got its Stripes and Other Darwinian Just So Stories by Léo Grasset
The Serengeti Rules: The quest to discover how life works and why it matters by Sean B. Carroll
Lysenko’s Ghost: Epigenetics and Russia by Loren Graham
The Great Derangement: Climate change and the unthinkable by Amitav Ghosh
reviewed for New Scientist, 15 October 2016

JUST how much does the world follow laws? The human mind, it seems, may not be the ideal toolkit with which to craft an answer. To understand the world at all, we have to predict likely events and so we have a lot invested in spotting rules, even when they are not really there.

Such demands have also shaped more specialised parts of culture. The history of the sciences is one of constant struggle between the accumulation of observations and their abstraction into natural laws. The temptation (especially for physicists) is to assume these laws are real: a bedrock underpinning the messy, observable world. Life scientists, on the other hand, can afford no such assumption. Their field is constantly on the move, a plaything of time and historical contingency. If there is a lawfulness to living things, few plants and animals seem to be aware of it.

Consider, for example, the charming “just so” stories in French biologist and YouTuber Léo Grasset’s book of short essays, How the Zebra Got its Stripes. Now and again Grasset finds order and coherence in the natural world. His cost-benefit analysis of how animal communities make decisions, contrasting “autocracy” and “democracy”, is a fine example of lawfulness in action.

But Grasset is also sharply aware of those points where the cause-and-effect logic of scientific description cannot show the whole picture. There are, for instance, four really good ways of explaining how the zebra got its stripes, and those stripes arose probably for all those reasons, along with a couple of dozen others whose mechanisms are lost to evolutionary history.

And Grasset has even more fun describing the occasions when, frankly, nature goes nuts. Take the female hyena, for example, which has to give birth through a “pseudo-penis”. As a result, 15 per cent of mothers die after their first labour and 60 per cent of cubs die at birth. If this were a “just so” story, it would be a decidedly off-colour one.

The tussle between observation and abstraction in biology has a fascinating, fraught and sometimes violent history. In Europe at the birth of the 20th century, biology was still a descriptive science. Life presented, German molecular biologist Gunther Stent observed, “a near infinitude of particulars which have to be sorted out case by case”. Purely descriptive approaches had exhausted their usefulness and new, experimental approaches were developed: genetics, cytology, protozoology, hydrobiology, endocrinology, experimental embryology – even animal psychology. And with the elucidation of underlying biological process came the illusion of control.

In 1917, even as Vladimir Lenin was preparing to seize power in Russia, the botanist Nikolai Vavilov was lecturing to his class at the Saratov Agricultural Institute, outlining the task before them as “the planned and rational utilisation of the plant resources of the terrestrial globe”.

Predicting that the young science of genetics would give the next generation the ability “to sculpt organic forms at will”, Vavilov asserted that “biological synthesis is becoming as much a reality as chemical”.

The consequences of this kind of boosterism are laid bare in Lysenko’s Ghost by the veteran historian of Soviet science Loren Graham. He reminds us what happened when the tentatively defined scientific “laws” of plant physiology were wielded as policy instruments by a desperate and resource-strapped government.

Within the Soviet Union, dogmatic views on agrobiology led to disastrous agricultural reforms, and no amount of modern, politically motivated revisionism (the especial target of Graham’s book) can make those efforts seem more rational, or their aftermath less catastrophic.

In modern times, thankfully, a naive belief in nature’s lawfulness, reflected in lazy and increasingly outmoded expressions such as “the balance of nature”, is giving way to a more nuanced, self-aware, even tragic view of the living world. The Serengeti Rules, Sean B. Carroll’s otherwise triumphant account of how physiology and ecology turned out to share some of the same mathematics, does not shy away from the fact that the “rules” he talks about are really just arguments from analogy.

Some notable conservation triumphs have led from the discovery that “just as there are molecular rules that regulate the numbers of different kinds of molecules and cells in the body, there are ecological rules that regulate the numbers and kinds of animals and plants in a given place”.

For example, in Gorongosa National Park, Mozambique, in 2000, there were fewer than 1000 elephants, hippos, wildebeest, waterbuck, zebras, eland, buffalo, hartebeest and sable antelopes combined. Today, with the reintroduction of key predators, there are almost 40,000 animals, including 535 elephants and 436 hippos. And several of the populations are increasing by more than 20 per cent a year.

But Carroll is understandably flummoxed when it comes to explaining how those rules might apply to us. “How can we possibly hope that 7 billion people, in more than 190 countries, rich and poor, with so many different political and religious beliefs, might begin to act in ways for the long-term good of everyone?” he asks. How indeed: humans’ capacity for cultural transmission renders every Serengeti rule moot, along with the Serengeti itself – and a “law of nature” that does not include its dominant species is not really a law at all.

Of course, it is not just the sciences that have laws: the humanities and the arts do too. In The Great Derangement, a book that began as four lectures presented at the University of Chicago last year, the novelist Amitav Ghosh considers the laws of his own practice. The vast majority of novels, he explains, are realistic. In other words, the novel arose to reflect the kind of regularised life that gave you time to read novels – a regularity achieved through the availability of reliable, cheap energy: first, coal and steam, and later, oil.

No wonder, then, that “in the literary imagination climate change was somehow akin to extraterrestrials or interplanetary travel”. Ghosh is keenly aware of and impressively well informed about climate change: in 1978, he was nearly killed in an unprecedentedly ferocious tornado that ripped through northern Delhi, leaving 30 dead and 700 injured. Yet he has never been able to work this story into his “realist” fiction. His hands are tied: he is trapped in “the grid of literary forms and conventions that came to shape the narrative imagination in precisely that period when the accumulation of carbon in the atmosphere was rewriting the destiny of the Earth”.

The exciting and frightening thing about Ghosh’s argument is how he traces the novel’s narrow compass back to popular and influential scientific ideas – ideas that championed uniform and gradual processes over cataclysms and catastrophes.

One big complaint about science – that it kills wonder – is the same criticism Ghosh levels at the novel: that it bequeaths us “a world of few surprises, fewer adventures, and no miracles at all”. Lawfulness in biology is rather like realism in fiction: it is a convention so useful that we forget that it is a convention.

But, if anthropogenic climate change and the gathering sixth mass extinction event have taught us anything, it is that the world is wilder than the laws we are used to would predict. Indeed, if the world really were in a novel – or even in a book of popular science – no one would believe it.

Beware the indeterminate momentum of the throbbing whole


Graham Harman (2nd from right) and fellow speculative materialists in 2007


In 1942, the Argentine writer Jorge Luis Borges cooked up an entirely fictitious “Chinese” encyclopedia entry for animals. Among its nonsensical subheadings were “Embalmed ones”, “Stray dogs”, “Those that are included in this classification” and “Those that, at a distance, resemble flies”.

Explaining why these categories make no practical sense is a useful and enjoyable intellectual exercise – so much so that in 1966 the French philosopher Michel Foucault wrote an entire book inspired by Borges’ notion. Les mots et les choses (The Order of Things) became one of the defining works of the French philosophical movement called structuralism.

How do we categorise the things we find in the world? In Immaterialism, his short and very sweet introduction to his own brand of philosophy, “object-oriented ontology”, the Cairo-based philosopher Graham Harman identifies two broad strategies. Sometimes we split things into their ingredients. (Since the Enlightenment, this has been the favoured and extremely successful strategy of most sciences.) Sometimes, however, it’s better to work in the opposite direction, defining things by their relations with other things. (This is the favoured method of historians and critics and other thinkers in the humanities.)

Why should scientists care about this second way of thinking? Often they don’t have to. Scientists are specialists. Reductionism – finding out what things are made of – is enough for them.

Naturally, there is no hard and fast rule to be made here, and some disciplines – the life sciences especially – can’t always reduce things to their components.

So there have been attempts to bring this other, “emergentist” way of thinking into the sciences. One of the most ingenious was the “new materialism” of the German entrepreneur (and Karl Marx’s sidekick) Friedrich Engels. One of Engels’s favourite targets was the Linnaean system of biological classification. Rooted in formal logic, this taxonomy divides all living things into species and orders. It offers us a huge snapshot of the living world. It is tremendously useful. It is true. But it has limits. It cannot record how one species may, over time, give rise to some other, quite different species. (Engels had great fun with the duckbilled platypus, asking where that fitted into any rigid scheme of things.) Similarly, there is no “essence” hiding behind a cloud of steam, a puddle of water, or a block of ice. There are only structures, succeeding each other in response to changes in the local conditions. The world is not a ready-made thing: it is a complex interplay of processes, all of which are ebbing and flowing, coming into being and passing away.

So far so good. Applied to science, however, Engels’s schema turns out to be hardly more than a superior species of hand-waving. Indeed, “dialectical materialism” (as it later became known) proved so unwieldy, it took very few years of application before it became a blunt weapon in the hands of Stalinist philosophers who used it to demotivate, discredit and disbar any scientific colleague whose politics they didn’t like.

Harman has learned the lessons of history well. Though he’s curious to know where his philosophy abuts scientific practice (and especially the study of evolution), he is prepared to accept that specialists know what they are doing: that rigour in a narrow field is a legitimate way of squeezing knowledge out of the world, and that a 126-page A-format paperback is probably not the place to reinvent the wheel.

What really agitates him, fills his pages, and drives him to some cracking one-liners (this is, heavens be praised, a *funny* book about philosophy) is the sheer lack of rigour to be found in his own sphere.

While pillorying scientists for treating objects as superficial compared with their tiniest pieces, philosophers in the humanities have for more than a century been leaping off the opposite cliff, treating objects “as needlessly deep or spooky hypotheses”. By claiming that an object is nothing but its relations or actions they unknowingly repeat the argument of the ancient Megarians, “who claimed that no one is a house-builder unless they are currently building a house”. Harman is sick and tired of this intellectual fashion, by which “‘becoming’ is blessed as the trump card of innovators, while ‘being’ is cursed as a sad-sack regression to the archaic philosophies of olden times”.

Above all, Harman has had it with peers and colleagues who zoom out and away from every detailed question, until the very world they’re meant to be studying resembles “the indeterminate momentum of the throbbing whole” (and this is not a joke — this is the sincerely meant position statement of another philosopher, a friendly acquaintance of his, Jane Bennett).

So what’s Harman’s solution? Basically, he wants to be able to talk unapologetically about objects. He explores a single example: the history of the Dutch East India Company. Without toppling into the “great men” view of history – according to which a world of inanimate props is pushed about by a few arbitrarily privileged human agents – he is out to show that the company was an actual *thing*, a more-or-less stable phenomenon ripe for investigation, and not simply a rag-bag collection of “human practices”.

Does his philosophy describe the Dutch East India Company rigorously enough for his work to qualify as real knowledge? I think so. In fact I think he succeeds to a degree which will surprise, reassure and entertain the scientifically minded.

Be in no doubt: Harman is no turncoat. He does not want the humanities to be “more scientific”. He wants them to be less scientific, but no less rigorous, able to handle, with rigour and versatility, the vast and teeming world of things science cannot handle: “Hillary Clinton, the city of Odessa, Tolkien’s imaginary Rivendell… a severed limb, a mixed herd of zebras and wildebeest, the non-existent 2016 Chicago Summer Olympics, and the constellation of Scorpio”.

Immaterialism
Graham Harman
Polity, £9.99