Future by design

The Second Digital Turn: Design beyond intelligence
Mario Carpo
MIT Press

THE Polish futurist Stanislaw Lem once wrote: “A scientist wants an algorithm, whereas the technologist is more like a gardener who plants a tree, picks apples, and is not bothered about ‘how the tree did it’.”

For Lem, the future belongs to technologists, not scientists. If Mario Carpo is right and the “second digital turn” described in his extraordinary new book comes to term, then Lem’s playful, “imitological” future, in which analysis must be abandoned in favour of creative activity, will be upon us in a decade or two. Never mind our human practice of science: science itself will no longer exist, and our cultural life will consist of storytelling, gesture and species of magical thinking.

Carpo studies architecture. Five years ago, he edited The Digital Turn in Architecture 1992-2012, a book capturing the curvilinear, parametric spirit of digital architecture. Think Frank Gehry’s Guggenheim Museum in Bilbao – a sort of deconstructed metal fish head – and you are halfway there.

Such is the rate of change that five years later, Carpo has had to write another book (the urgency of his prose is palpable and thrilling) about an entirely different kind of design. This is a generative design powered by artificial intelligence, with its ability to thug through digital simulations (effectively, breaking things on screen until something turns up that can’t be broken) and arrive at solutions that humans and their science cannot better.

This kind of design has no need of casts, stamps, moulds or dies. No costs need be amortised. Everything can be a one-off at the same unit cost.

Beyond the built environment, it is the spiritual consequences of this shift that matter, for by its light Carpo shows all cultural history to be a gargantuan exercise in information compression.

Unlike their AIs, human beings cannot hold much information at any one time. Hence, for example, the Roman alphabet: a marvel of compression, approximating all possible vocalisations with just 26 characters. Now that we can type and distribute any glyph at the touch of a button, is it any wonder emojis are supplementing our tidy 26-letter communications?

Science itself is simply a series of computational strategies to draw the maximum inference from the smallest number of precedents. Reduce the world to rules and there is no need for those precedents. We have done this for so long and so well that some of us have forgotten that “rules” aren’t “real” rules; they are just generalisations.

AIs simply gather or model as many precedents as they wish. Left to collect data according to their own strengths, they are, Carpo says, “postscientific”. They aren’t doing science we recognise: they are just thugging.

Carpo foresees the “separation of the minds of the thinkers from the tools of computation”. But in that alienation, I think, lies our reason to go on. Because humans cannot handle very much data at any one time, sorting is vital, which means we have to assign meaning. Sorting is therefore the process whereby we turn data into knowledge. Our inability to do what computers can do has a name already: consciousness.

Carpo’s succinctly argued future has us return to a tradition of orality and gesture, where these forms of communication need no reduction or compression since our tech will be able to record, notate, transmit, process and search them, making all cultural technologies developed to handle these tasks “equally unnecessary”. This will be neither advance nor regression. Evolution, remember, is maddeningly valueless.

Could we ever have evolved into Spock-like hyper-rationality? I doubt it. Carpo’s sincerity, wit and mischief show that Prospero is more the human style. Or Peter Pan, who observed: “You can have anything in life, if you will sacrifice everything else for it.”

 

Stalin’s meteorologist

I reviewed Olivier Rolin’s new book for The Daily Telegraph

750,000 shot. This figure is exact; the Soviet secret police, the NKVD, kept meticulous records relating to their activities during Stalin’s Great Purge. How is anyone to encompass in words this horror, barely 80 years old? Some writers find the one to stand for the all: an Everyman to focus the reader’s horror and pity. Olivier Rolin found his when he was shown drawings and watercolours made by Alexey Wangenheim, an inmate of the Solovki prison camp in Russia’s Arctic north. He made them for his daughter, and they are reproduced as touching miniatures in this slim, devastating book, part travelogue, part transliteration of Wangenheim’s few letters home.

While many undesirables were labelled by national or racial identity, a huge number were betrayed by their accomplishments. Before he was denounced by a jealous colleague, Wangenheim ran a pan-Soviet weather service. He was not an exceptional scientist: more an efficient bureaucrat. He cannot even be relied on “to give colourful descriptions of the glories of nature” before setting sail, with over a thousand others, for a secret destination, not far outside the town of Medvezhegorsk. There, some time around October 1937, a single NKVD officer dispatched the lot of them, though he had help with the cudgelling, the transport, the grave-digging. While he went to work with his Nagant pistol, others were washing blood and brains off the trucks and tarpaulins.

Right to the bitter end, Wangenheim is a boring correspondent, always banging on about the Party. “My faith in the Soviet authorities has in no way been shaken,” he says. “Has Comrade Stalin received my letter?” And again: “I have battled in my heart not to allow myself to think ill of the Soviet authorities or of the leaders”. Rolin makes gold of such monotony, exploiting the degree to which French lends itself to lists and repeated figures, and his translator Ros Schwartz has rendered these into English that is not just palatable, but often thrilling and always freighted with dread.

When Wangenheim is not reassuring his wife about the Bolshevik project, he is making mosaics out of stone chippings and brick dust: meticulous little portraits of — of all people — Stalin. Rolin openly struggles to understand his subject’s motivation: “In any case, blinkeredness or pathetic cunning, there is something sinister about seeing this man, this scholar, making of his own volition the portrait of the man in whose name he is being crucified.”

That Rolin finds a mystery here is of a piece with his awkward nostalgia for the promise of the Bolshevik revolution. Hovering like a miasma over some pages (though Rolin is too smart to succumb utterly) is that hoary old meme, “the revolution betrayed”. So let us be clear: the revolution was not betrayed. The revolution panned out exactly the way it was always going to pan out, whether Stalin was at the helm or not. This is also exactly how the French revolution panned out, and for exactly the same reason.

Both the French and Socialist revolutions sought to reinvent politics to reflect the imminent unification of all branches of human knowledge and, consequently, their radical simplification. By Marx’s day this idea, under the label “scientism”, had become yawningly conventional: also wrong.

Certainly by the time of the Bolshevik revolution, scientists better than Wangenheim — physicists, most famously — knew that the universe would not brook such simplification, whether under Marx or under any other totalising system. Rationality remains a superb tool with which to investigate the world. But as a working model of the world, guiding political action, it leads only to terror.

To understand Wangenheim’s mosaic-making, we have to look past his work, diligently centralising and simplifying his own meteorological science to the point where a jealous colleague, deprived of his sinecure, denounced him. We need to look at the human consequences of this attempt at scientific government, and particularly at what radical simplification does to the human psyche. To order and simplify life is to bureaucratise it, and to bureaucratise human beings is to make them behave like machines. Rolin says Wangenheim clung to the party for the sake of his own sanity. I don’t doubt it. But to cling to any human institution, or to any such removed and fortressed individual, is the act, not of a suffering human being but of a malfunctioning machine.

At the end of his 1940 film The Great Dictator, Charles Chaplin, dressed in Adolf Hitler’s motley, broke the fourth wall to declare war on the “machine men with machine minds” that were then marching roughshod across his world. Regardless of Hitler’s defeat, this was a war we assuredly lost. To be sure, the bureaucratic infection, like all infections, has adapted to ensure its own survival, and it is not so virulent as it was. The pleasures of bureaucracy are more evident now; its damages, though still very real, are less evident. “Disruption” has replaced the Purge. The Twitter user has replaced the police informant.

But let us be explicit here, where Rolin has been admirably artful and quietly insidious: the pleasures of bureaucracy in both eras are exactly the same. Wangenheim’s murderers lived in a world that had been made radically simple for them. In Utopia, all you have to do is your job (though if you don’t, Utopia falls apart). These men weren’t deprived of humanity: they were relieved of it. They experienced exactly what you or I feel when the burden of life’s ambiguities is lifted of a sudden from our shoulders: contentment, bordering on joy.

A kind of “symbol knitting”

Reviewing new books by Paul Lockhart and Ian Stewart for The Spectator 

It’s odd, when you think about it, that mathematics ever got going. We have no innate genius for numbers. Drop five stones on the ground, and most of us will see five stones without counting. Six stones are a challenge. Presented with seven stones, we will have to start grouping, tallying and making patterns.

This is arithmetic, ‘a kind of “symbol knitting”’ according to the maths researcher and sometime teacher Paul Lockhart, whose Arithmetic explains how counting systems evolved to facilitate communication and trade, and ended up watering (by no very obvious route) the metaphysical gardens of mathematics.

Lockhart shamelessly (and successfully) supplements the archaeological record with invented number systems of his own. His three fictitious early peoples have decided to group numbers differently: in fours, in fives, and in sevens. Now watch as they try to communicate. It’s a charming conceit.

Arithmetic is supposed to be easy, acquired through play and practice rather than through the kind of pseudo-theoretical ponderings that blighted my 1970s-era state education. Lockhart has a lot of time for Roman numerals, an effortlessly simple base-ten system which features subgroup symbols like V (5), L (50) and D (500) to smooth things along. From glorified tallying systems like this, it’s but a short leap to the abacus.

It took an eye-watering six centuries for Hindu-Arabic numbers to catch on in Europe (via Fibonacci’s Liber Abaci of 1202). For most of us, abandoning intuitive tally marks and bead positions for a set of nine exotic squiggles and a dot (the forerunner of zero) is a lot of cost for an impossibly distant benefit. ‘You can get good at it if you want to,’ says Lockhart, in a fit of under-selling, ‘but it is no big deal either way.’

It took another four centuries for calculation to become a career, as sea-going powers of the late 18th century wrestled with the problems of navigation. In an effort to improve the accuracy of their logarithmic tables, French mathematicians broke the necessary calculations down into simple steps involving only addition and subtraction, assigning each step to human ‘computers’.

What was there about navigation that involved such effortful calculation? Blame a round earth: the moment we pass from figures bounded by straight lines or flat surfaces we run slap into all the problems of continuity and the mazes of irrational numbers. Pi, the ratio of a circle’s circumference to its diameter, is ugly enough in base 10 (3.1416…). But calculate pi in any base, and it churns out digits forever. It cannot be expressed as a ratio of whole numbers. Mathematics began when practical thinkers like Archimedes decided to ignore naysayers like Zeno (whose paradoxes were meant to bury mathematics, not to praise it) and deal with nonsenses like pi and the square root of −1.

How do such monstrosities yield such sensible results? Because mathematics is magical. Deal with it.

Ian Stewart deals with it rather well in Significant Figures, his hagiographical compendium of 25 great mathematicians’ lives. It’s easy to quibble. One of the criteria for Stewart’s selection was, he tells us, diversity. Like everybody else, he wants to have written Tom Stoppard’s Arcadia, championing (if necessary, inventing) some unsung heroine to enliven a male-dominated field. So he relegates Charles Babbage to Ada King’s little helper, then repents by quoting the opinion of Babbage’s biographer Anthony Hyman (perfectly justified, so far as I know) that ‘there is not a scrap of evidence that Ada ever attempted original mathematical work’. Well, that’s fashion for you.

In general, Stewart is the least modish of writers, delivering new scholarship on ancient Chinese and Indian mathematics to supplement a well-rehearsed body of knowledge about the western tradition. A prolific writer himself, Stewart is good at identifying the audiences for mathematics at different periods. The first recognisable algebra book, by Al-Khwarizmi, written in the first half of the 9th century, was commissioned for a popular audience. Western examples of the popular form include Cardano’s Book on Games of Chance, published in 1663. It was the discipline’s first foray into probability.

As a subject for writers, mathematics sits somewhere between physics and classical music. Like physics, it requires that readers acquire a theoretical minimum, without which nothing will make much sense. (Unmathematical readers should not start with Significant Figures; it is far too compressed.) At the same time, like classical music, mathematics will not stand too much radical reinterpretation, so that biography ends up playing a disconcertingly large role in the scholarship.

In his potted biographies Stewart supplements but makes no attempt to supersede Eric Temple Bell, whose history Men of Mathematics of 1937 remains canonical. This is wise: you wouldn’t remake Civilisation by ignoring Kenneth Clark. At the same time, one can’t help regretting the degree to which a mathematician and science fiction writer born in 1945 has had his limits set by the work of a Scottish-born mathematician and science fiction writer born in 1883. It can’t be helped. Mathematical results are not superseded. When the ancient Babylonians worked out how to solve quadratic equations, their result never became obsolete.

This is, I suspect, why Lockhart and Stewart have each ended up writing good books about territories adjacent to the meat of mathematics. The difference is that Lockhart did this deliberately. Stewart simply ran out of room.

Stanisław Lem: The man with the future inside him

From the 1950s, science fiction writer Stanisław Lem began firing out prescient explorations of our present and far beyond. His vision is proving unparalleled.
For New Scientist, 16 November 2016

“POSTED everywhere on street corners, the idiot irresponsibles twitter supersonic approval, repeating slogans, giggling, dancing…” So it goes in William Burroughs’s novel The Soft Machine (1961). Did he predict social media? If so, he joins a large and mostly deplorable crowd of lucky guessers. Did you know that Robert Heinlein invented microwave food in his 1948 story Space Cadet? Do you care?

There’s more to futurology than guesswork, of course, and not all predictions are facile. Writing in the 1950s, Ray Bradbury predicted earbud headphones and elevator muzak, and foresaw the creeping eeriness of today’s media-saturated shopping mall culture. But even Bradbury’s guesses – almost everyone’s guesses, in fact – tended to exaggerate the contemporary moment. More TV! More suburbia! Videophones and cars with no need of roads. The powerful, topical visions of writers like Frederik Pohl and Arthur C. Clarke are visions of what the world would be like if the 1950s (the 1960s, the 1970s…) went on forever.

And that is why Stanisław Lem, the Polish satirist, essayist, science fiction writer and futurologist, had no time for them. “Meaningful prediction,” he wrote, “does not lie in serving up the present larded with startling improvements or revelations in lieu of the future.” He wanted more: to grasp the human adventure in all its promise, tragedy and grandeur. He devised whole new chapters of the human story, not happy endings.

And, as far as I can tell, Lem got everything – everything – right. Less than a year before Russia and the US played their game of nuclear chicken over Cuba, he nailed the rational madness of cold-war policy in his book Memoirs Found in a Bathtub (1961). And while his contemporaries were churning out dystopias in the Orwellian mould, supposing that information would be tightly controlled in the future, Lem was conjuring with the internet (which did not then exist), and imagining futures in which important facts are carried away on a flood of falsehoods, and our civic freedoms along with them. Twenty years before the term “virtual reality” appeared, Lem was already writing about its likely educational and cultural effects. He also coined a better name for it: “phantomatics”. The books on genetic engineering passing my desk for review this year have, at best, simply reframed ethical questions Lem set out in Summa Technologiae back in 1964 (though, shockingly, the book was not translated into English until 2013). He dreamed up all the usual nanotechnological fantasies, from spider silk space-elevator cables to catastrophic “grey goo”, decades before they entered the public consciousness. He wrote about the technological singularity – the idea that artificial superintelligence would spark runaway technological growth – before Gordon Moore had even had the chance to cook up his “law” about the exponential growth of computing power. Not every prediction was serious. Lem coined the phrase “Theory of Everything”, but only so he could point at it and laugh.

He was born on 12 September 1921 in Lwów, Poland (now Lviv in Ukraine). His abiding concern was the way people use reason as a white stick as they steer blindly through a world dominated by chance and accident. This perspective was acquired early, while he was being pressed up against a wall by the muzzle of a Nazi machine gun – just one of several narrow escapes. “The difference between life and death depended upon… whether one went to visit a friend at 1 o’clock or 20 minutes later,” he recalled.

Though he was a keen engineer and inventor – in school he dreamed up the differential gear and was disappointed to find it already existed – Lem’s true gift lay in understanding systems. His finest childhood invention was a complete state bureaucracy, with internal passports and an impenetrable central office.

He found the world he had been born into absurd enough to power his first novel (Hospital of the Transfiguration, 1955), and might never have turned to science fiction had he not needed to leap heavily into metaphor to evade the attentions of Stalin’s literary censors. He did not become really productive until 1956, when Poland enjoyed a post-Stalinist thaw, and in the 12 years following he wrote 17 books, among them Solaris (1961), the work for which he is best known by English speakers.

Solaris is the story of a team of distraught experts in orbit around an inscrutable and apparently sentient planet, trying to come to terms with its cruel gift-giving (it insists on “resurrecting” their dead). Solaris reflects Lem’s pessimistic attitude to the search for extraterrestrial intelligence. It’s not that alien intelligences aren’t out there, Lem says, because they almost certainly are. But they won’t be our sort of intelligences. In the struggle for control over their environment they may as easily have chosen to ignore communication as respond to it; they might have decided to live in a fantastical simulation rather than take their chances any longer in the physical realm; they may have solved the problems of their existence to the point at which they can dispense with intelligence entirely; they may be stoned out of their heads. And so on ad infinitum. Because the universe is so much bigger than all of us, no matter how rigorously we test our vaunted gift of reason against it, that reason is still something we made – an artefact, a crutch. As Lem made explicit in one of his last novels, Fiasco (1986), extraterrestrial versions of reason and reasonableness may look very different to our own.

Lem understood the importance of history as no other futurologist ever has. What has been learned cannot be unlearned; certain paths, once taken, cannot be retraced. Working in the chill of the cold war, Lem feared that our violent and genocidal impulses are historically constant, while our technical capacity for destruction will only grow.

Should we find a way to survive our own urge to destruction, the challenge will be to handle our success. The more complex the social machine, the more prone it will be to malfunction. In his hard-boiled postmodern detective story The Chain of Chance (1975), Lem imagines a very near future that is crossing the brink of complexity, beyond which forms of government begin to look increasingly impotent (and yes, if we’re still counting, it’s here that he makes yet another on-the-money prediction by describing the marriage of instantly accessible media and global terrorism).

Say we make it. Say we become the masters of the universe, able to shape the material world at will: what then? Eventually, our technology will take over completely from slow-moving natural selection, allowing us to re-engineer our planet and our bodies. We will no longer need to borrow from nature, and will no longer feel any need to copy it.

At the extreme limit of his futurological vision, Lem imagines us abandoning the attempt to understand our current reality in favour of building an entirely new one. Yet even then we will live in thrall to the contingencies of history and accident. In Lem’s “review” of the fictitious Professor Dobb’s book Non Serviam, Dobb, the creator, may be forced to destroy the artificial universe he has created – one full of life, beauty and intelligence – because his university can no longer afford the electricity bills. Let’s hope we’re not living in such a simulation.

Most futurologists are secret utopians: they want history to end. They want time to come to a stop; to author a happy ending. Lem was better than that. He wanted to see what was next, and what would come after that, and after that, a thousand, ten thousand years into the future. Having felt its sharp end, he knew that history was real, that the cause of problems is solutions, and that there is no perfect world, neither in our past nor in our future, assuming that we have one.

By the time he died in 2006, this acerbic, difficult, impatient writer who gave no quarter to anyone – least of all his readers – had sold close to 40 million books in more than 40 languages, and earned praise from futurologists such as Alvin Toffler of Future Shock fame, scientists from Carl Sagan to Douglas Hofstadter, and philosophers from Daniel Dennett to Nicholas Rescher.

“Our situation, I would say,” Lem once wrote, “is analogous to that of a savage who, having discovered the catapult, thought that he was already close to space travel.” Be realistic, is what this most fantastical of writers advises us. Be patient. Be as smart as you can possibly be. It’s a big world out there, and you have barely begun.

 

Just how much does the world follow laws?

How the Zebra Got its Stripes and Other Darwinian Just So Stories by Léo Grasset
The Serengeti Rules: The quest to discover how life works and why it matters by Sean B. Carroll
Lysenko’s Ghost: Epigenetics and Russia by Loren Graham
The Great Derangement: Climate change and the unthinkable by Amitav Ghosh
reviewed for New Scientist, 15 October 2016

JUST how much does the world follow laws? The human mind, it seems, may not be the ideal toolkit with which to craft an answer. To understand the world at all, we have to predict likely events and so we have a lot invested in spotting rules, even when they are not really there.

Such demands have also shaped more specialised parts of culture. The history of the sciences is one of constant struggle between the accumulation of observations and their abstraction into natural laws. The temptation (especially for physicists) is to assume these laws are real: a bedrock underpinning the messy, observable world. Life scientists, on the other hand, can afford no such assumption. Their field is constantly on the move, a plaything of time and historical contingency. If there is a lawfulness to living things, few plants and animals seem to be aware of it.

Consider, for example, the charming “just so” stories in French biologist and YouTuber Léo Grasset’s book of short essays, How the Zebra Got its Stripes. Now and again Grasset finds order and coherence in the natural world. His cost-benefit analysis of how animal communities make decisions, contrasting “autocracy” and “democracy”, is a fine example of lawfulness in action.

But Grasset is also sharply aware of those points where the cause-and-effect logic of scientific description cannot show the whole picture. There are, for instance, four really good ways of explaining how the zebra got its stripes, and those stripes probably arose for all of those reasons, along with a couple of dozen others whose mechanisms are lost to evolutionary history.

And Grasset has even more fun describing the occasions when, frankly, nature goes nuts. Take the female hyena, for example, which has to give birth through a “pseudo-penis”. As a result, 15 per cent of mothers die after their first labour and 60 per cent of cubs die at birth. If this were a “just so” story, it would be a decidedly off-colour one.

The tussle between observation and abstraction in biology has a fascinating, fraught and sometimes violent history. In Europe at the birth of the 20th century, biology was still a descriptive science. Life presented, German molecular biologist Gunther Stent observed, “a near infinitude of particulars which have to be sorted out case by case”. Purely descriptive approaches had exhausted their usefulness and new, experimental approaches were developed: genetics, cytology, protozoology, hydrobiology, endocrinology, experimental embryology – even animal psychology. And with the elucidation of underlying biological process came the illusion of control.

In 1917, even as Vladimir Lenin was preparing to seize power in Russia, the botanist Nikolai Vavilov was lecturing to his class at the Saratov Agricultural Institute, outlining the task before them as “the planned and rational utilisation of the plant resources of the terrestrial globe”.

Predicting that the young science of genetics would give the next generation the ability “to sculpt organic forms at will”, Vavilov asserted that “biological synthesis is becoming as much a reality as chemical”.

The consequences of this kind of boosterism are laid bare in Lysenko’s Ghost by the veteran historian of Soviet science Loren Graham. He reminds us what happened when the tentatively defined scientific “laws” of plant physiology were wielded as policy instruments by a desperate and resource-strapped government.

Within the Soviet Union, dogmatic views on agrobiology led to disastrous agricultural reforms, and no amount of modern, politically motivated revisionism (the especial target of Graham’s book) can make those efforts seem more rational, or their aftermath less catastrophic.

In modern times, thankfully, a naive belief in nature’s lawfulness, reflected in lazy and increasingly outmoded expressions such as “the balance of nature”, is giving way to a more nuanced, self-aware, even tragic view of the living world. The Serengeti Rules, Sean B. Carroll’s otherwise triumphant account of how physiology and ecology turned out to share some of the same mathematics, does not shy away from the fact that the “rules” he talks about are really just arguments from analogy.

Some notable conservation triumphs have followed from the discovery that “just as there are molecular rules that regulate the numbers of different kinds of molecules and cells in the body, there are ecological rules that regulate the numbers and kinds of animals and plants in a given place”.

For example, in Gorongosa National Park, Mozambique, in 2000, there were fewer than 1000 elephants, hippos, wildebeest, waterbuck, zebras, eland, buffalo, hartebeest and sable antelopes combined. Today, with the reintroduction of key predators, there are almost 40,000 animals, including 535 elephants and 436 hippos. And several of the populations are increasing by more than 20 per cent a year.

But Carroll is understandably flummoxed when it comes to explaining how those rules might apply to us. “How can we possibly hope that 7 billion people, in more than 190 countries, rich and poor, with so many different political and religious beliefs, might begin to act in ways for the long-term good of everyone?” he asks. How indeed: humans’ capacity for cultural transmission renders every Serengeti rule moot, along with the Serengeti itself – and a “law of nature” that does not include its dominant species is not really a law at all.

Of course, it is not just the sciences that have laws: the humanities and the arts do too. In The Great Derangement, a book that began as four lectures presented at the University of Chicago last year, the novelist Amitav Ghosh considers the laws of his own practice. The vast majority of novels, he explains, are realistic. In other words, the novel arose to reflect the kind of regularised life that gave you time to read novels – a regularity achieved through the availability of reliable, cheap energy: first, coal and steam, and later, oil.

No wonder, then, that “in the literary imagination climate change was somehow akin to extraterrestrials or interplanetary travel”. Ghosh is keenly aware of and impressively well informed about climate change: in 1978, he was nearly killed in an unprecedentedly ferocious tornado that ripped through northern Delhi, leaving 30 dead and 700 injured. Yet he has never been able to work this story into his “realist” fiction. His hands are tied: he is trapped in “the grid of literary forms and conventions that came to shape the narrative imagination in precisely that period when the accumulation of carbon in the atmosphere was rewriting the destiny of the Earth”.

The exciting and frightening thing about Ghosh’s argument is how he traces the novel’s narrow compass back to popular and influential scientific ideas – ideas that championed uniform and gradual processes over cataclysms and catastrophes.

One big complaint about science – that it kills wonder – is the same criticism Ghosh levels at the novel: that it bequeaths us “a world of few surprises, fewer adventures, and no miracles at all”. Lawfulness in biology is rather like realism in fiction: it is a convention so useful that we forget that it is a convention.

But, if anthropogenic climate change and the gathering sixth mass extinction event have taught us anything, it is that the world is wilder than the laws we are used to would predict. Indeed, if the world really were in a novel – or even in a book of popular science – no one would believe it.

Beware the indeterminate momentum of the throbbing whole

Graham Harman (2nd from right) and fellow speculative materialists in 2007

 

In 1942, the Argentine writer Jorge Luis Borges cooked up an entirely fictitious “Chinese” encyclopedia entry for animals. Among its nonsensical subheadings were “Embalmed ones”, “Stray dogs”, “Those that are included in this classification” and “Those that, at a distance, resemble flies”.

Explaining why these categories make no practical sense is a useful and enjoyable intellectual exercise – so much so that in 1966 the French philosopher Michel Foucault wrote an entire book inspired by Borges’ notion. Les mots et les choses (The Order of Things) became one of the defining works of the French philosophical movement called structuralism.

How do we categorise the things we find in the world? In Immaterialism, his short and very sweet introduction to his own brand of philosophy, “object-oriented ontology”, the Cairo-based philosopher Graham Harman identifies two broad strategies. Sometimes we split things into their ingredients. (Since the Enlightenment, this has been the favoured and extremely successful strategy of most sciences.) Sometimes, however, it’s better to work in the opposite direction, defining things by their relations with other things. (This is the favoured method of historians and critics and other thinkers in the humanities.)

Why should scientists care about this second way of thinking? Often they don’t have to. Scientists are specialists. Reductionism – finding out what things are made of – is enough for them.

Naturally, there is no hard and fast rule to be made here, and some disciplines – the life sciences especially – can’t always be reducing things to their components.

So there have been attempts to bring this other, “emergentist” way of thinking into the sciences. One of the most ingenious was the “new materialism” of the German entrepreneur (and Karl Marx’s sidekick) Friedrich Engels. One of Engels’s favourite targets was the Linnaean system of biological classification. Rooted in formal logic, this taxonomy divides all living things into species and orders. It offers us a huge snapshot of the living world. It is tremendously useful. It is true. But it has limits. It cannot record how one species may, over time, give rise to some other, quite different species. (Engels had great fun with the duckbilled platypus, asking where that fitted into any rigid scheme of things.) Similarly, there is no “essence” hiding behind a cloud of steam, a puddle of water, or a block of ice. There are only structures, succeeding each other in response to changes in the local conditions. The world is not a ready-made thing: it is a complex interplay of processes, all of which are ebbing and flowing, coming into being and passing away.

So far so good. Applied to science, however, Engels’s schema turns out to be hardly more than a superior species of hand-waving. Indeed, “dialectical materialism” (as it later became known) proved so unwieldy, it took very few years of application before it became a blunt weapon in the hands of Stalinist philosophers, who used it to demotivate, discredit and disbar any scientific colleague whose politics they didn’t like.

Harman has learned the lessons of history well. Though he’s curious to know where his philosophy abuts scientific practice (and especially the study of evolution), he is prepared to accept that specialists know what they are doing: that rigour in a narrow field is a legitimate way of squeezing knowledge out of the world, and that a 126-page A-format paperback is probably not the place to reinvent the wheel.

What really agitates him, fills his pages, and drives him to some cracking one-liners (this is, heavens be praised, a *funny* book about philosophy) is the sheer lack of rigour to be found in his own sphere.

While pillorying scientists for treating objects as superficial compared with their tiniest pieces, philosophers in the humanities have for more than a century been leaping off the opposite cliff, treating objects “as needlessly deep or spooky hypotheses”. By claiming that an object is nothing but its relations or actions, they unknowingly repeat the argument of the ancient Megarians, “who claimed that no one is a house-builder unless they are currently building a house”. Harman is sick and tired of this intellectual fashion, by which “‘becoming’ is blessed as the trump card of innovators, while ‘being’ is cursed as a sad-sack regression to the archaic philosophies of olden times”.

Above all, Harman has had it with peers and colleagues who zoom out and away from every detailed question, until the very world they’re meant to be studying resembles “the indeterminate momentum of the throbbing whole” (and this is not a joke — this is the sincerely meant position statement of another philosopher, a friendly acquaintance of his, Jane Bennett).

So what’s Harman’s solution? Basically, he wants to be able to talk unapologetically about objects. He explores a single example: the history of the Dutch East India Company. Without toppling into the “great men” view of history – according to which a world of inanimate props is pushed about by a few arbitrarily privileged human agents – he is out to show that the company was an actual *thing*, a more-or-less stable phenomenon ripe for investigation, and not simply a rag-bag collection of “human practices”.

Does his philosophy describe the Dutch East India Company rigorously enough for his work to qualify as real knowledge? I think so. In fact I think he succeeds to a degree which will surprise, reassure and entertain the scientifically minded.

Be in no doubt: Harman is no turncoat. He does not want the humanities to be “more scientific”. He wants them to be less scientific, but no less rigorous, able to handle, with rigour and versatility, the vast and teeming world of things science cannot handle: “Hillary Clinton, the city of Odessa, Tolkien’s imaginary Rivendell… a severed limb, a mixed herd of zebras and wildebeest, the non-existent 2016 Chicago Summer Olympics, and the constellation of Scorpio”.

Immaterialism
Graham Harman
Polity, £9.99

The tomorrow person

You Belong to the Universe: Buckminster Fuller and the future by Jonathon Keats
reviewed for New Scientist, 11 June 2016.

 

IN 1927 the suicidal manager of a building materials company, Richard Buckminster (“Bucky”) Fuller, stood by the shores of Lake Michigan and decided he might as well live. A stern voice inside him intimated that his life after all had a purpose, “which could be fulfilled only by sharing his mind with the world”.

And share it he did, tirelessly for over half a century, with houses hung from masts, cars with inflatable wings, a brilliant and never-bettered equal-area map of the world, and concepts for massive open-access distance learning, domed cities and a new kind of playful, collaborative politics. The tsunami that Fuller’s wing flap set in motion is even now rolling over us, improving our future through degree shows, galleries, museums and (now and again) in the real world.

Indeed, Fuller’s “comprehensive anticipatory design scientists” are ten-a-penny these days. Until last year, they were being churned out like sausages by the design interactions department at the Royal College of Art, London. Futurological events dominate the agendas of venues across New York, from the Institute for Public Knowledge to the International Center of Photography. “Science Galleries”, too, are popping up like mushrooms after a spring rain, from London to Bangalore.

In You Belong to the Universe, Jonathon Keats, himself a critic, artist and self-styled “experimental philosopher”, looks hard into the mirror to find what of his difficult and sometimes pantaloonish hero may still be traced in the lineaments of your oh-so-modern “design futurist”.

Be in no doubt: Fuller deserves his visionary reputation. He grasped in his bones, as few have since, the dynamism of the universe. At the age of 21, Keats writes, “Bucky determined that the universe had no objects. Geometry described forces.”

A child of the aviation era, he used materials sparingly, focusing entirely on their tensile properties and on the way they stood up to wind and weather. He called this approach “doing more with less”. His light and sturdy geodesic dome became an icon of US ingenuity. He built one wherever his country sought influence, from India to Turkey to Japan.

Chapter by chapter, Keats asks how the future has served Fuller’s ideas on city planning, transport, architecture, education. It’s a risky scheme, because it invites you to set Fuller’s visions up simply to knock them down again with the big stick of hindsight. But Keats is far too canny for that trap. He puts his subject into context, works hard to establish what would and would not be reasonable for him to know and imagine, and explains why the history of built and manufactured things turned out the way it has, sometimes fulfilling, but more often thwarting, Fuller’s vision.

This ought to be a profoundly wrong-headed book, judging one man’s ideas against the entire recent history of Spaceship Earth (another of Fuller’s provocations). But You Belong to the Universe says more about Fuller and his future in a few pages than some whole biographies, and renews one’s interest – if not faith – in all those graduate design shows.

Seven nature-writing books that capture the spirit of animals


 

It is often impossible to separate how animals behave “wild” from how they behave around humans. Coyotes are a startling example. Once limited to a small range in the south-west US, they responded to eradication campaigns by expanding across all of North America and adopting their would-be nemesis’s habits to an almost ludicrous degree: coyotes have even been seen catching buses.
for New Scientist, 18 May 2016

Is boredom good for us?

Sandi Mann’s The Upside of Downtime and Felt Time: The psychology of how we perceive time by Marc Wittmann, reviewed for New Scientist, 13 April 2016.

 

VISITORS to New York’s Museum of Modern Art in 2010 got to meet time, face-to-face. For her show The Artist is Present, Marina Abramovic sat, motionless, for 7.5 hours at a stretch while visitors wandered past her.

Unlike all the other art on show, she hadn’t “dropped out” of time: this was no cold, unbreathing sculpture. Neither was she time’s plaything, as she surely would have been had some task engaged her. Instead, Marc Wittmann, a psychologist based in Freiburg, Germany, reckons that Abramovic became time.

Wittmann’s book Felt Time explains how we experience time, posit it and remember it, all in the same moment. We access the future and the past through the 3-second chink that constitutes our experience of the present. Beyond this interval, metronome beats lose their rhythm and words fall apart in the ear.

As unhurried and efficient as an ophthalmologist arriving at a prescription by placing different lenses before the eye, Wittmann reveals, chapter by chapter, how our view through that 3-second chink is shaped by anxiety, age, boredom, appetite and feeling.

Unfortunately, his approach smacks of the textbook, and his attempt at a “new solution to the mind-body problem” is a mess. However, his literary allusions – from Thomas Mann’s study of habituation in The Magic Mountain to Sten Nadolny’s evocation of the present moment in The Discovery of Slowness – offer real insight. Indeed, they are an education in themselves for anyone with an Amazon “buy” button to hand.

As we read Felt Time, do we gain most by mulling Wittmann’s words, even if some allusions are unfamiliar? Or are we better off chasing down his references on the internet? Which is the more interesting option? Or rather: which is “less boring”?

Sandi Mann’s The Upside of Downtime is also about time, inasmuch as it is about boredom.

Once we delighted in devices that put all knowledge and culture into our pockets. But our means of obtaining stimulation have become so routine that they have themselves become a source of boredom. By removing the tedium of waiting, says psychologist Mann, we have turned ourselves into sensation junkies. It’s hard for us to pay attention to a task when more exciting stimuli are on offer, and being exposed to even subtle distractions can make us feel more bored.

Sadly, Mann’s book demonstrates the point all too well. It is a design horror: a mess of boxed-out paragraphs and bullet-pointed lists. Each is entertaining in itself, yet together they render Mann’s central argument less and less engaging, for exactly the reasons she has identified. Reading her is like watching a magician take a bullet to the head while “performing” Russian roulette.

In the end Mann can’t decide whether boredom is a good or bad thing, while Wittmann’s more organised approach gives him the confidence he needs to walk off a cliff as he tries to use the brain alone to account for consciousness. But despite the flaws, Wittmann is insightful and Mann is engaging, and, praise be, there’s always next time.