Venice in interesting times

Visiting May You Live In Interesting Times, the 58th International Art Exhibition at the Venice Biennale, for New Scientist, 16 May 2019

BETWEEN now and 24 November, half a million people will visit May You Live in Interesting Times, the main art exhibition of the Venice Biennale. More than 120 years old, the Biennale is the world’s biggest and most venerable art fair. This year’s offering overflows its historical venue in the gardens on Venice’s eastern edge and sprawls across the city.

In a 300-metre-long former rope-making factory in Venice’s Arsenale, a complex of former shipyards and armouries, it is hard to miss data-verse 1 by Japanese DJ and data artist Ryoji Ikeda: the first instalment of a year-long project to realise an entire universe on a gigantic, wall-sized high-definition screen.

Back in Paris, in a studio that consists of hardly more than a few tables and laptops, Ikeda and his programmers have been peeling open huge data sets, using software they have written themselves. From the flood of numbers issuing from CERN, NASA, the Human Genome Project and other open sources, they have fashioned highly detailed abstract animations.

Ikeda is self-taught. He came to visual art from making animations to accompany DJ sets in the squats, clubs and underground parties of Kyoto, Japan. While his own musical taste was eclectic in the extreme, “from classical to voodoo”, Ikeda was drawn to house and dub: forms in which he says “the sound system is the real subject, not the music being played”.

His own “music” reduces sound to sine waves and impulses – and the animations to accompany his sets are equally minimal. “If the sine wave is the simplest expression of sound, what’s the simplest expression of light? For the scientist, that’s a complicated question, but for the artist, the answer is simple: it’s the pixel,” he says.

Ikeda’s project to reduce the world to its essentials continues: “I wondered what would happen if matter were reduced the same way.” Now Ikeda has turned himself into one of art’s curious beasts, the pure “data artist”.

Each of data-verse 1's 15-minute-long abstract "dances" explores the universe at a different scale, from the way proteins fold to the pattern of ripples in the cosmic background radiation. However, Ikeda's aim is not to illustrate or visualise the universe, but to convey the sheer quantity of data we are now gathering in our effort to understand the world.

In the Arsenale, there are glimpses of this new nature. The Milky Way, reduced to wheeling labels. The human body, taken apart and presented as a sequence of what look like archaeological finds. A brain, colour-coded, turned over and over, as if for the inspection of a hyperactive child. A furious blizzard of solar images. And other less-easily identified sequences, where the information has peeled away entirely from the thing it represents, and takes on a life of its own: red pixels move upstream through flowing numbers like so many salmon.

Ikeda differs from his fellow data artists. While a generation has embraced and made art from “big data” – the kind of dynamic information flow that derives from recording a constantly changing world – Ikeda remains wedded to an earlier, more philosophical definition of data as the record of observed facts. Chaos and complexity for their own sake do not interest him. “I never use dynamic data in my work,” he says.

He did try, once. In 2014, he won a residency at the Large Hadron Collider at CERN, Switzerland. But he found the data overwhelming. “They have supercomputers and one experiment takes two years to analyse and compute,” he says, “and still it’s not really enough. They proposed I use this dynamic data, but how could one single artist handle this? We talk of ‘big data’ but no one imagines really how big it is.”

So Ikeda’s data-verse 1 project, which will take a year and two more productions to reach fruition, is founded on that most old-fashioned of ideas, a record of objective truth. It is neither easy nor cheap to realise, and is being supported by watch-makers Audemars Piguet, an increasingly powerful patron of artists who operate on the boundaries between art and science.

Last year, the firm helped Brighton-based art duo Semiconductor realise their CERN-inspired kinetic sculpture HALO. Before that, it invited lidar artist Quayola to map the Swiss valley where it has its factory.

While Audemars Piguet has an interest in art that pushes technological boundaries, Ikeda fights shy of talk of technology, or even physics. He is interested in the truth bound up in numbers themselves. In an interview with Japanese art critic Akira Asada in 2009, he remarked: “I cannot help but wonder if there are any artists today that give real consideration to beauty. To me, it is mathematicians, not artists, who epitomise that kind of individual. There is such a freeness to their thinking that it is almost embarrassing to me.”

Other highlights at the Arsenale include Dominique Gonzalez-Foerster's Endodrome, a purely virtual work, accessed through an HTC Vive Pro headset. The artist envisioned it "as a kind of organic and mental space, a slightly altered state of consciousness". Manifesting at first as a sort of hyper-intuitive painting app, in which you use your own outpoured breath as a brush, Endodrome's imagery becomes ever more precise and surreal. In a show that bristles with anxiety, Gonzalez-Foerster offers the festival-goer an oasis of creative contemplation.

Also at the Arsenale, and fresh from her show Power Plants at London's Serpentine Gallery, the German artist Hito Steyerl presents This Is the Future, a lush, AI-generated garden of the future, all the more tantalising for the fact that you'll probably die there. Indeed, this being the future, you're sure to die there. Steyerl mixes up time and risk, hope and fear, in a wonderfully sly send-up of professional future-gazing.

The Giardini, along the city's eastern edge, have been the traditional site of the Biennale's art exhibitions since they began in 1895. They're where you'll find the national pavilions. Hungary possesses one of the 29 permanent structures here, and this year it's full of imaginary cameras. They're the work of cartoonist turned media artist Tamás Waliczky. Some of his Imaginary Cameras and Other Optical Devices are based on real cameras, others on long-forgotten 19th-century machines; still others are entirely fictional (not to mention impossible). Can you tell the difference? In any event, this understated show does a fine job of reminding us that we see the world in many, highly selective ways.

There's quite as much activity outside the official venues of the Biennale as within them. At the Ca' Rezzonico palazzo until 6 July, you have a chance to save an internationally celebrated artist from drowning (or not – it's really up to you). A meticulously rendered volumetric avatar of Marina Abramović beckons from within a glass tank that is slowly filling with water, in a bid to draw attention to rising sea levels in a city that is famously sinking. Don't knock Rising till you've tried it: this ludicrous-sounding jape proves oddly moving.

Back at the Arsenale, Ed Atkins reprises his installation Olde Food, which had its UK outing at London's Cabinet gallery last year. Atkins has spent much of his career exploring what roboticist Masahiro Mori famously dubbed the "uncanny valley" — the gap that is supposed to separate real people from their human-like creations. Mori's assumption was that the closer our inventions came to resembling us, the creepier they would become.

Using commercially purchased avatars which he animates using facial recognition software, Atkins has created his share of creepy art zombies. In Olde Food, though, he introduces a new element: an almost unbearably intense compassion.

Atkins has created a world populated by uncanny digital avatars who (when they’re not falling from the sky into sandwiches — you’ll have to trust me when I say this does make a sort of sense) quite clearly yearn for the impress of genuine humanity. These near-people pray. They play piano (or try to). They weep. They’re ugly. They’re uncoordinated. They’re quite hopeless, really. I do wish I could have done something for them.

In the realm of mind games

By the end of the show, I was left less impressed by artificial intelligence and more depressed that it had reduced my human worth to base matter. Had it, though? Or had it simply made me aware of how much I wanted to be base matter, shaped into being by something greater than myself? I was reminded of something that Benjamin Bratton, author of the cyber-bible The Stack, said in a recent lecture: “We seem only to be able to approach AI theologically.”

Visiting AI: More Than Human at London’s Barbican Centre for the Financial Times, 15 May 2019.

Planck comes to Marvel’s rescue

Watching Anthony and Joe Russo’s Avengers: Endgame for New Scientist, 15 May 2019

AFTER a spectacular false start, the heroes of Anthony and Joe Russo's Avengers: Endgame gather around a cobbled-together time machine. They're out to stop Thanos, a supervillain whose solution to the universe's resource depletion problem is to annihilate half of all life.

Stopping Thanos will not be easy, since the film — the capstone on 21 other interconnected movies in the Marvel cinematic universe — opens with Thanos having already achieved his goal. Many of our favourite characters are already dead. Given that vases do not unbreak themselves, how then will the surviving Avengers bring half the world back to life?

Revisiting and resetting past narratives is a necessity for long-running drama franchises. And as the deceased Bobby Ewing discovered when he stepped out of his shower in 1986, erasing a whole season of Dallas's soapy story arc, it can be a hard pill for viewers to swallow.

You'd think science fiction franchises would have an easier time of it, armed as they are with all manner of P. T. Barnum tricks, but the truth's more complicated. The world of the X-Men draws to a close this year with two films, Dark Phoenix and The New Mutants. The franchise's constant, piecemeal reinventions have been sloppy, but only so as to stay halfway faithful to their even sloppier comic-book sources. On the plus side, we've had the passage of time, and the price paid for wisdom, brought to life by the unaging, unkillable, and ever more excruciatingly lonely figure of Wolverine, played by Hugh Jackman.

From the always mindbending Doctor Who to the unforgettably weird final seasons of the Battlestar Galactica retread, it’s clear that you can tell truths about time, age, mortality, loss and regret in playful ways without ever opening a science textbook, and I wish to heaven someone had pointed this out to Star Trek, notorious for being the franchise where overblown popular science goes to die.

Since The Next Generation, Star Trek has saddled itself with a science bible that almost makes sense. And why not? Einstein’s equations do allow for the existence of time machines. And physicist Kip Thorne’s work in the 1980s on time-space wormholes does allow for the transmission of information through time. But hang on a minute: time machines aren’t practical, and the kind of messages you can actually send from the future aren’t ever going to be interesting, and the more you cite real science, the more you leave yourself open to people who begin sentences with phrases like “Yes but…” and “I think you’ll find…”

Avengers: Endgame’s hokey solution to time travel works far better, I reckon, by colliding two chunks of utter nonsense at high narrative speed. Take one master thief, Scott Lang (played by the always affable Paul Rudd), give him a suit that lets him shrink small enough to enter “the quantum realm”, point out (correctly) that at this scale time and space cease to mean very much, and hey presto, you have yourself a time machine powered entirely by jazz-hands and flim-flam. Smart-alec viewers can’t contradict the science, because there is no science here, and hasn’t been since 1899.

This was the year German theoretical physicist Max Planck evolved a model of the physical universe that relied upon ratios (which are timeless and universally true) rather than measurements (which depend upon who's making the ruler). In the universe Planck drew up, the speed of light, the gravitational constant and his own quantum of action all have a value of 1. From this fiendish piece of dimensional analysis, you can work out the shortest distance imaginable — the point at which the terms "here" and "there" cease to have meaning.

In a space smaller than the Planck length squared, information cannot exist — which is why a single photon entering a black hole increases the area of the event horizon by 10⁻⁶⁶ cm². As Ant-Man, understandably, did not say.
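The arithmetic behind that figure is easy to check. A minimal sketch (mine, not Planck's), using the modern CODATA values of the three constants, recovers the Planck length and the corresponding Planck area in square centimetres:

```python
import math

# CODATA values for the fundamental constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light in vacuum, m/s

def planck_length_m() -> float:
    """Planck length sqrt(hbar*G/c^3): the shortest meaningful distance."""
    return math.sqrt(HBAR * G / C**3)

# The Planck area, converted from m^2 to cm^2 (1 m = 100 cm)
planck_area_cm2 = (planck_length_m() * 100.0) ** 2

print(f"Planck length ~ {planck_length_m():.3e} m")    # about 1.616e-35 m
print(f"Planck area   ~ {planck_area_cm2:.1e} cm^2")   # about 2.6e-66 cm^2
```

The squared length comes out at a few times 10⁻⁶⁶ cm² — the order of magnitude by which, in Bekenstein's argument, one photon grows a black hole's event horizon.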

Asking for it

Capital is not the point here. Neither is capitalism. The point is our relationship with information. Amazon’s algorithms are sucking all the localism out of the retail system, to the point where whole high streets have vanished — and entire communities with them. Amazon is in part powered by the fatuous metricisation of social variety through systems of scores, rankings, likes, stars and grades, which are (not coincidentally) the methods by which social media structures — from clownish Twitter to China’s Orwellian Social Credit System — turn qualitative differences into quantitative inequalities.

Reading The Metric Society: On the Quantification of the Social by Steffen Mau (Polity Press) for the Times Literary Supplement, 30 April 2019 

“The English expedition of 1919 is to blame for this whole misery”

‘The English expedition of 1919 is ultimately to blame for this whole misery, by which the general masses seized possession of me,’ Einstein once remarked. Charlie Chaplin understood his appeal: ‘They cheer me because they all understand me,’ he remarked, accompanying the theoretical physicist to a film premiere, ‘and they cheer you because no one understands you.’

Four books to celebrate the centenary of Eddington's 1919 eclipse observations. For The Spectator, 11 May 2019.

 

A series of apparently impossible events

Exploring Smoke and Mirrors at Wellcome Collection for New Scientist, 1 May 2019

ACCORDING to John Nevil Maskelyne, “a bad conjurer will make a good medium any day”. He meant that, as a stage magician in 19th-century London, he had to produce successful effects night after night, while rivals who claimed their illusions were powered by the spirit world could simply blame a bad set on “unhelpful spirits”, or even on the audience’s own scepticism.

A gaffe-ridden performance in the UK by one set of spiritualists, the US Davenport Brothers, drove Maskelyne to invent his own act. With his friend, the cabinet maker George Alfred Cooke, he created an “anti-spiritualist” entertainment, at once replicating and debunking the spiritualist movement’s stock-in-trade effects.

Matthew Tompkins teases out the historical implications of Maskelyne’s story in The Spectacle of Illusion: Magic, the paranormal and the complicity of the mind (Thames & Hudson). It is a lavishly illustrated history to accompany Smoke and Mirrors, a new and intriguing exhibition at the Wellcome Collection in London.

Historical accident played its part in spiritualism's appeal. In 1895, Guglielmo Marconi sent long-wave radio signals over a distance of a couple of kilometres, and, for decades after, hardly a year passed in which some researcher didn't announce a new type of invisible ray. The world turned out to have aspects hidden from unaided human perception. Was it so unreasonable of people to speculate about what, or who, might lurk in those hidden corners of reality? Were they really so gullible when, reeling from the mass killings of the first world war, they populated these invisible realms with their dead?

In 1924, the magazine Scientific American offered $2500 to any medium who could demonstrate their powers under scientific controls. The medium Mina “Margery” Crandon decided to try her hand, but she reckoned without the efforts of one Harry “Handcuff” Houdini, who eventually exposed her as a fraud.

Yet spiritualism persisted, shading off into parapsychology, quantum speculation and any number of cults. Understanding why is more the purview of a psychologist such as Gustav Kuhn, who, as well as being a major contributor to the show, offers insight into magic and magical belief in his own new book, Experiencing the Impossible (MIT Press).

Kuhn, a member of the Magic Circle, finds Maskelyne’s “anti-spiritualist” form of stage magic alive in the hands of illusionist Derren Brown. He suggests that Brown is more of a traditional magician than he lets on, dismissing the occult while he endorses mysterious psychological phenomena, mostly to do with “subconscious priming”, that, at root, are non-scientific.

Kuhn defines magic as "the experience of wonder that results from perceiving an apparently impossible event". Definitions of what is impossible differ, and different illusions work for different people. You can even design magic for animals, as a torrent of YouTube videos, based largely on Finnish magician Jose Ahonen's "Magic for Dogs", attests.

Tricking dogs is one thing, but why do our minds fall for magic? It was the 18th-century Scottish Enlightenment philosopher David Hume who argued that there is no metaphysical glue binding events, and that we only ever infer causal relationships, be they real or illusory.

Twinned with our susceptibility to wrongly infer relationships between events in the world is our ability to fool ourselves at an even deeper level. Numerous studies, including one by researcher and former magician Jay Olson and clinician Amir Raz which sits at the exit to the Wellcome show, conclude that our feeling of free will may be an essential trick of the mind.

Inferring connections makes us confident in ourselves and our abilities, and it is this confidence, this necessary delusion about the brilliance of our cognitive abilities, that lets us function… and be tricked. Even after reading both books, I defy you to see through the illusions and wonders in store at the exhibition.

Writing (or, How the dead lord it over the living)

Visiting Writing: Making Your Mark, an exhibition at the British Library, for New Scientist, 26 April 2019

Writing is dark magic. Because the written, or even better, carved, word can effortlessly outlive the human span, it enables the dead to lord it over the living.

There are advantages to this, of course. It’s handy not to have to reinvent the wheel generation after generation.

But let’s be clear who wields the power here – much as the ancient Egyptians, who used to channel the divine power of words into spells that would animate carved servants, or shabti, ready to do their bidding after their death. “Here I am,” reads the inscription on one poor put-upon shabti, ready “when called to work, cultivate fields or irrigate the riverbanks.”

Poetry be damned: writing is first and foremost about control.

This is very apparent in a new exhibition at the British Library, London, called Writing: Making Your Mark. It's been launched to celebrate a technology that's a bit under five millennia old, so you'll find everything from carved stone slabs to the first ever use of an italic typeface, to (my favourite) an eye-wateringly vituperative telegram (in four parts) from the 20th-century British playwright John Osborne to a hostile critic.

It’s comprehensive, thoughtful and eye-catching, with a design that has you wandering through what looks like some peculiar 3D cuneiform from the future. Best of all, the show makes narrative sense: we learn how various writing and printing forms evolved independently at different times and places, to fulfil changing social and cultural functions.

Granted, the story does not and cannot start with much of a bang. As the wall information concedes, the act of writing is just a recreational by-product of accounting. The first written records were tallies, calendars and contracts. Set aside their great age, and the earliest objects in the exhibition (among them the oldest in the Library's collection, an Egyptian stela, or carved stone, from around 1600 BC) make for dull reading.

But amazingly early, suspicion, and even downright hatred, of the written word crept in – to run like a secret history beneath the course of Western culture. In the dialogue Phaedrus, composed around 370 BC, the ancient Greek philosopher Socrates complains that writing things down will “create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves… they will be hearers of many things and will have learned nothing”.

Plato, Socrates’ pupil, listened to his master’s diatribe intently. Indeed, he took down every word. Plato’s obtuse disobedience has paid huge dividends. For one thing, it means that Socrates’ wisdom is available to us all. Millennia hence, we are still reading Phaedrus, and smiling at the quaint bits.

But a few of us (we meet in dank basement rooms: check your pens and smartphones at the door) agree with Socrates. We reckon that putting pen (stylus, chisel or moveable type…) to paper (stone, slate, clay, or peeled bark) has set the lot of us on the road to everlasting perdition.

Our current all-too-well founded panics around trust, authority, truth and fake news feed the gloomy suspicion that the written word makes us lazy and shallow, that for all our modern, information-driven wonders, our space rockets and our antibiotics, it makes us less than we might be: a people earnestly conversing with themselves.

Writing: Making Your Mark does its best to win us round to the cause of literacy and preserved thought.

Who knew that the story of written forms would prove so epic? Or, indeed, so touching? There’s a sandstone sphinx sporting a prototype letter “A”, and a Greek child’s second-century homework scratched, laboriously, on a clay tablet.

But with their final room, about the future of writing, I feel the curators may finally have woken to doubt. A black box, and virtually empty, this space asks whether new media may undercut our surprisingly resilient written culture.

I’m surprised the curators’ confidence should have been so shaken. After all, written and printed forms continue to proliferate: emoji have provided us with a whole new writing system to combine with our alphabetic language. Instagram, once the home of unadorned selfie snaps, now wobbles and sparkles with photos smothered in animated annotations and one-liners in a form that’s so new it hasn’t really got a name yet. Writing continues to be one of our most plastic and fast-changing forms of self-expression.

Though with each innovation, we retreat, chattering, ever further from Socrates’ dinner party ideal of society driven by good conversation.

Stanley Kubrick at the Design Museum

The celebrated film director Stanley Kubrick never took the future for granted. In films as diverse as Dr. Strangelove: or, how I learned to stop worrying and love the bomb (1964) and A Clockwork Orange (1971), Kubrick’s focus was always savagely humane, unpicking the way the places we inhabit make us think and feel. At the opening of a new exhibition at the London Design Museum in Holland Park, David Stock and I spoke to co-curator Adriënne Groen about Kubrick’s most scientifically inflected film, 2001: A Space Odyssey (1968), and how Kubrick masterminded a global effort to imagine one possible future: part technological utopia, part sterile limbo, and, more than 50 years since its release, as gripping as hell.
How Stanley Kubrick‘s collaboration with science fiction writer Arthur C. Clarke led to 2001 is well known. “The ‘really good’ science-fiction movie is a great many years overdue,” Clarke enthused, as the men began their work on a project with the working title Journey Beyond the Stars.

For those who want a broader understanding of how Kubrick gathered, enthused and sometimes (let’s be brutally frank, here) exploited the visionary talent available to him, The Design Museum’s current exhibition is essential viewing. There are prototypes of the pornographic furniture from the opening dolly shot of A Clockwork Orange, inspired by the work of artist Allen Jones but fashioned by assistant production designer Liz Moore when Jones decided not to hitch his cart – and reputation – to Kubrick’s controversial vision.

But it’s the names that recur again and again, from film to film, over decades of creative endeavour, that draw one in. The costume designer Milena Canonero was a Kubrick regular and, far from being swamped, immeasurably enriched Kubrick’s vision. (There’s a wonderful production photograph here of actor Malcolm McDowell trying on some of her differently styled droog hats.)

Kubrick was fascinated by the way people respond to being regimented – by the architectural brutalism of the Thamesmead estate in A Clockwork Orange, or by a savage gunnery sergeant in Full Metal Jacket, or by their own fetishism in Eyes Wide Shut. Kubrick’s fascination with how people think and behave is well served by this show, which will give anyone of a psychological bent much food for thought.

 

Choose-your-own adventure

Reading The Importance of Small Decisions by Michael O’Brien, R. Alexander Bentley and William Brock for New Scientist, 13 April 2019

What if you could map all kinds of human decision-making and use it to chart society’s evolution?

This is what academics Michael O'Brien, Alexander Bentley and William Brock try to do in The Importance of Small Decisions. It is an attempt to expand on a 2014 paper, "Mapping collective behavior in the big-data era", that they wrote in Behavioral and Brain Sciences. While contriving to be somehow both too short and rambling, it bites off more than it can chew, nearly chokes to death on the ins and outs of group selection, and coughs up its best ideas in the last 40 pages.

Draw a graph. The horizontal axis maps decisions according to how socially influenced they are. The vertical axis tells you how clear the costs and pay-offs are for each decision. Rational choices sit in the north-western quadrant of the map. To the north-east, bearded capuchins teach each other how to break into palm nuts in a charming example of social learning. Twitter storms generated by fake news swirl about the south-east.

The more choices you face, the greater the cognitive load. The authors cite economist Eric Beinhocker, who in The Origin of Wealth calculated that human choices had multiplied a hundred million-fold in the past 10,000 years. Small and insignificant decisions now consume us.

Worse, costs and pay-offs are increasingly hidden in an ocean of informational white noise, so that it is easier to follow a trend than find an expert. “Why worry about the underlying causes of global warming when we can see what tens of millions of our closest friends think?” ask the authors, building to a fine, satirical climax.

In an effort to communicate widely, the authors have, I think, left out a few too many details from their original paper. And a mid-period novel by Philip K. Dick would paint a more visceral picture of a world created by too much information. Still, there is much fun to be had reading the garrulous banter of these three extremely smart academics.

Come on, Baggy, get with the beat!

Reading The Evolving Animal Orchestra: In search of what makes us musical by Henkjan Honing for New Scientist, 6 April 2019

“The perception, if not the enjoyment, of musical cadences and of rhythm,” wrote Darwin in his 1871 book The Descent of Man, “is probably common to all animals.”

Henkjan Honing has tested this eminently reasonable idea, and in his book, The Evolving Animal Orchestra, he reports back. He details his disappointment, frustration and downright failure with such wit, humility and a love of the chase that any young person reading it will surely want to run away to become a cognitive scientist.

No culture has yet been found that doesn’t have music, and all music shares certain universal characteristics: melodies composed of seven or fewer discrete pitches; a regular beat; a limited sequence of rhythmic patterns. All this would suggest a biological basis for musicality.

A bird flies with regular beats of its wings. Animals walk with a particular rhythm. So you might expect beat perception to be present in everything that doesn’t want to falter when moving. But it isn’t. Honing describes experiments that demonstrate conclusively that we are the only primates with a sense of rhythm, possibly deriving from advanced beat perception.

Only strongly social animals, he writes, from songbirds and parrots to elephants and humans, have beat perception. What if musicality was acquired by all prosocial species through a process of convergent evolution? Like some other cognitive scientists, Honing now wonders whether language might derive from music, in a similar way to how reading uses much older neural structures that recognise contrast and sharp corners.

Honing must now test this exciting hypothesis. And if The Evolving Animal Orchestra is how he responds to disappointment, I can’t wait to see what he makes of success.