Ushering in the End Times at London’s Barbican Hall


Mark Allan / Barbican

Listening to the London Contemporary Orchestra for New Scientist, 1 November 2018

On All Hallows’ Eve this year, at London’s Barbican Hall, the London Contemporary Orchestra, under the baton of their co-artistic director Robert Ames, managed with two symphonic pieces to drown the world and set it ablaze in the space of a single evening.

Giacinto Scelsi’s portentously titled Uaxuctum: The legend of the Maya City, destroyed by the Maya people themselves for religious reasons, evoked the mysterious and violent collapse of that once thriving civilisation; the second piece of the evening, composer and climate activist John Luther Adams’s Become Ocean, looked to the future, the rise of the world’s oceans, and good riddance to the lot of us.

Lost Worlds was a typical piece of LCO programming: not content with presenting two very beautiful but undeniably challenging long-ish works, the orchestra had elected to play behind a translucent screen onto which were projected the digital meanderings of an artistically trained neural net. Skeins of entoptic colour twisted and cavorted around the half-seen musicians while a well-placed spotlight, directly over Ames’s head, sent the conductor’s gestures sprawling across the screen, as though ink were being dashed over all those pretty digitally generated splotches of colour.

Everything, on paper, pointed to an evening that was trying far too hard to be avant-garde. In the execution, however, the occasion was a triumph.

The idea of matching colours to sounds is not new. The painter Wassily Kandinsky struggled for years to fuse sound and image and ended up inventing abstract painting, more or less as a by-product. The composer Alexander Scriabin was so desperate to establish his reputation as the founder of a new art of colour-music that he plagiarised other people’s synaesthetic experiences in his writings and invented a clavier à lumières (“keyboard with lights”) for use in his work Prometheus: Poem of Fire. “It is not likely that Scriabin’s experiment will be repeated by other composers,” wrote a reviewer for The Nation after its premiere in New York in 1915: “moving-picture shows offer much better opportunities.” (Walt Disney proved The Nation right: Fantasia was released in 1940.)

Now, as 2018 draws to a close, artificial intelligence is being hurled at the problem. For this occasion the London-based theatrical production company Universal Assembly Unit had got hold of a recursive neural net engineered by Artrendex, a company that uses artificial intelligence to research and predict the art market. According to the concert’s programme note, it took several months to train Artrendex’s algorithm on videos of floods and fires, teaching it the aesthetics of these phenomena so that, come the evening of the performance, it would construct organic imagery in response to the music.


Mark Allan / Barbican

While never obscuring the orchestra, the light show was dramatic and powerful, sometimes evoking (for those who enjoy their Andrei Tarkovsky) the blurriness of the clouds swamping the ocean planet Solaris in the movie of that name; at other moments weaving and flickering, not so much like flames, but more like speeded-up footage from some microbial experiment. Maybe I’ve worked at New Scientist too long, but I got the distinct and discomforting impression that I was looking, not at some dreamy visual evocation of a musical mood, but at the responses of single-celled life to desperate changes in their tiny environment.

As for the music – which was, after all, the main draw for this evening – it is fair to say that Scelsi’s Uaxuctum would not be everyone’s cup of tea. For a quick steer, recall the waily bits from 2001: A Space Odyssey. That music was by the Hungarian composer György Ligeti, who was born about two decades after Scelsi, and was — both musically and personally — a lot less weird. Scelsi was a Parisian dandy who spent years in a mental institution playing one piano note again and again; Uaxuctum, composed in 1966, was such an incomprehensibly weird and difficult proposition that it went unperformed for 21 years, and had never received a UK performance before this one.

John Luther Adams’s Become Ocean (2013) is an easier (and more often performed) composition – The New Yorker music critic Alex Ross called it “the loveliest apocalypse in musical history”. This evening its welling sonorities brought hearts into mouths: rarely has mounting anxiety come wrapped in so beautiful a package.

So I hope it takes nothing away from the LCO’s brave and accomplished playing to say that the visual component was the evening’s greatest triumph. The dream of “colour music” has ended in bathos and silliness for so many brilliant and ambitious musicians. Now, with the judicious application of some basic neural networking, we may at last be on the brink of fusing tone and colour into an art that’s genuinely new, and undeniably beautiful.

Pierre Huyghe: Digital canvases and mind-reading machines

Visiting UUmwelt, Pierre Huyghe’s show at London’s Serpentine Gallery, for the Financial Times, 4 October 2018

On paper, Pierre Huyghe’s new exhibition at the Serpentine Gallery in London is a rather Spartan effort. Gone are the fictional characters, the films, the drawings; the collaborative manga flim-flam of No Ghost Just a Shell; the nested, we’re-not-in-Kansas-any-more fictions, meta-fictions and crypto-documentaries of Streamside Day Follies. In place of Huyghe’s usual stage blarney come five large LED screens. Each displays a picture that, as we watch, shivers through countless mutations, teetering between snapshot clarity and monumental abstraction. One display is meaty; another, vaguely nautical. A third occupies a discomforting interzone between elephant and milk bottle.

Huyghe has not abandoned all his old habits. There are smells (suggesting animal and machine worlds), sounds (derived from brain-scan data, but which sound oddly domestic: was that not a knife-drawer being tidied?) and a great many flies. Their random movements cause the five monumental screens to pause and stutter, and this is a canny move, because without that arbitrary grammar, Huyghe’s barrage of visual transformations would overwhelm us, rather than excite us. There is, in short, more going on here than meets the eye. But that, of course, is true of everywhere: the show’s title nods to the notion of “Umwelt” coined by the zoologist Jakob von Uexküll in 1909, when he proposed that the significant world of an animal was the sum of things to which it responds, the rest going by virtually unnoticed. Huyghe’s speculations about machine intelligence bring this story up to date.

That UUmwelt turns out to be a show of great beauty as well; that the gallery-goer emerges from this most abstruse of high-tech shows with a re-invigorated appetite for the arch-traditional business of putting paint on canvas; that the gallery-goer does all the work, yet leaves feeling exhilarated, not exploited — all this is going to require some explanation.

To begin at the beginning, then: Yukiyasu Kamitani, who works at Kyoto University in Japan, made headlines in 2012 when he fed the data from fMRI brain scans of sleeping subjects into neural networks. These computer systems eventually succeeded in capturing shadowy images of his volunteers’ dreams. Since then his lab has been teaching computers to see inside people’s heads. It’s not there yet, but there are interesting blossoms to be plucked along the way.

UUmwelt is one of these blossoms. A recursive neural net has been shown about a million pictures, alongside accompanying fMRI data gathered from a human observer. Next, the neural net has been handed some raw fMRI data, and told to recreate the picture the volunteer was looking at.
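For readers who like to see the mechanics spelled out, the basic recipe can be sketched in a few lines of code. This is only a toy illustration under assumed dimensions and with made-up data, not the Kamitani Lab’s actual pipeline: learn a mapping from brain activity to image features on paired examples, then apply it to fMRI data alone and hand the guessed features to a generator for rendering.

```python
# Toy sketch of fMRI-to-image decoding (hypothetical shapes and data throughout).
import torch
import torch.nn as nn

N_VOXELS, N_IMAGE_FEATURES = 4000, 512  # assumed sizes, for illustration only

# Stand-in "paired" training set: brain responses plus features of the
# pictures the (imaginary) volunteer was looking at.
fmri = torch.randn(1000, N_VOXELS)
image_features = torch.randn(1000, N_IMAGE_FEATURES)

# A small decoder that predicts image features from brain activity.
decoder = nn.Sequential(
    nn.Linear(N_VOXELS, 1024),
    nn.ReLU(),
    nn.Linear(1024, N_IMAGE_FEATURES),
)
optimiser = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training: fit the decoder on the paired examples.
for epoch in range(20):
    optimiser.zero_grad()
    loss = loss_fn(decoder(fmri), image_features)
    loss.backward()
    optimiser.step()

# Inference: a raw scan with no picture attached. The predicted features
# would then be passed to a generative model to render the guessed image.
new_scan = torch.randn(1, N_VOXELS)
guessed_features = decoder(new_scan)
```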

Huyghe has turned the ensuing, abstruse struggles of the Kamitani Lab’s unthinking neural net into an exhibition quite as dramatic as anything he has ever made. Only, this time, the theatrics are taking place almost entirely in our own heads. What are we looking at here? A bottle. No, an elephant, no, a Francis Bacon screaming pig, goose, skyscraper, mixer tap, steam train mole dog bat’s wing…

The closer we look, the more engaged we become, the less we are able to describe what we are seeing. (This is literally true, in fact, since visual recognition works just that little bit faster than linguistic processing.) So, as we watch these digital canvases, we are drawn into dreamlike, timeless lucidity: a state of concentration without conscious effort that sports psychologists like to call “flow”. (How the Serpentine will ever clear the gallery at the end of the day I have no idea: I for one was transfixed.)

UUmwelt, far from being a show about how machines will make artists redundant, turns out to be a machine for teaching the rest of us how to read and truly appreciate the things artists make. It exercises and strengthens that bit of us that looks beyond the normative content of images and tries to make sense of them through the study of volume, colour, light, line, and texture. Students of Mondrian, Dufy and Bacon, in particular, will lap up this show.

Remember those science-fictional devices and medicines that provide hits of concentrated education? Quantum physics in one injection! Civics in a pill! I think Huyghe may have come closer than anyone to making this silly dream a solid and compelling reality. His machines are teaching us how to read pictures, and they’re doing a good job of it, too.