William De Morgan was something of a liability. He once used a fireplace as a makeshift kiln and set fire to his rented London home. As a businessman he was a disaster. The prices he charged for his tiles and ceramics hardly even paid for the materials, never mind his time. And at the turn of the 20th century, when serious financial problems loomed, only a man of De Morgan’s impractical stripe would resort to writing fiction…
Leeuwarden-Fryslân, one of the less populated parts of the Netherlands, has been designated this year’s European Capital of Culture. It’s a hub of social, technological and cultural innovation, and yet hardly anyone has heard of the place. It makes batteries that the makers claim run circles around Tesla’s current technology, there are advanced plans for the region to go fossil free by 2025, it has one of the highest (and happiest) immigrant populations in Europe, and yet all we can see from the minibus, from horizon to horizon, is cows.
When you’re invited to write about an area you know nothing about, a good place to start is the heritage. But even that can’t help us here. The tiny city of Leeuwarden boasts three hugely famous children: spy and exotic dancer Mata Hari, astrophysicist Jan Hendrik Oort (he of the Oort Cloud) and puzzle-minded artist Maurits Cornelis Escher. The trouble is, all three are famous for being maddening eccentrics.
All Leeuwarden’s poor publicists can do then, having brought us here, is throw everything at us and hope something sticks. And so it happens that, somewhere between the (world-leading) Princessehof ceramics museum and Lân fan Taal, a permanent pavilion celebrating world languages, someone somewhere makes a small logistical error and locks me inside an M C Escher exhibition.
Escher, who died in 1972, is famous for using mathematical ideas in his art, drawing on concepts from symmetry and hyperbolic geometry to create complex tessellated images. And the Fries Museum in Leeuwarden has gathered more than 80 original prints for me to explore, along with drawings, photographs and memorabilia, so there is no possibility of my getting bored.
Nor is the current exhibition, Escher’s Journey, the usual, chilly celebration of the man’s puzzle-making ability and mathematical sixth sense. Escher was a pleasant, passionate man with a taste for travel, and this show reveals how his personal experiences shaped his art.
Escher’s childhood was by his own account a happy one. His parents took a good deal of interest in his education without ever restricting his intellectual freedom. This was as well, since he was useless at school. Towards the end of his studies, he and his parents travelled through France to Italy, and in Florence he wrote to a friend: “I wallow in it, but so greedily that I fear that my stomach will not be able to withstand it.”
The cultural feast afforded by the city was the least of it. The Leeuwarden native was equally staggered by the surrounding hills – the sheer, three-dimensional fact of them; the rocky coasts and craggy defiles; the huddled mountain villages with squares, towers and houses with sloping roofs. Escher’s love of the Italian landscape consumed him and, much to his mother’s dismay, he was soon permanently settled in the country.
For visitors familiar to the point of satiety and beyond with Escher’s endlessly reproduced and commodified architectural puzzles and animal tessellations, the sketches he made in Italy during the 1920s and 1930s are the highlight of this show. Escher’s favoured medium was the engraving. It’s a time-consuming art, and one that affords the artist time to think and to tinker. Inevitably, Escher began merging his sketches into new, realistic wholes. Soon he was trying out unusual perspectives and image compilations. In Still Life with Mirror (1934), he crossed the threshold, creating a reflected world that proves on close inspection to be physically and mathematically impossible.
The usual charge against Escher as an artist – that he was too caught up in the toils of his own visual imagination to express much humanity – is hard to rebuff. There’s a gap here it’s not so easy to bridge: between Escher the approachable and warm-hearted family man and Escher the grumpy Parnassian (he once sent Mick Jagger away with a flea in his ear for asking him for an album cover).
The second world war had a lot to answer for, of course, not least because it drove Escher out of his beloved Italian hills and back, via Switzerland, to the flat old, dull old Netherlands. “Italy, the landscape, the people, they speak to me,” he explained in 1968. “Switzerland doesn’t and Holland even less so.”
Without the landscape to inform his art, other influences came to dominate. Among the places he had visited as war gathered was the Alhambra in Granada. The complex geometric patterns covering its every surface, and their timeless, endless repetition, fascinated him. For days on end he copied the Arab motifs in the palace. Back in the Netherlands, their influence, and Escher’s growing fascination with the mathematics of tessellation, would draw him away from landscapes toward an art consisting entirely of “visualised thoughts”.
By the time his images were based on periodic tilings (meaning that you can slide a pattern in a certain direction and have it exactly overlay the original), his commentaries suggest that Escher had come to embrace his own, somewhat sterile reputation. “I played a game,” he recalled, “indulged in imaginary thoughts, with no other intention than to explore the possibilities of representation. In my work I give a report on these discoveries.”
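The periodic-tiling property described above can be sketched in a few lines of Python. This is purely an illustration of the definition, not of any actual Escher design: the grid and shift vectors below are invented for the example.

```python
def is_periodic(grid, dx, dy):
    """Return True if shifting the grid by (dx, dy), with wraparound,
    exactly overlays the original pattern."""
    h, w = len(grid), len(grid[0])
    return all(
        grid[y][x] == grid[(y + dy) % h][(x + dx) % w]
        for y in range(h)
        for x in range(w)
    )

# A simple 4x4 motif: sliding it two cells across (or down) overlays it.
motif = [
    "abab",
    "cdcd",
    "abab",
    "cdcd",
]

print(is_periodic(motif, 2, 0))  # True: the pattern slides onto itself
print(is_periodic(motif, 1, 0))  # False: a one-cell shift does not
```

Escher’s tessellations satisfy exactly this kind of translational symmetry, though with far more intricate motifs than the toy grid here.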
In the end Escher’s designs became so fiendishly complex, his output dropped almost to zero, and much of his time was taken up lecturing and corresponding about his unique way of working. He corresponded with mathematicians, though he never considered himself one. He knew Roger Penrose. He lived to see the first fractal shapes evolve out of the mathematical studies of Koch and Mandelbrot, though it wasn’t until after his death that Benoît Mandelbrot coined the word “fractal” and popularised the concept.
Eventually, I am missed. At any rate, someone thinks to open the gallery door. I don’t know how long I was in there, locked in close proximity to my childhood hero. (Yes, as a child I did those jigsaw puzzles; yes, as a student I had those posters on my wall.) I can’t have been left inside Escher’s Journey for more than a few minutes. But I exited a wreck.
The Fries Museum has lit Escher’s works using some very subtle and precise spot projection; this and the trompe-l’œil monochrome paintwork on the walls of the gallery form a modestly Escherine puzzle all by themselves. Purely from the perspective of exhibition design, this charming, illuminating, and comprehensive show is well worth a visit.
Mark O’Connell’s To Be a Machine, a travelogue of strange journeys and bizarre encounters among transhumanists, won the 2018 Wellcome Book Prize. Wearing my New Scientist hat I asked O’Connell how he managed to give transhumanism a human face – despite his own scepticism.
Has transhumanism ever made personal sense to you?
Transhumanism’s critique of the human condition, its anxiety around having to die — that’s something I have some sympathy with, for sure, and that’s where the book began. The idea was for the door to some kind of conversion to be always open. But I was never really convinced that the big ideas in transhumanism, things like mind-uploading and so on, were really plausible. The most interesting question for me was, “Why would anyone want this?”
A lot of transhumanist thought is devoted to evading death. Do the transhumanists you met get much out of life?
I wouldn’t want to be outright prescriptive about what it means to live a meaningful life. I’m still trying to figure that one out myself. I think if you’re so devoted to the idea that we can outrun death, and that death makes life utterly meaningless, then you are avoiding the true animal nature of what it means to be human. But I find myself moving back and forth between that position and one that says, you know what, these people are driven by a deep, Promethean project. I don’t have the deep desire to shake the world to its core that these people have. In that sense, they’re living life to its absolute fullest.
What most sticks in your mind from your researches for the book?
The place that sticks in my mind most clearly is Alcor’s cryogenic life extension facility. In terms of just the visuals, it’s bizarre. You’re walking around what’s known as a “patient care bay”, among these gigantic stainless steel cylinders filled with corpses and severed heads that they’re going to unfreeze once a cure for death is found. The thing that really grabbed me was the juxtaposition between the sci-fi level of the thing and the fact that it was situated in a business park on the outskirts of Phoenix, next door to Big D’s Floor Covering Supplies and a tile showroom.
They do say the future arrives unevenly…
I think we’re at a very particular cultural point in terms of our relationship to “the future”. We aren’t really thinking of science as this boundless field of possibility any more, and so it seems a bit of a throwback, like something from an Arthur C. Clarke story. It’s like the thing with Elon Musk. Even the global problems he identifies — rogue AI, and finding a new planet that we can live on to perpetuate the species — seem so completely removed from actual problems that people are facing right now that they’re absurd. A handful of people who seem to wield almost infinite technological resources are devoting themselves to completely speculative non-problems. They’re not serious, on some basic level.
Are you saying transhumanism is a product of an unreal Silicon Valley mentality?
The big cultural influence over transhumanism, the thing that took it to the next level, seems to have been the development of the internet in the late 1990s. That’s when it really became a distinct social movement, as opposed to a group of more-or-less isolated eccentric thinkers and obsessives.
But it’s very much a global movement. I met a lot of Europeans – Russia in particular has a long prehistory of attempts to evade death. But most transhumanists have tended to end up in the US and specifically in Silicon Valley. I suppose that’s because these kinds of ideas get most traction there. You don’t get people laughing at you when you mention you want to live forever.
The one person I really found myself grappling with, in the most profound and unsettling way, was Randal Koene. It’s his idea of uploading the human mind to a computer that I find most deeply troubling and offensive, and kind of absurd. As a person and as a communicator, though, Koene was very powerful. A lot of people who are pushing forward these ideas — people like Ray Kurzweil — tend to be impresarios. Randal was the opposite. He was very quietly spoken, very humble, very much the scientist. There were moments he really pushed me out of my scepticism – and I liked him.
Is transhumanism science or religion?
It’s not a religion: there’s no God, for instance. But at the same time I think it very obviously replaces religion in terms of certain basic yearnings and anxieties. The anxiety about death is the obvious one.
There is a very serious religious subtext to all of transhumanism’s aspirations. And at the same time, transhumanists absolutely reject that thinking, because it tends to undermine their perception of themselves as hardline rationalists and deeply science-y people. Mysticism is quite toxic to their sense of themselves.
Will their future ever arrive?
On one level, it’s already happening. We’re walking round in this miasma of information and data, almost in a state of merger with technology. That’s what we’re grappling with as a culture. But if that future means an actual merger of artificial intelligence and human intelligence, I think that’s a deeply terrifying idea, and not, touch wood, something that is ever going to happen.
Should we be worried?
That is why I’m now writing a book about apocalyptic anxieties. It’s a way to try to get to grips with our current political and cultural moment.
To Be a Machine: Adventures among cyborgs, utopians, hackers, and the futurists solving the modest problem of death Mark O’Connell
How do characters and events in fiction differ from those in real life? And what is it about our experience of life that fiction exaggerates, omits or captures to achieve its effects?
Effective fiction is Vera Tobin’s subject. And as a cognitive scientist, she knows how pervasive and seductive it can be, even in – or perhaps especially in – the controlled environment of an experimental psychology lab.
Suppose, for instance, you want to know which parts of the brain are active when forming moral judgements, or reasoning about false beliefs. Research in these fields and others rests on fMRI brain scans. Volunteers receive short story prompts with information about outcomes or character intentions and, while their brains are scanned, have to judge what other characters ought to know or do.
“As a consequence,” writes Tobin in her new book Elements of Surprise, “much research that is putatively about how people think about other humans… tells us just as much, if not more, about how study participants think about characters in constructed narratives.”
Tobin is weary of economists banging on about the “flaws” in our cognitive apparatus. “The science on this phenomenon has tended to focus on cataloguing errors people make in solving problems or making decisions,” writes Tobin, “but… its place and status in storytelling, sense-making, and aesthetic pleasure deserve much more attention.”
Tobin shows how two major “flaws” in our thinking are in fact the necessary and desirable consequence of our capacity for social interaction. First, we wildly underestimate our differences. We model each other in our heads and have to assume this model is accurate, even while we’re revising it, moment to moment. At the same time, we have to assume no one else has any problem performing this task – which is why we’re continually mortified to discover other people have no idea who we really are.
Similarly, we find it hard to model the mental states of people, including our past selves, who know less about something than we do. This is largely because we forget how we came to that privileged knowledge.
“Tobin is weary of economists banging on about the ‘flaws’ in our cognitive apparatus”
There are implications for autism, too. It is, Tobin says, unlikely that many people with autism “lack” an understanding that others think differently – known as “theory of mind”. It is more likely they have difficulty inhibiting their knowledge when modelling others’ mental states.
And what about Emma, titular heroine of Jane Austen’s novel? She “is all too ready to presume that her intentions are unambiguous to others and has great difficulty imagining, once she has arrived at an interpretation of events, that others might believe something different”, says Tobin. Austen’s brilliance was to fashion a plot in which Emma experiences revelations that confront the consequences of her “curse of knowledge” – a cognitive bias that makes us assume anyone we communicate with has the background knowledge to understand what is being said.
Just as we assume others know what we’re thinking, we assume our past selves thought as we do now. Detective stories exploit this foible. Mildred Pierce, Michael Curtiz’s 1945 film, begins at the end, as it were, depicting the story’s climactic murder. We are fairly certain we know who did it, but we flash back to the past and work forward to the present only to find that we have misinterpreted everything.
I confess I was underwhelmed on finishing this excellent book. But then I remembered Sherlock Holmes’s complaint (mentioned by Tobin) that once he reveals the reasoning behind his deductions, people are no longer impressed by his singular skill. Tobin reveals valuable truths about the stories we tell to entertain each other, and those we tell ourselves to get by, and how they are related. Like any good magic trick, it is obvious once it has been explained.
Had you $1800 to spend on footwear in 2012, you might have considered buying a pair of RayFish sneakers. Delivery would have taken a while because you were invited to design the patterned leather yourself. You would then have had to wait while the company grew a pair of transgenic stingrays in their Thai aquaculture facility up to the age where their biocustomised skins could be harvested.
Alas, animal rights activists released the company’s first batch of rays into the wild before harvesting could take place, and the company suspended trading. Scuba divers still regularly report sightings of fish sporting the unlikely colourations that were RayFish’s signature.
RayFish was, you’ll be pleased to hear, a con, perpetrated by three Dutch artists five years ago. It now features in Fake, the latest show at the Science Gallery, Dublin, an institution that sells itself as the place “where art and science collide”.
The word “collide” is well chosen. “We’re not experts on any one topic here,” explains Ian Brunswick, the gallery’s head of programming, “and we’re not here to heal any kind of ‘rift’ between science and art. When we develop a show, we start from a much simpler place, with an open call to artists, designers and scientists.” They ask all the parties what they think of the new idea, and what they can show them. Scientists in particular, says Brunswick, often underestimate which elements of their work will captivate.
Founded under the auspices of Dublin’s Trinity College, the Science Gallery is becoming a global brand thanks to the support of founding partner Google.org. London gets a gallery later this year; Bangalore in 2019. The aim is not to educate, but to inspire visitors to educate themselves.
Brunswick recalls how climate change, in particular, triggered this sea-change in the way public educators think about their role: “I think many science shows have been operating a deficit model: they fill you up like an empty vessel, giving you enough facts so you agree with the scientists’ approach. And it doesn’t work.” A better approach, Brunswick argues, is to give the audience an immediate, visceral experience of the subject of the show.
For example, in 2014 Dublin’s Science Gallery called its climate change show “Strange Weather”, precisely to explore the fact that weather and climate change are different things, and that weather is the only phenomenon we experience directly on a daily basis. It got people to ask how they knew what they knew about the climate – and what knowledge they might be missing.
Playfulness characterises the current show. Fakery, it seems, is bad, necessary, inevitable, natural, dangerous, creative, and delightful, all at once. There are fictional animals here preserved in jars beside real specimens: are they fake, or merely out of context? And you can (and should) visit the faux-food deli and try a caramelised whey product from Norway that everyone calls cheese because what the devil else would you call it?
Then there’s a genuine painting that became a fake when its unscrupulous owner manipulated the artist’s signature. And the Chinese fake phones that are parodies you couldn’t possibly mistake for the real thing: from Pikachu to cigarette packets. There’s a machine here that will let you manipulate your fake laugh until it sounds genuine.
Fake’s contributing artists have left me with the distinct suspicion that the world I thought I knew is not the world.
Directly above RayFish’s brightly patterned sneakers, on the upper floor of the gallery, I saw Barack Obama delivering fictional speeches. A work in progress by researchers from the University of Washington, Synthesizing Obama is a visual form of lip-synching in which audio files of Obama speaking are converted into realistic mouth shapes. These are then blended with video images of Obama’s head as he delivers another speech entirely.
It’s a topical piece, given today’s accusatory politics, and a chilling one.
In 1872, the physicist Ludwig Boltzmann developed a theory of gases that confirmed the second law of thermodynamics, more or less proved the existence of atoms and established the asymmetry of time. He went on to describe temperature, and how it governed chemical change. Yet in 1906, this extraordinary man killed himself.
Boltzmann is the kindly if gloomy spirit hovering over Peter Atkins’s new book, Conjuring the Universe: The origins of the laws of nature. It is a cheerful, often self-deprecating account of how most physical laws can be unpacked from virtually nothing, and how some constants (the peculiarly precise and finite speed of light, for example) are not nearly as arbitrary as they sound.
Atkins dreams of a final theory of everything to explain a more-or-less clockwork universe. But rather than wave his hands about, he prefers to clarify what can be clarified, clear his readers’ minds of any pre-existing muddles or misinterpretations, and leave them, 168 succinct pages later, with a rather charming image of him tearing his hair out over the fact that the universe did not, after all, pop out of nothing.
It is thanks to Atkins that the ideas Boltzmann pioneered, at least in essence, can be grasped by us poor schlubs. Popular science writing has always been vital to science’s development. We ignore it at our peril and we owe it to ourselves and to those chipping away at the coalface of research to hold popular accounts of their work to the highest standards.
Enter Brian Clegg. He is such a prolific writer of popular science, it is easy to forget how good he is. Icon Books is keeping him busy writing short, sweet accounts for its Hot Science series. His latest is Gravitational Waves: How Einstein’s spacetime ripples reveal the secrets of the universe.
Clegg delivers an impressive double punch: he transforms a frustrating, century-long tale of disappointment into a gripping human drama, affording us a vivid glimpse into the uncanny, depersonalised and sometimes downright demoralising operations of big science. And readers still come away wishing they were physicists.
Less polished, and at times uncomfortably unctuous, Catching Stardust: Comets, asteroids and the birth of the solar system is nevertheless a promising debut from space scientist and commentator Natalie Starkey. Her description of how, from the most indirect evidence, a coherent history of our solar system was assembled, is astonishing, as are the details of the mind-bogglingly complex Rosetta mission to rendezvous with comet 67P/Churyumov-Gerasimenko – a mission in which she was directly involved.
It is possible to live one’s whole life within the realms of science and discovery. Plenty of us do. So it is always disconcerting to be reminded that longer-lasting civilisations than ours have done very well without science or formal logic, even. And who are we to say they afforded less happiness and fulfilment than our own?
Nor can we tut-tut at the way ignorant people today ride science’s coat-tails – not now antibiotics are failing and the sixth extinction is chewing its way through the food chain.
Physicists, especially, find such thinking well-nigh unbearable, and Alan Lightman speaks for them in his memoir Searching for Stars on an Island in Maine. He wants science to rule the physical realm and spirituality to rule “everything else”. Lightman is an elegant, sensitive writer, and he has written a delightful book about one man’s attempt to hold the world in his head.
But he is wrong. Human culture is so rich, diverse, engaging and significant, it is more than possible for people who don’t give a fig for science or even rational thinking to live lives that are meaningful to themselves and valuable to the rest of us.
“Consilience” was biologist E.O. Wilson’s word for the much-longed-for marriage of human enquiry. Lightman’s inadvertent achievement is to show that the task is more than just difficult, it is absurd.
Literary agent and provocateur John Brockman has turned popular science into a sort of modern shamanism, packaged non-fiction into gobbets of smart thinking, made stars of unlikely writers and continues to direct, deepen and contribute to some of the most hotly contested conversations in civic life.
This Idea Is Brilliant is the latest of Brockman’s annual anthologies drawn from edge.org, his website and shop window. It is one of the stronger books in the series. It is also one of the more troubling, addressing, informing and entertaining a public that has recently become extraordinarily confused about truth and falsehood, fact and knowledge.
Edge.org’s purpose has always been to collide scientists, business people and public intellectuals in fruitful ways. This year, the mix in the anthology leans towards the cognitive sciences, philosophy and the “freakonomic” end of the non-fiction bookshelf. It is a good time to return to basics: to ask how we know what we know, what role rationality plays in knowing, what tech does to help and hinder that knowing, and, frankly, whether in our hunger to democratise knowledge we have built a primrose-lined digital path straight to post-truth perdition.
Many contributors, biting the bullet, reckon so. Measuring the decline in the art of conversation against the rise of social media, anthropologist Nina Jablonski fears that “people are opting for leaner modes of communication because they’ve been socialized inadequately in richer ones”.
Meanwhile, an applied mathematician, Coco Krumme, turning the pages of Jorge Luis Borges’s short story The Lottery in Babylon, conceptualises the way our relationship with local and national government is being automated to the point where fixing wayward algorithms involves the application of yet more algorithms. In this way, civic life becomes opaque and arbitrary: a lottery. “To combat digital distraction, they’d throttle email on Sundays and build apps for meditation,” Krumme writes. “Instead of recommender systems that reveal what you most want to hear, they’d inject a set of countervailing views. The irony is that these manufactured gestures only intensify the hold of a Babylonian lottery.”
Of course, IT wasn’t created on a whim. It is a cognitive prosthesis for significant shortfalls in the way we think. Psychologist Adam Waytz cuts to the heart of this in his essay “The illusion of explanatory depth” – a phrase describing how people “feel they understand the world with far greater detail, coherence and depth than they really do”.
Humility is a watchword here. If our thinking has holes in it, if we forget, misconstrue, misinterpret or persist in false belief, if we care more for the social consequences of our beliefs than their accuracy, and if we suppress our appetite for innovation in times of crisis (all subjects of separate essays here), there are consequences. Why on earth would we imagine we can build machines that don’t reflect our own biases, or don’t – in a ham-fisted effort to correct for them – create ones of their own we can barely spot, let alone fix?
Neuroscientist Sam Harris is one of several here who, searching for a solution to the “truthiness” crisis, simply appeals to basic decency. We must, he argues, be willing to be seen to change our minds: “Wherever we look, we find otherwise sane men and women making extraordinary efforts to avoid changing [them].”
He has a point. Though our cognitive biases, shortfalls and the like make us less than ideal rational agents, evolution has equipped us with social capacities that, smartly handled, run rings round the “cleverest” algorithm.
Let psychologist Abigail Marsh have the last word: “We have our flaws… but we can also claim to be the species shaped by evolution to possess the most open hearts and the greatest proclivity for caring on Earth.” This may, when all’s said and done, have to be enough.
Berlin’s festival of art and media culture Transmediale is an annual reminder that art is more than a luxury good. It gives us the words, images and ideas we need to talk to each other about a changing world.
Big social changes involve big shifts in how art is made and consumed. It is a nerve-racking process for artists, who can have no idea, as they embark on their ventures, whether the public will come to appreciate and enjoy their work. And at this year’s Transmediale, the chickens came home to roost.
To begin at the beginning, back in the 1950s, Andy Warhol and the pop art movement looked at the world through the prism of advertising hoardings and television. A new generation of artists has been making art out of the internet.
Some artists have attempted to imagine the internet itself, paying attention to developments in data management and artificial intelligence, so they can better imagine what the internet is and what it might become.
The performance premiering at the festival this year, James Ferraro’s Dante-esque Plague, was a work of this sort: a credible, visceral and downright terrifying portrayal of consciousness emerging from the audio-visual detritus of social media.
Other artists have used the internet as a tool through which to look at the world. Much of this work resembles anthropology more than art. Take Lisa Rave’s film Europium, which flits between trading floors, TV showrooms and a wedding ceremony in Papua New Guinea to trace the material connections and cultural gulfs that distinguish different kinds of money, from seashell dowries to plastic banknotes. In so doing, she constructs a microhistory of the rare element europium that wouldn’t look out of place in a high-end magazine, and brings the hackneyed link between capitalism and colonialism to life.
“The internet sorts. It archives. Many of its artists are, in consequence, good little bureaucrats”
But there is a problem: artists working with the materials of the internet are further removed from physical reality than their forebears. They are looking at the world through what is, really, a single, totalising, bureaucratic machine. (It’s called the World Wide Web for a reason.) And in art, as in life, you are what you eat.
The internet sorts. It archives. Many of its artists are, in consequence, good little bureaucrats who offer “findings”, “research” and “presentations” (at Transmediale we even had an “actualisation”, from artist and gay activist Zach Blas), but rarely anything as trite as finished work.
Nothing ages on the internet; nothing dies. Nothing is ever resolved. Similarly with its art: Heather Dewey-Hagborg’s A Becoming Resemblance, which uses DNA from Chelsea Manning, the former US soldier who leaked classified documents, is to all intents and purposes a brand new piece, but it is still presented as a fragment of a work begun in 2015.
Does the open-endedness of this art make it bad? Of course not. But internet art hardly ever gets finished. There’s always more data to sort, a virtual infinitude of rabbit holes to hurl yourself down, and very little that is genuinely new has had a chance to emerge. I defy a newcomer to tell the difference between the work premiering here and work that is 20 years old.
The field has, as a consequence, turned into the art world’s Peter Pan: the child that never grew up. And we treat it as a child. We tiptoe around anything resembling a negative opinion, as though every time one of us said, “I don’t believe this piece is any good”, a video artist somewhere would fall down dead.
In other words, the world of media art has suffered the same fate that has befallen the rest of the internet-enabled planet. The very technology that promised us the world on a screen has been steadily filtering out the challenges and contrary opinions that made our interests and ideas so vital in the first place, leaving us living in an echo chamber.
It was Lioudmila Voropai, a Ukrainian art historian, who got the gathered artists, curators and academics at Transmediale to confront some chilly realities about their field. One clue that the book she was launching contained dynamite was its title, Media Art as a By-Product – no punches pulled there. Another was that she spent all her time telling us what her book didn't do. It didn't criticise. It didn't take a political position. It asked a few questions. It didn't have answers. Nothing to see here.
Finally someone piped up: “So the media art we’ve come here to enjoy and talk about and theorise over actually exists only to sustain museums of media art? Is that what you’re saying?”
And Voropai, perhaps figuring that she may as well be hanged for a sheep as a lamb, let rip: "The extraordinary thing about media art," she said, "is that the moment it was institutionally established, it was declared conceptually obsolete."
This was only the beginning. Speaker after speaker made sincere efforts to get the left-wing, countercultural, transgressive Transmediale participants to look at themselves in the mirror. It took courage to try to get media artists to admit that their radical chic has been stolen by the likes of the just-as-countercultural far-right Breitbart News Network; that they have forgotten (as right-wingers like Donald Trump have not) how to entertain; and that they exist chiefly to sustain the institutions that fund them. These efforts were received with seriousness and courtesy.
“If the internet disappears, our lives will be held hostage by an invisible infrastructure”
Attempts to puncture the “new media art” bubble from the inside might have seemed a bit laughable to outsiders. Occupying most of the venue’s impressive foyer, Hate Library was a printout of the results (pictured left) artist Nick Thurston obtained when he typed “truth” into the search box on the online bulletin board of the white-supremacist Stormfront Europe group. The idea, I think, was to confront the Transmediale crowd with the big, bad world outside. But to the rest of us, this felt like old news. If you go there, and type that, surely you get what you deserve?
Even so, I am inclined to admire people who take their social and artistic responsibilities seriously enough to ask uncomfortable questions of themselves, and risk a bit of awkwardness and ridicule along the way.
After all, much of this work does get under your skin. It does make you look at the world anew. As I was leaving, I looked in at Yuri Pattison’s installation Vitra Alcove (some border thoughts). Pattison has mashed up videogame-generated coastal cities and garbled news tickers to capture the queasy liquidity of mediated life.
Sitting there, bombarded by algorithmically generated fake news and dizzy from the image blizzard, I was reminded of the few fraught days I once spent sitting among New Scientist's news team as it fished for real stories in a web-borne ocean of alarmism, self-promotion and misinterpretation. Pattison's work says at least as much about my life as L. S. Lowry's paintings of matchstalk men and cats and dogs said about my grandfather's.
In January 2015, Eric Schmidt, then executive chairman of Google, declared that the internet was destined to disappear. He was talking about the internet of things: how the infrastructure that is beginning to weave together the materials and objects of daily life would burrow its way into our lives, and so become invisible.
But if, in the act of becoming ubiquitous, the internet also disappears, then our lives will be held hostage by a bureaucratic infrastructure we can no longer see, never mind control. Media art explores and shines strong light onto this complacent, hyper-conformist, not-so-brave world. Of course the art is strange, hard to explain – and a work in progress. How could it not be? That is its job.
AFFECTION and delight aren’t qualities you would immediately associate with an exhibition about blood flow. But Ceaseless Motion reaches beyond the science to celebrate the man – 17th-century physician William Harvey – who, the story goes, invented the tradition of doctors’ bad handwriting. He was also a benefactor: when founding a lecture series in his own name, he remembered to bequeath money for the provision of refreshments.
It is an exhibition conceived, organised and hosted by the UK’s Royal College of Physicians, whose 17th-century librarian Christopher Merrett described how to make champagne several years before the monk Dom Pérignon began his experiments. Less happily, Merrett went on a drinking binge in 1666, and let Harvey’s huge book collection burn in London’s Great Fire.
The documents, seals and signatures that survived the flames despite Merrett’s neglect take pride of place in an exhibition that, within a very little compass, tells the story of one of medicine’s more important revolutionaries through documents, portraits and some deceptively chatty wall information.
Before Harvey’s 10 years of intense, solitary study bore fruit, physicians thought blood was manufactured in the liver and then passed through the body under its own volumetric pressure. Heaven help you if you made too much of the stuff. Luckily, physicians were on hand to release this disease-inducing pressure through bloodletting.
It sounds daft now, but clues back then that something quite different was going on were sparse and controversial. The 16th-century physician Andreas Vesalius had puzzled over the heart. If, like every other organ, it fed on blood produced in the liver, why were its walls so impenetrably hard? But even this towering figure, the founder of modern anatomy, decided that his own observations had to be wrong.
It was Hieronymus Fabricius, Harvey's teacher in Padua, Italy, who offered a new and fruitful tack when he mapped "the little doors in the veins" that, we know now, are valves maintaining the flow of blood back to the heart.
Within 30 years, Harvey’s realisation that blood pressure is controlled by the heart, and that this organ actively pumps blood around the body in a continuous circuit, had overturned the teachings of the 2nd-century Graeco-Roman physician Claudius Galen in European centres of learning. The new thinking also put close clinical observation at the heart of a discipline that had traditionally spent more time on textual analysis than on examining patients.
The exhibition is housed in a building designed by Denys Lasdun. This celebrated modernist architect was so taken by Harvey’s achievements that he designed the interiors as a subtle homage to the human circulatory system.
With the royal college now celebrating its 500th birthday, its institutional pride is palpable, but never stuffy. As one staff member told me, “We only started talking about ourselves as a ‘Royal’ college after the Restoration, to suck up to the king.”
Those who can visit should be brave and explore. Upstairs, there are wooden panels from Padua with the dried and salted circulatory and nervous systems of executed criminals lacquered into them. They are rare survivors: when pickling methods improved and it was possible to provide medical students with three-dimensional teaching aids, such “anatomical plates” were discarded.
Downstairs, there are endless curiosities. The long sticks doctors carried in 18th-century caricatures were clinical instruments – latex gloves didn’t arrive until 1889. The sticks’ silver ferrules contained miasma-defeating herbs and, sometimes, phials of alcohol. None of them are as handsome as Harvey’s own demonstration rod.
But if a visit in person is out of the question, take a look at the royal college’s new website, launched to celebrate half a millennium of institutional conviviality and controversy. You will have to provide your own biscuits, though.
On Friday 12 January 2018, curators Julie Freeman and Hannah Redler Hawes left work at London’s Open Data Institute confident that, come Monday morning, there would be at least a few packets of crisps in the office.
Artist Ellie Harrison‘s Vending Machine (2009; pictured below) sits in the ODI’s kitchen, one of the more venerable exhibits to have been acquired over the institute’s five-year programme celebrating data as culture. It has been hacked to dispense a packet of salty snacks whenever the BBC’s RSS feed carries a news item containing financial misfortune.
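The trigger mechanism can be imagined as a small polling loop: scan incoming headlines for words signalling financial bad news, and fire the dispenser on a match. The keyword list and the dispense routine below are my own illustrative guesses, not Harrison's actual code.

```python
# Hypothetical sketch of Vending Machine's trigger logic: scan news
# headlines (in the real piece, from the BBC's RSS feed) and "dispense"
# a snack for each one that mentions financial misfortune.
# The keyword list and dispense() stub are illustrative assumptions.

MISFORTUNE_KEYWORDS = [
    "recession", "liquidation", "bankrupt", "crash",
    "collapse", "losses", "administration", "downturn",
]

def mentions_misfortune(headline: str) -> bool:
    """True if the headline contains any misfortune keyword."""
    text = headline.lower()
    return any(word in text for word in MISFORTUNE_KEYWORDS)

def dispense() -> None:
    # Stand-in for driving the vending machine's motor.
    print("*clunk* - a packet of crisps drops")

def check_feed(headlines: list[str]) -> int:
    """Dispense once per bad-news headline; return the dispense count."""
    count = 0
    for headline in headlines:
        if mentions_misfortune(headline):
            dispense()
            count += 1
    return count

check_feed([
    "Carillion goes into liquidation",
    "Weather: mild start to the week",
])
```

On a morning like the Carillion collapse, a loop like this would fire over and over — which is roughly how the hopper ended up jammed shut.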
No one could have guessed that, come 7 am on Monday morning, Carillion, the UK government's giant services contractor, would have gone into liquidation. Staff say there were so many packets in the hopper that no one could open the door.
Such apparently silly anecdotes are the stuff of this year’s show, the fifth in the ODI’s annual exhibition series “Data as Culture”. This year, humour and absurdity are being harnessed to ask big questions about internet culture, privacy and artificial intelligence.
Looking at the world through algorithmic lenses may bring occasional insight, but what really matters here are the pratfalls as, time and again, our machines misconstrue a world they cannot possibly comprehend.
In 2017, artist Pip Thornton fed famous poems to Google’s online advertising service, Google AdWords, and printed the monetised results on till receipts. The framed results value the word “cloud” (as in I Wandered Lonely as a Cloud by William Wordsworth) highly, at £4.73, presumably because Google’s algorithm was dreaming of internet servers. It had no time at all for Wilfred Owen: “Froth-corrupted” (Dulce et Decorum Est) earned exactly £0.00.
You can, of course, reverse this game and ask what happens to people when they over-interpret machine-generated data, seeing patterns that aren’t there.
This is what Lee Montgomery has done with Stupidity Tax (2017). In an effort to understand his father’s mild but unaccountably secretive gambling habit, Montgomery has used a variety of data analysis techniques to attempt to predict the UK National Lottery. The sting in this particular tale is the installation’s associated website, which implies (mischievously, I hope) that the whole tongue-in-cheek effort has driven the artist ever so slightly mad.
Watching over the whole exhibition – literally, because it's peeking through a hole in a ceiling tile – is Franco and Eva Mattes's Ceiling Cat, a taxidermied realisation of the internet meme, and a comment on our beliefs about surveillance (pictured top). "It's cute and scary at the same time," the artists say, "like the internet."
Co-curator Freeman is a data artist herself. If you visited last year’s New Scientist Live you may well have seen her naked mole-rat surveillance project. The 7.5 million data points acquired by the project are now keeping network analysts busy at Queen Mary University of London. “We want to know if mole-rats make good encryption objects,” says Freeman. Their nest behaviours might generate true random numbers, handy for data security. “But the mole-rat queens are far too predictable… Crisp?”
Through a mouthful of salt and vinegar, I ask Freeman where her playfulness comes from. And as I suspected, there’s intellectual steel beneath: “Data is being constantly visualised so we can comprehend it,” she says, “and those visualisations are often done in a very short space of time, for a particular purpose, in a particular context, for a particular audience. Then they acquire this afterlife. All of a sudden, they’re the lenses we’re looking through. If you start thinking about data as something rigid and objective and bearing the weight of truth, then you’ve stopped discerning what is right and what is wrong.”
Freeman wants us to analyse data, not abandon it, and her exhibition is an act of tough love. “When we fetishise data, we end up with what’s happening in social media,” she says. “So many people drowning in metadata, pointing to pointers, and never acquiring any knowledge that’s deep and valuable. There should be some words to express that glut, that need to roll back a little bit. Here, have another crisp.”