THEO PANAYIDES meets a philosopher engaged in building robots who is convinced they will never outsmart us
There’s often a moment in American films when our heroes meet someone like Selmer Bringsjord (he is indeed American, despite the Norwegian name). He might be a small-town judge, or a sheriff. He’s deadpan, and rather doleful. He speaks slowly, in a deep bullhorn voice. He says ‘darn’ to express annoyance. The cinematic Selmer soothes our heroes’ fears, and talks some sense into them. In real life, however, Selmer isn’t a judge or a sheriff but something much less folksy – an academic researcher in the field of Artificial Intelligence (AI), building robots for the relatively near future when machines will have penetrated deep (or even deeper) into everyday life.
How near is that future? “Two years ago, on a ski trip in Colorado, we had a very intellectually violent debate about the timeline,” he replies in his booming, deliberate voice. He himself believes that, when it comes to self-driving cars for instance, 50 years is a reasonable estimate; at that point, it’ll be “a complete done deal. Humans may enjoy driving, and driving fast, but the machines will be doing all of that”. Selmer may or may not be around to witness this (he’ll be 57 next month), but he’s fairly sure it’ll happen; the twist, however, is that – unlike most of the excitable folks who work in AI – he also thinks the human mind will forever be superior to machines and robots, however intelligent.
This is not always a popular view in his field. Futurologists love to talk – and not just hypothetically – about the so-called Singularity, a point in the future when computers or robots will be smart enough to build computers or robots even smarter than themselves, resulting (per Wikipedia) in “a runaway effect… creating intelligence far exceeding human intellectual capacity and control”. In short, many people are convinced that machines will eventually outsmart us, and presumably take over the world. Selmer, however, remains sceptical – though “that doesn’t mean that powerful autonomous machines can’t wreak havoc, and they will. They will, they will.”
The reason for that upcoming havoc is something called formal programme verification: this is essentially the process (a very technical, very expensive process) by which we check that a computer programme is behaving as intended – “and most countries, the US included, are spending next to nothing on it”. Hardly anyone knows how to do it, so we’re building these increasingly autonomous and powerful machines while shutting down the process of keeping an eye on them. (The problem is compounded by the fact that operating systems are privately owned by the likes of Microsoft and Google, so researchers who might want to work on the machines don’t have access to them.) Selmer is aware of this problem yet he’s also, among other things, Director of the Rensselaer AI & Reasoning Laboratory at the Rensselaer Polytechnic Institute in upstate New York – and his work in the lab consists of trying to make precisely the kind of machines he’s talking about, the self-aware kind that may eventually ‘wreak havoc’.
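For readers unfamiliar with the idea, verification can be sketched very roughly in code. Everything below is invented for illustration – a toy speed-limiting routine and a formal specification of what it is meant to do – and the check is a brute-force enumeration over a small input space; real formal verification uses theorem provers to establish such properties for all possible inputs, which is what makes it so technical and expensive.

```python
# Toy sketch of programme verification: check a programme against a
# formal specification of its intended behaviour. (Both the function
# and the spec are invented for illustration; industrial tools prove
# the property for ALL inputs rather than enumerating a sample.)

def clamp_speed(requested: int, limit: int) -> int:
    """Programme under test: never let speed exceed the limit."""
    return requested if requested <= limit else limit

def spec(requested: int, limit: int, result: int) -> bool:
    """Formal specification: the result respects the limit, never
    exceeds what was asked for, and is one of those two values."""
    return (result <= limit and result <= requested
            and (result == requested or result == limit))

# Exhaustive check over a small input space.
violations = [(r, l) for r in range(50) for l in range(50)
              if not spec(r, l, clamp_speed(r, l))]
print("verified" if not violations else f"failed: {violations[:3]}")
# prints "verified"
```

The point of the sketch is the separation of the two artefacts: the programme, and an independent mathematical statement of what the programme is supposed to do, against which it can be checked.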
Why does he do it? Practically speaking, I assume it’s because the robotics revolution is coming one way or the other (some would say it’s already here), so he might as well be part of it. Philosophically speaking, I assume it’s because he’s convinced (as already mentioned) that robots will never overtake us, though they might go rogue once in a while – and Philosophy, after all, is his turf: it was the subject of both his BA and his PhD. When I ask whose theories of the future he finds most plausible, he mentions Leibniz, the 17th-century German philosopher.
It’s heartening to find a philosopher engaged in building robots – because AI brings with it dilemmas that can only be described as philosophical. Take the self-driving car, for instance. Selmer’s recent talk at TEDx Limassol (we meet at the Elias Beach Hotel, the day before the event) was entitled ‘Can Robots Be True Heroes?’, ‘heroes’ being this year’s TEDx theme – and his point was that, at the end of the day, “robots cannot be genuine heroes” because they don’t have emotions, and heroism consists of “rising above deep emotions that are militating against doing this”. The man who walks into a burning building to save a child is a hero, says Selmer, because every sinew in his body is shouting “No! No, you have a family, and you’ve got 20 years left of your life, and you’ve got great things to accomplish”. Robots have no such qualms, hence they can’t be true heroes.
Maybe so. But what about the self-driving car that decides to sacrifice its own ‘body’ in order to prevent carnage? A child runs into the street, and the robot calculates that crashing the car – i.e. destroying itself – without injuring its owner/occupant will prevent loss of life. Isn’t self-sacrifice a kind of heroism? And, if robots can be heroes, how far will they go? “I think this is an enormous looming issue that no-one is really thinking about,” notes Selmer, looking even more doleful than usual. After all, “this kind of thinking is what machines do. This is what they’ve done in chess… If we don’t want the machines to do this, we will have to programme them somehow not to, because they’re going to see these things that we currently, for the most part, don’t see”.
Think about it. A child runs into the street, and the robot crashes (without injuring its occupant) to save the child’s life. But what if it has to choose between running over the child and crashing into another car? And what if there’s a whole bunch of children in the street, and it calculates that killing its occupant – by crashing into a wall, say – would still be a better option than ploughing into the kids? One assumes there’s some sort of Prime Directive to protect the car’s human owner, as in Isaac Asimov’s sci-fi novels – but an intelligent machine keeps learning and expanding, that’s the idea. Robots already use “naïve utilitarianism,” says Selmer; could they also learn “Kantian ethics”, the idea of basic rules (e.g. that human life is intrinsically valuable) that you never contravene? “This kind of dimension is really hard to figure out how to capture mechanically. I mean, I’m personally working on that, so I know it’s hard”.
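The contrast Selmer draws between the two ethical styles can be sketched in a few lines of code. Everything here – the options, the casualty estimates, the inviolable rule – is invented for illustration, not a real driving system: the naïve utilitarian simply minimises expected harm, while the Kantian layer first discards any option that breaks a rule it may never contravene, and only then minimises harm among what remains.

```python
# Illustration of naive utilitarian vs rule-constrained (Kantian)
# decision-making. All options, numbers and rules are hypothetical.

# Each option: (description, expected deaths, breaks an inviolable rule)
# Hypothetical rule: never deliberately sacrifice the occupant.
options = [
    ("plough into the children", 3, False),
    ("crash into the wall, killing the occupant", 1, True),
    ("swerve into the other car", 2, False),
]

def naive_utilitarian(opts):
    """Pick whichever outcome minimises expected loss of life."""
    return min(opts, key=lambda o: o[1])

def kantian_then_utilitarian(opts):
    """First discard options that violate an inviolable rule,
    then minimise harm among the permitted remainder."""
    permitted = [o for o in opts if not o[2]]
    return min(permitted, key=lambda o: o[1])

print(naive_utilitarian(options)[0])
# prints "crash into the wall, killing the occupant"
print(kantian_then_utilitarian(options)[0])
# prints "swerve into the other car"
```

Notice that the two procedures disagree: the pure harm-minimiser sacrifices the occupant, while the rule-constrained version accepts a worse body count rather than break its rule. That disagreement is exactly the dimension Selmer says is “really hard to figure out how to capture mechanically”.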
Does he think his work with robots has given him a deeper understanding of what it means to be human?
He pauses, his deadpan expression giving no indication that he’s about to be funny. “Well, it would be hubristic for me to say yes. But I will say yes.”
Nothing like artificial intelligence, it seems, to make you appreciate real intelligence – and Selmer Bringsjord comes across that way on a personal level, as a man who appreciates the world and all who live in it. His small-town-judge demeanour is the opposite of arrogant; he’s very approachable, and spends a good 10 minutes after the interview asking for tips on what to see in Cyprus (he and his wife plan to come back soon, and explore more thoroughly). He seems, for want of a better word – or maybe there is no better word – very human.
For one thing, he appreciates travel, having given lectures and interviews in two dozen countries. For another, he appreciates family. He and his wife have been married since 1982, and have been a couple since Selmer was 15: “We were effectively childhood sweethearts”. (So these high-school relationships can work, I muse. “They can work,” he confirms, adding pointedly: “It takes a very understanding, patient woman for it to work”.) He’s also very close to his brother, and it’s long been a Thanksgiving tradition for the two Bringsjord families to go on a trip together – which suggests that he’s also close to his children, a son and daughter working for PwC and Macy’s, respectively.
Maybe he treasures family because he grew up without it, or at least without a part of it: his parents divorced when he was five, and his dad went back to Norway where he promptly dropped out of his son’s life (they only met once more, when Selmer was 16). He was raised by his mum, an entrepreneurial beautician who also dabbled in real estate – and she always pegged him for a future academic, despite the non-academic family background (his absent father had worked as a builder). “She’d tease me about it. I remember when I was extremely young, she would say: ‘You’re basically not doing anything, it’s really hard to get you to do anything. You’re just sitting there reading!… The only thing you can be is a professor’.”
So he was always a bit of a dreamer?
“I certainly like to dream. I certainly like to read, and have always loved reading”. The internet is like crack cocaine for him; he can sit and read for hours, flitting from research papers to travel pieces. “I have to control myself, yes,” he says soberly. “I love reading about different places in the world and then travelling to them… I mean, to get online and say ‘oh, Cyprus. Darn, I’ve never been to Cyprus, this looks really interesting’, and I start to read about the history of Cyprus – it’s amazing!” When he’s not reading, he’s writing – not just on robots but, for instance, a spy novel called Soft Wars (“What was the secret passed on from Russian top espionage master Andreev Kasakov to his even more ruthless son?” runs the Amazon synopsis) or a kind of philosophical tract called Abortion: A Dialogue, in which various characters debate the titular issue in a coffee house.
An academic who dabbles in spy novels, a robot-maker with no great faith in robots, an avid traveller who’s never lived outside the Hudson Valley (he was born in White Plains, New York, just a few miles from his current workplace). There are many facets to Selmer Bringsjord – and in fact his low-key style is also deceptive; he’s not just a reader and a dreamer, but also an athlete. He’s a serious skier, a “ski patroller”, and also plays golf and tennis. None of those are team sports, I note, and he shrugs thoughtfully: he does ski and golf with other people, he points out – “but I [also] have no problem enjoying solitude in those two sports, and really enjoying it a lot”. I suspect he may be one of those idealistic maverick types who love humanity but could probably live without actual humans.
Meanwhile, the robots are approaching – autonomous weapons getting ever closer to the battlefield (this is where formal programme verification might be useful), plus of course more benign applications. The Japanese have invested hugely in elder care, especially for patients with Alzheimer’s, and there’s also a huge growth market in education, robots as teaching assistants (maybe as “intelligent agents” rather than physical machines). Won’t people resent the incursion? Evidence shows that “most humans, including children, seem to have no problem developing an unusually strong bond with robots,” he replies. “The robot has to be respectful, courteous – maybe even seemingly loving. It can’t screw up all the time, people don’t have a lot of patience for machinery that screws up. But I think long-term there will be great acceptance.”
Robots can’t be heroes, though – and it’s unlikely they’ll ever be as quirky and multi-faceted as Selmer is, and indeed as we all are. I don’t know if he’s right about the human mind, but certainly AI will have trouble replicating the human soul – the lattice of traits and contradictions that make each of us unique, the inner sense of what it’s like to be us. “The US is increasingly an AI economy. But the AI itself is not exactly deep”. You buy a book at Amazon and the online robots make recommendations, cross-referencing your tastes with millions of other readers’. “That is great, I love it, it pays the bills” for whoever designed it – but it’s not really deep thinking. Selmer points to my tape recorder: “You having to rationalise, systematise, take this information from the audio of what I said and produce a document makes [those robots] seem like absolute child’s play”. I feel better already.