By Richard Dickenson
In a lecture the professor was demonstrating the functions of a small experimental robot. It was the size of a child’s doll, with a head, arms and legs, and it looked and was dressed like a child. It could talk, answer questions, stand up and sit down. The professor picked it up and smashed its head down onto the table a few times. There were gasps of horror from the audience.
Why? This was a small machine, already obsolete, with metal parts and joints and a small, powerful controlling computer. For demonstration purposes the computer was in the robot’s thigh and its ‘voice’ was projected through a speaker in its chest.
The point is that the demonstration highlighted a recent and developing problem. For some years it has been noticed that the more humanoid robots appear and behave, the more the humans working on and with them start to form relationships with them. Some robots do have primitive reasoning capacity derived from their artificial intelligence (AI). They can respond to humans and can even replicate some simple emotional features. Appearance, human-like actions and emotional interaction are the simple basis on which we humans form relationships. There seems no reason why robot-human relationships should be fundamentally different.
In the burgeoning sex market it is already possible to buy extraordinarily lifelike ‘partners’ that are dumbfoundingly impressive in function. The next ten or twenty years will vastly increase function and decrease cost. The day of the totally-subservient, all-round, low-maintenance partner will have arrived. How will ancient dicta such as those surrounding love, marriage, intimacy and fidelity react to such changes?
Russia, Israel and some of the major military powers are developing self-operating missiles and tanks that can select their own targets. ‘Soldiers’ with high AI quotients are getting cheaper to produce with every year that passes – they don’t need huge compensation payments and years of support if they are damaged. ‘Eye-in-the-Sky’ surveillance and pin-point precision missiles are already in global action. AI and robotics are being funded on an enormous scale.
But even without these horrifying products of the human ‘death inclination’, the underlying problems are huge and so new that human sociology has, as yet, no clear thinking and no certain way forward. Scientists everywhere are working at full speed on what they think of as progress. Few are anything like as deeply concerned with the consequences.
There are so many threats to human existence, so many possibilities of extraterrestrial factors affecting the progress and even existence of humans that there seems no way through the complexities of the maze facing us.
Soon, and I mean within fifty years at most, the pressures of economy, climate change and health and disease will inevitably hasten the need for at least some parts of humans to function more like machines. The human-machine combination seems inevitable. The need to begin evolving our bodies away from their current materials towards something more enduring is already clear. Must our ultimate aim be immortality, achieved by replacing flesh and bone with inorganic material?
And how will AI be apportioned? Will it be democratic, with a rationed, controllable allocation for all? Or will it be competitive, like every other human feature?
And here we come back to that matter of robots. As they become more and more human will pressure groups arise clamouring for robot-rights? And where would those ‘rights’ start and end? Indeed, should robots have rights of their own at all? These are real questions, urgent and complex.
Rights normally pertain to people, to human beings, and they refer to fundamental rights, like human dignity and the five freedoms. But the closer we get to having humanoid or android robots in our homes and, perhaps, in our friendship circles, the more rights we will naturally want to give them. Once a robot becomes a partner, a companion, a friend for us, or even a servant or a nurse, we will want to protect it. So, as we give some kinds of rights to animals, should we give some kinds of rights to robots as well?
If you buy a robot, it then works for you, for free; you decide what it does, where and when, with just a few paltry volts for ‘food’. In short, robots are used exclusively for our purposes and without their consent. Isn’t that a definition of slavery? But can you be cruel to a robot?
And if you lose your temper and chop its head off, will that be murder?