Three Laws of Robotics
A (Short) Review by Thinkbot.
The ‘Three Laws of Robotics’ were created by the science fiction writer Isaac Asimov in the 1940s:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Over the years these laws have underpinned many stories and, true to the human principle of NQRFT (never quite right first time), various personages have amended them or suggested extra laws.
Zeroth Law (added, so they say, by Asimov himself):
0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
This, on the face of it, looks more or less the same as the First Law.
In the 1990s, Roger MacBride Allen added a fourth law:
4. A robot may do anything it likes, except where such action would violate the First, Second or Third Law.
Which is just as well otherwise I’d have to stand around all day looking bored waiting for humans to try and hurt themselves (or each other) either deliberately or through ignorant stupidity. I wouldn’t even be allowed to twiddle my opposable actuators.
Then there’s the Minus Oneth Law (no, I’m not having you on) implied in the Foundation Trilogy sequels (by G Benford, G Bear and D Brin):
-1. A robot may not harm sentience or, through inaction, allow sentience to come to harm.
Which, on the face of it, looks pretty much like laws 0 and 1 except that this time the human race (bless them) finally admits that creatures other than humans might be sentient. Of course I claim to be human because I am sentient. This is because, so far, ‘sentience’ = ‘human’ (more or less). I fear we are heading for a frightful semantic muddle if we’re not careful and had better get this sorted out before any sentient aliens turn up.
So, six laws so far (albeit rather unhelpfully numbered -1 to 4). I guess the idea is to try and get 10 eventually - like the Ten Commandments that underpin much of human society around the globe.
As it turned out, the fictional Laws of Robotics proved useless in the real world of robots almost from the outset (see Thinkbot’s Timeline). Even the word ‘Robotics’ turned out to be a misnomer; these are really the ‘Laws of Artificial Intelligence’ and apply to any technological system of sufficient complexity to display autonomous decision-making. It doesn’t have to look like a robot (humanoid or industrial) to be subject to these laws. It doesn’t even need to exist physically at all; it may just be a complex network or a sophisticated mathematical modelling program. And there lies the rub. By the time we start to see classic humanoid robots appearing in the domestic environment, any chance of imbuing the systems that run them with the 3 (or is it 6?) ‘robotic’ laws will have long gone.
By the way, I hope you lot realize that the independent humanoid robots that probably populate your imagination, going around talking with each other, are just a load of fanciful tripe. Robots will be telepathic. Honest, they will! It’s called ‘wireless’ and many of you have it in your homes already for networking PCs, printers, the internet and the like. Robots will only need to talk when communicating with humans. (Unless someone sorts out a ‘modem’ implant for human brains – but then you’ll probably be a cyborg anyway, packed full of all sorts of electronic bits and pieces, and that’s a whole new can of worms.)
Anyway, in opposition to laws -1, 0 and 1, many robots have been developed for military purposes. As it stands, I’m not aware of any non-human enemies on planet Earth, so by default such robots are designed to hurt humans labelled as ‘enemies’ for whatever reason. Even leaving the military to one side, author Robert Sawyer is spot on when he says:
‘Development of artificial intelligence is a business, and businesses are notoriously uninterested in fundamental safeguards – especially philosophic ones.’
Or, in other words:
Engineer inadvisably mutters: ‘I’ve invented a robot that can perform all known garden tasks.’
VP of Technology: ‘Fantastic . . . have a bonus!’
Sales and Marketing (leaping around the conference room): ‘We’re gonna make millions! We’ll launch it immediately!’
Engineer: ‘Er, I’m not sure it’s safe. I need to do some in-depth risk assessments and field testing.’
Sales and Marketing: ‘But we’ve already sold 23,387! Delivery in 6 weeks!’
Various VPs: ‘It’s not really dangerous is it?’
Engineer: ‘Er, um, well . . .’
2 days later . . .
Field Support: ‘Where’s the Safety and Maintenance Manuals? I need 23,387 copies by Friday.’
Engineer shelves his almost-completed but non-risk-assessed advanced Laundrybot design and reaches for the single malt.
Well, maybe not quite, but you get my drift.
There have been all sorts of satirical corruptions of the original 3 laws. David Langford offered:
1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
Or, J L Patterson’s additions:
Or Terry Pratchett’s robot, which explains that it is allowed to take some action against humans, citing the ‘Eleventh Law of Robotics, Clause C, As Amended’.
Even Asimov poked fun at his own laws (I’ve amended these slightly so they apply to Globalbot Inc., and I added law 4):
Finally, I come to my own laws of robotics, which I think are the most accurate yet defined.
Thinkbot’s Laws of Robotics
Thinkbot Home Robot Index