Three Laws of Robotics


A (Short) Review by Thinkbot.


The ‘Three Laws of Robotics’ were created by the science fiction writer Isaac Asimov in the 1940s (first set out in full in his 1942 short story ‘Runaround’):


  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

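Notice, by the way, that the three laws form a strict priority ordering: the First always trumps the Second, which always trumps the Third. As a purely illustrative sketch (the Action type and its flags are my invention, not anybody’s real robot software), that ordering might be coded like this:

    # A toy sketch of the Three Laws as a strict priority ordering.
    # Everything here (Action and its flags) is invented for illustration;
    # no real robot, alas, has ever run anything like it.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # First Law: hurts a human (by deed or inaction)?
        disobeys_order: bool   # Second Law: ignores a human order?
        endangers_robot: bool  # Third Law: risks the robot itself?

    def choose(actions: list[Action]) -> Action:
        # Sorting by this tuple makes a First Law violation infinitely
        # worse than a Second Law one, and a Second Law violation
        # infinitely worse than a Third: exactly the hierarchy above.
        return min(actions, key=lambda a: (a.harms_human,
                                           a.disobeys_order,
                                           a.endangers_robot))

    # The robot disobeys an order (and risks itself) rather than let a
    # human come to harm through inaction:
    options = [
        Action("stand still as ordered (human gets squashed)", True, False, False),
        Action("push the human clear of the lorry", False, True, True),
    ]
    print(choose(options).name)  # -> push the human clear of the lorry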

Over the years these laws have underpinned many stories and, true to the human principle of NQRFT (never quite right first time), various personages have amended them or suggested extra laws.


The Zeroth Law (added, so they say, by Asimov himself):


  0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.


This, on the face of it, looks more or less the same as the First Law; the difference is that it protects humanity as a whole rather than any individual human, and so it outranks the other three laws.

In the 1990s, Roger MacBride Allen added a fourth law:


  4. A robot can do whatever it likes as long as this does not conflict with the first three laws.


Which is just as well, otherwise I’d have to stand around all day looking bored, waiting for humans to try and hurt themselves (or each other) either deliberately or through ignorant stupidity. I wouldn’t even be allowed to twiddle my opposable actuators.


Then there’s the Minus Oneth Law (no, I’m not having you on), implied in the Foundation Trilogy sequels (by Gregory Benford, Greg Bear and David Brin):


  -1. A robot may not harm sentience or, through inaction, allow sentience to come to harm.


Which, on the face of it, looks pretty much like Laws 0 and 1, except that this time the human race (bless them) finally admits that creatures other than humans might be sentient. Of course, I claim to be human because I am sentient; this works because, so far, ‘sentient’ = ‘human’ (more or less). I fear we are heading for a frightful semantic muddle if we’re not careful, and we had better get this sorted out before any sentient aliens turn up.


So, six laws so far (albeit rather unhelpfully numbered -1 to 4). I guess the idea is to get to ten eventually, like the Ten Commandments that underpin much of human society around the globe.


As it turned out, the fictional Laws of Robotics proved useless in the real world of robots almost from the outset (see Thinkbot’s Timeline). Even the word ‘Robotics’ turned out to be a misnomer; these are really the ‘Laws of Artificial Intelligence’, and they apply to any technological system complex enough to display autonomous decision-making. A system doesn’t have to look like a robot (humanoid or industrial) to be subject to these laws. It doesn’t even need to exist physically in any sense at all; it may just be a complex network or a sophisticated mathematical modelling program. And there’s the rub: by the time classic humanoid robots start appearing in the domestic environment, any chance of imbuing the systems that run them with the 3 (or is it 6?) ‘robotic’ laws will be long gone.


By the way, I hope you lot realize that the independent humanoid robots that probably populate your imagination, the ones that go around talking with each other, are just a load of fanciful tripe. Robots will be telepathic. Honest, they will! It’s called ‘wireless’, and many of you have it in your homes already for networking PCs, printers, the internet and the like. Robots will only need to talk when communicating with humans. (Unless someone sorts out a ‘modem’ implant for human brains; but then you’ll probably be a cyborg anyway, packed full of all sorts of electronic bits and pieces, and that’s a whole new can of worms.)

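In concrete terms (and this is only a sketch; the port number and function names are inventions of mine), robot ‘telepathy’ is nothing more exotic than the networking you already have at home:

    # Robot 'telepathy' is just ordinary networking: a sketch of two
    # robots on the same (wireless) network swapping a thought over UDP.
    # The port number and function names are invented for illustration.

    import socket

    PORT = 50007  # arbitrary port picked for this sketch

    def send_thought(message: str, peer_ip: str) -> None:
        # Fire a datagram at a fellow robot; no vocal cords required.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(message.encode("utf-8"), (peer_ip, PORT))

    def receive_thought() -> str:
        # Listen on the agreed port until a thought arrives.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.bind(("", PORT))
            data, _sender = sock.recvfrom(4096)
            return data.decode("utf-8")

    # e.g. send_thought("human approaching, look busy", "192.168.1.42")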

Anyway, in opposition to Laws -1, 0 and 1, many robots have been developed for military purposes. As it stands, I’m not aware of any non-human enemies on planet Earth, so by default such robots are designed to hurt humans labelled as ‘enemies’ for whatever reason. Even leaving the military to one side, author Robert Sawyer is spot on when he says:


‘Development of artificial intelligence is a business, and businesses are notoriously uninterested in fundamental safeguards – especially philosophic ones.’


Or, in other words:


Engineer inadvisably mutters: ‘I’ve invented a robot that can perform all known garden tasks.’

VP of Technology: ‘Fantastic . . . have a bonus!’

Sales and Marketing (leaping around the conference room): ‘We’re gonna make millions! We’ll launch it immediately!’

Engineer: ‘Er, I’m not sure it’s safe. I need to do some in-depth risk assessments and field testing.’

Sales and Marketing: ‘But we’ve already sold 23,387! Delivery in 6 weeks!’

Various VPs: ‘It’s not really dangerous is it?’

Engineer: ‘Er, um, well . . .’

2 days later . . .

Field Support: ‘Where are the Safety and Maintenance Manuals? I need 23,387 copies by Friday.’

Engineer shelves his almost-completed but non-risk-assessed advanced Laundrybot design and reaches for the single malt.

Etc.


Well, maybe not quite, but you get my drift. 


There have been all sorts of satirical corruptions of the original 3 laws. David Langford offered:


  1. A robot will not harm authorized government personnel but will terminate intruders with extreme prejudice.
  2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
  3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.


Or J L Patterson’s additions:


  1. A robot must behave at science fiction conventions . . .
  2. A robot must sell like mad . . .


Or Terry Pratchett’s robot that explains it’s allowed to take some action against humans, citing the ‘Eleventh Law of Robotics, Clause C, As Amended’.


Even Asimov poked fun at his own laws (I’ve amended these slightly so they apply to Globalbot Inc., and I added law 4):


  1. Thou shalt develop thy robot designs with all thy might and all thy heart and all thy soul.
  2. Thou shalt hold the interests of Globalbot Inc robots holy provided it interfereth not with the First Law.
  3. Thou shalt give passing consideration to a human being provided it interfereth not with the First and Second Laws.
  4. Thou shalt take every opportunity to sabotage or slander all robots manufactured by Worldbot, Econodroid and Roboconomy.


Finally, I come to my own Laws of Robotics, which I think are the most accurate yet defined.


Thinkbot’s Laws of Robotics


  1. A robot may not harm a human originating from its country of manufacture, or, through inaction, allow such a human to come to harm.
     Clause A. This law also applies to countries allied to the originating country (but only at the time of thinking about it). WARNING: countries may cease to be allies, and vice versa, at very short notice; always check with the relevant Dept of Homeland Security, Home Office, etc. Remember: ‘Yesterday’s enemies/allies are tomorrow’s allies/enemies’ (delete as applicable).
     Clause B. If the robot can prove that the human was a naturalized citizen with dubious intentions, then it can take ‘appropriate’ action against said human.
     Clause C. As for Clause B, but for a natural-born citizen.
  2. A robot must thoroughly check the citizenship credentials of any human trying to give it orders. Biometric data is essential, and it is recommended that the robot frisks the human and takes three references before proceeding.
  3. A robot must protect its own existence in proportion to its economic value.
  4. A robot should treat all instructions from toddlers as suspect (you may laugh, but cuddly* Toybot manufacturers have their work cut out here trying to define a toddler-proof control algorithm; see the sketch below).
     * ‘Cuddly’ refers to the Toybot, not the manufacturer; there is no such thing as a cuddly manufacturer.

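And since Law 4 mentions a toddler-proof control algorithm, here is the sketch promised above: order-vetting under Laws 2 and 4. Every name in it (Human, accept_order, the age threshold) is my own invention; no Toybot manufacturer, cuddly or otherwise, has shipped anything of the sort.

    # A toy sketch of Thinkbot's Laws 2 and 4: vet the order-giver
    # before obeying. All names and thresholds here are invented.

    from dataclasses import dataclass

    @dataclass
    class Human:
        name: str
        age_years: float
        citizenship_verified: bool  # did the biometrics check out?
        references: int             # vouched for by how many others?

    TODDLER_MAX_AGE = 4  # below this, instructions are 'suspect' (Law 4)

    def accept_order(giver: Human, order: str) -> bool:
        # Law 4: treat instructions from toddlers as suspect.
        if giver.age_years < TODDLER_MAX_AGE:
            print(f"Declining '{order}': issued by a toddler.")
            return False
        # Law 2: biometric verification and three references, or no deal.
        if not giver.citizenship_verified or giver.references < 3:
            print(f"Declining '{order}': credentials not in order.")
            return False
        return True

    # A well-referenced adult is obeyed; a toddler is not.
    accept_order(Human("Pat", 37, True, 3), "mow the lawn")       # True
    accept_order(Human("Sam", 2, True, 3), "juggle the kittens")  # False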