On Asimov’s Robot Laws

Are the laws sufficient? Are there any cases that these laws do not cover? Could they lead to undesired consequences?

The basic idea of Asimov’s three robot laws is that robots must protect humans from harm, and may protect themselves only so long as that self-protection does not harm humans.

In terms of language, these three laws are about as simple as laws get: no jargon, no complicated words, all plain language. In most cases, simplest means best, and that seems to apply here.

For now, it is probably safe to say that these laws are mostly sufficient and that there are few cases they cannot cover. One thing to note, though, is that they say nothing about decision points: how does a robot decide which human to save when it must uphold the First Law and several humans are about to come to harm at once?

In any case, should we reach the point where robots are functional and intelligent enough to interact well with humans, what will matter more than the laws themselves is their implementation.
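To make that concrete, here is a minimal, purely hypothetical sketch of how the three laws might be encoded as a strict priority check. Every name in it (Action, permitted, harms_human, and so on) is invented for illustration, not taken from any real system; the point is that the laws themselves give no rule for the decision point above, so any tie-breaking between several endangered humans has to come from the implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    """A hypothetical candidate action a robot could take."""
    name: str
    harms_human: bool           # would this action injure a human?
    humans_left_in_danger: int  # humans harmed through inaction if this is chosen
    ordered_by_human: bool      # does this action follow a human order?
    destroys_self: bool         # would the robot be destroyed?

def permitted(action: Action, alternatives: List[Action]) -> bool:
    """Check an action against the three laws in strict priority order."""
    # First Law: may not injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm:
    # forbidden if some harmless alternative would leave fewer humans in danger.
    if any(a.humans_left_in_danger < action.humans_left_in_danger
           and not a.harms_human for a in alternatives):
        return False
    # The Second and Third Laws only constrain choices the First Law allows;
    # a fuller sketch would rank obedience above self-preservation here.
    return True

def choose(actions: List[Action]) -> Action:
    """Pick some permitted action.

    Note the gap the essay points out: if two permitted actions each save
    a different human, the laws give no tie-breaking rule, so the choice
    below is essentially arbitrary.
    """
    allowed = [a for a in actions if permitted(a, actions)]
    return allowed[0] if allowed else min(
        actions, key=lambda a: a.humans_left_in_danger)
```

Even in this toy form, all the hard questions (how harm is estimated, how two lives are compared) live entirely inside the invented predicates, which is the sense in which the implementation matters more than the wording of the laws.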

A lot can go wrong, and most of the time laws aren’t enough. Even we humans cannot cope with the myriad laws placed over our heads, and many people make a living finding ways around them.

How much we want to protect each other will matter more than how much robots will want to protect us.


The Three Robot Laws

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

– From Isaac Asimov’s Runaround (1942)
