On Asimov’s Robot Laws

Are the laws sufficient? Are there any cases that these laws do not cover? Could they lead to undesired consequences?

The basic idea of Asimov's three robot laws is that a robot must not harm humans (or allow them to come to harm through inaction), must obey human orders unless doing so would cause such harm, and may protect itself only so long as that self-protection conflicts with neither of the first two laws.
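
The one structural feature worth making explicit is that the laws form a strict priority ordering. Here is a minimal sketch of that ordering in Python, assuming each candidate action carries one boolean "violation" flag per law; every name in it (Action, choose_action, the flags) is a hypothetical illustration, since Asimov never specified an implementation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    endangers_self: bool   # violates the Third Law

def choose_action(candidates: list[Action]) -> Action:
    # Sorting on a tuple of violation flags encodes the hierarchy:
    # any First Law violation outweighs every Second Law violation,
    # and so on down. In Python, False < True, so "no violation" wins.
    return min(
        candidates,
        key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self),
    )

standing_back = Action("stand back", harms_human=True,  # inaction lets harm occur
                       disobeys_order=False, endangers_self=False)
shielding = Action("shield the human", harms_human=False,
                   disobeys_order=False, endangers_self=True)
print(choose_action([standing_back, shielding]).name)  # -> shield the human
```

Notice that the ordering alone already forces the robot to sacrifice itself rather than permit harm; no extra rule is needed for that case.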

In terms of language, these three laws are about as simple as laws get: no jargon, no complicated words, all plain language. In most cases, simplest means best, and that seems to hold here.

For now, it seems safe to say that these laws are mostly sufficient, and there are not many cases they fail to cover. One notable gap, though, is decision points: the laws say nothing about how a robot should choose which human to save when the First Law demands action but several humans are about to come to harm at once.
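
To make the gap concrete, here is a hypothetical sketch. The First Law forbids allowing any of the endangered humans to come to harm, so when only one rescue is possible every option violates it equally; whatever key function we plug in below is our own policy, not something the laws supply. All names (Person, pick_rescue, the fields) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    survival_chance: float  # estimated odds that a rescue attempt succeeds

def pick_rescue(endangered: list[Person]) -> Person:
    # The First Law says "do not allow a human to come to harm" but is
    # silent on ordering when only one rescue is possible. This key
    # function is where the real moral decision hides; preferring the
    # best expected outcome is one arbitrary policy among many.
    return max(endangered, key=lambda p: p.survival_chance)

at_risk = [Person("A", survival_chance=0.9), Person("B", survival_chance=0.4)]
print(pick_rescue(at_risk).name)  # -> A, but only under this chosen policy
```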

In any case, should we reach the point where robots are functional and intelligent enough to interact well with humans, what will matter more than the laws themselves is their implementation.

A lot can go wrong, and most of the time laws alone are not enough. Even humans struggle with the myriad laws placed over our heads, and many people make a living finding ways around them.

How much we want to protect each other will matter more than how much robots will want to protect us.
