I don’t believe we can ever tell that. Advancement works that way. Once we reach an apparent goal, someone will always be there to say that there is something more we can do and then we will move forward again.
Are the laws sufficient? Are there any cases that these laws do not cover? Could they lead to undesired consequences?
The basic idea of Asimov’s three robot laws is that robots must protect humans from harm, and robots may protect themselves only as long as that protection does not harm humans.
In terms of language, these three laws are about as simple as laws get: no jargon, no complicated words, all plain language. In most cases, simplest means best, and the same seems to apply here.
For now, it would probably be safe to say that these laws are mostly sufficient and that there aren’t many cases they cannot cover. One thing to note, though, is that they say nothing about decision points: how does a robot decide which human to save when the First Law applies but several humans are about to come to harm at once?
In any case, should we reach the point where robots have enough function and intelligence to interact well with humans, what will matter more than these laws themselves is their implementation.
A lot of things can go wrong and, most of the time, laws aren’t enough. Even humans cannot deal with the myriad laws put over our heads and many people have the job of finding ways around them.
How much we want to protect each other will matter more than how much robots will want to protect us.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
– From Isaac Asimov’s “Runaround” (1942)
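The strict priority ordering built into the laws (each law yields to the ones above it) can be sketched as a toy filter over candidate actions. This is a purely illustrative Python sketch, not a real robotics system; the boolean predicates (`harms_human`, `disobeys_order`, `endangers_self`) are hypothetical stand-ins for judgments that would be enormously hard to compute in practice:

```python
def permitted(action):
    """Return True if `action` is allowed under the Three Laws.

    `action` is a dict with illustrative boolean fields:
      harms_human    - the action injures a human (or, by inaction, allows harm)
      disobeys_order - the action violates an order from a human
      endangers_self - the action risks the robot's own existence
    """
    if action["harms_human"]:
        return False              # First Law: absolute veto
    if action["disobeys_order"]:
        return False              # Second Law: yields only to the First
    # Third Law: self-preservation ranks last; an action that merely
    # endangers the robot is still permitted if the laws above allow it.
    return True


def choose(actions):
    """Pick a permitted action, preferring not to endanger the robot."""
    safe = [a for a in actions if permitted(a)]
    # Among permitted actions, self-preservation breaks ties (Third Law).
    safe.sort(key=lambda a: a["endangers_self"])
    return safe[0] if safe else None
```

Even this toy version exposes the gap noted above: when every candidate action harms *some* human, `choose` simply returns `None`, it has no way to rank which human to save.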
Speaking of “almost too human”: in Cory Doctorow’s Down and Out in the Magic Kingdom, people no longer have to die. If something fatal happens to a person, all that needs to be done is to grow a clone and then “restore from backup”. In a way, this makes people essentially just data. Does that make them not really people anymore, but just robots?
Google found me this when I was looking for Baymax. Nice bonus!
Image Source: Hiccup & Toothless vs Hiro & Baymax
Who’s your favorite robot?
I do not know very many robots, fictional or real, so I can’t say much. I would have said WALL-E, and the way he tries to save humanity, but then I remembered Baymax. It is almost scary how Baymax acts almost too human, but that is also something to look forward to.
Image Source: Baymax and Hiro
Inevitably, we have developed a need for automation, despite the fact that humanity was able to live without it for most of history.
Today, much of the need for robots arises from the fact that humans have this need to learn more. And, in many cases, humanity wants to learn more about places we cannot yet explore on our own, e.g. the inside of a volcano or a hurricane, the surface of Mars, deep under the oceans, or even the inside of a person. So we send in robots for as long as it remains too expensive and/or too dangerous to go ourselves.
Movies involving humans and aliens, or humans and animals, always make me think that humans don’t consider each other very smart. The other side usually gets the better of us. Case in point: Ratatouille. The rats could understand spoken human language, but all the humans could get from the rats was squeaking. When it’s aliens, the aliens always have the more advanced tech, and they always get to be the ones coming in for a visit while humans stay stuck on one planet. Except maybe in Planet 51.
Whatever artificial intelligence can be built, it will be a reflection of humanity itself. Whatever a computer becomes, it is always just whatever its creators (and teachers) make it into. Whether artificial intelligence will be concerning, dangerous, or terrifying even, it will be up to us.
For instance, there was that tweetbot (Microsoft’s Tay) that turned rogue, tweeting racist and otherwise offensive comments. We can say that this behavior was not what its creators intended. Nevertheless, it turned out that way because of its teachers: the people who interacted with it.
It’s not just AI, though; the same is true for humans: we could teach each other peace, or we could teach each other war. Human history suggests we’ve mostly chosen the latter.
Well, given where we’re at, AI seems pretty concerning.