Robot Ethics: The Three Laws

Updated 11/6/2017 

Ethical Quandaries of Robotics

Three Laws of Robotics

Have you heard about the “Three Laws of Robotics”?

 What are they?

 Why do they exist?

 Are they an ethical code?


The 3 Laws:

1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
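One way to see what the Laws actually say: they form a strict priority ordering over a robot’s candidate actions. Here is a minimal sketch in Python (my own framing with hypothetical names, not anything from Asimov):

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool       # would this action injure a human?
        prevents_harm: bool     # does it stop a human from being harmed?
        obeys_order: bool       # was it ordered by a human?
        self_destructive: bool  # does it endanger the robot itself?

    def permitted(a: Action, human_in_danger: bool) -> bool:
        """Law 1 is absolute: either of its clauses vetoes an action."""
        if a.harms_human:
            return False        # "may not injure a human being"
        if human_in_danger and not a.prevents_harm:
            return False        # "through inaction, allow ... harm"
        return True

    def choose(actions, human_in_danger: bool):
        """Among Law-1-permitted actions, prefer obedience (Law 2),
        then self-preservation (Law 3)."""
        legal = [a for a in actions if permitted(a, human_in_danger)]
        return max(legal,
                   key=lambda a: (a.obeys_order, not a.self_destructive),
                   default=None)

    acts = [Action("fetch tool",   False, False, True,  False),
            Action("shield human", False, True,  False, True)]
    print(choose(acts, human_in_danger=True).name)   # shield human

Note how Law 1 silently outranks everything: the robot ignores its order and sacrifices itself the moment a human is in danger.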

 

Are they consequentialist, deontological, or based on virtue ethics?

Do they work for

    Driverless cars?

    3D printers?

    Intelligent chatbots?

        (e.g., a combination of Siri and Watson)

          The very first chatbot was ELIZA

Which of those are robots, anyway?

What would happen if these devices were required by law to obey the 3 Laws?

 

I, Robot

Book of short stories by Isaac Asimov

© 1950

Movie “based” on the book was released in 2004

Book much better!

(my opinion, your mileage may vary)

[Cover images: just a couple of the many over the years appeared here]

Stories are based on…

The “three laws of robotics”


 

Theme of stories is…

The 3 laws conflict

They form an ethical code with “problems”

Can anyone think of how they might conflict?


Let’s look at some story plots

…because each story is based on a conflict

Warning: these summaries skip the reading pleasure – go read the stories yourself!


“Robbie”

Robbie is a nannybot

When the parents send Robbie away…

Little Gloria is heartbroken

What should Robbie do?


“Runaround”

Setting: Mining colony on Mercury

Speedy is the robot

Someone casually orders him to fetch liquid selenium from a pool

He does not return…what happened?

He’s circling the pool, acting “drunk”

The problem:

Selenium is dangerous to him

His 3rd Law is strengthened because he’s expensive

His 2nd Law is weakened because the order was so casual

So he’s stuck circling at the distance where the two laws balance (see the toy model below)
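Here is a toy model of that balance in Python (the numbers and the shapes of the functions are my invention, not Asimov’s): the casual order is a weak, constant pull toward the pool, and the danger is a push that grows as Speedy gets closer.

    def law2_pull(order_strength: float) -> float:
        """A casual order => a small, constant attraction toward the pool."""
        return order_strength

    def law3_push(distance_km: float, self_worth: float) -> float:
        """Perceived danger grows near the selenium, scaled by his cost."""
        return self_worth / distance_km ** 2

    # Where push == pull, Speedy can neither advance nor retreat:
    # self_worth / d**2 == order_strength  =>  d = sqrt(self_worth / order_strength)
    for d in (4.0, 5.0, 6.0):
        pull, push = law2_pull(1.0), law3_push(d, 25.0)
        verdict = ("advance" if pull > push
                   else "retreat" if push > pull
                   else "circle")
        print(d, "km:", verdict)   # prints: retreat, circle, advance

With these made-up numbers the balance point is 5 km, so Speedy endlessly circles the pool there, looking “drunk”.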

Can an ethical rule be “strengthened” or “weakened”?

Is this conflict really a possibility?

What should Speedy do?

What should the colonists do?


Consider intelligent driverless cars

Law 2 has a problem – you don’t want just anyone ordering your car around (a possible amendment is sketched below)

Law 3: these cars are expensive – maybe self-preservation should be a higher priority?
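One possible amendment (my own sketch, not a standard proposal): restrict Law 2 to an owner-maintained authorization list, while the Law 1 duties still apply to everyone.

    # Hypothetical names throughout; a sketch, not any real car's API.
    AUTHORIZED = {"owner", "police_officer"}

    def accept_order(issuer: str, order: str, endangers_human: bool) -> bool:
        """Amended Law 2 for a driverless car: obey listed humans only."""
        if endangers_human:
            return False                # Law 1 still outranks everything
        return issuer in AUTHORIZED     # not just any human may command

    print(accept_order("owner", "drive home", endangers_human=False))       # True
    print(accept_order("stranger", "unlock doors", endangers_human=False))  # False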


Consider intelligent chatbots

Is Law 3 even an issue at all?

About Law 1: a chatbot can never be sure it won’t say something wrong and harmful

So maybe a Law-1-compliant chatbot would have to stay silent! (see the sketch below)
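A minimal sketch of that argument in Python (hypothetical functions; the harm estimator is a stand-in, since no real model can certify zero risk):

    def estimated_harm(reply: str) -> float:
        """Stand-in harm estimator: any nontrivial reply carries some risk."""
        return 0.01 if reply else 0.0

    def law1_chatbot(prompt: str) -> str:
        candidate = "Here is my answer to: " + prompt
        if estimated_harm(candidate) > 0.0:   # cannot rule harm out...
            return ""                         # ...so stay silent
        return candidate

    print(repr(law1_chatbot("Should I invest my savings?")))   # ''

Since the bot can never prove a reply is harmless, a strict reading of Law 1 leaves it mute.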

Consider the following story:

“Liar!”

Robot RB-34 (“Herbie”)

The story contains the first known occurrence of the term “robotics”

Poor Herbie has a manufacturing defect


He’s telepathic…

(…ok, let’s allow some artistic license here)

What to do when telling the truth hurts a human?


So Herbie lies whenever the truth would hurt!

What could happen?

What should Herbie do?


In the story –

Herbie is told of the problem

He freezes up permanently

…seeing no way out

Time for a new robot
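Herbie’s freeze is easy to state in code (my framing; the names are hypothetical): once he learns that telling the truth AND lying both cause harm, Law 1 forbids every available action, and “no permitted action” is a crash.

    def harm_if_truth() -> bool:
        return True    # the truth will crush the questioner

    def harm_if_lie() -> bool:
        return True    # the lie, once exposed, hurts them too

    def herbie_speak() -> str:
        options = {"tell truth": harm_if_truth(), "lie": harm_if_lie()}
        legal = [act for act, harms in options.items() if not harms]
        if not legal:
            raise RuntimeError("every option violates the First Law: freeze")
        return legal[0]

    try:
        herbie_speak()
    except RuntimeError as err:
        print(err)   # every option violates the First Law: freeze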


This is a good point to break into groups and try to write an ethical code for intelligent driverless cars or chatbots…


“Reason”

 Setting:

Space station beaming energy to Earth

QT1 (“Cutie”) is a new, advanced AI robot

QT1 decides that Earth, stars…do not exist

“I, myself, exist, because I think”

QT1 decides humans are inferior

Problem:

QT1 is responsible for aiming the beam

One mistake could fry a city

The humans on the station are in a frenzy

What happens?


First, he locks the humans out of the control room

Then, he keeps the beam on track

Humans are not as good at aiming the beam, and as QT1 puts it:

“I merely kept all dials at equilibrium in accordance with the will of the Master”


“Catch That Rabbit”

Robot DV-5 (“Dave”)

It controls several remote bots by RF

But the remote bots just “dance”

When humans observe, they work again

(This kind of bug is now called a “Heisenbug” – it disappears when you try to observe it)

Why?


Resolution:

Dave gets confused by too much complexity

Human observers reduce the complexity

Solution: deactivate one remote robot

Now there is less complexity


“Escape”

A new hypersmart AI designs a hyperspatial drive

The crew takes off

But…no showers, beds, or any food besides beans and milk

What’s the problem?

The AI is off-kilter because, during the hyperspace jump, the crew briefly ceases to exist

Problem: the AI thinks that conflicts with the 1st Law

What’s the solution?


“Evidence”

Byerly survives a wreck

Later, runs for office

Opponent Quinn accuses him of being a robot

…made to look like Byerly

How can Byerly prove he’s not a robot?

Office holders must be human!

(Is that a good rule?)

He eats an apple

Proof?

He has a right not to be X-rayed, etc.

What can he do to prove humanness and win the election?


A heckler runs onto the stage during a speech

Demands Byerly hit him

(What would that prove?)

Byerly does!

How could Byerly do that *if* he were a robot?

Would a robot be a good leader?

Note: the story never says whether he is a robot or not


“The Evitable Conflict”

Byerly is now World Co-ordinator

Robots/AIs control many decisions

But some decisions are harming some humans!

Why?


The robots are interpreting the 1st Law as: “humanity” shall not come to harm

This would seem to require occasionally harming individuals (contrast the two readings below)
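Here is the contrast in Python (toy harm scores of my own invention). Under the individual reading, any harmed person vetoes a policy; under the humanity reading, the machines minimize total harm, which can sanction hurting a few.

    policies = {
        "do nothing":       [5, 5, 5],   # harm score for each of 3 people
        "optimize economy": [0, 0, 9],   # two thrive, one is ruined
    }

    def individual_reading(harms) -> bool:
        """Per-person Law 1: forbidden if ANY human is harmed."""
        return all(h == 0 for h in harms)

    def humanity_reading(options) -> str:
        """Law 1 for mankind: pick the policy with the least total harm."""
        return min(options, key=lambda p: sum(options[p]))

    print([p for p in policies if individual_reading(policies[p])])
    # [] -- every option harms someone, so nothing is permitted
    print(humanity_reading(policies))
    # optimize economy -- total harm 9 beats 15, but one person pays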

What should the AIs actually do?

The robots are in control

Should they be removed?

Still never resolved:

Whether Byerly is a robot or a human

 

 

 
