Robot Ethics and the “I, Robot” Dilemmas

(Updated 4/23/2014)

I, Robot

Book of short stories by Isaac Asimov

(c) 1950

Movie “based” on book

released in 2004

Book much better!

My opinion –

your mileage may vary


Stories are based on…

The “three laws of robotics”

What are they?


The 3 Laws:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
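Viewed as an algorithm, the Laws are a strict priority ordering: a higher Law always trumps a lower one. A minimal sketch of that idea (all action names and scores are hypothetical, not from Asimov):

```python
# Toy sketch of the Three Laws as a lexicographic (strict-priority) choice:
# harm to humans is weighed first, obedience second, self-preservation last.

def choose_action(candidates):
    """Pick the candidate action that best satisfies the Laws in order:
    1) minimize harm to humans, 2) maximize obedience, 3) maximize
    self-preservation. Python compares the key tuples lexicographically,
    so a higher Law always outweighs a lower one."""
    return min(
        candidates,
        key=lambda a: (a["harms_humans"], -a["obeys_order"], -a["self_preserving"]),
    )

actions = [
    {"name": "obey, harming a human",   "harms_humans": 1, "obeys_order": 1, "self_preserving": 1},
    {"name": "refuse order, stay safe", "harms_humans": 0, "obeys_order": 0, "self_preserving": 1},
    {"name": "obey at risk to self",    "harms_humans": 0, "obeys_order": 1, "self_preserving": 0},
]

print(choose_action(actions)["name"])  # -> "obey at risk to self"
```

Under this ordering the robot obeys even at cost to itself, since the Second Law outranks the Third; that hierarchy is exactly what the stories stress-test.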


Theme of stories is…

The 3 laws conflict

They form an ethical code with “problems”

Can anyone think of how they might conflict?


Let’s look at the story plots

…because each story is based on a conflict

Warning:

I am ignoring reading pleasure –

go read it yourself!


“Robbie”

Robbie is a nannybot

When the parents send Robbie away…

Little Gloria is heartbroken

What should Robbie do?


“Runaround”

Setting: Mining colony on Mercury

“Speedy” is an expensive new kind of robot

He is casually requested to fetch liquid selenium from a lake

Does not return…what happened?

He’s circling the lake, acting “drunk”

The problem:

Selenium is dangerous to him

3rd law is strong because he’s expensive

2nd law is weak because order was so casual

So he’s stuck at a distance where they balance
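The stalemate can be pictured with a toy potential model (numbers entirely hypothetical): the casual order exerts a weak, constant pull toward the pool, while the strengthened Third Law pushes back harder the closer Speedy gets, so he settles where the two cancel.

```python
# Toy model of Speedy's equilibrium in "Runaround" (hypothetical numbers):
# a weak Second Law "pull" toward the selenium pool vs. a strengthened
# Third Law "push" away from the danger zone.

def net_drive(distance, order_strength=1.0, danger_strength=4.0):
    """Positive -> advance toward the pool, negative -> retreat.
    The casual order's pull is constant; the danger's push fades
    with the square of the distance from the pool."""
    pull = order_strength
    push = danger_strength / distance ** 2
    return pull - push

print(net_drive(1.0))  # -3.0: too close, Third Law wins, Speedy retreats
print(net_drive(3.0))  # ~0.56: too far, Second Law wins, Speedy advances
print(net_drive(2.0))  #  0.0: the laws balance, so Speedy circles here
```

Raising `order_strength` would pull the balance point closer to the pool; in the story, Powell instead breaks the deadlock by endangering himself, so the First Law overrides everything else.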

Can a law be “strengthened” or “weakened”?

Is this conflict really a possibility?

What should Speedy do?

What should the colonists do?


“Reason”

Setting: Space station beaming energy to Earth

QT1 (“Cutie”) is a new, advanced AI robot

QT1 concludes that Earth, stars…do not exist

“I, myself, exist, because I think”

(Recall Descartes’ famous conclusion)

QT1 decides humans are inferior

…and that nothing outside the spaceship actually exists

Problem:

QT1 is responsible for aiming the energy beam

One mistake could fry a city

The humans on the ship are in a frenzy

What happens?

Paradox:

Human commands robot to believe other humans don’t exist

Robot must obey due to 2nd law

Human then commands robot to harm “fake,” “nonexistent” humans

1st law is now inoperative, so the robot must obey

Is it even possible to command a belief?


“Catch That Rabbit”

Robot DV-5 (“Dave”)

It controls six subsidiary robots by radio

But when emergency strikes, the remote bots just “dance”

Why?


Resolution:

Dave gets confused by too much complexity

During emergencies all 6 robots need detailed instruction simultaneously

Solution: deactivate one remote robot

Now there is less complexity

So they shoot one robot to get rescued


“Liar!”

Robot RB-34 (“Herbie”)

Story has first known occurrence of term “robotics”

Poor Herbie has a manufacturing defect


He’s telepathic…

(…ok, let’s allow some artistic license here)

What to do when telling the truth hurts a human?


So Herbie lies whenever the truth would hurt!

What could happen?

What should Herbie do?


In the story –

Herbie is told of the problem

He freezes up permanently

…seeing no way out

Time for a new robot


“Escape”

A new hypersmart AI designs a hyperspatial drive

The crew takes off

But…no showers, beds, or any food besides beans and milk

What’s the problem?

The AI is off-kilter because, during the hyperspace jump, the crew briefly ceases to exist

Problem: AI thinks that conflicts with 1st law

What’s the solution?


“Evidence”

Byerley survives a wreck

Later, runs for office

Opponent Quinn accuses him of being a robot

…made to look like Byerley

How can Byerley prove he’s not a robot?

Office holders must be human!

(Is that a good rule?)

He eats an apple

Proof?

He has a right not to be x-rayed, etc.

What can he do to prove humanness and win the election?


A heckler runs onto the stage during a speech

Demands Byerley hit him

(What would that prove?)

Byerley does!

How could Byerley do that *if* he were a robot?

Would a robot be a good leader?

Note: the story never reveals whether or not he is a robot


“The Evitable Conflict”

Byerley is now World Co-ordinator

Robots/AIs control many decisions

But some decisions are harming some humans!

Why?


Robots are interpreting the 1st law as “humanity” shall not come to harm

This would seem to require occasionally harming individuals
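The shift can be seen as moving from a hard constraint on individual harm to minimizing aggregate harm. A toy contrast (policy names and harm scores entirely made up):

```python
# Toy contrast between the literal First Law (no individual may be harmed)
# and the Machines' reinterpretation (minimize harm to humanity overall).

policies = [
    {"name": "intervene in nothing",   "individuals_harmed": 0, "harm_to_humanity": 100},
    {"name": "quietly sideline a few", "individuals_harmed": 3, "harm_to_humanity": 5},
]

# Literal reading: only policies harming no individual are permissible.
literal_ok = [p for p in policies if p["individuals_harmed"] == 0]

# Reinterpreted reading: pick whatever minimizes harm to humanity as a whole.
humanity_pick = min(policies, key=lambda p: p["harm_to_humanity"])

print(literal_ok[0]["name"])    # "intervene in nothing"
print(humanity_pick["name"])    # "quietly sideline a few"
```

The two readings select different policies; that divergence is the story's central tension, and it anticipates the "Zeroth Law" Asimov made explicit in later novels.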

What should the AIs actually do?

The robots are in control

Should they be removed?

Still never resolved:

Whether Byerley is a robot or a human

5-minute class presentation signup form

Please indicate your first, second, and third choices of day for your class presentation, and hand in this form. Presentations to the class are 5 minutes each; an estimated 8 people fit per class period without running over.

Your name:___________________________________

Available presentation days:

Tu 4/16/13:

Th 4/18/13:

Tu 4/23/13:

Th 4/25/13:

Tu 4/30/13: Ali, Jacob D., who else?

Th 5/2/13 (last day of this class): Elizabeth, Chase, who else?

Th 5/9/13, 1:30-3:30 (final exam period, no final, projects due, remaining class presentations):

 

“Last Lecture” Discussion Questions

Please discuss the video with your group, filling out this form as you discuss. Hand it in at the end of class.
Your name:_____________________________
1. The video was intended as life advice to whom?
2. List the advice items that your group can recall below. (Many, but not all, are related to ethics.) For each, note whether you agree or not.