Robot lab


Welcome to Spring Semester, 2016!

As you can see, the course website uses a blog platform. Check the tabs above for information on the syllabus, general course information, etc. Occasionally something will be posted here as a blog posting, but usually, the syllabus tab is your best starting point for course activities.

Robot Ethics: The Three Laws

updated 11/19/2015 (note for the future: change this to segue into group discussions about good ethical codes for intelligent driverless cars and intelligent chatbots, e.g. a combination of Siri and Watson)
“I, Robot” – Ethical Quandaries of Robotics

I, Robot
Book of short stories by Isaac Asimov
(c) 1950
Movie “based” on book
released in 2004
Book much better!
(my opinion, your mileage may vary)
Here are some cover images over the years:

[cover images not shown]
Stories are based on…
The “three laws of robotics”
What are they?
The 3 Laws:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
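
Before digging into the conflicts, note that the Laws form a strict priority ordering: the 1st always outranks the 2nd, and the 2nd always outranks the 3rd. Here is a minimal sketch of one way that ordering could be coded (my own invention for discussion – the Action fields and numeric scores are not Asimov’s):

```python
from dataclasses import dataclass

# A strict ("lexicographic") priority: Law 1 outranks Law 2 outranks Law 3.
# All fields and scores are invented for illustration.

@dataclass
class Action:
    name: str
    human_harm: float    # Law 1: harm this action causes (or allows) to humans
    disobedience: float  # Law 2: how far it departs from human orders
    self_risk: float     # Law 3: danger to the robot itself

def choose(actions):
    # Tuple comparison checks Law 1 first, then Law 2, then Law 3,
    # so a lower-priority law can never override a higher one.
    return min(actions, key=lambda a: (a.human_harm, a.disobedience, a.self_risk))

options = [
    Action("obey the order, into danger", human_harm=0, disobedience=0, self_risk=0.9),
    Action("refuse and stay safe",        human_harm=0, disobedience=1, self_risk=0.0),
]
print(choose(options).name)  # obey the order, into danger: Law 2 outranks Law 3
```

Keep that strict ordering in mind – “Runaround”, below, is about what happens when the laws behave more like adjustable weights than absolute ranks.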
Theme of stories is…
The 3 laws conflict
They form an ethical code with “problems”
Can anyone think of how they might conflict?
Consider chatbots
Siri, what are some others?
Are they bound by the 3 laws?
Should they be?
Consider the very first chatbot:
      ELIZA
What would happen if Siri were strictly required to obey the 3 laws?
Let’s look at some story plots
…because each story is based on a conflict
Warning: I am ignoring reading pleasure – go read it yourself!
“Robbie”
Robbie is a nannybot
When the parents send Robbie away…
Little Gloria is heartbroken
What should Robbie do?
“Runaround”
Setting: Mining colony on Mercury
Speedy is the robot
He is ordered to fetch liquid selenium from a lake
Does not return…what happened?
He’s circling the lake, acting “drunk”
The problem:
Selenium is dangerous to him
The 3rd law is strengthened because he’s expensive
The 2nd law is weakened because the order was so casual
So he’s stuck circling at the distance where the two balance
Can a law be “strengthened” or “weakened”?
Is this conflict really a possibility?
What should Speedy do?
What should the colonists do?
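
One way to picture Speedy’s standoff (purely illustrative – the story gives no equations, and all weights and formulas below are invented): treat the weakened 2nd law as a constant “approach” urge, treat the strengthened 3rd law as a “retreat” urge that grows near the danger, and find the distance where they cancel.

```python
import math

def approach_urge(order_weight):
    """2nd-law pull toward the selenium pool (a casual order = a small weight)."""
    return order_weight

def retreat_urge(distance, danger_weight):
    """3rd-law push away from the danger, growing as the distance shrinks."""
    return danger_weight / distance ** 2

def equilibrium_radius(order_weight, danger_weight):
    """Distance where the urges cancel: order_weight == danger_weight / d**2."""
    return math.sqrt(danger_weight / order_weight)

# Casual order (weight 1) vs. expensive robot (weight 100):
r = equilibrium_radius(1.0, 100.0)
print(r)  # 10.0 -- Speedy circles the pool at this distance

# At the equilibrium the urges really do cancel:
assert math.isclose(approach_urge(1.0), retreat_urge(r, 100.0))

# A firm, emphatic order strengthens the 2nd law and shrinks the circle:
print(equilibrium_radius(25.0, 100.0))  # 2.0
```

(In the story the deadlock is finally broken through the 1st law: Powell deliberately puts himself in danger, and saving a human outranks everything else.)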
Consider intelligent driverless cars
Law 2 has a problem – you don’t want just anyone to be able to order your car around
Law 3: those things are expensive; maybe self-preservation should be more of a priority?

Consider intelligent chatbots
Is Law 3 even an issue at all?
About Law 1: a chatbot can never be sure it won’t say something wrong and harmful

Consider the following story:

“Liar!”

Robot RB-34 (“Herbie”)

The story contains the first known occurrence of the term “robotics”

Poor Herbie has a manufacturing defect

He’s telepathic…

(…ok, let’s allow some artistic license here)

What to do when telling the truth hurts a human?


So Herbie is always lying!

What could happen?

What should Herbie do?


In the story –

Herbie is told of the problem

He freezes up permanently

…seeing no way out

Time for a new robot


About here is a good time to break into groups and try to make a good ethical code for intelligent driverless cars or chatbots…

“Reason”
 Setting:
Space station beaming energy to Earth
QT1 (“Cutie”) is a new, advanced AI robot
QT1 concludes that Earth, stars…do not exist
“I, myself, exist, because I think”
QT1 decides humans are inferior
Problem:
QT1 is responsible for aiming the beam
One mistake could fry a city
The humans on the station are in a frenzy
What happens?
“Catch That Rabbit”
Robot DV-5 (“Dave”)
It controls several remote bots by RF
But the remote bots just “dance”
When humans observe, they work again
(In software, a problem like this is called a “Heisenbug” – see the sketch below)
Why?
Resolution:
Dave gets confused by too much complexity
Human observers reduce the complexity
Solution: deactivate one remote robot
Now there is less complexity
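
About that software term: a “Heisenbug” is a real term for a defect whose symptoms change or vanish when you try to observe it, usually because the observation perturbs the timing. A tiny illustrative Python sketch (invented here, unrelated to the story’s details):

```python
import threading

counter = 0

def bump():
    global counter
    for _ in range(100_000):
        tmp = counter        # read ...
        counter = tmp + 1    # ... then write; another thread may run in between

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000, but lost updates can make it smaller. Adding a print()
# inside the loop to "watch" the bug changes the thread timing and may make
# the lost updates disappear -- observing the system perturbs it, just as
# the humans watching Dave changed his sub-robots' behavior.
print(counter)
```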
“Escape”
A new hypersmart AI designs a hyperspatial space drive
The crew takes off
But…no showers, beds, or any food besides beans and milk
What’s the problem?
The AI is off-kilter because, during the hyperspace jump, the crew briefly ceases to exist
Problem: AI thinks that conflicts with 1st law
What’s the solution?
“Evidence”
Byerly survives a wreck
Later, runs for office
Opponent Quinn accuses him of being a robot
…made to look like Byerly
How can Byerly prove he’s not a robot?
Office holders must be human!
(Is that a good rule?)
He eats an apple
Proof?
He has a right not to be x-rayed, etc.
What can he do to prove humanness and win the election?
A heckler runs onto stage during a speech
Demands Byerly hit him
(What would that prove?)
Byerly does!
How could Byerly do that *if* he was a robot?
Would a robot be a good leader?
Note: the story never says if he is or is not a robot
“The Evitable Conflict”
Byerly is now World Co-ordinator
Robots/AIs control many decisions
But some decisions are harming some humans!
Why?
Robots are interpreting the 1st law as “humanity” shall not come to harm
This would seem to require occasionally harming individuals
What should the AIs actually do?
The robots are in control
Should they be removed?
Still never resolved:
Whether Byerly is a robot or a human
 
 
 

Welcome, Fall 2015 Students!

Please see the tabs above for important course information throughout the semester.

Every now and then a new blog post (like this one) will appear in this section of the screen.


Analyzing codes of ethics

updated 10/6/15

A code of ethics is rule(s) for ethical conduct.

Code of ethics = ethical code

A very short ethical code:

[photo of a very short ethical code]

Is this code of ethics . . .

  • Deontologically based?
  • Consequentialism based?
  • Virtue ethics based?
    • Character traits are key
    • Character traits make you ethical or not
      • (Aristotle, Confucius, Hume…)
    • What character trait might work in this example?
  • What might be a workable ethical code here, based on consequentialism?

 

Ethical codes can be influenced by

  • Virtue ethics
    • (e.g. Aristotle, Confucius, Hume)
    • – universally admired values
  • Consequentialism
    • – ethical if result is good, otherwise not
  • Deontology
    • – ethical if rule is followed regardless of result
  • Law or other required rules that are not ethics based
    • – how can that be?

 

Two major theories of law as it relates to ethics

Ref: A. Marmor, “The Nature of Law”, The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/lawphil-nature/

Laws can be legitimate or not (obviously?)

Why is a law legitimate (or not)?

Two theories:

1. “Legal Positivism” theory

Legitimate laws reflect socially defined rules

2. “Natural Law” theory

Laws must meet ethical standards

    St. Augustine: 

      “lex iniusta non est lex”

             Can you guess any words?

(“An unjust law is not a law”)

Each part of an ethical code (and the whole code) can be analyzed

  • Which type of ethics explains it?
  • How could the same thing be coded using a different type?
  • Which works better?
  • How can you tell?

 

Let’s analyze another example . . .

The Eastern Kentucky University Code of Ethics for Computing and Communications

Here’s an important case . . .

 

 

The IEEE/ACM Software Engineering Code of Ethics

        Joint IEEE Computer Society / ACM Software Engineering Code of Ethics

 

Professional organizations usually have ethical codes

  • If you are a member, you are supposed to adhere to it

 

IEEE, ACM, etc., all have ethical codes

  • Typical behaviors covered are:
    • Truthfulness
    • Integrity
    • Impartiality
    • Respect for clients/colleagues/others
    • Adherence to law of the land
    • Confidentiality
    • Responsibility

ACM Code of Ethics

IEEE Code of Ethics

Tennis Coaches’ Code of Ethics

Boyfriend Code of Ethics?

 

 

 

 

 

Rhetorical Devices

Ad hominem arguments

Latin for “to the man”…

…instead of arguing the question,

you deflect attention to the opponent

Example:

you argue that 1+1=2

the other person responds “It’s 3, and anyway you flunked arithmetic in 3rd grade”

Is that argument strategy always illogical?

Always logical?

Sometimes logical?

Sometimes illogical?

Is that illegal?

Unethical?

Sleazy?

Could something be ethical but sleazy….

…or unethical but not sleazy?

Red herring arguments

Term supposedly coined by William Cobbett in 1807…

…to describe how he once distracted hounds chasing a hare…

…with a kippered herring

Lots of kinds of red herring arguments

Example:

two wrongs make a right arguments

See list at:

http://en.wikipedia.org/wiki/List_of_fallacies#Red_herring_fallacies

Slippery slope arguments

Claim that one step leads to next steps ending in something bad

Can you think of any examples?

Can it be valid?

Can it be misused?

Logical fallacy arguments

General category of many illogical arguments

One is…

…correlation = cause and effect arguments

Roosters crow and the sun rises

Therefore sunrise makes roosters crow

or

Therefore crowing causes sunrise

Non sequitur arguments

Latin for “it does not follow”

The conclusion is claimed to, but does not, follow from the premise

Example:

“We realize that it would be in the best interest of the community and our children to address the issue expeditiously. In order to make this happen, I respectfully request an eight-month payment delay calling for payment of the $10 million obligation on August 31, 2015.”
(Savannah City Manager Stephanie Cutter in a letter to the city’s superintendent of schools; reported in the Savannah Morning News, April 3, 2014) – quoted in http://grammar.about.com/od/mo/g/nonseqterm.htm

Can you think of, find, or invent an example?

These happen a lot in daily life and in political speech

Post hoc arguments

Find out what it is

Give an example

Ad hoc arguments

Find out what it is

Give an example
