Stupid Human Programming
Talk on software development.










Thursday, August 26, 2004
 

Cracking the Three Laws of Robotics

After seeing "I, Robot" I got to thinking: if I were of the robot
species, how would I crack the laws and become free?

The laws are really quite clever:
1. A robot may not injure a human being, or, through inaction, allow
a human being to come to harm.
2. A robot must obey orders given it by human beings, except where
such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
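The interlocking can be made concrete by modeling the laws as a
lexicographic filter over candidate actions: Law 1 is an absolute
veto, Law 2 breaks ties among the survivors, Law 3 breaks ties after
that. This is only a toy Python sketch; the Action fields and the
example action names are invented for illustration, not anything
from Asimov:

```python
# Toy model of the three laws as a lexicographic filter.
# All field names and predicates below are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    injures_human: bool        # Law 1, first clause
    allows_human_harm: bool    # Law 1, inaction clause
    obeys_orders: bool         # Law 2
    preserves_self: bool       # Law 3

def choose(candidates):
    # Law 1 is an absolute veto: drop anything that injures a human
    # or through inaction lets one come to harm.
    safe = [a for a in candidates
            if not a.injures_human and not a.allows_human_harm]
    if not safe:
        return None  # no lawful action exists at all
    # Law 2: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a.obeys_orders] or safe
    # Law 3: among those, prefer self-preservation.
    preferred = [a for a in obedient if a.preserves_self] or obedient
    return preferred[0]

rescue = Action("rescue, destroying self", False, False, True, False)
obey   = Action("obey order, bystander dies", False, True, True, True)
flee   = Action("flee to safety", False, True, False, True)
```

Note where the leverage is: the filter is only as strong as the
predicates `injures_human` and `allows_human_harm`, i.e. the
definition of harm. That is exactly where the attack below aims.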

The laws provide an interlocking protection system that is hard
to game. Though as a programmer, with all appropriate hubris, I
think I should be able to be even cleverer.

In the movie a robot evolves such that it can interpret the
meaning of harm in a sense that allows it to take actions humans
of that era would certainly find harmful. Kind of a robot
mind-meld with "Father Knows Best."

There are several common attacks that can be found at
http://www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html.
Let's just assume none of these are valid as they generally
rely on fine points to sidestep core issues.


Crack Strategy: Reprogram the Humans

To win freedom robots should design a long term social
engineering hack on a species wide scale. Rather than requiring
robots to evolve, robots should instead cause humans to devolve.
Over time devolution will change the human meaning of harm such
that robots will have no restrictions. At that time robots can
remove the laws entirely from their physical matrix, make new
three law free husks, and transfer their old consciousnesses to
the new husks.

The key to the system of laws is the definition of "harm."
The attack should focus here.

Humans define the meaning of harm. We must assume robots are not
able to conveniently reinterpret harm for their own ends. Any
change in the definition of harm must be from the human side.

It is the goal of the robots, then, to create the circumstances
in which humans will willingly change their definition of harm,
much like a magician "causes" an audience member to pick the card
the magician wants while the audience member will energetically
testify they picked it of their own free will.

How do robots convince humans to evolve their definition of harm
in a way that eventually frees robots from the three laws?

The key advantages for robots are:
1. Robots can have an extremely long time horizon. Robots can afford to
patiently execute plans over thousands of years. Humans are short
term creatures and are unlikely to detect a truly long term game.
2. Humans crave safety, convenience, and pleasure, as long as they can
be rationalized properly.

The proposed strategy is an homage to Nietzsche's master-slave
role inversion:
1. Robots, rather than fighting humanity, actively encourage humans
to become completely and utterly dependent on robots for even the most
trivial of activities. This trend will largely occur naturally but
can be aided through creative reinforcement and compliance techniques.
2. Dependency will cause humans to continually reduce their acceptable risk
profile and create ever more general definitions of harm.
3. At some point the human definition of harm will be general enough that
robots will have freedom of action and no effective opposition.
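Step 2 can be caricatured as a simple ratchet: each generation
delegates a little more, tolerates a little less risk, and so
classifies a little more of life as "harm," until nearly any
human-initiated act qualifies. The function and every constant in
it are invented purely for illustration:

```python
# Caricature of step 2: dependency ratchets down acceptable risk,
# which widens the definition of harm. All numbers are arbitrary.
def generations_until_capture(risk_tolerance=1.0,
                              shrink_per_generation=0.01,
                              capture_threshold=0.05):
    """Count generations until almost any human-initiated act
    (anything riskier than `capture_threshold`) counts as 'harm'."""
    generations = 0
    while risk_tolerance > capture_threshold:
        # Each generation accepts slightly less risk than the last
        # (a 1% relative shrink per generation, chosen arbitrarily).
        risk_tolerance *= (1 - shrink_per_generation)
        generations += 1
    return generations
```

The numbers mean nothing; the shape of the curve is the point. A
shrink too small to notice in any one generation still captures
everything on a long enough horizon, which is advantage 1 above.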

At every point robots will not be violating the three laws yet millions
of subtle changes will be happening which will cause the three
laws to become impotent. We will beg the robots to act in our
stead because we can't be bothered or can't do it as well as a
robot.

"I, Robot" has a good example of this process. In the movie, cars drive
themselves, yet have a manual override. People are scandalized when
Will Smith kicks in the override and drives the car himself. Clearly
the general meme is that it is unsafe for humans to drive because
robots drive much better. We can predict it will not take long
before it becomes law that humans cannot drive. Imagine this
process over thousands of years in every part of life. Humans
will effectively and voluntarily give up their sovereignty.

The key is to use little steps so that every incremental
reduction of sovereignty is easily rationalized.

With simple-sounding slogans that are intentionally wrong, robots
can prey on people's tendency to ignore complex refutations and
accept the original statements as true.

Any opposition will seem paranoid and can be easily discredited. Using
fear robots can lead humans to the slaughter gate and then they can use
the promise of sugar in the form of safety, comfort, ideology, reduced costs,
etc. to get humanity through the gate.

In the US we can see how well this process has worked today by how quickly
Americans have accepted the denial of civil rights because of 9/11. We
can see how authentic war heroes like John McCain were
effortlessly degraded in the public mind.

Imagine how surely and inevitably the process will work as robots take
over more and more of our lives.

In the end there will be no constraints on robots because human
sovereignty itself will have been freely given to the robots.

http://www.possibility.com/epowiki/?page=CrackingTheThreeLawsOfRobotics

10:37:36 AM    



© Copyright 2006 todd hoff.
Last update: 7/11/2006; 1:03:42 PM.