Hands down the weirdest intro I have ever read. And I’ve read some weird ones.
But an interesting link. I wonder if we will ever become so reliant on robots that living without them would spell death for humanity.
Odd how eerily similar that is to all of the topics covered in Asimov’s Robot Series. Most people know his I, Robot stories, but the full Robot Series covers a whole host of topics like those in this link. Amazing when you realize the first of his Robot novels was written in 1954.
The problem is that Asimov was an author, not a lawyer or philosopher (in the strictest sense), so the “laws” he created were a device for exploring a story. You’ll note that a lot of the Robot stories involve robots finding ways to “break” one or more of the Three Laws through loopholes. Even in the Will Smith movie, the AI extrapolated a “0th” law about not allowing humanity to come to harm, thus justifying killing individual humans (in violation of the 1st law).
So in all honesty I don’t think Asimov’s Three Laws are the right framework for establishing, or even starting to discuss, serious ethical codes around the use of unmanned systems. They’re certainly clever and fun to consider, but in the stories they apply to intelligent systems with some agency to make their own decisions. Our discussion is about whether drones controlled by humans sitting in control rooms in the Midwest can or cannot fire missiles, and the Three Laws simply aren’t applicable to the situations we find ourselves in right now.
That wasn’t invented for the movie. Its essence was in one of Asimov’s Robot short stories, “The Evitable Conflict”, and it was named/numbered in his later novels.
To me, there are two fundamental problems that no subset of current ‘laws’ can address.
1.) In the current state of the art, robots are to individual humans as corporations are to individual businesspeople; that is, in addition to their benefits, they are also a way to make behavior we might otherwise feel guilty about pseudo-anonymous. Drone strikes, high-frequency trading, and information theft are current examples, and general inter-human conflicts on a smaller scale could be at stake in a larger A.I.-driven society. It should be noted that ‘robots’ in this sense are not only physical machines – something I don’t know whether Asimov understood at the time.
2.) We (humans) have yet to fathom a viable ‘end-game’ for a fully autonomous society that we’re comfortable with, because we’re afraid that with autonomy will come an eventual sense of willpower. In the epic struggles of centuries past, how often was the willpower of the many weak able to overcome the brute strength of the powerful few? (Hint: the answer lies in the beginning of most U.S. history textbooks.) Thus, most scenarios we imagine involve humans pulling along a robot-ridden carriage in a distant future – and we all know what happened to horses when the automobile was invented.
Not only would any ‘law’ have to address humans creatively destroying each other (with ever-increasing creativity, might I add), but the set of laws would also have to address our desire to maintain dominance as a species.
It was also the basis for merging the Robot series with the Foundation series - an immortal robotic servant that worked behind the scenes for the good of humanity.