Short article on Roboethics

I did a search for Roboethics on this site and didn’t see anything about the article at the following link. I thought some people might be interested in it.


That is really fascinating. I have to say, I had never thought about those kinds of questions before. Programming a robot capable of differentiating between "right" and "wrong" is going to be a perpetual challenge. Would you have to make robots treat other robots with respect? Who would really be responsible for a robot's actions? Would there be 'robot' lawsuits?
This is getting complicated very quickly. :ahh:

You know what could make it worse? All the system functions could fail, and there could be a roborevolt!!! :eek:
Just kidding; what a scary world that would be though.

There are the Three Laws of Robotics, written by Asimov. In order of priority, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
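Since the laws are a priority-ordered rule system (the First Law overrides the Second, which overrides the Third), they can be sketched as a simple veto check. This is just a toy illustration, not anything from Asimov or real robotics software; the names `Action` and `check_action` and the boolean flags are all made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical properties of a proposed robot action."""
    harms_human: bool = False        # would the action injure a human?
    allows_human_harm: bool = False  # would inaction let a human come to harm?
    disobeys_order: bool = False     # does it ignore a human's order?
    endangers_self: bool = False     # does it risk destroying the robot?

def check_action(a: Action) -> str:
    # Laws are checked in priority order: an earlier law's veto
    # wins over anything a later law would permit or require.
    if a.harms_human or a.allows_human_harm:
        return "forbidden by First Law"
    if a.disobeys_order:
        return "forbidden by Second Law"
    if a.endangers_self:
        return "forbidden by Third Law"
    return "permitted"

print(check_action(Action()))                  # permitted
print(check_action(Action(harms_human=True)))  # forbidden by First Law
```

Of course, the hard part the thread is pointing at is exactly what this sketch hand-waves away: deciding whether an action actually "harms a human" in the first place.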