Author: Martin Paul Eve
Even as militaries worldwide develop autonomous killer robots, when we think of the ethics of AI we often turn to Asimov's Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; and a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.