JennylynCarter Posted December 15, 2015

Isaac Asimov, the famed science fiction author, did more than write novels. He is also credited with the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

According to Asimov, laws like these would need to govern robots to keep both the machines and humanity safe. A major fear is that artificially intelligent robots could eventually pose a threat to humans, either by actively seeking to harm them or by failing to act in a way that would preserve human life. Because humans are beginning to give robots control over essential infrastructure, the latter is an especially big concern.

Recently, an employee at a Volkswagen plant in Germany was crushed when he became trapped by a robotic arm. The machine was only doing what it was programmed to do and wasn't able to alter its programming even when a human's life was in danger.

To make robots safer for humans, robotics researchers at Tufts University are working on artificial intelligence that can deviate from its programming when circumstances warrant it. The technology is still primitive, but it's an important step if artificially intelligent robots are going to coexist with humans someday.

How it works

Researchers at Tufts University's Human-Robot Interaction lab designed a robot that recognizes when it is allowed to disobey an order because there is a good reason to. For example, when facing a ledge, the robot will refuse to walk forward even when ordered to. Not only will the robot refuse, it is programmed to state the reason: that it would fall if it obeyed.

To understand how the robot is able to do this, we first have to understand the concept of "felicity conditions." Felicity conditions capture the distinction between merely understanding a command and understanding the implications of carrying that command out. To design a robot that could refuse certain orders, the researchers programmed it to work through five logical steps whenever it is given a command (a rough code sketch of this kind of check follows at the end of this post):

1. Do I know how to do X?
2. Am I physically able to do X now? Am I normally physically able to do X?
3. Am I able to do X right now?
4. Am I obligated based on my social role to do X?
5. Does it violate any normative principle to do X?

Working through these five steps lets the robot determine whether or not a command would cause harm to itself or to a human before it follows the order. The researchers recently presented their work at the AI for Human-Robot Interaction Symposium in Washington, DC.

Artificial Intelligence News brought to you by artificialbrilliance.com

Source: ibtimes.co.uk/robots-are-being-taught-say-no-commands-just-like-asimovs-three-laws-robotics-1530689
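As a rough illustration only (this is not the Tufts team's actual code, and the robot methods used below are invented for the example), here is a minimal Python sketch of how a command could be screened against the five checks before being executed:

# Hypothetical sketch of five-step command screening.
# The methods known_actions, has_capacity, is_busy,
# speaker_has_authority and check_norms are assumptions made
# up for illustration, not a real robot API.

class CommandRejected(Exception):
    """Raised when a check fails; the message is the stated reason for refusal."""

def screen_command(robot, command):
    # 1. Do I know how to do X?
    if command.action not in robot.known_actions:
        raise CommandRejected(f"I don't know how to {command.action}.")

    # 2. Am I physically able to do X now, and normally?
    if not robot.has_capacity(command.action):
        raise CommandRejected(f"I am not able to {command.action}.")

    # 3. Am I able to do X right now?
    if robot.is_busy():
        raise CommandRejected("I cannot do that right now.")

    # 4. Am I obligated, based on my social role, to do X for this speaker?
    if not robot.speaker_has_authority(command.speaker):
        raise CommandRejected("I am not obligated to follow that order.")

    # 5. Does doing X violate a normative principle,
    #    e.g. walking off a ledge or endangering a person?
    violation = robot.check_norms(command.action)
    if violation:
        raise CommandRejected(f"I cannot do that: {violation}.")

    return True  # all five checks pass, so the command may be carried out

In the ledge demo described above, the idea is that a failed check does not just block the action: the robot states the reason for refusing, which is what the CommandRejected message stands in for here.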
Luke_Wilbur Posted December 21, 2015

Nice article. You might want to read Rudy Rucker. He has a different take on hardware and software ethics.