
Asimov’s three laws of robotics in action


JennylynCarter


Isaac Asimov, the famed science fiction author, did more than write novels. He is also credited with coming up with the three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.

  3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

According to Asimov, laws like these would need to govern robotics to keep both robots and humanity safe. A major fear is that artificially intelligent robots could eventually pose a threat to humans, either by actively seeking to harm them or by failing to act in a way that would preserve human life. Because humans are beginning to give robots control over essential infrastructure, the latter is an especially big concern.
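
To make the precedence concrete: the three laws form a strict hierarchy, where each law yields to the ones above it. The Python sketch below is purely illustrative and is not drawn from Asimov or from any real system; every field of Action is a hypothetical, pre-computed judgment standing in for decisions a real robot would have to make.

    # Purely illustrative sketch of the three laws as a strict priority ordering.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would doing this injure a human?
        inaction_harms_human: bool  # would *refusing* it leave a human in danger?
        ordered_by_human: bool      # was it commanded by a human?
        destroys_robot: bool        # would it destroy the robot itself?

    def permitted(action: Action) -> bool:
        # First law: never harm a human, by action or by inaction.
        if action.harms_human:
            return False
        if action.inaction_harms_human:
            return True   # the first law compels action, overriding the laws below
        # Second law: obey human orders (already known not to violate law one).
        if action.ordered_by_human:
            return True
        # Third law: self-preservation, subordinate to the first two laws.
        return not action.destroys_robot

    # A human order that sacrifices the robot is still permitted (law 2 outranks law 3):
    print(permitted(Action(harms_human=False, inaction_harms_human=False,
                           ordered_by_human=True, destroys_robot=True)))   # True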

Recently, an employee at a Volkswagen plant in Germany was crushed when he became trapped by a robotic arm. The machine was only doing what it was programmed to do and wasn’t able to alter its programming even when a human’s life was in danger. To make robots safer for humans, robotics researchers at Tufts University are working on developing artificial intelligence that can deviate from its programming when the circumstances warrant it. The technology is still primitive, but it’s an important step if artificially intelligent robots are going to coexist with humans someday.

How it works

Researchers at Tufts University’s Human-Robot Interaction lab designed a robot that recognizes when it is allowed to disobey an order because there is a good reason to. For example, when facing a ledge, the robot will refuse to walk forward even when ordered to. Not only will the robot refuse, it is programmed to state the reason: that it would fall if it obeyed.

To understand how the robot is able to do this, we first have to understand the concept of “felicity conditions.” Felicity conditions capture the distinction between merely understanding a command and understanding the implications of actually carrying it out. To design a robot that could refuse certain orders, the researchers programmed it to work through five logical questions whenever it is given a command:

  1. Do I know how to do X?

  2. Am I physically able to do X now? Am I normally physically able to do X?

  3. Am I able to do X right now?

  4. Am I obligated based on my social role to do X?

  5. Does it violate any normative principle to do X?

This five-step logical process enables the robot to determine, before it follows an order, whether the command would cause harm to itself or to a human. The researchers recently presented their work at the AI for Human-Robot Interaction Symposium in Washington DC.
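
As a rough illustration of how such a gate might work, here is a minimal Python sketch. It is not the Tufts lab’s actual code: the method names (knows_how, normally_capable, capable_now, speaker_authorized, is_safe, do) are assumptions invented for this example, one per felicity question.

    # Illustrative sketch only: run the five felicity questions in order and
    # refuse, with a spoken reason, at the first one that fails.
    def execute(robot, command: str) -> str:
        checks = [
            (robot.knows_how,          f"I do not know how to {command}."),       # 1. knowledge
            (robot.normally_capable,   f"I am not normally able to {command}."),  # 2. capacity
            (robot.capable_now,        f"I cannot {command} right now."),         # 3. timing
            (robot.speaker_authorized, "You are not authorized to order that."),  # 4. social role
            (robot.is_safe,            f"Doing {command} would cause harm."),     # 5. norms
        ]
        for passes, refusal in checks:
            if not passes(command):
                return refusal   # refuse and explain instead of blindly obeying
        robot.do(command)
        return f"OK: {command}"

    # Toy stand-in for the robot: it treats walking forward as unsafe,
    # mimicking the ledge demo described above.
    class DemoRobot:
        def knows_how(self, cmd): return True
        def normally_capable(self, cmd): return True
        def capable_now(self, cmd): return True
        def speaker_authorized(self, cmd): return True
        def is_safe(self, cmd): return cmd != "walk forward"
        def do(self, cmd): pass

    print(execute(DemoRobot(), "walk forward"))  # refuses with a stated reason
    print(execute(DemoRobot(), "turn left"))     # OK: turn left

Running the checks in this order means the simpler questions (does the robot even know how?) are asked before the normative one, and every refusal comes with an explanation the robot can state aloud, just as in the ledge example.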

Artificial Intelligence News brought to you by artificialbrilliance.com

Source: ibtimes.co.uk/robots-are-being-taught-say-no-commands-just-like-asimovs-three-laws-robotics-1530689
