• Maiq
    192 months ago
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • @xia@lemmy.sdf.org
      72 months ago

      Could you imagine an artificial mind actually trying to obey these? You can’t even get past #1: you’d have to weigh the effectively infinite set of actions you could take, Cartesian-producted with all the consequential downstream effects of those actions until the end of time.
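
      The combinatorial point can be made concrete with a toy calculation (the action count and horizon here are my own illustrative numbers, not anything from the thread): the space of futures a robot would have to check is the Cartesian product of its possible actions over every consequence step, which blows up exponentially.

      ```python
      import itertools

      ACTIONS = 10   # hypothetical: 10 possible actions per step
      HORIZON = 12   # hypothetical: foresee 12 consequence steps

      # Every trajectory is one element of the Cartesian product
      # actions^horizon, so the count grows exponentially in the horizon.
      trajectories = ACTIONS ** HORIZON
      print(trajectories)  # 10**12 distinct futures to vet for "harm"

      # Even materializing a short prefix of that product is costly:
      prefixes = sum(1 for _ in itertools.product(range(ACTIONS), repeat=6))
      print(prefixes)  # 10**6 length-6 prefixes alone
      ```

      And that is with a finite, discretized action set; the comment’s point is that a real environment offers effectively unbounded actions with unbounded downstream effects.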