• Varyk@sh.itjust.works
    7 months ago

    No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.

    Autonomous killing is an absolutely terrible, terrible idea.

    The incident I’m thinking of involved a computer misinterpreting a flock of geese as incoming nuclear missiles, with a human recognizing the error and shutting the system down, but I can only find a couple of sources for that, so here’s another:

    In 1983, a Soviet early-warning computer mistook sunlight reflecting off clouds for a nuclear missile strike. The duty officer waited for corroborating evidence rather than reporting it to his superiors as protocol required, which would likely have triggered a “retaliatory” nuclear strike.

    https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident

    As faulty as humans are, they’re as good a safeguard against these tragedies as we have. Keep a human in the chain.

    • alternative_factor@kbin.social
      7 months ago

      Self-driving cars lose their shit and stop working if a kangaroo gets in their way. One day some poor people are going to be carpet bombed because of another strange creature no one ever really thinks about except locals.