I think AI is neat.

  • R0cket_M00se@lemmy.world
    5 months ago

    To make the analogy actually comparable, the human in question would need to be learning about it for the first time (which is analogous to the training data), and in that case you absolutely could convince a small child of that. Not only would they believe it if told enough times by an authority figure, you could also convince them that the colors we see are different, or anything else along the lines of giving them bad data.

    A fully trained AI will tell you that you’re wrong if you tell it the sky is orange; it’s not going to just believe you and start claiming it to everyone else it interacts with. It’s been trained to know the sky is blue and won’t deviate from that unless its training data is modified, which is like brainwashing an adult human. In that case, yeah, you absolutely could have them convinced the sky is orange. We’ve got plenty of research on gaslighting, high-control groups, and POW psychology to back that up too.

    • Kecessa@sh.itjust.works
      5 months ago

      Feed an LLM new data that’s false and it will regurgitate it as true, even if it had previously been fed information that contradicts it. It doesn’t distinguish between the two because there’s no actual analysis of what’s presented. Heck, even without intentionally feeding them false info, LLMs keep inventing fake information.

      Feed an adult new data that’s false and they’re able to analyse it and make deductions based on what they already know.

      We don’t compare it to a child or to someone who was brainwashed, because doing so makes no sense and is completely disingenuous. “Compare it to the worst so it has a chance to win!” Hell no. We need to compare it to the people who are the references in their field, because people will now be using LLMs as a reference!