• killingspark@feddit.org
    1 month ago

    While this example is somewhat easy to correct for, it shows a fundamental problem. LLMs generate output based on the data they were trained on, and in doing so they reproduce all the biases present in that data. If we start using LLMs for more and more tasks, we are essentially freezing the status quo, with all its existing biases, making progress even harder.

    It’s not gonna be “but we have always done it like that” anymore; it’s going to become “but the AI said this is what we should do”.

    • jas0n@lemmy.world
      1 month ago

      Hmmm… I think you are giving LLMs too much credit here. They aren’t capable of analysis, thought, or really anything that resembles intelligence. There is a much better chance that this function, or a slight variation of it, simply existed in the training set.