Automated image generators are often accused of spreading harmful stereotypes, but studies usually only look at MidJourney. Other tools make serious efforts to increase diversity in their output, but effective remedies remain elusive.

  • ubermeisters@lemmy.world · 8 months ago

    That’s 100% a real issue. Fortunately for all these clickbait articles, most people don’t really grasp how these models are trained or how the makeup of the training data shapes what they generate.
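    A toy illustration of that point (made-up labels and counts, not any real model or dataset): a generator that simply reproduces the statistics of its training set will mirror whatever skew that set contains.

    ```python
    # Hypothetical, oversimplified example: sampling proportional to training
    # frequency carries the input imbalance straight through to the output.
    import random
    from collections import Counter

    # Imagine 90% of "CEO" training images are tagged with one demographic group.
    training_data = ["group_a"] * 900 + ["group_b"] * 100

    def naive_generator(data, n_samples):
        # The "model" here is just the empirical distribution of its data.
        return [random.choice(data) for _ in range(n_samples)]

    print(Counter(naive_generator(training_data, 1000)))
    # Output is roughly 9:1 -- skew in, skew out, in miniature.
    ```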

    • Car@lemmy.dbzer0.com · 8 months ago

      And even if we could provide the training algorithm with a perfectly diverse dataset, who gets to decide what that means? You could probably poll a million anthropologists from across the world and observe trends, but you’d find no firm consensus. What if polling anthropologists in underdeveloped nations skews in a different direction than polling those in what we consider rich countries? What if a country was a colonizer in the past, or took part in a violent revolution?

      How do we decide who qualifies as an anthropologist? Is a doctorate required, or is a college degree with numerous publications sufficient?

      I don’t think we’ll ever see a perfectly neutral solution to this problem. At best, we can approach these tools knowing they carry some biases, much as we do when analyzing texts from the past. You make the best of what you have and strive to improve.
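      To make the “who decides?” question concrete, here’s a minimal reweighting sketch (group names and proportions are entirely hypothetical): the arithmetic is trivial once a target distribution is chosen; choosing that target is the contested part.

      ```python
      # Minimal sketch of dataset reweighting toward a chosen target mix.
      # All groups and numbers below are made up for illustration.
      from collections import Counter

      dataset = ["group_a"] * 900 + ["group_b"] * 100
      empirical = {g: c / len(dataset) for g, c in Counter(dataset).items()}

      # Whoever fixes these numbers is answering "what counts as diverse?".
      target = {"group_a": 0.5, "group_b": 0.5}

      # Per-example sampling weight: target share divided by empirical share.
      weights = {g: target[g] / empirical[g] for g in target}
      print(weights)  # {'group_a': ~0.56, 'group_b': 5.0}
      ```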