When a machine moderates content, it evaluates text and images as data using an algorithm that has been trained on existing data sets. The process for selecting training data has come under fire as it’s been shown to have racial, gender and other biases.
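As a rough illustration of that mechanism, here is a minimal sketch of a bag-of-words "moderation" classifier (all data, labels, and function names are invented for this example). It scores text purely by which tokens appeared in banned versus allowed training examples, so it inherits whatever skew the training set has and sees no context at all:

```python
from collections import Counter

# Toy training set: pairs of (text, label). In a real system this would be
# millions of human-labeled posts, and its selection is where bias enters.
TRAIN = [
    ("join our gang tonight bring weapons", "ban"),
    ("gang violence weapons sale meetup", "ban"),
    ("lovely weather for a picnic today", "allow"),
    ("new recipe for chocolate cake", "allow"),
]

def train(examples):
    # Count token occurrences per label; this is the whole "model".
    counts = {"ban": Counter(), "allow": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(model, text):
    # Score each label by how often the text's tokens appeared under it.
    # No notion of author, intent, or context survives tokenization.
    scores = {
        label: sum(counter[tok] for tok in text.split())
        for label, counter in model.items()
    }
    return max(scores, key=scores.get)

model = train(TRAIN)
# A journalist's report shares vocabulary with the banned posts,
# so the model bans it just the same.
print(classify(model, "report documenting gang weapons activity"))  # → ban
```

The point of the sketch is that the classifier can only reproduce its training data: any text sharing vocabulary with banned examples gets banned, regardless of who wrote it or why.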

  • nxfsi@lemmy.world
    1 year ago

    >FAGMAN trains AI to ban content about criminal gang activity

    >FAGMAN AI bans journalist documenting criminal gang activity without regard to context, because it is a machine

    I’m gonna have to say that the AI is correct here. Rather, it’s the entire approach to “content safety” that’s wrong.