You know how Google’s new feature, AI Overviews, is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”

  • DdCno1@kbin.social · 6 months ago

    It seems like the entire industry is in pure panic about AI, not just Google. Everyone hopes that LLMs will end years of homeopathic growth through iteration on long-existing technology, which is why they attract tons of venture capital.

    Google, which sits where IBM was decades ago, is too big, too corporate and too slow now, so they needed years to react to this fad. When they finally did, all they were able to come up with was a rushed equivalent of existing LLMs that suffers from all of the same problems.

      • deweydecibel@lemmy.world · 6 months ago

        It’s also useful because it gives them a corporate-controlled filter for all information, one that most people will never realize is being used as a mouthpiece.

        The end goal of this is fairly obvious: imagine Google where instead of the sponsored result and all subsequent results, it’s just the sponsored result.

    • NutWrench@lemmy.world · 6 months ago

      I think this is what happens to every company once all the smart / creative people have gone. All you have left are the “line must always go up” business idiots who don’t understand what their company does or know how to make it work.

      • _number8_@lemmy.world · 6 months ago

        similarly i’m tired of apple fanboys pretending the company hasn’t gotten dramatically worse since jobs died as well. yeah he sucked in his own ways but things were starkly less shitty and belittling. tim cook would be gone for those fucking lightning-3.5mm dongles

    • dustyData@lemmy.world · 6 months ago

      Just want to say that “homeopathic growth” is both hilarious and a perfectly adequate description of what the modern tech industry is.

    • SomeGuy69@lemmy.world · 6 months ago

      The snake ate its tail before it was fully grown. The AI inbreeding might already be too deeply integrated, causing all sorts of mumbo-jumbo. They also have layers of censorship, which affect the results. The same thing happened to ChatGPT: the more filters they added, the more confused the results became. We don’t even know if the hallucinations are fixable; AI is just guessing, after all. Who knows if AI will ever understand that 1 + 1 = 2 by calculating, instead of going by probability.

      • jacksilver@lemmy.world · 6 months ago

        Hallucinations aren’t fixable, as LLMs don’t have any actual “intelligence”. They can’t test or evaluate things to determine whether what they say is true, so there is no way to correct it. At the end of the day, they are intermixing all the data they “know” to give the best answer, and without being able to test their answers, LLMs can’t vet what they say.

      • Ech@lemm.ee · 6 months ago

        Even saying they’re guessing is wrong, as that implies intention. LLMs aren’t trying to give an answer, let alone a correct answer. They just put words together.
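The probability-picking point in this thread can be sketched with a toy example (this is an illustration only, not a real LLM; the vocabulary and probabilities below are invented):

```python
import random

# Toy sketch: the "model" picks each next word purely by sampling from
# a probability table, with no notion of truth. All numbers are made up.
NEXT_WORD_PROBS = {
    "the": {"cheese": 0.5, "pizza": 0.3, "glue": 0.2},
    "cheese": {"slides": 0.6, "melts": 0.4},
}

def next_word(current, rng):
    """Sample the next word from the (made-up) distribution."""
    probs = NEXT_WORD_PROBS[current]
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
print(next_word("the", rng))  # one of "cheese", "pizza", "glue"
```

Nothing in the sampling step checks whether the chosen word makes the sentence true, which is the commenters’ point: a wrong-but-probable continuation is selected exactly as readily as a correct one.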

    • jaybone@lemmy.world · 6 months ago

      Well, their search has been shit for years and no one seems to be in any “panic” to fix that. How tone-deaf to think that adding AI to their shittified search matters to anyone.

      “But it will summarize our SEO advertisement search results!”

    • QuadratureSurfer@lemmy.world · 6 months ago

      Journalists are also in a panic about LLMs; they feel their jobs are threatened by their potential. This is why (in my opinion) we’re seeing a lot of news stories that focus on any imperfections that can be found in LLMs.

      • EldritchFeminity@lemmy.blahaj.zone · 6 months ago

        They’re not threatened by its potential. They, like artists, are threatened by management who think that LLMs are good enough today to replace part or all of their staff.

        There was a story from earlier this year about a company that owns 12-15 different gaming news outlets and fired about 80% of its writing staff and journalists, replacing 100% of the staff at the majority of the outlets with LLMs and leaving a skeleton crew at the rest.

        What you’re seeing isn’t some slant trying to discredit LLMs. It’s the results of management who are using them wrong.

        • QuadratureSurfer@lemmy.world · 6 months ago

          What I mean is that journalists feel threatened by it in some way (whether I use the word “potential” here or not is mostly irrelevant).

          In the end this is just a theory, but it makes sense to me.

          I absolutely agree that management has greatly misunderstood how LLMs should be used. They should be used as a tool, but treated like an intern who speaks confidently without citing any sources. All of their statements and work should be double-checked.
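The “double-check the intern” workflow described in this comment can be sketched as a toy gate on LLM output (an illustration only; the trusted-fact set and the sample statements are invented for the example):

```python
# Toy sketch of treating LLM output like an uncited intern's work:
# nothing is accepted until a human-curated source confirms it.
# The "trusted facts" and sample statements below are invented.
TRUSTED_FACTS = {
    "1 + 1 = 2",
    "Paris is the capital of France",
}

def review(llm_statements):
    """Split LLM output into verified statements and claims to fact-check."""
    accepted, flagged = [], []
    for statement in llm_statements:
        (accepted if statement in TRUSTED_FACTS else flagged).append(statement)
    return accepted, flagged

ok, needs_check = review(["1 + 1 = 2", "Glue keeps cheese on pizza"])
print(ok)           # ['1 + 1 = 2']
print(needs_check)  # ['Glue keeps cheese on pizza']
```

The point of the sketch is only the shape of the workflow: the model generates, but acceptance is a separate verification step that the model itself cannot perform.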