• remotelove@lemmy.ca · 2 months ago

    This kind of skill might help developers build AI agents that identify buttons or fields on a webpage to handle tasks like making a reservation at a restaurant.

    … to improve the efficiency of click farms and to bypass CAPTCHAs.

  • simple@lemm.ee · 2 months ago (edited)

    This reads like an ad. They claim to use 1,000 times less data than proprietary models, except nobody knows how much data those proprietary models actually use or how big they really are. Also, there’s a giant asterisk they fail to mention: Molmo outperforms the competition on visual benchmarks, not actual text chat.

  • lunarul@lemmy.world · 2 months ago

    Instead of writing captions, the team asked annotators to record 60- to 90-second verbal descriptions answering a list of questions about each image. They then transcribed the descriptions—which often stretched across several pages—and used other large language models to clean up, crunch down, and standardize them.

    So those other LLMs are needed to train this one?
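
    For concreteness, the cleanup step quoted above amounts to something like the sketch below. This is not Ai2’s actual pipeline; `call_llm` is a hypothetical stand-in for whichever model they actually used to clean up and standardize the transcripts.

    ```python
    # Generic sketch: take a raw, rambling transcript of a spoken image
    # description and have another LLM rewrite it into a standardized caption.
    # `call_llm` is a hypothetical helper, not a real library or API.

    CLEANUP_PROMPT = """Rewrite the following spoken image description as a concise,
    standardized caption. Remove filler words, fix grammar, and keep every concrete
    visual detail (objects, colors, positions, any text visible in the image).

    Transcript:
    {transcript}
    """

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever LLM client/API is actually used."""
        raise NotImplementedError

    def standardize_description(transcript: str) -> str:
        # One cleanup/standardization pass over a raw 60- to 90-second transcript.
        return call_llm(CLEANUP_PROMPT.format(transcript=transcript))
    ```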

  • homoludens@feddit.org · 2 months ago

    but an order of magnitude smaller

    I’m pretty sure that would be three orders of magnitude.

    • FaceDeer@fedia.io · 2 months ago

      They’re not talking about the same thing.

      Last week, researchers at the Allen Institute for Artificial Intelligence (Ai2) released a new family of open-source multimodal models competitive with state-of-the-art models like OpenAI’s GPT-4o—but an order of magnitude smaller.

      That’s in reference to the size of the model itself.

      They then compiled a more focused, higher quality dataset of around 700,000 images and 1.3 million captions to train new models with visual capabilities. That may sound like a lot, but it’s on the order of 1,000 times less data than what’s used in proprietary multimodal models.

      That’s in reference to the size of the training data that was used to train the model.

      Minimizing both of those things is useful, but for different reasons: a smaller training set makes the model cheaper to train, and a smaller model makes it cheaper to run.
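
      A rough back-of-the-envelope sketch of that difference, using the standard approximations of ~6·N·D FLOPs to train a model with N parameters on D tokens and ~2·N FLOPs to generate each token at inference time. The concrete sizes below are made-up placeholders, not Molmo’s or GPT-4o’s real numbers.

      ```python
      def training_flops(params: float, tokens: float) -> float:
          """Rough total training compute: ~6 FLOPs per parameter per training token."""
          return 6 * params * tokens

      def inference_flops_per_token(params: float) -> float:
          """Rough compute to generate one token: ~2 FLOPs per parameter."""
          return 2 * params

      small_model, big_model = 7e9, 70e9    # hypothetical parameter counts (10x apart)
      small_data, big_data = 1.3e9, 1.3e12  # hypothetical token counts (1,000x apart)

      # Cutting the dataset by 1,000x cuts training cost by ~1,000x ...
      print(training_flops(small_model, big_data) / training_flops(small_model, small_data))
      # ... but only a smaller model makes every generated token cheaper after deployment.
      print(inference_flops_per_token(big_model) / inference_flops_per_token(small_model))
      ```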

      • General_Effort@lemmy.world · 2 months ago

        After a quick skim, it seems like the article has a lot of errors. Molmo is trained on top of Qwen, and the smallest variants are trained on a base model from the same lab that makes Molmo.