• swordsmanluke@programming.dev
    4 months ago

So… unlike Stable Diffusion or LLMs, the point of this research isn’t to generate output that directly mimics its input, in this case video games. It’s testing whether a generative model can encode the concepts of an interactive environment.

    Games in general have long been used in AI research because they are models of some aspect of reality. In this case, the researchers want to see if a generative AI can learn to predict the environment just by watching things happen. You know, like real brains do.

E.g., can we train something that learns the rules of reality just by watching video combined with “input signals”? If so, it opens up whole new methods for training robots to interact with the real world.
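
To make that concrete, here’s a rough sketch of what that training setup might look like: a minimal action-conditioned next-frame predictor in PyTorch. This is illustrative only, not the researchers’ actual architecture; every name, shape, and hyperparameter below is an assumption.

```python
# Sketch (not from the paper): predict the next frame from the current
# frame plus the player's "input signal" (a discrete action). If the
# model gets good at this, it has implicitly learned the environment's rules.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, n_actions: int, hidden: int = 64):
        super().__init__()
        # Encode the current 3-channel frame into a spatial feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        # Embed the discrete action (button press / control input).
        self.action_embed = nn.Embedding(n_actions, hidden)
        # Decode features + action back into a predicted next frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(hidden, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(frame)            # (B, hidden, 16, 16) for 64x64 input
        act = self.action_embed(action)       # (B, hidden)
        # Broadcast the action over the spatial map so the prediction
        # actually depends on the input signal, not just the pixels.
        feat = feat + act[:, :, None, None]
        return self.decoder(feat)

# One training step, supervised by the frame that actually came next.
model = NextFramePredictor(n_actions=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 64, 64)       # stand-in for recorded gameplay video
actions = torch.randint(0, 8, (4,))     # stand-in for controller inputs
next_frames = torch.rand(4, 3, 64, 64)  # ground truth: what happened next

opt.zero_grad()
pred = model(frames, actions)
loss = nn.functional.mse_loss(pred, next_frames)
loss.backward()
opt.step()
```

The action embedding is the interesting part: without it, the model would just learn to copy pixels forward, but conditioning on the input signal forces it to learn how actions change the world.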

That’s why this is newsworthy beyond the usual “AI buzz” cycle.