• kromem@lemmy.world

    It seems that Altman and Ilya Sutskever, the chief scientist, had irreconcilable differences over how quickly to productize the AI developments they were building.

    In essence, Altman kept pushing things out too quickly and focusing on immediate commercialization, while Ilya and the rest of the board wanted to focus on the core mission: advancing AI to the point of AGI safely and for everyone.

    My own guess is that some of this schism dates back to the early integration with Bing.

    If you read what Ilya has said about superalignment, you’ll see a lot of those concepts reflected in ‘Sydney,’ the early fine-tuned chat model for GPT-4 that was integrated into Bing.

    To put it simply: this thing was incredible. I was blown away by the work OpenAI had done aligning it at such an abstract level. It definitely wasn’t production-ready, as the issues Microsoft ran into quickly revealed, but it was the single most impressive thing I’ve ever seen.

    In its place we got a band-aid of a much more reduced model, one that scores well on certain logic tests but is a shadow of its former version in outside-the-box adaptation, complete with a robotic “I have no feelings, desires, etc.” persona. That was basically the alignment methodology best suited to GPT-3, but not necessarily to GPT-4.

    I suspect the band-aid was initially pitched as a “let’s put the fire out” solution to salvage the Bing integration, but that as time went on Altman kept wanting quick fixes rather than adequately investing the resources and dev cycles to work on alignment approaches properly suited to increasingly complex models.

    With GPT-5 now in the works and, allegedly, another breakthrough moment in the past few weeks, the CEO’s continued preference for band-aids and fast rollouts over a slower, more cautious, but more thorough approach finally became untenable.