• kromem@lemmy.world
    edited · 12 hours ago

    I’m sorry dude, but it’s been a long day.

    You clearly have no idea WTF you are talking about.

    Apart from the DeepMind researcher’s independent follow-up, the research was all done at academic institutions, so it wasn’t “showing off their model.”

    The research intentionally uses a toy model to demonstrate the concept in a cleanly interpretable way — to show that transformers can and do build tangential world models.

    The actual SotA AI models are orders of magnitude larger and fed much more data.

    I just don’t get why AI discussions on Lemmy have turned into almost the exact same kind of conversations as explaining vaccine research to anti-vaxxers.

    It’s like people don’t actually care about knowing or learning things, just about validating their preexisting feelings about the thing.

    Huzzah, you managed to dodge learning anything today. Congratulations!