• survirtual@lemmy.world · 2 days ago

    By real, I mean an LLM anchored in objective consensus reality. It should be able to interpolate between truths. Right now it interpolates between significant falsehoods with truths sprinkled in.

    It won’t be perfect, but it can be a lot better than it is now, which is starting to border on useless for any kind of serious engineering or science.

    • jeeva@lemmy.world · 23 hours ago

      That’s just… not how they work.

      Equally, from your other comment: a parameter for truthiness isn’t something you can just tokenise in a language model. One word can drastically change the meaning of a sentence.

      LLMs are very good at one thing: making probable strings of tokens (where tokens are, roughly, words).
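      To be concrete, the entire generation loop is essentially "score every candidate token, turn the scores into probabilities, sample one, repeat." A toy sketch (the vocabulary and logits below are made up for illustration, not any real model's output):

```python
# Toy sketch of next-token sampling: softmax over scores, then sample.
# Vocabulary and logits are invented; a real LLM scores a huge vocabulary
# with a neural network conditioned on the whole context.
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations after the prompt "The sky is".
vocab = ["blue", "green", "falling", "limit"]
logits = [4.0, 1.0, 0.5, 0.2]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
```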

      • survirtual@lemmy.world · 22 hours ago

        Yeah, you can. The current architecture doesn’t do this exactly, but what I am saying is that a new method that includes truthiness is needed. The fact that LLMs predict probable tokens means the architecture already has a concept of this, because probabilities themselves are a measure of “truthiness.”
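        To sketch what I mean (toy numbers only; the aggregation choice here, a geometric mean of per-token probabilities, is just one illustrative way to fold the probabilities the model already computes into a single confidence-style score):

```python
# Illustrative only: per-token probabilities folded into one crude
# "confidence" number. The probabilities below are invented; in practice
# they would come from the model's own per-token probabilities.
import math

def sequence_confidence(token_probs):
    """Geometric mean of per-token probabilities."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

# Toy numbers: probability the model assigned to each token it emitted.
confident_answer = [0.92, 0.88, 0.95, 0.90]
shaky_answer = [0.41, 0.22, 0.35, 0.18]

print(round(sequence_confidence(confident_answer), 2))  # 0.91
print(round(sequence_confidence(shaky_answer), 2))      # 0.27
```

        A score like that isn’t “truth,” of course; the point is only that the raw material for some notion of confidence is already sitting in the architecture.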

        Also, I am speaking in the abstract. I don’t care what they can and can’t do. They need to have a concept of truthiness. Use your imagination and fill in the gaps as to what that means.