• frightful_hobgoblin@lemmy.ml · 2 months ago

    They don’t understand, though. A lot of AI evangelists seem to smooth over that detail: it is an LLM, not anything that “understands” language, video, or images.

    We’re into the Chinese Room problem. “Understand” is not a well-defined or measurable thing. I don’t see how it could be measured except by looking at inputs and outputs.

    • Barabas [he/him]@hexbear.net · 2 months ago

      Does this mean that my TI-84 calculator was actually an AI, since it could solve equations I put into it? Or Wolfram Alpha? Or a speed camera? These are all able to read external inputs and produce an output. Where does your line go? Because the current technology is nowhere near where mine goes.
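      To make the reductio concrete, here is a minimal Python sketch (illustrative only, not anything from the thread) of the kind of input-to-output mapping a TI-84 performs. It reads an external input and produces a correct output, which is the whole bar being set:

      ```python
      import math

      def solve_quadratic(a: float, b: float, c: float) -> tuple:
          """Solve ax^2 + bx + c = 0, as a TI-84 can."""
          disc = b * b - 4 * a * c
          if disc < 0:
              return ()  # no real roots
          root = math.sqrt(disc)
          return ((-b - root) / (2 * a), (-b + root) / (2 * a))

      # Reads an external input, produces a correct output -- yet
      # nobody would say this function "understands" algebra.
      print(solve_quadratic(1, -3, 2))  # (1.0, 2.0)
      ```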

      We are currently ruining the biosphere so that some people might earn a lot of money by being able to lay off workers. If you remove this integral part of what “AI” is, along with all the other negative externalities, of course it will look better; but not all of the externalities are tied to the capitalist mode of production. Economies and resource allocation would still exist without capitalism; it isn’t as if everything magically becomes good.

      • Infamousblt [any]@hexbear.net · 2 months ago

        A choose-your-own-adventure novel is an AI, then, because you feed it a set of inputs (page numbers) and it feeds you a set of outputs (a dynamic story).
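        (A minimal sketch of that point: the “dynamic story” really is just a lookup table. Pages and text below are invented for illustration.)

        ```python
        # A choose-your-own-adventure "engine": the reader's input
        # (a page number) maps to an output (more story).
        story = {
            1: "You wake in a forest. Go north (turn to 2) or south (turn to 3).",
            2: "A dragon appears. Fight (turn to 4) or flee (turn to 1).",
            3: "You find a village. The end.",
            4: "The dragon wins. The end.",
        }

        def read_page(page: int) -> str:
            return story[page]

        # A "dynamic story" out of static lookups -- input in, output out.
        for page in (1, 2, 4):
            print(read_page(page))
        ```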

    • space_comrade [he/him]@hexbear.net · 2 months ago

      “Understand” is not a well-defined or measurable thing.

      So why attribute it to an LLM in the first place, then? All an LLM is, is floating-point numbers being multiplied and added inside a digital computer; the onus is on the AI bros to show what kind of floating-point multiplication constitutes real “understanding”.
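      (To be concrete about what “floating point numbers being multiplied and added” means here: below is a toy sketch of a single self-attention layer with random weights. The shapes are invented and this is no real model, but apart from the exponential inside the softmax, every step is multiplying and adding floats.)

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      d = 8                          # toy embedding size
      x = rng.normal(size=(4, d))    # 4 "tokens", random embeddings

      # Toy weights; a real LLM just has billions more of these.
      Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

      # Single-head self-attention: matrix multiplies, adds,
      # and one exponential (the softmax).
      q, k, v = x @ Wq, x @ Wk, x @ Wv
      scores = q @ k.T / np.sqrt(d)
      weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
      out = weights @ v

      print(out.shape)  # (4, 8): one updated vector per token
      ```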

      • frightful_hobgoblin@lemmy.ml · 2 months ago (edited)

        But it’s inherently impossible to “show” anything except inputs and outputs (including for a biological system).

        What are you using the word “real” to mean, and is it aloof from the measurable behaviour of the system?

        You seem to be using a mental model that there’s

        • A: the measurable inputs & outputs of the system

        • B: the “real understanding”, which is separate

        How can you prove B exists if it’s not measurable? You say there is an “onus” to do so. I don’t agree that such an onus exists.

        This is exactly the Chinese Room paper. ‘Understand’ is usually understood in a functionalist way.

        • But, ironically, the Chinese Room argument you’re bringing up supports what the others are saying: that LLMs do not ‘understand’ anything.

          It seems to me that you are defining ‘understanding’ in a functionalist way, so that input/output behavior counts as understanding, in order to say that the measurable process in itself shows ‘understanding’. But that’s not what Searle, and seemingly the others here, mean by ‘understanding’. As Searle argues, what is in question is not purely the syntactic manipulation but the semantic.

          In other words, these LLMs do not “know” the information they provide; they are just repeating it through the input/output process on which they were trained. LLMs do not project or internalize any meaning in the input/output process. If they had some reflexive consciousness and any ‘understanding’, they could critically assess the meaning of the information against facts, rather than naïvely proclaiming that cockroaches got their name because they like to crawl into penises at night. Do you believe LLMs are conscious?

        • space_comrade [he/him]@hexbear.net · 2 months ago (edited)

          How can you prove B exists if it’s not measurable?

          Because I’ve felt it; I’ve felt how understanding feels. Ultimately, understanding is a conscious experience within a mind: you cannot define understanding without referencing conscious experience, and you cannot possibly define it only in terms of behavior or function. So either you concede that every floating-point multiplication in a digital chip “feels like something” at some level, or you show which specific kind of floating-point multiplication does.

    • booty [he/him]@hexbear.net · 2 months ago

      I don’t see how it could be measured except by looking at inputs and outputs.

      Okay, then consider that when you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning. That proves it does not have any functional understanding of anything; it simply outputs random noise that sometimes looks similar to what someone who did understand the content would output.
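      (Mechanically, “regenerate” just re-samples from the model’s next-token probabilities. A toy sketch with an invented distribution, not taken from any real model, shows why repeated runs can flip the answer:)

      ```python
      import random

      # Invented next-token probabilities for a yes/no prompt --
      # not from any real model.
      next_token_probs = {"Yes": 0.55, "No": 0.45}

      def regenerate() -> str:
          tokens, weights = zip(*next_token_probs.items())
          # Sampling, not comprehension, picks the token.
          return random.choices(tokens, weights=weights)[0]

      # The same prompt can yield opposite outputs on different runs.
      print([regenerate() for _ in range(5)])  # e.g. ['Yes', 'No', 'Yes', 'Yes', 'No']
      ```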

      • frightful_hobgoblin@lemmy.ml · 2 months ago

        Right. That would be like talking to someone in total delirium, whose responses were random and not a good fit for the question.

        LLMs are not like that.

          • frightful_hobgoblin@lemmy.ml · 2 months ago

            when you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning

            Can you paste an example of this error?

            • booty [he/him]@hexbear.net · 2 months ago (edited)

              Have you ever used an LLM?

              Here’s a screenshot I took after spending literally 10 minutes with ChatGPT very confidently stating incorrect answers to a simple question, over and over (from this thread). Not only is it completely incapable of coming up with a very simple correct answer to a very simple question, it is completely incapable of responding coherently to the fact that none of its answers are correct. Humans don’t behave this way. Nothing that understood what was being said would respond this way. It responds this way because it has no understanding of the meaning of anything that is said: it is responding based on statistical likelihoods of words and phrases following one another, like a Markov chain but slightly more advanced.
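              (For contrast, a minimal sketch of the Markov-chain generation being alluded to, built on a tiny invented corpus. An LLM swaps the frequency table for a neural network, but likewise emits the statistically likely next word:)

              ```python
              import random
              from collections import defaultdict

              # Tiny invented corpus; any text would do.
              corpus = "the cat sat on the mat and the cat ran".split()

              # Next-word frequency table: word -> observed successors.
              chain = defaultdict(list)
              for word, nxt in zip(corpus, corpus[1:]):
                  chain[word].append(nxt)

              def generate(start: str, length: int = 6) -> str:
                  words = [start]
                  for _ in range(length):
                      successors = chain.get(words[-1])
                      if not successors:
                          break
                      # Next word chosen by observed frequency alone --
                      # statistics, not meaning.
                      words.append(random.choice(successors))
                  return " ".join(words)

              print(generate("the"))  # e.g. "the cat sat on the mat and"
              ```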

              • UlyssesT [he/him]@hexbear.net · 2 months ago (edited)

                You were arguing with such an incredibly misanthropic piece of shit that of course they see a sufficient number of TI-88s bolted together as a direct analogue to self-aware, conscious human intelligence.

                Look at how that piece of shit treats other human beings: like the inferior “meat computers” that such a techbro mindset takes them for.

                https://hexbear.net/comment/5438712