• renegadespork@lemmy.jelliefrontier.net
    17 hours ago

    Personally I have yet to find a use case. Every single time I try to use an LLM for a task (even ones they are supposedly good at), I find the results so lacking that I spend more time fixing its mistakes than I would have just doing it myself.

    • Scubus@sh.itjust.works
      13 hours ago

      So you've never used it as a starting point to learn about a new topic? You've never used it to look up a song when you can only remember a small section of the lyrics? What about when you need a block of code that's simple but monotonous to write yourself? Or to suggest plans for how to create simple structures/inventions?

      Anything with a verifiable answer that you'd ask on a forum can generally be answered by an LLM, because they're largely trained on forums, and there's a decent chance the training data includes someone asking the very question you're asking now.

      Hell, ask ChatGPT what use cases it would recommend for itself; I'm sure it'll have something interesting.

      • renegadespork@lemmy.jelliefrontier.net
        4 hours ago

        > as a starting point to learn about a new topic

        No. I’ve used several models to “teach” me about subjects I already know a lot about, and they all frequently get facts wrong. Why would I then trust one to teach me about something I don’t know?

        > to look up a song when you can only remember a small section of lyrics

        No, because traditional search engines do that just fine.

        > when you need a block of code that's simple but monotonous to write yourself

        See this comment.

        > suggest plans for how to create simple structures/inventions

        I guess I’ve never tried this.

        > Anything with a verifiable answer that you'd ask on a forum can generally be answered by an LLM, because they're largely trained on forums, and there's a decent chance the training data includes someone asking the very question you're asking now.

        Kind of, but here’s the thing: it’s rarely faster than a good traditional search, especially if you know where to look and how to use advanced filtering features. Also (and this is key), verifying the accuracy of an LLM’s answer requires about the same amount of work as not using an LLM in the first place, so I default to skipping the middleman.

        Lastly, I haven’t even touched on the privacy nightmare that these systems pose if you’re not running local models.