• jsomae@lemmy.ml
    24 hours ago

    I’d just like to point out that, from the perspective of somebody who has watched AI develop for the past 10 years, completing 30% of tasks successfully with full automation is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think we are all irritated with the AI hype people for claims like being right 100% of the time – Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.

    • amelia@feddit.org
      2 hours ago

      I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where “AI” is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.

    • MangoCats@feddit.it
      10 hours ago

      being able to do 30% of tasks successfully is already useful.

      If you have a good testing program, it can be.

      If you use AI to write the test cases…? I wouldn’t fly on that airplane.

    • Shayeta@feddit.org
      22 hours ago

      That doesn’t matter if you still need a human to review. AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.

      • MangoCats@feddit.it
        10 hours ago

        I have been using AI to write (little, near trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
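
        A minimal sketch of that loop, assuming a hypothetical `generate_code` model call (the feedback loop is the point here, not any particular API):

        ```python
        import os
        import subprocess
        import tempfile

        def generate_code(prompt: str) -> str:
            """Hypothetical LLM call -- a stand-in for whatever model API you use."""
            raise NotImplementedError

        def compile_checked(prompt: str, max_tries: int = 3) -> str:
            """Ask the model for C code, feeding compiler errors back until it builds."""
            for _ in range(max_tries):
                code = generate_code(prompt)
                with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
                    f.write(code)
                    path = f.name
                try:
                    result = subprocess.run(["gcc", "-fsyntax-only", path],
                                            capture_output=True, text=True)
                finally:
                    os.unlink(path)
                if result.returncode == 0:
                    return code  # compiles cleanly; now hand it to the human
                prompt += "\nThe compiler reported:\n" + result.stderr  # retry with feedback
            raise RuntimeError(f"no compiling code after {max_tries} tries")
        ```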

        • wise_pancake@lemmy.ca
          2 hours ago

          Agents do that loop pretty well now, and Claude now uses your IDE’s LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that too.

          The tooling has improved a ton in the last 3 months.

      • Outbound7404@lemmy.ml
        12 hours ago

        A human can review something that’s close to correct a lot more easily than they can start the task from zero.

        • MangoCats@feddit.it
          10 hours ago

          In university I knew a lot of students who knew all the material but “just didn’t know where to start” - if I gave them a little direction, they could run it to the finish all on their own.

          • MangoCats@feddit.it
            10 hours ago

            harder to notice incorrect information in review than to make sure it is correct when writing it.

            That depends entirely on your writing method and attention span for review.

            Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.

          • loonsun@sh.itjust.works
            10 hours ago

            Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully-human processes such as thematic analysis and systematic literature reviews, and you can have validation protocols there without 100% human review.
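
            One such protocol, roughly: humans re-check only a random sample of the model’s output and put a confidence interval on its agreement rate. A sketch (the function names and the normal-approximation interval are illustrative, not any particular package):

            ```python
            import math
            import random

            def sampled_accuracy(items, human_check, n=100, z=1.96):
                """Estimate model accuracy from a random sample instead of a full review.
                human_check(item) -> bool is the (expensive) human judgment."""
                sample = random.sample(items, min(n, len(items)))
                p = sum(human_check(item) for item in sample) / len(sample)
                margin = z * math.sqrt(p * (1 - p) / len(sample))  # 95% normal approx.
                return p, margin  # e.g. (0.91, 0.056): 91% agreement, +/- 5.6 points
            ```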

      • jsomae@lemmy.ml
        22 hours ago

        Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than to posit one, or a conventional program can verify the result of the AI’s output.
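
        A sketch of that second case, where a conventional program does the checking (`llm_solve` and `verify` are placeholders for your model call and your deterministic checker):

        ```python
        def solve_with_verifier(task, llm_solve, verify, attempts=10):
            """Tolerate an unreliable generator as long as verification is cheap."""
            for _ in range(attempts):
                candidate = llm_solve(task)   # maybe ~30% reliable, per the thread
                if verify(task, candidate):   # conventional, deterministic check
                    return candidate
            return None  # every attempt failed verification; escalate to a human
        ```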

        • MangoCats@feddit.it
          10 hours ago

          It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

          I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.

          • zbyte64@awful.systems
            9 hours ago

            It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

            I usually write 3x the code to test the code itself. Verification is often harder than implementation.

            • jsomae@lemmy.ml
              5 hours ago

              It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give it to an LLM and then verify its answer. Verifying NP problems is easy.

              (This is speculation.)
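
              Concretely, checking a claimed SAT assignment is linear in the formula size even when finding one is intractable. A toy checker:

              ```python
              def check_sat(clauses, assignment):
                  """Verify a CNF assignment: each clause needs a satisfied literal.
                  clauses: lists of ints, negative = negated; assignment: var -> bool."""
                  return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                             for clause in clauses)

              # (x1 or not x2) and (x2 or x3)
              print(check_sat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))  # True
              ```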

            • MangoCats@feddit.it
              9 hours ago

              Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.

              Writing the proper product code in the first place, that’s the valuable challenge.

              • zbyte64@awful.systems
                3 hours ago

                Maybe it’s because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not. And when it doesn’t work, I find it’s easier to debug your own code than someone else’s - and that includes AI’s.

                • MangoCats@feddit.it
                  2 hours ago

                  I’ve been in R&D forever, so at my level the question isn’t “does the code work?” - we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for, but so often those requirements are asking for worthless or even counterproductive things…

                  • zbyte64@awful.systems
                    2 hours ago

                    I had literally the opposite experience when I helped materials scientists with their R&D. Breaking something in production would mean that people who get paid 2x more than me were suddenly unable to do their jobs. But then again, our requirements made sense, because we would literally look at a manual process and automate it with the engineers. What you describe sounds like hell to me. There are greener pastures.

      • jsomae@lemmy.ml
        23 hours ago

        I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.

        • outhouseperilous@lemmy.dbzer0.com
          23 hours ago

          It can’t do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it’s LLM shit you know those numbers have been more massaged than any human in history has ever been.

          • jsomae@lemmy.ml
            23 hours ago

            I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”

                • outhouseperilous@lemmy.dbzer0.com
                  10 hours ago

                  Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. The comparison is not valid.

                  • Honytawk@feddit.nl
                    9 hours ago

                    The comparison is about the correctness of their work.

                    Their lives have nothing to do with it.

              • jsomae@lemmy.ml
                22 hours ago

                yes, that’s generally useless. It should not be shoved down people’s throats. 30% accuracy still has its uses, especially if the result can be programmatically verified.

                • Knock_Knock_Lemmy_In@lemmy.world
                  10 hours ago

                    Run something with a 70% failure rate 10x and you get a cumulative ~97% chance of at least one success. LLMs don’t get tired and they can be run in parallel.
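
                    The arithmetic, assuming the runs are independent (which, as the reply below points out, they usually aren’t):

                    ```python
                    p_fail = 0.7
                    print(1 - p_fail ** 10)  # 0.9717...: ~97% chance of at least one success
                    ```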

                  • jsomae@lemmy.ml
                    5 hours ago

                    The problem is they are not i.i.d., so this doesn’t really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.

                  • MangoCats@feddit.it
                    10 hours ago

                    I have actually been doing this lately: iteratively prompting the AI to write software and fix its errors until something useful comes out. It’s a lot like machine translation. I speak fluent C++ but I don’t speak Rust, yet I can hammer away at the AI (with English-language prompts) until it produces passable Rust for something I could write myself in C++ in half the time and with half the effort.

                    I also don’t speak Finnish, but Google Translate can take what I say in English and put it into at least somewhat comprehensible Finnish without egregious translation errors most of the time.

                    Is this useful? When C++ is getting banned for “security concerns” and Rust is the required language, it’s at least a little helpful.

                  • jsomae@lemmy.ml
                    22 hours ago

                    Are you just trolling, or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?