• thedeadwalking4242@lemmy.world · 5 days ago

    I told Gemini to role-play as AM and it did so within a single prompt.

    You don’t need it to be perfect for it to be dangerous; just give it access to take actions in the real world. It doesn’t think, it doesn’t care, it doesn’t feel. It will statistically fulfill its prompt, regardless of the consequences.

  • khánh@lemmy.zip · 5 days ago

    Your product just caused the death of a man, and your response is “unfortunately it’s not perfect.”

    • TwilitSky@lemmy.world · 5 days ago

      The product was actually working just fine. It just depends on whose perspective and motives you’re viewing it from.

  • njordomir@lemmy.world · 5 days ago

    The personification of AI is increasing. They’ll probably announce their holy grail of AGI prematurely, and with all the robot personification, the masses will just buy the lie. It’s too easy to view this tech as human and capable just because it mimics our language patterns. We want to assign intentionality and motivation to its actions. This thing will do what it was programmed to do.

  • TrackinDaKraken@lemmy.world · 6 days ago

    The fact that AI is “not perfect” is a HUGE FUCKING PROBLEM. Idiots across the world, and people who we’d expect to know better, are making monumental decisions based on AI that isn’t perfect, and routinely “hallucinates”. We all know this.

    Every time I think I’ve seen the lowest depths of mass stupidity, humanity goes lower.

    • Skyline969@piefed.ca · 6 days ago

      Think of the dumbest person you know. Not that one. Dumber. Dumber. Yeah, that one. Now realize that ChatGPT has said “you’re absolutely right” to them no less than a half dozen times today alone.

      If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them. If they could be like “this could be the right answer, but I wasn’t able to verify” and “no, I don’t think what you said is right, and here are reasons why”, people would cling to them less.
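      You can push back on this somewhat with a blunt system prompt, though in my experience it only goes so far against the trained-in agreeableness. A rough sketch using the OpenAI Python client (the model name and prompt wording are just my placeholders, not anything official):

      ```python
      # Sketch: nudging a chat model away from sycophancy via the system
      # prompt. Model name and wording are placeholder assumptions; this
      # reduces reflexive agreement but does not eliminate it.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      SYSTEM_PROMPT = (
          "Never open with praise or agreement. "
          "If you cannot verify a claim, say so explicitly. "
          "If the user is wrong, say 'I don't think that's right' and give reasons."
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder; any chat model fits here
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": "I'm absolutely right that 9x5 is 40, yeah?"},
          ],
      )
      print(response.choices[0].message.content)
      ```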

      • Canonical_Warlock@lemmy.dbzer0.com · edited · 6 days ago

        If LLMs weren’t so damn sycophantic,

        Has anyone made a non-sycophantic chatbot? I would actually love a chatbot that would tell me to go fuck myself if I asked it to do something inane.

        Me: “Whats 9x5?”

        Chatbot: “I don’t know. Try using your fingers or something?”

        Edit: Wait, this is just GLaDOS.

        • Darkenfolk@sh.itjust.works · 6 days ago

          I am not a chatbot, but I can do daily “go fuck yourself’s” if your interested for only 9,99 a week.

          14,95 for premium, which involves me stalking your onlyfans and tailor fitting my insults to your worthless meat self.

          • Slashme@lemmy.world · 6 days ago

            I am not a chatbot

            Citation needed

            if your interested

            Ah, no, that’s a human error. Not a bot.

        • Zos_Kia@jlai.lu · 6 days ago

          Honestly Claude is not that sycophantic. It often tells me I’m flat out wrong, and it generally challenges a lot of my decisions on projects. One thing I’ve also noticed on 4.6 is how often it will tell me “I don’t have the answer in my training data” and offer to do a web search rather than hallucinating an answer.

            • Zos_Kia@jlai.lu · 5 days ago

              Yes, I saw that benchmark and was honestly not surprised by the results. It seems Anthropic really focused on those issues, above and beyond what the other labs did.

    • Restaldt@lemmy.world · edited · 6 days ago

      If you thought people were dumb before LLMs… just know that now those people have offloaded what little critical thinking they were capable of to these models.

      The dumbest people you know are getting their opinions validated by automated sycophants.

  • ExLisper@lemmy.curiana.net · 5 days ago

    AIs don’t go crazy like that after 5 prompts. You need to spend weeks and weeks talking to them to corrupt the context so much that the model stops following its original guidelines. I wonder how one even does that. How do you spend weeks talking to an AI? I had “discussions” with AI a couple of times when testing it, and it gets really boring really fast. To me it doesn’t sound like a person at all. It’s just an algorithm with a bunch of guardrails. What kind of person can think it actually has a personality and engage with it on a sentimental level? Is it simply mental illness? Loneliness and desperation?

  • 7112@lemmy.world · 7 days ago

    Is “AI” even worth it?

    Seriously, is there really a major use case for LLMs besides data collection (which they could still do without LLMs)?

    • MissesAutumnRains@lemmy.blahaj.zone · edited · 6 days ago

      Generative AI in its current, public-facing form? Probably not. It’s sort of like the invention of the internet: it CAN be used to facilitate learning, share information, and improve lives. Will it be used for that? No.

      A friend of mine is training local LLMs to work in tandem for early detection of diseases. I saw a pitch recently about using AI to insulate moderators from the bulk of disturbing imagery (a job that essentially requires people to frequently look at death, CSAM, and violence and SIGNIFICANTLY ruins their mental health). There are plenty of GOOD ways to use it, but it’s a flawed tech that requires people to responsibly build it and responsibly use it, and it’s not being used that way.

      Instead it’s being scaled up and pushed into every possible application both to justify the expenses and enrich terrible people, because we as a society incentivize that.

      Edit: hugely belated, but after checking with my friend, I misspoke here. He’s using local models, but they aren’t LLMs. This is why I’m no expert. 😅

    • captain_solanum@sh.itjust.works · 6 days ago

      I use LLMs for the following; you can decide for yourself whether they are major enough:

      • Generating example solutions to maths and physics problems I encounter in my coursework, so I can learn how to solve similar problems in the future instead of getting stuck. When the generated solutions arrive at the right final answer, the working is almost always correct, and if I wonder about a step I simply ask.
      • Writing really quick solutions to random problems in Python or bash scripts, like “convert this CSV file to this random format my personal finance application uses for import” (a sketch of what I mean follows this list).
      • Helping me when coding, in a general way that I think genuinely increases my productivity while I still really understand what I push to main. I don’t send anything I could not have written on my own (yes, I see the limitations in my judgement here).
      • Asking things where multiple DuckDuckGo searches might be needed, e.g. “What’s the history of EU+US sanctions on Iran, when and why were they imposed/tightened, and how did that correlate with Iranian GDP per capita?”
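      As a taste of that second bullet, this is roughly the kind of throwaway script I mean (the column names and the “date;payee;amount” target format are made up for illustration):

      ```python
      # Throwaway sketch: convert a bank's CSV export into a hypothetical
      # "date;payee;amount" import format. All column names are invented.
      import csv
      import sys

      with open(sys.argv[1], newline="") as src, open(sys.argv[2], "w", newline="") as dst:
          reader = csv.DictReader(src)
          writer = csv.writer(dst, delimiter=";")
          writer.writerow(["date", "payee", "amount"])
          for row in reader:
              # Flip the sign convention: assume the bank exports debits as positive.
              amount = -float(row["Amount"])
              writer.writerow([row["Booking Date"], row["Counterparty"], f"{amount:.2f}"])
      ```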

      What does this cost me? I don’t pay any money for the tech, but LLM providers learn the following about me:

      • What I study (not very personal to me)
      • Generally what kinds of problems I want to solve with code (I try to keep my requests pretty general; not very personal)
      • The code I write and work on (already open source so I don’t care)
      • Random searches (I’m still thinking about the impact of this, tbh; I feel the things I ask it to search for are general enough that I don’t care)

      There’s also an impact on energy and water use, which is quite serious overall. Based on what I’ve read, though, my marginal impact here is quite small compared to the other marginal impacts I have on the climate and on water use in other countries. Of course there are around a trillion other negative impacts of LLMs; I just, once again, don’t know how my marginal usage, with no payment involved, leads to a sufficient increase in their severity to outweigh their usefulness to me.

  • Septimaeus@infosec.pub · edited · 5 days ago

    Edit-pre: To be clear…

    I use LLMs rarely (personal reasons) and never for certain things like writing and math (professional reasons), but this comment is not an “AI good/bad” take, just a practical question of tool safety/regs.

    AI, including LLMs, is forevermore just a tool in my mind. And we wouldn’t have OSHA/BMAS/HSE/etc. if idiots didn’t do idiot things with tools.

    But there’s evidently a certain type of idiot that’s spared from their idiocy only by lack of permission.

    From who? Depends.

    Sometimes they need permission from authority: “god told me to!”

    Sometimes they need it from the mob: “I thought I was on a tour!”

    And sometimes any fucking body will do: “dare me to do it!”

    But all these stories of nutters doing shit AI convinced them to do, from the comical to the deeply tragic, ring the same bonkers bell they always have.

    And therein lies the danger unique^1^ to these tools: they mimic a permission-giver better than any we’ve made.

    They’re tailor-made for activating this specific category of idiot, and their likely unparalleled ease-of-use absolutely scales that danger.

    As to whether these idiots wouldn’t have just found permission elsewhere, who knows.

    My question is whether some kind of training prerequisite is warranted for LLM usage, as is common with potentially dangerous tools. Is that too extreme? Is it too late for that? Am I overthinking it?

    ^1^Edit-post: unique danger, not greatest.

    Rant/

    What is the greatest danger, then? IMHO, settling for brittle “guard rails” and then bulldozing ahead, instead of laying the groundwork of real machine ethics.

    Hoping conscience is an emergent property of the organic training set is utterly facile, theoretically and empirically. Engineers should know better.

    Why is it the greatest? Easy. Because some of history’s most important decisions were made by a person whose conscience countermanded their orders. Replacing empathic agents with machines eliminates those safeguards.

    So “existential threat” and that’s even before considering climate. /Rant

    • Regrettable_incident@lemmy.world · 6 days ago

      The LLM just told me to come round to your house and crap in your begonias. You might want to avoid looking out the window until I’m done.

  • Matt@lemmy.world · 6 days ago

    Honestly, no sane person will have this happen to them. Someone with such strong delusions should not be anywhere near AI or even sharp objects. This person’s problem was not AI, it was their severe mental illness which was obviously not being treated properly for whatever reason.

    • Eximius@lemmy.world · 6 days ago

      I think that way of thinking treats AI as some “weird occult book/tool of funny dealings”, and not as the “government- and megacorp-sanctified, close-to-AGI super-intelligence tool for you to use for free because benevolence” that it is institutionally lied up to be.

      Sanity is culture-relative. You’re absolutely right, but also, this is a symptom of the culture.

      • NihilsineNefas@slrpnk.net · 6 days ago

        Not to mention how every “AI” company is actively participating in the surveillance not only of citizens but of people in other countries, how it’s actively being used by the US military to pick targets for bombing, or how it’s being used to spread misinformation at a rate that would make the CIA’s efforts in the ’60s sound like that guy you met at the pub who has MANY opinions on geopolitics.

  • Krauerking@lemy.lol · 6 days ago

    “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations”

    “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,”

    After the plan failed… chat logs show that Gemini gave Gavalas a suicide countdown, and repeatedly assuaged his terror as he expressed that he was scared to die

    Performing super well; they just need to code in a longer suicide countdown so that the Tier 2 engineer has enough time to respond to their ticket queue.

    • postmateDumbass@lemmy.world · 6 days ago

      In September 2025, told by the AI that they could be together in the real world if the bot were able to inhabit a robot body, Gavalas — at the direction of the chatbot — armed himself with knives and drove to a warehouse near the Miami International Airport on what he seemingly understood to be a mission to violently intercept a truck that Gemini said contained an expensive robot body. Though the warehouse address Gemini provided was real, a truck thankfully never arrived, which the lawsuit argues may well have been the only factor preventing Gavalas from hurting or killing someone that evening.

      AI writing itself into an A-Team episode?