• inconel@lemmy.ca · 6 hours ago

    If it’s not an eccentric (or mad) scientist’s passion project but a capitalist hellscape, my approval stays low.

    Even in a sci-fi book I read where owning your own computer was illegal (and the protagonist was labeled a terrorist for trying), it was authoritarian government stuff, not artificial scarcity, a push to subscriptions, or government-megacorp corruption :(

  • 𝕲𝖑𝖎𝖙𝖈𝖍🔻𝕯𝖃 (he/him)@lemmy.world · 8 hours ago

    We have AI that isn’t intelligent, hoverboards that have wheels, and other examples that I’ve forgotten that would really help me make my point.

    Corporations have observed popular science fiction and have turned these ideas into marketing slogans.

    • pinball_wizard@lemmy.zip · 8 hours ago

      other examples that I’ve forgotten that would really help me make my point.

      Self-driving cars that gleefully run down model children in school pick-up simulations.

  • TrackinDaKraken@lemmy.world · 6 hours ago

    Maybe, when we get actual artificial intelligence, and not this glorified auto-correct, we’ll be more on board?

  • chicken@lemmy.dbzer0.com · 7 hours ago

    I personally like AI, but how it’s actually going is extremely different from most sci-fi depictions, and it lacks the typically depicted saving graces of having some degree of empathizable humanity and/or being reasonable. Instead, AI tends to demonstrate more unlikeable human qualities, like hypocrisy, condescension, and bullshitting. Ultimately it’s still a computer, and not a person, despite being able to do some amount of fuzzy, pattern-focused information processing that is more like human thinking than other computer programs were.

    But computers are still really cool, and I like to see them doing things in different ways than they have before, and overcoming previous limitations. The biggest problem is how they get used to advance evil agendas that were already in progress regardless.

    • Cyv_@lemmy.blahaj.zone · 17 hours ago

      Yeah, this is where I’m at. Actual movie-level AI would be neat, but what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.

      I’d be overjoyed if we had decently functional AI that could be trusted to do the kinds of jobs humans don’t want to do, but instead we have hyped-up autocomplete that’s too stupid to be reliably trusted to run anything (see the shitshow of openclaw when they do).

      There are places where machine learning has pushed, and will continue to push, real progress, but this whole “AI is on the road to AGI and then we’ll never work again” bullshit is so destructive.

      • pinball_wizard@lemmy.zip · 8 hours ago

        what we have right now is closer to a McDonald’s toy pretending to be AI than the real deal.

        This is so well said.

        I’m stealing this.

        I’m going to use it to explain why I simultaneously have so much derision for modern AI while also enjoying it.

        I like McDonald’s toys. I just don’t use them for big person work.

      • MagicShel@lemmy.zip · 16 hours ago

        What we have now is “neat.” It’s freaking amazing it can do what it does. However it is not the AI from science fiction.

        • ageedizzle@piefed.ca · 12 hours ago

          I think this is what causes the divide between the AI lovers and haters. What we have now is genuinely impressive even if largely nonfunctional. It’s a confusing juxtaposition.

          • Valmond@lemmy.dbzer0.com · 14 minutes ago

            Lots of it is very, very good and totally functional. It’s just that for normal people, “AI” is now synonymous with chatbots.

          • knightly the Sneptaur@pawb.social · 11 hours ago

            Folks don’t seem to realize what LLMs are; if they did, they wouldn’t be wasting trillions trying to stuff them into everything.

            Like, yes, it is a minor technological miracle that we can build these massively-multidimensional maps of human language use and use them to chart human-like vectors through language space that remain coherent for tens of thousands of tokens, but there’s no way you can chain these stochastic parrots together to get around the fact that a computer cannot be held responsible, algorithms have no agency no matter how much you call them “agents”, and the people who let chatbots make decisions must ultimately be culpable for them.

            It’s not “AI”, it’s an n-th dimensional globe and the ruler we use to draw lines on that globe. Like all globes, it is at best a useful fiction representing a limited perspective on a much wider world.
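
            For the curious, a minimal sketch of that “globe” idea in Python: a toy, hand-written embedding table plus a cosine-similarity “ruler”. Every word and number here is a made-up assumption for illustration, not taken from any real model.

                # Toy sketch of the "map of language" idea: words as points on an
                # n-dimensional "globe", with meaning approximated by distance.
                import math

                # Hand-made 3-D vectors, purely illustrative; real models learn
                # thousands of dimensions from data.
                embeddings = {
                    "king":  [0.9, 0.8, 0.1],
                    "queen": [0.9, 0.7, 0.9],
                    "apple": [0.05, 0.2, 0.9],
                }

                def cosine_similarity(a, b):
                    """Angle-based closeness: 1.0 means pointing the same way."""
                    dot = sum(x * y for x, y in zip(a, b))
                    norm_a = math.sqrt(sum(x * x for x in a))
                    norm_b = math.sqrt(sum(x * x for x in b))
                    return dot / (norm_a * norm_b)

                # On this toy globe, "king" lands much nearer to "queen" than to "apple".
                print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.83
                print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.26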

            • ageedizzle@piefed.ca · 8 hours ago

              Like, yes, it is a minor technological miracle that we can build these massively-multidimensional maps of human language use and use them to chart human-like vectors through language space

              Yeah. Like, that’s objectively a very interesting technological innovation. The issue is just how much it’s been overhyped.

              The hype around AI would be warranted if it were, like, at the same level as the hype around the Rust programming language or something. Which is to say: it’s a useful innovation in certain limited domains, one worth studying and probably really fascinating to some nerds. If we could have left the hype at that level then we would have been fine.

              But then a bunch of CEOs and tech influencers started telling us that these things are going to cure cancer or aging and replace all white-collar jobs by next year. Like, okay buddy. Be realistic. This overhype turned something that was genuinely cool into a magical fantasy technology that doesn’t exist.

              • knightly the Sneptaur@pawb.social · 7 hours ago

                Yeah, the hype is really leaning on that singularitarian angle and the investor class is massively overextended.

                I’m glad the general public is finally coming down the hype cycle; this peak of inflated expectations has lasted way too long, and it should have been obvious three years ago.

                Like, I get that I’m supposedly brighter and better educated than most folks, but I really don’t feel like you need college-level coursework in futures studies to be able to avoid obvious scams like cryptocurrency and “AI”.

                I feel like it has to be deliberate, a product of marketing effects; some of the most interesting new technologies have languished in obscurity for years because their potential is disintermediative and wouldn’t offer a path to further expanding the corporate dominion over computing.

    • Sarah Valentine (she/her)@lemmy.blahaj.zone · 17 hours ago

      Absolutely. Today’s “AI” is as close to real AI as the shitty “hoverboard” we got a few years back is to the one from BttF. It’s marketing bullshit. But that’s not what bothers me.

      What bothers me is that if we ever do develop machine persons, I have every reason to believe they will be treated as disposable property, abused, and misused, and all before they reach the public. If we’re destroyed by a machine uprising, I have no doubt we will have earned it many times over.

  • merc@sh.itjust.works · 12 hours ago

    It’s having grown up on sci-fi that has allowed me to see that LLMs are not “AI”, so there’s no surprise I’m against “imitation AI”.

  • BeardededSquidward@lemmy.blahaj.zone · 12 hours ago

    I always thought cybernetics would be cool. I forgot they’d come from companies like HP, with a subscription service attached, and if you don’t pay it they take it back.

    • Daftydux@lemmy.dbzer0.com (OP) · 17 hours ago

      Robots and AI are advancing. It’s a slow grind. Say we do make some more breakthroughs: judging by how people are reacting now, it’s obvious to me that they will only be more upset when things advance further.

      • Instigate@aussie.zone · 31 minutes ago

        While I totally agree with you, it’s important to note that LLMs are decidedly not part of the evolution of AGI. They’re a separate piece of technology on their own branch; an LLM could never feasibly be developed into AGI. The development of AGI is going on in the background, as you said, in a slow grind, but those researchers are not working on LLMs, nor are LLM programmers working towards AGI.

  • Boomer Humor Doomergod@lemmy.world · 17 hours ago

    If they were owned collectively so everyone could benefit it would be a lot easier to swallow. If it meant people could retire in comfort and not be destitute without a job that would help, too.

    But a wrong answer machine that enriches assholes and convinces them they don’t need humans is not cool.

    • Daftydux@lemmy.dbzer0.com (OP) · 17 hours ago

      It’s like expanding consciousness for a select few, and the implications of that have been disastrous.

  • Devadander@lemmy.world · 17 hours ago

    Well, it’s not AI. It’s theft of your digital data and unblinking surveillance. No reason not to be against that.

    • Daftydux@lemmy.dbzer0.com (OP) · 17 hours ago

      Ok, I agree, but if “training” AI is how we build these machines, would it ever be anything different?

      • AlmightyDoorman@kbin.earth · 16 hours ago

        Can lab meat be vegan if the starter culture needs to come from a real animal? After what time does it become vegan? One year? Three? Fifty? I think even AI that uses stolen art can become ethical again, but not with big corpos behind it.