Prompted by the recent troll post, I’ve been thinking about AI. Obviously we have our criticisms of both the AI hype manchildren and the AI doom manchildren (see the title of the post — this is a Rationalist-free post. Looking for it? Leave).

But looking at the AI doom guys with an open mind, sometimes it appears that they make a halfway decent argument that’s backed up by real results. This YouTube channel has been talking about the alignment problem for a while. I think he’s probably a bit of a Goodhart’s Law merchant (as in, by making a career out of measuring the dangers of AI, his alarmism is structural), so he should be taken with a grain of salt. Still, it does feel pretty concerning that LLMs show inner misalignment and mask their intentions (to anthropomorphize) under training vs. deployment.

Now, I mainly think these people are just extrapolating out all the problems with dumb LLMs and saying “yeah, but if they were AGI it would become a real problem.” That might be true if you take the premise at face value, but the idea that AGI will ever happen is itself pretty questionable. The channel I linked has a video arguing that AGI safety is not a Pascal’s mugging, but I’m not convinced.

Thoughts? Does the commercialization of dumb AI make it a threat on a similar scale to hypothetical AGI? Is this all just a huge waste of time to think about?

  • Carl [he/him]@hexbear.net · 3 days ago
    I’m increasingly of the belief that a major part of our own consciousness is socially contingent, so creating an artificial one can’t be done in one fell swoop by one computer getting really smart — it has to be the result of reverse-engineering the entire process of evolution that led to consciousness as we understand it.

    • Des [she/her, they/them]@hexbear.net · 3 days ago
      really interesting because my partner and I were doing some worldbuilding and came up with something like this! we had two methodologies. one was for artificial life, which involved exactly what you describe: starting with a deep, complex simulation, sped up by asteroid-sized computers, that builds from scratch. after you have an artificial life model, the AI basically has to be bound to a human at birth and “grow up” and learn with them, essentially developing in parallel as an artificial sibling while they exist in a symbiotic relationship. this becomes a cultural norm, and ties artificial life to humanity as a familial relation. (this was a far-future society where single-child households were the norm)