I’m going to talk about the hard AI problem of rogue AI, not the obviously more pressing one of AI being used for bad things, because the latter can’t be stopped.
So, Big Yud is mad and constantly full of nuclear Bay Area brainworm takes, but some of his arguments about alignment make sense. At least enough to engage with them and try to solve them. Bombing data centres and the like is dumb, though, and won’t solve the issue, since most code is already so inefficient.
AI is not going to go FOOM in the way he suggested, basically because LLMs exist, which he thought were impossible as part of his arguments. He never considered that we would have time to practice alignment with extremely limited AI at near-human levels, and that has knocked the core of his doom argument away.
I think UlyssesT and I once agreed that he is the dumbest smart guy alive and will probably become a leftist after trying and failing at every tech nerd principle. If someone could keep him the hell away from the Bay Area, he might normalise and stop writing weird shit.
For what it’s worth, he also thinks Transformers have no chance at AGI, at least on their own, and his thoughts on the dumb approaches used by AI companies are sound. He is arguing in good faith and has probably thought about this more than anyone else. (Nobody in the Yud cult except for a few weirdos ever took Roko’s Basilisk seriously. Yud deleted the post mostly because he saw that people would go weird about it, a rare example of him being exactly correct.)
As for everyone else… Altman doesn’t care and is using it for market capture. Anthropic probably does care, but is more or less forced down the same path of ineffective safety that captures the market.
Ilya definitely cares, but for all the wrong reasons: he’s massively pro-Israel, so he’ll never achieve a consistently aligned AI anyway, and I’d not want him working on this stuff.
But ultimately, hard alignment is a secondary concern next to dumb corporate HR managers trying to AI the compliance team.