

He’s not a demon that craves blood for no reason. There are geostrategic goals at stake.
/u/outwrangle before everything went to shit in 2020, /u/emma_lazarus for a while after that, now I’m all queermunist!
Well, like I said, 10 years after high school - 28 for me.
I was kind of politically aware from middle school on, during the Iraq invasion, mostly because I watched MSNBC every night with my dad over dinner. I’d say I only actually became class conscious after I graduated from high school into the financial crisis and Occupy and Wikileaks and the Arab Spring and “we came, we saw, he died, haha!”
And then it took another 10 years to finally start reading theory. Embarrassing!
Fucking
Kautsky!
Critical support for Trump’s struggle against US cultural and scientific imperialism.
What is the reason you think philosophy of mind exists as a field of study?
In part, so we don’t assign intelligence to mindless, unaware, unthinking things like slime mold - it’s so we keep our definitions clear and useful, so we can communicate about and understand what intelligence even is.
What you’re doing actually creates an unclear and useless definition that makes communication harder and spreads misunderstanding. Your definition of intelligence, which is what the AI companies use, has made people more confused than ever about “intelligence” and only serves the interests of the companies for generating hype and attracting investor cash.
Let me rephrase. If your definition of intelligence includes slime mold then the term is not very useful.
There’s a reason philosophy of mind exists as a field of study. If we just assign intelligence to anything that can solve problems, which is what you seem to be doing, we are forced to assign intelligence to things which clearly don’t have minds and aren’t aware and can’t think. That’s a problem.
If your definition of intelligence doesn’t include awareness it’s not very useful.
My understanding is that the reason LLMs struggle to solve math and logic problems is that those have definite answers, not probabilistic ones. That seems pretty fundamentally different from humans! In fact, we have a tendency to assign too much certainty to things which are actually probabilistic, which leads to its own reasoning errors. But we can also correctly identify actual truth, prove it through induction and deduction, and then hold onto that truth forever and use it to learn even more things.
We certainly do probabilistic reasoning, but we also do axiomatic reasoning, i.e. we are more than probability engines.
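The contrast above can be sketched in a few lines (a toy illustration of my own, not a claim about how brains or LLMs actually work): sampling yields an estimate that wobbles from run to run, while derivation from definitions yields an exact answer that holds forever.

```python
import random

# Probabilistic reasoning: estimate the mean of a fair die by sampling.
# The answer is approximate and changes with the seed and sample size.
rng = random.Random(0)
estimate = sum(rng.randint(1, 6) for _ in range(10_000)) / 10_000

# Axiomatic reasoning: derive the same quantity exactly from the definition
# of expected value: (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5, certain, forever.
exact = sum(range(1, 7)) / 6

print(estimate, exact)
```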
Slime mold can solve mazes.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
What? No.
Chatbots can’t think because they literally aren’t designed to think. If you somehow gave a chatbot a body it would be just as mindless because it’s just a probability engine.
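The "probability engine" point can be made concrete with a deliberately tiny bigram sampler (my own toy sketch, nothing like a production model): it has no concept of meaning at all, only conditional word frequencies, yet it still "generates text."

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real model trains on vastly more data, but the principle
# illustrated here is the same: count what follows what, then roll dice.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-word frequencies for each word (a bigram table).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, freqs = zip(*counts[prev].items())
    return rng.choices(words, weights=freqs, k=1)[0]

# "the" was followed by cat (twice), mat, and fish; the sampler just
# picks among them with weighted dice. No understanding anywhere.
print(next_word("the"))
```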
I’m 100% certain this person saw an electoral map with most of Oregon red by land mass and then based their political opinions on it.
It was all fun and games until he said Israel doesn’t know what the fuck it’s doing.
You literally called it borderline magic.
Don’t do that? They’re pattern recognition engines, they can produce some neat results and are good for niche tasks and interesting as toys, but they really aren’t that impressive. This “borderline magic” line is why they’re trying to shove these chatbots into literally everything, even though they aren’t good at most tasks.
They don’t report Russia’s claims this way. They don’t report Iran’s claims this way.
And people can see it, which is why they don’t trust the media anymore.
Because Israel has a complete ban on reporters in Gaza, for example, there’s no way to corroborate or refute what Israel said. It’s newsworthy to repeat what Israel said, but you can’t blame the media when someone reads that and assumes that the government is telling the truth.
If there’s no way to corroborate or refute what Israel said, don’t print what Israel said. Lies aren’t newsworthy, except as a way to report on the lies themselves for the purpose of debunking them.
Remember when Israel first started bombing hospitals and blamed Islamic Jihad for it? They still don’t claim responsibility for Al-Ahli Arab Hospital, but after a year of targeting hospitals and doctors it’s ridiculous to deny it at this point.
Yet there were few retractions or corrections. As far as CNN and The Guardian are concerned, Israel didn’t bomb that hospital. What a joke.
As you said yourself, the government lies all the time, so why would you assume that “the government said X happened” means that “X happened”?
I don’t think people make that assumption anymore, but that’s because people stopped trusting the media. They published and promoted so many government lies that they’ve destroyed their own credibility.
People expect the media to investigate government claims and to publish the truth, not just parrot the lies they’re fed. When the media doesn’t do that, when all the major news outlets become court stenographers, people lose faith in the media.
Maybe people are expecting too much, but that’s what people have been taught to expect. They were taught that journalists find the truth and report on it. They’re finding out that journalists basically just print what their sources say and they can’t just trust things because they’re in the news anymore.
And it’s going to get worse forever.
My definition of artificial is a system that was consciously engineered by humans.
And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if humans didn’t design the neural network directly from the ground up. The system still evolved from conscious inputs; you can’t erase its roots and call it natural.
Human-like object concept representations emerge from datasets made by humans because humans made them.
Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history, what the fuck?!
He doesn’t kill political opponents just to watch them bleed? My point is that he does things for reasons.
The fact that Westoids can only imagine Russians as subhuman violent idiots is fascinating.