Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
The once and future king + ol’ muskrat give their most sensible total nuclear annihilation takes. Fellas, are we cooked?
This has to be hands down the absolute dumbest take I’ve seen from Musk ever. Dude has the mental capacity of a boiled pear.
They are both stupid men who repeat stuff they hear to make themselves look good. So the question is: who are the “very smart people” this time telling numbnuts like these two that nuclear war is survivable - and by extension winnable? Because if that’s the US defense establishment, then yeah, we might be cooked.
Alex Jones for one is convinced USA would “win” a nuclear war, so
Considering they were saying this while having trouble doing internet radio at scale, a problem basically solved 20 years ago, I’m not sure we should listen to them.
Related to Musk, Trump and all the other fools: PrimalPoly revealing just how shallow and culture-war-brainwormed a thinker he is.
Image description:
Musk: Happy to host Kamala on an 𝕏 Spaces too
PrimalPoly: Suggested questions for Kamala:
- How do crypto blockchains work, & why are so many Americans skeptical of Central Bank Digital Currencies?
- How would you stop the US gov’t from colluding with Big Tech social media companies to censor Americans?
- What is the main cause of inflation?
- What is a woman?
Description ends. A question I have for anybody with a screenreader: does this spoiler method work? And does the screenreader properly handle the letter X as used on Twitter, namely 𝕏?
- They don’t & yall are “skeptical” of ID cards because it’s the Mark of the Beast, so go figure
- Easy, regulate Big Tech to the ground until there’s only Small-to-Medium Tech left
- Air, most of the time.
- Your mom.
Can I be a VP or at least Chief of Staff now
You have all my votes!
5.) Why do people keep calling us weird?
“Well, if you had read my paper on evolutionary psychology I did while looking at sex workers, accusations of weirdness are actually a sign of …”
Spoilers are an HTML element so they should work everywhere. Mastodon just shows the text without spoiler or CW. Letters in a different typeface specified by Unicode are announced the same as regular letters, without any special emphasis, to the dismay of mathematicians who would want “double-struck X” announced as such.
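For the curious, you can check the mechanics yourself in a couple of lines; this is a generic sketch of the Unicode side of it, not specific to any particular screenreader:

```python
import unicodedata

x = "𝕏"  # U+1D54F, the character Twitter/X uses in its branding

# The formal Unicode name makes the "double-struck" intent explicit:
print(unicodedata.name(x))               # MATHEMATICAL DOUBLE-STRUCK CAPITAL X

# NFKC compatibility normalization folds it back to a plain letter,
# which is roughly what screenreaders do when announcing it:
print(unicodedata.normalize("NFKC", x))  # X
```

So the fancy typeface lives only in the codepoint's compatibility decomposition, and anything that normalizes text treats it as an ordinary X.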
Watching this election has been amazing! LIKE WOAH what a fucking obviously self-destructive end to delusion. Can I be optimistic and hope that with EA leaning explicitly heavier into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think “Effective” means anyway.
I cannot get over the fact that this man-child who is so concerned with “the future of humanity” is both outright trying to buy the presidency and downplaying the very real weapons that can easily wipe out 70% of the Earth’s population in 2 hours. Remember y’all, the cost of microwaving the world is negligible compared to the power of spicy autocomplete.
The Bismarck Analysis crew were sneering at Sagan being a filthy peace activist so I would hazard that the era of ‘survivable nuclear war’ rides again.
let’s see how things are going on twitter
friend looked at this and said “ah race science geoguesser guy”
Genomeguesser
“Inexplicable Cimmerian Vibes” is the name of my next band.
Bonus points if this turns out to be the output of an LLM trained by phrenologists.
“Inexplicable Cimmerian Vibes”
But all the band members have the Homer Simpson bodytype. Sadly Okilly Dokilly stopped.
omg, next time my wife asks me how she looks, I’m definitely dropping that “legible magyar admixture”
Edit: Didn’t work. She started talking about how in the old country, the Hungarians chased her family out of the village for being religious minorities. I give this approach 0 bags of popcorn and a magen david.
“Babe, you’re looking Haplogroup I-M437 tonight. No. Not M-437. Damn, girl, you’re an M-438.”
When you really want to confuse your astronomy post-doc partner.
EDIT: I’ve been reliably informed that that’s too many Messier objects.
That’s certainly one approach to commenting on someone’s picture. Pretty sure it’s better to stick with the standard “Wow! 😍😍😍” but this certainly sticks out from the crowd?
the original is one of the new approaches from the Dimes Square nazis, isn’t it?
In another forum I’m in, someone posted this article and asked if someone, anyone could understand it. I kept schtum.
In particular she felt like Anna, whom she’d been closest with, was being dishonest about what they hoped to achieve with this whole project. Sanje further alleged that Anna’s good standing largely stemmed from her incomprehensibility, because people don’t have a clue what this is actually all about. Possibly Anna doesn’t, either.
most straightforward hegelian
jfc what did I just spend 20m reading
I got as far as “Dimes Square bohemians” in the fourth sentence before realizing that everything in that article I recognized, I would regret.
Haela Hunt-Hendrix, the singer from the black metal band Litvrgy, was one of the principal organizers of this “symposium.” […] Besides making music, she seems to be interested in esoteric religious themes, numerology, and Orthodox iconography. In any case, Hunt-Hendrix claimed that Anna was stealing her ideas and twisting them in a “cryptofascist” manner.
Oh no! How could she!
Oh my god I only now notice the title is a reference to Tiqqun’s Preliminary Materials For a Theory of the Young-Girl, this is like a supercollision of dumbfuck cryptoreactionary nonsense I’ve obsessed over.
Hegelian e-girls’ VIP “symposium”
Excuse the rather formal philosophical latin but qvid in fvck?
I tried looking through the post to find out what possibly they could have to do with Hegel and found
Thankfully, Matthew shared the Googledoc the e-girls had sent him with their prepared remarks. My commentary over the next several paragraphs will only make sense if you read over them (they’re mercifully short), so I’d urge everyone to open up the hyperlink and give it a quick look.
Okay, first of all, it’s like 5 pages, “mercifully short” lol, go take a hike. Second,
Concrete philosophizing means applying insight to the alchemical transformation of everyday life.
This is in the first paragraph. I feel like reading this would make me devolve into an entire day of incoherent screaming and I have enough respect for my coworkers and loved ones to not subject them to that
I found the most HN comment of all time:
What sort of mating strategy are you optimizing for?
Optimizing mating strategies? But I’m terrible at chess!
please tell me the topic was repopulating endangered species of animals
oh who the fuck am I kidding, it’s the orange site
we’re talking about the endangered species of white cishet males here please don’t demean this important topic with jokes
Edit: I wish I was joking, but a luser with a classical Greek-ish handle replies
Women are optimizing 80% of men out of the gene pool.
What the men do is irrelevant.
What the men do is irrelevant.
What most of the orange site frequenting men do is indeed irrelevant, though for different reasons than they think.
Some nerds found the r/K selection strategy wikipedia pages.
That’s the state of the job market these days: even the pick-up artists have STEM degrees.
It’s amazing to watch them flock together like this, nature is beautiful 😍
This is what happens in the absence of a natural predator.
No joke but actually yes?
Looks like an opportunity for the DoT to introduce a breeding colony of traffic cones to the area!
I used to wonder if they had thought about deadlock/livelock re self driving cars. Thanks to modern technology I no longer have to wonder. Thanks!
Oh this sounds like a dog I used to have as a kid! She needed more enrichment during the day or else she’d bark into the void all night and get super excited when another dog barked back.
Have they tried taking the waymos out for walkies?
saw a video of this yesterday, that “honk” title extremely understates how fucking dumb the problem is
in the video I saw, those dumb-ass things are literally crawling forward and back in the parking lot, because the one in front of it is also doing it, because…
yes, a multi-car movement deadlock, with a visually clear solution (which any human driver would be able to implement in seconds) that nonetheless still doesn’t happen because….? I guess waymo didn’t code in inter-car communication or something
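For anyone who wants the textbook version of what’s apparently happening in that parking lot: a cycle in the “who’s waiting on whom” graph is the classic definition of deadlock, and detecting one is trivial. A toy sketch (car names and the graph itself are made up for illustration, obviously not Waymo’s actual code):

```python
# Toy wait-for graph: each car is blocked behind another car.
waits_for = {"car_a": "car_b", "car_b": "car_c", "car_c": "car_a"}

def find_deadlock(waits_for):
    """Return a list of mutually-blocked cars (a cycle), or None."""
    for start in waits_for:
        seen = []
        node = start
        while node in waits_for:
            if node in seen:
                return seen[seen.index(node):]  # the cycle itself
            seen.append(node)
            node = waits_for[node]  # follow the chain of blocked cars
    return None

print(find_deadlock(waits_for))  # ['car_a', 'car_b', 'car_c']
```

Any human driver resolves this by one car simply backing out of the cycle; the sneer is that the fleet apparently has no mechanism to pick that one car.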
seriously, find a copy and watch. it’ll give a lovely kicker to your day :>
This came up in a podcast I listen to:
WaPo: “OpenAI illegally barred staff from airing safety risks, whistleblowers say”
archive link https://archive.is/E3M2p
OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.
While I’m not prepared to defend OpenAI here I suspect this is just to shut up the most hysterical employees who still actually believe they’re building the P(doom) machine.
I mean, if you play up the doom to hype yourself, dealing with employees that take it seriously feels like a deserved outcome.
Short story: it’s smoke and mirrors.
Longer story: This is how software releases work now, I guess. A lot is riding on OpenAI’s anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there’s no more training data. So the next trick is that, for their next batch of models, they have “solved” various problems that people say you can’t solve with LLMs, and they’re going to be massively better without needing more data.
But, as someone with insider info, it’s all smoke and mirrors.
The model that “solved” structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it’s a price optimization afaik).
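If that speculation is right, the “solution” would amount to something like this sketch; the `call_model` hook here is a stand-in for a real API call, not anything from OpenAI’s actual implementation:

```python
import json

def structured_output(call_model, prompt, max_tries=5):
    """Re-sample the model until its response parses as JSON.
    'Solving' structured output by brute force: every retry is
    another (billed) API call, hence 'price optimization'."""
    for _ in range(max_tries):
        raw = call_model(prompt)
        try:
            return json.loads(raw)  # parser validates on the other end
        except json.JSONDecodeError:
            continue  # invalid output? just roll the dice again
    raise ValueError(f"no parseable response after {max_tries} tries")

# Fake model that fails twice before emitting valid JSON:
responses = iter(['not json', '{"broken', '{"ok": true}'])
print(structured_output(lambda p: next(responses), "give me JSON"))  # {'ok': True}
```

Nothing here makes the model better at producing structure; it just hides the failures behind retries.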
The next large model launching with the new Q* change tomorrow is “approaching AGI because it can now reliably count letters”, but actually it’s still just agents (Q* looks to be just a cost optimization of agents on the backend, that’s basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it’s all the things that already exist independently, but wrapped up together. Interestingly, they’re so confident in this model that they don’t run the resulting Python themselves. It’s still up to you or one of those LLM wrapper companies to execute the occasionally-broken code to, um… checks notes, count the number of letters in a sentence.
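For reference, the entire “capability” being described, once somebody actually executes the generated Python, boils down to something like this (a generic sketch, not the model’s actual output):

```python
def count_letter(text, letter):
    """Count case-insensitive occurrences of a single letter in text --
    the kind of trivial program the model reportedly hands back to the
    user rather than running itself."""
    return sum(1 for ch in text.lower() if ch == letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

That is the distance between “the model can count letters” and “the model can emit a three-line program that counts letters, which you then have to run”.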
But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their mind.
Expect more of this around GPT-5 which they promise “Is so scary they can’t release it until after the elections”. My guess? It’s nothing different, but they have to create a story so that true believers will see it as something different.
Yeah, I’m not in any doubt that the C-level and marketing team are goosing the numbers like crazy to keep the bubble from bursting, but I also think they’re the ones most cognizant of the fact that ChatGPT is definitely not the Doom Machine. But I also believe they have employees, True Believers, who they cannot fire because they would spread a hella lot of doomspeak if they did.
I also believe they have employees, True Believers, who they cannot fire because they would spread a hella lot of doomspeak if they did.
Part of me suspects they probably also aren’t the sharpest knives in OpenAI’s drawer.
It can be both. Like, OpenAI is probably kind of hoping that this story spreads widely and is taken seriously, and has no problem suggesting, implicitly and explicitly, that their employees’ stock is tied to how scared everyone is.
Remember when Altman almost got outed and people got pressured not to walk? That their options were at risk?
Strange hysteria like this doesn’t need just one reason. It just needs an input dependency and ambiguity; the rest takes care of itself.
Well, it’s now yesterday’s tomorrow and while there’s an update I’m not seeing a Q* announcement.
Q*
My understanding is that it was renamed or rebranded to Strawberry, which is itself nebulous marketing: maybe it’s the new larger model, or maybe it’s GPT-5, or maybe…
it’s all smoke and mirrors. I think my point is, they made some cost optimizations and mostly moved around things that existed, and they’ll keep doing that.
OH
I first saw this then later saw the “openai employees tweeted 🍓” and thought the latter was them being cheeky dipshits about the former. admittedly I didn’t look deeper (because ugh)
but this is even more hilarious and dumb
I’m not seeing a Strawberry announcement either.
The wikipedia page for TESCREAL is “disputed”, always a good sign when the online right launches skirmish actions against front-line trenches
‘TESCREAL’ refers to a nonsense conspiracy theory that disparages people such as Nick Bostrom without citing any sources that are credible on the question of whether Nick Bostrom is an ‘evil eugenicist’ or whatever.
WP:LOL. WP:LMAO even.
I’m ok with this because everytime Nick Bostrom’s name is used publicly to defend anything, and then I show people what Nick Bostrom believes and writes, I robustly get a, “What the fuck is this shit? And these people are associated with him? Fuck that.”
A credible source on whether Nick Bostrom is a weirdo is Nick Bostrom cited verbatim
WP:YEAHOK, WP:IWILLALLOWIT
Post from July, tweet from today:
It’s easy to forget that Scottstar Codex just makes shit up, but what the fuck “dynamic” is he talking about? He’s describing this like a recurring pattern and not an addled fever dream
There’s a dynamic in gun control debates, where the anti-gun side says “YOU NEED TO BAN THE BAD ASSAULT GUNS, YOU KNOW, THE ONES THAT COMMIT ALL THE SCHOOL SHOOTINGS”. Then Congress wants to look tough, so they ban some poorly-defined set of guns. Then the Supreme Court strikes it down, which Congress could easily have predicted but they were so fixated on looking tough that they didn’t bother double-checking it was constitutional. Then they pass some much weaker bill, and a hobbyist discovers that if you add such-and-such a 3D printed part to a legal gun, it becomes exactly like whatever category of guns they banned. Then someone commits another school shooting, and the anti-gun people come back with “WHY DIDN’T YOU BAN THE BAD ASSAULT GUNS? I THOUGHT WE TOLD YOU TO BE TOUGH! WHY CAN’T ANYONE EVER BE TOUGH ON GUNS?”
Embarrassing to be this uninformed about such a high-profile issue, no less one that you’re choosing to write about derisively.
Surely this is 3 or 4 different anti-gun control tropes all smashed together.
ah, jeez, AI bros are trying to make deepfakes even fucking worse:
Deep-Live-Cam is trending #1 on github. It enables anyone to convert a single image into a LIVE stream deepfake, instant and immediately
Most of the replies are openly lambasting this shit like it deserves, thankfully
“help artists with tasks such as animating a custom character or using the character as a model for clothing etc”
The “deepfake” and “(uncensored)” in the repo description have me questioning that ever so slightly
Who had Trump accusing the Harris campaign of using AI to inflate crowd size photos on their Election ‘24 bingo card? Anyway, I’m sure that being associated with fraud and fakes is Good For AI.
it going down the same co-evolution path as cybercoins did is chefskiss.bmp
Picked up an oddly good sneer from a gen-AI CEO, of all people (thanks to @ai_shame for catching it):
jesus, that’s telling. and I can 100% see that sentence forming in the heads of the types of people who fall over themselves to create something like these tools. so caught up in the math and the technical cool, they can’t appreciate other beauty
Not a sneer, but something that’ll inspire plenty of schadenfreude:
Brian Merchant: The artists fighting to save their jobs and their work from AI are gaining ground
Brian’s done plenty of good sneers on AI, I’d recommend checking him out
Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?
This is kind of the central issue to me honestly. I’m not a lawyer, just a (non-professional) artist, but it seems to me like “using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work” is extremely not fair use. In fact it’s kind of a prototypically unfair use.
Meanwhile Midjourney and OpenAI are over here like “uhh, no copyright infringement intended!!!” as though “fair use” is a magic word you say that makes the thing you’re doing suddenly okay. They don’t seem to have very solid arguments justifying them other than “AI learns like a person!” (false) and “well google books did something that’s not really the same at all that one time”.
I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers, but something about the way everyone seems to frame this as “oh, both sides have good points! who will turn out to be right in the end!” really bugs me for some reason. Like, it seems to me that there’s a notable asymmetry here!
I dunno, I know that legally we don’t know which way this is going to go, because the ai people presumably have very good lawyers
You’re not wrong on the AI corps having good lawyers, but I suspect those lawyers don’t have much to work with:
- Pretty much every AI corp has been caught stealing from basically everyone (with basically everyone caught scraping without people’s knowledge or consent, and OpenAI, Perplexity, and Anthropic all caught scraping against people’s explicit wishes)
- Said data was used to create products which, either implicitly or [explicitly](https://archive.is/jNhpN), produce counterfeits of the stolen artists’ work
- Said counterfeits are, in turn, destroying the artists’ ability to profit from their original work and discouraging them from sharing it freely
- And to cap things off, there’s solid evidence pointing to the defendants being completely unrepentant in their actions, whether that be Microsoft’s AI boss treating such theft as entirely acceptable or Mira Murati treating the job losses as an afterthought
If I were a betting man, I’d put my money on the trial being a bloodbath in the artists’ favour, and the resulting legal precedent being one which will likely kill generative AI as we know it.
God, that would be the dream, huh? Absolutely crossing my fingers it all shakes out this way.
Stranger things have happened. But in either case, we should commit to supporting every effort. If one punch doesn’t work take another. Death by a million cuts.
Like, it seems to me that there’s a notable asymmetry here!
I think that’s a great framing here.
The link seems b0rked, do you have another one?
Fixed the link - thanks for catching it.
https://www.reddit.com/r/CharacterAI/comments/1eqsoom/guys_we_have_to_do_somthing_about_this_fiӏtеr/
This community pops up on /r/all every so often and each time it scares me.
Sometimes I see kids’ games (and all games, really) have ultra-niche, super-online protests that are like “STOP Zooshacorp from DESTROYING K-Smog vs. Batboy Online”, and when I look closer it’s either even more confusing or it’s about something people didn’t like in the latest update. This is like that, but with an awful twist where it’s about people getting really attached to these AI girlfriend/sex roleplay apps. The spelling and sentences make it seem like it’s mostly kids, too.
edit: here’s a terrible example!
Yesterday I saw a link to some podcast/post float by, of an interview with some genml company “discussing people falling in love with, having relations with, and even wanting to marry”, where the ceo is “okay with it”. didn’t click because ugh, but will see if I can find it
and ofc all these weird fucking things will pop the moment their vc runs out or openai raises prices or whatever. bet you they don’t have any therapy contingency for helping people with their ai partners suddenly getting vc-raptured
I remember 15 years ago when I read about a Japanese man marrying a character from a dating sim game (source, archive link).
The internet clowned on him, but he was very serious, and it was the first time when I realized that these “anime waifu” people probably aren’t all just taking the piss.
There’s a whole socio-economic angle there, of course, which I don’t think I wanna get into here, but to me this whole “AI girlfriend” market really seems like a low-effort take on “dating sim as a service” with as much game removed as possible but the exploitative nature turned up to fucking eleven.
The weird thing is, from my perspective, nearly every weird, cringy, niche internet addiction I’ve ever seen or partaken in myself has produced two kinds of people: those who live through it and whose perspective widens, and those who don’t.
Like, I look back at my days of spending 2 days at a time binge playing World of Warcraft with a deep sense of cringe but also a smirk because I survived and I self regulated, and honestly. Made a couple of lifetime friends. Like whatever response we have to anime waifus, I hope we still recognize the humanity in being a thing that wants to be entertained or satisfied.
It’s really funny that this was probably the closest thing to a killer app powered by genAI to exist.
Wonder if they’re getting rid of this stuff because they realized it’s actually a liability to mine these ERP convos for data and they’re burning money on every conversation as it is.
Archive link? The post was removed
post still seems up here. try old.reddit if attempting to view from mobile (the “new” reddit site is remarkably rabid on mobile)
There’s a link to what appears to have been a picture (reddit.com/gallery) but it’s dead. If you go through “new” Reddit it just says “removed by moderators”.
I can’t really tell what this is about from “we HAVE to do something about this” when “this” is an empty space :/
Oh whoops, I should have archived it.
There were about 7 images posted of users roleplaying with bots, all ending with a bot response that cut off halfway with an error message that read “This content may violate our policies; blablabla; please use the report button if you believe this is a false positive and we will investigate.” The last one was some kind of parody image making fun of the warning.
Most of them were some kind of romantic roleplay with bad spelling. One was like, “i run my hand down your arm and kiss you”, and the bot’s response triggered the warning. Another one was like, “*is slapped in the face* it’s okay, I still love you”, and the rest of the message generated a warning. There wasn’t enough context for that one, so the person might have been writing it playfully (?), but that subreddit has a lot of blatant sexual violence regardless.
yall might want to take notice of this thing https://discuss.tchncs.de/post/20460779
https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2024-08-14/Recent_research
STORM: AI agents role-play as “Wikipedia editors” and “experts” to create Wikipedia-like articles, a more sophisticated effort than previous auto-generation systems
ai slop in extruded text form, now longer and worse! and burns extra square kilometers of rainforest
People out there acting like “research” using LLMs is ethical
LLM, tell me the most obviously persuasive sort of science devoid of context. Historically, that’s been super helpful so let’s do more of that.
literally why would you do this. you can research anything you stupid bastards why would you make this
we propose the STORM paradigm for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking
oh come the fuck on
The authors hail from Monica S. Lam’s group at Stanford, which has also published several other papers involving LLMs and Wikimedia projects since 2023 (see our previous coverage: WikiChat, “the first few-shot LLM-based chatbot that almost never hallucinates” – a paper that received the Wikimedia Foundation’s “Research Award of the Year” some weeks ago).
from the same minds as STOTRMPQA comes: we constructed this LLM so it won’t generate a response unless similar text appears in the Wikipedia corpus and now it almost never entirely fucks up. award-winning!
Babe, new AI doom vector just dropped: AGI will corrupt knitting sites so crafters make Langford visual hack patterns![1]
https://www.zdnet.com/article/how-ai-scams-are-infiltrating-the-knitting-and-crochet-world/
[1] doom scenario is my interpretation, not actually included in ZDnet article.
Sadly, Langford hacks seem to have never achieved memetic takeoff. An internet legally enforced, on pain of death, to be text-only would probably be a good thing.
Jimmy Buffet fans in shambles.
Capitalist blames lazy workers for not putting in the hours to make themselves obsolete.
WSJ: Eric Schmidt Says Google Is Falling Behind on AI—And Remote Work Is Why
Archive: https://archive.is/JXvtV
Eric Schmidt Says Google Is Falling Behind on AI—And Remote Work Is Why
That’s another great benefit of remote work, then.
Also imagine …
work-life balance […] was more important than winning
… saying this unironically.
I always want to point out how there are never specific metrics attached to these criticisms. Whenever I’ve seen actual numbers checked, there doesn’t appear to be a significant difference between before and after companies started WFH during the pandemic.
Of course, I also haven’t looked too closely, because I’ve been too busy enjoying my life rather than pretending my boss is funny at a water cooler.
So on one hand the CEOs want their minions back in the office, and on the other they want to replace them with AIs?
Sounds like a conundrum. Or a business opportunity!
Presenting Srvile! The brand new Servility as a Service company, with AI powered robots that will laugh at all boss jokes at the water cooler and say things like “That is such a great idea boss! Since I am an AI I can’t realise that you are just regurgitating what you read on Xshitter!” and “We certainly need more AI to solve any problem!”
Call now to order!
(AI may at times be enhanced by remote human control for “quality control”. Actual level of servility may vary and is not guaranteed.)