- cross-posted to:
- technology@lemmy.ml
- techtakes@awful.systems
cross-posted from: https://lemmy.ml/post/21121074
OpenAI, a non-profit AI company that will lose anywhere from $4 billion to $5 billion this year, will at some point in the next six or so months convert into a for-profit AI company, at which point it will continue to lose money in exactly the same way. Shortly after this news broke, Chief Technology Officer Mira Murati resigned, followed by Chief Research Officer Bob McGrew and VP of Research, Post Training Barret Zoph, leaving OpenAI with exactly three of its eleven cofounders remaining.
This coincides suspiciously with OpenAI’s increasingly absurd fundraising efforts, in which (as I predicted in late July) OpenAI has closed the largest venture-backed fundraise of all time: $6.6 billion, at a valuation of $157 billion.
One of my big worries about the way people are using LLMs is that they’re being trained to trust whatever the models spit out. Hey Google, what’s the nutritional content of peanuts? And people are learning not to ask where the information came from or to check sources.
One of the many reasons this worries me is that very soon these businesses are going to need to recoup the billions they’re spending, and I wonder how long until these systems start feeding paid promotions to a population that’s been trained to accept whatever it’s told. Imagine what some businesses, or governments, would pay to have exactly their choice of words produced on demand in response to knowledge queries.
The worst of it is, this has been a problem for as long as I can remember, and it’s only getting worse now.
Which search results or which queries could one show the average user to make that point?
There’s no real universal example. You need to show them that it’s wrong about something you know they know, to avoid the Gell-Mann amnesia effect.

I say this from experience. Unfortunately, some people are just average and have interests that are entirely subjective, like makeup trends or alternative medicine, and the effect that "always check the sources" has on those people is to make them distrust every source, since nothing agrees with anything else on those topics.
My search engine usage for 25 years has been just me going “yeah right” and changing the query to make it better. But I’m wired to distrust what I feel is bullshit, and I’ve learned that not many people are.