On the other hand, the track record of old social networks is not great.
And it’s reasonable to posit Twitter is deep into the enshittification cycle.
Still perfectly runnable in kobold.cpp. There was a whole community built up around it with Pygmalion.
It is as dumb as dirt though. IMO that is going back too far.
People still run, or even continue pretraining, llama2 for that reason, as its data is pre-slop.
One good thing that may come of Trump is shaking a lot of complacency out of other countries. Maybe even hurting the far right in them, once the population gets a lot of exposure to full MAGA.
The Facebook/Mastodon format is much better for individuals, no? And Reddit/Lemmy for niches, as long as they’re supplemented by a wiki or something.
And Tumblr. The way content gets spread organically, rather than with an algorithm, is actually super nice.
IMO Twitter’s original premise, of letting novel, original, but very short thoughts fly into the ether, has been so thoroughly corrupted that it can’t really come back. It’s entertaining and engaging, but an awful format for actually exchanging important information, like Discord.
This is called prompt engineering, and it’s been studied objectively and extensively. There are papers where many different personas are benchmarked, or even dynamically created like a genetic algorithm.
You’re still limited by the underlying LLM though, especially something as dry and hyper-sanitized as OpenAI’s API models.
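As a toy illustration of the persona idea, here’s a minimal sketch: prepend different persona prompts to the same task and keep whichever scores best on your eval. The `run_llm` and `score` callables and the persona strings are all hypothetical stand-ins, not anything from a specific paper.

```python
# Minimal sketch of persona-based prompt engineering (all names hypothetical).
from typing import Callable

PERSONAS = [
    "You are a meticulous senior engineer who explains trade-offs.",
    "You are a terse expert who answers in as few words as possible.",
    "You are a patient teacher who reasons step by step.",
]

def build_prompt(persona: str, task: str) -> str:
    # Plain concatenation; a real setup would use the model's chat template.
    return f"{persona}\n\nTask: {task}\nAnswer:"

def best_persona(task: str,
                 run_llm: Callable[[str], str],
                 score: Callable[[str], float]) -> str:
    # run_llm is a stand-in for llama.cpp, an API client, etc.
    results = {p: score(run_llm(build_prompt(p, task))) for p in PERSONAS}
    return max(results, key=results.get)
```

The “genetic algorithm” papers basically automate that loop: mutate and recombine the persona strings themselves instead of picking from a fixed list.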
as opposed to trying desperately to find your article as you scroll and having pop-ups and other things interrupt you as you read
Joke’s on the websites, as I run Cromite, so no pop-ups or anything.
…But also, most of the written web is trash now.
:(
the YouTube experience is far less annoying on average.
Are you sure about that?
I opened YT links without premium in a new browser and holy moly! I got 1-3 minute unskippable ads every time.
I immediately clicked them off, of course.
GN is indeed a rare outlier. They’re like an oldschool tech site that rose at the exact right time to grow up on YouTube.
And our site was like the opposite. Uh… let’s just say many Lemmy users wouldn’t like its editor, but he did not hold back gut punches, and refused to watch his site turn into a clickbait farm.
To add to this:
All LLMs absolutely have a sycophancy bias. It’s what the model is built to do. Even wildly unhinged local ones tend to ‘agree’ or hedge, generally speaking, if they have any instruction tuning.
Base models can be better in this respect, as their only goal is ostensibly “complete this paragraph,” like a naive improv actor, but even that’s kinda diminished now because so much ChatGPT is leaking into training data. And users aren’t exposed to base models unless they are local LLM nerds.
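To make the base-vs-instruct distinction concrete, here’s a rough sketch using llama-cpp-python; the GGUF filenames are placeholders, not recommendations.

```python
# Rough sketch of base vs. instruct prompting (model filenames are placeholders).
from llama_cpp import Llama

# A base model just continues whatever text it's given, improv-style.
base = Llama(model_path="some-base-model.Q4_K_M.gguf", n_gpu_layers=-1)
out = base("The problem with modern social networks is", max_tokens=64)
print(out["choices"][0]["text"])

# An instruction-tuned model expects a chat format, and that tuning is what
# bakes in the "agree and assist" sycophancy bias described above.
chat = Llama(model_path="some-instruct-model.Q4_K_M.gguf", n_gpu_layers=-1)
resp = chat.create_chat_completion(
    messages=[{"role": "user", "content": "Is my plan a good idea?"}]
)
print(resp["choices"][0]["message"]["content"])
```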
I don’t know when the goal post got moved
Ken Paxton, at least?
I briefly wrote articles for an oldschool PC hardware outlet (HardOCP if anyone remembers)… And I’m surprised any such sites are still alive. Mine shut down, and not because they wanted to.
Why?
Who reads written text over their favorite YouTube personality, or the SEO garbage that pops up first in their search, or first-party articles/recs on Steam, and so on? No one, except me apparently: journalistic integrity aside, I’m way too impatient for YouTube videos, and I seem to be the only person on the planet who trusts influencers about as far as I can throw them.
And that was before Discord, Tiktok, and ChatGPT really started eating everything. And before a whole generation barely knew what a website is.
They cited Eurogamer as an offender here, and that’s an outstanding/upstanding site. I’m surprised they can even afford to pay that much as a business.
And I’m not sure what anyone is supposed to do about it.
I saw a Brexit meme a while back, of America going “hold my beer.”
Well, the UK is holding our beer now. Here we go!
BTW, as I wrote that post, Qwen 32B coder came out.
Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.
Yep.
32B fits on a “consumer” 3090, and I use it every day.
72B will fit neatly on 2025 APUs, though we may have an even better update by then.
I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
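For reference, a rough sketch of what “fits on a 3090” looks like in practice with llama-cpp-python. The exact GGUF filename, quant, and context size here are assumptions: a ~4-bit quant of a 32B model is roughly 19-20 GB, which leaves room for a modest KV cache in 24 GB of VRAM.

```python
# Sketch: a ~4-bit quantized 32B model on a single 24 GB GPU.
# Filename and settings are assumptions, not exact recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-32b-instruct-q4_k_m.gguf",
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=8192,       # keep the KV cache small enough to share the 24 GB
)

resp = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a Python function that merges two sorted lists."}]
)
print(resp["choices"][0]["message"]["content"])
```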
I kinda get Republicans taking oil money and lying, but it’s completely unreal that we elected an honest-to-god climate change denier. Trump actually thinks it’s BS, out loud, even with his own military telling him this is a national security threat.
History is going to crucify him (and Americans), and the sad part is he’s too old to ever suffer from it, or any of the consequences.
Yeah, well Alibaba nearly (and sometimes) beat GPT-4 with a comparatively microscopic model you can run on a desktop. And released a whole series of them. For free! With a tiny fraction of the GPUs any of the American trainers have.
Bigger is not better, but OpenAI has also just lost their creative edge, and all Altman’s talk about scaling up training with trillions of dollars is a massive con.
o1 is kind of a joke; CoT and reflection strategies have been known for a while. You can do it for free yourself, to an extent, and some models have tried to finetune this in: https://github.com/codelion/optillm
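As a toy example of the “do it yourself” version, here’s the general shape of a reflection loop (draft, critique, revise). `generate` is a stand-in for whatever chat backend you use, and this is just the pattern, not optillm’s actual implementation.

```python
# Toy sketch of the reflection pattern: draft, critique, revise.
# `generate` is a stand-in for any completion/chat backend.
from typing import Callable

def reflect(question: str, generate: Callable[[str], str], rounds: int = 1) -> str:
    answer = generate(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        critique = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any mistakes or gaps in the draft answer:"
        )
        answer = generate(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nWrite an improved final answer:"
        )
    return answer
```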
But one sad thing OpenAI has seemingly accomplished is to “salt” the open LLM space. There’s way less hacky experimentation going on than there used to be, which makes me sad, as many of its “old” innovations still run circles around OpenAI.
I’d posit the algorithm has turned it into a monster.
Attention should be dictated more by chronological order and what others retweet, not what some black box thinks will keep you glued to the screen, and it felt like more of the former in the old days. This is a subtle, but also very significant change.
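To put that contrast in code terms, here’s a toy sketch of the transparent version: recency plus retweets from people you follow. The field names and the weight are made up; the point is that you can read the ranking rule, unlike an engagement-prediction black box.

```python
# Toy sketch of a transparent feed ranking: newer posts rank higher, and
# retweets from accounts you follow give a boost. Field names are made up.
import time

def feed_score(post: dict, retweet_weight: float = 0.5) -> float:
    age_hours = (time.time() - post["created_at"]) / 3600
    return -age_hours + retweet_weight * post["retweets_from_followed"]

# usage: timeline = sorted(posts, key=feed_score, reverse=True)
```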