I really don’t understand this perspective. I truly don’t.
You see a new technology with flaws and just assume that those flaws will always be there and the technology will never progress.
Like. Do you honestly think this is the one technology that researchers are just going to say “it’s fine as-is, let’s just stop improving it”?
You don’t understand the first thing about how it works, but people like you are SO certain that the way it is now is how it will always be, and that because there are flaws, developing it further is pointless.
I just don’t get it.
There’s a lot of indication that LLMs are peaking. It’s taking exponentially more compute and data to get incremental improvements. A lot of people are saying OpenAI’s new model is a regression (I don’t know, I haven’t really played with it much). More foundational breakthroughs need to be made, and those kinds of breakthroughs are often the result of “eureka” moments that can’t be forced by just throwing more money at the problem. It’s possible it will take decades before someone discovers a major breakthrough (or it could be tomorrow).
Right. You don’t get it. You hear people talk about a new technology, but they haven’t actually said anything; they’re trying to sell you snake oil, and you convince yourself that you understand what they mean and that it’s somehow meaningful.
We could talk about the history of AI in software development; it goes back decades, and there are legitimate areas of research. But the bubble people are riding right now is throwing LLMs at the general public and pretending those LLMs are good enough to replace large swaths of the current workforce. That’s not going to happen, because it won’t work, because that’s not how those models are designed. And then the snake oil salesmen do the classic bait and switch and start talking about expert systems and minor improvements to them, as if that were something new.
But even if my prediction is wrong, what that actually means is that people shouldn’t need to work full-time jobs anymore.
To be fair, if your argument is that some day AI research will be legitimate and no longer snake oil, then you could easily be right. But there’s no good reason to think that day is going to be in the next few years, rather than the next few decades or even the next few centuries.
I’ve actually worked professionally in the field for a couple of years, since it interested me from the start. I’ve built RAG backends for self-hosted FOSS LLMs, I’ve fine-tuned LLMs on new data, and I’ve even taken the opposite approach and embraced the hallucinations, since I thought they could be useful for more creative tasks (I think that area still warrants research). I also enjoy TTS and STT use cases and have FOSS models for those on most of my devices.
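Just to illustrate what I mean by a RAG backend, here’s a bare-bones sketch of the pattern. The embed_text and generate functions are hypothetical placeholders standing in for whatever self-hosted models you’d actually wire up; the toy implementations are only there so the example runs:

```python
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Placeholder embedding (hashing trick). In practice this would call a
    local FOSS embedding model; this toy version just keeps the sketch runnable."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    return vec

def generate(prompt: str) -> str:
    """Placeholder for a call to a self-hosted LLM; swap in your own model here."""
    return f"[model response to a prompt of {len(prompt)} characters]"

def build_index(documents: list[str]) -> np.ndarray:
    # Embed every document once and stack the vectors into a matrix.
    return np.vstack([embed_text(d) for d in documents])

def retrieve(query: str, documents: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the query embedding and keep the top k.
    q = embed_text(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

def answer(query: str, documents: list[str], index: np.ndarray) -> str:
    # Stuff the retrieved passages into the prompt so the model answers from them.
    context = "\n\n".join(retrieve(query, documents, index))
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```

The whole trick is retrieval: the model only ever sees the handful of passages you hand it, which helps with grounding but doesn’t change what the underlying model fundamentally is.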
I’ll admit that the term AI is extremely vague. It’s like saying you study medicine; it’s a big field. But I keep coming to the conclusion that LLMs, and predictive generative models in general, simply do not work for the use cases they’re being marketed for to consumers, CEOs, and governments alike.
This “AI race” happened because DeepSeek was able to create a model that was more or less equivalent to OpenAI’s and Anthropic’s models. It should have been seen as a race between proprietary and open source, since DeepSeek is one of the more open models at that performance level. But it became this weird nationalist talking point in both countries instead.
There are a lot of things the US is actually in a race with China on, many of which would have immediate impact: renewable energy, international respect, healthcare advances, military sufficiency, human rights, food supplies, and affordable housing, just to name a few.
The promise of AI is that it can somehow help in the above categories eventually, and that’s cool. But we don’t need AI to make improvements to them right now.
I think AI is a giant distraction, while the talk of nationalistic races is just being used for investor buy-in.
Appreciate you expanding on the earlier comment. All fair points.
Feelings don’t care about logic. It’s that easy.