To expand on this a bit: AI in medicine is getting really good at cancer screening in specific use cases.
People now heavily associate it with LLMs hallucinating and talking out of their ass, but forget how completely AI destroys people at chess. AI is already beating top physics-based models at weather prediction, hurricane path forecasting, protein folding, and plenty of other use cases.
On specific, well-defined problems with a clear outcome, AI can become far more accurate than any human. It’s not so much about removing humans as handing humans tools that make medicine both more effective and more efficient at the same time.
The problem is the use of “AI” as a generic term for everything. Algorithms have been around for a while, and I’m pretty sure the AI cancer detectors are machine learning models that aren’t related to LLMs at all.
Yeah, absolutely. I’m specifically talking about AI as in neural networks, reinforcement learning, machine learning, and so on. Top-of-the-line physics-based weather models are now being beaten by neural networks on a bunch of forecasting benchmarks.
LLMs as doctors are pretty garbage, since they’re predicting words instead of classifying a photo into yes/no or detecting which stage of the sleep cycle a sleeping patient is in.
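To make that distinction concrete, here’s a minimal sketch of the yes/no side, assuming made-up weights and a stand-in image array (names like `p_tumor` are purely illustrative): the classifier squashes a weighted sum into a single probability and thresholds it, rather than generating text.

```python
import numpy as np

# Hypothetical example: a single logistic unit deciding "tumor: yes/no"
# from a flattened grayscale image. Real screening models are deep
# convolutional networks, but the output end works the same way.

rng = np.random.default_rng(0)
image = rng.random(64 * 64)          # stand-in for a flattened 64x64 scan
weights = rng.normal(size=64 * 64)   # stand-in for learned weights
bias = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p_tumor = sigmoid(image @ weights + bias)  # probability in (0, 1)
print("yes" if p_tumor > 0.5 else "no", f"(p = {p_tumor:.3f})")
```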
Fun fact: the closer you get to the actual math, the less magical the words become. Marketing says “AI”, programmers say “machine learning” or “neural network”, mathematicians say “reinforcement learning”.
I guess I worked with a guy who did algorithms and neural networks, so I sorta just equated the two. I was very obviously not a CS major.
Maybe it was my CS major talking there. An algorithm is a sequence of steps to reach a desired outcome, such as updating a neural network. The network itself is essentially just a big heap of values you multiply through, if you were curious.
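If you want to see both halves of that in one place, here’s a toy sketch, assuming a made-up two-layer network learning XOR: the “heap of values” is the weight matrices `W1` and `W2`, the forward pass just multiplies through them, and the “algorithm” is the gradient-descent loop that updates them.

```python
import numpy as np

# Illustrative sketch only: a tiny two-layer network learning XOR.
# The "network" is literally just W1, b1, W2, b2 -- a heap of numbers.
# The "algorithm" is the training loop that nudges those numbers.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: multiply through the values.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error (constants folded into lr).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update step: the "sequence of steps to reach a desired outcome".
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```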