Pretty callous and myopic responses here.
If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.
It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.
It’s completely valid to want the tool to just be a tool and “nothing more”.
I get it, it’s not meant to be used this way, but like…
“The purpose of a system is what it does”
great (and brief) article.
lel we have a lot to learn from those early systems theorists / cyberneticians.
Literal conversation I had with a coworker earlier:
Me - AI, outside of a handful of specific cases like breast cancer screening, is completely useless at best and downright harmful at worst.
Coworker - no AI is pretty good actually, I used ChatGPT to improve my CV.
Me - did you get the job?
Coworker -
Except the CV isn’t the only factor in getting a job, so that argument proves nothing.