All this really does is show areas where the writing requirements are already bullshit and should be fixed.
Like, consumer financial complaints. People feel they have to use LLMs because when they write in using plain language they feel they're ignored, and they're probably right. It suggests that these financial companies are under-regulated and overly powerful. If they weren't, they wouldn't be able to ignore complaints that aren't written in lawyerly language.
Press releases: we already know they’re bullshit. No surprise that now they’re using LLMs to generate them. These shouldn’t exist at all. If you have something to say, don’t say it in a stilted press-release way. Don’t invent quotes from the CEO. If something is genuinely good and exciting news, make a blog post about it by someone who actually understands it and can communicate their excitement.
Job postings. Another bullshit piece of writing. An honest job posting would probably be something like: "Our sysadmin needs help because he's overworked; he says some of the key skills he'd need in a helper are X, Y and Z. But even if you don't have those skills, you might be useful in other ways. It's a stressful job, and it doesn't pay that well, but it's steady work. Please don't apply if you're fresh out of school and don't have any hands-on experience." Instead, job postings have evolved into some weird cargo-culted style of writing involving stupid phrases like "the ideal candidate will…" and lies about something being a "fast-paced environment" rather than simply "disorganized and stressful". You already basically need a "secret decoder ring" to understand a job posting, so yeah, why not just feed a realistic job posting to an LLM and have it come up with some bullshit.
Exactly. LLMs assisting people in writing soul-sucking corporate drivel is a good thing; I hope this changes the public perception of the whole umbrella of "formal office writing" (internal emails, job applications, etc.). So much time-wasting bullshit that produces nothing.
I mean there are court documents written with the help of AI.
And there are lawyers who have been raked over the coals by judges when the lawyers have submitted AI-generated documents where the LLM “hallucinated” cases that didn’t exist which were used as precedents.
I am not saying the two are directly comparable, but I wonder if the same "most rapid change in human written communication" claim could also have been made about the proliferation of computer-based word processors equipped with spelling and grammar checks.
I’m the type to be in favor of new tech but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT as their own, but the quality of answers has really dropped in the past year.
Just this last week, I was grading a quiz on persuasion, and for fun, I have students pick an advertisement to analyze. You know, to personalize the experience. This was after the Super Bowl, so we're swimming in examples. It can even be audio, like a podcast ad, or a fucking bus bench, or literally anything else.
60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike Just Do It.
Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It'd be wrong, because you have to reference the book, but at least it wouldn't be as blatant.
I didn't unilaterally give them 0s, but they usually got it wrong anyway, so I didn't really have to. I did warn them that using it this way on the midterm will likely get them in trouble, though, as it's against the rules. I don't even care that much, because again, it's usually worse quality anyway, but I have to grade this stuff; I don't want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.
As a person who is interested in linguistics, I wonder how LLMs will affect real languages. I wonder if there are any research papers on this.
I'm not aware of any papers about this; especially with how recent LLMs are, it's kind of hard to detect tendencies.
That said, if I had to take a guess, the impact of LLMs in language will be rather subtle:
- Some words will become more common because bots use them a lot, and people become more aware of those words. “Delve” comes to my mind. (Urgh. I hate this word.)
- Swearing will become more common too. I wouldn’t be surprised if we saw an uptick of “fuck” and “shit” after ChatGPT was released. That’s because those bots don’t swear, so swearing is a good way to show “I’m human”.
- Idiosyncratic language might also increase, as a mix of the above and a way to avoid sounding "bland and bot-like", including letting some small typos go through on purpose.
Text-to-speech, mentioned by @Shelbyeileen@lemmy.world, is another can of worms; it might reinforce non-common pronunciations until they become common. This should not be a big issue e.g. in Italian (that uses a mostly regular spelling), but it might be noticeable in English.
I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look at how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having wifi and Ethernet ports - confusing network routers with the power tool. Any human/editor would catch that mistake, but here it is.
I can only see this getting worse.
It’s not just the internet.
Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.
I genuinely believe that the general effectiveness of written communication has regressed.
I've tried using an LLM for coding, specifically Copilot for VS Code. Only about 4 out of 10 times will it accurately generate code, which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.
Apparently Claude Sonnet 3.7 is the best one for coding.
I use it to construct regexes which, for my use cases, can get quite complicated. It's pretty good at doing that.
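Not the commenter's actual patterns, but as an illustration of the kind of regex that's tedious to hand-write and easy to get an LLM to draft: a named-group pattern for ISO-style timestamps, sketched in Python (the pattern and names are my own hypothetical example):

```python
import re

# An ISO-8601-style date with an optional "T"- or space-separated time.
# Named groups keep the pattern readable when it grows complicated.
pattern = re.compile(
    r"^(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})"  # date part
    r"(?:[T ](?P<hour>\d{2}):(?P<minute>\d{2}))?$"       # optional time part
)

m = pattern.match("2024-03-15T09:30")
print(m.group("year"), m.group("hour"))  # → 2024 09
```

The upside of having an LLM draft something like this is exactly what the comment says: it's a small, self-contained task that you can verify immediately with a few test strings.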
I like using GPT to generate PowerShell scripts; surprisingly, it's pretty good at that. It's a small task, so it's unlikely to go off the deep end.
The Internet was shit before LLMs
It had its fair share of shit and that gradually increased with time, but LLMs are like a whole new level of flooding everything with zero effort
If it's generated by an LLM, is it still "human written communication"?
Even if it was fully AI-generated, it's still human communication in a written format, at least until the AIs start writing to each other without a human intermediary.
I thought there was a social network that is completely filled with AI and no real humans.
Edit: found it https://socialai.co/
https://www.theverge.com/2024/9/17/24247253/social-ai-app-replace-humans-with-bots
Reddit had a subreddit dedicated to bots just posting to each other as well.
/r/subreddit_simulator, IIRC
I think that one used simple Markov chains and was really entertaining for years.
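For anyone curious, a word-level Markov chain like the one that subreddit reportedly used fits in a few lines: record which words follow which, then walk the table picking random successors. A minimal sketch in Python (function names and the toy corpus are my own):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    """Walk the chain from `start`, picking a random successor each step."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the", length=5))
```

Because successors are sampled with the same frequency they appear in the corpus, the output is locally plausible but globally incoherent, which is exactly what made that subreddit entertaining.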
I also remember there was a bot you could summon that would simulate a comment written by you, and it was funny to see those.