cross-posted from: https://lemmy.dbzer0.com/post/50693956
Transcript
A post by @zzt@mas.to saying: courtesy of @davidgerard@circumstances.run, Proton is now the only privacy vendor I know of that vibe codes its apps: In the single most damning thing I can say about Proton in 2025, the Proton GitHub repository has a “cursorrules” file. They’re vibe-coding their public systems. Much secure! I am once again begging anyone who will listen to get off of Proton as soon as reasonably possible, and to avoid their new (terrible) apps in any case. https://circumstances.run/@davidgerard/114961415946154957
It has a reply by the author saying: in an unsurprising update for those familiar with how Proton operates, they silently rewrote their monorepo’s history to purge .cursor and hide that they were vibe coding: https://github.com/ProtonMail/WebClients/tree/2a5e2ad4db0c84f39050bf2353c944a96d38e07f
given the utter lack of communication from Proton on this, I can only guess they’ve extracted .cursor into an external repository and continue to use it out of sight of the public
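(For reference, purging a path from a repository’s entire history is typically done with something like the command below. This is a sketch of the standard technique using git filter-repo, not a claim about what Proton actually ran; such a rewrite is hard to hide because every affected commit hash changes and collaborators must re-clone.)

```
# Standard way to strip a path from all of git history.
# Rewrites every commit that touched the path, so all hashes change.
git filter-repo --path .cursor --invert-paths
```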
Speaking as someone who hates generative AI but has been forced to adapt to using AI in the programming field to stay relevant, I don’t think this suggests they’re vibe coding. The programming world is the only place AI has actually added value (I should note it’s done some neat stuff helping with diagnoses in the medical world too), but like everything, you get out of it what you put into it.
Feed it enough instruction and context, and it can handle the drudgery of things like tech debt updates and other things a programmer knows how to do, but would rather offload to a tool. I’ve had Claude do refactors like that while stepping through and reviewing every single change. It has saved me hours, spared me from hell, and made me look good at work.
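To give a concrete picture of the refactor drudgery I mean, imagine migrating callback-style code to async/await across a codebase. A hypothetical sketch (every name in it is made up):

```typescript
// `User` and `db` are stand-ins invented for this example.
type User = { id: string; name: string };

const db = {
  get: async (key: string): Promise<User> => ({ id: key, name: "example" }),
};

// Before: callback style, repeated across hundreds of call sites.
function loadUserCb(id: string, cb: (err: Error | null, user?: User) => void) {
  db.get(`user:${id}`).then(
    (user) => cb(null, user),
    (err) => cb(err)
  );
}

// After: the mechanical rewrite the assistant applies one call site at a
// time, each individual change trivial to review as you step through it.
async function loadUser(id: string): Promise<User> {
  return db.get(`user:${id}`);
}
```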
That’s my grounded take as a person who has worked with Claude a ton.
But AI everywhere else? Fucking worthless. The whole point is for it to do the bullshit mundane tasks so that we humans can do art and passionate work, not the opposite.
I’d say this is mostly because you can immediately test the AI’s results and rule out anything it got wrong, and whatever errors you generate can then be fed back into the AI so it can refine what it’s already written. You never have to just trust the AI (assuming you yourself still know how to code) like you have to when using it for research or for solving problems where you don’t get immediate feedback.
Whether this means programming is actually a viable niche for generative AI or whether this speaks more to the limitations and inherent unreliability of the “knowledge” the AI has, I can’t say.
Also, I don’t know if it’s just me, but I’m more scared by how fast AI is advancing than excited about what it can do for me. That definitely clouds my perception when something is AI generated and makes me a lot more dismissive of any real benefits AI might have brought.
Yeah, you get immediate feedback, vs a scenario where you have to manually check the “facts” it provides in order to ensure it’s not hallucinating. I’ve had Copilot straight up hallucinate functions on me and I knew that they were bullshit instantly.
I iterate with it a ton and feed it back errors it makes, or things like type mismatches. It fixes them instantly and understands the issue almost every single time.
That’s the trick. Iterate often and always give it new instructions if it does something stupid. Basically be as verbose as needed and give it tons of context, desired standards, pitfalls to avoid, whatever. It helps a ton.
That immediate feedback will show you whether the AI has made any syntax or runtime errors. It does not tell you about logic errors.
Logic errors are already the most dangerous kind of programming error, and using AI just makes them even harder to find.
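A contrived example: the snippet below has no syntax or runtime errors, so “it compiled and ran” tells you nothing. Only reading the logic reveals the bug.

```typescript
function applyDiscount(price: number, percent: number): number {
  // Logic error: subtracts the percentage as a flat amount instead of
  // computing price * (1 - percent / 100).
  return price - percent;
}

console.log(applyDiscount(200, 10)); // prints 190; the correct answer is 180
```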
Using AI will only help you with syntax (which any good IDE should already be able to do) and with finding information faster than a search engine (while leaving out important context). AI is not useful for programming anything that will be made public.
The danger of vibe coding is that the people doing it either don’t have the skills to review the AI’s changes or don’t think it’s important to.
If you work with an AI and, instead of spending time typing out boring tasks, spend that time reading through the changes, then there isn’t much of an issue. A skilled software engineer is capable of noticing logic errors in code they read.
If the generated code is so unnecessarily complex that you can’t verify its logic, then scrap it.
I don’t use it that way myself (I only use JetBrains’ line-completion AI), but I don’t see a problem if it is used that way.
However, if I review code that was partly generated by AI and notice that the dev let shitty code through without reviewing it, my review will be salty.
Oh I need to learn from you. I was literally just told I need to learn AI to stay relevant. What’s the minimum way to go about doing so?
I’ve had the greatest success with Claude. The company I work for basically let us all go wild with a few to trial, and Claude has been the best for all of us—even better than GitHub Copilot.
I pay for my own pro plan outside of work and use the VSCode plugin. I’d say read the quickstart guide and experiment with it. Start off with having it do smaller changes and don’t be afraid to be verbose. The more context, the better. Point it to existing files you want to follow the patterns of and model after; give it links to resources for best practices, etc. You can also use it in “plan mode” if you want to see its proposed approach before it starts editing.
I also recommend leaving it so that each change it makes requires your approval (it will do this by default and you can step through everything). That way you always have some control and if it does something dumb, you can stop it at that step and pivot with a different instruction. Alternatively, if you want to see it go ham and carry everything out without approval at each step, you can enable auto-accept.
Once you get into it, start looking into how to craft instruction files. You can have those at your disposal for things like writing tests, language-specific guidelines and practices, etc. That way you can make sure it uses those as a reference so you don’t have to give it the same instructions over and over with every prompt.
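As an illustration, a minimal instruction file might look something like this. Everything in it is invented, and the exact filename depends on the tool (Claude Code reads CLAUDE.md, Cursor reads .cursorrules):

```
# Project guidelines

- Use TypeScript strict mode; never introduce `any`.
- New components follow the existing patterns in src/components/.
- Write tests with Jest, colocated as *.test.ts next to the source file.
- Run the linter and the test suite before declaring a task done.
```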
If you hate writing tests, I’ve had really good luck letting it handle that. I tend to use it more for the bulk tasks that suck. For things where I want more control, I work with it on a piecemeal basis in my project.
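For a sense of what offloading tests looks like, here’s a hypothetical example of the boilerplate I’d let it draft and then review (`slugify` and the Jest setup are placeholders, not real project code):

```typescript
// Repetitive enough that the assistant drafts it in seconds; simple
// enough that reviewing it takes barely longer.
import { slugify } from "./slugify";

describe("slugify", () => {
  it("lowercases and hyphenates", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips punctuation", () => {
    expect(slugify("Hello, World!")).toBe("hello-world");
  });

  it("handles empty input", () => {
    expect(slugify("")).toBe("");
  });
});
```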
I use it for obscure methods I don’t know off the top of my head, where searching the documentation would take longer than just letting the AI write a code snippet and then looking up any functions it uses that I don’t recognize.
It’s kind of like searching, except I can ask for things in a more vague manner.
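For instance, I can vaguely ask to “group these orders by customer” and get something like the snippet below. The only thing left to verify is any call I don’t recognize; here that’s Object.groupBy, which is real but recent (ES2024, Node 21+), so I’d confirm my runtime supports it before trusting the snippet.

```typescript
// Hypothetical output of a vague "group these by customer" request.
const orders = [
  { customer: "ada", total: 12 },
  { customer: "grace", total: 7 },
  { customer: "ada", total: 3 },
];

const byCustomer = Object.groupBy(orders, (order) => order.customer);
console.log(byCustomer);
// { ada: [{ customer: "ada", ... }, { customer: "ada", ... }], grace: [{ ... }] }
```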