If they mean “30% of the code we wrote last month” then I might believe it. Though I bet it is not across the board but deep in one or two areas. Still, it’s a crazy number.
But he said something like “30% of the code in our repositories” which would mean everything, including their entire legacy of code. And that I simply do not believe.
My first thought on reading that is: yeah, like how about 98% of the human genome is “junk DNA” that we have little or no idea what it does. Sometimes when we cut it out, nobody ever notices; sometimes when we cut it out, the system won’t boot up.
It’s a shit article, with TechCrunch changing the words to get people in a flap about AI (for or against). The actual quote is:
“I’d say maybe 20 percent, 30 percent of the code that is inside of our repos today and some of our projects are probably all written by software”
“Written by software” reasonably includes machine-refactored code, automatically generated boilerplate, and things generated by AI assistants. Through that lens 20% doesn’t seem crazy.
I’ve been “automatically writing code” for a system of about a dozen modules: we specify a glue file in JSON between all the modules, and the code-generating software makes units to go in each module that handle the communication interfacing, based on the glue spec. That system has been running for more than 10 years now, and it writes a couple hundred thousand lines of “new code” every time we modify the glue file.
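For anyone curious what that kind of glue-spec generation looks like, here is a minimal sketch in Python. The glue format, the field names (links, from, to, messages), and the route() call in the emitted stubs are all made up for illustration; they are not the actual system described above.

    import json

    # Hypothetical glue spec: each link names a sender module, a receiver
    # module, and the messages that flow between them.
    GLUE_EXAMPLE = """
    {
      "links": [
        {"from": "sensor", "to": "logger", "messages": ["Temperature", "Pressure"]},
        {"from": "logger", "to": "uplink", "messages": ["LogBatch"]}
      ]
    }
    """

    def generate_units(glue_text):
        """Return {module_name: source_text}, one interfacing unit per module."""
        glue = json.loads(glue_text)
        units = {}
        for link in glue["links"]:
            sender, receiver = link["from"], link["to"]
            for msg in link["messages"]:
                # The sender side gets a send_<message>() stub that forwards
                # the payload via an assumed route() helper...
                units.setdefault(sender, []).append(
                    f"def send_{msg.lower()}(payload):\n"
                    f"    route('{receiver}', '{msg}', payload)\n")
                # ...and the receiver side gets a matching handler stub.
                units.setdefault(receiver, []).append(
                    f"def on_{msg.lower()}(payload):\n"
                    f"    raise NotImplementedError('handle {msg} in {receiver}')\n")
        return {module: "\n".join(stubs) for module, stubs in units.items()}

    if __name__ == "__main__":
        for module, source in generate_units(GLUE_EXAMPLE).items():
            print(f"# --- generated unit for module '{module}' ---")
            print(source)

The point is just that every edit to the glue file regenerates all of the per-module interfacing code, so the line count of “code written by software” grows fast even though the hand-written spec stays small.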
Of course it’s just bad writing, but I kind of wouldn’t put it past management to try shoving their multitude of codebases through an LLM at this point.
The A stands for Automation, right?
It wouldn’t surprise me at all if they entered the entire codebase for Windows 11 into an LLM and asked it to optimize it or some shit lol
lmao I just said the same thing before reading your comment
And surprise surprise, it’s worse than ever
Funny, considering Windows 7 consists of exactly 0% AI-generated code.
Yeah that’s a good point.