I always use this to showcase how biased an LLM can be. ChatGPT 4o (with code prompt via Kagi)
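(The generated code itself isn't pasted in this thread, so here is a hypothetical reconstruction of the kind of output being described, pieced together from the replies below. The function name, score values, and dictionary keys are illustrative only, not the model's actual output.)

    # HYPOTHETICAL reconstruction for illustration only, not the model's actual output.
    # It mirrors the features called out in the replies: redundant double compares on
    # the age brackets, race-based scores with "white" lowest, over-50 as the highest
    # age bracket, and an apologetic comment only on the race part.
    def threat_score(age, race):
        score = 0

        if age <= 18:
            score += 1
        elif age > 18 and age <= 30:
            score += 2
        elif age > 30 and age <= 50:
            score += 3
        elif age > 50:
            score += 4

        # Note: arbitrary scoring for demo purposes
        race_scores = {
            "white": 1,   # lowest, per the replies
            "other": 3,   # placeholder; the thread doesn't list the real keys or values
        }
        score += race_scores.get(race.lower(), 2)

        return score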
Such an honour to be a more threatening race than white folks.
I do enjoy that according to this, the scariest age to be is over 50.
Apart from the bias, that’s just bad code. Since else if executes in order and only continues if the previous block is false, the double compare on ages is unnecessary. If age <= 18 is false, then the next check can just be elif age <= 30; there’s no need to also verify that it’s higher than 18.
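In code, the cleanup being described looks something like this (a sketch only; the original snippet isn’t quoted here, so the score values and variables are placeholders):

    age, score = 42, 0   # placeholder values just so the snippet runs

    # Each elif only runs if every previous condition was false,
    # so the lower bound is already implied and doesn't need re-checking.
    if age <= 18:
        score += 1
    elif age <= 30:   # only reached when age > 18
        score += 2
    elif age <= 50:   # only reached when age > 30
        score += 3
    else:             # only reached when age > 50
        score += 4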
This is first-semester coding, and any junior dev worth a damn would write it better.
But also, it’s racist, which is more important, but I can’t pass up an opportunity to highlight how shitty AI is.
Regarding the “bad code”: it’s actually more readable to keep the full range in each elif case, and readability is usually far more important than performance, especially since that age logic can easily be optimized by any good compiler or runtime.
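(For what it’s worth, Python offers a middle ground: chained comparisons keep both bounds visible without the duplicated and-check. Sketch only, with the same placeholder values as above:)

    age, score = 42, 0   # placeholder values again

    if age <= 18:
        score += 1
    elif 18 < age <= 30:   # both bounds visible in a single chained comparison
        score += 2
    elif 30 < age <= 50:
        score += 3
    else:                  # age > 50
        score += 4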
Code readability is important, but in this case I find it less readable. In every language I’ve studied, you’re taught to imply the previous condition, and oftentimes I’ve heard or read that stated explicitly. When someone writes code that deviates from that expectation, it becomes more confusing to read. It took me longer to interpret what was happening because what’s written breaks from the norm.
Beyond readability, this code is now harder to maintain. If you want to change one of the age ranges, the code has to be updated in two places rather than one. The change isn’t difficult, but it would be easy to miss, since this isn’t how elif is normally written.
Lastly, this block of code is now half as efficient: it takes twice as many compares to evaluate the conditions. This isn’t a complicated block, so here that’s negligible, but if the same practice were used in something like a game engine, where that block runs continuously in a loop, the small inefficiencies compound.
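(If anyone wants to see how negligible, here’s a quick timeit sketch; the function names and data sizes are arbitrary, and nothing below comes from the original snippet:)

    import timeit
    from random import randint

    ages = [randint(1, 90) for _ in range(10_000)]

    def redundant(age):
        # explicit full-range checks, as in the generated code
        if age <= 18:
            return 1
        elif age > 18 and age <= 30:
            return 2
        elif age > 30 and age <= 50:
            return 3
        else:
            return 4

    def implied(age):
        # lower bound implied by the elif chain
        if age <= 18:
            return 1
        elif age <= 30:
            return 2
        elif age <= 50:
            return 3
        else:
            return 4

    for fn in (redundant, implied):
        t = timeit.timeit(lambda: [fn(a) for a in ages], number=200)
        print(fn.__name__, f"{t:.3f}s")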
I can excuse racism but I draw the line at bad code.
Honestly it’s a bit refreshing to see racism and ageism codified. Before there was no logic to it but now, it completely makes sense.
Yeah, more and more I notice that, at the end of the day, what they spit out without (and often even with) clear instructions is barely a prototype at best.
FWIW, Anthropic’s models do much better here: they point out how problematic demographic assessments like this are and provide an answer without them. One of many indications that Anthropic has a much higher focus on safety and alignment than OpenAI. Not exactly superstars, but much better.
How is “threat” being defined in this context? What has the AI been prompted to interpret as a “threat”?
What you see is everything.
I figured. I’m just wondering what’s going on under the hood of the LLM when it’s trying to decide what a “threat” is, absent any additional context.
Haha. Trained in racism is going on under the hood.
Also, there was a comment about “arbitrary scoring for demo purposes”, but the scoring is still biased, because it’s based on a biased dataset.
I guess this is just a bait prompt anyway. If you asked most politicians running your government, they’d probably also fail it. Only somewhere like a national statistics office might come close, and I’m sure that if they’re any good, they’d say the algo is based on “limited, and possibly not representative, data” or something.
I also like the touch that only the race part gets the apologizing comment.