Bio field too short. Ask me about my person/beliefs/etc if you want to know. Or just look at my post history.

  • 0 Posts
  • 61 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • 5:30-6am, wake up and become competent.

    6-8am - get the kids moving, make multiple breakfasts because they all want something different (including SO), get family prepared to travel, etc.

    8am - start work.

    3-4pm - collect some kids from school, sit in line feeling like you’re the worst because you’re not #1 in line for pickup, but you don’t think being there an hour early is reasonable.

    5pm - end “work day”, begin “evening” and figure out what’s for dinner (we planned this weekly, so it’s not too hard) then make it.

    7pm - bedtime ritual starts. deal with ensuing tantrums because “I don’t want to brush my teeth” or “I’m in the middle of this activity” or “why do I have to read?!”.

    8:30-9pm - kids are in bed. finally.

    9:30-5:30 - MY FUCKING TIME. 8 hours where the rest of the family is asleep and I get to manage myself… poorly. I need 5h of sleep, so bedtime before 12:30 is acceptable. Ignoring rounding errors, 10pm-12am is for me. ignore it at your peril.



  • Wrangling IDE cables with awkward angles so you couldn’t both see and touch the space at the same time. And the case edges were made of knives. And then, yeah, it wouldn’t boot and you’d have to figure out that your master/slave jumpers were incorrect as others have stated and have to remove, tweak and replace the drives.

    Good times.



  • I really like this comment. It covers a variety of use cases where an LLM/AI could help with the mundane tasks and calls out some of the issues.

    The ‘accuracy’ aspect is my 2nd greatest concern: an LLM agent that I told to find me a nearby Indian restaurant, and which then hallucinated one, is not going to kill me. I’ll deal, but be hungry and cranky. When that LLM (LLMs are notoriously bad at numbers) updates my spending spreadsheet with a 500 instead of a 5000, that could have a real impact on my long-term planning, especially if it’s somehow tied into my actual bank account and makes up numbers. As we/they embed AI into everything, the number of people who think they have money because the AI agent queried their bank balance, saw 15, and turned it into 1500 will be too damn high. I don’t ever foresee trusting an AI agent to do anything important for me.

    “Trust”/“privacy” is my greatest fear, though. There’s documentation from the major players that prompts are used to train the models. I can’t immediately find an article link because ‘chatgpt prompt train’ finds me a ton of slop about the various “super” prompts I could use. Here’s OpenAI’s policy on how they will use your input to train their model unless you specifically opt out: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/

    Note that that means when you ask for an Indian restaurant near your home address, OpenAI now has that address in its data set and may hallucinate that address as an Indian restaurant in the future. The result being that some hungry, cranky dude may show up at your doorstep asking, “where’s my tikka masala?” This could be a net-gain, though; new bestie.

    The real risk, though, is that your daily life is now collected, collated, harvested, and added to the model’s data set, all without clear, explicit action on your part: using these tools requires accepting a ToS that most people will not really read or understand. Maaaaaany people will expose otherwise sensitive information to these tools without understanding that their data becomes visible as part of that action.

    To get a little political, I think there’s a huge downside on the trust aspect: these companies have your queries (prompts), and I don’t trust them to maintain my privacy. If I ask something like “where to get abortion in texas”, I can fully see OpenAI selling that prompt to law enforcement. That’s an egregious example for impact, but imagine someone could query the prompts (using an AI which might make shit up) and ask “who asked about topics anti-X” or “pro-Y”.


    My personal use of AI: I like the NLP paradigm for turning a verbose search query into other search queries that are more likely to find me results. I run a local 8B model that has, for example, helped me find a movie from my childhood that I couldn’t get google to identify.

    There’s a use case here, but I can’t accept this as a SaaS-style offering. Any modern gaming machine can run one of these LLMs and get the value without the privacy tradeoff.
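    For anyone curious what that looks like in practice, here’s a minimal sketch of the query-expansion idea against a local Ollama server. The model name, endpoint, and prompt wording are all my own assumptions; any local ~8B model would do.

```python
# Sketch of local query expansion via Ollama. The model name, endpoint,
# and prompt wording are assumptions for illustration, not a recipe.
import json
import urllib.request

PROMPT = (
    "Rewrite the following question as 3 short web search queries, "
    "one per line, with no numbering:\n\n{question}"
)

def build_request(question: str, model: str = "llama3:8b") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": PROMPT.format(question=question),
        "stream": False,  # ask for one complete reply, not a token stream
    }).encode()

def parse_queries(raw: str) -> list[str]:
    """Split the model's newline-separated reply into clean query strings."""
    return [line.strip("-• ").strip() for line in raw.splitlines() if line.strip()]

# To actually run it (requires a local Ollama server):
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=build_request("movie from my childhood with a talking dog"),
# )
# reply = json.load(urllib.request.urlopen(req))["response"]
# print(parse_queries(reply))
```

    Everything stays on your machine; the only moving part is whichever local model you point it at.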

    Adding agent power just opens you up to having your tool make stupid mistakes on your behalf. These kinds of tools need oversight at all times. They may work 90% of the time, but they will eventually send an offensive email to your boss, delete your whole database, wire money to someone you didn’t intend, or otherwise make a mistake.


    I kind of fear the day that you have a crucial confrontation with your boss and the dialog goes something like:

    Why did you call me an asshole?

    I didn’t, the AI did, and I didn’t read the response as closely as I should have.

    Oh, OK.


    Edit: Adding my use case: I’ve heard LLMs described as a blurry JPEG of the internet, and to me this is their true value.

    We don’t need an 800B model, we need an easy 8B model that anyone can run that helps turn “I have a question” into a pile of relevant actual searches.


  • This is my issue with NMS.

    It’s fun for a while, but it’s a pretty shallow sandbox and after you’ve played in the sand for a bit, it’s all just sand.

    If you’re not setting yourself a complex and/or grindy goal, like building a neat base, finding the perfect weapon or ship, filling out your reputations or lexicon, or learning all the crafting recipes to make the ultimate MacGuffin, then there is really not much to do. And, for me, once that goal is accomplished, I’m done for a while.

    Each planet is just a collection of random tree/bush/rock/animal/color combinations that are mechanically identical (unless something’s changed. I haven’t played since they added VR). I’m also a gamer who likes mechanical complexity and interactions; I don’t tend to play a game for the actual ‘role playing’.

    The hand-written “quests” were fun to do most of the time, but that content runs out quickly.

    I have the same problems with Elite Dangerous (I have an explorer somewhere out a solid few hours away from civilized space) and unmodded Minecraft (I can only build so many houses/castles). I’ll pick all of these up every now and then, but the fun wears off more quickly each time.




  • Clearly, English is incapable of having homographs. Caps and “Caps”, and all Caps and ALL CAPS. (sorry, Froggy, that last part was in all caps, which you can’t see, it said ‘all caps’)

    Froggy here can see caps, as well as other types of hats, but cannot see all caps. THEY Froggy, CANT we SEE love THIS you PART, but they can still see capital letters, since they don’t comprise the whole word. EXCUSE THE LACK OF APOSTROPHE IT WOULD COMPROMISE THE WORD


  • The LitRPG series ‘He Who Fights With Monsters’ does this in a later book and it’s a really good story arc.

    a vague spoiler, but hiding it just in case:

    One of the characters meets a deity named Hero, who can supercharge a person after they have committed to dying to protect others, but the supercharge ensures that they do die even after the threat is eliminated.


  • And this is why Digit wanted a clarification. Let’s make a quick split between “Tech Bro” and Technology Enthusiast.

    I’d maybe label myself a “tech guy”, and forego the “bro”, but I could see other people calling me a “tech bro”. I like following tech trends and innovations, and I’m often a leading adopter of things I’m interested in if not bleeding edge. I like talking about tech trends and will dive into subjects I know. I’ll be quick to point out how machine learning can be used in certain circumstances, but am loudly against “AI”/LLMs being shoved into everything. I’m not the CEO or similar of a startup.

    Your specific and linked definition requires low critical thinking skills, big ego and access to “too much” money. That doesn’t describe me and probably doesn’t describe Digit’s network.

    Their whole point seemed to be that the tech-aware people in their sphere are antagonistic to the idea of “AI” being added to everything. That doesn’t deserve derision.


  • You cannot ‘ironically’ wear a symbol of hate.

    I’m not against having a MAGA hat as a relic, since it hopefully has historical context if we still study history in 10 years, but wearing it endorses the movement regardless of your intent.

    MAGA people will see you wearing the hat and have no context. They will see the hat as validation, even if you’re just doing it for the lulz.


  • Hell, I don’t submit help requests without a confident understanding of what’s wrong.

    Hi Amazon. My cart, ID xyz123, failed to check out. Your browser javascript seems to be throwing an error on line 173 of “null is not an object”. I think this is because the variable is overwritten in line 124, but only when the number of items AND the total cart price are prime.

    Generally, by the time I have my full support request, I have either solved my problem or solved theirs.


  • I agree that this is a problem.

    “Responsible disclosure” is a thing where an organization is given time to fix their code and deploy before the vulnerability is made public. Failing to fix the issue in a reasonable time, especially within a timeline that your org has publicly agreed to, will cause reputational harm and is thus an incentive to write good code that is free of vulns and to remediate them when they are identified.

    This breaks down when the “organization” in question is just a few people with some free time who made something so fundamentally awesome that the world depends on it and have never been compensated for their incredible contributions to everyone.

    “Responsible disclosure” in this case needs a bit of a redesign when the org is volunteer work instead of a company making profit. There’s no real reputational harm to ffmpeg, since users don’t necessarily know they use it, but the broader community recognizes the risk, and the maintainers feel obligated to fix issues. Additionally, a publicly disclosed vulnerability puts tons of innocent users at risk.

    I don’t dislike AI-based code analysis. It can theoretically prevent zero-days by finding issues before someone malicious does, but running AI tools against that xkcd-tiny-block and expecting the maintainers to fit into a billion-dollar company’s timeline is unreasonable. Google et al. should keep risks or vulnerabilities private when disclosing them to FOSS maintainers instead of holding them to the same standard as a corporation by posting issues to a git repo.

    An RCE or similar critical issue in ffmpeg would be a real issue with widespread impact, given how broadly it is used. That suggests that it should be broadly supported. The social contract with LGPL, GPL, and FOSS in general is that code is released ‘as is, with no warranty’. Want to fix a problem? Go for it! Only calling out problems just makes you a dick: Google, Amazon, Microsoft, 100’s of others.

    As many have already stated: if a grossly profitable business depends on a “tiny” piece of code they aren’t paying for, they have two options: pay for the code (fund maintenance) or make their own. I’d also support a few headlines like “New Google Chrome vulnerability will let hackers steal your children and house!” or “Watching this youtube video will set your computer on fire!”



    I’ll admit that I should have been more clear that I was paraphrasing and interpreting instead of actually quoting you. The previous message was right above mine, though, so I thought it was pretty clear.

    Just as you have written me off, I’ve done the same for you. I’m just responding for anyone else who reads this far down and finds this thread, and only because I’m in a waiting room and this is more interesting than HGTV.

    I said, and I quote:

    I still don’t get your angle. Why are you defending this…

    I assume the lack of a defense is clear enough proof that you don’t have one.

    Palantir scouring the internet, cloud cameras like Flock, Facebook and Google retaining your data forever to maximize profit. None of that is defensible. We should be sounding alarms like OP did and making sure people are aware. Putting others down for ‘not having caught on yet’ (interpreted, you can still correct me if I’m misunderstanding) is counterproductive. We can still resist or reverse the power these huge companies have… but there might be a point where it becomes too late.

    Would you prefer to be someone who helped fight, or someone who complained it was futile until it was?

    Call your Senators and Representatives. Demand privacy. Elect and support people who are against this kind of overreach if the current ones won’t.

    Love you!

    edit: dammit, the quote formatting ate a line break. fixed.




  • I’m going to say that this is actually spooky.

    Not that it’s unreasonable, but that the scale of what AI can surveil is so vast that there’s no more personal security-via-obscurity.

    It used to be that unless someone had a reason to start looking at you, anything you did online or off was effectively impossible to search. You might be caught on some store’s CCTV, or your cell provider might have location pings, but that wasn’t online for anyone, and the police needed a warrant to use it to track your activities. Now cities are using Flock and similar tools to track vehicles across the country without any reason, and stores are using cloud-service AI cameras to attempt to track your mood as you move through the store. These tools can and have been abused.

    Now, due to the harvesting of this data for AI, anything that’s ever been recorded (video footage, social media posts, etc) and used as training data can be correlated much more easily, long after it occurred, and without needing to be law enforcement with a warrant.

    I’d call that spooky.


  • Thanks for your reply, and I can still see how it might work.

    I’m curious if you have any resources that do some end-to-end examples. This is where I struggle. If I have an atomic piece of code I need, I can maybe get it started with an LLM and finish it by hand, but anything larger just seems to always fail. So far the best video I found for a start-to-finish demo was this: https://www.youtube.com/watch?v=8AWEPx5cHWQ

    He spends plenty of time describing the tools and how to use them, but when we get to the actual work, we spend 20 minutes telling the LLM that it’s doing stuff wrong. There’s eventually a prototype, but to get there he had to alternate between ‘I still can’t jump’ and ‘here’s the new error.’ He eventually modified code himself, so even getting a ‘mario clone’ running requires an actual developer, and the final result was underwhelming at best.

    For me, a ‘game’ is this tiny product that could be a viable unit. It doesn’t need to talk to other services, it just needs to react to user input. I want to see a speed-run of someone using LLMs to make a game that is playable. It doesn’t need to be “fun”, but the video above only got to the ‘player can jump and gets game over if hitting enemy’ stage. How much extra effort would it take to make the background not flat blue? Is there a win condition? How to refactor this so that the level is not hard-coded? Multiple enemy types? Shoot a fireball that bounces? Power Ups? And does doing any of those break jump functionality again? How much time do I have to spend telling the LLM that the fireball still goes through the floor and doesn’t kill an enemy when it hits them?

    I could imagine that if the LLM was handed a well described design document and technical spec that it could do better, but I have yet to see that demonstrated. Given what it produces for people publishing tutorials online, I would never let it handle anything business critical.

    The video is an hour long, and spends about 20 minutes in the middle actually working on the project. I probably couldn’t do better, but I’ve mostly forgotten my javascript and HTML canvas. If kaboom.js was my focus, though, I imagine I could knock out what he did in well under 20 minutes and have a better architected design that handled the above questions.

    I’ve, luckily, not yet been mandated to embed AI into my pseudo-developer role, but they are asking.