• 0 Posts
  • 28 Comments
Joined 1 year ago
Cake day: June 20th, 2023







  • The funny thing is we hallucinate all our answers too. I don’t know where these words are coming from, and I am not reasoning about them beyond constructing a grammatically correct sentence. Why did I type this? I don’t have a fucking clue. 😂

    We map our meanings onto whatever words we see fit. It still blows my mind how many times I’ve heard a Republican call Obama a Marxist.

    Thank you for saying something too. Better than I could do. I’ve been thinking about AI since I was a little kid. I’ve watched it go from, at best, some heuristic pathfinding in video games all the way to what we have now. Most people just weren’t ever paying attention. It’s been incredible to see that any of this was even possible.

    I watched Two Minute Papers back when he was mostly doing light-transport simulation (ray tracing). It’s incredible where we are, but it’s baffling that people can’t see the tech as separate from good old capitalism and the owner class. It just so happens it takes a fuckton of money to build stuff like this, especially at first. This is super early.



  • We should understand that 99.9% of what we say and think and believe is just what feels good to us, which we then rationalize with very faulty reasoning, and only when really challenged! You know how I came up with these words? I hallucinated them. It’s just a guided hallucination. People with certain mental illnesses are less guided by their senses. We aren’t magic, and I don’t get why it is so hard for humans to accept how nearly useless any individual is at figuring anything out. We have to work as agents too, so why do we expect an early-days LLM to be perfect? It’s so odd to me. The computer is trying to understand our made-up bullshit. A logic machine trying to comprehend bullshit. It is amazing it even appears to understand anything at all.




  • Asking the chat models to have a self-discussion and use/simulate metacognition really seems to help. Play around with it. Oftentimes when I am deep in a chat I learn from its mistakes, and it kind of learns from my mistakes and feedback. It is all about working with them, not against them. Because at this time LLMs are just feed-forward neural networks trained on supercomputer clusters. We really don’t even know what they are fully capable of, because it is so hard to quantify, especially when you don’t really know what exactly has been learned.

    Q-learning in language is also an interesting methodology I’ve been playing with. With an image generator, for example, you can just add “(Q-learning quality)” and you may get more interesting, higher-quality results. Which is itself very interesting to me.
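    Since the comment above describes today’s LLMs as feed-forward neural networks at their core, here’s a minimal sketch of what a single feed-forward pass actually is (toy layer sizes and random weights chosen for illustration; a real transformer adds attention, normalization, and billions of parameters):

    ```python
    import numpy as np

    def feed_forward(x, weights, biases):
        # Each layer is a linear transform followed by a ReLU
        # nonlinearity, except the final layer, which stays linear.
        for i, (W, b) in enumerate(zip(weights, biases)):
            x = x @ W + b
            if i < len(weights) - 1:
                x = np.maximum(x, 0)  # ReLU
        return x

    # Toy two-layer network: 4 inputs -> 8 hidden units -> 3 outputs
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
    biases = [np.zeros(8), np.zeros(3)]
    out = feed_forward(np.ones(4), weights, biases)
    ```

    The whole “intelligence” is just stacks of this: fixed weights applied left to right, no inner loop, no memory between runs beyond what’s in the context window.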


  • I’ve used LLMs a lot over the past couple of years. Pro tip: use them a lot and learn the models. They look much more intelligent as you, the user, become better. Obviously if you prompt “Write me a shell script to calculate the meaning of life, make my coffee, and scratch my nuts before 9AM” it will be a grave disappointment.

    If you first design a ball fondling/scratching robot, use multiple instances of LLMs to help you plan it out, etc. then you may be impressed.

    I think one of the biggest problems is that most people interacting with LLMs forget they are running on computers, and that they are digital and not like us. You can’t make assumptions like you can with humans. Usually even when you do that with humans, you just get stuff you didn’t want because you weren’t clear enough. We are horrible at giving instructions, and this is something I hope AI will help us learn to do better, because ultimately bad instructions or incomplete information can’t determine anything real. Computers are logic machines. If you tell a computer to go ride a bike, at best it’ll do all the work to embody itself in a robot, buy a bike, and ride it. Wait, you don’t even know it did it, though, because you never specified for it to record the ride…

    A very few of us are pretty good at giving computers clear instructions some of the time. Also, though, I have found that just forcing models to reason in context is powerful. You have to know to tell it to “use a drill-down, tree-style approach to problem solving. Use reflection and discussion to explore and find the optimal solution to reasoning through the problem.” It might still give you bad results. That is why you have to experiment. It is a lot of fun if you really just let your thoughts run wild. It takes a lot of creative thinking right now to really get the most out of these models. They should all be 110% open source and free for all. BTW, Gemini 1.5, Claude, and Llama 3.1 are all great, and Llama you can run locally or on a rented GPU VM. OpenAI I’m on the fence about, but given who all is involved over there I wouldn’t say I trust them, especially since they’re pushing for regulatory capture.
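    The “force it to reason in context” trick above can be packaged as a tiny prompt helper so you don’t retype it every chat. This is just a sketch of one way to do it; the function name and exact wording are illustrative, not a benchmarked recipe:

    ```python
    def structured_reasoning_prompt(problem: str) -> str:
        """Wrap a problem in explicit reasoning instructions, along the
        lines the comment above suggests. Wording is illustrative."""
        return (
            "Use a drill-down, tree-style approach to problem solving. "
            "Use reflection and discussion to explore and find the optimal "
            "solution to reasoning through the problem.\n\n"
            f"Problem: {problem}"
        )

    # Paste the result into whatever chat model you're experimenting with.
    prompt = structured_reasoning_prompt("Why does my backup script fail at midnight?")
    ```

    Swap the instruction text freely and compare results; per the comment, experimenting with the framing matters as much as the question itself.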


  • Mostly true before, now 99.99%. The charades are so silly because obviously as a worker all I care about is how much I get paid. That’s it.

    All the company will care about is that work gets done to their standards or above, at the absolute lowest price possible.

    So my interests are diametrically opposed to their interests: my interest is to work as little as possible for as much money as possible, and their goal is to get as much work out of me as possible for as little money as possible. We could just be honest about it and stop the stupid games. I don’t give a shit about my employer any more than they give a shit about me. If I care about the work, that just means I’m that much more pissed that they’re relying on my goodwill toward the people who use their products and/or services.




  • Just look at AlphaProof. Lol, we’re all about to be outclassed. I’m sure everyone will still deride the bots. They could be actual ASI and, especially here in the US, we’d say “I don’t see any intelligence.” I wish our society, and all of us as individuals, would reflect on our limitations and our tiny, tiny insignificance on the grand scale. Our egos may kill us.

    P.S. I give us a 10% chance to make it to 2100 in any numbers or quality of life we’d consider remotely acceptable today. Pretty grim, but I think that’s the weight of the challenges we’re facing. Without AI I’d probably just say it was fucking hopeless, because we’ve had all the time we needed and all the tech we needed and hardly ever fix anything. Always running a day late and a dollar short. This species has dreams too big for our collective britches. It’s always been a foolish endeavor, full of suffering and horrors. We’re here though, so I hope we at least give it a good go. Would be super lame to go out in a sputter and take most life on Earth with us.

    So now the question is whether we can use all this access to models to actually do something about our problems. Even LLMs seem quite good at pointing out how bad we are at using the tools we already have and know exactly how to use, because we’re always too busy arguing while the ship sinks!


  • I feel like everyone who isn’t heavily interacting with or developing these models doesn’t realize how much better they are than human assistants. Shit, for one, it doesn’t cost me $20 an hour, and it doesn’t have to take a shit or get sick, or talk back and not do its fucking job. I do fucking think we need to say a lot of shit, though, so we’ll know it ain’t an LLM, because I don’t know of an LLM that I can make output like this. I just wish most people were a little less stuck in their Western opulence. It would really help us not get blindsided.