Lots of good comments here. I think there are many reasons, but AI in general is being quite hated on. It’s sad to me - pre-GPT, I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here are a few perspectives:
I like coding with local LLMs and asking occasional questions to larger ones, but the code on larger code bases (with these small, local models) is often pretty nonsensical, though it improves with the right approach. Provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
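For the test-first bit, here’s a minimal sketch of what I mean (the function, its behaviour, and the tests are all made up for illustration - the point is the workflow, not this particular code):

```python
import re

def parse_duration(text: str) -> int:
    """Convert a duration like '1h30m' or '45s' into total seconds.

    The docstring and signature go into the prompt as-is; the body is
    what you'd paste back from the model, then run against the tests.
    """
    total = 0
    for amount, unit in re.findall(r"(\d+)([hms])", text):
        total += int(amount) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

# Tests written *before* generating the body, so the output is verifiable:
assert parse_duration("1h30m") == 5400
assert parse_duration("45s") == 45
assert parse_duration("2m5s") == 125
```

If the generated body fails an assert, you throw it out and re-prompt; you never have to trust the model’s reasoning, only your own tests.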
I think there are a lot of reasons to hate on it, but mostly it’s because the ways to use it effectively are still being figured out.
Some of my academic colleagues still hate IDEs because tab completion, fast compilers, in-line documentation, and automated code linting (to them) mean you don’t really need to know anything or follow any good practices - your editor will do it all for you, so you should just use vim or notepad. It’ll take time to adopt and adapt.
Everything changed when I found the most understanding teachers at the end of my schooling. I switched schools and had a teacher recognize I was smart, bored, and distracted; she tested me out of the classes and let me spend my time on other random things that were tangentially related while still working with the other students. Game changer compared to where I was before, where I’d get deductions for doing problems early or reading ahead.
The point about having way too many interests and things, and loving yourself beyond the meds - very important! I found I was regulating myself too much for the first while after my diagnosis, and what relaxed me most wasn’t what people might typically find relaxing; it was letting the (healthy enough) chaos flow in a safer environment than I was previously prepared to set up.
100%. Great way of putting it. I bounce back and forth on occasion, but the trend line is always toward accepting that old part of me and realizing it’s okay to move on, because it’s a very closed chapter that’s been outstaying its welcome. Like any death, you still have those same neural patterns, and they’re slowly getting overwritten, and it’s confusing and disorienting when your muscle memory reaches for something and it’s not there.
It’s extra confusing when what’s reached for is that feeling of not grabbing anything, but you do. When you’ve been falling for decades the ground feels weird for a while when you land.
Hah, yeah, I’m sure I could have taken time to phrase it better.
I definitely feel like a big part of what I’ve grieved is the childhood that I never had, more so than the future I won’t have. It was a big relief, and I felt like I could do well and cut myself slack. I’m just trying to do the same with past me: cut myself that slack, give my past self that love and understanding now that I didn’t get then, accept that it was a brutal time and that it was unfair, but also that I’ve grown and learned and stopped rejecting that that person was me, and we’re doing all right.
I do think stigma plays a part, both in your expectations of others and in the expectations you place on yourself. I had a psychiatrist tell me years before my diagnosis that I was “too successful” for ADHD, and that pretty much derailed the acceptance for a long time, heh.
Absolutely! Important to recognize you’re not “weird” for not going through this, sometimes it just aligns so well you’re already prepped for it.
I agree to an extent, but the parents also need to take time to understand how to “gas them up” appropriately. It’s not everyone’s case, but it became very apparent to me when I was young that my parents would cheer me on over anything without ever taking time to learn about the things they were cheering me on over, and that led to me disbelieving pretty much any positive feedback from anyone long-term. The only feedback of substance growing up was the very rare negative feedback, because they would only pull it out when they understood something well enough to know it needed improving. That, and emphasizing the kid’s efforts as the thing to cheer on, not just the end results.
I’ve learned to work through that, and maybe it goes without saying for most people, but being a genuine and substantive cheerleader is important.
I know this is in response to a post saying your ADHD is not other people’s ADHD, but I’m pretty sure your ADHD is my ADHD.
As someone who researched AI pre-GPT to enhance human creativity and aid creative workflows, it’s sad for me to see the direction it’s been marketed in, but I’m not surprised. I’m still excited by the tech, because I see a really positive place for it where the data usage is arguably justified, but we need to break through the current applications, which seem aimed more at stock prices and wow-factoring the public than at using these models for what they’re best at.
The whole exciting part of these models was that they could convert unstructured inputs into natural language and structured outputs. Translation tasks (with a broad definition of translation), extracting key data points from unstructured data, language tasks. They’re outstanding for the NLP tasks we struggled with previously, and these tasks are highly transformative of any input - they rely purely on structural patterns. I think few people would argue NLP tasks infringe on the copyright owner.
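To make the extraction case concrete, a minimal sketch - call_llm is a hypothetical stand-in for whatever local or hosted model you’d actually use, and the schema and example are made up:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (local or API)."""
    raise NotImplementedError

def extract_contact(text: str) -> dict:
    """Pull key data points out of free-form text into structured JSON."""
    prompt = (
        "Extract the person's name, email, and company from the text below. "
        'Respond with only JSON like {"name": ..., "email": ..., "company": ...}, '
        "using null for anything missing.\n\n" + text
    )
    return json.loads(call_llm(prompt))

# e.g. extract_contact("Reached out to Dana Wu (dana@example.com) at Acme last week.")
# -> {"name": "Dana Wu", "email": "dana@example.com", "company": "Acme"}
```

The output is structured by the task, not lifted from anyone’s work - which is exactly why this kind of use feels defensible to me.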
But I can at least see how the shift toward (particularly with MoE approaches) using Q&A data to support generating Q&A outputs, media data to support generating media outputs, and code data to support generating code moves into the territory of affecting sales and using someone’s IP to compete against them. From a technical perspective, I understand that LLMs are not really copying, but the way they are marketed and tuned seems more and more intended to use people’s data to compete against them, which is dubious at best.
Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really close to equivalent to me citing the papers I researched for a paper. It would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.
Now, if specific data were injected into the prompt, or maybe if the model was fine-tuned on a small subset of highly specific data, I would agree those should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, before the network was rather suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.
Mainly learning that I did, in fact, have ADHD. Then: medication (Vyvanse); drastically reducing or cutting weed, alcohol, and caffeine; therapy to help deal with childhood issues (which exacerbate symptoms); and taking time away from work to start recovering from ADHD-driven burnout and building some structures to support my ADHD in the workplace.
Systems to externalize things. I’ve accepted that if I don’t see something, it isn’t happening, so I try to arrange and organize things in a way that puts them physically out in the world for me. Digital mostly doesn’t work well for me, except for some work things where it’s all in one place, because digital disappears from existence when the screen turns off.
I hate it, but regular exercise, eating healthier, and the nights where I can actually sleep are probably the biggest factors in whether I have a good day or not. Not that knowing that is enough, of course.
Oh, and just generally learning what my weaknesses are. I’m still hugely struggling with ADHD overall, but knowing the big weaknesses helps. It’s not about doing what’s easy, it’s about facing what’s hard head-on and accepting it sucks, but you have to go on.
My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the idea that “Spatial Computing” is the next paradigm for work.
I VR a decent amount, and I really do like it a lot for watching TV and YouTube, and am toying with using it a bit for work-from-home where the shift in environment is surprisingly helpful.
It’s just limited. Streaming apps aren’t very good, there’s no great source for 3D movies (which were great, back when Bigscreen had them anyway), the headsets are still a bit too hot and heavy for long-term use, the game library isn’t very broad and there haven’t been many killer-app games/products that distinguish it from other modalities, and it’s going to need a critical amount of adoption to get used in remote meetings.
I really do think it’s huge for giving a sense of remote presence, and I’d love to research how VR presence affects remote collaboration, but there are so many factors making it tough to buy into.
They did try, though, and I think they’re on the right track. Facial capture for remote presence and hybrid meetings, extending monitors to give more privacy and flexibility to laptops, strong AR to reduce the need to take the headset off - but they’re selling the idea first, and then maybe there will be a breakthrough. I’ll admit the industry is moving much slower than I’d anticipated back in 2012 when I was starting VR research.
Yeah! Not beating yourself up over this is really important, same with not overthinking it. Some days are hard, some are less hard, some, I’ve heard, are easy.
Some days the best progress/discipline is noticing it’s a day where you need to show yourself some compassion and let yourself off the hook for a bit.
It’s a good start to a long path :) I’m not a medical doctor and this isn’t medical advice, but I know it was really helpful for me when I started recognizing I was on a path to helping myself - not the ADHD, not the trauma, not whatever else it may be diagnosed as, but me, my experiences, my patterns, my brain.
The labels can be helpful for seeing, noticing, understanding, approaching, and getting medical support where needed, but ultimately it’s great that the symptoms were validated, and congrats on taking the steps! It’s hard work to identify the need, hard work to reach out and get support, and it means you’re very likely on a good path.
The Remembrall is really smart! I might need to find a way to make that one work for me.
Being really open is also great; radical authenticity and openness (where it’s appropriate and comfortable) has helped me learn and help others, and has gotten me acceptance from people I’d struggled with. “Let’s assume I’ve been living underground for a while - how exactly do you go about X, if you’re comfortable answering?” Also great for those with absent or developmentally lacking childhood experiences.
Yeah, a lot of my systems have been built up by noticing bad patterns and finding easier alternatives. A frozen curry with pre-made masala paste takes 10 minutes of effort, tops - it may not be the most satisfying, but it costs me about $4, I’ll be eating in less time than it takes to order in, and I won’t get stuck looking at menus for an hour.
Yeah, I fortunately had a Magic Bullet (not great for this, but it works) that I received as a gift years ago. The other comment nailed it; any time I’ve added water, it’s been bland, while milk, some yogurts, and a healthy mix of fruits are really flavourful. It might throw off the texture, but the oats and spinach add a nice nutrient punch.
Insane compute wasn’t everything. Hinton helped develop the techniques that allowed more data to be processed in more layers of a network without totally losing coherence. Before then it was more of a toy, because it capped out on how much data could be used, how many layers of a network could be trained, and, I believe, even whether GPUs could be used efficiently for ANNs, but I could be wrong on that one.
Either way, after Hinton’s research in ~2010-2012, problems that had seemed extremely difficult to solve (e.g., classifying images and identifying objects in images) became borderline trivial, and in under a decade ANNs went from an almost fringe technology that many researchers saw as a toy useful for a few problems to basically dominating all AI research and CS funding. In almost no time, every university suddenly needed machine learning specialists on payroll, and now, about 10 years later, every year we are pumping out papers and tech that seemed many decades away… Every year… Across a very broad range of problems.
The 580 and CUDA made a big impact, but Hinton’s work was absolutely pivotal in being able to utilize that and in making ANNs seem feasible at all, and it was an overnight thing. Research very rarely explodes this fast.
Edit: I guess also worth clarifying, Hinton was also one of the few researching these techniques in the 80s and has continued being a force in the field, so these big leaps are the culmination of a lot of old, but also very recent work.