

I am also @lsxskip@mastodon.social
Can you share sample code I can try or documentation I can follow for using an AMD GPU in that way (shared, virtualized, using only open source drivers)?
You really piqued my interest. I use docker/podman.
With an AMD graphics card, eglinfo on the host shows the device as AMD Radeon with the matching driver.
In the container, without --gpus=all, it shows the card is unknown and the driver is “swrast” (so just CPU fallback).
When I add --gpus=all, it fails with the error:
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
I was doing a bad job searching before. I found that AMD can share the GPU; it just works a little differently in terms of how you launch the container. https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html#amdgpu-install-dkms
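For anyone else who lands here: the AMD route doesn't use --gpus at all. As I understand it, you just pass the kernel devices through; here's a sketch based on the ROCm docs (the image name and the exact group setup may differ on your distro):

```sh
# AMD/ROCm-style GPU passthrough: no --gpus flag, just hand the kernel devices
# to the container (/dev/kfd for compute, /dev/dri for the DRM render nodes
# that EGL/OpenGL use) and make sure the container user can access them.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --group-add render \
  --security-opt seccomp=unconfined \
  rocm/rocm-terminal

# Rough check from inside the container: eglinfo (or rocminfo for compute)
# should now report the Radeon device instead of swrast.
```

Podman takes the same --device/--group-add flags, and for plain headless OpenGL (no ROCm compute) I believe passing just /dev/dri is usually enough.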
But sadly my AMD GPU is too old/junk to have current driver support.
Anyways, appreciate the reply! Now I can mod my code to run on cheaper cloud instances.
(Note I’m an OpenGL/3D app developer, but probably OpenCL works about the same architecturally)
AFAIK it’s only NVIDIA that allows containers shared access to a GPU on the host.
With the majority of code being deployed in containers, you end up locked into the NVIDIA ecosystem even if you use OpenCL. So I guess people just use CUDA since they are limited by the container requirement anyways.
That’s from my experience using OpenGL headless. If I’m wrong please correct me; I’d prefer being GPU agnostic.
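For reference, the NVIDIA container route (where that --gpus error comes from) looks roughly like this; treat it as a sketch, since package names vary by distro and it assumes NVIDIA's apt repo is already configured:

```sh
# The --gpus flag only works once the NVIDIA Container Toolkit is installed
# and registered with dockerd as the "gpu" capability provider.
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Quick sanity check: the toolkit injects the driver userspace into the container.
docker run --rm --gpus all ubuntu nvidia-smi
```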
I bet the people you work with are very happy to have you as a lead.
I’ve been in this scenario and I didn’t wait for layoffs. I left and applied my skills where shit code is not tolerated, and quality is rewarded.
But in this hypothetical, we didn’t get this shit code from management encouraging the right behavior and giving people time to make things right. They’re going to keep the yes men and fire the “unproductive” ones (and I know full well that adding to the pile isn’t productive in the long run, but what does the management overseeing this mess think?)
To be fair, if you give me a shit code base and expect me to add features with no time to fix the existing ones, I will also just add more shit on the pile. Because obviously that’s how you want your codebase to look.
In my current role, I mostly hire for “senior” roles, so the applicants (who are pre-screened before I see them) typically have 5+ years of experience. I ask about the code they’ve written, and then I ask some questions about how they would extend that code to meet new requirements. What I’m looking for is not so much a specific answer, but more “can we think through this problem together.”
That said, I’ve been the interviewer for “junior” roles…and there isn’t as much correlation between ability and experience as you might think. So no reason to feel imposter syndrome. I’ve worked with extremely smart/talented developers without any formal training.
I think all the stuff you’re doing sets a really good foundation for a career in software, if that’s where you want to go. One thing I might suggest is making a few contributions to open source or team projects. It’s a useful way to learn how to read code and how to present code to others (or fit your ideas into an existing code base).
I have to do many interviews.
I don’t care if the applicant uses AI, or any other tool available to them. I just care about whether they can explain, debug, and modify/extend code (which they wrote, or at least composed somehow and are presenting as their work).
I’ve definitely been suspicious of AI use, and also had some applicants admit to it. And I don’t count that against them any more than using a web resource.
But, there is a very high correlation between using AI and failing at the explain/debug/modify part.
I feel like this could be a Columbo episode
True, but they were still resource constrained, which might be why they ended up with a model with lower resource requirements.
The scary part to me (noted in the article as well) is less the technical hack and more the amount of data they are collecting.
Subaru had/has an ongoing issue where the telematics drains the battery while the car is parked, especially if it’s parked out of reach of cell towers. With the amount of data they are sending, it’s not surprising.
There is no need for the car to report its position whatsoever unless I request assistance.
Smiling on the outside…
Should be a nice salary boost for developers in a year or two when all these companies desperately need to rehire to fix whatever AI slop mess they have created.
And I hope every developer demands 2x their current salary if they are tasked with re-engineering that crap.
+1 for feeder
Those mega corporations have intentionally misused the term “algorithm,” which implies an unbiased method of ranking or sorting. What they are actually using is more like a human-curated list of items to promote that serves their own goals.
Hi Cookie! That’s a handsome loaf you have there.
SO is rapidly fading into irrelevance, but we’re all still writing code anyways. Seems like the problem will solve itself.
PH-trees can do range and nearest-neighbor queries across N dimensions very quickly. I have not used one for a single dimension, but I’d imagine it would work fine.
https://github.com/tzaeschke/phtree
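If it helps, here’s roughly what using the Java implementation from that repo looks like. I’m writing this from memory of the README, so treat the package and method names as approximate and check the repo before copying:

```java
import ch.ethz.globis.phtree.PhTree;

public class PhTreeExample {
    public static void main(String[] args) {
        // 2-dimensional tree mapping integer point keys to values
        PhTree<String> tree = PhTree.create(2);
        tree.put(new long[]{1, 1}, "A");
        tree.put(new long[]{5, 3}, "B");
        tree.put(new long[]{9, 9}, "C");

        // Window (range) query: everything inside the axis-aligned box [0,0]..[6,6]
        PhTree.PhQuery<String> range = tree.query(new long[]{0, 0}, new long[]{6, 6});
        while (range.hasNext()) {
            System.out.println("in range: " + range.next());   // A and B
        }

        // k-nearest-neighbour query around a point
        PhTree.PhKnnQuery<String> knn = tree.nearestNeighbour(1, new long[]{4, 4});
        while (knn.hasNext()) {
            System.out.println("nearest: " + knn.next());       // B
        }
    }
}
```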