• 0 Posts
  • 111 Comments
Joined 3 years ago
Cake day: June 20th, 2023


  • You’re not wrong, but in my experience the AI I’ve used is already at the level of a decent intern, maybe a fresh junior. There’s no reason it can’t improve from there. In fact, I get pretty good results by working incrementally to stay within its context window.

    I was around for the dotcom bubble and I expect this to go similarly: at first there is a rush to put AI into everything. Then they start realizing they have to actually make money and the frivolous stuff drops by the wayside and the useful stuff remains.

    But it doesn’t go away completely. After the dotcom bust, the Internet age was firmly upon us, just with less hype. I expect AI to follow a similar trend. So, we can hope for another AI winter or we can figure out where we fit in. I know which one I’m doing.


  • I’m a senior working with junior developers, guiding them through difficult tasks and delegating work to them. I also use AI for some of the work. Everything you say is correct.

    However, that doesn’t stop a) some seniors from spinning up several copies of AI and testing them like a group of juniors, and b) management from seeing this as a way to cut personnel.

    I think denying these facts as a senior is just shooting yourself in the foot. We need to find the most productive ways of using AI or become obsolete.

    At the same time we need to ensure that juniors can develop into future seniors. AI is throwing a major wrench in the works of that, but management won’t care.

    Basically, the smart thing to do is to identify where AI, seniors, and juniors all fit in. I think the bubble needs to pop before that truly happens, though. Right now the people holding the purse strings are too excited about cutting costs/salaries. Until AI companies start trying to actually make a profit, that won’t happen.


  • Very true. I’ve been saying this for years. However, the flip side is that you get the best results from AI by treating it as a junior developer as well. When you do, you can in fact have a fleet of virtual junior developers working for you as a senior.

    However, and I tell this to the juniors I work with: you are responsible for the code you put into production, regardless of whether you wrote it yourself or used AI. You must review what it creates because you’re signing off on it.

    That in turn means you may not save as much time as you think, because you have to review everything, and you have to make sure you understand everything.

    But understanding gets progressively harder as more of the code is written by other people or by AI. It’s best to try to stay current with the code base as it develops.

    Unfortunately this cautious approach does not align with the profit motives of those trying to replace us with AI, so I remain cynical about the future.




    1. Some kind of monitoring software, like the Grafana stack. I like email and Discord notifications.
    2. The Dockerfile may ship with a HEALTHCHECK instruction, but in my experience that’s pretty rare. Most of the time I set up a health check in the docker compose file (see the sketch after this list), or I extend the Dockerfile and add my own. You sometimes need to add a tool (like curl) to the image to run the health check anyway.
    3. The health check is a feature of the container, but the app needs to support some way of signaling “health”, such as through a web API endpoint.
    4. It depends on your needs. You can do all of the above. You can do so-called black box monitoring where you’re just monitoring whether your webapp is up or down. Easy. However, for a business you may want to know about problems before they happen, so you add white box monitoring for sub-components (database, services), timing, error counts, etc.
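
    To make point 2 concrete, here’s roughly what a compose-level health check looks like. This is just a sketch: the service name, image, port, and /health path are placeholders, and it assumes the app exposes a health endpoint over its web API (point 3) and that curl exists in the image.

        services:
          webapp:                        # placeholder service name
            image: example/webapp:latest
            restart: unless-stopped
            healthcheck:
              # Assumes the app answers on /health and curl is present in the image.
              test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
              interval: 30s
              timeout: 5s
              retries: 3
              start_period: 15s

    If the image doesn’t ship curl (or wget, or some CLI that can hit the endpoint), that’s when I end up extending the Dockerfile just to add it.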

    To add to that: health checks in Docker containers are mostly for self-healing purposes. Think about a system where you have a web app running in many separate containers across some number of nodes. You want to know when one container has become too slow or unresponsive so it can be restarted before the remaining containers are overwhelmed, causing more serious downtime. A health check lets Docker (or the orchestrator running it) restart or replace the container without manual intervention. You can configure it to give up after too many restarts, and then other systems (like a load balancer) direct traffic away from the failed subsystem.
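
    For completeness, the image-level version of the same check looks something like this; Docker (or the orchestrator) uses the resulting health status to decide when to act. The base image, port, and path are placeholders, and the apt-get line assumes a Debian-based image.

        # Extending a (placeholder) app image just to add a health check.
        FROM example/webapp:latest
        RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
        HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
            CMD curl -f http://localhost:8080/health || exit 1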

    It’s useful to remember that containers are “cattle not pets”, so a restart or shutdown of a container is a “business as usual” event and things should continue to run in a distributed system.


  • I speak standing on a hill of my own dead projects. Just remember that personal projects are supposed to be fun and educational, maybe with a little resume padding for good measure. Scratch that itch you can’t get to at work. It’s great when other people enjoy them, but as soon as they become a commitment, they start feeling like work. To me, at least.

    That’s why I think games or little tools are great. They’re small enough that you can throw them out and start over, and people won’t get (too) mad if you stop maintaining them (if you open source them), because it’s easy for someone else to take over.



  • It’s a pretty common assumption in software, especially on Linux, that if anyone can access your home directory, then you can’t have any expectation of privacy. Some apps make the explicit statement that secrets are stored in plain text because obfuscation would just give you a false sense of security.

    The solution is to encrypt the data at the system level, e.g., with encrypted home directories. You could also create an encrypted volume in a file and store the profile in there (rough sketch below). Make sure to protect your private keys with good passphrases.
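
    If you go the encrypted-volume-in-a-file route, a minimal sketch with LUKS/cryptsetup looks roughly like this (file name, size, and mount point are arbitrary):

        # Create a 256 MB container file and format it as a LUKS volume.
        fallocate -l 256M ~/secrets.img
        sudo cryptsetup luksFormat ~/secrets.img     # prompts for a passphrase

        # Unlock it, put a filesystem on it, and mount it.
        sudo cryptsetup open ~/secrets.img secrets
        sudo mkfs.ext4 /dev/mapper/secrets
        sudo mkdir -p /mnt/secrets
        sudo mount /dev/mapper/secrets /mnt/secrets

        # ...store the profile or keys under /mnt/secrets...

        # Lock it again when you’re done.
        sudo umount /mnt/secrets
        sudo cryptsetup close secrets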




  • True open source products are your best bet. TrueNAS and Proxmox are popular options, but you can absolutely set up a vanilla Debian server with Samba and call it a NAS. Back in the old days we just called those “file servers”.
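
    For the Debian + Samba route, the share definition in /etc/samba/smb.conf is only a handful of lines. This is a bare-bones sketch; the share name, path, and user are placeholders:

        # /etc/samba/smb.conf (excerpt)
        [global]
            server role = standalone server
            workgroup = WORKGROUP

        [media]
            path = /srv/media
            read only = no
            valid users = alice

    You’d still add the Samba user (sudo smbpasswd -a alice) and restart smbd, but that’s basically the whole “NAS”.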

    Most importantly, just keep good backups. If you have to choose between investing in RAID or a primary + backup drive, choose the latter every time. RAID will save you recovery time, but it’s not a backup.





  • Remote, because my commute would be 140 miles round-trip again. Otherwise I mostly enjoy working in an office with people and I don’t mind going in every few months or so.

    Remote is also nice because it actually makes it easier to collaborate with other developers when we can both be at our own keyboards and share screens.

    I work well alone, but I spend a lot of time in calls, either work meetings or collaborating on code. I do enjoy the social aspect of that as well.

    I use AI pretty much every day, but mostly as a search engine/SO replacement. I rarely let it write my code for me, since I’ve had overall poor results with that. Besides, I have to verify the code anyway. I do use it for simple refactoring or code generation like “create a c# class mapped to this table with entity framework”.
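
    For what it’s worth, the output I mean for that last kind of prompt is boilerplate along these lines; the table and column names here are made up, and attribute-based mapping is just one of the ways EF Core can do it:

        using System;
        using System.ComponentModel.DataAnnotations;
        using System.ComponentModel.DataAnnotations.Schema;
        using Microsoft.EntityFrameworkCore;

        // Hypothetical "orders" table mapped with EF Core data annotations.
        [Table("orders")]
        public class Order
        {
            [Key]
            [Column("order_id")]
            public int OrderId { get; set; }

            [Column("customer_name")]
            [MaxLength(200)]
            public string CustomerName { get; set; } = string.Empty;

            [Column("created_at")]
            public DateTime CreatedAt { get; set; }
        }

        public class AppDbContext : DbContext
        {
            public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

            public DbSet<Order> Orders => Set<Order>();
        }

    That kind of mechanical mapping is quick to review, which is why it’s the sort of task I’m comfortable handing off.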