- 6 Posts
- 179 Comments
litchralee@sh.itjust.worksto
Programming@programming.dev•Projects are shutting down due to Microslop's Github CoPilot making AI contributions easy and plentifulEnglish
2·3 days agoThis is indeed practical advice on how to do a transition, but my answer was primarily in response to the OP’s question, which was about the reasons why people don’t even try to transition. I don’t at all suggest that a slow transition is in any way invalid. But to even get to that, the reasons for not transitioning have to be overcome. And for any sizable project, community cohesion is going to come first, or else the result will be less than the sum of its parts.
litchralee@sh.itjust.worksto
Programming@programming.dev•Projects are shutting down due to Microslop's Github CoPilot making AI contributions easy and plentifulEnglish
1·5 days agoIn a nutshell, the network effect. At an individual level, if someone wants to leave GitHub, they absolutely can. But unless they’re a repo owner or a BDFL, the project(s) they were working on would still be on GitHub. And that means they can’t access the GitHub PR process for development, or open tickets for new issues, or any other number of interactions, except for maybe pulling code from the repo.
On the flip side, at a project level, if the project owners agree that it’s time to leave GitHub, they absolutely can. And while they could convince the primary developers to also leave with them, the occasional contributors might still be left behind on GitHub. Moving away from GitHub could potentially cut the number of contributors down by a lot. And what’s guaranteed is that the project will have to retool for the new space they move to. And if it’s self-hosted, that’s even more work to do, all of which is kinda a distraction from whatever the project was meant to do.
The network effect is the result of the sum being more useful than its parts. When the telephone was invented, a single telephone on its own was entirely useless, because nobody else had one to call. But with ten telephones, one person has the potential to call any of 9 other people. With 10,000 telephones, that’s 9,999 people they could call, or who could call them. At a million phones, the telephone is well entrenched in common usage. Even as more and more people despise making phone calls, the telephone is still around, having changed form since the 1980s into the modern smartphone.
Why? Because networks are also stable: if a few thousand people give up their smartphones per year, the utility of the telephone is not substantially changed for the grand majority of telephone users. The threshold to break the network effect varies, but I hazard a guess that if 1/3 of telephone users gave up their numbers, then the telephone’s demise would be underway. Especially in the face of modern replacements.
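To put rough numbers on the telephone example: if each pair of subscribers is a potential link, a network’s utility grows roughly with the square of its users (the usual Metcalfe’s-law hand-wave), which is why one phone is useless and a million are entrenched. A quick sketch:

```python
def reachable_peers(n: int) -> int:
    """How many other people any one subscriber can call."""
    return max(n - 1, 0)

def potential_links(n: int) -> int:
    """Number of distinct caller/callee pairs in a network of n phones."""
    return n * (n - 1) // 2

# One phone: nobody to call. Ten phones: 45 possible conversations.
# Ten thousand phones: the link count has grown by six orders of magnitude.
for n in (1, 10, 10_000):
    print(f"{n} phones -> {reachable_peers(n)} peers, {potential_links(n)} links")
```

The same arithmetic explains the stability: removing a few thousand users from a million-user network barely dents the link count for everyone else.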
I would regard GitHub as having a network effect, in the same way that Twitter should have collapsed but hasn’t. Too many local governments are invested in it as their sole social media presence, and in doing so also force their citizens to subscribe to Twitter. GitHub is not a monopoly in the sense that anti-trust laws would apply. But they are a monopoly in that they own the platform, and thus own the network.
But there’s an upside: communities of people are also networks. Depending on how cohesive the contributors to a particular GitHub repo are, enough people can make the move away and would sway the unwilling to also move with them. This is no different than convincing family members to move to Signal, for example. Yes, it’s hard. But communities look out for their common interests. And if AI slop is affecting a community, then even though they didn’t want to deal with it, they have to make a choice.
Be the community member you want to see. Advocate for change in your network of people, however big or small. Without critical mass, a community will only splinter when acting unilaterally.
litchralee@sh.itjust.worksto
Programming@programming.dev•Are there programmers that still don't use AI?English
10·13 days agoHaving spent much of my software engineering career training and mentoring interns, new-hires, and transfers from other departments, and having toiled with some of their truly inexplicable questions that reveal shaky technical foundations, I can understand why so-called AI would be appealing: inexhaustible, while commanding the full battery of information stores that I could throw at it.
And yet, the reason I don’t use AI is precisely because those very interns, new-hires, and transfers invariably become first-class engineers that I have no problem referring to as my equals. It is my observation that I’ve become better at training these folks up with every passing year, and that means that if I were to instead spend my time using AI, I would lose out on even more talented soon-to-be colleagues.
I have only so much time of my mortal coil remaining, and if the dichotomy is between utilizing inordinate energy, memory, and compute for AI, or sharing my knowledge and skills to even just 2 people per year for the rest of my career, I’ll happily choose the latter. In both circumstances, I will never own the product of their labor, and I don’t really care to. What matters to me is that value is being created, and I know there is value in bringing up new software engineers into this field. Whereas the value of AI pales in comparison, if it’s even a positive value at all.
If nothing else, the advent of AI has caused me to redouble my efforts, to level-up more engineers to the best of my ability. It is a human legacy that I can contribute to, and I intend to.
Fair, though I personally don’t let my ISP indirectly dictate what I do with my LAN. If I didn’t already have a v6-enabled WAN, I would still manage my LAN using IPv6 private range addresses. There are too many benefits to me, like having VMs and containers be first-class citizens on my LAN, rather than sitting behind yet another layer of NAT. That lets me avoid port forwarding at the border of my home Kubernetes cluster (or formerly, my Docker Swarm), and it means my DNS names correctly resolve to a valid IP address that’s usable anywhere on my network (because no NAT when inside the LAN).
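For anyone curious, the private range in question is the RFC 4193 unique local address (ULA) space. A quick sketch of minting a random ULA /48 with Python’s standard library (the function name is mine, not from any particular tool):

```python
import ipaddress
import secrets

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Pick an RFC 4193 unique local /48: the fd00::/8 prefix
    followed by 40 random bits of Global ID."""
    global_id = secrets.randbits(40)
    # The Global ID occupies bits 8..47 of the address, i.e. shifted
    # left by 80 within the top 48 network bits of a 128-bit address.
    base = int(ipaddress.IPv6Address("fd00::")) | (global_id << 80)
    return ipaddress.IPv6Network((base, 48))

prefix = random_ula_prefix()
# All ULAs live inside fc00::/7; fd00::/8 is the locally-assigned half.
print(prefix, prefix.subnet_of(ipaddress.IPv6Network("fc00::/7")))
```

Picking the 40 bits randomly (rather than something memorable like fd00:1::/48) is what RFC 4193 asks for, so that two LANs merged later are unlikely to collide.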
I will admit that NAT64 is kinda a drag to access v4-only resources like GitHub, but that’s only necessary because they’ve not lit up support for v6 (despite other parts of their site supporting v6).
This is my idea of being future-ready: when the future comes, I’m already there.
The approach isn’t invalid, but seeing as you already have the framework set up to deny all and log for IPv4, the same could be done with IPv6.
That is to say, your router advertises an IPv6 gateway to the global internet, but you then reject it because your VPN doesn’t support v6 (sadly). I specifically say reject, rather than drop, because you want that ICMP Unreachable (administratively prohibited) message to get returned to any app trying to use v6. That way, Happy Eyeballs will gracefully and quickly fall back to v4. Unless your containers have some exceptionally weird routing rules, v6 connections will only be attempted once, and will always use the route advertised. So if your router denies this attempt, your containers won’t try again in a way that could leak. v6 leaks are more likely when there isn’t even a route advertised.
This leaves your apps ready to use v6 for the day your VPN supports it, so it’s just a question of when the network itself can be upgraded. IMO, apps should always try v6 first; the network (if it can’t support it) will affirmatively reply that it can’t, and the apps will gracefully fall back.
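The reject-not-drop point can be sketched from the app’s side. This is a simplified stand-in for Happy Eyeballs (the real algorithm races attempts in parallel, per RFC 8305), and the helper name is mine; the point is that an affirmative rejection surfaces as an immediate OSError, while a silent drop would burn the full timeout before moving on:

```python
import socket

def connect_with_fallback(candidates, timeout=2.0):
    """Try each (host, port) candidate in order – v6 first, then v4.
    An affirmative rejection (ICMP administratively-prohibited, or a
    TCP RST) raises OSError immediately, so fallback is fast; a
    silent drop would instead cost the full timeout per attempt."""
    for host, port in candidates:
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            return sock, host
        except OSError:
            continue  # rejected or unreachable: move on to the next candidate
    raise OSError("no candidate address accepted the connection")
```

With the router rejecting v6, the first attempt errors out in milliseconds and the v4 candidate is tried straight away.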
This also benefits you by logging all attempted v6 traffic, to know how much of your stuff is actually v6-capable. And more data is always nice to have.
litchralee@sh.itjust.worksto
Ask Electronics@discuss.tchncs.de•How do I get my oscilloscope to read voltage correctly?English
2·21 days agoI wish you the best of luck in your automotive endeavors. But specific to that field, be advised that automotive power can have a lot of voltage spikes, most notably right after the starter motor shuts off after ignition. These could be as high as 15 V or thereabouts. So if you’re not probing during that dynamic event, your scope will likely still be useful.
I will also note that a used benchtop scope can be had for about $200 USD, often with good tactile controls and acceptable bandwidth and voltage capabilities. A cursory search on eBay shows a 2-channel 50 MHz Siglent SDS1052DL with 400 volt inputs. For general technician and hobbyist diagnostics work, that’s a good deal for an instrument that is one step above what a competent DMM can provide.
litchralee@sh.itjust.worksto
Ask Electronics@discuss.tchncs.de•How do I get my oscilloscope to read voltage correctly?English
18·22 days agoI read your question and was wondering how an oscilloscope could be giving such widely-differing values, with the widest being 0.4 volts against itself and nearly 0.8 volts against a separate instrument. Then it dawned upon me that this oscilloscope is a PC-attached scope with some unique operating limits. I say this having come from a background of using only benchtop digital scopes.
The first limitation is that your scope has a very narrow input voltage range, with the manual listing it as +/- 5 volts but damage would only occur at +/- 35 volts. This is voltage measured at the input BNC connector, so it’s before any probe multiplication is accounted for. Whereas if we look at an inexpensive benchtop oscilloscope like the now-fairly-old Rigol DS1052E, it has an input voltage range of +/- 40 volts. The practical result is that to measure something like a laptop power supply, the Hantek must use attenuation probes, whereas the Rigol can measure that voltage directly. Slightly more expensive oscilloscopes have wider ranges, with some being +/- 400 volts.
Attenuation probes are great for measuring wider voltage ranges, but they come at the cost of both precision and accuracy. The loss of precision comes from the fact that the resolution of the oscilloscope is unchanged while the voltage range is wider. In concrete terms, both the Hantek and Rigol use an 8-bit ADC, meaning that the span of input voltages visible on the display is mapped to 256 discrete values. If the ADC is imprecise by 1 bit, the reading will be off by one of those steps – some tens of millivolts, depending on the span in view. But something like a 20:1 attenuation probe causes that error to be multiplied by 20x at the probe tip. Whereas the Rigol doesn’t need attenuation probes, and thus doesn’t suffer this penalty.
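Rough numbers, assuming a ±5 V window (a 10 V span, matching the Hantek’s listed input range) and an 8-bit ADC:

```python
def lsb_volts(span_volts: float, bits: int = 8) -> float:
    """Voltage represented by one ADC count for the on-screen span."""
    return span_volts / 2 ** bits

step = lsb_volts(10.0)     # one count at the BNC input: ~39 mV
# A 20:1 probe divides the signal before the ADC, so each count
# corresponds to 20x more volts at the probe tip.
step_at_tip = 20 * step    # ~0.78 V of real-world voltage per count
print(f"{step * 1000:.1f} mV per count, {step_at_tip:.2f} V at the probe tip")
```

So a one-count wobble that would be a ~39 mV error when measuring directly becomes nearly 0.8 V of error through a 20:1 probe – the same ballpark as the discrepancies described in the question.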
Furthermore, the Rigol has a neat trick: it uses a separate, more-precise internal attenuation circuit for voltages smaller than +/- 2 volts, and then uses its normal-precision input circuit for all other voltages up to +/- 40 volts. The ADC is unchanged in both modes, and the scope switches seamlessly between the two (though usually with an audible click). This means that a 20:1 probe measuring a laptop charger would actually cause the Rigol to switch into its precision circuit, so the Rigol might never pay the precision penalty that the Hantek does. Perhaps the Hantek has a similar feature, but it is not listed in the manual.
As for accuracy loss due to attenuation probes, this is not affected by the amount of attenuation, but rather is a function of how accurate the attenuation is. When a probe is marked as 20:1, it could actually be 19:1 or 21:1 or anywhere around there, depending on the manufacturing tolerances. However, accuracy issues can be resolved through calibration, which you’ve done.
Overall, it seems that you are operating at the very limits of what your Hantek scope can deliver, with its 8-bit ADC and limited input range. Yet your test calls for a voltage 4x higher, so some error is to be expected from the 20:1 probe. With the 10:1 probe, the error is a bit smaller, but now you’re outside the affirmative safe voltage range of the scope. Calibration can only fix accuracy issues, but I think your error is now predominantly due to loss of precision, which cannot be resolved after-the-fact.
If your intended use is to measure signals in the range of a laptop charger and require faithful analog voltage measurements, I’m afraid that you may need to find a different instrument.
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•Element/Matrix Official Docker Install Method?English
232·27 days agoFirstly, I wish you the best of luck in your community’s journey away from Discord. This may be a good time to assess what your community needs from a new platform, since Discord targeted various use-cases that no single alternative can hope to cover in full. Identifying exactly what your group needs and doesn’t need will steer you in the right direction.
As for Element, bear in mind that their community and paid versions do not exactly target a hobbyist self-hosting clientele. Instead, Element is apparently geared more for enterprise on-premises deployment (like Slack, Atlassian JIRA, Asterisk PBX) and that’s probably why the community version is also based on Kubernetes. This doesn’t mean you can’t use it, but their assumptions about deployments are that you have an on-premises cloud.
Fortunately, there are other Matrix homeservers available, including one written in Rust that has both bare metal and Docker deployment instructions. Note that I’m not endorsing this implementation, but only know of it through this FOSDEM talk describing how they dealt with malicious actors.
As an aside, I have briefly considered Matrix before as a group communications platform, but was put off by their poor E2EE decisions, for both the main client implementation and in the protocol itself. Odd as it sounds, poor encryption is worse than no encryption, because of the false assurance it gives. If I did use Matrix, I would not enable E2EE because it doesn’t offer me many privacy guarantees, compared to say, Signal.
litchralee@sh.itjust.worksto
Programming@programming.dev•Using an engineering notebookEnglish
4·29 days agoI don’t currently have any sort of notebook. Instead, for general notes, I prefer A3-sized loose sheets of paper, since I don’t really want to use double the table surface to have both verso and recto in front of me, I don’t like writing on spiral or perfect bound notebooks, and I already catalog my papers into 3-ring binders.
if I’m debugging something, and I’m putting silly print statements to quickly troubleshoot, should I document that?
My read of the linked post is that each discrete action need not be recorded, but rather the thought process that leads to a series of actions. Rather than “added a printf() in constructor”, the overall thrust of that line of investigation might be “checking the constructor for signs of malformed input parameters”.
I don’t disagree with the practice of “printf debugging”, but unless you’re adding a printf between every single operative line in a library, there’s always going to be some internal thought that goes into where a print statement is placed, based on certain assumptions and along a specific line of inquiry. Having a record of your thoughts is, I think, the point that the author is making.
That said, in lieu of a formal notebook, I do make frequent Git commits and fill in the commit message with my thoughts, at every important juncture (eg before compiling, right before logging off or going to lunch).
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•I made a way to remotely control my homelab without any internet access requiredEnglish
4·1 month agoObligatory mention: !keming@lemmy.world
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•I made a way to remotely control my homelab without any internet access requiredEnglish
2·1 month agoAre ham radio operators in the EU able to use LoRa radios and be exempt from duty cycle limitations?
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•I made a way to remotely control my homelab without any internet access requiredEnglish
9·1 month agoAdmittedly, I haven’t finished reflashing my formerly-Meshtastic LoRa radios with MeshCore, so I haven’t been able to play around with it yet. Although both meshes have a decent presence near me, I was swayed to MeshCore once I started looking into how the mesh algorithm works for each. No extra license is needed, since MeshCore supports roughly the same hardware as Meshtastic.
And what I learned – especially from following the #meshtastic and #meshcore hashtags on Mastodon – is that Meshtastic has some awful flooding behavior when sending messages. Having worked in computer networks, I can say this is a recipe for limiting the maximum size and performance of the mesh. Whereas MeshCore has a more sensible routing protocol for passing messages along.
My opinion is that mesh networking’s most important use-case should be reliability, since when everything else (eg fibre, cellular, landlines) stops working, people should be able to self organize and build a working communications system. This includes scenarios where people are sparsely spaced (eg hurricane disaster with people on rooftops awaiting rescue) but also extremely dense scenarios (eg a protest where the authorities intentionally shut off phone towers, or a Taylor Swift concert where data networks are completely congested). Meshtastic’s flooding would struggle in the latter scenario, to send a distress message away from the immediate vicinity. Whereas MeshCore would at least try to intelligently route through nodes that didn’t already receive the initial message.
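A toy model of the difference (my own simplification, not either project’s actual protocol): in a managed flood, every node that hears a new message rebroadcasts it once, so airtime scales with node count; with source routing, only the nodes along the path transmit.

```python
from collections import deque

def flood_transmissions(adj, source):
    """Managed flood: every node that hears a new message rebroadcasts
    it exactly once (a simplified, Meshtastic-style model)."""
    seen, queue, sends = {source}, deque([source]), 0
    while queue:
        node = queue.popleft()
        sends += 1  # this node keys up and transmits once
        for peer in adj[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return sends

def routed_transmissions(adj, source, dest):
    """Source-routed delivery: only nodes along a shortest path
    transmit (a simplified, MeshCore-style model)."""
    prev, queue = {source: None}, deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:
            break
        for peer in adj[node]:
            if peer not in prev:
                prev[peer] = node
                queue.append(peer)
    hops, node = 0, dest
    while prev[node] is not None:
        hops, node = hops + 1, prev[node]
    return hops

# Dense 'concert crowd' scenario: 50 nodes that can all hear each other.
crowd = {i: [j for j in range(50) if j != i] for i in range(50)}
print(flood_transmissions(crowd, 0))       # every node transmits once: 50 sends
print(routed_transmissions(crowd, 0, 49))  # direct neighbor: 1 send
```

In the dense case, flooding spends 50 transmissions of precious airtime to deliver one message that a single transmission could have handled – exactly the congestion failure mode described above.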
litchralee@sh.itjust.worksto
Programming@programming.dev•Help learning low-level devEnglish
4·1 month agoI personally started learning microcontrollers using an Arduino dev kit, and then progressed towards compiling the code myself using GCC and loading it directly to the Atmel 328p (the microcontroller from the original Arduino dev kits).
But nowadays, I would recommend the MSP430 dev kit (which has excellent documentation for its peripherals) or the STM32 dev kit (because it uses the ARM32 architecture, which is very popular in the embedded hardware industry, so would look good on your resume).
Regarding userspace drivers: because these are outside of the kernel, such drivers are not kept in the repositories for the kernel. You won’t find any userspace drivers in the Linux or FreeBSD repos. Instead, such drivers are kept in their own repos, maintained separately, and often do unusual things that the kernel folks don’t want to maintain until there is enough interest. For example, if you’ve developed an unproven VPN tunnel similar to Wireguard, you might face resistance to getting it into the Linux kernel. But you could write a userspace driver that implements your VPN tunnel, and others can use that driver without changing their kernel. If it gets popular enough, other developers might put in the effort to reimplement it as a mainline kernel driver.
For userspace driver development, a VM running the specific OS is fine. For kernel driver development, I prefer to run the OS within QEMU, since that allows me to attach a debugger to the VM’s “hardware”, letting me do things like adding breakpoints within my kernel driver.
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•I made a way to remotely control my homelab without any internet access requiredEnglish
17·1 month agoVery interesting! I’m no longer pursuing Meshtastic – I’m changing my hardware over to run MeshCore now – but this is quite a neat thing you’ve done here.
As an aside, if you later want to have full networking connectivity (Layer 2) using the same style of encoding the data as messages, PPP is what could do that. If transported over Meshtastic, PPP could give you a standard IP network, and on top of that, you could use SSH to securely access your remote machine.
It would probably be very slow, but PPP was also used for dial-up so it’s very accommodating. The limiting factor would be whether the Meshtastic local mesh would be jammed up from so many messages.
litchralee@sh.itjust.worksto
Programming@programming.dev•Help learning low-level devEnglish
3·1 month agoThis answer is going to go in multiple directions.
If you’re looking for practice using C to talk to devices and peripherals, the other commenter’s suggestion to start with an SBC (eg Raspberry Pi, Orange Pi) or with a microcontroller dev kit (eg Arduino, MSP430, STM32) is spot-on. That gives you a bunch of attached peripherals and the datasheet that documents the register behavior, so you can then write your own C functions that fill in and read those registers. In actual projects, you would probably use the provided libraries that already do this, but there is educational value in trying it yourself.
However, just because you write a C function named “put_char_uart0()”, that isn’t enough preparation for writing full-fledged drivers, such as those in the Linux and FreeBSD kernels. This next step is more about software design, where you structure your C code so that rather than being very hardware-specific (eg for the exact UART peripheral in your microcontroller), you have code which works for a more generic UART (abstracting general details) but is common to all the UARTs made by the same manufacturer. This is about creating reusable code, about creating abstraction layers, and about writing extensible code. Not all code can be reusable, not every abstraction layer is desirable, and you don’t necessarily want to make your code super extensible if that starts to impact your core requirements. Good driver design means you don’t ever paint yourself into a corner, and the best way to learn how to avoid this is through sheer experience.
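The layering can be sketched in Python for brevity (the real thing would be C, and all the names here are illustrative): shared logic lives in the generic layer, and only a thin backend knows about the actual registers.

```python
from abc import ABC, abstractmethod

class Uart(ABC):
    """Generic UART layer: code above this line never touches registers."""

    @abstractmethod
    def put_char(self, byte: int) -> None: ...

    def put_string(self, text: str) -> None:
        # Common code, shared by every manufacturer-specific backend.
        for ch in text.encode("ascii"):
            self.put_char(ch)

class MemoryMappedUart(Uart):
    """Hypothetical hardware-specific backend. In C this would write a
    real TX register; here a list stands in for the hardware FIFO."""

    def __init__(self) -> None:
        self.tx_fifo: list[int] = []

    def put_char(self, byte: int) -> None:
        self.tx_fifo.append(byte)  # in C: *UART0_TXDATA = byte;

uart = MemoryMappedUart()
uart.put_string("ok")
print(uart.tx_fifo)
```

Swapping in a different manufacturer’s UART then means writing only a new `put_char` backend, while everything built on `put_string` keeps working – the same shape that kernel driver frameworks impose at much larger scale.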
For when you do want to write a full-and-proper driver for any particular peripheral – maybe one day you’ll create one such device, such as by using an FPGA attached via PCIe to a desktop computer – then you’ll need to work within an existing driver framework. Linux and FreeBSD drivers use a framework so that all drivers have access to what they need (system memory, I/O, helper functions, threads, etc), and then it’s up to the driver author to implement the specific behavior (known in software engineering as “business logic”). It is a learned skill – also through experience – to work within the Linux or FreeBSD kernels. So much so that both kernels have gone to great lengths to enable userspace drivers, meaning the business logic runs as a normal program on the computer, saving the developer from having to learn the strange ways of kernel development.
And it’s not like user space drivers are “cheating” in any way: they’re simply another framework to write a device driver, and it’s incumbent on the software engineer to learn when a kernel or user space driver is more appropriate for a given situation. I have seen kernel drivers used for sheer computational performance, but have also seen userspace drivers that were developed because nobody on that team was comfortable with kernel debugging. Those are entirely valid reasons, and software engineering is very much about selecting the right tool from a large toolbox.
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•Can you help me adapt the Signal TLS Proxy to be used behind Nginx Proxy Manager?English
31·1 month agoSadly, I’m not familiar enough with Nginx Proxy Manager to know. But I would imagine that there must be a different way to achieve the same result.
BTW, when I read “NPM”, I first think of the Node.js package manager. The title of your post may be confusing, and you might consider editing it to spell out Nginx Proxy Manager.
litchralee@sh.itjust.worksto
Selfhosted@lemmy.world•Can you help me adapt the Signal TLS Proxy to be used behind Nginx Proxy Manager?English
5·1 month agoI’ll take a stab at the question. But I’ll need to lay some foundational background information.
When an adversarial network is blocking connections to the Signal servers, the Signal app will not function. Outbound messages will still be encrypted, but they can’t be delivered to their intended destination. The remedy is to use a proxy, which is a server that isn’t blocked by the adversarial network and which will act as a relay, forwarding all packets to the Signal servers. The proxy cannot decrypt any of the messages, and a malicious proxy is no worse than blocking access to the Signal servers directly. A Signal proxy specifically forwards only to/from the Signal servers; this is not an open proxy.
The Signal TLS Proxy repo contains a Docker Compose file, which will launch Nginx as a reverse proxy. When a Signal app connects to the proxy at port 80 or 443, the proxy will – in the background – open a connection to the Signal servers. That’s basically all it does. They presumably shipped the proxy as a Docker Compose file because that’s fairly easy for most people to set up.
But now, in your situation, you already have a reverse proxy for your selfhosting stack. While you could run Signal’s reverse proxy in the background and then have your main reverse proxy forward to that one, it would make more sense to configure your main reverse proxy to directly do what the Signal reverse proxy would do.
That is, when your main proxy sees one of the dozen subdomains for the Signal servers, it should perform reverse proxying to those hosts. Normally, for the rest of your selfhosting arrangement, the reverse proxy would target some container running on your LAN. But in this specific case, the target is actually out on the public Internet. So the original connection comes in from the Internet, and the target is somewhere out there too. Your reverse proxy is simply a relay station.
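Stripped of the TLS and SNI handling, the relay’s job really is just copying bytes in both directions without looking at them. A minimal asyncio sketch of that idea (illustrative only – not the actual Signal/Nginx configuration, and all names are mine):

```python
import asyncio

async def pump(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    """Copy bytes one way until EOF; the relay never inspects them."""
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_r, client_w, upstream_host, upstream_port):
    # Splice the client socket to the real (e.g. Signal) server;
    # the TLS session stays end-to-end straight through us.
    up_r, up_w = await asyncio.open_connection(upstream_host, upstream_port)
    await asyncio.gather(pump(client_r, up_w), pump(up_r, client_w))

async def run_relay(listen_port: int, upstream_host: str, upstream_port: int):
    server = await asyncio.start_server(
        lambda r, w: handle_client(r, w, upstream_host, upstream_port),
        "0.0.0.0", listen_port)
    async with server:
        await server.serve_forever()
```

The real proxy additionally chooses the upstream based on the SNI hostname in the TLS ClientHello, which is why your main proxy has to know the dozen Signal subdomains in the first place.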
There is nothing particularly special about Signal choosing to use Nginx in reverse proxy mode, in that repo. But it happens to be that you are already using Nginx Proxy Manager. So it’s reasonable to try porting Signal’s configuration file so that it runs natively with your Nginx Proxy Manager.
What happens if Signal updates that repo to include a new subdomain? Well, you wouldn’t receive that update unless you specifically check for it. And then update your proxy configuration. So that’s one downside.
But seeing as the Signal app demands port 80 and 443, and you already use those ports for your reverse proxy, there is no way to avoid programming your reverse proxy to know the dozen subdomains. Your main reverse proxy cannot send the packets to the Signal reverse proxy if your main proxy cannot even identify that traffic.
litchralee@sh.itjust.worksto
Programming@programming.dev•FOSDEM 2026, one of the world's largest software meeting will start in 2 days in Brussels 🇧🇪. This edition features 1176 speakers and 1063 eventsEnglish
5·1 month agoSince that whole vibe-coded Cloudflare Matrix nonsense and associated attempted retcon – see here for context – I am looking forward to a talk on how Matrix actually works.
Specifically, I’d like to know what aspects of a secure, decentralized message platform are particularly hard. That’s in the context of whether Matrix can ever grow into a bona fide Signal competitor (nb: Signal remains the gold standard), and also whether Matrix would function well as a Discord replacement, even if it doesn’t have as strong of group chat privacy and encryption protections.
litchralee@sh.itjust.worksto
Programming@programming.dev•How useful are functional programming languages?English
2·1 month agoThere can be, although some parts may still need to be written in assembly (which is imperative, because that’s ultimately what most CPUs do), for parts like a kernel’s context switching logic. But C has similar restrictions, like how it is impossible to start a C function without initializing the stack. Exception: some CPUs (eg Cortex M) have a specialized mechanism to initialize the stack.
As for why C, it’s a low-level language that maps well to most CPU’s native assembly language. If instead we had stack-based CPUs – eg Lisp Machines or a real Java Machine – then we’d probably be using other languages to write an OS for those systems.
For cleaning, I used a spray can of Easy Off oven cleaner, specifically one with a yellow cap that says it contains lye (aka sodium hydroxide). I preheated the pan to 200 F (~90 C) in the oven for 20 minutes, then withdrew it and immediately sprayed it with Easy Off. The pan then went into a plastic garbage bag, the bag set inside a 5 gallon bucket for support, and the bag wrapped shut. This keeps the vapors circulating within the bag, exposing more deposits to the chemical’s effects.
After a day, I removed the pan from the bag and washed it down with plenty of water to dilute the sodium hydroxide. The pan was then scrubbed with a nylon brush to physically remove the crusty material. To dry off the water, I put the pan into the oven again at 200 F for 20 minutes. The bag should also be rinsed with plenty of water before reusing it for standard trash service. Wear gloves.
Once I got the pan suitably stripped, I followed my original process, described here: https://sh.itjust.works/comment/15774888