• 5 Posts
  • 145 Comments
Joined 2 years ago
Cake day: July 2, 2023


  • https://ipv6now.com.au/primers/IPv6Reasons.php

    Basically, Legacy IP (v4) is a dead end. Under the original allocation scheme, it should have run out in the early 1990s. But the Internet explosion meant TCP/IP(v4) was locked in, and so NAT was introduced to stave off address exhaustion. That caused huge problems that persist to this day, like mismanaged firewalls and the need for port-forwarding. It also broke end-to-end connectivity, which requires additional workarounds like STUN/TURN that continue to plague gamers and video conferencing software.

    And because of that scarcity, it’s become a land grab where rich companies and countries hoard the limited addresses in circulation, creating haves (North America, Europe) and have-nots (Africa, China, India).

    The case for v6 is technical, moral, and even economic: one cannot escape Big Tech or American hegemony while still having to buy IPv4 space on the open market. Czechia and Vietnam are case studies in pushing for all-IPv6, both to bolster domestic technological familiarity and to escape the broader problems with Business As Usual.

    Accordingly, there are now three classes of Internet users: v4-only, dual-v4-and-v6, and v6-only. Surprisingly, v6-only is very common now on mobile networks in countries that never had many v4 addresses. And Apple requires all App Store apps to function correctly in a v6-only environment. At a minimum, everyone should have access to dual-stack IP networks, so they can reach services that might be v4-only or v6-only.

    In due course, the unstoppable march of time will leave v4-only users in the past.


  • You might also try asking on !ipv6@lemmy.world .

    Be advised that even if a VPN offers IPv6, they may not necessarily offer it sensibly. For example, some might only give you a single address (aka a routed /128). That might work for basic web fetching, but it’s wholly inadequate if you want the VPN to also give addresses to any VMs, or if you want each outbound connection to use a unique IP. And that’s a fair ask, because a normal v6 network can usually do that, even though a typical Legacy IP network can’t.

    Some VPNs will offer you a /64 subnet, but their software might not check if your SLAAC-assigned address is leaking your physical MAC address. Your OS should have privacy-extensions enabled to prevent this, but good VPN software should explicitly check for that. Not all software does.
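
    As a rough sketch of what "checking for a leak" means: a SLAAC address built by modified EUI-64 embeds the interface's MAC with the bytes ff:fe inserted in the middle of the interface identifier, so it can be spotted programmatically. The addresses below are illustrative documentation-prefix examples, not real ones.

```python
# Sketch: spot a SLAAC address whose interface identifier embeds a MAC.
# Modified EUI-64 places the marker bytes ff:fe in the middle of the
# low 64 bits of the address.
import ipaddress

def looks_like_eui64(addr: str) -> bool:
    """True if the interface identifier carries the ff:fe EUI-64 marker."""
    b = ipaddress.IPv6Address(addr).packed
    return b[11] == 0xFF and b[12] == 0xFE

# A MAC of 00:11:22:33:44:55 becomes interface ID 0211:22ff:fe33:4455
print(looks_like_eui64("2001:db8::211:22ff:fe33:4455"))   # True: MAC-derived
print(looks_like_eui64("2001:db8::e4d3:1c2b:9a87:6f54"))  # False: temporary-style ID
```

    A privacy-extensions (temporary) address is randomized, so it won't show that marker, which is what a diligent VPN client could verify before bringing the tunnel up.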


    Connection tracking might not be totally necessary for a reverse proxy mode, but it’s worth discussing what happens if connection tracking is disabled or if the known-connections table runs out of room. For a well-behaved protocol like HTTP(S) that has a fixed inbound port (eg 80 or 443) and uses TCP, tracking a connection means being aware of the TCP connection state, which the destination OS already has to do. And since a reverse proxy terminates the TCP connection, the additional effort for connection tracking is minimal.

    For a poorly-behaved protocol like FTP – which receives initial packets on a fixed inbound port but then spawns a separate port for outbound packets – the effort of connection tracking means setting up the firewall to allow ongoing (ie established) traffic to pass in.

    But these are the happy cases. In the event of a network issue that affects an HTTP payload sent from your reverse proxy toward the requesting client, a mid-way router will send back to your machine an ICMP packet describing the problem. If your firewall is not configured to let all ICMP packets through, then the only way in is for conntrack to look up the connection details in its table and allow the ICMP packet through as “related” traffic. This is not dissimilar to the FTP case above, but rather than a different port number, it’s an entirely different protocol.

    And then there’s UDP tracking, which is relevant to QUIC. For hosting a service, UDP is connectionless, so for any inbound packet received on port XYZ, conntrack will permit an outbound packet on port XYZ. But that’s redundant, since we presumably had to explicitly allow inbound port XYZ to expose the service. In the opposite case, where we want to access UDP resources on the network, an outbound packet to port ABC means conntrack will keep an entry to permit an inbound packet on port ABC. If you are doing lots of DNS lookups (typically using UDP), then that alone could swamp the conntrack table: https://kb.isc.org/docs/aa-01183

    It may behoove you to first look at what’s filling conntrack’s table, before looking to disable it outright. It may be possible to specifically skip connection tracking for anything already explicitly permitted through the firewall (eg 80/443). Or if the issue is due to numerous DNS resolution requests from trying to look up spam source IPs, then perhaps either the logs should not do a synchronous DNS lookup, or you can skip connection tracking for DNS as well.
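
    For the nftables case, selectively skipping conntrack looks roughly like the hypothetical fragment below (table and chain names are made up; the `notrack` statement goes in chains hooked at raw priority). One caveat: untracked flows lose the “related” ICMP handling described above, so the return traffic then needs its own stateless accept rules.

```
# Hypothetical nftables fragment: bypass conntrack for traffic that is
# already explicitly allowed, and for outbound DNS lookups.
table inet raw {
    chain prerouting {
        type filter hook prerouting priority raw;
        tcp dport { 80, 443 } notrack   # reverse-proxied HTTP(S)
    }
    chain output {
        type filter hook output priority raw;
        udp dport 53 notrack            # resolver lookups (eg log DNS)
    }
}
```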


  • https://github.com/Overv/vramfs

    Oh, it’s a user space (FUSE) driver. I was rather hoping it was an out-of-tree Linux kernel driver, since using FUSE: 1) always bounces requests back to userspace, which costs performance, and 2) destroys any possibility of DMA-enabled memory operations (DPDK is a possible exception). I suppose if the only objective was to store files in VRAM, this does technically meet that, but it’s leaving quite a lot on the table, IMO.

    If this were a kernel module, the filesystem performance would presumably improve, limited by how the VRAM is exposed by OpenCL (ie very fast if it’s just all mapped into PCIe). And if it were basically offering VRAM as PCIe memory, then this potentially means the VRAM can be used for certain niche RAM cases, like hugepages: some applications need large quantities of memory, plus a guarantee that it won’t be evicted from RAM, and whose physical addresses can be resolved from userspace (eg DPDK, high-performance compute). If such a driver could offer special hugepages backed by VRAM, then those applications could benefit.

    And at that point, on systems where the PCIe address space is unified with the system address space (eg x86), then it’s entirely plausible to use VRAM as if it were hot-insertable memory, because both RAM and VRAM would occupy known regions within the system memory address space, and the existing MMU would control which processes can access what parts of PCIe-mapped-VRAM.

    Is it worth re-engineering the Linux kernel memory subsystem to support RAM over PCIe? Uh, who knows. Though I’ve always liked the thought of DDR on PCIe cards. “All technologies are doomed to reinvent PCIe,” someone from Level1Techs once said, I think.



  • For my own networks, I’ve been using IPv6 subnets for years now, and have NAT64 translation for when they need to access Legacy IP (aka IPv4) resources on the public Internet.

    Between your two options, I’m more inclined to recommend the second solution, because although it requires renumbering existing containers to the new subnet, you would still have one subnet for all your containers, just bigger now. Whereas the first solution would either: A) preclude containers on the first bridge from directly talking to containers on the second bridge, or B) require some sort of awful NAT44 translation to make the two work together.

    So if IPv6 and its massive, essentially-unlimited ULA subnets are not an option, then I’d still go with the second solution, which is a bigger-but-still-singular subnet.
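
    The renumbering math for the second solution can be sketched with Python’s ipaddress module (the RFC 1918 ranges here are made-up stand-ins for whatever your bridge actually uses): halving the prefix length keeps the old addresses inside the new, bigger subnet.

```python
# Sketch with made-up private ranges: growing one bridge subnet instead of
# adding a second bridge keeps every container in one flat network.
import ipaddress

old = ipaddress.ip_network("172.18.0.0/24")   # hypothetical original bridge
new = ipaddress.ip_network("172.18.0.0/23")   # same base address, twice the space

print(old.subnet_of(new))   # True: existing container addresses stay valid
print(new.num_addresses)    # 512, versus the old 256
```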


  • I see. Given those constraints then, I don’t see any option besides a new heater. Ideally, the new heater would be built with less circuitry, so there would be fewer things to break.

    Looking at the Adax Clea product description, it seems overly complicated for a radiator, IMO. I’m not sure I’d want triac switching for something like a heating appliance. Resistive heating doesn’t strictly require silicon switches, when a relay should work. But I suspect an equally svelte radiator that’s also simple may be hard to find.


  • My experience is mostly with repairing lower voltage devices (eg 12v to 54v PoE). In your case, a phase-to-phase short has made quite the mark on that PCB, and being a much higher energy event than low-voltage DC, it’s possible that some delamination has occurred, with downstream effects on expected trace resistance, capacitance, and leakage/creepage.

    Were this a low-voltage board, I personally wouldn’t be worried about those downstream effects. But for AC line voltage, I’d rather buy myself the peace of mind. Do keep parts from the dead board that are salvageable, but IMO, a thermal event on the AC side of a 400 VAC board would disqualify it from continued service.

    P.S. does that circuit not have an onboard fuse? I’m not seeing one and I’m kinda surprised. Presumably an upstream circuit breaker or fuse was what tripped to stop this turning into a fire?


  • I’m taking a guess that perhaps the fridge makes similar assumptions that automobiles make for their lamps. Some cars that were designed when incandescent bulbs were the only option will use the characteristic resistance as an integral part of the circuit. For example, turn signals will often blink faster when either the front or rear corner bulb is not working, and this happens to be useful as an indicator to the motorist that a bulb has gone bust.

    For other lamps, such as the interior lamp, the car might do a “soft start” thing where, upon opening the car door, the lamp ramps up slowly to full brightness. If an LED bulb is installed here, the issues are manifold: some LEDs don’t support dimming, but all incandescent bulbs do. And the circuit may require the exact resistance of an incandescent bulb to control the rate of ramping up to full brightness. An LED bulb here may malfunction or damage the car circuitry.

    Automobile light bulbs are almost always supplied with 12 volts, so an aftermarket LED replacement bulb is designed to also expect 12 volts, then internally convert down to the native voltage of the LEDs. However, in the non-trivial circuits described above, the voltage supplied to the bulb intentionally varies. The converter in the LED bulb still tries to produce the native LED voltage, so it draws more current to compensate. This constant-power behavior is non-linear, whereas an incandescent filament behaves approximately like a plain resistor.

    So my guess is that your fridge could possibly be expecting certain resistance values from the bulb, but the LED you installed is not meeting those assumptions. This could be harmless, or maybe either the fridge or the LED bulb has been damaged. The best way to test would be installing a new, like-for-like OEM incandescent bulb and seeing if that will work in your fridge.
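
    The difference in behavior can be sketched numerically (the 6 Ω filament resistance and 6 W bulb power below are made-up illustrative values): the resistive filament’s current falls with voltage, while the constant-power LED converter draws *more* current as the supply drops.

```python
# Sketch of why a dimming circuit sees an LED retrofit so differently:
# a resistive filament draws I = V/R, while an LED bulb's converter
# holds power roughly constant, so its current rises as voltage drops.
def incandescent_current(v, r_ohms=6.0):   # assumed hot-filament resistance
    return v / r_ohms

def led_current(v, p_watts=6.0):           # assumed constant-power draw
    return p_watts / v

for v in (12.0, 6.0, 3.0):                 # full, half, quarter supply voltage
    print(f"{v:4.1f} V  filament {incandescent_current(v):.2f} A   "
          f"LED {led_current(v):.2f} A")
```

    At a quarter of the supply voltage, the hypothetical LED bulb is pulling four times its nominal current, which is exactly the kind of load an incandescent-era circuit never expected.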


  • I like vimdiff, since it’s fairly quick to collapse and expand code chunks if you know the keyboard shortcuts. Actually, since it’s vim, knowing the keyboard shortcuts is the entire game lol.

    I usually have vimdiff open in a horizontal pane in tmux, then use the other horizontal pane to look at other code that the change references. Could I optimize and have everything in a single vim session? Sure, but at that point, I’d also want cscope set up to find references within vim, and I’m now trivial steps away from a full IDE in vim.

    … which people do have, and more power to them. But alas, I don’t have the luxury of fastidious optimization of my workflow to that degree.


  • To start, the idea of charging in parallel while discharging in series is indeed valid. And for multicell battery packs such as for electric automobiles and ebikes, it’s the only practical approach. That said, implementations vary, with some solutions providing the bulk of the charging current through the series connection and then using per-cell leads to balance each cell.

    In your case, you would have a substantial number of cells in series, to the point that series charging would require high voltage DC, beyond the normal 50-60 VDC that constitutes low-voltage.

    But depending on whether charging and discharging are mutually exclusive operations, one option would be to electrically break the pack into smaller groups, so that existing charge controllers can charge each group through normal means (ie balancing wires). Supposing that you used 12s charger ICs, that would reduce the number of ICs to about 9 for a pack with a nominal series voltage of ~400 VDC. You would have to make sure these ICs are isolated once the groups are reconstituted into the full series arrangement.

    Alternatively, you could float all the charging ICs, by having 9 rails of DC voltage to supply each of the charging ICs. And this would allow continuous charging and battery monitoring during discharge. Even with the associated circuitry to provide these floating rails, the part count is still lower than having each cell managed by individual chargers and MOSFETs.

    It’s not clear from your post what capacity or current you intend for this overall pack, but even in small packs, I cannot possibly advise using anything but a proper li-ion charge controller for managing battery cells. The idea of charging a capacitor to 4.2v and then blindly dumping voltage into a cell is fraught with issues, such as lacking actual cell temperature monitoring or even just charging the cell in a healthy manner. Charge ICs are designed specifically for the task, and are just plain easier to build into a pack while being safer.
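
    The group-count arithmetic above is just back-of-envelope division (3.7 V nominal per cell is a typical li-ion figure, assumed here):

```python
# Back-of-envelope sketch of the grouping idea: split a long series
# string into 12s groups that off-the-shelf charger ICs can manage.
import math

cell_nominal_v = 3.7                                      # typical li-ion nominal
target_pack_v = 400.0                                     # nominal series voltage
cells_in_series = round(target_pack_v / cell_nominal_v)   # ~108 cells
groups_of_12s = math.ceil(cells_in_series / 12)           # ~9 charger ICs

print(cells_in_series, groups_of_12s)   # 108 9
```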


  • In my personal workflow, I fork GitHub and Codeberg repos so that my local machine’s “origin” points to my fork, not to the main project. And then I also create an “upstream” remote to point to the main project. I do this as a precursor before even looking at the code on my local machine, as a matter of course.

    Why? Because if I do decide to draft a change in future, I want my workflow to be as smooth as possible. And since the norm is to push to one’s own fork and then create a PR from there to the upstream, it makes sense to set my “origin” to my fork; most established repos won’t let outside contributors push a new topic branch anyway.

    If I decide that there’s no commit to do, then I’ll still leave the fork around, because it’s basically zero-cost.

    TL;DR: I fork in preparation for an efficient workflow.


  • For a link of 5.5 km and with clear LoS, I would reach for 802.11 WiFi, since the range of 802.11ah HaLow wouldn’t necessarily be needed. For reference, many WISPs use Ubiquiti 5 GHz point-to-point APs for their backhaul links for much further distances.

    The question would be what your RF link conditions look like, whether 5 GHz is clear in your environment, and what sort of worst-case bandwidth you can accept. With a clear Fresnel zone, you could probably push something like 50 Mbps symmetrical, if properly aimed and configured.

    Ubiquiti’s website has a neat tool for roughly calculating terrain and RF losses.
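
    For a rough feel of the numbers, the free-space path loss over that 5.5 km can be computed with the standard FSPL formula (5.8 GHz assumed here as a typical point-to-point band; real links also need fade margin and account for cable/connector losses and antenna gains):

```python
# Rough free-space path loss for a 5.5 km clear-LoS link.
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    # Standard FSPL formula with distance in km and frequency in GHz
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

print(round(fspl_db(5.5, 5.8), 1))   # ~122.5 dB before antenna gains
```

    Around 122 dB of path loss is well within what high-gain dish antennas on both ends can recover, which is why WISPs routinely run such links.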


  • “Tbf, can’t the other party mess it up with Signal too?”

    Yes, but this is where threat modeling comes into play. Grossly simplified, developing a threat model means to assess what sort of attackers you reasonably expect to make an attempt on you. For some people, their greatest concern is their conservative parents finding out that they’re on birth control. For others, they might be a journalist trying to maintain confidentiality of an informant from a rogue sheriff’s department in rural America. Yet others face the risk of a nation-state’s intelligence service trying to find their location while in exile.

    For each of these users, they have different potential attackers. And Signal is well suited for the first two, and only alright against the third. After all, if the CIA or Mossad is following someone around IRL, there are other ways to crack their communications.

    What Signal specifically offers is confidentiality in transit, meaning that all ISPs, WiFi networks, CDNs, VPNs, script kiddies with Wireshark, and network admins in the path of a Signal convo cannot see the contents of those messages.

    Can the messages be captured at the endpoints? Yes! Someone could be standing right behind you, taking photos of your screen. Can the size or metadata of each message reveal the type of message (eg text, photo, video)? Yes, but that’s akin to feeling the shape of an envelope. Only through additional context can the contents be known (eg a parcel in the shape of a guitar case).

    Signal also benefits from the network effect, because someone trying to get away from an abusive SO has plausible deniability if they download Signal on their phone (“all my friends are on Signal” or “the doctor said it’s more secure than email”). Or a whistleblower can send a message to a journalist that included their Signal username in a printed newspaper. The best place to hide a tree is in a forest. We protect us.

    “My main issue for Signal is (mostly iPhone users) download it ‘just for protests’ (ffs) and then delete it, but don’t relinquish their acct, so when I text them using Signal it dies in limbo as they either deleted the app or never check it and don’t allow notifs.”

    Alas, this is an issue with all messaging apps, if people delete the app without closing their account. I’m not sure if there’s anything Signal can do about this, but the base guarantees still hold: either the message is securely delivered to their app, or it never gets seen. But the confidentiality should always be maintained.

    I’m glossing over a lot of cryptographic guarantees, but for one-to-one or small-group private messaging, Signal is the best mainstream app at the moment. For secure group messaging, like organizing hundreds of people for a protest, that is still up for grabs, because even if an app was 100% secure, any one of those persons can leak the message to an attacker. More participants means more potential for leaks.