  • On one hand, I’m pleased that C++ is answering the call for what I’ll call “safety as default”, since, as The Register and everyone else has pointed out, if safety constructs are “bolted on” like an afterthought, then of course they won’t see very high adoption. Contrast this to Rust and its “unsafe” keyword, which marks all the places where the minimum safety of the language might not hold.

    On the other hand, while this Safe C++ proposal adopts a similar notion of an “unsafe” context, it also adds a “safe” keyword, to specify that a function will conform to compile-time safety checks. But as the proposal readily admits:

    Rust’s functions are safe by default. C++’s are unsafe by default.

    While the proposal will surely continue to evolve before being implemented, I foresee a situation similar to what happened in C, where code that lacked const-correctness from the start struggled to work with newer code and libraries. In this case, it would be the “unsafe” keyword that proliferates everywhere, just to call older, unsafe code from newer, safe callers (a pattern sketched below).
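
    To illustrate (a minimal Rust sketch of my own, not code from the proposal): safe functions need no ceremony at all, while the unsafe block is a marked audit point – the same keyword that would spread through a Safe C++ codebase wherever it reaches into legacy code.

    ```rust
    // Minimal sketch of safe-by-default with an explicit unsafe boundary.
    // Names here are illustrative, not from the Safe C++ proposal.
    fn third_element(values: &[i32]) -> i32 {
        // Safe, bounds-checked indexing: panics rather than reading out of
        // bounds, so callers need no extra scrutiny.
        values[2]
    }

    fn third_element_fast(values: &[i32]) -> i32 {
        assert!(values.len() > 2);
        // The unsafe block marks exactly where the compiler's guarantees
        // rest on the assert above, ie on a human promise.
        unsafe { *values.get_unchecked(2) }
    }

    fn main() {
        let v = [10, 20, 30, 40];
        println!("{} {}", third_element(&v), third_element_fast(&v));
    }
    ```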

    Rust has the advantage that there isn’t much (if any) legacy Rust to maintain, and that means the volume of unsafe code in Rust programs is minimal, making them safer overall today. But for Safe C++, there’s going to be a lot of unsafe legacy C++ code, and that reduces the safety benefit for programs overall, for the time being.

    Even as this proposal progresses, the question of whether to start rewriting some code anew in Rust remains relevant. But this is still exciting as a new option to raise the bar in memory safety in C++.


  • commercial appliances didn’t take any stand-by measures to avoid “keeping the wires warm”

    Generally speaking, the standby current attributable to the capacitors has historically paled in comparison to the much higher standby current of the active electronics therein. The One Watt Initiative is one program that shed light on “vampire draw” and posed a tangible target for what an appliance’s standby power draw should look like: 1 Watt.

    A rather infamous example of profligate standby power was TV set-top boxes, rented from the satellite or cable TV company, drawing some 35 Watts – left plugged in around the clock, that’s roughly 300 kWh per year, per box. Because these weren’t owned by customers, so-called free-market principles couldn’t apply and consumers couldn’t “vote with their feet” for less power-hungry set-top boxes. And the satellite/cable TV companies didn’t care, since they weren’t the ones paying for the electricity to keep those boxes powered. Hence, a perverse scenario where power was being actively wasted.

    It took both carrots (eg EnergyStar labels) and sticks (eg EU and California legislation) to change this sordid situation. But to answer your question in the modern day, where standby draw is now mostly kept around 1 Watt or lower, it all boils down to design tradeoffs.

    For most consumer products, a physical power-switch has gone the way of the dodo. The demand is for products which can turn “off” but can start up again at a moment’s notice. Excellent electronics design can achieve standby consumption in the milliwatts, but this often entails an entirely separate circuit and supply whose sole job is to wake up the main circuit of the appliance. That’s extra parts, and thus more that can go wrong and cause warranty claims. It’s really only pursued when power consumption is paramount, such as for battery-powered devices. And even with all that effort, the power draw will never be zero.

    So instead, the more common approach is to reuse the existing supply and circuitry, but try to optimize it when not in active operation. That means accepting that the power supply circuitry will have some amount of always-on draw, and that the total appliance will have a standby power draw which is deemed acceptable.

    I would also be remiss if I didn’t mention the EU Directives since 2013 which mandate particular power-factor targets, which for most non-motor appliances can only be achieved with active components, ie Active Power Factor Correction (Active PFC). While not strictly a standby-power measure, it’s an example of one undertaken to avoid the conductor heating caused by reactive current, both locally and throughout the grid.


  • How were you measuring the current in the power cable? Is this with a Kill A Watt device, or perhaps with a clamp meter and a line splitter?

    As for why there is a capacitor across the mains input: a switching DC power supply like an ATX PSU draws current in a fairly jagged fashion. So to stabilize the input voltage, as well as to keep the switching noise from propagating through the mains and radiating everywhere, some capacitors are placed across the AC lines. This is a large oversimplification, though, as the types and values of these capacitors are the subject of careful design.

    Since a capacitor charges and discharges based on the voltage across it, and because AC power changes voltage “polarity” at 50 or 60 Hz, the flow of charge into and out of the capacitor will be measurable as a small current.
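
    To put rough numbers on it (the component values are my assumption, not a measurement of your PSU): a common 0.47 µF X2 filter capacitor across 230 V / 50 Hz mains passes a few tens of milliamps of almost purely reactive current.

    ```rust
    // Back-of-the-envelope estimate of the reactive current through a
    // mains filter capacitor. All values are illustrative assumptions.
    use std::f64::consts::PI;

    fn main() {
        let c = 0.47e-6; // farads; 0.47 uF is a common X2 value
        let v = 230.0;   // RMS mains volts (use 120.0 for North America)
        let f = 50.0;    // mains frequency in Hz (use 60.0 for North America)

        let xc = 1.0 / (2.0 * PI * f * c); // capacitive reactance, ohms
        let i = v / xc;                    // reactive current, amps RMS
        let s = v * i;                     // apparent power, volt-amps

        println!("Xc = {xc:.0} ohms, I = {:.1} mA, S = {s:.2} VA", i * 1e3);
    }
    ```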

    Your choice of measuring instrument will affect how precisely you can measure this apparent power, which will in turn affect how your instrument reports the power factor. It may also be that the current in question includes some of the standby current for keeping the PSU’s logic ICs in a ready state for when the computer starts up. That would also explain why the power factor isn’t exactly zero.


  • A few months ago, my library gained a copy of Cybersecurity for Small Networks by Seth Enoka, published by No Starch Press in 2022. So I figured I’d have a look and see if it included modern best practices for networks.

    It was alright, in that it’s a decent how-to guide for a novice to set up sensible, minimum network fortifications. But it only includes an overview of how those fortifications work, without going into the additional depth needed to fine-tune or optimize them for specific environments. So if the reader has zero experience with network security, it’s a worthwhile read. But if you’ve already been operating a network with defenses for a while, there’s not much to gain from this particular text.

    Also, the author suggests that IPv6 should be disabled, which is a terrible idea. Modern best practice is not to pretend IPv6 doesn’t exist, but to ensure that firewalls and other defenses are configured to handle IPv6 traffic. There’s a vast difference between “administratively reject IPv6 traffic in/out of the WAN” and “disable IPv6 on all devices and pray no one ever connects an IPv6-enabled device”.

    You might have a look at other books available from No Starch Press, though.



  • It’s also worth noting that switching from ANSI to ISO 216 paper would not be a substantial physical undertaking, as the short side of even-numbered ISO 216 paper (eg A2, A4, A6, etc) is narrower than that of its ANSI equivalent. And for the odd-numbered sizes, I’ve seen Tabloid-size printers in America which generously accommodate A3.

    For comparison, the standard “Letter” paper size (aka ANSI A) is 8.5 inches by 11 inches. (note: I’m sticking with American units because I hope Americans read this). Whereas the similar A4 paper size is 8.3 inches by 11.7 inches. Unless you have the rare, oddball printer which takes paper long-edge first, this means all domestic and small-business printers could start printing A4 today.

    In fact, for businesses with an excess stock of company-labeled #10 envelopes – a common size of envelope, measuring 4.125 inches by 9.5 inches – a sheet of A4 folded into thirds will still (just barely) fit: 11.7 inches divided by three is about 3.9 inches, as the quick check below shows. Although this requires precision folding, that’s no problem for automated letter-mailing systems. Note that the common #9 envelope (3.875 inches by 8.875 inches) used for return envelopes will not fit an A4 sheet folded in thirds. It would be advisable to switch entirely to A-series paper and C-series envelopes at the same time.
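
    Here’s that arithmetic as a throwaway sketch (dimensions in inches, envelope heights being the short sides):

    ```rust
    // Quick check of the folded-A4-in-an-envelope claims above.
    fn main() {
        let a4_long_edge = 11.7_f64;     // inches
        let folded = a4_long_edge / 3.0; // A4 letter-folded into thirds

        let no10 = 4.125; // #10 envelope height, inches
        let no9 = 3.875;  // #9 envelope height, inches

        println!("A4 folded in thirds: {folded:.2} in");
        println!("fits a #10 envelope: {}", folded < no10); // true, barely
        println!("fits a #9 envelope:  {}", folded < no9);  // false
    }
    ```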

    Confusingly, North America has an A-series of envelopes, which bear no relation to the ISO 216 paper series. Fortunately, the overlap is only for the less-common A2, A6, and A7.

    TL;DR: bring reams of A4 to the USA and we can use it. And Tabloid-size printers often accept A3.



  • I will admit that my familiarity with private law outside the USA is almost non-existent, except for what I skimmed from the Wikipedia article for the Inquisitorial system. So I had assumed that private law in European jurisdictions would follow the same judge-intensive approach. Rereading the article more closely, I do see that it really only talks about criminal proceedings.

    But I did some more web searching, and found this – honestly, extremely convenient – article comparing civil litigation procedure in Germany and California (the jurisdiction I’m most familiar with; IANAL). The three most substantial differences I could identify were the judge’s involvement in: serving papers, discovery, and depositions.

    Serving legal notice is the least consequential difference between California and Germany, but it seems that the former allows any qualified adult to chase down the respondent (ie the person being sued) and deliver the notice of a lawsuit – hence the trope of yelling “you have been served” and then throwing a stack of papers at someone’s porch – on behalf of the complainant (the person who filed the lawsuit). Whereas German courts take up the role of notifying the respondent themselves. A small difference, but notable.

    In Germany, the court, and not the plaintiff, is required to serve the complaint on the defendant without undue delay, which is usually immediately after it has been filed with the court.

    Next, discovery and pleadings in Germany appear to differ from the California custom. It seems that German courts require parties to thoroughly plead their positions first, and only afterwards will discovery begin, with the court deciding what topics can be investigated. Whereas California allows parties to make broad assertions that can later be proven or disproven during discovery. This is akin to throwing spaghetti at the wall and seeing what sticks, and a big reason it’s done is that any argument not raised during trial cannot be argued later on appeal.

    I believe that discovery in California and other US States can get rather invasive, as each party’s lawyers are on a fact-finding mission where the truth will out. The general limitation on the pleadings in California is that they still must be germane to the complaint and at least be colorable. This obviously leads to a lot of pre-trial motions, as the targeted party will naturally want to resist a fishing expedition during discovery.

    Lastly, depositions in Germany involve the judge(s) a lot more than they would in California. Here, depositions happen off-site from the court and are conducted by the deposing party, usually videotaped and with all attorneys present, plus a privately hired stenographer, with the deposing attorney asking the questions. Basically, once a deposition order is granted by the judge, the judge isn’t involved unless the deposition is interrupted in a way that would violate that order. And the solution to that is simply to phone the judge and ask for clarification, or for a new order to force the deposition to continue.

    Whereas that article describes the German deposition process as always occurring in court, during trial, and with questions asked by the judge(s). The parties may suggest certain questions by way of constructing arguments which require the judge(s) to probe in a particular direction. But it’s not clear that the lawyers get to dictate the exact questions asked.

    In contrast, depositions in Germany are conducted by the judge or the panel of judges and only during trial.

    I grant you that this is just an examination of the German court proceedings for private law. And perhaps Germany may be an outlier, with other European counterparts adopting civil law but with a more adversarial flavor for private law. But I would say that for Germany, these differences indicate that their private law is more inquisitorial overall, in stark contrast to the California or USA adversarial procedure for private litigation.



  • I am usually not wont to defend the dysfunction presently found in the USA federal (and state-level) judiciary, but I think this comparison to the German courts requires a bit more context. Generally speaking, the USA federal courts and US States adopt the adversarial system, originally following the English practice in both common law and equity. This means the judge takes on a referee role, and a plaintiff and a defendant will make their best, most convincing arguments.

    I should clarify that “common law” in this context refers to the criminal matters (akin to public law), and “equity” refers to person-versus-person disputes (akin to private law), such as contracts.

    For the adversarial system to work, the plaintiff and defendant need to be sufficiently motivated (and nowadays, well-monied) to put on good arguments, or else they’re just wasting the court’s time. Hence, there is a requirement (known as “standing”) where – grossly oversimplifying – the plaintiff must be the person with the most to gain, and the defendant must be the person with the most to lose. They are interested parties who will argue vigorously.

    Of course, that’s legal fiction, because oftentimes a defendant might be unable to afford excellent legal counsel. Or plaintiffs will half-ass or drag out a lawsuit, so that it’s more an annoyance to the opposing party.

    In an adversarial system, it is each party’s responsibility to obtain subject-matter experts and their opinions to present to the court. The judge is just there to listen and evaluate the evidence – exception: jury trials leave the evaluation of evidence to the jury.

    Why is the USA like this? For the USA federal courts, it’s because of our constitution’s Case or Controversy Clause. One of the key driving forces for the drafters of the USA Constitution was to restrict the powers of government officials and bureaucrats, after seeing the abuses committed during the Colonial Era. The Clause is meant to constrain the unelected judiciary – which otherwise has awe-inducing powers such as jailing people, undoing legislation, and assigning wardship or custody of children – from doing anything unless some controversy actually needs addressing.

    With all that history in mind, if the judiciary kept their own in-house subject-matter experts, that could be viewed as more unelected officials trying to tip the scale in matters of science, medicine, computer science, or any other field. Suddenly, landing a position as the judiciary’s go-to expert could have far-reaching impacts, despite no one in the federal judiciary being elected.

    In a sense, because of the fear of officials potentially running amok, the USA essentially “privatizes” subject-matter experts, to be paid by the plaintiff or defendant rather than employed by the judiciary. The adversarial system is thus an intentional value judgement, rather than a “whoopsie” type of thing that we walked into.

    Small note: the federal executive (the US President and all the agencies) does keep subject-matter experts, for the limited purpose of implementing regulations (aka secondary legislation). But at least they all report, indirectly, to the US President, who is term-limited and only stays 4 years at a time.

    This system isn’t perfect, but it’s also not totally insane.




  • This is an interesting application of so-called AI, where the result is actually desirable and isn’t some sort of frivolity or grift. The memory-safety guarantees offered by native Rust code would be a very welcome improvement over C code that guarantees very little. So a translation of legacy code into Rust would either attain memory safety, or wouldn’t compile. If AI somehow (very unlikely) manages to produce valid Rust that ends up being memory-unsafe, then it’s still an advancement as the compiler folks would have a new scenario to solve for.
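
    As a hand-written illustration (my own, not output from DARPA’s tooling) of why a bug can’t silently survive the trip: the classic C mistake of returning a pointer to a stack buffer has no direct Rust equivalent that compiles, so the translation is forced toward an owned – and safe – shape.

    ```rust
    // In C, returning a pointer to a stack buffer compiles and dangles:
    //
    //   char *greeting(void) {
    //       char buf[16] = "hello";
    //       return buf;   // dangling pointer; at best a warning
    //   }
    //
    // The literal Rust translation is rejected outright:
    //
    //   fn greeting<'a>() -> &'a str {
    //       let buf = String::from("hello");
    //       &buf          // error[E0515]: cannot return reference to local
    //   }
    //
    // So a translation that compiles has been pushed toward ownership:
    fn greeting() -> String {
        String::from("hello") // the caller owns the data; nothing dangles
    }

    fn main() {
        println!("{}", greeting());
    }
    ```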

    Lots of current uses of AI have focused on what the output could enable, but here, I think it’s worth appreciating that in this application, we don’t need the AI to always complete every translation. After all, some C code will be so hardware-specific that it becomes unwieldy to rewrite in Rust, without also doing a larger refactor. DARPA readily admits that their goal is simply to improve the translation accuracy, rather than achieve perfection. Ideally, this means the result of their research is an AI which knows its own limits and just declines to proceed.

    Assuming that the resulting Rust is: 1) native code, and 2) idiomatic, so humans can still understand and maintain it, this is a project worth pursuing. Meanwhile, I have no doubt grifters will also try to hitch their trailers to DARPA’s wagon, with insane suggestions that proprietary AI can somehow replace whole teams of Rust engineers, or some such nonsense.

    Edit: is my disdain for current commercial applications of AI too obvious? Is my desire for less commercialization and more research-based LLM development too subtle? :)


  • Oh wow, my comment made it here to c/bestoflemmy. I’m both flattered and also donning my flak helmet lol

    I do have two things I want to mention: 1) please don’t form an opinion (good or bad) on the American health care situation solely from a comment from some rando on the Internet. If you’re an American affected by the problems of the health care situation, write to your state and federal representatives, and remind them that you will vote accordingly in November, even if you’re in a state that is ardently one political color or another.

    And 2) I wouldn’t necessarily say I wrote an “objective” summary, as a fair number of the links and examples I used reference the ills caused by automobile culture, which has set up massive-yet-impressive institutions – like a well-oiled auto insurance system – precisely to keep perpetuating harms upon urban environments, pedestrian and cyclist safety, municipal budgets, and energy security. All this in pursuit of an outmoded 1960s utopian vision where private automobiles and suburban/exurban single-family homes provide quality of life for the masses. History has shown that this vision failed, either because of its own success (if it ever had any) or because it threw away natural human settlement patterns proven over centuries.

    If you’re an American and are starting to see why maybe automobiles and single-family homes shouldn’t be placed on an undeserved pedestal, have a look at Strong Towns, the people seeking to right-size the automobile’s influence in small and middle America. Not by banishing cars, but by building the conditions for a healthy set of realistic alternatives, to strengthen municipal finances, grow deeper connections amongst the citizenry, and avoid the fate of ghost towns.

    They also have a YT channel, and they’re organized into local chapters, with maybe one near you.



  • A commenter already provided a fairly comprehensive description of low-level computer security positions. But I also want to note that a firm foundation in low-level implementations is also useful for designing embedded software and firmware.

    As in, writing or deploying against custom BIOS/UEFI images, or for real-time devices where timing is of the essence. Most anyone dealing with an RTOS or kernel drivers or protocol buses will necessarily require an understanding of both the hardware architecture plus the programming language available to them. And if that appeals to you, you might consider looking into embedded software development.

    The field spans anything from writing the control loop for a washing machine (a toy sketch of which follows), to managing data exchange between multiple video co-processors onboard a flying drone to identify and avoid collisions, to negotiating the protocol to set up a 400 Gbps optical transceiver to shoot a laser down 40 km of fibre.
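
    For a taste of that first example, here’s a toy sketch of a control loop as a state machine ticked at a fixed period – simulated on a desktop, where real firmware would swap the sleep for a hardware timer and the println for actuator I/O.

    ```rust
    // Toy washing-machine cycle: a state machine stepped once per tick.
    use std::{thread, time::Duration};

    #[derive(Clone, Copy, Debug)]
    enum Cycle {
        Fill { seconds_left: u32 },
        Agitate { seconds_left: u32 },
        Drain { seconds_left: u32 },
        Done,
    }

    // Each tick either counts down the current phase or advances to the next.
    fn step(state: Cycle) -> Cycle {
        match state {
            Cycle::Fill { seconds_left: 0 } => Cycle::Agitate { seconds_left: 3 },
            Cycle::Fill { seconds_left } => Cycle::Fill { seconds_left: seconds_left - 1 },
            Cycle::Agitate { seconds_left: 0 } => Cycle::Drain { seconds_left: 2 },
            Cycle::Agitate { seconds_left } => Cycle::Agitate { seconds_left: seconds_left - 1 },
            Cycle::Drain { seconds_left: 0 } => Cycle::Done,
            Cycle::Drain { seconds_left } => Cycle::Drain { seconds_left: seconds_left - 1 },
            Cycle::Done => Cycle::Done,
        }
    }

    fn main() {
        let mut state = Cycle::Fill { seconds_left: 2 };
        loop {
            println!("{state:?}");
            if matches!(state, Cycle::Done) {
                break;
            }
            state = step(state);
            thread::sleep(Duration::from_millis(100)); // stand-in for a real-time tick
        }
    }
    ```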

    If something “thinks” but doesn’t have a monitor and keyboard, it’s likely to have one or more processors running embedded software. Look around the room you’re in and see what this field has enabled.



    1. The return value of time.time() is actually a floating-point number … It’s also not guaranteed to be monotonically increasing, which is a whole other thing that can trip people up, but that will have to be a separate blog post.

    Oh god, I didn’t realize that about Python and the POSIX spec. Cautiously, I’m going to guess that GPS seconds are one of the few reliable ways to uniformly convey a monotonically increasing time reference.
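
    Python isn’t the only language to grapple with this; Rust’s standard library, for one, splits the two concerns into separate types, which makes the footgun harder to reach.

    ```rust
    // SystemTime is wall-clock time and may jump backwards (NTP corrections,
    // manual changes); Instant is documented as monotonically nondecreasing.
    use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

    fn main() {
        // Fine for timestamps, wrong for measuring elapsed time:
        let wall = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        println!("seconds since epoch: {}", wall.as_secs());

        // The right tool for durations: cannot go backwards.
        let start = Instant::now();
        std::thread::sleep(Duration::from_millis(50));
        println!("elapsed: {:?}", start.elapsed());
    }
    ```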

    Python has long since deprecated the datetime.datetime.utcnow() function, because it produces a naive object that is ostensibly in UTC.

    Ok, this is just a plainly bad decision then and now by the datetime library people. What possible reason could have existed to produce a TZ-naive object from a library call that only returns a reference to UTC?
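
    For contrast, here’s how a type system can make that mistake unrepresentable – a sketch assuming the Rust chrono crate as a dependency, where naive and zone-aware datetimes are distinct types.

    ```rust
    // With chrono, "UTC but secretly naive" can't slip through silently:
    // DateTime<Utc> and NaiveDateTime don't interchange without an
    // explicit conversion.
    use chrono::{DateTime, NaiveDateTime, Utc};

    fn log_event(at: DateTime<Utc>) {
        // Accepts only zone-aware UTC values; a NaiveDateTime won't compile.
        println!("event at {at}");
    }

    fn main() {
        let aware: DateTime<Utc> = Utc::now();
        let naive: NaiveDateTime = aware.naive_utc();

        log_event(aware);
        // log_event(naive); // error: expected DateTime<Utc>, found NaiveDateTime
        println!("naive copy: {naive}");
    }
    ```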


  • If not code or documentation contributions, then well-written bug reports. Seriously, the quality of bug reports sometimes leaves a lot to be desired. And I don’t necessarily mean a full backtrace attached – and please, if you ever send a backtrace, copy-and-paste the text, never a screenshot – but just details like: system specs, OS and version, step-by-step instructions to reproduce that a non-coder could also follow, plus what you expected to happen versus what actually happened.

    This stuff (usually) comes naturally to programmers and engineers, but users don’t necessarily see things this way. I sometimes think bug reports need to adopt a “so tell me what happened?” approach, where reporters are encouraged to first describe, free-form, what they think of the software, and then to provide the specific details that developers need. That would at least collect all the relevant details, plus extra details that no developer thought to ask for.

    Even just having folks who help gather and distill details from user reports on a forum eases a burden off of developers, and that effort should be welcomed by any competently organized project. Many projects already have a template for reports, although it often gets mistaken for boilerplate. Helping reporters recognize that they need to fill in all the details is a useful activity that isn’t code or docs.


  • This isn’t quite an ELI5, but ARRL has a 2004 article on FM fundamentals; it’s five pages intended for a beginner ham radio operator, but applicable to all FM applications nevertheless. It also discusses four different ways to receive FM.

    But to answer your question directly:

    The frequency of the FM signal at any instant in time is called the instantaneous frequency. The variations back and forth around the carrier frequency are known as deviation

    FM can also be detected by a PLL. As shown in Figure 6, the PLL’s natural function of tracking a changing input frequency can be employed to generate a voltage that varies as the input frequency change

    In a nutshell, FM only ever has one instantaneous frequency at a time, which dances around the nominal center frequency (aka carrier). So the receiver has to detect the instantaneous frequency, relative to the carrier.

    To actually recover the original signal, the receiver must also account for the modulation index used by the transmitter, which relates the peak frequency deviation to the frequency of the modulating signal. The modulation index is usually standardized for the application, such as FM broadcasting, amateur radio FM, walkie-talkie FM, etc.

    Because a larger modulation index means the same input signal will result in wider deviations, more RF bandwidth is used, spreading the signal wider and generally improving noise immunity.
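
    To put numbers on that relationship, the usual approximation is Carson’s rule (my addition; it isn’t named in the excerpt above): occupied bandwidth ≈ 2 × (peak deviation + highest modulating frequency).

    ```rust
    // Carson's rule with standard broadcast-FM figures.
    fn main() {
        let deviation = 75_000.0_f64; // peak deviation, Hz (broadcast FM)
        let audio_max = 15_000.0_f64; // highest modulating frequency, Hz

        // Modulation index: how far the carrier swings, relative to how
        // fast the modulating signal swings it.
        let index = deviation / audio_max;

        // Carson's rule: approximate occupied RF bandwidth.
        let bandwidth = 2.0 * (deviation + audio_max);

        println!("modulation index ~ {index}");                   // 5
        println!("occupied bandwidth ~ {} kHz", bandwidth / 1e3); // 180
    }
    ```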