• 6 Posts
  • 193 Comments
Joined 3 years ago
Cake day: July 2nd, 2023


  • Was this question also posted a few weeks ago?

    In any case, what exactly are the requirements here? You mentioned encrypted journaling app, but also gave an example of burning a handwritten sheet. Do you need to recover the text after it is written, or can it simply be discarded into the void once it’s been fully written out?

    If encryption is to protect the document while it’s still a draft, then obviously that won’t work for handwritten pages.


  • 128 MB (1024 Mb) of RAM, 32 MB (256 Mb) of Flash

    FYI, RAM and flash sold to consumers are always labeled in bytes (big B); it’s only RAM manufacturers (and EEPROMs) that use the bit (small b) designation for storage volume, I think. If you’re using both to avoid any confusion, I would suggest the following instead: 128 MByte. No one will ever confuse that with megabits, and it’s the same style used for data transfer, which does still use bits: Mbit/sec.
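    To make the byte/bit arithmetic concrete, here’s a trivial sketch (Python, purely for illustration; the function name is mine):

```python
# 1 byte = 8 bits, so MByte and Mbit figures differ by a factor of 8.
def mbyte_to_mbit(mbytes: int) -> int:
    """Convert megabytes (MByte) to megabits (Mbit)."""
    return mbytes * 8

print(mbyte_to_mbit(128))  # 128 MByte of RAM  = 1024 Mbit
print(mbyte_to_mbit(32))   # 32 MByte of flash = 256 Mbit
```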

    I wish you the best of luck in your search.


  • The only way I’m able to reconcile the author’s title and article to any applicability to software engineers (ostensibly the primary audience in this community) is to assume that the author wants software engineers to be involved further “upstream” of the software product development process.

    Code review answers: “Should this be part of my product?” That’s a judgment call, and it’s a fundamentally different question than “does it work.”

    No, but yes. Against the assertion from the title, bug-finding is very much a potential answer to “does this bug belong in the codebase?”. After all, some bugs aren’t bugs; they’re features! Snide remarks aside, I’m not sure that a code review is the time to be making broader choices about product architecture or market viability. Those should already have been done-and-settled a good while ago.

    Do software engineers make zero judgement calls? Quite the opposite! Engineers are tasked with pulling out the right tool from the toolbox to achieve the given objective. Exactly how and which tools are used is precisely a judgement call: the benefit of experience and wisdom will lean towards certain tools and away from others. But a different group of engineers with different experiences may choose differently. Such judgement calls are made in the here-and-now, and I’m not exactly keen on going back in time to berate engineers for not using tech that didn’t yet exist for them.

    If the author is asking for engineer involvement earlier, well before a code review, then that’s admirable and does in-fact happen. That’s what software architects spend their time doing, in constant (and sometimes acrimonious) negotiation with non-engineering staff such as the marketing/sales team.

    That said, some architectural problems only become apparent when the rubber meets the road, when the broader team is engaged to implement the design. And if a problem is found during their draft work or during code review, that’s precisely the right time to have found that issue, given the process described above where the architects settle on the design in advance.

    If that outcome is not desirable, as the author indicates, then it’s the process that must change. And I agree in that regard. But does that necessarily change the objective of what “code review” means? I don’t think so, because the process change would be adding architectural review ahead of implementation.

    If we’re splitting hairs about whether a broad “review” procedure does or doesn’t include “review of code”, then that’s a terminological spat. But ultimately, any product can only be as good as its process allows. See aviation for examples of excellent process that makes flying as safe as it is.

    Making the process better is obviously a positive, but it’s counterbalanced by the cost to do so, the overhead, and whether it’s worthwhile for the given product. Again, see aviation for where procedural hurdles do in-fact prevent certain experimental innovations from ever existing, but also some fatal scenarios that fortunately no longer happen.

    In closing, I’m not entirely sure what the author wants to change. A rebrand for “code reviews”? Just doing something different so that it feels like we’re “meeting the crisis” that is AI? That’s not exactly what I would do to address the conundrums presented by the rapid, near-uncontrolled adoption of LLMs.


  • It’s hard for me to agree with this premise. Specifically, the notion that companies will abdicate having their own space, in the form of a mobile app and UI. The author seems to suggest that the future will be API-driven, as more people want to “do things” rather than “go somewhere”. That is to say, if I may further summarize the author’s claims, the future of mobile computing is less about creating a digital storefront to invite potential customers into, and more about being as transactional as possible.

    And while it is exceedingly enticing for me to think that one day, we could have a way to instantly cancel a Netflix or Comcast subscription, without the need to interact with any service agent, skipping over the upsell or retention attempts, and getting straight to the point, that just seems too far-fetched and anti-capitalist to actually happen in the near future.

    Why, at this particular moment in history, when corporations seek to own ever more capital, would they abandon their digital storefronts? At the moment, they have sole control over that space, and the present abandonment of anti-trust enforcement means they can force people into their storefronts against their will. In an environment where arbitration agreements are forced upon consumers, why would large companies want mobile apps that don’t hold their customers hostage? Having an open API to do the same thing as their app is tantamount to freeing the consumer.

    And that’s precisely why I can’t see why they would do that. I don’t like it, but that’s the present reality. But even more to the point, abandoning apps would be bending the knee to AI companies like Google or OpenAI, since it establishes the AI agents as the kingmakers. What sort of a Game of Thrones is this?

    For each app that exists now, their corporate owner is a king in their own kingdom. In this supposed new world, those kings are now mere nobles that pay tithe to their new emperor, from the treasuries of their kingdoms. An entertaining fiction, yes. But as a non-fiction? I might pick a different book.


  • but what if for example an issue is internal to a WIP feature?

    I forgot to answer this. The question is always: will this materially impact the deliverable? Will the customer be unhappy if they hit this bug?

    If the WIP feature isn’t declared to be fully working yet, then sure, let it onto the branch and create a ticket to fix this particular bug. But a closed loop requires making this ticket, as a reminder to follow up later, when the feature is almost complete.

    If instead the bug would be catastrophic but is exceptionally rare, then that’s a tough call. But that’s precisely why the call should involve more people, not less. A single person making a tough call is always a risky endeavor. Better to get more people’s input and hopefully make a collective choice. Also, humans too often play the blame-game if there isn’t a joint, transparent decision making process.

    But where would all these people convene to make a collective choice? How about during code review?


  • people that work on the same things can need each other['s] changes to move on

    If this is such a regular occurrence, then the overarching design of the code is either: 1) not amenable to parallelized team coding at all, or 2) the design has not properly divided the complexity into chunks that can be worked on independently.

    I find that the latter is more common than the former. That is to say, there almost always exists a better design philosophy that would have allowed more developers to work without stepping on each other’s toes. Consider a small group designing an operating system. Yes, there have to be some very deep discussions about the overall design objectives at the beginning, but once the project is rolling, the people building the filesystem won’t get in the way of the UI people. And even the filesystem people can divide themselves into logical units, with some working on the actual storage of bits while others work on implementing system calls.

    And even when a design has no choice but to have two people working in lock-step – quite a rarity – there are ways to deal with this. Pair programming is the most obvious, since it avoids the problem of having to swap changes with each other.

    I’ve seen pair programming done well, but it was always out of choice (such as to train interns) rather than being a necessary mandate from the design. Generally, I would reject designs that cannot be logically split into person-sized quantities of work. After all, software engineering is ultimately going to be performed using humans; the AIs and LLMs can figure out their own procedures on their own, if they’re as good as the pundits say (I’m doubtful).

    TL;DR: a design that requires lock-step development with other engineers probably is a bad design


  • Ah, I see that OP added more details while I was still writing mine. Specifically, the detail about having only a group of 5 fairly-experienced engineers.

    In that case, the question still has to focus on what is an acceptable risk and how risk decisions are made. After all, that’s the other half of code reviews: first is to identify something that doesn’t work, and second is to assess if it’s impactful or worth fixing.

    As I said before, different projects have different definitions of acceptability. A startup is more amenable to shipping some rather ugly code, if its success criterion is simply to have a working proof of concept for VCs to gawk at. But a military contractor that is financially on the hook for broken code would need to be risk-averse. Such a contractor might impose a two-person rule (ie all code must have been looked at by at least two pairs of eyeballs, the first being the author and the second being someone competent to review it).

    In your scenario, you need to identify: 1) what your success criteria is, 2) what sort of bugs could threaten your success criteria, 3) which person or persons can make the determination that a bug falls into that must-fix category.

    On that note, I’ve worked in organizations that extended the two-person rule into a two-person sign-off: if during review, both persons find a bug but also agree that the bug won’t impact the success criteria, they can sign off on it and it’ll go in.

    Separately, I’ve been in an organization that allows anyone to voice a negative opinion during a code review, and that will block the code from merging until either that person is suitably convinced that their objections are ameliorated, or until a manager’s manager steps in and makes the risk decision themselves.

    And there’s probably all levels in between those two. Maybe somewhere has a 3-person sign-off rule. Or there’s a place that only allows people with 2+ years of experience to block code from merging. But that’s the rub: the process should match how much risk is acceptable for the project.
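    None of these policies have standard names, but to sketch the idea (in Python, with entirely made-up parameter names), a merge gate parameterized by risk tolerance might look like:

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    years_experience: float
    approved: bool  # False means this reviewer is blocking the merge

def can_merge(reviews: list[Review],
              required_signoffs: int = 2,
              min_years_to_block: float = 0.0) -> bool:
    """Hypothetical merge gate: enough sign-offs, and no qualified blocker.

    required_signoffs=2 models the two-person rule; min_years_to_block
    models a shop where only experienced engineers can block a merge.
    """
    signoffs = sum(1 for r in reviews if r.approved)
    blockers = [r for r in reviews
                if not r.approved and r.years_experience >= min_years_to_block]
    return signoffs >= required_signoffs and not blockers

# Two-person rule: author plus one competent reviewer, both approving.
reviews = [Review("author", 5, True), Review("peer", 3, True)]
print(can_merge(reviews))  # True
```

    The point isn’t the code itself, but that the knobs (how many sign-offs, who may block) are exactly where a project encodes its acceptable risk.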

    Boeing, the maker of the 737 MAX jetliner that had a faulty MCAS behavior, probably should use a more conservative process than, say, a tech startup that makes IoT devices. But even a tech startup could be on the hook for millions if its devices mishandle data in contravention of data protection laws like the EU’s GDPR or California’s CCPA. So sometimes certain parts of a codebase will be compartmentalized and subject to higher scrutiny, because of bugs that are big enough to end the organization.


  • With regards to the given list, I think #2 would be the most forgiving, in the sense that #1 suggests that code reviews are viewed solely negatively and are punishable if undertaken. But that minor quibble aside, I have some questions about what each of these would even look like.

    For example, #3 seems to be that code can be committed and pushed, and then review sought after-the-fact, but any results of the code review would not be binding for the original commit author to fix, nor apparently tracked for being fixed later. If that’s a correct description, I would describe that as the procedurally worst of the bunch, since it expends the effort to do reviews but then has such an open-loop process that the results of the review can be swept under the rug.

    On the note of procedure, it is always preferable to have closed loops, where defects are: a) found, b) described, c) triaged, d) assigned or deferred, e) eventually fixed, and f) verified and closed out. At least with your examples #1 and #2, they don’t even bother to undertake any of the steps for a closed loop. But #3 is the worst because it stops right after the first step.
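    That closed loop can be sketched as a tiny state machine (Python; the state names are just the steps listed above, nothing standard):

```python
from enum import Enum, auto

class DefectState(Enum):
    FOUND = auto()
    DESCRIBED = auto()
    TRIAGED = auto()
    ASSIGNED = auto()   # or deferred
    FIXED = auto()
    CLOSED = auto()     # verified and closed out

# Each defect may only advance one step at a time; a process that stops
# right after FOUND (example #3 above) never closes the loop.
NEXT = {
    DefectState.FOUND: DefectState.DESCRIBED,
    DefectState.DESCRIBED: DefectState.TRIAGED,
    DefectState.TRIAGED: DefectState.ASSIGNED,
    DefectState.ASSIGNED: DefectState.FIXED,
    DefectState.FIXED: DefectState.CLOSED,
}

def advance(state: DefectState) -> DefectState:
    if state is DefectState.CLOSED:
        raise ValueError("defect already closed out")
    return NEXT[state]
```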

    Your example #4 is a bit better, since in order to keep to a specified timeframe, the issues found during review have to at least be recorded in some fashion, which can satisfy the closed-loop steps, however abbreviated. If the project timeline doesn’t allow for getting all the code review issues fixed, then that’s a Won’t Fix and can be perfectly reasonable. The issue has been described and a risk decision (hopefully) was made to ship with the issue anyway. All things in life are a balance of expected risk to expected benefit. Ideally, only the trivial issues would be marked as Won’t Fix.

    But still, this means that #4 will eventually accumulate coding debt, probably quite quickly. And as with all debt, interest will accrue until it is either paid down or the organization succumbs to code insolvency, paralyzed because the dust under the rug is so large that it jams the door shut. I hope you’ll allow me to use two analogies in tandem.

    Finally, there is #5 which is the only one that prevents merging code that still has known issues. No doubt, code that is merged can still have further bugs that weren’t immediately obvious during code review. But the benefit is that #5 maintains a standard of functionality on the main branch. Whereas #4 would wilfully allow the main branch to deteriorate, in the name of expediency.

    No large organization can permit any single commit to halt all forward progress on the project, so it becomes imperative to keep the main branch healthy. At a minimum, that means the branch can be built. A bit higher would be to check for specific functionality as part of automated checks that run alongside the code review. Again, #4 would allow breaking changes onto the branch due to expediency, whereas #5 will block breaking changes until either addressed, abandoned, or a risk decision is made and communicated to everyone working on the project to merge the code anyway.

    TL;DR: software engineering processes seek to keep as many people working and out of each other’s way as possible, but it necessarily requires following steps that might seem like red-tape and TPS reports


  • I’ve even seen people vibe code ethernet drivers for freeBSD.

    Please make sure to read what considerations that developer had before undertaking that effort using an LLM: https://github.com/Aquantia/aqtion-freebsd/issues/32#issuecomment-3997341698

    Specifically, they (the human) were kept in the loop for the entire process, which included referencing the working Linux driver to do a clean-room reimplementation. This already means they have some experience with software engineering to spot any issues in the specifications that the LLM might generate.

    Also, Aquantia (before the merger) had already published a FreeBSD driver, but it hadn’t been updated. So this port wouldn’t have to start from zero; it was a matter of adding support for the new NICs that have been released since, which Aquantia never got around to.

    This is very much not an example of an Ethernet NIC driver being “vibe coded” from scratch, but a seasoned engineer porting Linux support over to FreeBSD, a kernel that already has a lot of support for easily adding new drivers in a fairly safe manner, and then undertaking a test plan to make sure the changes wouldn’t be abject slop. That’s someone using their tools with reasonable care. In the industry, this is called engineering.

    Admiration for what people can do with the right tools must always be put into the right context. Even with the finest tools, it’s likely that neither you nor I could build a cathedral.


  • I don’t think they can be force applied to everyone who contributes

    This is certainly an opinion, but here is a list of major projects that have a code of conduct: https://opensourceconduct.com/projects . How well those projects enforce their CoCs, idk. But they are applied, otherwise they wouldn’t bother writing out a CoC.

    it’s not fair to hold people to standards they didn’t personally agree to

    Software development is not the only place which holds people to standards. The realm of criminal and civil law, education, and business all hold people to standards, whether those people like it or not. In fact, it’s hard to think of any realm that allows opt-out for standards, barring the incel-ridden corners of the web.

    this guy might have just decided to make a project

    Starting a project – as in, inviting other people to join in – is distinct from just publishing a public Git repo. I too can post my random pet projects to Codeberg, but that does not mean I will necessarily accept PRs or bug reports, let alone respond to them. But actually announcing something: that’s where the project begins. And to do so recklessly does reflect poorly upon the maintainer.


  • I’ve not heard of Booklore or the critiques against it until seeing this post, but I don’t think this take is correct, in parts. And I think much of the confusion has to do with what “open source” means to you, versus that term as a formal definition (ie FOSS), versus the culture that surrounds it. In so many ways, it mirrors the term “free speech” and Popehat (Ken White) has written about how to faithfully separate the different meanings of that term.

    Mirroring the same terms from that post, and in the identical spirit of pedantry in the pursuit of tractable discussion, I posit that there are 1) open source rights, 2) open source values, and 3) community decency. The first concerns those legal rights conferred from an open-source (eg ACSL) or Free And Open Source (FOSS, eg MIT or GPL) license. The details of the license and the conferred rights are the proper domain of lawyers, but the choice of which license to release with is the province of contributing developers.

    The second concerns “norms” that projects adhere to, such as not contributing non-owned code (eg written on employer time and without authorization to release) or when projects self-organize a process for making community-driven changes but with a supervising BDFL (eg Python and its PEPs). These are not easy or practical to enforce, but represent a good-faith action that keeps the community or project together. These are almost always a balancing-act of competing interests, but in practice work – until they don’t.

    Finally, the third is about how the user-base and contributor-base respect (or not) the project and its contributors. Should contributors be considered the end-all-be-all arbiters for the direction of the project? How much weight should a developer code-of-conduct carry? Can one developer be jettisoned to keep nine other developers onboard? This is more about social interactions than about software (ie “political”) but it cannot be fully divorced from any software made by humans. So long as humans are writing software, there will always be questions about how it is done.

    So laying that foundation, I address your points.

    Open source should mean that anyone can write anything for fun or seriously, and we all have the choice to use it or not. It doesn’t matter if it’s silly or useful or nonsense or horrible, open source means open. Instead we shut down/closed out someone who was contributing.

    This definition of open-source is mixing up open-source rights (“we all have the choice to use it or not” and “anyone can write anything”) with open-source values (“for fun or seriously” and “doesn’t matter if it’s silly or useful”). The statement “open source means open” does not actually convey anything. The final sentence is an argument in the name of community decency.

    To be abundantly clear, I agree that harassing someone to the point that they get up and quit, that’s a bad thing. People should not do that. But a candid discussion recognizes that there has been zero impact to open source rights, since the very possibility that “Some contributors are working together on an unnamed replacement project” means that the project can be restarted. More clearly, open-source rights confer an irrevocable license. Even if the original author exits via stage-left, any one of us can pick up the mic and carry on. That is an open-source right, and also an open-source value: people can fork whenever they want.

    How they were contributing is irrelevant

    This is in the realm of community decency because other people would disagree. Plagiarism would be something that violates both the values/norms of open-source and also community decency. AI/LLMs can and do plagiarize. LLMs also produce slop (ie nonfunctioning code), and that’s also verboten in most projects by norm (PRs would be rejected) or by community decency (PRs would be laughed out).

    We should all feel ashamed that an open source project was shuttered because of how our community acted.

    I would draw the focus much more narrowly: “We should all feel ashamed that an open source project was shuttered because of how our community acted”. Open-source rights and open-source values will persevere beyond us all, but how a community in the here-and-now governs itself is of immediate concern. There are hard questions, just like all community decency questions, but apart from Booklore happening to be open-source, this is not specific at all to FOSS projects.

    To that end, I close with the following: build the communities you want to see. No amount of people-pleasing will unify all, so do what you can to bring together a coalition of like-minded people. Find allies that will bat for you, and that you would bat for. Reject those who will not extend to you the same courtesy. Software devs find for themselves new communities all the time through that wonderful Internet thing, but they are not without agency to change the course of history, simply by carefully choosing whom they will invest in a community with. Never apologize for having high standards. Go forth and find your place in this world.


  • Just like fast fashion replaced tailors with factory workers

    I’m not sure this is right. If I wanted cheap clothes in the 1980s, I would go to a thrift store, not a tailor. If I wanted to hem up some pants I bought, I would go to a tailor. In the 2020s, the former might have changed to online fast fashion behemoths, but there’s still no replacement for a tailor to do up some pants.

    If I generously assume “tailors” is shorthand for a fashion designer that can also sew their own designs from fabric, then it’s still wrong because fast fashion has never been about enabling designers that have no hand-sewing skills. Instead, it’s about churning out mind-boggling amounts of product, irrespective of demand. Post-scarcity capitalism theory says that any product will sell at the right price, and the price for fast fashion is rock bottom.

    fast software will replace programmers with AI operators

    If “fast software” is going to mean shoddy software that’s churned out just for the sake of it, then this is the only apt comparison to fast fashion. Even without AI, I don’t think most modern software engineering or programming is comparable to tailoring or even fashion design.

    When the opening comparison is so deeply flawed, I’m not exactly keen on reading the rest of the article.



  • For cleaning, I used a spray can of Easy Off oven cleaner, specifically one with a yellow cap that specifically says that it contains lye (aka sodium hydroxide). I preheated the pan to 200 F (~90 C) in the oven for 20 minutes, then withdrew it and immediately sprayed it with Easy Off. The pan then went into a plastic garbage bag, the bag set inside of a 5 gallon bucket for support, and the bag wrapped shut. This keeps the vapors circulating within the bag, exposing more deposits to the chemical effects.

    After a day, I removed the pan from the bag and washed it down with generous water, to dilute the sodium hydroxide. The pan was then scrubbed down with a nylon brush to physically remove crusty material. To dry off the water, I put the pan into the oven again at 200 F for 20 minutes. The bag should also be washed with generous water before reusing it for standard trash service. Wear gloves.

    Once I got the pan suitably stripped, I followed my original process, described here: https://sh.itjust.works/comment/15774888




  • In a nutshell, the network effect. At an individual level, if someone wants to leave GitHub, they absolutely can. But unless they’re a repo owner or a BDFL, the project(s) they were working on would still be on GitHub. And that means they can’t access the GitHub PR process for development, or open tickets for new issues, or any other number of interactions, except for maybe pulling code from the repo.

    On the flip side, at a project level, if the project owners agree that it’s time to leave GitHub, they absolutely can. And while they could convince the primary developers to also leave with them, the occasional contributors might still be left behind on GitHub. Moving away from GitHub could potentially cut the number of contributors down by a lot. And what’s guaranteed is that the project will have to retool for the new space they move to. And if it’s self-hosted, that’s even more work to do, all of which is kinda a distraction from whatever the project was meant to do.

    The network effect is the result of the sum being more useful than its parts. When the telephone was invented, a single telephone on its own was entirely useless, because nobody else had one to call. But with ten telephones, one person has the potential to call any of 9 other people. With 10,000 telephones, that’s over 9000 people they could call, or those people calling them. At a million phones, the telephone is well entrenched in common usage. Even as more and more people despise making phone calls, the telephone is still around, having changed form since the 1980s into the modern smartphone.
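    As a back-of-envelope illustration (this is the classic pairwise-links figure, n·(n−1)/2, not anything from the telephone history itself), the number of possible conversations grows much faster than the number of phones:

```python
def potential_calls(n_phones: int) -> int:
    """Number of distinct pairs of phones that could call each other."""
    return n_phones * (n_phones - 1) // 2

print(potential_calls(1))       # 0: one phone is entirely useless
print(potential_calls(10))      # 45
print(potential_calls(10_000))  # 49995000
```

    That quadratic growth is also why the network is so stable: losing a few thousand phones barely dents the total.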

    Why? Because networks are also stable: if a few thousand people give up their smartphones per year, the utility of the telephone is not substantially changed for the grand majority of telephone users. The threshold to break the network effect varies, but I hazard a guess that if 1/3 of telephone users gave up their numbers, then the telephone’s demise would be underway. Especially in the face of modern replacements.

    I would regard GitHub as having a network effect, in the same way that Twitter should have collapsed but hasn’t. Too many local governments are invested in it as their sole social media presence, and in doing so, also force their citizens to subscribe to Twitter. GitHub is not a monopoly in the sense that anti-trust laws would apply. But they are a monopoly in that they own the platform, and thus own the network.

    But there’s an upside: communities of people are also networks. Depending on how cohesive the contributors to a particular GitHub repo are, enough people can make the move away and would sway the unwilling to also move with them. This is no different than convincing family members to move to Signal, for example. Yes, it’s hard. But communities look out for their common interests. And if AI slop is affecting a community, then even though they didn’t want to deal with it, they have to make a choice.

    Be the community member you want to see. Advocate for change in your network of people, however big or small. Without critical mass, a community will only splinter when acting unilaterally.


  • Having spent much of my software engineering career training and mentoring interns, new-hires, and transfers from other departments, and having toiled with some of their truly inexplicable questions that reveal shaky technical foundations, I can understand why so-called AI would be appealing: inexhaustible, while commanding the full battery of information stores that I could throw at it.

    And yet, the reason I don’t use AI is precisely because those very interns, new-hires, and transfers invariably become first-class engineers that I have no problem referring to as my equals. It is my observation that I’ve become better at training these folks up with every passing year, and that means that if I were to instead spend my time using AI, I would lose out on even more talented soon-to-be colleagues.

    I have only so much time of my mortal coil remaining, and if the dichotomy is between utilizing inordinate energy, memory, and compute for AI, or sharing my knowledge and skills to even just 2 people per year for the rest of my career, I’ll happily choose the latter. In both circumstances, I will never own the product of their labor, and I don’t really care to. What matters to me is that value is being created, and I know there is value in bringing up new software engineers into this field. Whereas the value of AI pales in comparison, if it’s even a positive value at all.

    If nothing else, the advent of AI has caused me to redouble my efforts, to level-up more engineers to the best of my ability. It is a human legacy that I can contribute to, and I intend to.


  • Fair, though I personally don’t let my ISP indirectly dictate what I do with my LAN. If I didn’t already have a v6-enabled WAN, I would still manage my LAN using IPv6 private range addresses. There are too many benefits to me, like having VMs and containers be first-class citizens on my LAN, rather than sitting behind yet another layer of NAT. That lets me avoid port forwarding at the border of my home Kubernetes cluster (or formerly, my Docker Swarm), and it means my DNS names correctly resolve to a valid IP address that’s usable anywhere on my network (because no NAT when inside the LAN).

    I will admit that NAT64 is kinda a drag to access v4-only resources like GitHub, but that’s only necessary because they’ve not lit up support for v6 (despite other parts of their site supporting v6).

    This is my idea of being future-ready: when the future comes, I’m already there.


  • The approach isn’t invalid, but seeing as you already have the framework set up to deny all and log for IPv4, the same could be done with IPv6.

    That is to say, your router advertises an IPv6 gateway to the global internet, but you then reject it because your VPN doesn’t support v6 (sadly). I specifically say reject, rather than drop, because you want that ICMP Unreachable (administratively prohibited) message to get returned to any app trying to use v6. That way, Happy Eyeballs will gracefully and quickly fall back to v4. Unless your containers have some exceptionally weird routing rules, v6 connections will only be attempted once, and will always use the route advertised. So if your router denies this attempt, your containers won’t try again in a way that could leak. v6 leaks are more likely when there isn’t even a route advertised.

    This makes your apps able to use v6, for that day when your VPN supports it, and so it’s just a question of when the network itself can be upgraded. IMO, apps should always try for v6 first and the network (if it can’t support it) will affirmatively reply that it can’t, and then apps will gracefully fall back.
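    A minimal sketch of that v6-first, graceful-fallback behavior (Python; real Happy Eyeballs per RFC 8305 races attempts on timers, this only shows the ordering and the fall-through on a rejected v6 route):

```python
import socket

def connect_v6_first(host: str, port: int, connect=socket.create_connection):
    """Try each resolved address, IPv6 candidates first.

    A router that REJECTs v6 (ICMPv6 administratively prohibited) makes
    the v6 attempt fail immediately, so we fall through to v4 without
    waiting on a timeout, which is what a DROP would cause.
    """
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Stable sort: AF_INET6 entries before AF_INET ones.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, _, _, _, sockaddr in infos:
        try:
            return connect((sockaddr[0], port))
        except OSError as err:
            last_err = err  # e.g. ENETUNREACH from the REJECT
    raise last_err or OSError("no addresses resolved")
```

    The sort-then-fall-through shape is the whole point: the app always prefers v6, and the network affirmatively tells it when that path isn’t available.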

    This also benefits you by logging all attempted v6 traffic, to know how much of your stuff is actually v6-capable. And more data is always nice to have.