• 1 Post
  • 42 Comments
Joined 2 years ago
Cake day: November 21, 2023



  • I’m afraid to say that I too have been corrupted by VSCode.

    It’s widely used, easy to get into, has LOTS of extensions, and works mostly the same across OSes, which makes it easy to set up for and explain to others.

    The two extensions I’m missing most in other IDEs/text editors would be the “Remote - SSH” extension by Microsoft, which gives unparalleled integration when working remotely, and PlatformIO which, while it can be used independently in its core form, just works way better in VSCode.

    Besides this, I’ll use Nano for small tasks and vi on embedded devices where Nano is unavailable, though I’ll need a vi cheatsheet for anything more advanced than basic editing.






  • The claim above was off the top of my head, but I’ve found multiple pages of results describing the panic that ensued.

    Now, Microsoft (Copilot and Github) are less than clear on what exactly is used for training, but the general consensus seems to be that they don’t train on private repositories. Though there appears to be some confusion about this, especially regarding Microsoft’s honesty about not using loopholes (this article might be fake, I haven’t tried confirming it; this topic is a shit show rife with miscommunication, misinformation, and quite a lot of confusion and fear regardless).

    It appears that the specific issue I was referring to required a human error for Copilot to be able to train on the private repositories. Namely, some unfortunate fool temporarily making the repository public (in which case it obviously isn’t private anymore, and is therefore up for grabs by scrapers). Usually this wouldn’t be a problem, since no indexer or scraper can check all of Github all at once all the time, so the chance of a briefly exposed repository being cached is rather small, albeit always there.

    That said, Copilot, Bing, and Github are likely more tightly integrated than Bing simply wasting resources on continuously scraping Github for new repositories. I personally imagine that Github saving resources by sending a signal to Bing when a repository is made public isn’t entirely unlikely (that’s something I might do, harboring no ill intentions), meaning that it is possible (though in no way confirmed) that Bing instantly punishes briefly exposed Github repositories by caching them forever (see the sketch at the end of this comment for what such a signal could look like).

    Is this 100% Microsoft being predatory? No, obviously not, since it requires a user error to happen in the first place, and since Copilot is technically only trained on public or exposed data. Still, Microsoft learning about this rather scammy behavior, classifying it as merely “low-impact severity”, and disabling the Bing cache for humans (but apparently not for Copilot) doesn’t sit right with me. I’m sure they knew exactly which kind of data they were working with during dataset sanitization, so they could have chosen not to use sensitive data, or at least to inform exposed clients that their cached secrets were being added to Copilot.
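
    To be clear, the “signal” bit above is pure speculation on my part, but the plumbing for it does exist: Github’s webhooks can fire a repository event with a “publicized” action when a private repo is flipped to public. A rough sketch of a receiver forwarding such events to an indexer (the Flask app and the indexer URL are made up purely for illustration) could look something like this:

        # Hypothetical receiver for Github's "repository" webhook event.
        from flask import Flask, request
        import requests

        app = Flask(__name__)
        INDEXER_URL = "https://indexer.example.com/enqueue"  # made-up endpoint

        @app.route("/github-webhook", methods=["POST"])
        def github_webhook():
            event = request.headers.get("X-GitHub-Event", "")
            payload = request.get_json(silent=True) or {}
            # The "repository" event carries the action "publicized" when a
            # private repository is made public.
            if event == "repository" and payload.get("action") == "publicized":
                repo = payload.get("repository", {}).get("full_name", "")
                # Nudge the (hypothetical) indexer to crawl and cache it right away.
                requests.post(INDEXER_URL, json={"repo": repo}, timeout=5)
            return "", 204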


  • Wasn’t it revealed that Microsoft was training their Copilot on Github repositories, including private ones belonging to paying corporations who believed their source code to be safe and secure, resulting in secrets suddenly being made semi-public?

    I feel that there were other incidents too, though I can’t remember them off the top of my head. Definitely not a place I’d recommend anyone keep anything they love, even if they keep to best practices and don’t store secrets in their repositories.
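
    For what it’s worth, even a dead-simple local scan for the most obvious secret formats catches a lot of accidents before they ever reach a remote. Just a sketch - the patterns below are illustrative and nowhere near exhaustive, real scanners like gitleaks ship hundreds of them:

        import re
        import sys

        # Illustrative patterns only - nowhere near complete coverage.
        PATTERNS = {
            "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
            "GitHub token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
            "private key header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
        }

        def scan(path: str) -> int:
            """Print and count suspicious-looking lines in a single file."""
            hits = 0
            with open(path, errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    for name, pattern in PATTERNS.items():
                        if pattern.search(line):
                            print(f"{path}:{lineno}: possible {name}")
                            hits += 1
            return hits

        if __name__ == "__main__":
            found = sum(scan(p) for p in sys.argv[1:])
            sys.exit(1 if found else 0)

    Point it at whatever you’re about to commit (for example the output of git diff --cached --name-only) and it exits non-zero if anything looks like a credential.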


  • Ekky@sopuli.xyz to Comic Strips@lemmy.world · ISO 8601
    2 months ago

    That’s what we Europeans call a “petty answer to the disgrace that is American military time” (not to be confused with regular American time and dates, which don’t allow overflow, as far as I’m aware). The date described above is clearly “the second of March, 2015”, or 2015-03-02.
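
    The nice part is that pretty much every language spits that format out for free, and it sorts correctly as plain text. A quick Python example, nothing beyond the standard library:

        from datetime import date, datetime

        d = date(2015, 3, 2)
        print(d.isoformat())                         # 2015-03-02

        # Parsing goes the other way just as easily.
        parsed = datetime.strptime("2015-03-02", "%Y-%m-%d").date()
        print(parsed == d)                           # True

        # Lexicographic order matches chronological order, so a plain
        # string sort is already a date sort.
        print(sorted(["2023-11-21", "2015-03-02"]))  # ['2015-03-02', '2023-11-21']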


  • Well, I got that, but that’s also pretty much the only thing it mentions. What were the results? Was it better than the last generation? How will it change warfare in the future (beyond Gaza)?

    I’m gonna ignore the deeply unethical application under which this mysterious and barely named new rocket was tested, since that hardly is relevant to this community and better discussed elsewhere.

    EDIT: Sorry, that last paragraph should have an “I think” in there, since I’m no mod and am purely voicing my opinion about low quality and (what I find to be) barely relevant posts in this community.


  • Hmm, this seems more about economics and politics than technology.

    Like, what exactly is the new type of Bar rocket and how does it compare to the older rockets? I see it being mentioned as a replacement for Rumach rockets, but the only details are that it’s got some unnamed “guidance mechanism specifically designed for difficult combat environments” and that it’s rapid fire (compared to some other unnamed rocket?).




  • I’d love to see a modern MMOFPS; I can’t think of anything coming close to Planetside on that front.

    That said, I mostly did TR flash zergs when running solo, and VS C4 fairies when my friends were online.

    We all kinda dropped the game after the Combined Arms Initiative was rolled out and removed the need for vehicles, as heavies were stronger than MAXes and could solo pretty much all armor without breaking a sweat, and we could break up an hours-long tank-line stalemate using a Sunderer and 3 Archer-equipped engineers. (Multiple tanks and some infantry peeled off to stop us, but we could kill pretty much anything with 2-3 salvos, usually before they found us. The Sunderer was mostly just to get there.)


  • I returned them. And I did indeed get the name wrong as they are a series of WiFi mesh towers named ‘Deco X20’ and not ‘Deca’.

    I do already use DD-WRT in my home network, but these were meant to provide a network-on-a-budget out in the field, i.e. a stand-in for professional solutions which other people should be able to set up too, so I wanted to modify them as little as possible.

    WiFi extenders do technically fit my requirements (and I’ve got them working mostly successfully), but, as far as I’m aware, mesh is specifically made for seamlessly handing WiFi devices over from one tower to another, and for forming a circle or “spiderweb” pattern where the signal takes the best (distance/speed/reliability) route back to the router - which is what I need.

    Ubiquiti seems to have gained traction lately, so I’ll send them an e-mail asking whether their devices are also too smart to be usable.


  • Yeah, I even wrote TP-Link an e-mail about this, but they wrote back that that was just how the device worked, that none of their mesh solutions could be recommended if I needed a stable WiFi connection without internet access, and that they obviously couldn’t recommend any devices from competitors.

    My image of TP-Link might have taken a hit as a result, as I believed this to be a fundamental and implied feature.