Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 173 Posts
  • 3.59K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • https://en.wikipedia.org/wiki/Ericsson

    Number of employees

    • 94,000 (2024; down from the prior year)

    It doesn’t say what those employees worked on, but purely in terms of percentage of headcount, it’s not a massive cut.

    What I’d be more concerned about is if it means that non-Chinese 5G is in trouble.

    https://www.msn.com/en-us/money/human-resources/5g-freeze-hits-hard-ericsson-rocked-by-fresh-layoffs-as-sweden-s-telecom-giant-tightens-the-axe/ar-AA1UjGRp

    Ericsson has announced that it will be laying off employees as part of a broader effort to improve its financial situation and boost efficiency. The company has been facing a tough telecom equipment market, with carriers spending less than expected on 5G technology. This shift in spending has impacted Ericsson’s revenue growth and profitability.

    That does sound like it’s related to the 5G market, though that article doesn’t have particulars.

    I remember that a few years back, the US had talked about buying Ericsson or Nokia if they weren’t otherwise getting adequate support, because it did not want China to have control over the (security-sensitive) 5G infrastructure market.

    5G infrastructure is one notable technology area where the US doesn’t have top-tier players, so it got really twitchy about the idea that China might take over the market. It’s also why the US was running around the world a few years back trying to get parties to buy Ericsson or Nokia product rather than Huawei’s.

    If Ericsson is really in trouble on 5G infrastructure, I wonder if that might be reconsidered.

    searches

    It looks like I’m not the only one thinking about that.

    https://www.ft.com/content/2834381f-7a21-4c51-b45f-b4b9dd38c818

    But what eye-catching proposals has Trump yet to resurrect from his first stint in office? There is one in particular that strikes me as intriguing now as it was then: that the US should buy Nokia or Ericsson, or even both.

    William Barr, attorney-general under Trump, suggested in 2020 that the US should actively consider taking a “controlling stake” in either or both of the Finnish and Swedish telecoms equipment makers “either directly or through a consortium of private American and allied companies”.

    Like many Trump proposals, the idea was first met by gasps of disbelief. The US government doesn’t tend to buy foreign companies. But as is the case in some of the president’s outlandish schemes, there was a kind of rationale to the proposed purchase — and one that has not gone away in the meantime.

    Telecoms equipment manufacturing is one of the very few areas of technology where the US is not just behind but not present at all. Reliable networks are vital for business and consumers alike, as well as becoming increasingly essential in warfare, as Ukraine is demonstrating with its drone warfare.

    “They have not solved that issue in the US,” says Anna Wieslander, Northern Europe director at US think-tank the Atlantic Council.

    Nokia and Ericsson have an effective duopoly in much of the western world thanks to American pressure on allies not to use Huawei, their main rival, which has close ties to the Chinese state. But they have struggled to draw as much benefit out of that as many might have expected, with both experiencing disappointing profitability in recent years.

    What is more, Ericsson and Nokia have failed to garner full-blooded support from the EU — all the more strange for being perhaps the one sector where Europe has technology dominance.

    Like, I’d think that one of several things probably needs to happen:

    • Nokia dominates. I don’t think that the US cares that much about consolidation in the market.

    • The US creates some kind of domestic competitor. Maybe Cisco or someone moves into the market (I understand that they do sell some 5G infrastructure, but not on the level that Ericsson and Nokia and Huawei do).

    • The US buys one of Ericsson or Nokia and provides support.

    • The US decides that it isn’t worried about 5G from a security standpoint (e.g., deciding that the real future is in some other system).




  • https://stackoverflow.com/questions/30869297/difference-between-memfree-and-memavailable

    Rik van Riel’s comments when adding MemAvailable to /proc/meminfo:

    /proc/meminfo: MemAvailable: provide estimated available memory

    Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up “free” and “cached”, which was fine ten years ago, but is pretty much guaranteed to be wrong today.

    It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs, and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files.

    Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file), and SReclaimable, as well as the “low” watermarks from /proc/zoneinfo.

    However, this may change in the future, and user space really should not be expected to know kernel internals to come up with an estimate for the amount of free memory.

    It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place.
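
    To make the contrast concrete, here’s a quick sketch of my own (not from the patch, and throwaway-grade C with no error handling) that parses /proc/meminfo and prints the old “free plus cached” guess next to the kernel’s MemAvailable estimate:

        #include <stdio.h>
        #include <string.h>

        /* Return the named /proc/meminfo field in kB, or -1 if absent. */
        static long field_kb(const char *name) {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f)
                return -1;
            char line[256];
            long kb = -1;
            size_t n = strlen(name);
            while (fgets(line, sizeof line, f)) {
                /* Match "Name:" at the start of the line, then read the number. */
                if (strncmp(line, name, n) == 0 && line[n] == ':') {
                    sscanf(line + n + 1, "%ld", &kb);
                    break;
                }
            }
            fclose(f);
            return kb;
        }

        int main(void) {
            printf("free + cached: %ld kB\n",
                   field_kb("MemFree") + field_kb("Cached"));
            printf("MemAvailable:  %ld kB\n", field_kb("MemAvailable"));
            return 0;
        }

    The two numbers can differ in either direction: unfreeable tmpfs and shared-memory pages inflate “Cached”, while reclaimable slab isn’t counted in it at all.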

    Looking at the htop source:

    https://github.com/htop-dev/htop/blob/main/MemoryMeter.c

       /* we actually want to show "used + shared + compressed" */
       double used = this->values[MEMORY_METER_USED];
       if (isPositive(this->values[MEMORY_METER_SHARED]))
          used += this->values[MEMORY_METER_SHARED];
       if (isPositive(this->values[MEMORY_METER_COMPRESSED]))
          used += this->values[MEMORY_METER_COMPRESSED];
    
       written = Meter_humanUnit(buffer, used, size);
    

    It adds used, shared, and compressed memory to get the amount actually tied up, but disregards cached memory entirely, which, based on the comment above, is problematic: some of that cache (shared memory segments, tmpfs, and the like) may not actually be freeable.

    top and free, on the other hand, use the kernel’s MemAvailable directly. From procps-ng’s free.c:

    https://gitlab.com/procps-ng/procps/-/blob/master/src/free.c

    	printf(" %11s", scale_size(MEMINFO_GET(mem_info, MEMINFO_MEM_AVAILABLE, ul_int), args.exponent, flags & FREE_SI, flags & FREE_HUMANREADABLE));
    

    In short: you probably want to trust /proc/meminfo’s MemAvailable (which is what top and free will show), and htop is probably giving a misleadingly low number.




  • There might be some way to make use of it.

    Linux apparently can use VRAM as a swap target:

    https://wiki.archlinux.org/title/Swap_on_video_RAM

    So you could probably take an Nvidia H200 (141 GB memory) and set it as a high-priority swap partition, say.
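
    The “high-priority” part is just a flag on the swapon(2) syscall (it’s what swapon -p sets on the command line). A minimal sketch of my own, assuming mkswap has already been run on the target; the /dev/loop0 path is a hypothetical stand-in, since the vramfs route would go through a loop device over a file on the FUSE mount:

        /* Enable an existing swap area at the highest allowed priority,
         * equivalent to `swapon -p 32767 /dev/loop0`. Requires root;
         * the device path is a placeholder. */
        #include <stdio.h>
        #include <sys/swap.h>

        int main(void) {
            int prio = 32767; /* top of SWAP_FLAG_PRIO_MASK */
            int flags = SWAP_FLAG_PREFER |
                        ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);
            if (swapon("/dev/loop0", flags) != 0) {
                perror("swapon");
                return 1;
            }
            return 0;
        }

    Given a higher priority than the disk-backed swap devices, the kernel fills the VRAM-backed area first and only spills to disk once it’s full.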

    A typical desktop is liable to have problems powering an H200 (600 W max TDP), but that figure is with all the parallel compute hardware active; I assume that if all you’re doing is moving data in and out of memory, it won’t draw much power, same as a typical gaming-oriented GPU.

    That being said, it sounds like the route on the Arch Wiki above uses vramfs, a FUSE filesystem. That means it runs in userspace rather than kernelspace, which probably adds more overhead than is really necessary.

    EDIT: I think that a lot will come down to where research goes. If it turns out that someone figures out that changing the hardware (having a lot more memory, adding new operations, whatever) dramatically improves performance for AI stuff, I suspect that current hardware might get dumped sooner rather than later as datacenters shift to new hardware. Lot of unknowns there that nobody will really have the answers to yet.

    EDIT2: Apparently someone made a kernel-based implementation for Nvidia cards to use the stuff directly as CPU-addressable memory, not swap.

    https://github.com/magneato/pseudoscopic

    In holography, a pseudoscopic image reverses depth—what was near becomes far, what was far becomes near. This driver performs the same reversal in compute architecture: GPU memory, designed to serve massively parallel workloads, now serves the CPU as directly-addressable system RAM.

    Why? Because sometimes you have 16GB of HBM2 sitting idle while your neural network inference is memory-bound on the CPU side. Because sometimes constraints breed elegance. Because we can.

    Pseudoscopic exposes NVIDIA Tesla/Datacenter GPU VRAM as CPU-addressable memory through Linux’s Heterogeneous Memory Management (HMM) subsystem. Not swap. Not a block device. Actual memory with struct page backing, transparent page migration, and full kernel integration.

    I’d guess that it’ll perform substantially better.

    It looks like they presently only target older cards, though.


  • This world is getting dumber and dumber.

    Ehhh…I dunno.

    Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.

    searches

    https://www.belfasttelegraph.co.uk/news/internet-killed-my-daughter/28397087.html

    Internet killed my daughter

    https://archive.ph/pJ8Dw

    Were Simon and Natasha victims of the web?

    https://archive.ph/i9syP

    Predators tell children how to kill themselves

    And before that, I remember video games.

    It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.

    https://en.wikipedia.org/wiki/Moral_panic

    A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is “the process of arousing social concern over an issue”,[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]

    Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”.[6] While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm”.[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen’s model of moral panic, below).

    Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]

    Media technologies

    Main article: Media panic

    The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]

    According to media studies professor Kirsten Drotner:[42]

    [E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.

    Recent manifestations of this kind of development include cyberbullying and sexting.[8]

    I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.



  • tal@lemmy.today to Comic Strips@lemmy.world · There’s enough people on the planet (edited; 1 day ago)

    https://en.wikipedia.org/wiki/We_Didn't_Start_the_Fire

    “We Didn’t Start the Fire” is a song written by American musician Billy Joel.

    Joel conceived the idea for the song when he had just turned 40. He was in a recording studio and met a 21-year-old friend of Sean Lennon who said “It’s a terrible time to be 21!”. Joel replied: “Yeah, I remember when I was 21 – I thought it was an awful time and we had Vietnam, and y’know, drug problems, and civil rights problems and everything seemed to be awful”. The friend replied: “Yeah, yeah, yeah, but it’s different for you. You were a kid in the fifties and everybody knows that nothing happened in the fifties”. Joel retorted: “Wait a minute, didn’t you hear of the Korean War or the Suez Canal Crisis?” Joel later said those headlines formed the basic framework for the song.[4]

    https://www.youtube.com/watch?v=eFTLKWw542g

    🎵 We didn’t start the fire 🎵
    🎵 It was always burning since the world’s been turning 🎵
    🎵 We didn’t start the fire 🎵
    🎵 No, we didn’t light it, but we tried to fight it 🎵



  • tal@lemmy.today to Wikipedia@lemmy.world · Mundaneum (2 days ago)

    Prior to the shift to computers — where you typically have computer programmers designing data structures and such — my understanding is that a lot of people worked on designing filing systems for humans to use, which was somewhat analogous.


    The point I’m making is that bash is optimized for quickly writing throwaway code. It doesn’t matter if the code written blows up in some case other than the one you’re using. You don’t need to handle edge cases that don’t apply to the one time that you will run the code. I write lots of bash code that doesn’t handle a bunch of edge cases, because for my one-off use, that edge case doesn’t arise. Similarly, if an LLM is generating code that misses some edge case, but it’s a situation that will never arise, that may not be a problem.

    EDIT: I think maybe that you’re misunderstanding me as saying “all bash code is throwaway”, which isn’t true. I’m just using it as an example where throwaway code is a very common, substantial use case.


  • I don’t know: it’s not just the outputs posing a risk, but also the tools themselves

    Yeah, that’s true. Poisoning the training corpus of models is at least a potential risk. There’s a whole field of security work out there now aimed at LLMs.

    it shouldn’t require additional tools, checking for such common flaws.

    Well, we are using them today for human programmers, so… :-)