• 0 Posts
  • 275 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • hedgehog@ttrpg.network to Comic Strips@lemmy.world · Protest vote
    10 hours ago

    Illegal vote suppression elected Trump, but even if it hadn’t, you should blame Democrats before blaming people who voted for third-party candidates. Now, if you’re talking about people who “protest voted” by voting for Trump (in both the primaries and the election), then sure. Those people did, in fact, play an instrumental part in electing him.

    Why blame Democrats? Well, beyond just kinda being Republican-lites:

    • for opposing ranked choice voting (and alternatives)
    • for not rallying around progressive candidates
    • for not choosing Kamala via primary elections in 2024

    Democrats are the bare minimum “harm reduction” party, and I don’t bear any ill will toward people who voted for them rather than a party that would actually try to effect change, but the opposite mindset - blaming third-party voters for not voting for Democrats - is very shortsighted. And as third-party voters have never had the power to enact RCV or STAR voting or otherwise improve the system, blaming them instead of the Democrats who have had that power is inane.

    I’ve voted for a Democrat in every single presidential election that I’ve been able to, but I honestly wish I hadn’t. I’d much rather there be more visibility for third parties, and for more people to feel empowered to vote for third-party candidates.





  • hedgehog@ttrpg.network to Comic Strips@lemmy.world · Ass Ads
    3 days ago

    > Glaring doesn’t imply a negative meaning. In this case it’s used to mean “obvious”.

    Unless you’re suggesting that “glaring” means “obviously staring” (it doesn’t - that would be “glaringly staring”), this doesn’t make any sense.

    “[He’s] glaring at [direct object]” is an example of a sentence that uses the present participle form of the verb “glare,” which explicitly communicates anger or fierceness.

    If you’re not convinced, read on.

    —————

    The verb form that takes an object is:

    Glare (verb with object): to express with a glare. “They glared their anger at each other.”

    The noun form the above definition references is:

    Glare (noun): a fiercely or angrily piercing stare.

    “Glaring” can be an adjective, and one of those definitions does mean “obvious” or “conspicuous,” but the use of that form of the word doesn’t make sense in her sentence. Think about a comparable sentence like “The undercover operative is conspicuous at the bar,” where the bar is the location. (Even then, most people wouldn’t use “glaring” in that sentence, as “conspicuous” or “obvious” are much less ambiguous; the operative could be staring piercingly or angrily at the bar rather than being conspicuous while at the bar.) Another example that makes a bit more sense is “The effect of the invasive plants is glaring at the park.”

    But for that interpretation to be valid here, you’d have to:

    • believe that the dude is trying to hide/blend in, or otherwise explain how he - not what he’s doing, but the dude himself - is conspicuous
    • believe that the woman’s referring to her own ass as a location
    • assume that she isn’t commenting on how the guy is looking at her ass, even though the joke depends on giving him something different to look at

    That’s a bit of a stretch.


  • This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile

    x-logging:
      &default-logging
      options:
        max-size: '10m'
        max-file: '5'
      driver: json-file
    
    services:
      mongo:
        image: mongo:4.4
        volumes:
          - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
        logging: *default-logging
    
      nightscout:
        image: nightscout/cgm-remote-monitor:latest
        container_name: nightscout
        restart: always
        depends_on:
          - mongo
        logging: *default-logging
        ports:
          - 1337:1337
        environment:
          ### Variables for the container
          NODE_ENV: production
          TZ: [removed]
    
          ### Overridden variables for Docker Compose setup
          # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
          # and manage TLS certificates
          INSECURE_USE_HTTP: 'true'
    
          # For all other settings, please refer to the Environment section of the README
          ### Required variables
          # MONGO_CONNECTION - The connection string for your Mongo database.
          # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
          # The default connects to the `mongo` included in this docker-compose file.
          # If you change it, you probably also want to comment out the entire `mongo` service block
          # and `depends_on` block above.
          MONGO_CONNECTION: mongodb://mongo:27017/nightscout
    
          # API_SECRET - A secret passphrase that must be at least 12 characters long.
          API_SECRET: [removed]
    
          ### Features
          # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
          # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
          ENABLE: careportal rawbg iob
    
          # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
          # When readable, anyone can view Nightscout without a token. Setting it to denied will require
          # a token from every visit, using status-only will enable api-secret based login.
          AUTH_DEFAULT_ROLES: denied
    
          # For all other settings, please refer to the Environment section of the README
          # https://github.com/nightscout/cgm-remote-monitor#environment
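
    If you go this route, a quick way to sanity-check it (assuming the file is saved as docker-compose.yml and you have the Compose v2 CLI) would be something like:

    # Start the stack in the background
    docker compose up -d

    # Tail Nightscout’s logs to confirm it started and is listening
    docker compose logs -f nightscout

    and then confirming that http://localhost:1337 responds in a browser.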
    
    

  • To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,

    services:
      nightscout:
        ports:
          - 3000:3000
    

    You can remove the labels (those are only used by Traefik), as well as the Traefik service itself.

    Then just point Nginx to that port (e.g., 3000) on your local machine.
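
    For reference, a minimal Nginx server block for this might look something like the sketch below (nightscout.example.com and the certificate paths are placeholders, and it assumes Nightscout is published on localhost:3000; adapt it to however you already manage TLS):

    server {
        listen 443 ssl;
        server_name nightscout.example.com;  # placeholder domain

        # Placeholder certificate paths
        ssl_certificate     /etc/letsencrypt/live/nightscout.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nightscout.example.com/privkey.pem;

        location / {
            # Forward all requests to the port published by Docker Compose
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }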

    —————

    Traefik has to know the port, too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
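
    For reference, the label in question would look something like this in your compose file (a sketch, assuming Traefik v2; the nightscout service name is a placeholder):

    labels:
      # Explicitly tell Traefik which container port to route traffic to
      - traefik.http.services.nightscout.loadbalancer.server.port=3000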




    There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it was solved.

    Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”

    Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).

    In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).

    https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.

    The author also quotes Jeff Hawkins’ book On Intelligence:

    > Now we can see the entire picture. Nature first created animals such as reptiles with sophisticated senses and sophisticated but relatively rigid behaviors. It then discovered that by adding a memory system and feeding the sensory stream into it, the animal could remember past experiences. When the animal found itself in the same or a similar situation, the memory would be recalled, leading to a prediction of what was likely to happen next. Thus, intelligence and understanding started as a memory system that fed predictions into the sensory stream. These predictions are the essence of understanding. To know something means that you can make predictions about it. …

    > The human cortex is particularly large and therefore has a massive memory capacity. It is constantly predicting what you will see, hear, and feel, mostly in ways you are unconscious of. These predictions are our thoughts, and, when combined with sensory input, they are our perceptions. I call this view of the brain the memory-prediction framework of intelligence.

    > If Searle’s Chinese Room contained a similar memory system that could make predictions about what Chinese characters would appear next and what would happen next in the story, we could say with confidence that the room understood Chinese and understood the story. We can now see where Alan Turing went wrong. Prediction, not behavior, is the proof of intelligence.

    Another reason why LLMs are still considered AI, in my opinion, is that we still don’t understand how they work - and by that, I of course mean that LLMs have emergent capabilities that we don’t understand, not that we don’t understand how the technology itself works.


    Fair point, I should have asked about commercial games in general.

    That said, I didn’t mean that the game studio itself would do the AI training and own their models in-house; if they did, I’d expect it to go just as poorly as you would. Rather, I’d expect the model to be created by an organization specializing in that sort of thing.

    One example I found is “Marey,” a GenAI model whose creators say it was trained ethically.

    Another is Adobe Firefly, where Adobe says they trained only on licensed and public domain content. It also sounds like Adobe is paying the artists whose content was used for AI training. I believe that Canva is doing something similar.

    StabilityAI is also doing something similar with Stable Audio 2.0, where they partnered with a music licensing company, AudioSparx, to ensure that artists are compensated, AI opt-outs are respected, and so on.

    I haven’t dug into any of those too deeply, but they seem to be heading in the right direction, at a surface level at least.

    One of the GenAI scenarios that’s the most terrifying to me is the idea of a company like Disney using all the material they have copyright for to train their own, proprietary GenAI image, audio, and video tools… not because I think the outputs would be bad, but because of the impact that would have on creators in that industry.

    Fortunately, as long as copyright doesn’t apply to purely AI-generated outputs, even from a model trained entirely on your own content, I don’t think Disney specifically will do this.

    I mention that as an example because that usage of AI, regardless of how ethically the model was trained, would still be unethical, in my opinion. Likewise in game creation, an ethically trained and operated model could still be used unethically to eliminate many people’s jobs solely in the interest of better profits.

    I’d be on board with AI use (in game creation or otherwise) if a company were to say, “We’re not changing the budget we have for our human workforce, including for contractors, licensed art, and so on, other than increasing it as inflation and wages increase. We will be using ethical AI models to create more content than we otherwise would have been able to.” But I feel like in a corporate setting, its use is almost always going to result in them cutting jobs.