• Fmstrat@lemmy.world · 2 days ago

    This is a really good video about how well DLSS5 works… on backgrounds, without altering artistic intent. If Nvidia allows devs to tailor “no-go zones” for DLSS, this would be a great thing.

    https://youtu.be/rtiynhjWPWo

    • redwattlebird@thelemmy.club · 2 days ago

      But… why though? As a dev, why would I go through the ideation process only to have the result filtered through TWO GPUs? For what benefit? This type of filtering is completely out of my control as a developer, and I wouldn’t want my game attached to third-party parasite companies, basically splitting my player base into two classes.

      • Fmstrat@lemmy.world · 2 days ago

        I agree on two things:

        • DLSS5 should not be run where an artist does not want it to be run.
        • DLSS5 requiring two GPUs is horrible, and can only push us further towards the dreaded “game in the cloud”.

        But if a tool enhances a texture in a specific way, for instance sharpening lines along a garment or adding shadows to an object under a lamp, how is that different from existing texture-mapping algorithms?
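
        To make that concrete, here’s a minimal sketch of a conventional screen-space sharpen (an unsharp mask) in Python/numpy. This is not DLSS5’s actual algorithm, just the kind of deterministic pass I mean for comparison, where the developer picks the radius and strength:

        ```python
        # Unsharp mask: subtract a blurred copy of the frame to isolate
        # high-frequency detail, then add a scaled amount of it back.
        # Deterministic, and fully under the developer's control.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(frame, radius=1.5, amount=0.6):
            """frame: HxWx3 float array in [0, 1]; returns a sharpened copy."""
            blurred = gaussian_filter(frame, sigma=(radius, radius, 0))
            detail = frame - blurred
            return np.clip(frame + amount * detail, 0.0, 1.0)
        ```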

        As artists learn to predict what these tools do and where to take advantage of them (such as in backgrounds or on specific textures), I think they will become useful. At least I hope so. If Nvidia doesn’t provide tooling to do that, then I’m 100% on the same page as you.
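
        By tooling, I mean something like the sketch below (illustrative Python/numpy, not a real Nvidia API): the engine renders a per-pixel mask marking where the filter may run, and protected pixels pass through untouched. `enhance` is a hypothetical stand-in for the vendor filter itself.

        ```python
        import numpy as np

        def apply_with_mask(frame, mask, enhance):
            """frame: HxWx3 in [0, 1]; mask: HxW in [0, 1], authored by the artist
            (1 = filter allowed, 0 = "no-go zone"); enhance: the vendor filter."""
            enhanced = enhance(frame)
            m = mask[..., None]                      # broadcast mask over RGB
            return m * enhanced + (1.0 - m) * frame  # protected pixels unchanged
        ```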

        • redwattlebird@thelemmy.club · 1 day ago

          But, again, why? All of this is applied post-production, so the artist has no control over what the player sees on their end. I’d much rather have a static pipeline where I’m in control of the look and feel, while also giving the player accessibility options like gamma adjustment.

          > But if a tool enhances a texture in a specific way, for instance sharpening lines along a garment or adding shadows to an object under a lamp, how is that different from existing texture-mapping algorithms?

          We already have all of that. This ‘feature’ literally adds nothing of value to our pipeline, because it is applied after the product has shipped, on the player’s computer.

          Further, because it’s a filter, it obfuscates what’s actually happening underneath. Why learn to predict what the filter will do when you can just not work with it and create scenes exactly how you want them?

          This whole thing is a solution to a problem that doesn’t exist, offered simply to recoup their investments. It’s a complete waste of energy, materials, processing power, etc. Absolutely unnecessary.

    • nightlily@leminal.space · 2 days ago

      This guy is clearly speaking from a place of technical ignorance. It can’t do any of that, because it’s a screen-space post-processing effect that only works on final pixel colours and motion vectors. It has no depth, material, or lighting information. It is purely a generative AI filter, and in the demo it gets a lot of the lighting and material properties wrong: there’s one scene from the Hogwarts game where it turns a cast-iron cauldron into flat ceramic or plastic. It makes up reflections that are effectively screen-space, because it can’t “see” detail that is off screen, and it overrides actual RT reflections with them. It’s bad for faces and bad for backgrounds.
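
      To put the data-availability point in code: the two signatures below (illustrative Python, not any real engine’s API) contrast what a screen-space filter receives with what an in-engine pass can read from the G-buffer. The extra fields in the second class are exactly the information the filter is missing.

      ```python
      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class ScreenSpaceInput:                  # all a post-process AI filter gets
          color: np.ndarray                    # HxWx3 final pixel colours
          motion: np.ndarray                   # HxWx2 motion vectors

      @dataclass
      class GBufferInput(ScreenSpaceInput):    # what a real render pass can use
          depth: np.ndarray                    # HxW scene depth
          normals: np.ndarray                  # HxWx3 surface normals
          material_id: np.ndarray              # HxW material lookup (cast iron vs. ceramic)
      ```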