• gravitas_deficiency@sh.itjust.works · 1 day ago

    You can get a Coral TPU for 40 bucks or so.
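
    For scale, here is a hedged sketch of what Coral inference looks like with the pycoral library, assuming a model already compiled for the Edge TPU (both file names are hypothetical stand-ins):

    ```python
    # Minimal Edge TPU classification sketch with pycoral; the .tflite
    # model must be compiled for the Edge TPU, and both paths are
    # hypothetical stand-ins.
    from pycoral.utils.edgetpu import make_interpreter
    from pycoral.adapters import common, classify
    from PIL import Image

    interpreter = make_interpreter("mobilenet_v2_edgetpu.tflite")
    interpreter.allocate_tensors()

    img = Image.open("input.jpg").resize(common.input_size(interpreter))
    common.set_input(interpreter, img)
    interpreter.invoke()

    for c in classify.get_classes(interpreter, top_k=3):
        print(c.id, f"{c.score:.3f}")
    ```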

    You can get an AMD APU with an NN-inference-optimized tile for under $200.
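
    On the AMD side, that NPU tile is usually driven through ONNX Runtime. A rough sketch, assuming the Ryzen AI build of onnxruntime and a quantized model (the model path is a hypothetical stand-in):

    ```python
    # Sketch: ONNX Runtime with AMD's Vitis AI execution provider,
    # which targets the Ryzen AI NPU; assumes the Ryzen AI software
    # stack is installed. CPUExecutionProvider is listed as a fallback.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",  # hypothetical quantized model
        providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
    )
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in input
    outputs = session.run(None, {session.get_inputs()[0].name: x})
    print(outputs[0].shape)
    ```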

    Training can be done with any relatively modern GPU, with varying efficiency and capacity depending on how much you want to spend.
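
    “Any relatively modern GPU” is fairly literal in, e.g., PyTorch: the training loop is identical across vendors and only the device changes (ROCm builds expose AMD cards as "cuda" too). A toy sketch:

    ```python
    # Toy training loop: swap the device string and nothing else changes.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-in batch; a real run would stream batches from a DataLoader.
    x = torch.randn(64, 784, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    ```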

    What price point are you trying to hit?

    • WorldsDumbestMan@lemmy.today · 10 hours ago

      I just use pre-made AIs and write detailed instructions for them, then watch them churn out basic documents over hours… I need a better laptop.

    • boonhet@lemm.ee · 1 day ago

      > What price point are you trying to hit?

      With regards to AI? None, tbh.

      With this super-fast storage I have other cool ideas, but I don’t think I can get enough bandwidth to saturate it.

      • barsoap@lemm.ee · 24 hours ago

        > With regards to AI? None, tbh.

        TBH, that might be enough. Stuff like SDXL runs on 4 GB cards (the trick is using ComfyUI; think 5-10 s/it), and reportedly smaller LLMs too (haven’t tried, not interested). And the reason I’m eyeing a 9070 XT isn’t AI, it’s finally upgrading my GPU, but it would still be a massive fucking boost for AI workloads.
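
        (The ComfyUI trick is mostly aggressive offloading; the same idea in Hugging Face diffusers looks roughly like this sketch, which is not ComfyUI’s actual internals:)

        ```python
        # SDXL on a ~4 GB card via sequential CPU offload: only the
        # active submodule sits in VRAM at a time. Slow, but it fits.
        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16,
        )
        pipe.enable_sequential_cpu_offload()  # the low-VRAM trick

        image = pipe("a lighthouse at dusk", num_inference_steps=25).images[0]
        image.save("out.png")
        ```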

      • gravitas_deficiency@sh.itjust.works · 1 day ago

        You’re willing to pay $none to have hardware ML support for local training and inference?

        Well, I’ll just say that you’re gonna get what you pay for.

        • bassomitron@lemmy.world · 1 day ago

          No, I think they’re saying they’re not interested in ML/AI. They want this super-fast memory available on regular servers for other use cases.

              • boonhet@lemm.ee · 4 hours ago

                I mean, the image generators can be cool, and LLMs are great for bouncing ideas off at 4 AM when everyone else is sleeping. But I can’t imagine paying for AI, don’t want it integrated into most products, and won’t put a lot of effort into hosting a low-parameter model that performs way worse than ChatGPT does even without a paid plan. So you’re exactly right: it’s not being sold to me in a way that would make me want to pay for it, or invest in hardware resources to host better models.
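
                (For what it’s worth, the hosting side of a small local model is only a few lines these days, e.g. with llama-cpp-python; the model path below is hypothetical, and whether the output justifies it is the real question:)

                ```python
                # Local completion with llama-cpp-python; "small-model.gguf"
                # is a hypothetical quantized checkpoint.
                from llama_cpp import Llama

                llm = Llama(model_path="small-model.gguf", n_ctx=2048)
                out = llm("Q: Why is the sky blue? A:", max_tokens=128, stop=["Q:"])
                print(out["choices"][0]["text"])
                ```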