• gravitas_deficiency@sh.itjust.works
    2 days ago

    You’re willing to pay $none to have hardware ML support for local training and inference?

    Well, I’ll just say that you’re gonna get what you pay for.

    • bassomitron@lemmy.world
      2 days ago

      No, I think they’re saying they’re not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.

          • boonhet@lemm.ee
            1 day ago

            I mean, the image generators can be cool, and LLMs are great for bouncing ideas off at 4 AM when everyone else is sleeping. But I can’t imagine paying for AI, I don’t want it integrated into most products, and I’m not going to put a lot of effort into hosting a low-parameter model that performs way worse than ChatGPT does without a paid plan. So you’re exactly right: it’s not being sold to me in a way that would make me want to pay for it, or invest in hardware to host better models.