Valve quietly not publishing games that contain AI-generated content if the submitters can’t prove they own the rights to the assets the AI was trained on

  • Lengsel@latte.isnot.coffee · 1 year ago

    That sounds like a positive step for verifying that the content was designed by humans, but it’s concerning that AI has any input at all, unless it’s used for finding issues with the gameplay mechanics and has nothing to do with game design.

    Possibly, with AI, single player campaigns might come closer to playing with real people, but AI can never duplicate human behaviour and instinct, only imitate it.

    • Ronno@kbin.social · 1 year ago

      I’m not sure this is about ensuring the content is made by humans; that isn’t the goal. Valve just wants to ensure that the game dev owns the rights to the content created for the game. Using AI, you can still own the rights in some scenarios, as long as the AI doesn’t use inputs that it doesn’t have the rights to.

      This is a very good development, it ensures that creators and owners of content are safeguarded, while at the same time ensuring that gamers get fresh and new content.

      • mack123@kbin.social · 1 year ago

        I have to agree here. Generative AI has so much potential for games, especially RPG-style games with believable NPCs. But the rights environment is very murky.

        I expect it to be resolved relatively soon though. A combination of a generally trained AI with subject-specific training should do the trick, in the same way we would train a helpdesk bot on company-specific information.
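The helpdesk-bot analogy above could be sketched like this: the studio collects question/answer pairs from lore it owns and packages them as training records. The field names and the JSONL layout here are illustrative assumptions, not any particular vendor's fine-tuning format.

```python
import json

# Hypothetical studio-owned lore, written in-house so the rights are clear.
lore_snippets = [
    ("Who guards the northern gate?",
     "Captain Mira has held that post for a decade."),
    ("What does the blacksmith sell?",
     "Mostly horseshoes, though he forges blades on request."),
]

def to_finetune_records(pairs):
    """Convert (question, answer) pairs into JSONL-style training records."""
    return [json.dumps({"prompt": q, "completion": a}) for q, a in pairs]

records = to_finetune_records(lore_snippets)
for line in records:
    print(line)
```

Each record is one line of JSON, which is the common shape for subject-specific training sets regardless of which base model they are fed to.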

        The remaining question, though, is what happens with the original broad dataset the source model was trained on. There, things are less clear.

        • Ronno@kbin.social · 1 year ago

          I think that it is quite feasible to do, though. Take, for example, Lord of the Rings. If the game dev has the rights to Lord of the Rings and its books, then it can be completely fine to write prompts for NPC text such as: produce a response to question X as if the NPC lives in Mordor, with his background as a blacksmith, etc. The AI can then generate that text under the Lord of the Rings IP just fine, and it will always have the right tone of voice. The same can be done for dynamic events etc.
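A minimal sketch of that prompting idea: bake the NPC's location and background into the prompt so generation stays inside the licensed setting. The template wording and the `npc_prompt` helper are assumptions for illustration, not any real game's code.

```python
def npc_prompt(question, location, occupation):
    """Build a lore-constrained prompt for an NPC response."""
    return (
        f"You are an NPC living in {location}, working as a {occupation}. "
        f"Answer strictly within the licensed setting.\n"
        f"Player asks: {question}"
    )

prompt = npc_prompt("Where can I buy a sword?", "Mordor", "blacksmith")
print(prompt)
```

The same template can be reused for every NPC by swapping in the character sheet, which is what keeps the tone of voice consistent across the game.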

          • tal@kbin.social · 1 year ago

            KoboldAI and similar tools do that today, but the text generation alone will soak up all the capacity on a computer and then some. It needs to be more efficient than it is today if one’s going to be generating text on the player’s computer.

            If you mean the studio using it to generate static text, then sure.

            • JackGreenEarth@kbin.social · 1 year ago

              @tal Or it could make an API call to a server, the way ChatGPT does today. Unfortunately, that would mean the player has to be online to use the text generation, but the tech of it isn’t what we’re discussing anyway. We’re talking about the ethics of it, not the means.

              It’s like we’re talking about whether robbing a bank is OK or not, and then someone goes and talks about how hard it is to rob a bank. It’s a non sequitur; it’s not what we’re talking about.

              @birlocke_ @lengsel @mack123 @Ronno
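For what it’s worth, the API-call setup mentioned above could look like the sketch below: the game client packages the prompt and sends it to the studio’s own inference server instead of running a model locally. The endpoint URL and payload shape are made-up assumptions, not a real service.

```python
import json
import urllib.request

# Hypothetical endpoint for the studio's hosted generation service.
API_URL = "https://api.example-studio.com/v1/npc-dialogue"

def build_request(prompt, npc_id):
    """Package an NPC dialogue request for the remote generation service."""
    payload = json.dumps({"npc_id": npc_id, "prompt": prompt}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Where can I buy a sword?", npc_id="blacksmith_01")
# Sending would be: urllib.request.urlopen(req) -- requires the player online.
```

This is why the player would need to be online: every line of dialogue is a round trip to the server rather than a local computation.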

              • mack123@kbin.social · 1 year ago

                And that is where things get interesting: the ethics of the situation, even beyond copyright issues. Was your AI trained on data that you have the rights to, or not?

                We then have to think of the base model. How was that trained? I have not yet formed a well-reasoned opinion on the ethics of training on social media and forum-style data.

                For me, personally, I don’t have an issue with my own posts and responses ending up as AI training data. We can also argue that those posts were made on public forums, therefore in public. But does that argument hold true for everyone? Underlying that question, we have to consider the profit motive of the companies. There is a major difference between training for academic purposes and training for corporate purposes.

                Valve is probably smart in steering clear of the entire mud bog at this time. Not enough is known about how it will play out in both the courts and public opinion.

                • Ronno@kbin.social · 1 year ago

                  Yeah, that’s what I mean: if the game devs can show that the AI language model is trained entirely on their own IP, then it should be fine.