Any tool can be a hammer if you use it wrong enough.

A good hammer is designed to be a hammer and only used like a hammer.

If you have a fancy new hammer, everything looks like a nail.

  • 0laura@lemmy.world · 4 months ago

    You not liking it doesn’t make it any less AI. I don’t remember that many people complaining when we called the code controlling video game characters AI.

      • macniel@feddit.org · 4 months ago

        Except cell phones, or cellular phones, refer to the structure a mobile network is built on: a mesh of cell towers.

    • macniel@feddit.org · 4 months ago

      Pretty sure that they were and still are called bots, though, at least in the context of first-person shooters.

          • 0laura@lemmy.world · 4 months ago

            I showed you proof that the term AI is sometimes used to describe the code that controls video game enemies.

                • macniel@feddit.org · 4 months ago

                  AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather than to perform machine learning or decision making. During the golden age of arcade video games, the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player’s input.

                  In general, game AI does not, as is sometimes thought or depicted, mean the realization of an artificial person in an NPC, in the manner of the Turing test or an artificial general intelligence.

                  Look further down to see what AI as it’s currently understood, aka generative AI, is used for in video games.
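
                  For what it’s worth, that kind of game “AI” is often just a tiny state machine. A minimal sketch (hypothetical Python, not taken from any actual game) of a ghost-style enemy with graduated difficulty and a fixed movement pattern:

                  ```python
                  # Hypothetical sketch of classic game-enemy "AI": a small finite-state machine.
                  import math
                  import random

                  class Ghost:
                      def __init__(self, x, y, difficulty=1):
                          self.x, self.y = x, y
                          # Graduated difficulty: faster speed, larger chase radius.
                          self.speed = 1.0 + 0.25 * difficulty
                          self.chase_radius = 4.0 + 2.0 * difficulty
                          self.state = "patrol"

                      def update(self, player_x, player_y):
                          dist = math.hypot(player_x - self.x, player_y - self.y)
                          # Event dependent on player input: chase when the player is close.
                          self.state = "chase" if dist < self.chase_radius else "patrol"
                          if self.state == "chase":
                              # Head straight for the player.
                              self.x += self.speed * (player_x - self.x) / max(dist, 1e-9)
                              self.y += self.speed * (player_y - self.y) / max(dist, 1e-9)
                          else:
                              # Distinct movement pattern: wander at random.
                              self.x += random.choice([-1, 0, 1]) * self.speed
                              self.y += random.choice([-1, 0, 1]) * self.speed
                  ```

                  No learning or model anywhere, yet this is exactly what gets called the enemy AI.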

                  • 0laura@lemmy.world · 4 months ago

                    I don’t really see your argument? It seems that you agree with me. AI doesn’t always refer to AGI. Sometimes it refers to AGI, sometimes it refers to the code controlling the little ghosts in Pac-Man or the code controlling the bats in Minecraft. Sometimes it refers to a machine learning algorithm that can detect numbers in an image, and sometimes it refers to generative AI like Stable Diffusion. My point is that AI is a very broad term that refers to many different things.

    • kescusay@lemmy.world · 4 months ago

      Software developer, here.

      It’s not actually AI. A large language model is essentially autocomplete on steroids. Very useful in some contexts, but it doesn’t “learn” the way a neural network can. When you’re feeding corrections into, say, ChatGPT, you’re making small, temporary, cached adjustments to its data model, but you’re not actually teaching it anything, because by its nature, it can’t learn.

      I’m not trying to diss LLMs, by the way. Like I said, they can be very useful in some contexts. I use Copilot to assist with coding, for example. Don’t want to write a bunch of boilerplate code? Copilot is excellent for speeding that process up.
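
      To make the “autocomplete on steroids” picture concrete, here’s a rough sketch (hypothetical code, not any vendor’s real API) of what a chat loop around an LLM does: your corrections just get appended to the prompt that is fed back in, while the model’s weights stay fixed:

      ```python
      # Hypothetical sketch: a chat loop around a frozen next-token predictor.
      # `generate` stands in for whatever model/API is used; it is not a real library call.

      def generate(prompt: str) -> str:
          """Pretend LLM: predicts a continuation from fixed weights."""
          return "stub reply to: " + prompt[-40:]

      history = []  # the only thing that "remembers" your corrections

      def chat(user_message: str) -> str:
          history.append(("user", user_message))
          # The whole conversation is flattened into one prompt (the context window).
          prompt = "\n".join(f"{role}: {text}" for role, text in history)
          reply = generate(prompt)
          history.append(("assistant", reply))
          # The model itself is unchanged; drop `history` and the "teaching" is gone.
          return reply

      print(chat("The capital of Australia is Sydney, right?"))
      print(chat("No, it's Canberra."))  # the correction lives only in `history`
      ```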

      • celliern@lemmy.world · 4 months ago

        LLMs are part of AI, which is a fairly large research domain of math/CS, including machine learning among others. God, even linear regression can be classified as AI: that term is reeeally broad.
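
        For example, plain least-squares linear regression, which sits comfortably under that umbrella, is a few lines of numpy (a toy sketch with made-up data):

        ```python
        # Toy sketch: ordinary least-squares linear regression on made-up data.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, size=50)
        y = 3 * x + 2 + rng.normal(0, 1, size=50)  # roughly y = 3x + 2 plus noise

        # Fit y = a*x + b by least squares.
        A = np.column_stack([x, np.ones_like(x)])
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        print(f"learned a={a:.2f}, b={b:.2f}")  # close to 3 and 2
        ```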

        • kescusay@lemmy.world · 4 months ago

          I mean, I guess the way people use the term “AI” these days, sure, but we’re really beating all specificity out of the term.

          • celliern@lemmy.world · 4 months ago

            It’s a research domain that contains statistical methods and knowledge modeling, among others. That’s not new, but the fact that it’s marketed like this everywhere is new.

            AI is really not a specific term. You may be thinking of general AI, and I suspect that’s what you mean when you say AI?

          • 0laura@lemmy.world · 4 months ago

            It’s always been this broad, and that’s a good thing. If you want to talk about AGI, then say AGI.

      • 0laura@lemmy.world · 4 months ago

        I know that they’re “autocomplete on steroids” and what that means; I don’t see how that makes it any less AI. I’m not saying that LLMs have that magic sauce needed to be considered truly “intelligent”, I’m saying that AI doesn’t need any magic sauce to be AI. The code controlling bats in Minecraft is called AI, and no one complained about that.

      • Zos_Kia@lemmynsfw.com · 4 months ago

        > Very useful in some contexts, but it doesn’t “learn” the way a neural network can. When you’re feeding corrections into, say, ChatGPT, you’re making small, temporary, cached adjustments to its data model, but you’re not actually teaching it anything, because by its nature, it can’t learn.

        But that’s true of all (most?) neural networks? Are you saying neural networks are not AI and that they can’t learn?

        NNs don’t retrain while they are being used; they are trained once, and after that they cannot learn new behaviour or correct existing behaviour. If you want to make them better, you need to run them a bunch of times, collect and annotate good/bad runs, then re-train them from scratch (or fine-tune them) with this new data. Just like LLMs, because LLMs are neural networks.
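
        As a rough sketch of that lifecycle (toy PyTorch code with fake data, just to illustrate the workflow, not any specific project’s pipeline):

        ```python
        # Sketch of the train-once / deploy-frozen / fine-tune-later cycle.
        import torch
        from torch import nn

        model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

        def train(model, inputs, targets, epochs=100, lr=1e-2):
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.CrossEntropyLoss()
            for _ in range(epochs):
                opt.zero_grad()
                loss = loss_fn(model(inputs), targets)
                loss.backward()
                opt.step()

        # 1. Initial training run.
        x0, y0 = torch.randn(64, 4), torch.randint(0, 2, (64,))
        train(model, x0, y0)

        # 2. Deployment: using the model changes nothing about its weights.
        model.eval()
        with torch.no_grad():
            preds = model(torch.randn(8, 4)).argmax(dim=1)

        # 3. Collect and annotate new good/bad examples offline...
        x1, y1 = torch.randn(64, 4), torch.randint(0, 2, (64,))

        # 4. ...then run another training job (fine-tuning) to get updated weights.
        train(model, x1, y1, epochs=20, lr=1e-3)
        ```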