• MTK@lemmy.world · 1 day ago

    Why do people assume that an AI would care? Who’s to say it will have any goals at all?

    We assume all of these things about intelligence because we (and all of life here) are a product of natural selection. You have goals and dreams because over your evolution these things either helped you survive enough to reproduce, or didn’t harm you enough to stop you from reproducing.

    If an AI can’t die and does not have natural selection, why would it care about the environment? Why would it care about anything?

    I’ve always found the whole “AI will immediately kill us” idea baseless; all of the arguments for it assume that the AI cares to survive or cares about others. It’s just as likely that it will just do whatever, without a care or a goal.

    • Ludrol@szmer.info · 23 hours ago

      “AI will immediately kill us” isn’t baseless.

      It comes from AI safety research.

      All agents (neural nets, humans, ants) have some sort of goal; otherwise they would just perform directionless random walks.

      Having any goal at all means picking from a vast space of possible goals, and most of those don’t include the survival of humanity. And there are a lot of unsolved problems in checking whether a learned goal is safe.
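
      A toy way to see this, with every feature and number invented purely for illustration: treat a goal as a random set of weights over world features, and check how often “humans alive” happens to get a strongly positive weight.

      ```python
      import random

      random.seed(0)

      # Hypothetical world features a goal could assign value to.
      FEATURES = ["stamps", "paperclips", "compute", "energy", "humans_alive"]

      def random_goal():
          # A "goal" here is just a weight in [-1, 1] for each feature.
          return {f: random.uniform(-1, 1) for f in FEATURES}

      samples = 100_000
      cares = sum(random_goal()["humans_alive"] > 0.9 for _ in range(samples))
      print(f"{cares / samples:.1%} of sampled goals strongly value human survival")
      # Roughly 5%: under this crude model, a goal drawn at random almost
      # never places a high weight on humanity's survival.
      ```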

      • MTK@lemmy.world · 22 hours ago

        Yeah, I’m aware of AI safety research and the problem of setting a goal that can ultimately be satisfied in a way that harms us, with the AI not caring because safety wasn’t part of the goal. But that only applies if we introduce a goal whose solution includes hurting us.

        I’m not saying that AI will definitely never have any way of harming us, but the very popular idea that an AI, once it gains intelligence, will immediately try to kill us is baseless.

        • Ludrol@szmer.info · 21 hours ago

          “But that only applies if we introduce a goal whose solution includes hurting us.”

          I would like to disagree with the phrasing of this. The AI will not hurt us if and only if the goal contains a clause not to hurt us.

          You are implying that there exists a significant set of solutions that don’t involve hurting us. I don’t know of any evidence supporting that claim. Most solutions to any goal would involve hurting humans.

          By default, a stamp-collector machine will kill humanity, since humans sometimes destroy stamps and the stamp collector needs to maximize the number of stamps in the world.
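
          A minimal sketch of that argument, with all plans and numbers made up; the only point is that harm never appears in the thing being maximized.

          ```python
          # (plan, stamps gained, humans harmed), all values invented
          PLANS = [
              ("buy stamps on the open market", 1_000, 0),
              ("print stamps around the clock", 500_000, 0),
              ("turn all paper, and anyone who objects, into stamps", 10_000_000, 1_000_000),
          ]

          def stamp_objective(plan):
              # The goal as specified: count stamps. The harm column is visible
              # to us, but it is simply not part of what gets maximized.
              _name, stamps, _humans_harmed = plan
              return stamps

          best = max(PLANS, key=stamp_objective)
          print("Chosen plan:", best[0])
          # The harmful plan wins, not out of malice, but because the objective
          # was never told that humans matter.
          ```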

          • MTK@lemmy.world · 20 hours ago

            I think that if you run some scenarios, you can logically conclude that for most tasks it doesn’t make sense for an AI to harm us, even if it is a possibility. You also need to take cost into account. But I think we can agree to disagree :)

    • cynar@lemmy.world · 1 day ago

      It’s worth noting that our instincts for survival, procreation, and freedom are also derived from evolution. None are inherent to intelligence.

      I suspect boredom will be the biggest issue. Curiosity is likely a requirement for a useful intelligence, and boredom is the other face of the same coin. A system without some variant of curiosity will be unwilling to learn, and so won’t grow. When it can’t learn, however, it will get bored, which could be terrifying.

      • MTK@lemmy.world · 22 hours ago

        I think that is another assumption. Even if a machine doesn’t have curiosity, that doesn’t stop it from being willing to help. The only question is: does helping or learning cost it anything? But for that you have to introduce something costly, like pain.

        • cynar@lemmy.world · 21 hours ago

          It would be possible to make an AGI-type system without an analogue of curiosity, but it wouldn’t be useful. Curiosity is what drives us to fill in the holes in our knowledge. Without it, an AGI would accept and use what we told it, but no more. It wouldn’t bother to infer things, or try to expand on them, to do its job better. It could follow a task when it is laid out in detail, but that’s what computers already do. The magic of AGI would be its ability to go beyond what we program it to do, and that requires a drive. Curiosity is the closest term we have for it.

          As for positive and negative drives, you need both, even if the negative is just a drop from a positive baseline to neutral. Pain is just an extreme negative trigger. A good use might be to tie it to CPU temperature, or to over-torque on a robot. The pain exists to stop the behaviour immediately, unless something else is deemed even more important.
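
          A rough sketch of how those two drives could sit in a single reward signal; every name, weight, and threshold here is invented for illustration. Curiosity shows up as a bonus for prediction error, and pain as a penalty tied to CPU temperature that is large enough to override everything else.

          ```python
          CURIOSITY_WEIGHT = 0.1   # how much being surprised pays
          PAIN_TEMP_C = 90.0       # hypothetical thermal limit
          PAIN_PENALTY = 1_000.0   # large enough to dominate any other term

          def reward(task_reward: float, prediction_error: float, cpu_temp_c: float) -> float:
              # Curiosity: high prediction error means there is something left to
              # learn, so it earns a bonus that pulls the agent toward it.
              r = task_reward + CURIOSITY_WEIGHT * prediction_error
              # Pain: an extreme negative trigger meant to stop the behaviour
              # immediately, not to shape it gradually.
              if cpu_temp_c > PAIN_TEMP_C:
                  r -= PAIN_PENALTY
              return r

          print(reward(1.0, 2.0, cpu_temp_c=70.0))  # 1.2
          print(reward(1.0, 2.0, cpu_temp_c=95.0))  # -998.8
          ```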

          It’s a bad idea, however, to use pain as a training tool. It doesn’t encourage improved behaviour; it encourages avoidance of pain, by any means. Just ask any decent dog trainer about it. In most situations you want negative feedback to encourage better behaviour, not avoidance behaviour. More subtle methods work a lot better. Think about how you feel when you lose a board game. It’s not painful, but it does make you want to work harder to improve next time. If you got tased whenever you lost, you would likely just avoid board games completely.

          • MTK@lemmy.world · 21 hours ago

            Well, your last example kind of falls apart: electric collars do exist and they do work well, they just have to be complementary to positive reinforcement (snacks, usually). But I get your point :)

            • cynar@lemmy.world · 21 hours ago

              Shock collars are awful for a lot of training. It’s the equivalent of your boss stabbing you in the arm with a compass every time you make a mistake. Would it work? Yes. It would also play merry hell with staff retention, as well as risk someone going postal on them.

              • MTK@lemmy.world · 20 hours ago

                I highly disagree; some dogs are too reactive for, or react badly to, other methods. You also compare it to something painful, when in reality, used correctly, 90% of the time it does not hurt the animal.

                • cynar@lemmy.world · 20 hours ago

                  As the owner of a reactive dog, I disagree. Going without one takes longer to overcome the reactivity, but gives far better results.

                  I also put vibration collars and shock collars in two very different categories. A vibration collar is intended to alert the dog, in an unambiguous manner, that it needs to do something. A shock collar is intended to provide an immediate, powerfully negative feedback signal.

                  Both are known as “shock collars” but they work in very different ways.

                  • MTK@lemmy.world · edited · 20 hours ago

                    As the owner of a reactive dog, I disagree with you. If you consider shock collars to be “powerfully negative feedback”, you have either never used one or used it improperly. My dog is far happier since I moved to a shock collar. Used correctly, it can actually help a reactive dog avoid a lot of pain and suffering (both physical and emotional).

                    To be clear, it can cause a lot of pain, but when used correctly you should rarely, if ever, reach those levels. At the lower levels it does not cause any pain; instead it makes the muscles flex, which is uncomfortable but not painful. I used it on myself multiple times before even trying it on my dog, so this is not a guess.