The Singularity is a hypothetical future event where technological growth becomes uncontrollable and irreversible, leading to unpredictable transformations in our reality[1]. It’s often associated with the point at which artificial intelligence surpasses human intelligence, potentially causing radical changes in society. I’d like to know your thoughts on what the Singularity’s endgame will be: Utopia, Dystopia, Collapse, or Extinction, and why?

Citations:

  1. https://www.techtarget.com/searchenterpriseai/definition/Singularity-the

  • mrmanager@lemmy.today · 1 year ago

    Well, let me put it this way… Enjoy your days now, not later. :)

    And prepare to move to a country where tech is not very widespread. Try to gather money so you can move if you want to.

    Humans can be really nice on an individual level, but society is run by evil people. I think it has always been that way. Good people don’t want any part of the power struggles and backstabbing, so they forfeit power to the people who are into that. By design, the system rewards evil people. And they are also the ones who really care about money, status, and so on.

    This means humanity is fucked. It’s pretty simple. The only way out is if consciousness somehow changes in everybody at once and everyone suddenly wants to do good instead of evil. Then we have a good chance, and the tech can help build a paradise here for everyone.

    But that won’t happen unless good aliens somehow transform our minds into something completely different.

  • bloodfart@lemmy.ml · 1 year ago

    There will not be a singularity. Global capitalism will absolutely collapse, and on its way down it will become more dystopian. Humanity isn’t going extinct.

    E: the cause of this process is not human nature. Anyone who tells you it is has simply failed to study history. We can have a utopia, but global capital has to collapse first to make space for it.

  • axtualdave@lemmy.world · 1 year ago

    In the short term, a series of collapses as we draw ever closer to that singularity. There are a great many constraints on our ability to grow while on Earth, and it’s proving difficult to get off the planet by any reasonable method with our current technology. I suspect we’ll need to fall down and rebuild a couple of times before we can reliably spread to other planets, or even simply exist in orbit.

    Once we get up there, though, and we’re no longer constrained by Earth’s resource limits, we’ll grow significantly. I suspect we’ll move toward a machine-based society, both in automation and robotics, but also in integrating technology into our bodies.

    At some point, someone is going to figure out how to do that mind-to-machine transfer, and we’ll diverge as a species: the organic humans, and the composite AI/machine-based humanity.

    Knowing how stupid we are, though, we’ll probably end up becoming the Borg.

  • Adderbox76@lemmy.ca · 1 year ago

    All of the above.

    Humanity is, at its core, motivated by self-interest. The singularity will be harnessed by those with the power and means to do so, while those without will either suffer or die.

    The powerful few will adapt to the singularity, using it to craft their own utopia. The masses, without access to the same power the upper class enjoys, will fall into a dystopia, while even more marginalized strata of society go extinct completely unnoticed.

  • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

    According to Connor Leahy, companies are currently engaged in a race to be the first to achieve AGI, prioritizing speed over safety, as mentioned in his video (source). I firmly believe that unless significant changes occur, we are headed towards extinction. We may succeed in creating a highly powerful AGI, but it might disregard our existence and eventually destroy us, not out of malicious intent, but simply because we would be in its way, in the same way humans don’t consider ants when constructing a road. I wish more people were discussing this, because in a few years it will be too late.

  • Candelestine@lemmy.world · 1 year ago

    Utopia or extinction, depending on the perspective of the person asking. Homo sapiens cannot exist forever; that would require a halt to DNA mutation and biological adaptation. Will “we” still be here even after we’ve begun to require a different classification term for ourselves, just for scientific clarity?

  • benjithedog@lemmy.world · 1 year ago

    I believe collapse is inevitable. More interesting is what comes after. If we reach true AI before the collapse, it could go either way afterwards, but I’m hoping people will create a better society from the ashes.

    At least for the time we’ll have left, because AI or no AI, the climate won’t be getting fixed any time soon.

  • erogenouswarzone@lemmy.ml · 1 year ago

    I’ll do you one better: what about when our AI meets another AI?

    Our existence is based on death and war. There is a lot of evidence to suggest we killed off all the other human-like species, such as the Neanderthals.

    And that is the reason we progressed to the point where we have the world and society we know today, while all the other species are just fossils.

    We were the most aggressive and bloodthirsty of all the aggressive and bloodthirsty alternatives, and even though we have domesticated our world, we have only begun to domesticate ourselves.

    Think about how we have seen genocides even in our own time.

    Our AI will hopefully pacify these instincts, though most likely not without a fight from certain parties that consider their right to war absolute.

    Like the One Ring, how much of that aggressiveness will get poured into our AI?

    What if our AI, in the exploration of space, encounters another AI? Will it be like the early humanoid species, where we either wipe out the other or get wiped out ourselves?

    Will our AIs have completely abstracted away all the senseless violence?

    If you want a really depressing answer, read the second book of the Three-Body Problem trilogy: The Dark Forest.

  • queermunist@lemmy.world · 1 year ago

    There are too many structural problems with the extractive economy for our current society to survive. As resources dwindle and climate change gets worse, the smaller countries will start to collapse and entire regions will go to war over resources. Billions of humans will be forced to migrate out of uninhabitable zones around the globe, and they’ll do anything to escape. The ones that can’t escape will eat each other (metaphorically and literally).

    There won’t be a singularity. There probably won’t even be a global internet in 30 years.

  • 5 Card Draw@lemmy.fmhy.ml · 1 year ago

    Almost every comment I’ve seen regards the future as hopeless, and I’m going to largely chalk that up to the postmodern/realist consciousness of our society in this time period.

    I think the future will be a utopia, and there isn’t a long-term reason (over centuries- or millennia-long developments, I mean) to think otherwise. The idea of utopia has pushed civilization to confront power structures and create new ones, and to rethink what was impossible or too difficult to accomplish. The many rights, freedoms, and ideas that many around the world take for granted today began as people envisioning a utopia and trying to make it happen. These ideas can’t be done away with, as Alexis de Tocqueville saw.

    Right now there are problems, for sure, and I personally think liberty and equality are only a parody of utopia at this point, but that will change over a long time.

    Human civilization is only about 6,000 years old! We’re still working with the brains of primitive humans, and we aren’t even toddlers yet in the grand lifespan of Earth. I think people tend to forget that sometimes.

    We’ll get to a better place, and our consciousness is always changing to confront the problems we face today (biosphere collapse, resource hoarding, infighting, etc.).

    Democracy took centuries to develop coherently, and even then it failed MANY times at first. But look at it now.

    • OutOfMemory@vlemmy.net · 1 year ago

      I think the Fermi paradox would suggest otherwise: if civilizations reliably succeeded in the long term, we would have seen evidence of one by now.

  • redballooon@lemm.ee · 1 year ago

    The singularity already happened. We have corporations that are unregulatable. They create their own rules and use those rules to grow further, at the cost of all our resources. AI will be used by those corporations to grow further, but it won’t be a game changer: we’re already living in, and expanding, the dystopia.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 1 year ago

    I would imagine that the biological phase for intelligent life is rather short, and I expect that in the long run intelligence will transition to post biological substrates.

    I’d argue that the inventions of language and writing are the landmark moments in human development. Before language was invented, the only way information could be passed down from ancestors to offspring was via mutations in our DNA. If an individual learned some new idea, it would be lost with them when they died. Language allowed humans to communicate ideas to future generations and start accumulating knowledge beyond what a single individual could hold in their head. Writing made this process even more efficient.

    When language was invented, humans started creating technology, and in the blink of an eye on a cosmological scale we went from living in caves to visiting space in our rocket ships. It’s worth taking a moment to really appreciate just how fast our technology evolved once we were able to start accumulating knowledge using language and writing.

    Our society today is utterly and completely unrecognizable to somebody from even 100 years ago. If we don’t go extinct, I imagine that in another thousand years future humans, or whatever succeeds us, will be completely alien to us as well. It’s pretty hard to predict what that would look like given where we are now.

    With that caveat, I think we can make some assumptions, such as that future intelligent life will likely exist in virtual environments running on computing substrates, because such environments could operate at much faster speeds than our meat brains, and what we consider real time would seem to pass at a geological scale from that perspective. Given that, I can’t see why intelligences living in such environments would pay much attention to the physical world.

    I also think that we’re likely to develop human-style AIs within a century. It’s hard to predict such things, but I don’t think there’s anything magic about what our brains are doing. There are a few different paths towards producing a human-style artificial intelligence.

    The simplest approach could be to simply evolve one. Given a rich virtual environment, we could run an evolutionary simulation that selects for intelligent behaviors. This approach doesn’t require us to understand how intelligence works; we just have to create a set of conditions that select for the types of intelligent behaviors we’re looking for. It’s a brute-force approach to creating AGI, as in the sketch below.
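
    To make the brute-force idea concrete, here’s a toy sketch in Python. The “rich virtual environment” is reduced to a made-up bit-string fitness target, so every name and constant below is illustrative only, nothing close to an actual AGI setup:

    ```python
    # Toy evolutionary loop: mutate a population, keep the fittest, repeat.
    # The fitness function stands in for the "rich virtual environment";
    # here it just rewards matching an arbitrary target bit string.
    import random

    GENOME_LEN = 32
    TARGET = [1] * GENOME_LEN  # placeholder for "intelligent behavior"

    def fitness(genome):
        # How well this agent's behavior matches what the environment selects for.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.02):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(100)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:20]  # selection pressure
        population = [mutate(random.choice(survivors)) for _ in range(100)]

    best = max(population, key=fitness)
    print(f"best fitness after 200 generations: {fitness(best)}/{GENOME_LEN}")
    ```

    The point is that nothing in the loop knows how intelligence works; the selection conditions do all the work, which is exactly what makes this approach brute force.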

    Another approach could be to map out the human brain down to the neuron level and create a physics simulation that emulates the brain. We aren’t close to being able to do that technologically yet, but who knows what will happen in the coming decades and centuries. A minimal sketch of the idea follows.
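
    As a minimal sketch of what that physics simulation means at the smallest possible scale, here’s a single leaky integrate-and-fire neuron stepped through time. All the constants are illustrative, and a real emulation would run this kind of loop over tens of billions of mapped neurons and their connections:

    ```python
    # One leaky integrate-and-fire neuron: the membrane potential decays
    # toward rest and is driven up by input current; crossing the threshold
    # produces a spike and a reset. Whole-brain emulation would be this kind
    # of time-stepped loop scaled up across a mapped connectome.
    def simulate_neuron(input_current, steps=1000, dt=0.1,
                        tau=10.0, threshold=1.0, reset=0.0):
        v = reset          # membrane potential
        spike_times = []   # in ms, given dt in ms
        for step in range(steps):
            v += dt * (-v / tau + input_current)  # leaky integration
            if v >= threshold:
                spike_times.append(step * dt)     # record the spike
                v = reset                         # reset after firing
        return spike_times

    spikes = simulate_neuron(input_current=0.15)
    print(f"{len(spikes)} spikes in {1000 * 0.1:.0f} ms")
    ```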

    Finally, we might be able to figure out the algorithms that mimic what our brains do, and build AIs based on those. This could be the most efficient way to build an AI, since we’d understand how and why it works, which would facilitate rapid optimization and improvement.

    My view is that if we made an AI with human-style consciousness, then it should be treated as a person and have the same rights as a biological human. While we could never prove that an AI has internal experience and qualia, I think that morally we have to err on the side of trusting the AI that claims to have consciousness and self-awareness.

    I expect that post-biologicals will be the ones to go out and explore the universe. Meat did not evolve to live in space, because we’re adapted to gravity wells. An artificial life form could be engineered to thrive in space without ever needing to visit planets. This is the kind of life that’s most likely to be prolific in space.

    One of the best sci-fi novels I’ve read on the subject is Diaspora by Greg Egan. It seems like a plausible scenario for the future of humanity.