• MajorHavoc@programming.dev · 12 days ago

    I love that someone even bothered with a study.

    (Edit: To be clear, I am both amused, and also genuinely appreciate that the science is being done.)

    • jeansburger@lemmy.world · 12 days ago

      Confirmation of anecdotes or gut feelings is still science. At some point you need data rather than experience to help people and organizations change their perception (see: most big tech companies lighting billions of dollars on fire on generative AI).

      • GoodEye8@lemm.ee · 12 days ago

        Not to mention that, based on the numbers in the article, I imagine the AI might actually do better than the average human would. It wasn’t as much of a “duh” as I thought it would be.

      • barsoap@lemm.ee · 11 days ago

        You also need that stuff to shut up pseudo-sceptics. Random example: posture having an influence on mood. There were actually psychologists denying that. The reason for that kind of attitude is usually either a) “if there’s no study on some effect, then it doesn’t exist” (literature realism), or b) “some now-debunked theory of the past implied it” (incorrectness by association). Just because you’re an atheist doesn’t mean you should discount Catholic opinions on beer brewing; they produce some good shit. And just because the alchemists talked about transmutation, and the chemists made fun of it to distance themselves from their own history, doesn’t mean some nuclear physicist wasn’t about to rain on their parade: yes, you can turn lead into gold.

    • TheGrandNagus@lemmy.world · 12 days ago

      For many hundreds of years, blood-letting was an obvious thing to do. As was giving people leeches for medical ailments. And ingesting mercury. We thought having sex with virgins would cure STDs. We thought doses of radiation were good for us. And tobacco. We thought it was obvious that the sun revolved around the Earth.

      It is enormously important to scientifically confirm things, even if they do seem obvious.

      • Flocklesscrow@lemm.ee · 11 days ago

        Uhh, we didn’t “think” a lot of those things; you’re describing marketing that some company disseminated in order to shill its products. And many, many people paid the price in misery, or worse.

  • Grandwolf319@sh.itjust.works · 12 days ago

    Of course not. The whole point of disinformation is that it sounds correct; that’s AI’s bread and butter!

    • Semperverus@lemmy.world · 11 days ago

      I don’t think the developers who came up with the processes for LLMs were really targeting that use case. It just so happens that the limitations of LLMs also lend power to disinformation. LLMs don’t really know how to say “I don’t know”.

  • BertramDitore@lemm.ee · 12 days ago

    Think about it this way: remember those upside-down answer keys in the back of your grade school math textbook? Now imagine if those answer keys included just as many incorrect answers as correct ones. How would you know whether you were right or wrong without asking your teacher? Until an LLM can guarantee a right answer, and back it up with real citations, it will continue to do more harm than good.

    • tehn00bi@lemmy.world · 9 days ago

      Certainly plausible. I’m sure they’re trying to figure out how to get it to understand relationships between pieces of information, now that it’s pretty good at statistical inference.

  • Zerlyna@lemmy.world · 12 days ago

    Supposedly ChatGPT had an update in September, but it still doesn’t agree that Trump was found guilty in May on 34 counts. When I give it sources it says OK, but it doesn’t retain the corrected information.

    • Jrockwar@feddit.uk · 11 days ago

      That’s because it doesn’t learn; it’s a snapshot of its training data, frozen in time.

      I like Perplexity (a lot) because instead of using its data to answer your question, it uses your data to craft web searches, gather content, and summarise it into a response (see the sketch at the end of this comment). It’s like a student who uses their knowledge to look for the answer in the books, instead of trying to answer from memory whether they know the answer or not.

      It’s not perfect, and it does hallucinate from time to time, but that’s rare enough that I use it way more than regular web searches at this point. I can throw quite obscure questions at it and it will dig up the answer for me.

      As someone with ADHD and a somewhat compulsive need to understand random facts (e.g. “I need to know right now how the motor speed in a coffee grinder affects the taste of the coffee”), this is an absolute godsend.

      I’m not affiliated or anything, and if anything better comes my way I’ll be happy to ditch it. But for now I really enjoy it.
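
      For the curious, here’s a minimal sketch of that search-then-summarise loop in Python. `web_search` and `llm_complete` are hypothetical stand-ins for a real search API and a real model client, not Perplexity’s actual internals:

      ```python
      # Hypothetical stand-ins: swap in any real search API and LLM client.
      def web_search(query: str) -> list[str]:
          """Return text snippets from a live web search (hypothetical)."""
          raise NotImplementedError("plug in a real search API")


      def llm_complete(prompt: str) -> str:
          """Return a completion from an LLM (hypothetical)."""
          raise NotImplementedError("plug in a real model client")


      def answer_with_sources(question: str) -> str:
          # 1. Use the model to turn the question into search queries,
          #    rather than answering from its frozen training data.
          queries = llm_complete(
              f"Write three web search queries to answer: {question}"
          ).splitlines()

          # 2. Gather content from the live web.
          snippets = [s for q in queries for s in web_search(q)]

          # 3. Summarise only the gathered content into a grounded answer.
          sources = "\n\n".join(snippets)
          return llm_complete(
              "Answer using ONLY the sources below, and cite them. "
              "If they don't contain the answer, say so.\n\n"
              f"Sources:\n{sources}\n\nQuestion: {question}"
          )
      ```

      The point is the order of operations: the model’s knowledge is only used to decide what to look up and to compress what comes back, instead of being the source of the answer itself.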

      • Nougat@fedia.io · 11 days ago

        > …it uses your data to craft web searches, gather content, and summarise it into a response.

        GPT-4o does this, too.

        • Jrockwar@feddit.uk · 11 days ago

          Then that might not be the model the previous poster is talking about, because I have to press Perplexity really hard to get it to hallucinate. Search-augmented LLMs are pretty neat.

      • Zerlyna@lemmy.world · 11 days ago

        Yes, but I’m saying that snapshot from September is incorrect. Why is that? Is it rigged?

  • Pennomi@lemmy.world · 12 days ago

    I think the next step in AI is learning how to control and direct the speech, rather than just making computers talk.

    They are surprisingly good for being a mere statistical copycat of words on the internet. Whatever the second-tier innovation is that jumps AI from pattern matching into true reasoning is going to be wild.

  • Hackworth@lemmy.world · 11 days ago

    In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI will repeat misinformation 18% of the time.

    70% of the instances where AI repeated falsehoods were in response to bad actor prompts, as opposed to leading prompts or innocent user prompts.
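
    (Assuming the 70% figure applies to that 18%, roughly 0.18 × 0.70 ≈ 12.6% of all test prompts produced a falsehood repeated for a bad actor, and about 5.4% one repeated in response to a leading or innocent prompt.)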