Supposedly ChatGPT had an update in September, but it doesn't agree that Trump was found guilty in May on 34 counts. When I give it sources it says OK, but it doesn't retain the corrected information.
That's because it doesn't learn; it's a snapshot of its training data, frozen in time.
I like Perplexity (a lot) because instead of using its training data to answer your question, it uses your question to craft web searches, gather content, and summarise it into a response (roughly the loop sketched at the end of this comment). It's like a student who uses their knowledge to look for the answer in the books, instead of trying to answer from memory whether they actually know the answer or not.
It is not perfect, and it does hallucinate from time to time, but rarely enough that I use it far more than regular web searches at this point. I can throw quite obscure questions at it and it will dig up the answer for me.
As someone with ADHD and a somewhat compulsive need to understand random facts (e.g. "I need to know right now how the motor speed in a coffee grinder affects the taste of the coffee"), this is an absolute godsend.
I’m not affiliated or anything, and if anything better comes my way I’ll be happy to ditch it. But for now I really enjoy it.
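For anyone curious, that "use the question to search, then summarise" loop is roughly this shape. This is a minimal sketch, not Perplexity's actual pipeline; web_search, fetch_page, and llm_complete are hypothetical stand-ins for whatever search API and model you'd plug in:

    # A minimal sketch of the search-augmented pattern described above.
    # All three helpers are hypothetical stand-ins, not a real API.

    def web_search(query: str, k: int = 5) -> list[str]:
        """Hypothetical search API: return the top-k result URLs."""
        raise NotImplementedError("plug in a real search API")

    def fetch_page(url: str) -> str:
        """Hypothetical fetcher: return a page's text content."""
        raise NotImplementedError("plug in a real HTTP client")

    def llm_complete(prompt: str) -> str:
        """Hypothetical LLM call: return the model's completion."""
        raise NotImplementedError("plug in a real model API")

    def answer(question: str) -> str:
        # 1. Use the question to craft a web search instead of
        #    answering directly from frozen training data.
        query = llm_complete(f"Write a web search query for: {question}")
        # 2. Gather content from the top results.
        pages = [fetch_page(url) for url in web_search(query)]
        # 3. Summarise the gathered content into a grounded response.
        context = "\n\n".join(pages)
        return llm_complete(
            f"Using only the sources below, answer: {question}\n\n{context}"
        )

The point of the design is that the model's frozen knowledge only has to be good enough to write a search query and summarise sources, not to recall the fact itself.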
GPT-4o does this, too.
Then that might not be the model the previous poster is talking about, because I have to press Perplexity really hard to get it to hallucinate. Search-augmented LLMs are pretty neat.
Yes, but I'm saying that the snapshot from September is incorrect. Why is that? Is it rigged?
My experience is the same.