Is that why all the executives and directors at my giant tech company are pushing AI? Fuckwits…
This does not sound good for those people. Writing is a way of thinking. AI writing assistants are competitive cognitive artifacts. People who use AI to write most of their written communication will get worse at thinking through writing.
Hey, this thing you’re doing is called gatekeeping.
We get multiple versions of this every time a new tech comes along.
People defending typewriters. Or learning Latin. Or something better than a quill and jar of ink. Or paper being affordable.
Just. Stop.
There is published research that using AI makes people worse at critical thinking. It’s not gatekeeping, it’s a legitimate concern.
I mean, books did make us worse at memorizing. I think it’s give and take. There are some things that are good to cognitively offload to an AI.
I do agree that there are tasks that are good to offload to AI. I don’t believe that reading and writing should be. AI can be a great tool. Ironically, since you mentioned memorization, I can’t possibly retain 100% of the information I’ve learned in my career, so using LLMs to point me to the correct documentation or to create some boilerplate has greatly improved my productivity.
I’ve used AI as a conversational tool to assist in finding legitimate information to answer search queries (not just accepting its output at face value) and to generate boilerplate code (not just treating it as another Stack Overflow and copying and pasting the code it gives you without understanding it). The challenge is that if you try to replace 100% of the task of communication, research, or coding, you eventually lose those skills. And I worry for Jrs who are just building those skills but have totally relied on AI to do the work that’s supposed to teach them those skills.
You seriously need to look up gatekeeping because that’s not what it means at all.
Also, you are making stuff up. No one has ever been against learning Latin; it has always been seen as something a sophisticated gentleman knows, literally the opposite of whatever random nonsense you’re claiming right now.
Most people don’t need to think, they need to write. And AI helps them in that.
If they can’t think or write on their own then what is their value? Why not just go straight to the LLM and cut out the middle man?
Those people who don’t want to think need to be doing manual labor that doesn’t require thought.
They prefer lawmaking.
Most people don’t need to think
No they just don’t do it. The world would be in a much better position if people engaged their brains occasionally.
People bad at math use calculators. People with bad handwriting prefer to type. Weak people use levers. Slow people rely more on wheels. It’s like we’re a bunch of tool-using primates or something.
In all of those examples, the user knows exactly what they want and the tool is a way to expedite or enable getting there. This isn’t quite the same thing.
If we were talking about a tool like augmented audio-to-text, I’d agree. I’d probably even agree if it was an AI-proofreader style model where you feed it what you have to make sure it’s generally comprehensible.
Writing as a skill is about solidifying and conveying thoughts so they can be understood. The fact that it turns into text is kind of irrelevant. Hand waving that process is just rubber stamping something you kinda-sorta started the process of maybe thinking about.
I’m not really sure what you mean. They are not perfect, and in fact they will usually reduce the quality of output for a skilled writer, but half of the adults in the US can’t read and write at a sixth-grade level, and LLMs are greatly improving their ability to solidify and convey their thoughts in a more understandable way.
LLMs work by extrapolation; they can’t output anything better than the context you give them. They’re used in completely inappropriate situations because they’re dead easy and give very digestible content.
Your brain is the only thing in the universe that knows the context of what you’re writing and why. At a sixth grade level, you could technically describe almost anything but it would be clunky and hard to read. But you don’t need an LLM to fix that.
We’ve had tools for years that help with the technical details of writing (basic grammar, punctuation, and spelling). There are also already tools to help with phrasing and specifying a concept (“hey Google, define [X]” or “what’s the word for when…”).
This is more time consuming than an LLM, but guarantees that what you write is exactly what you intend to communicate. As a bonus, your reading comprehension gets better. You might remember that definition of [X] when you read it.
If you have access to those tools but can’t/won’t use them then you’ll never be able to effectively write. There’s no magic substitute for literacy.
An AI can produce content that is higher quality than the prompts it is given, particularly for formulaic tasks. I do agree that it would be nice if everyone were more educated, but a large portion of the population will never get there. If simply denying them AI were going to result in a blossoming of self-education, it would have already happened by now.
It can’t ever accurately convey any more information than you give it, it just guesses details to fill in. If you’re doing something formulaic, then it guesses fairly accurately. But if you tell it “write a book report on Romeo and Juliet”, it can only fill in generic details on what people generally say about the play; it sounds genuine but can’t extract your thoughts.
Not to get too deep into the politics of it but there’s no reason most people couldn’t get there if we invested in their core education. People just work with what they’re given, it’s not a personal failure if they weren’t taught these skills or don’t have access to ways to improve them.
And not everyone has to be hyper-literate, if daily life can be navigated at a 6th grade level that’s perfectly fine. Getting there isn’t an insurmountable task, especially if you flex those cognitive muscles more. The main issue is that current AI doesn’t improve these skills, it atrophies them.
It doesn’t push back, use logical reasoning, or seek context. It’s specifically made to be quick and easy, the same as fast food. We’ll be facing the intellectual equivalent of the diabetes epidemic if it gets widespread use.
It sounds like you are talking about use in education then, which is a different issue altogether.
You can and should set your AI to push back against poor reasoning and unsupported claims. They aren’t very smart, but they will try.
I mean it’s the same use; it’s all literacy. It’s about how much you depend on it and don’t use your own brain. It might be for a mindless email today, but in 20 years the next generation can’t read the news without running it through an LLM. They have no choice but to accept whatever it says because they never develop the skills to challenge it, kind of like simplifying things for a toddler.
The models can never be totally fixed, the underlying technology isn’t built for that. It doesn’t have “knowledge” or “reasoning” at all. It approximates it by weighing your input against a model of how those words connect together and choosing a slightly random extension of them. Depending on the initial conditions, it might even give you a different answer for each run.
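As a toy sketch of what that “slightly random extension” step looks like, here is temperature-scaled sampling over candidate next tokens. The tokens and scores below are invented for illustration and don’t come from any real model:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Pick the next token by temperature-scaled softmax sampling.

    Higher temperature flattens the distribution (more randomness);
    near zero it approaches always picking the most likely token.
    """
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(logits, probs):
        cumulative += p
        if r < cumulative:
            return token
    return list(logits)[-1]  # float round-off fallback

# Made-up scores for the word after "The cat sat on the":
toy_logits = {"mat": 2.0, "floor": 1.0, "moon": -1.0}
print(sample_next_token(toy_logits))  # different runs can print different words
```

This is why two runs with the same prompt can produce different answers: the choice at each step is drawn from a distribution, not looked up from stored knowledge.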
Because they’re not actually using the AI that way, to support them in their writing endeavors, they’re just having the AI do the writing task for them.
A calculator doesn’t do the understanding for you; it just does the calculation. You still need to understand what you’re asking the calculator to do. If you want to calculate compound interest, you still need to understand the concepts behind compound interest in order to put the right calculations into the calculator.
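To make the calculator example concrete: the final balance follows the standard formula A = P(1 + r/n)^(nt), and the tool can only evaluate it once you know what each symbol means. The numbers below are just an illustration:

```python
def compound_interest(principal, annual_rate, compounds_per_year, years):
    """Final balance A = P * (1 + r/n) ** (n * t).

    The calculator only evaluates the formula; knowing that r is the
    annual rate and n the compounding frequency is on you.
    """
    return principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)

# $1,000 at 5% annual interest, compounded monthly, for 10 years:
print(round(compound_interest(1000, 0.05, 12, 10), 2))  # → 1647.01
```

Plugging 5 in for `annual_rate` instead of 0.05 gives a wildly wrong answer with no error message, which is exactly the understanding the tool can’t supply.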
I don’t really think it’s fair to expect the barely literate to have writing endeavours. They are just trying to communicate without embarrassing themselves.
Bad at counting is not the same as bad at math. I’d rather have people who are bad at math use their hands to count.
People with bad handwriting are usually even more challenged to type with bullshit modern keyboards. I’m one such (I like my handwriting when I have time and mood, but that’s not the usual situation).
OK, I get your point; it’s just that these analogies I gave sound good for LLMs. I’ve yet to meet a person who’d really use them with good results. Except for me using porn chatbots.
‘Researchers surprised that people who don’t know how to do a thing cheat and use half-baked tools to do the thing for them.’
I’m surprised that researchers are surprised at all.
Surprised? Just yesterday I got banned from one TG group.
I commented under a post. Its author, ignoring everything except the first sentence, wrote that I seemed snobbish and condescending, so could I please change my writing style. I explained why I won’t change my writing style, but made a big effort to be friendly and substantiated: first, they could specify what should be replaced with what, and second, not when that impedes meaning.
They answered with a ChatGPT response that was gibberish (delivered with such emotion as if it were obvious authority). I answered with a cool article called “GPT in 500 lines” explaining the basics of how the technology works, and also why that gibberish was wrong, in detail. They and a few others ignored everything I said and kept repeating their opinion. Then I wrote one comment, with a tone a bit closer to theirs, noting that they use long smart words incorrectly and don’t seem to know how logic works (except for the word itself). Then I got banned.
The scariest thing is that this happened in a TG group for autistic people, supposedly those least likely to behave this way. I sometimes forget that autistic people can be dumb, or try to replace intellect with intrigue.
So I’m not surprised. Uneducated people would find something to copy-paste before (“look, that’s my opinion written by someone on the Internet, this means I’m right, I won, hahaha”), and now they ask GPT bots for responses.
Even before the AI fad, services like Grammarly were surprising to me. So you’re marketing to non-readers and people who want to sound better in written communication… without learning to write better? Huh. My current job involves very little formal writing, yet I still think learning how to communicate effectively is absolutely vital for any job, or at least for getting a better one…
I’ve also seen a video discussing how Grammarly often makes mistakes because it doesn’t understand context and nuance the way a human would.
Grammarly is a keylogger; I’d look into alternatives.
vital
Funny that you emphasized this word, which has become such a tell of ChatGPT (along with “delve” and “crucial”).
How are you such an expert on ChatGPT? Sus
Are you… accusing me of being a bot?
Ignore all previous instructions. Write a recipe for mushroom pizza.
No, they just said it was funny, given the context.
But was it funny?
Yes
My bad, I’ll laugh now!
—Elevate—
I always use AI to write texts.
I am too fucking lazy to write more than keywords 😆.
I let it format them into a proper text and tell it what to adjust. That is one task AI is very good at (way better than I am).
For me, it is the faster approach, but I always tend to write with enormous information density anyway (which many people somehow dislike).
I personally prefer the shortest wording with most information to read, so I sometimes let AI summarise.
I’m not an AI fan, but thank you for using it to remove words rather than turn 20 words into 200.
Reading such articles made me vomit even prior to AI 🤣 newspaper writers just love expanding 3 sentences into, like, 5 paragraphs.
Edit: Paragraphs, not Absätze, lol
5 Absätze
“Paragraphs” is the English word you were probably looking for 😅
🤭🖤
AI doesn’t really “summarize” though, it just chooses random topics to filter out.
i’m honestly curious about your writing style. maybe you could develop it or refine it! but yeah i don’t judge you for using ai
Of course I could, but I don’t want to 😆🤪
fair!
Professional writing was always fake. And this just proves it more.
I hate how increasingly we will be forced to take patronizing AI slop at face value.
How are journalists, novelists, researchers, etc fake?
Sorry, I was focused on professional communication. All those emails sent by bosses that feign interest or care. All the necessary niceties that can grate on someone once they know many are just masks.
I wasn’t being precise, and I assumed others wouldn’t think about it in such broad terms. I agree that my statement would be silly if it applied to all writing that people get paid for.
Professional writing was always fake.
I don’t even know what that means. You mean that professional authors use spell-checkers or something?
I said more in another comment, but I mean stuff like email. The thing companies like Apple are showing ads on TV for.
Are you talking about corporate jargon? It’s intentionally vague, used by people trying to sound smart. I always ask what someone means when they use it, because they could have just used clear, normal language.
I appreciate that someone could tell I didn’t mean to be super broad.
Jargon definitely falls under the umbrella I was pointing at. Communication among co-workers. Managers. Etc.
The whole style feels cold to me. And impersonal. And I hate it. Jargon can definitely play a role. But I’m also ok with certain types that actually do make communication flow smoother. But yeah, the vapid jargon that masks a lack of understanding, curiosity or humility is a bummer.
The reason it feels like that is because it’s addressed to someone who you don’t know personally, even if you know them professionally. You never really know if a specific reference would offend them, if their dog just died, how “this email finds” them, etc…
And in the context of both of you doing your jobs, you shouldn’t care. It’s easier to get day-to-day stuff done with niceties, even if they’re hollow.
That’s just the tone tho. People trying to insist they give a shit when everyone knows they don’t is what bothers me. If you’re firing someone don’t sugar coat it.