“IQ benefits”? Lmao, what fuckin nonsense. This shit ain’t making anyone smarter; if anything, it’s robbing you of your ability to think critically.
It’s garbage software with zero practical use. Whatever you’re using AI for, just learn it yourself. You’ll be better off.
“And then I drink coffee for 58 minutes” instead of reading a book, like that’s a brag - just read a fuckin book, goddamn.
“It’s garbage software with zero practical use.”
AI is responsible for a lot of slop, but it is wrong to say it has no use. I helped my wife with a VBScript macro for Excel. There was no way I was going to learn VBScript. ChatGPT spit out a somewhat-working script in minutes that needed 15 minutes of tweaking. The alternative would have been weeks of work learning a proprietary Microsoft language. That’s a waste of time.
Okay fine. You can vibe code. Got anything else?
I stand by my statement.
I agree with Blue_Morpho. LLMs have some utility, but the utility is limited and WAY overhyped. I certainly don’t want to offload all my thinking to these things.
Here are a few things I use LLMs for:
- When reading a book (a physical book, all by myself, like a big boy), I’ll leave chat voice mode on. Whenever I get lost or have a question, I’ll just ask the robot, “I’m on page 143. Without giving away any spoilers, who’s this guy the author is referencing, again? And what does the author mean with this phrase here, exactly?” This works pretty darn well for me; I can get my questions answered without interrupting my flow (I’m very prone to distraction once I open a dictionary or hop on Wikipedia…).
- I use LLM tools (like Notebook LM) to ingest academic papers and YouTube videos, summarize them, and then generate Anki flashcards for me. This is great for language learning, for example making cloze cards from interesting sentences pulled from YouTube videos (there’s a small sketch of that step after this list).
- And of course, monkey-work that I don’t want to do, like analyzing PowerPoint slides and offering recommendations on style (I fed it a library of “good” vs. “bad” slides, so now it can tell me how to improve a slide’s content and presentation). This is work that needs to be done, but it impedes my real work, so I delegate it to the machine.
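For the flashcard step above: the end product is just a plain-text file that Anki can import as Cloze notes, with the hidden word wrapped in {{c1::…}}. Here’s a minimal sketch of that step in Python; the sentences, target words, and hints are made-up examples, and in practice I have the LLM pull them out of a transcript or summary.

```python
# Minimal sketch: turn (sentence, target word, hint) triples into a TSV file
# that Anki can import as Cloze notes (File > Import, fields separated by tabs).
# The triples below are made-up examples.
import csv

def make_cloze(sentence: str, target: str, hint: str = "") -> str:
    """Wrap the first occurrence of `target` in Anki's {{c1::answer::hint}} syntax."""
    cloze = f"{{{{c1::{target}::{hint}}}}}" if hint else f"{{{{c1::{target}}}}}"
    return sentence.replace(target, cloze, 1)

triples = [
    ("Ich habe den Zug verpasst.", "verpasst", "missed"),
    ("Das kommt mir bekannt vor.", "bekannt", "familiar"),
]

with open("cloze_cards.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for sentence, target, hint in triples:
        writer.writerow([make_cloze(sentence, target, hint)])
```

Importing that file with the Cloze note type selected gives one card per line, with the target word hidden on the front and the hint shown in brackets.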
I believe LLMs can be used as a tool to make one smarter, when used wisely and judiciously. It’s just a tool. Alas, most folks won’t use it that way, because it still requires work to do that.
LLMs can also make one much, much dumber when they’re over-relied on, their output copy-pasted without analysis, or their answers believed whole-hog without checking sources or using critical thinking skills.
It’s like a kid using LLMs for high school math. Do you use it to break down and explain the problem, and give examples, so you actually learn how to do it when you get stuck? Or do you use it to just spit out the answers at you, so you can get a passing grade on your homework?
And honestly, what would/do most high school kids do with it?
Using an AI to write an argument in favor of AI? Please. Debate us like a real human. You’re not a bot. I’m not a bot. Act like it.
Downvote me if you like, but NONE of my comments are AI-generated. I type them out, by hand, with my thumbs, on my phone. Every single one, goddamnit.
This is one (of many) downsides of AI—people confusing long-form text with machine-generated text.
(And yes, I used an em-dash here. Eat shit.)
I hate this, because my son was also flagged by his school’s AI-detection system, even though I 100% know he wrote his essay (I watched him do it). The reasoning? His paper was “well beyond his grade level.” It’s bullshit, and I’m pretty irate about it. I taught him all the paper-composition principles we learned in high school: introduction, body / main thesis (including the rule of three [no more than three main bullet points] and Aristotle’s three modes of persuasion), summary, and conclusion. And proper punctuation. Fuck.
So my question to you is this: how does one go about proving a negative? You say I used AI, I say I did not. How can I prove that I didn’t? How could you prove that you didn’t? Yeah, it sucks, man. One thing I actually hate about the advent of LLMs.
I didn’t use AI to write that.
In fact, I don’t use AI for writing at all. I use it in the ways described above.
Also, I use bullet points in my writing. I’ve done this for many years. And I am not going to stop. It adds clarity.
I use italics for emphasis. I’m not going to stop doing that, either.
Nor will I stop using the em-dash.
Just because LLMs are trained to mimic good writing style doesn’t mean we should stop using those style principles ourselves.
-
“The use of LLM had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of 4 months, the LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, scoring.”
I equate it with doing those old formulas by hand in math class. If you don’t know what the formula does or how to use it, how do you expect to recall the right tool for the job?
Or in D&D speak, it’s like trying to shoehorn intelligence into a wisdom roll.
That would be fine if an LLM were a precise tool like a calculator. My calculator doesn’t pretend to know answers to questions it doesn’t understand.
The irony is that LLMs are basically just calculators, horrendously complex calculators that operate purely on statistics…
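Stripped of the neural-network machinery, the whole loop really is just: score every candidate next token, turn the scores into probabilities, sample one, repeat. Here’s a toy sketch of that loop; the vocabulary and the scoring function are made up, and a real model computes the scores from billions of learned weights.

```python
# Toy illustration of "operating purely on statistics" (not a real model):
# score every possible next token, convert scores to probabilities, sample, repeat.
import math
import random

vocab = ["the", "cat", "sat", "on", "mat", "."]

def toy_scores(context):
    """Stand-in for the neural network: one made-up score per vocab token."""
    # A real model derives these scores from learned weights; here we just
    # prefer tokens that haven't appeared in the context yet.
    return [0.5 if tok in context else 2.0 for tok in vocab]

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

context = ["the"]
for _ in range(5):
    probs = softmax(toy_scores(context))
    next_tok = random.choices(vocab, weights=probs, k=1)[0]
    context.append(next_tok)

print(" ".join(context))
```

There’s no understanding anywhere in that loop, just a probability distribution over what text tends to come next.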
What I’m getting from this exchange is that people on the left have ethical concerns about plagiarism, and don’t trust half-baked technology. They also value quality over quantity.
I’m okay with being pigeonholed in this way. Drink all the coffee you want, dude.