He said:
"LLM is going to change schools and universities a lot"
You said:
"No it won't. It really, really won't."
With the explosive development of LLMs and their abilities, your point of view seems to be the hopeful one, while the other poster's is the realistic one.
It seems you simply can't predict what LLMs will not be able to do, especially when your main argument is current "AI slop", which is steadily being eradicated.
> "AI slop" as your main reason, which is being more and more eradicated.
The slop is the hard truth.
As I made perfectly clear in my original post: my university professor friends get handed AI slop by their students every single day.
There is no "eradication of slop" happening. If anything, it is getting worse. Trust me, my friends see the output from all the latest algorithms on their desk.
The students think they are being very clever, the students think the magical LLM is the best thing since sliced bread.
All the professor sees is a wall of slop on their desk and a student that is not learning how to reason and think with their own damn brain.
And when the professors try politely and patiently to challenge them and test their understanding, as you would expect in a university environment, the snowflake students just whine and complain because they know they've been caught drinking the LLM Kool-Aid again, for the 100th time this week.
Hence the student is wasting their time and money at university, and the professor is wasting their time trying to teach someone who is clearly not interested in learning because they think they can get the answer in 5 seconds from an LLM chatbot.
My professor friends chose the career they did because they enjoy the challenge of helping students along the way through their courses and watching them develop.
They are no longer seeing that same development in their students. And instead of devoting time to helping students, they are wasting it thinking up over-engineered, fiendishly complicated lab tasks and tests that students cannot cheat on with an LLM.
It is honestly a lose-lose situation for everybody.
I think you're missing the point.
The conversation is not about what students give the professors; it's about how students learn. This obviously requires someone who wants to learn.
> it's about how students learn. This obviously requires someone that wants to learn.
FINALLY! Someone who gets the point I was trying to make. I wish I could upvote you a million times.
This is precisely the point. Professors are happy to help people who want to learn.
Students who prefer to copy/paste into LLMs do not want to learn. University is there to foster learning and reasoning using your own brain. An LLM helps with neither.
Sweep aside the misunderstanding about students trying to "cheat" with LLM output instead of engaging with the topic at hand. I think there is a secondary debate here, even once you understand the original intent of the post above. It still boils down to the same concerns about "slop": not the student presenting slop to the existing teaching system, but the student being led astray by the slop they are consuming on their own.
Being an autodidact has always been a double-edged sword. You can potentially accelerate your learning and find your own specialization, but it is an extremely easy failure mode to turn yourself into a semi-educated crank. Once in a while, this produces a renegade genius who opens new branches of knowledge. But in most cases, it aborts useful learning. The crank gets lost in their half-baked ontology, unable to fix its flaws or progress to more advanced topics.
The whole long history of learning institutions is, in part, an attempt to manage this very human risk. One of a teacher's main roles is to recognize a student who is spiraling out in this manner and steer them back. Nearly everyone has the potential to incrementally develop a sort of self-delusion if they are not reality-checked on a regular basis. It takes incredible diligence to self-govern and never lose yourself in the chase.
This is where "sycophancy" in LLMs is a bigger problem than mere diction. If the AI continues to function as a sort of keyhole predictor, it does not have the context to model a big-picture purpose like education and keep all the incremental wanderings on course and bound to reality. Instead, it can amplify this worst-case scenario where you plunge down some rabbit-hole.
I sure hope those "university professor friends" exist, and that you're not just distancing yourself from your own views. Because you really need help with a mindset like that. Students are not your enemies, and LLMs are not out to get you. Seek help.
I have a MacBook Pro M1 Max w/64 GB RAM, and a Mac Studio M3 Ultra w/96 GB RAM. What do you think is possible to run on these? Just curious before I really try it out.
Below are my test results after running local LLMs on two machines.
I'm using LM Studio now for ease of use and simple logging/viewing of previous conversations. Later I'm gonna use my own custom local LLM system on the Mac Studio, probably orchestrated by LangChain and running models with llama.cpp.
My goal has always been to use them in ensembles in order to reduce model bias. The same principle has just been introduced as a feature called "model council" in Perplexity Max: https://www.perplexity.ai/hub/blog/introducing-model-council
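For the curious, a minimal sketch of what such an ensemble could look like. Everything here is illustrative: the model names are stubs, each "model" is just a callable returning a string, and majority voting is one possible aggregation rule (this is not how Perplexity's model council actually works):

```python
from collections import Counter

def ensemble_answer(prompt, models):
    """Query several models and return the majority answer.

    `models` maps a model name to a callable that takes a prompt
    and returns a string. Ties break toward the answer seen first,
    since Counter preserves insertion order.
    """
    answers = [model(prompt) for model in models.values()]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Stub "models" standing in for real local LLMs:
models = {
    "model-a": lambda p: "Paris",
    "model-b": lambda p: "Paris",
    "model-c": lambda p: "Lyon",
}
print(ensemble_answer("Capital of France?", models))  # → Paris
```

In practice each callable would wrap a llama.cpp or LM Studio endpoint, and for open-ended answers you'd vote on something fuzzier than exact string matches.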
Chats will be stored in and recalled from a PostgreSQL database with extensions for vectors (pgvector) and graph (Apache AGE).
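Conceptually, the pgvector side of that recall is just nearest-neighbour search over chat embeddings (pgvector does the distance ranking server-side with its distance operators). A toy in-memory version of the same idea, with made-up 3-d vectors standing in for real embeddings:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall(query_vec, store, k=2):
    """Return the texts of the k stored chats most similar to the query."""
    ranked = sorted(store, key=lambda row: cosine_sim(query_vec, row["vec"]),
                    reverse=True)
    return [row["text"] for row in ranked[:k]]

store = [
    {"text": "talked about llama.cpp flags", "vec": [0.9, 0.1, 0.0]},
    {"text": "recipe for sourdough",         "vec": [0.0, 0.2, 0.9]},
    {"text": "MLX vs GGUF speed",            "vec": [0.8, 0.3, 0.1]},
]
print(recall([1.0, 0.2, 0.0], store, k=2))
# → ['talked about llama.cpp flags', 'MLX vs GGUF speed']
```

With pgvector the `store` becomes a table with a `vector` column, and the sort happens in an `ORDER BY ... LIMIT k` query instead of in Python.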
For both sets of tests below, MLX was used when available, but ultimately ran at almost the same speed as GGUF.
I hope this information helps someone!
/////////
Mac Studio M3 Ultra (default w/96 GB RAM, 1 TB SSD, 28C CPU, 60C GPU):
• Gemma 3 27B (Q4_K_M): ~30 tok/s, TTFT ~0.52 s
• GPT-OSS 20B: ~150 tok/s
• GPT-OSS 120B: ~23 tok/s, TTFT ~2.3 s
• Qwen3 14B (Q6_K): ~47 tok/s, TTFT ~0.35 s
(GPT-OSS quants and 20B TTFT info not available anymore)
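For anyone wanting to reproduce numbers like these, TTFT and decode throughput can be derived from three timestamps: request start, first token, and last token. The formula below is the usual convention (first token's latency counts as TTFT/prompt processing, throughput is measured over the remaining tokens); LM Studio may count slightly differently:

```python
def decode_stats(t_start, t_first, t_end, n_tokens):
    """Compute TTFT (seconds) and steady-state decode speed (tokens/sec).

    n_tokens is the number of generated tokens. The first token's
    latency is attributed to TTFT, so throughput is measured over
    the remaining n_tokens - 1 tokens.
    """
    ttft = t_first - t_start
    toks_per_sec = (n_tokens - 1) / (t_end - t_first)
    return ttft, toks_per_sec

# e.g. 512 generated tokens, first token after 0.52 s, done 17.55 s later:
ttft, tps = decode_stats(0.0, 0.52, 18.07, 512)
print(f"TTFT {ttft:.2f} s, {tps:.0f} tok/s")  # → TTFT 0.52 s, 29 tok/s
```

The example timings are invented to land near the Gemma 3 27B row above, just to show the arithmetic.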
//////////
MacBook Pro M1 Max 16.2" (64 GB RAM, 2 TB SSD, 10C CPU, 32C GPU):
-
_(Sorry for the joke. I know nothing about you, it was just a cheap one-liner I felt can be shared with this disclaimer. Much love.)_