
>also plagiarism

To me, this is a reminder of how much of a specific minority this forum is.

Nobody I know in real life, personally or at work, has expressed this belief.

I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.

Clearly, the authors publishing at NeurIPS don't agree that using an LLM to help write is "plagiarism", and I would trust their opinions far more than some random redditor's.





> Nobody I know in real life, personally or at work, has expressed this belief.

TBF, most people in real life don't even know how AI works to any degree, so using that as an argument that parent's opinion is extreme is kind of circular reasoning.

> I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.

I don't see parent's opinions as anti-AI. It's more an argument about what AI is currently, and what research is supposed to be. AI is existing ideas. Research is supposed to be new ideas. If much of your research paper can be written by AI, I call into question whether or not it represents actual research.


> Research is supposed to be new ideas. If much of your research paper can be written by AI, I call into question whether or not it represents actual research.

One would hope the authors are forming a hypothesis, performing an experiment, gathering and analysing results, and only then passing it to the AI to convert it into a paper.

If I have a theory that, IDK, laser welds in a sine wave pattern are stronger than laser welds in a zigzag pattern - I've still got to design the exact experimental details, obtain all the equipment and consumables, cut a few dozen test coupons, weld them, strength test them, and record all the measurements.

Obviously if I skipped the experimentation and just had an AI fabricate the results table, that's academic misconduct of the clearest form.


I am not an academic, so correct me if I am wrong, but in your example, the actual writing would probably only represent a small fraction of the time spent. Is it even worth using AI for anything other than spelling and grammar correction at that point? I think using an LLM to generate a paper from high level points wouldn't save much, if any, time if it was then reviewed the way that would require.

My brother in law is a professor, and he has a pretty bad opinion of colleagues that use LLMs to write papers, as his field (economics) doesn't involve much experimentation, and instead relies on data analysis, simulation, and reasoning. It seemed to me like the LLM assisted papers that he's seen have mostly been pretty low impact filler papers.


> I am not an academic, so correct me if I am wrong, but in your example, the actual writing would probably only represent a small fraction of the time spent. Is it even worth using AI for anything other than spelling and grammar correction at that point? I think using an LLM to generate a paper from high level points wouldn't save much, if any, time if it was then reviewed the way that would require.

It's understandable that you believe that, but writing in academia is absolutely a huge time sink. Think about it: the first thing your reviewers are going to notice is not your results but how well the paper is written.

If it's written terribly you have lost, and it doesn't matter how good your results are at that point. It's common to spend days with your PI polishing a paper, and then spend months going back and forth with reviewers updating and improving the text. This is even more true the higher up you go in journal prestige.


> TBF, most people in real life don't even know how AI works to any degree

How about the authors who publish at NeurIPS? Do they know how AI works?


Who knows? Does NeurIPS have a pedigree of original, well-sourced research dating back to before the advent of LLMs? We're at the point where both of the terms "AI" and "experts" are so blurred that it's almost impossible to trust or distrust anything without spending more time on due diligence than most subjects deserve.

As the wise woman once said "Ain't nobody got time for that".


> If much of your research paper can be written by AI, I call into question whether or not it represents actual research

And what happens to this statement if, next year or later this year, papers that can be written autonomously pass the median human paper mark?

What does it mean to cross the median human paper mark? How is that measured?

It seems to me like most of the LLM benchmarks wind up being gamed. So, even if there were a good benchmark there, which I do not believe there is, the validity of the benchmark would likely diminish pretty quickly.


I find that hard to believe. Every creative professional that I know shares this sentiment. That’s several graphic designers at big tech companies, one person in print media, and one visual effects artist in the film industry. And once you include many of their professional colleagues that becomes a decent sample size.

Graphic design is a completely different kettle of fish. Comparing it to academic paper writing is disingenuous.

The thread is about not knowing anyone at all who thinks AI is plagiarizing.

Yes, plagiarising text based content. No one in this thread meant graphics.

The LLM model and version should be included as an author so there's useful information about where the content came from.

> AI Overview

> Plagiarism is using someone else's words, ideas, or work as your own without proper credit, a serious breach of ethics leading to academic failure, job loss, or legal issues, and can range from copying text (direct) to paraphrasing without citation (mosaic), often detected by software and best avoided by meticulous citation, quoting, and paraphrasing to show original thought and attribution.


Not sure if I am correctly interpreting your implicit point but

> Plagiarism is using someone else's words,

It's right there. An LLM is not "someone else"; it's a very useful piece of software.


Higher education is not free. People pay a shit ton of money to attend and also governments (taxpayers) invest a lot. Imagine offloading your research to an AI bot...

“Anti-AI extremism”? Seriously?

Where does this bizarre impulse to dogmatically defend LLM output come from? I don’t understand it.

If AI is a reliable and quality tool, that will become evident without the need to defend it - it’s got billions (trillions?) of dollars backstopping it. The skeptical pushback is WAY more important right now than the optimistic embrace.


The fact that there is absurd AI hype right now doesn't mean that we should let equally absurd bullshit pass on the other side of the spectrum. Having a reasonable and accurate discussion about the benefits, drawbacks, side effects, etc. is WAY more important right now than being flagrantly incorrect in either direction.

Meanwhile this entire comment thread is about what appears to be, as fumi2026 points out in their comment, a predatory marketing play by a startup hoping to capitalize on the exact sort of anti-AI sentiment that you seem to think is important... just because there is pro-AI sentiment?

Naming and shaming everyday researchers, based on the idea that they have let hallucinations slip into their paper, all because your own AI model decided it was AI, just so you can signal-boost your product, seems pretty shitty and exploitative to me. It's only viable as a product and marketing strategy because of the visceral anti-AI sentiment in some places.


“anti-AI sentiment”

No, that’s a straw man, sorry. Skepticism is not the same thing as irrational rejection. It means that I don’t believe you until you’ve proven with evidence that what you’re saying is true.

The efficacy and reliability of LLMs requires proof. AI companies are pouring extraordinary, unprecedented amounts of money into promoting the idea that their products are intelligent and trustworthy. That marketing push absolutely dwarfs the skeptical voices, and that’s what makes those voices more important at the moment. If the researchers named have claims made against them that aren’t true, that should be a pretty easy thing for them to refute.


The cat is out of the bag though. AI does have provably crazy value. Certainly not the AGI the hype marketing spews, and who knows how economically viable it would be without VC money.

However, I think anyone who is still skeptical of the real efficacy is willfully ignorant. This is not a moral endorsement of how it was made or of whether it is moral to use, but god damn, it is a game changer across vast domains.


A number of studies have already been shared reporting the impression of increased efficiency without an actual increase in efficiency.

Which means it's still not a given, though there are obviously individual cases that seem to be good proof of it.


There was a front-page post just a couple of days ago where the article claimed LLMs have not improved in any way in over a year - an obviously absurd statement. A year before Opus 4.5, I couldn't get models to spit out a one-shot Tampermonkey script to add chapter turns to my arrow keys. Now I can one-shot small personal projects in Claude Code.

If you are saying that people are not making irrational and intellectually dishonest arguments about AI, I can't believe that we're reading the same articles and same comments.


Isn’t that the whole point of publishing? This happened plenty before AI too, and the claims are easily verified by checking the claimed hallucinations. Don’t publish things that aren’t verified and you won’t have a problem - same as before, except perhaps now it’s easier to verify, which is a good thing. We see this problem in many areas; last week it was a criminal case where a made-up law was referenced, and luckily the judge knew to call it out. We can’t just blindly trust things in this era, and calling it out is the only way to bring it up to the surface.

> Isn’t that the whole point of publishing?

No, obviously not. You're confusing a marketing post by people with a product to sell with an actual review of the work by the relevant community, or even review by interested laypeople.

This is a marketing post where they provide no evidence that any of these are hallucinations beyond their own AI tool telling them so - and how do we know it isn't hallucinating? Are there hallucinations in there? Almost certainly. Would the authors deserve being called out by people reviewing their work? Sure.

But what people don't deserve is an unrelated VC funded tech company jumping in and claiming all of their errors are LLM hallucinations when they have no actual proof, painting them all a certain way so they can sell their product.

> Don’t publish things that aren’t verified and you won’t have a problem

If we were holding this company to the same standard, this blog wouldn't be posted either. They have not and can not verify their claims - they can't even say that their claims are based on their own investigations.


Most research is funded by someone with a product to sell - not all of it, but a frightening amount. VC to sell, VC to review. The burden of proof is always on the one publishing, and it can be a very frustrating experience, but that is how it is: the one making the claim needs to defend themselves, from people (who can be a very big hit or miss) and machines alike. The good thing is that if this product is crap then it will quickly disappear.

That's still different from a bunch of researchers being specifically put in a negative light purely to sell a product. They weren't criticized so that they could do better, whether in their own error checking if it was a human-induced issue, or in not relying on LLMs to do work they should have done themselves. They were put on blast to sell a product.

That's quite a bit different than a study being funded by someone with a product to sell.


Yup, and no matter how flimsy an anti-AI article is, it will skyrocket to the top of HN because of it. It makes sense, though: HN users are the most likely to feel threatened by LLMs, and therefore are more likely to be anxious about them.

I don’t love AI either, but that’s the truth.


Strange, I find it quite the opposite; "pro-AI" comments are often at the top of the list.

I think there’s a bit of both, with a valley in the middle

> Clearly, the authors in NeurIPS don't agree that using an LLM to help write is "plagiarism",

Or they didn't consider that it arguably fell within academia's definition of plagiarism.

Or they thought they could get away with it.

Why is someone behaving questionably the authority on whether that's OK?

> Nobody I know in real life, personally or at work, has expressed this belief. I have literally only ever encountered this anti-AI extremism (extremism in the non-pejorative sense) in places like reddit and here.

It's not "anti-AI extremism".

If no one you know has said, "Hey, wait a minute, if I'm copy&pasting this text I didn't write, and putting my name on it, without credit or attribution, isn't that like... no... what am I missing?" then maybe they are focused on other angles.

That doesn't mean that people who consider different angles than your friends do are "extremist".

They're only "extremist" in the way that anyone critical at all of 'crypto' was "extremist", to the bros pumping it. Not coincidentally, there's some overlap in bros between the two.


> Why is someone behaving questionably the authority on whether that's OK?

Because they are not. Using AI to help with writing is something literally every company is pushing for.


How is that relevant? Companies care very little about plagiarism, at least in the ethical sense (they do care if they think it's a legal risk, but that has turned out to not be the case with AI, so far at least).

What do you mean, how is that relevant? It's a vast majority opinion in society that using AI to help you write is fine. Calling it "plagiarism" is a tiny minority online opinion.

First of all, the very fact that companies need to encourage it shows that it is not already a majority opinion in society; it is a majority opinion among company management, which is often extremely unethical.

Secondly, even if it is true that it is a majority opinion in society, that doesn't mean it's right. Society at large often misunderstands how technology works, what risks it brings, and what its inevitable downstream effects are. It was a majority opinion in society for decades or centuries that smoking is neutral to your health - that doesn't mean they were right.


> Secondly, even if it is true that it is a majority opinion in society, that doesn't mean it's right. Society at large often misunderstands how technology works, what risks it brings, and what its inevitable downstream effects are. It was a majority opinion in society for decades or centuries that smoking is neutral to your health - that doesn't mean they were right.

That it's a majority opinion instead of a tiny minority opinion is a strong signal that it's more likely to be correct. For example, it's a majority opinion that murder is bad; this has held true for millennia.

Here's a simpler explanation: toaster frickers tend to seek out other toaster frickers online in niche communities. Occam's razor.


As long as AI companies have paid them to train on their data (see a number of licensing deals between OpenAI and news agencies and such).



