I think you should remove Claude as a contributor to your repo. It probably weaseled its way in on its own; I think it’s the developer’s job to talk about the tools they used, not the tool company’s.
> I think you should remove Claude as a contributor to your repo
I actually really appreciate it when people do not hide their use of Claude Code in their repo like that. It's usually the first thing I check on Show HN posts these days.
It does like to weasel in if you let it write a commit message, and even after rewriting and force pushing, it seems to hang around on the GitHub contributor list.
There’s a bit of a hidden cost here… the useful life of GPU hardware gets longer: it’s extended every time there’s an algorithmic improvement that runs on it. Whereas any efficiency gains in software that are not compatible with this hardware will tend to accelerate its depreciation.
MRIs are good if you know what you’re looking for, and they usually need contrast, which, in a situation like cancer where you have to do them often, can result in allergic reactions.
In a full-body situation, they are looking for mets, and the uptake of radioactive sugar by the tumors lets a PET scan find them.
One question is whether the writer should be dismissed from staff. Or can they stay on at Ars if, for example, it was explained as an unintentional mistake: they used an LLM to restructure their own words and it accidentally inserted the quotes and slipped through. We’re all going through a learning process with this AI stuff, right?
I think for some people this could be a redeemable mistake at their job. If someone turns in a status report with a hallucination, that’s clearly not good, but the damage might be a one-off / teaching moment.
But for journalists, I don’t think so. This is crossing a sacred boundary.
> Or can they stay on at Ars if, for example, it was explained as an unintentional mistake: they used an LLM to restructure their own words and it accidentally inserted the quotes and slipped through.
No. Don't give people free passes because of LLMs. Be responsible for your work.
They submitted an article with absolute lies and now the company has a reputational problem on its hands. No one cares whether that happened because they set out to publish lies or because they made a tee-hee whoopsie-doodle with an LLM. They screwed up, and look at the consequences they've caused for the company.
> I think for some people this could be a redeemable mistake at their job. If someone turns in a status report with a hallucination, that’s clearly not good, but the damage might be a one-off / teaching moment.
Why would you keep someone around who:
1. Lies
2. Doesn't seem to care enough to do their work personally, and
3. Doesn't check their work for the above-mentioned lies?
They have proven, right then, right there, that you can't trust their output because they cut corners and don't verify it.
The quote is not suggesting a quantum computer can’t be simulated classically. It can, in fact, just slowly, by keeping track of the full quantum state, where n qubits require 2^n complex amplitudes.
It relates more to the Bell results: that there doesn’t exist a local hidden-variable theory that’s equivalent to QM.
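For concreteness, here’s a minimal sketch of that brute-force classical simulation (my own illustration in Python/NumPy, not something from the quoted paper): the n-qubit state is a vector of 2^n complex amplitudes, and each gate is applied by contracting it against the corresponding axis, which is exactly the part that blows up exponentially.

```python
# Brute-force state-vector simulation: n qubits -> 2^n complex amplitudes.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the target qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)                          # one axis of size 2 per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))   # contract gate with the target axis
    psi = np.moveaxis(psi, 0, target)                     # restore original qubit ordering
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                            # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)              # Hadamard gate
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

print(np.round(np.abs(state) ** 2, 3))                    # uniform 0.125 over all 8 basis states
```

The memory cost is the whole story: at 30 qubits that state vector is already 2^30 ≈ 10^9 complex numbers (about 16 GB at complex128), which is why the simulation is possible but slow and memory-bound.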
The 2023 paper, even if true, doesn’t preclude the 2026 paper from being true; it just sets constraints on how a faster attention solution would have to work.