Hacker News

We could anthropomorphize any textbook too and claim it has human-level understanding of the subject. We could then claim the second edition of the textbook understands the subject better than the first. Anyone who claims the LLM "understands" is doing exactly this. What makes the LLM case more absurd, though, is that the LLM will actually tell you it doesn't understand anything, while a book remains silent. Yet people want to pretend we are living in the Matrix and the LLM is alive.

Most arguments then descend into confusing the human knowledge embedded in a textbook with the human agency required to apply it. Software that extracts the knowledge from all the textbooks has nothing to do with the human agency to use that knowledge.

I love ChatGPT-4 and had signed up within the first few hours it was released, but I actually canceled my subscription yesterday. Partly because of the bullshit with the company these past few days, but also because it had become a waste of time for me over the past few months. I learned so much this year, but I hit a wall: to make any progress I need to read the textbooks on the subjects I'm interested in, just like I had to this time last year, before ChatGPT.

We also shouldn't forget that children anthropomorphize toys and dolls quite naturally. It is entirely natural to anthropomorphize an LLM, especially when it is designed to pretend it is typing back a response like a human would. It isn't bullshitting you, though, when that typed-out response says it doesn't actually understand what it is writing.


