lapsis_beeftech | 9 months ago | on: The Unreliability of LLMs and What Lies Ahead
Large language models reliably produce misinformation that appears plausible only because it mimics human language. They are dangerous toys that cannot be made into tools that are safe to use.