Hacker News

AI is not just for creators, it's also for readers: summarisation, fact checking, "talking to" articles, books and papers.


"Fact checking" is the one I worry about there. If AI confidently tells you something is true will you know that it's not?

It's obvious when it's telling you it's still 2022, but what about everything else?


Better than no checking, which is what we will do 99% of the time. There is an NLP task called "entailment" (natural language inference), where a pair of statements is judged to decide whether one supports, contradicts, or is neutral toward the other. A combination of search + entailment would work for fact-checking articles.
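The search + entailment idea above can be sketched roughly as follows. This is a toy illustration: the lexical-overlap scorer is a hypothetical stand-in for a real NLI model, and `check_claim`, its threshold, and the labels are all made up for the example.

```python
# Toy sketch of "search + entailment" fact checking. The overlap scorer
# below is a crude placeholder for a real entailment/NLI model; nothing
# here is a production fact checker.

def entailment_score(premise: str, hypothesis: str) -> float:
    """Placeholder for an NLI model: fraction of the hypothesis's words
    that also appear in the premise (a real model would do far better)."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def check_claim(claim: str, retrieved: list[str], threshold: float = 0.6) -> str:
    """Label a claim against search results: 'supported' if some passage
    entails it, 'unsupported' otherwise, 'no-results' if search came back
    empty (which, usefully, search can tell us)."""
    if not retrieved:
        return "no-results"
    best = max(entailment_score(p, claim) for p in retrieved)
    return "supported" if best >= threshold else "unsupported"
```

In a real system the placeholder scorer would be swapped for an entailment model and the passages would come from a search index; the control flow stays the same.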

But if you want to do this properly you need to first mine the facts from all your sources, then reconcile them, then maintain a reference "truth" table. Probably everyone will want to select which sources of truth get loaded into the system; we're not going to agree on truth.

Even the bare minimum of knowing when a statement is controversial or absent from the references would be a great help. AI could emit <controversial> tags for the former and <citation needed> for the latter. Fortunately search can tell us when no results are found, unlike LLMs.
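The tagging rule above could look something like this. It assumes an upstream NLI step has already labelled each (reference passage, claim) pair as "supports" or "contradicts"; the function name and labels are hypothetical, only the verdict logic is shown.

```python
# Sketch of mapping NLI labels for one claim to an annotation.
# Assumes a hypothetical upstream step produced one label per
# retrieved reference passage.

def tag_claim(labels: list[str]) -> str:
    """Turn per-passage entailment labels into a tag for the claim."""
    if not labels:                    # search found nothing
        return "<citation needed>"
    supports = labels.count("supports")
    contradicts = labels.count("contradicts")
    if supports and contradicts:      # the references disagree
        return "<controversial>"
    return "ok" if supports else "<citation needed>"
```

The empty-list branch is exactly the "search can tell us when no results are found" point: a retrieval step gives you an honest signal of absence that a bare LLM does not.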


Currently it seems mostly useful for exercising human fact checking ability ...


the new turing test



