The next grail here would be the automatic use of more trustworthy systems like WA whenever ChatGPT is used. If you asked it to write an essay on a subject, it would infer which pieces need fact-checking (based on confidence scores for discrete, verifiable snippets of data) and run queries against that trusted system for those pieces.
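The routing idea could be sketched roughly like this. Everything below is a hypothetical illustration, not a real API: the confidence scores, the threshold, and the `FACT_DB` stand-in for a WA-style trusted source are all assumptions.

```python
# Hypothetical sketch: route low-confidence factual claims to a trusted
# external source (a Wolfram Alpha-style lookup), keep high-confidence ones.

FACT_DB = {  # stand-in for the trustworthy system; illustrative data only
    "moon landing year": "1969",
    "speed of light m/s": "299792458",
}

def verify_claim(key, claimed_value, confidence, threshold=0.9):
    """Keep the model's value if confidence clears the threshold,
    otherwise fall back to the trusted source (if it has an answer)."""
    if confidence >= threshold:
        return claimed_value
    trusted = FACT_DB.get(key)
    return trusted if trusted is not None else claimed_value

def check_essay(claims):
    # claims: list of (key, model_value, model_confidence)
    return [(k, verify_claim(k, v, c)) for k, v, c in claims]

result = check_essay([
    ("moon landing year", "1968", 0.4),         # low confidence: corrected
    ("speed of light m/s", "299792458", 0.97),  # high confidence: kept
])
```

The hard part, of course, is getting calibrated confidence scores and mapping free-form text onto queryable claims; this sketch assumes both away.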
With this improvement, it would at least never get dates or measurements wrong.
I don't think we can ever eliminate the need for real editors and fact-checkers as the ultimate source of truth for ChatGPT's output, especially for anything critical, but for many tasks this would be a major improvement.