
No, it means we need to learn how to prompt, and maybe future versions need to be more self-guarded. If you prompt GPT-3 properly, it can detect nonsense questions. Nonsense detection should become more reliable in future versions, as should detection of inflammatory content (currently being tested in the GPT console).
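To make the "prompt it properly" point concrete, here is a minimal sketch of the kind of few-shot prompt people use to get GPT-3 to flag nonsense instead of confabulating an answer. The example Q/A pairs and the template wording are my own illustration, not from any official source:

```python
# A few-shot prompt that primes the model to answer "Nonsense"
# for incoherent questions. The example pairs are illustrative.
FEW_SHOT = """I answer sensible questions and reply "Nonsense" to nonsense ones.

Q: How many legs does a spider have?
A: Eight.

Q: How many rainbows does it take to jump from Hawaii to seventeen?
A: Nonsense.

Q: Who was president of the US in 1801?
A: Thomas Jefferson.

Q: {question}
A:"""

def build_prompt(question: str) -> str:
    """Insert the user's question into the few-shot template."""
    return FEW_SHOT.format(question=question)
```

The completed string would then be sent to the model as-is; the demonstrations in the prefix are what "teach" it the nonsense-detection behavior at inference time.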

What I would like to see is a larger training corpus that also includes all the supervised NLP datasets (translation, numerical and symbolic math, programming from prompts, all sorts of linguistic and logic tasks, and any of the thousands of tasks we could conceive). The end result would be a GPT that excels at all these sub-tasks while remaining general. It's a matter of making the training data better and the model larger. Btw, we could also teach GPT to detect bias, explain it, and rewrite the text; I expect no huge hurdles there.
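Folding supervised datasets into a general corpus usually means serializing every example as plain text with a task prefix (the T5-style text-to-text framing). A tiny sketch, with made-up task names and examples:

```python
# Sketch: convert supervised (input, target) pairs into text-to-text
# training lines by prefixing the task name. Task names and examples
# below are illustrative placeholders.
def to_text_pair(task: str, source: str, target: str) -> str:
    """Serialize one supervised example as a single training line."""
    return f"{task}: {source}\t{target}"

corpus = [
    to_text_pair("translate en-de", "The cat sleeps.", "Die Katze schläft."),
    to_text_pair("arithmetic", "What is 17 + 25?", "42"),
    to_text_pair("sentiment", "I loved this movie!", "positive"),
]
```

Once everything is in this shape, the language-modeling objective covers all the tasks at once, which is what would let a single GPT stay general while absorbing the supervised signal.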

Another thing I would like to see is some sort of kNN memory to enlarge the context to arbitrary size, acting like a semantic search engine inside the model. We should be able to build much more interesting applications if we could put far more initial data in the prompt.
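The kNN-memory idea can be sketched as an external store of (embedding, text) pairs that retrieves the nearest entries to a query and prepends them to the prompt. The `embed` function below is a deterministic stand-in for a real sentence-embedding model, and the class name is my own invention:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy hash-seeded embedding; a real system would use a trained model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class KNNMemory:
    """External memory: store texts, retrieve the k most similar to a query."""

    def __init__(self):
        self.vecs, self.texts = [], []

    def add(self, text: str):
        self.vecs.append(embed(text))
        self.texts.append(text)

    def retrieve(self, query: str, k: int = 2):
        # Unit vectors, so the dot product is cosine similarity.
        sims = np.stack(self.vecs) @ embed(query)
        top = np.argsort(sims)[::-1][:k]
        return [self.texts[i] for i in top]
```

In use, `retrieve()` would run before each model call, and the returned passages would be concatenated into the prompt, which is how the effective context grows beyond the model's fixed window.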

Basically: make the base model larger, augment the corpus with many tasks, and enlarge the prompt capacity.



