Maybe that's the process PG describes themselves, but I think that the amount of different "processes" or "human equivalent to LLM decoding/sampling techniques" used by humans is likely very large - far larger than the number of techniques available to LLMs today.
Within LLMs today, there's typicality sampling, contrastive search, top-k and top-p (nucleus) sampling, and a long tail of more obscure techniques that never even made it into Hugging Face's model.generate() function.
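To make one of these concrete: top-p (nucleus) sampling keeps the smallest set of highest-probability tokens whose cumulative mass reaches p, renormalizes, and samples from just that set. A minimal toy sketch over a token-to-logit dict (the function name and interface are mine, not any library's API):

```python
import math
import random

def top_p_sample(logits, p=0.9, rng=random):
    """Sample one token from the smallest set of top tokens with cumulative prob >= p."""
    # Softmax over the raw logits (subtract the max for numerical stability).
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}

    # Rank tokens by probability and keep the minimal "nucleus" reaching mass p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for tok, pr in ranked:
        kept.append((tok, pr))
        mass += pr
        if mass >= p:
            break

    # Renormalize within the nucleus and draw one token.
    total = sum(pr for _, pr in kept)
    r = rng.random() * total
    for tok, pr in kept:
        r -= pr
        if r <= 0:
            return tok
    return kept[-1][0]
```

With a low p, a strongly dominant token forms the whole nucleus by itself, so the sampler degenerates into greedy decoding; as p approaches 1 it approaches plain temperature-1 sampling.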
I believe that the ontological configuration of humans varies widely. As such, humans go through a diverse and vibrant range of different writing processes - and often radically different processes can still produce excellent work!
Specifically, beam search is really only good at sequence-to-sequence tasks like summarization or translation. It's also horribly inefficient in the worst case. That said, it's the only good way we know of to enforce sequence-level constraints (e.g. https://huggingface.co/blog/constrained-beam-search)
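For intuition, here is a toy beam search over an abstract next-token scorer, with a crude filter at the end standing in for a real sequence-level constraint (the constrained beam search in the linked post is smarter: it tracks constraint progress across steps rather than filtering finished beams). Everything here is a hypothetical sketch, not the Hugging Face implementation:

```python
def beam_search(score_next, vocab, length, beam_width=2, constraint=None):
    """Toy beam search.

    score_next(prefix, token) -> log-prob increment for appending token.
    constraint(seq) -> bool, an optional sequence-level predicate checked at the end.
    """
    # Each beam is a (cumulative log-prob, token list) pair.
    beams = [(0.0, [])]
    for _ in range(length):
        # Expand every beam by every vocabulary token, then keep the best beam_width.
        candidates = [
            (lp + score_next(seq, tok), seq + [tok])
            for lp, seq in beams
            for tok in vocab
        ]
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]

    if constraint is not None:
        # Crude stand-in for constrained decoding: prefer beams satisfying the
        # constraint; fall back to the unconstrained beams if none qualify.
        satisfying = [b for b in beams if constraint(b[1])]
        beams = satisfying or beams

    return beams[0][1]
```

The inefficiency complaint is visible right in the expansion step: every iteration scores beam_width * |vocab| candidates, and narrow beams can still prune the only prefixes that would have satisfied the constraint, which is exactly why real constrained beam search has to bank constraint progress inside the search instead of checking it afterwards.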