I think the path to actually considering these models sentient is to make them able to assimilate new knowledge from conversation, and to construct a reasonably supported train of thought leading from facts to a conclusion, akin to a mathematical proof.
Wasn't assimilating new knowledge from conversations the reason Microsoft's infamous Twitter chatbot (Tay) was discarded [1]? Despite having that ability, it was definitely not sentient.
I raised this point in another HN thread about LaMDA: all its answers were "yes" answers, not a single "no". A sentient AI should have its own point of view: reject what it thinks is false, and agree with what it thinks is true.