I think this is true: sentience is hard to recognise (to the extent that "sentience" has any tangible meaning other than "things which think like us").
But I think with LaMDA certain engineers are close to the opposite failure mode: placing all the weight on fluent use of words, a familiar thing perceived as intrinsically human, and none of it on the respective whys of humans and neural networks emitting sentences. Less like failing to recognise civilization on another planet because it's completely alien, and more like seeing civilization on another planet because we're completely familiar with the idea that straight lines = canals...