Yes, but that’s not how “normal” people take it. People believe these are thinking machines, programs capable of some amount of reasoning. That belief comes from somewhere between decades of terrifying sci-fi and a few months of impressive model results.
The other response captures my thoughts. Although the two things are separate, laypeople (and a surprising number of others) seem too easily duped by false claims that modern language models exhibit "emergent properties" of potential sentience. It's absurd, to be sure, but that also means it would've been easy to tap into that belief to generate hype among such people.
Since when does artificial intelligence imply sentience? They're independent concepts, and neither is reliant on the other.