The issue here is that memorization of any distinguishable piece of IP is incidental: these models aren't memorizing content, they're learning from it. We don't expect people to keep track of the source of every single piece of information they encounter; doing so would arguably make learning impossible, for humans and LLMs alike.
As an intuition pump, when I write "2+2 = " and you mentally complete it with "4", should I chastise you for not completing it with "4, as per ${your elementary class math textbook} and ${that other book you read as a kid}, corroborated by ${your first math teacher} and ${your parent} quoting ${some other work}"?
When you make an omelette, what is the technical barrier making it practically impossible to tell which egg contributed how much to any given part of the meal?