I think the argument here is that (1) the model is going to have false positives (e.g. revealing pictures of you and your spouse, your beach photos, etc.) that will grant access to non-CSAM content (or at the very least mark your account as suspicious in the eyes of authorities), and (2) the model itself can be updated or replaced for any reason, potentially at any government's demand, so the limits on scope are effectively moot.
For argument (1), they are only looking for matches against the existing database of hashes that NCMEC provides. They are not developing a general AI to identify new pictures; they are only trying to stop the redistribution of known files. Because of that, their claim of 1-in-1-trillion false positives may actually be close to correct, since it is easy to validate during the development phase. There is also human verification before law enforcement is involved.
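To make that distinction concrete, here is a minimal Python sketch of database matching, with the hash function, Hamming threshold, and database contents all hypothetical (Apple's real pipeline uses NeuralHash plus a private-set-intersection protocol, which this does not attempt to reproduce):

```python
# Minimal sketch of hash-database matching; the threshold and hashes
# below are hypothetical, not Apple's actual parameters.
from typing import Set

HAMMING_THRESHOLD = 4  # assumed tolerance for near-duplicate matches

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fixed-width hashes."""
    return bin(a ^ b).count("1")

def matches_known_database(image_hash: int, known_hashes: Set[int]) -> bool:
    """Flag only near-duplicates of hashes already in the database.

    There is no classifier here: a photo that was never added to the
    database can only match through a rare hash collision.
    """
    return any(hamming_distance(image_hash, h) <= HAMMING_THRESHOLD
               for h in known_hashes)

known = {0b1011_0110_0010_1101, 0b0111_0001_1100_0011}  # hypothetical entries
print(matches_known_database(0b1011_0110_0010_1100, known))  # True: near-duplicate
print(matches_known_database(0b0000_1111_0000_1111, known))  # False: novel photo
```

The point of this shape is that the false-positive rate is a measurable property of one fixed function against one fixed database, which is why a figure like 1 in 1 trillion can at least be tested before shipping.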
For argument (2), this might be valid, but then again all we can do is trust Apple, as we already do all the time by using their closed-source system. The model can be changed, but isn't this still a better option than storing everything unencrypted? That is assuming you mean forging hashes to decrypt content.
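On the decryption point specifically, here is a toy sketch of threshold gating under stated assumptions (the Account class and THRESHOLD value are made up for illustration; Apple's published design uses threshold secret sharing, which this does not implement): a single forged or colliding hash exposes nothing by itself, because nothing becomes reviewable until many independent matches accumulate.

```python
# Toy model of threshold gating, not Apple's actual safety-voucher protocol.
THRESHOLD = 30  # assumed value for illustration

class Account:
    """Tracks hash matches; content stays opaque below the threshold."""

    def __init__(self) -> None:
        self.match_count = 0

    def record_match(self) -> None:
        self.match_count += 1

    def eligible_for_human_review(self) -> bool:
        # One forged or colliding hash is not enough on its own;
        # review only becomes possible after repeated independent matches.
        return self.match_count >= THRESHOLD
```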
As for the surveillance argument, it is not a strong one either, because again the system is closed and we only know what Apple tells us. Creating such a model is trivial, and nothing stops a government from demanding one if Apple were willing to comply. The system would be identical in principle to antivirus engines, which have existed since the 1980s.
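For comparison, here is the basic shape of such a signature scanner, with the signature set hypothetical: hash a file and check membership in a known-bad set, which is structurally the same check the CSAM system performs on image hashes.

```python
# Signature scanning in the classic antivirus style; the digest below is
# a hypothetical placeholder, not a real malware signature.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder signature; a real scanner ships a signature feed
}

def scan(path: Path) -> bool:
    """Return True if the file's SHA-256 digest matches a known signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256
```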
This is such a PR failure for Apple, because all of the incoming features actually improve privacy in the CSAM area; everything negative comes from speculation about abuses that were equally possible before.