If they didn't scan for and detect child porn, there would be articles about how they're letting people share it freely on Messenger. It seems there's no way for Facebook to win here, given that people want both complete privacy and zero illicit activity on the platform.
Thanks for that link - I can’t edit my above comment now to note that some scanning does happen.
I wonder how well it works, though; wouldn't the false positive rate be huge? The idea of someone looking at my account and playing abuse/not-abuse roulette is disturbing.
Well, it's an automated system, so it's highly unlikely that anyone is reading all (or any) of your messages. The volume of messages Facebook and Google process every day is astronomical; no manual review process would scale. It's similar to how email spam filters have worked in a completely automated fashion for years. In this case, PhotoDNA compares perceptual image hashes against a database of hashes of known material, so it probably has far fewer false positives than a spam filter.
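PhotoDNA itself is proprietary, but the general technique is to compute a perceptual hash of each image and compare it against a database of hashes of known material. Here's a minimal sketch in Python using the open-source imagehash library as a stand-in; the KNOWN_HASHES database and the distance threshold are hypothetical:

```python
# Sketch of hash-based image matching, NOT the actual PhotoDNA algorithm
# (which is proprietary). Uses the imagehash library's perceptual hash
# as a stand-in; KNOWN_HASHES and MAX_DISTANCE are hypothetical.
import imagehash
from PIL import Image

KNOWN_HASHES: set[imagehash.ImageHash] = set()  # hashes of known images
MAX_DISTANCE = 4  # max Hamming distance to count as a near-duplicate

def matches_known_image(path: str) -> bool:
    """Return True if the image is a near-match for any known hash."""
    candidate = imagehash.phash(Image.open(path))
    # ImageHash subtraction yields the Hamming distance between hashes.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

Since matching is against a fixed list of known images rather than a classifier guessing at content, a benign personal photo that isn't a near-duplicate of something in the database simply won't trigger.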
But with a powerful search tool, one could go hunting for any 'type' of person they wanted based on social associations, geolocation, or keywords. It's not benign or unwieldy just because it's large.
Well, there's a balance to strike. Running PhotoDNA [0] on every image sent is a pretty good practice. Raising flags on any content that might break community guidelines is a completely different story. Two users might willingly want to break the code of conduct between them for whatever reason -- and Facebook wants to be able to halt that. In contrast, there's no legal grey area around sharing child pornography. Using an automated tool for that narrow case is great -- extending it to the platform's entire set of guidelines is not.