
If they didn't scan and detect child porn, there would be articles about how they're letting people get away with sharing child porn on Messenger. It seems there's no way for Facebook to win here, given that people want both complete privacy and also no illicit activity on the platform.


> If they didn't scan and detect child porn, there would be articles about how they're letting people get away with sharing child porn on Messenger.

No, there wouldn’t be. Do you hear that about iMessage, SMS, email, or the numerous other messaging services?


Google does scan for child pornography in Gmail: https://www.pcworld.com/article/2461400/how-google-handles-c....

This is done with PhotoDNA, a system used by many large tech companies for child pornography detection: https://en.wikipedia.org/wiki/PhotoDNA.


Thanks for that link - I can’t edit my comment above now to note that some scanning does happen. I wonder how well it works; the false positive rate must be huge. The idea of someone looking at my account and playing abuse/not-abuse roulette is disturbing.


Well it's an automated system, so it's highly unlikely that someone is reading all (or any) of your messages. The volume of messages Facebook and Google process every day is astronomical, so no manual oversight process would scale. It's similar to how email spam filters have worked in a completely automated fashion for years. In this case, PhotoDNA works by comparing image hashes, so it probably has fewer false positives than spam filters.
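To make the hash-comparison idea concrete: PhotoDNA's actual algorithm is proprietary, but a toy sketch of the same pattern can be built with a simple "average hash" over a downscaled grayscale grid, matched against a blocklist of known hashes by Hamming distance. Everything here (the hash scheme, the threshold, the function names) is an illustrative assumption, not PhotoDNA itself:

```python
# Toy sketch of hash-based image matching, in the spirit of systems
# like PhotoDNA. The real algorithm is proprietary; this stand-in uses
# a simple "average hash": bit = 1 where a pixel is brighter than the
# image's mean brightness. Near-duplicates (recompressed or lightly
# edited copies) produce hashes within a few bits of the original.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def matches_known(h, known_hashes, threshold=3):
    """Flag a hash that is within `threshold` bits of any blocklisted hash."""
    return any(hamming(h, k) <= threshold for k in known_hashes)

# A 4x4 "image" and a lightly altered copy (e.g. after recompression).
original = [[10, 200, 10, 200]] * 4
altered  = [[12, 198, 11, 203]] * 4

known = {average_hash(original)}
print(matches_known(average_hash(altered), known))      # True: near-duplicate
print(matches_known(average_hash([[128] * 4] * 4), known))  # False: unrelated
```

The key property is that matching is against a fixed list of hashes of already-known images, which is why the false positive rate can be far lower than content classifiers or spam filters: nothing is inferred about novel images.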


But with a powerful search tool, one could go hunting for any 'type' of person they wanted based on social-association info, geolocation, or keywords. It's not benign or unwieldy just because it's large.


The links above mention that humans verify hits - but yes, it would seem unlikely for the average user.


> Do you hear that about...

Yes.

The reason why you don't hear about it is because of all the work already being done to combat it.

How much work have you done in this field?


Well, there's a balance to strike. Running PhotoDNA [0] on every image sent is a pretty good practice. Raising flags on any content that might break community guidelines is a completely different story. Two users might willingly want to break the code of conduct between themselves for whatever reason -- and Facebook wants to be able to halt that. In contrast, there's no legal grey area around sharing child pornography. Using an automated tool just for that is great -- extending it to the platform's entire set of guidelines is not.

[0]: https://www.onmsft.com/news/microsoft-updates-photodna-softw...



