The distinction is that Facebook is failing in its moderation efforts while Parler prohibits itself from even trying. There's also the fact that Facebook has such an enormous footprint of positive or neutral content. It would be like shutting down air travel because some planes crash.
Is this not an absurd standard considering Section 230 exists, where the government literally tells platforms to not moderate? From my understanding, Parler has been taking down explicit calls for violence; the bar for removal is just set high.
> Is this not an absurd standard considering Section 230 exists, where the government literally tells platforms to not moderate?
Section 230 does not "literally [tell] platforms to not moderate". It removes some degree of liability for content not produced by the host itself.
If I, on Twitter, make a false statement (libel) against someone, Twitter is not liable for it; I am. Twitter could still remove that false statement, either on its own or as directed by a court during or after a suit against me by the injured party. Whether Twitter needs to remove it in order to avoid liability depends on other circumstances. For instance, if my content is actually illegal (by some measure), then Twitter could be legally obligated under other rules to take it down. They remain free of liability for the post itself so long as they (eventually) comply with the law and remove it; but if they permit the illegal content to remain, they could eventually become liable for it under those other statutes.
Moderation is, as a consequence of the laws in place, both permitted and occasionally mandated.
You can't really use a letter from the company that dropped them as a source; of course they are going to say all of this. Twitter recently let "Hang Mike Pence" trend for a while before it was taken down. Did they act fast enough? That's subject to interpretation. We just had a summer of riots that caused quite a bit of destruction; did they act fast enough there? Again, there isn't a science to this, it's highly discretionary.