
> The popular radio program “On the Media” feared Musk’s support for free speech would lead to a free-for-all environment rife with child pornography. But that’s a strawman: child pornography is illegal.

This is itself a strawman.

No one thinks Musk will permit it on Twitter. The gutting of the moderation teams who tackle it is the concern. An underenforced rule is often not a very effective one.



All online platforms need moderators to prevent things like spam and illegal content. That doesn't mean they can't also uphold free speech principles, though.

Musk really went crazy with cutting staff, but I'm not sure if it was because he wrongly thought he needed to cut the moderation teams in order to support free speech or if he just did it because moderation is expensive.


Why is spam always glossed over in these "principled defenses" of free speech? It's legal speech.


I love me some free speech, but I don't think I've ever met a true free speech absolutist.

Most of us are just fine with limiting speech when it's for the public good. We don't want companies to be able to lie about their products, so we support laws against false advertising. We're okay with people getting arrested for making bomb threats. We even approve of compelling speech when it means forcing companies by law to label the ingredients of their products!

Left unchecked, spam makes the services we love unusable. It can prevent us from having those same discussions we flock to social media platforms to try to engage in.

Social media platforms should provide a constructive environment where ideas can be freely discussed, and that means moderation is necessary for things like keeping out spam, keeping discussions on topic, and even filtering out trolls.

Content moderation is essential to a healthy forum, but obviously care has to be taken to maintain a balance between restricting and allowing content, guided by what would best facilitate the constructive discussion of ideas and not by personal feelings about the ideas themselves. It's not an easy task, and it's thankless work, but if it weren't for moderation none of us would be here talking about how spam should or shouldn't be allowed to ruin everything.


> Most of us are just fine with limiting speech when it's for the public good.

I'd edit this. "Most are just fine with limiting speech when it isn't them." The "public good" to, say, Andy Ngo is going to involve throwing a whole lot of leftists in prison.

This whole thing is just about having the guy you like be the one making the decisions.


For the public good? You mean “aligns with what they think is for the public good and their own views.” There is no such thing as absolute free speech; there are always limits, and really it's just a question of where the lines are drawn.


Because it's not very interesting: nobody has a good-faith reason to argue in favor of allowing SPAM. I assume it's being used to demonstrate the arbitrary nature of what speech people would want to allow, but it's a bit pointless. SPAM falls into the category of things that are arguably simple abuse of the system, using it in an unintended way. SPAM is not discourse, or an ideology, or a belief. SPAM is a pattern of behavior. The general idea is simple: in principle, expressions that are legal should not be disallowed on moral grounds, but disruptive patterns of behavior absolutely should be.


Some people like to post a word or a short sentence or some kind of artwork many, many times, everywhere they can. They enjoy knowing their message is seen by many. For them, it's not pointless.

At what point does it become spam and we restrict what these people view as free speech?


I hate that I am answering this, as if it's really a question posed in good faith. But again, the problem isn't the expression. The word or artwork isn't the problem. The problem is the disruptive behavior pattern.


Is a whole bunch of people calling somebody a "fa***t" in DMs discourse? Lots of people have told me that this isn't illegal so Twitter should do nothing.


This is a good example of a question posed in bad faith. I shouldn't have to reply, but unfortunately in this situation if I don't address it, it looks like I'm simply unable to, which is not the case.

I have a stunning revelation to make:

I am not in favor of allowing all legal things on the principle that because they're legal, they should be allowed. In fact, given that the thing you're replying to is arguing in favor of moderating SPAM, I figured it would be apparent. Apparently, it is not.

Furthermore... Like SPAM, harassing someone in the DMs by calling them homophobic or racial slurs is not "free expression." It's harassment. Harassment, libel, etc. are not things that everyone who believes in free expression as a principle is trying to defend (hopefully not even most).

I'm mostly not going to address the "lots of people have told me" and disregard it, since it's probably made up for the bait. But if not, I mean... Good for all of those lunatics, I guess. I'm not associated with them, and I don't like Twitter or any social media platform to begin with.


Is a whole bunch of people calling someone a Nazi any different? It's a signal of social disapproval, even if it's distasteful.

I think you can meaningfully distinguish individuals expressing distasteful speech from coordinated campaigns and harassment, and spam falls under the latter.


We can have a discussion about what sort of speech is unacceptable on a social media platform. My concern is largely with the people who insist that we shouldn't even have the conversation about insulting messages because of free speech absolutism or whatever.

The "oh but lefties are mean too" argument immediately retreats from the idea of free speech absolutism. I'm down for that.


> My concern is largely with the people who insist that we shouldn't even have the conversation about insulting messages because of free speech absolutism or whatever.

That's not where I would draw the line against free speech absolutism. Insults or rudeness from individuals should be permitted; insults from groups or coordinated campaigns are where I would draw that line, because that starts crossing into inciting a mob. Mobs of any political persuasion are undesirable.

Uncoordinated-coordination seems like an emergent phenomenon of social media though, which is why this is a tough issue. It's almost like we need some kind of back pressure against virality to keep mobs in check.


It's more that the SPAM issue shows that "speech people" don't have an actual working methodology for distinguishing 'good' speech from 'bad' speech any more than the very people they accuse of being censorious do.


Even "free speech absolutists" are not actually absolutists, because almost all of them have the asterisk of "within the law." The law is quite arbitrary too, with my most favorite bit of nonsense being obscenity law. The point isn't that you should allow everything all the time, it's that in general banishing ideas or expressions because they're immoral sucks, and I don't like it out of principle. There are, of course, other reasons why we limit expression, and some of those are more reasonable in nature, even if it's not always a good idea.

That all having been said... SPAM and harassment are not problems because of the expression itself, they're problems because of the disruptive patterns of behavior. The point is not that you can't say something, or have a given opinion, or etc.

I'm not really sure how this came to be everyone's ultimate catch-22 on free expression when there are more obvious caveats, such as how arbitrary the law is. But as arbitrary as the law is, it's like gofmt. Nobody's favorite, but everybody's favorite. (This is possibly one of the worst HN analogies this month, which now that I think about it, should probably be a thing someone tracks.)


"That all having been said... SPAM and harassment are not problems because of the expression itself, they're problems because of the disruptive patterns of behavior. The point is not that you can't say something, or have a given opinion, or etc."

Okay, but to ban speech on this basis is once again to pass judgment on its value as a contribution to the discourse, or whatever avenue for communication is at issue. That's why the absolutist position is ridiculous. It isn't navigable from any perspective save for the very perspective they are already criticizing.

Also, I'm not sure why you chose the word arbitrary. That's not what arbitrary means. Obscenity laws aren't arbitrary at all, they are based in specific judgments related to a community's perception of what is and isn't acceptable. I'm not saying obscenity laws are good or especially well-reasoned, but they are clearly not arbitrary. Perhaps you meant subjective/un-objective?


What counts as "obscene" sure feels arbitrary, but fine. Subjective.

> Okay, but to ban speech on this is once again to pass judgment on its value to the contribution of the discourse or whatever avenue for communication is at issue. That's why the absolutist position is ridiculous. It isn't navigable from any perspective save for the very perspective they are already criticizing.

The key point that I've been failing to convey effectively is very simple: with SPAM, the expression itself is not the problem. If you post it 10,000 times responding to unrelated people, that is a problem.

(I realize that commercial SPAM is possibly what you are referring to here but... That sort of SPAM is more or less permitted on social media, so it's kind of neither here nor there.)

This generally follows: if you DM someone to yell racial slurs at them, you are harassing them. It's not about the platform banning naughty words, it's about banning disruptive conduct. The issue is the behavior, not the ideas expressed in it.

The "absolutist" position is basically never actually "absolutist". I initially thought people were interpreting it literally as a joke or something, but it seems like it has been taken pretty seriously. Yet, there are exceedingly few people who think that unprotected speech like CSAM should just be allowed. They DO exist, but I have a feeling the speech absolutists you are referring to do not. Doesn't that already make this discussion moot?


How is obscenity arbitrary? What's considered obscene is related to what's considered not acceptable in society... you are acting like people arbitrarily decided that ducks are obscene...

>The key point that I've been failing to convey effectively is very simple: with SPAM, the expression itself is not the problem. If you post it 10,000 times responding to unrelated people, that is a problem.

You have conveyed that but it is not a useful metric by which to filter things from an absolutist standpoint because you have to make a value-judgment on the worth of the speech in regards to the venue... exactly what I said before.

>This generally follows: if you DM someone to yell racial slurs at them, you are harassing them. It's not about the platform banning naughty words, it's about banning disruptive conduct. The conduct is about the behavior, not the ideas or expressions expressed in them.

It's not that it's disruptive... it's that it's harassment, which is already a civil action and likely criminal in your jurisdiction as well. If you are gonna talk about how arbitrary laws are... maybe know a law or two?

>The "absolutist" position is basically never actually "absolutist". I initially thought people were interpreting it literally as a joke or something, but it seems like it has been taken pretty seriously. Yet, there are exceedingly few people who think that unprotected speech like CSAM should just be allowed. They DO exist, but I have a feeling the speech absolutists you are referring to do not. Doesn't that already make this discussion moot?

I'm not Elon Musk saying I'm buying Twitter in order to support free speech... so don't look at me! I don't have problems with content moderation, because I'm not naive.


I am disengaging at this point. I do genuinely like discussing these things, but I don't think we're speaking the same language.


Which side of that line do neo-Nazis trolling Twitter fall on?


Trolling is a pretty vague word; it doesn't really mean anything specific. So it's hard to reply to this with any degree of seriousness, if it even was serious to begin with. That said, on properly moderated forums, trolling would usually be handled by a human who proactively moderates discussions. Social media can't really do that, because the scale of the moderation team they'd need would be literally unthinkable. It's clear that they need huge moderation teams just to keep up the crappy standard of handling reports inconsistently that they have today. Exactly what to do with that information, I don't know.


Trolling is online speech used to deliberately upset others. That's it. That's what it means. It has a clear definition.

You might have wanted to say that no one can agree on what speech is trolling and what speech isn't, but that's not because the word is vague. It's because people disagree on the deliberate and the upset part.


It may be clear but it is also broad. The law is more narrow: there are forms of trolling that are illegal speech and there are some that are legal.

There may be a good moral argument to draw that line differently when the consequence is a ban from a commercial platform vs being locked in a prison.


I wouldn't define it as limited to online speech. And most trolling I'm familiar with is calling out asshats and holier-than-thou types in a sarcastic and funny manner.

No definition is as concrete, unmalleable, and unchanging as you seem to imagine.


No, I didn't mean to say that. Case in point... Even that definition is vague.


Can you elaborate? It seems definitionally straightforward to me, so I'm not sure what I'm missing.


"online speech used to deliberately upset others" does not actually distinguish trolling from other malicious behaviors, such as just being rude and flippant on purpose, so it is not very precise. It also doesn't really describe the actual kinds of behaviors that trolls engage in, but rather constitutes a class of behaviors that are not necessarily obviously connected, so it's vague. There are a lot of different ways that people troll, and different kinds of trolling, and I don't feel like that definition really summarizes them. For example, the term "concern trolling" is generally included under the umbrella, but it's actually more subtle than just being used to upset others; it's subversive, but the goal isn't necessarily just to upset others.

Truthfully, the two observations are related: The word "trolling" being kind of vague is probably the main reason why people do not agree on what actually constitutes it.


from this thread, it sounds like you're saying spam has a clear, precise definition but trolling does not.

what then, in your mind, is the clear, precise definition of spam?


I'm a little frustrated here, because I never attempted to imply that SPAM is easier to define than trolling. If that's somehow something people are legitimately reading out of my replies, then I must've messed up somewhere. All I have been trying to suggest is that nobody has made a good-faith argument in favor of allowing SPAM, which is not the case for trolling. But the thing is, while neither has a concise definition, and social media moderation is imperfect at dealing with both, it is MUCH easier for a human to distinguish SPAM from trolling. In some types of trolling, the very point is that it is difficult to distinguish from a good-faith post; if it weren't, it would be bad bait. Whereas many SPAM patterns, by nature of being SPAM, are detectable just by looking at posting patterns and not even contents, which is basically the way that social media handles such content. You can't do that for human trolls, because from a high level, human trolls don't look much different from other users, especially depending on what kind of troll you're dealing with.
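The "posting patterns, not contents" idea above can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual system: the thresholds, the normalization step, and all names are made-up assumptions for the example.

```python
from collections import defaultdict, deque

# Illustrative thresholds (assumptions, not any real platform's rules).
WINDOW_SECONDS = 600      # only consider the last 10 minutes of activity
MAX_REPEATS = 20          # same message posted too many times
MAX_RECIPIENTS = 15       # same message fanned out to too many distinct users

def normalize(text: str) -> str:
    """Crude canonical form so trivial variations of a message still match."""
    return " ".join(text.lower().split())

class SpamPatternDetector:
    def __init__(self):
        # user -> normalized message -> deque of (timestamp, recipient)
        self.history = defaultdict(lambda: defaultdict(deque))

    def record(self, user: str, text: str, recipient: str, now: float) -> bool:
        """Record a post; return True if the posting pattern looks like spam.

        Note: the message contents are only used to group repeats; the
        spam signal itself comes entirely from volume and fan-out.
        """
        events = self.history[user][normalize(text)]
        events.append((now, recipient))
        # Drop events that fell out of the sliding window.
        while events and now - events[0][0] > WINDOW_SECONDS:
            events.popleft()
        recipients = {r for _, r in events}
        return len(events) > MAX_REPEATS or len(recipients) > MAX_RECIPIENTS
```

The point of the sketch is that nothing here judges what the message says; only how often, and at how many unrelated people, it is aimed.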

I am kind of surprised at how many different ways people have interpreted what I said. I'm frankly feeling a little defeated.


"such as just being rude and flippant on purpose"

How is that not trolling?


Trolling almost always involves fishing for a reaction by acting dishonestly or misleadingly. When you are simply rude and flippant on purpose, that's just being an asshole. If someone is rude to me, I don't say "ah, I just got trolled."


It sounds relatively straightforward to make it "deceptive speech used to deliberately upset others or undermine discussion." Is that accurate and precise enough to cover your conceptualization of trolling?


You don't think that being purposefully rude is to solicit a reaction? I guess we'll have to disagree about that.


Yeah, I think it's to solicit a reaction 100%, but it's not subversive at all. Their intentions are clear. The "coaxed into a snafu" meme hints at the nature of what makes trolling unique versus just flaming. Reading back the last part of that sentence has teleported me back in time about 20 years.


It's definitely subversive, as it's subverting communications norms by being rude in the first place.


I mean in a different sense, in the sense that it is insincere. That's the problem with trolling right there. If you're sincere, it's obviously not trolling.

Arguing that something is trolling because it solicits a reaction, or that it counts as trolling because it's disruptive, doesn't make sense. You can't distinguish trolling without knowing someone's motivations. Posts that could be trolling could just as easily be venting, or bringing up a genuine concern that just happens to be contentious, etc.

Otherwise, flaming people in general is obviously trolling. That's not the way the word trolling has been used historically.


It's not insincere. Perhaps you mean unkind?


I just looked and plenty of definitions of trolling seem to invoke the same idea of insincerity.


How does one tell which ones are trolling and which ones are sincerely expressing their beliefs?


it is or can be. It's a weird case though because you could theoretically drown out everyone else's speech with it. Like imagine if there were no spam inboxes or spam restrictions on SMTP or email inboxes and for every one good email you got, there were 10K spam emails about enlargement pills. It would make email much less useful and maybe unusable.

I have thought about this before and I get kind of annoyed when people assume spam is a given exception to free speech. Free speech by itself doesn't imply a limit. So I think you have to carefully design around this issue or just let it happen and maybe it dies off on its own since too much spam chases everyone away and then what's the point of spamming?

edit: drowning out other people's speech may be allowed by total free speech but it's also contrary to the intention I think. I think free speech means that any speech someone wishes to make, they may do so without restriction or censorship by the medium. This isn't a completely satisfying definition though.


That's because you can spam anonymously. If email senders had to be securely identified, as in China, spam would not be a problem: spam, and a ton of CAN-SPAM lawsuits would land on you.

Most of the problems that seem to justify prior restraint come from poor source identification. If it's illegal, deal with it through the legal system, after the fact.

Facebook may have had the right idea with their "real names" policy, if they'd stuck to it and required strong authentication.


Email could still be usable in that case using personal whitelists and shared whitelists.


Imagine for a moment comparing the volume of spam to the volume of oration. Unlimited volume is not permitted even under free speech; that is a disruption to public life.

There is no reason to assume that this theory would fail to be legitimized in courts. Spam is not legal in other arenas, i.e. phone calls & texts.

The fact that the operators of spam bots are difficult to prosecute does not mean what they are doing has legal grounds.


Spam restrictions aren't generally applied by the government, and therefore don't fall under the constitution. The law doesn't require anyone to listen to someone else's speech. It is not a violation of anyone's rights to discard their emails unread using an automated filter.


I can't make sense of your statement.

Can you clarify what this means?


Hmmm. I thought it was straightforward. I'll unpack it:

> Spam restrictions aren't generally applied by the government, and therefore don't fall under the constitution.

The "free speech" constitutional amendment stipulates that the government can't restrict speech. It doesn't apply to a mail service provider, which is free to reject whatever it likes.

> The law doesn't require anyone to listen to someone else's speech.

Your freedom to speak to me ends when I decide I don't want to listen to you. I have a right to not listen, and I have a right to reject spam.

> It is not a violation of anyone's rights to discard their emails unread using an automated filter.

I don't know how to say that more clearly; using a spam filter doesn't violate the US constitution. Email would be unusable without spam filters.


Free speech is about allowing any idea to be expressed. Spam is not an idea, but a way of expressing something, so it's not inherently against free speech to restrict it.


Yet many will complain that what they call "hurt feelings" are irrelevant regarding whether or not their statements were OK and should be welcome. What separates a "way of expressing" from a true "idea"? "Abuse" is not an idea, but a way of expressing something? "You suck" is something of an "idea," but "I hope you die" or "watch out for your kids" if it's not a literally true threat? Not so much... And if it IS a true threat? For that matter... "mockery" is not an idea... "hateful" is not an idea... "insulting" is not an idea? Seems like you can take that definitional dodge to lengths to allow just about any sort of moderation.

(Any given post of spam, basically just unsolicited peer-to-peer advertising, seems evidently an idea, IMO - it's very much something you are supposed to believe and act on. If it's a volume/repetition/thoughtlessness thing, how's that different from a wave of trolling posts?)


These are all heavily subjective and often used as code for certain opinions. In current political discourse, “hateful” usually means “disagrees with the woke ideology”, so a statement like “kill all white people” would not be considered hateful, whereas “children under 18 shouldn't be allowed to take unnecessary cosmetic surgery” would.


You carefully avoid saying that "spam" isn't subjective, but if you think it is, your reply wouldn't be relevant, so: I doubt you'll have any luck trying to "objectively" establish criteria for spam classification that nobody disagrees with.

Or what about the threat examples, since you claim all those categories were subjective? You didn't engage with that one, or many of the others. "I know your kids go to [specific school], watch out"? Subjective? "I'm going to kill you and your family?" Subjective or straight up threat? How about mockery or insults? "Your post is stupid and you are stupid?" Is it subjective that that's insulting? So is it an "idea" or just a particular way of saying I disagree with you? "You are clearly politically motivated and not arguing in good faith?" Maybe that one is actually an idea!


> You carefully avoid saying that "spam" isn't subjective, but if you think it is, your reply wouldn't be relevant, so: I doubt you'll have any luck trying to "objectively" establish a criteria for spam classification that nobody disagrees with.

Alright. Spam is when you post something to a space where it's not relevant (not really applicable to Twitter since it doesn't have topical spaces), or post something repeatedly without a good reason.

> "I know your kids go to [specific school], watch out"? Subjective? "I'm going to kill you and your family?" Subjective or straight up threat?

Those are specific threats, which are not protected as free speech.

> "Your post is stupid and you are stupid?"

Nothing wrong with saying that, aside from the fact that it's not constructive and you'd be better off explaining why you disagree.

> "You are clearly politically motivated and not arguing in good faith?"

Same as previous.


"I know your kids go to [specific school], watch out" isn't a specific threat.


How not? It expresses intention to harm a specific individual at a specific place if a certain implicit condition is met.


Is "shut the fuck up, fa***t" an idea? That's precisely the sort of thing that I see people say shouldn't lead to bans on Twitter.

This is very clearly not just about "expressing ideas." There is very real behavior that is designed entirely to hurt other people and is 100% legal that is at the center of this discussion of online moderation.


> Is "shut the fuck up, fa**t" an idea? That's precisely the sort of thing that I see people say shouldn't lead to bans on Twitter.

It's not an idea, though I don't see a particular reason to ban it, unless you have a platform like Saidit that generally encourages constructive discussion over baseless insults (which I consider a great idea).


How does this jibe with the claim that free speech is about expressing ideas?


I'm not saying it goes against free speech to ban this particular sentence. It's just hypocritical if you don't also ban other insults.


A kind of motte-and-bailey, I guess.


What desire does a platform like Twitter have for spam? We have laws that prevent people from uttering threats. Twitter has rules against spam.

It's their platform and they want their users to be free in their discourse, they just don't want spam.


It's fair to assume he cut moderation teams because they were intimately involved with suppression of speech at the behest of the government and political interests.

Determining which moderators were witting participants in the chilling of free speech is more difficult than axing a great many of the underperformers and keeping the few with real, tangible contributions.


Being “intimately involved with suppression of speech at the behest of the government and political interests” exactly describes the job of removing child sexual assault content.


Are you familiar with logical fallacy?

Yes, bad things are on the internet.

That is in no way the content or information I'm describing, nor does it fall into the category of legal free speech.

Bringing up illegal things to argue for the suppression of legal speech is, I don't know, moronic.

You are perpetuating a tired trope, and I haven't the energy to persist with this discussion.


https://twitter.com/elizableu/status/1599484564832854017?cxt...

This is where you get bent.

Your logical fallacy is detailed here:

https://en.m.wikipedia.org/wiki/Appeal_to_emotion

I hope that, despite what you are implying, this success by Musk is not upsetting to you.


A better word is partisan or authoritarian interests.


Hey, it raises the question: how many authoritarian and partisan tactics can one utilize before being considered a partisan authoritarian?


Why do you think that is “fair to assume?” To me, it sounds like wild conjecture.


Musk has stated that Twitter interfered with elections. I'm not sure what evidence he's provided for that, but he did say it.


he has said a lot of things that are wildly inaccurate.


Given his usual proclamations, (e.g. level-5 self driving) I think it is fair to assume that he is not very high on evidence.


And Trump still hasn't conceded 2020... who cares what people say when it's nonsense?


The groups did a fair amount of censoring and controlling political information.

He bought Twitter to stop that from happening.

So, it's safe to assume, in my opinion, that he fired a bunch of people who care more about their ideology than they care about fair or honest debate, or about acting in accordance with the principles of freedom or of the United States.


You don't need to assume. Twitter has openly stated the political and ideological values that drive its moderation.


Where?


If I had to guess what is in Musk's mind, I would bet that he thought he wouldn't get anywhere with an organisation full of employees who actively hate and resist him, so this was a way to shake the tree hard, to have a chance at a fresh start. That doesn't mean a lot of those teams won't be restaffed, just not restaffed with the same political activists.


This is the most agreeable take to me. Musk's organization style is a strict, recursive hierarchy, with a unitary and grandiose vision divided and conquered by VPs and managers. He cannot have a distributed, self-organizing collective like a typical Web services company, and must simply be cutting employees down to the minimum.


Yeah, I'm not sure how anyone could be surprised. It's completely consistent with his explicitly stated plan and goals.


I agree. Musk purchased a company staffed by people that believe they have a duty to silence opposing views. There is simply no good reason to keep staff that are actively opposed to the new owner's goals. It was either gut the company and take the hit now or battle for years with staff that sabotage your goals.


> Musk really went crazy with cutting staff, but I'm not sure if it was because he wrongly thought he needed to cut the moderation teams in order to support free speech or if he just did it because moderation is expensive.

It's largely cost-cutting for sure, but I also think he believes he needs to start over in a lot of these departments, the entrenched beliefs of how things should be moderated was undoubtedly very deep. I suppose we'll see!


All centralized platforms want to control the content, but this is not actually a requirement of an online platform.

Spam in e-mail is a problem because spam consists of private messages sent to individuals, but public messages can be quickly categorized by a community, and one's own characterization of a message, for the purpose of sorting messages for display, can be based on whether it's chosen for retransmission by people one has high trust in.

Imagine having a feed which is initially a horrible spam-filled mess, and then, as you encounter things in it you like, you upvote them, until your feed is re-sorted to include mostly quite interesting things. When you again find spam or uninteresting content, you reduce the weight of the people who retransmit it.
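A minimal sketch of that upvote-driven reweighting, assuming a per-reader trust table and a simple additive update (all names and the update rule are illustrative, not a real protocol):

```python
from dataclasses import dataclass, field

@dataclass
class Reader:
    # Hypothetical per-reader trust weights: person -> weight.
    trust: dict = field(default_factory=dict)

    def weight(self, person: str) -> float:
        return self.trust.get(person, 0.0)  # strangers start at zero

    def upvote(self, retransmitters: list[str], delta: float = 1.0):
        # Liking a message raises trust in everyone who passed it along.
        for p in retransmitters:
            self.trust[p] = self.weight(p) + delta

    def downvote(self, retransmitters: list[str], delta: float = 1.0):
        # Spam or boring content lowers those same weights.
        for p in retransmitters:
            self.trust[p] = self.weight(p) - delta

    def sort_feed(self, feed: list[tuple[str, list[str]]]):
        # feed items are (message, retransmitters); highest total trust first.
        return sorted(feed,
                      key=lambda m: sum(self.weight(p) for p in m[1]),
                      reverse=True)
```

The filtering is entirely client-side and per-reader, which is the point: no central moderator decides what sinks or rises in anyone else's feed.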

Moderation always risks being political manipulation, and it's probably too dangerous to democracy to allow it. It's certainly too dangerous to accept foreign moderation (I'm from Europe).


To really drive home the point, simply consider that pre-Musk Twitter did not cite free speech as a core value.


Unless you're splitting hairs and trying to say "free speech" and "free expression" are fundamentally different, they absolutely did.

July, 2021: https://archive.ph/VlpYI

> Defending and respecting the user’s voice is one of our core values at Twitter. This value is a two-part commitment to freedom of expression and privacy.

> This is a global commitment, and while grounded in the United States Bill of Rights and the European Convention on Human Rights, it is informed by a number of additional sources including the members of our Trust and Safety Council, relationships with advocates and activists around the globe, and by works such as United Nations Principles on Business and Human Rights.


I think it's a little late to still believe Musk is a fan of the principle of free speech, after events such as his little pissbaby tantrum when people started changing their Twitter name to Elon Musk and their avatar to that photo of him balding.


Impersonation being banned is a bad example. A better one is Musk refusing to let Alex Jones back on, which is what led to the head of trust and safety resigning because he realized this was just Musk making up rules as he went along.


This is the quote they're referencing:

NATALIE WYNN (the mind behind Control Points, a left-wing YouTube channel): I do think that looking at 8chan is a pretty good case study in what happens when you say, "okay, let's just let people say anything." People are posting child pornography to this website on a fairly frequent basis.

I think you're right. "On the Media" was just talking about how an "anything goes" policy leads to a place where nobody wants to hang out, where people post illegal stuff even though it's technically not allowed. Which I think is valid, given Musk has previously said Twitter should allow anything legal.


For reference, it's ContraPoints.


It's been grimly entertaining to watch the Musk/Kanye cycle: from inviting him back as a symbol of free speech, to discovering that Kanye interpreted that as meaning antisemitism was now green-lit, to banning him again.

I guess that's the context for today's free speech discussion.


> No one thinks Musk will permit it on Twitter. The gutting of the moderation teams who tackle it is the concern. An underenforced rule is often not a very effective one.

This isn't cause for concern because they've already caught some longstanding CP. They are doing a better job now.


I mean, they are actually not doing a better job. Nothing like that has been demonstrated at all.


[flagged]


Again, it’s not about whether or not he wants the stuff on Twitter. If you fire all the staff who find it, handle user reports of it, and remove it, the end result is the same.


It doesn't matter how many flunkies you have on staff. What matters is whether the stuff is effectively caught and removed (and probably reported to the authorities).

Time will tell on this, but currently Musk looks a lot _more_ credible on this than Twitter 1.0. They seemed to have been a lot more focused on repressing wrong-think than dealing with actual criminal behavior. And this isn't just abstract--they helped ruin countless lives.


Do you have data that shows Musk is taking down more CP or similarly vile illegal content than "Twitter 1.0?" Or are you seeing the news that he took down a few noteworthy hashtags that had previously been ignored and using that as broader evidence?


> tolerate child pornography (unlike the prior owners of Twitter)

Twitter previously tolerated this? That’s the first I hear of that. Do you have more info?


> (or who's old enough to remember Polanski)

That's not a resolved matter by the way. He's still alive, still a convicted rapist, still a fugitive, and still defended and respected by Hollywood-centric media. He continues to evidently be immune to 'cancellation' because... influential people in movie industry like his movies I guess.


That was a product of a bubble of 1970s liberalism that was open minded towards sex between adults and young teenagers. It was an intersection of the 1960s sexual revolution, and the 1960s trend of treating kids more like adults (e.g. Tinker v. Des Moines)—but prior to the #MeToo era focus on affirmative consent and power dynamics. They simply couldn’t understand the backlash against Polanski.


Are you implying Balenciaga is .. trafficking children or making child porn or something? Not sure what you mean by "the person we should be worried about."

My understanding is they made an offensive photoshoot using children but I wasn't aware of any sexual abuse allegations. I'm aware of the court opinion on the blanket and the sexually suggestive stuffed animals. It's inflammatory, sure, but there's definitely a case that it's artistic as well.


Replying to my own comment to say that I've spent my evening reading up on the controversy, looking at the ads, reading the SCOTUS decision, studying Borremans.

I'm pretty sure this is just a more digestible, mainstream version of the Wayfair human trafficking conspiracy theory.


No abuse that I've heard of. But very offensive. Look further (not at work). It's too gross to describe on HN.

I'm pretty strong on free speech myself. But if you want to repress awful stuff, Musk is not the place to start. And the narrative that he's somehow lowering the level of discourse on Twitter is absurd.


Going further, it feels like the ad campaign did contain some social commentary about the sex-posi, BDSM-posi, generally more adult world children have to navigate. The fact that people find it uncomfortable feels like the point.

But as a society, can art and images only be taken literally and autobiographically now? That feels like old Christian ways when Jesus could not be depicted (or at least the art history part of my brain thinks of that)


I've seen it. It's a child with one of their stuffed animal purses in BDSM gear. And there's some wine in a shot. And there's text for a SCOTUS ruling that child pornography is not protected speech.

I'm honestly not seeing what's so grotesque about it all? Feels tame as far as fashion stuff goes. Wouldn't bat an eye in the '90s.


You haven't reached the bottom yet. Keep going until you hit Borremans. It's not 4chan, but I'm not listening to these people on the subject of standards and practices.


I looked up Borremans because I'd never heard of him, but apparently a book of his work in the photo shoot is some sort of pedophilic "code"? Or something like that?

What exactly is the issue with his work? That it depicts nude (but not sexualized, as far as I could find) children? And the children aren't real models afaict either.

I like what I see of his work. Reminds me of Francis Bacon. Deeply human art. There's something about that kind of figure work that speaks to the soul.

So yeah, not sure why this artist is "the bottom."


I think it's really creepy that you don't see anything wrong with a child in BDSM gear. And alcohol.

That's actually sadism.

I dunno, maybe you just don't get it. Do you know what sadism is?

On the other hand, I've yet to hear a convincing argument as to why it shouldn't bother me other than "oh just get over it it's not that bad", which isn't working with me, since I have no petty bourgeois sensibilities, and I know what I'm doing when it comes to art.

Why should I be cool with sadism?


I thought the ad was creepy and weird, but you're lying here. There was no "child in BDSM gear" in the photo. Why lie to make a point? That's creepy and weird, too.


I made a mistake but everyone knows what picture I'm talking about. It's not a lie.

So it was a child carrying a doll that had BDSM gear. But that's funny cus if the girl was the one in BDSM gear, that would make it wrong? Is that the line?


I'm curious why, if you believed you made a mistake, you didn't edit your comment at the time to avoid propagating misinformation (as the 2 hour edit window was still open when you made this response acknowledging it as a mistake)?

I personally am not familiar with this photo and don't really want to see it, so yes, your comment could have misled me.


There's no ulterior motive. I just messed up. I fixed it as soon as I got called out on it without objection and I left it there so people can see what the issue was.

If you are going to participate in this discussion, you need to keep up. You're attributing malice where there is none. That means you have to see the picture.

I'm not saying you can't say anything, just that I'm going to dismiss you out of hand.


The child was next to some glasses of wine. Why does that make you clutch pearls exactly?

Have you seen Big Daddy? The Adam Sandler film? Is that also on this level? Feels like the same thing. Child actors in an adult piece of media with sexual and violent and otherwise adult themes.


The thing is I just don't care how you feel. You haven't explained how the kid can consent to this photo shoot or why it isn't sexual abuse.

I want to know if you think it's ok if the kid wears BDSM gear. Yes? No?


Why do you want to know that when you've acknowledged in a parallel thread that that wasn't what happened? It's hard for me to read this any other way than as a deflection so that you can discuss a different set of events, which didn't happen, where you feel your arguments would fare better.

Given that we're already talking about a hot-button issue which is the subject of conspiracy theories, that seems dangerous.


Yeah, you're trying to catch me on a typo, but it's not gonna work cus I already acknowledged it.

I do know I hit a nerve with that particular scenario. I got all I wanted. Ok, so let's just move on. Let's get to the fun stuff.

So the line stopped at her wearing the BDSM gear. Why? Why not cross that line? Why is it ok for the bear to wear BDSM gear and for her not to?


I misunderstood. I thought we were having a discussion, but I can see you're looking to deliver a monologue. I'll leave you to it.


Lol. All I did was respond to the Adam Sandler movie with my own. I'm just trying to speed this up and get to the good stuff already.

Apparently, if it was the kid wearing the BDSM gear, it wouldn't feel like the same thing. It would feel wrong.

Of course, if you, or anyone reading this feels different, speak out.


For added weirdness “ba len ci aga” is Latin for “do what you want.” See for yourself with Google translate. It could also be translated as the more familiar “do what thou wilt,” a rather infamous occult credo.


The company was founded more than a century ago by a Spanish prodigy named Cristobal Balenciaga... stop with the conspiracist nonsense. Not here.


Sadly it is commonplace here (HN). You can find comments on HN spouting Tucker Carlson and co's outrage on a daily basis. I do not envy dang's job.

Moderating dog whistles at-scale is an unsolved problem. Quick, Paul Graham, seize the opportunity to change the world!!


Don't get so worked up, it's just a funny[1] coincidence. Noticing funny patterns is definitely a thing hackers do, my friend. It's taking them too seriously that's a problem, so don't do that!

[1] Weird, not haha.


Can't help but think of that Sartre quote about arguing with people in Bad Faith.


You’ll need to do some introspection to find any. I haven’t argued anything at all, and certainly not in bad faith. I simply presented a factual and amusing linguistic coincidence. Whatever conclusions you choose to draw from that fact are your own and tell us only about you.


This is starting to smell like Q-adjacent nonsense to me


That's basically what it is


And this gets to the heart of the problem. Twitter users think that because it was parody, the photoshoot was neither child pornography nor sexually abusive. Human beings recognize that it may have been parody but it was still child pornography and sexually abusive. It introduced, nay, immersed children in those scenes and behaviors.


But how was it abusive exactly? Child actors aren't allowed to be photographed in adult contexts anymore? I assume their parent(s) were present and involved and therefore it probably wasn't unsafe. Unless it's abuse by osmosis of their surroundings? Feels like Nathan Fielder's use of child actors was more offensive than this ad campaign and that was fine.


Twitter recently shut down several child exploitation hash tags that had been ignored until Musk took over.


Such as?


Is that really the sort of thing we need on HN?


You can easily find them on google if you're actually interested.


What a hero


At least some activists are saying that since the takeover, the moderation of CSAM has been dramatically improved after complaints had been falling on deaf ears for years. There are complaints that the old moderation team was ignoring clear and well-documented reports.

https://twitter.com/elizableu/status/1566255230374842369

https://twitter.com/elizableu/status/1594137408186073089

What's shocking is that apparently Twitter has had a massive CSAM problem for years and Twitter's crack moderation team apparently did very little about it, not even banning hashtags reported to them repeatedly. And none of this got attention until Musk bought Twitter.

[removed section about addition of new reporting option]

I say this as someone who is not in the slightest a Musk fan.


That thread is debunked in the comments; the CSAM report option predates Musk.

https://twitter.com/ChronicBabak/status/1594762640357982208


Thanks, I will remove that part of the comment. Assuming good faith from someone who appears to be extremely active in this area, perhaps this was an A/B tested feature and she didn't have it.

What I find more concerning about this is the way media attention is used offensively. Clearly CSAM has been a problem on Twitter for some time. But only post-Musk is CSAM on Twitter becoming a focus of attention in the media. Is it increasing or decreasing? We're likely to get only fearmongering articles about the T&S team layoffs.


I've been following her for a few weeks now. I think she is coming at it from a good position but she also seems to be a very big Musk fan, and takes what he says at face value. And many of the replies to her tweets come from very Qanon people.


About 90% of what she is saying is true, but she is not coming from a good place.

Eliza Bleu aka Eliza Morthland (aka Eliza Siep aka Eliza Cuts aka Eliza Knows) is the daughter of MAGA politician Richard Morthland and her "Bleu" trafficking advocate persona is an act. She's a former American Idol contestant, ex-gf of MyChemicalRomance's Gerard Way, associate of child molester Jeffree Star, & fundraising partner of convicted child rapist and conservative spokeswoman Felecia Shareese Killings. Bleu also coordinates with Mike "Who cares about rape? I don't!" Cernovich, one of the pizzagate amplifiers (and the man who got James Gunn fired), and is amplified by QANon rags like The Epoch Times. Bleu began her podcasting career by interviewing and platforming Tara Reade, the former aide of President Biden who was caught fabricating claims of sexual abuse.

She is, to be blunt, an incredibly proficient grifter and propagandist who specializes in weaponizing the topic of sex crimes.

Eliza has been a speaker at various Tesla events for years, and works hand in hand with Teslarati, an Elon Musk propaganda news site. They are the original sources of the initial bogus claim that Musk had moved against CSAM on Twitter.

When Bleu claims Twitter had a massive problem with CSAM and that the former Twitter execs did nothing, she is actually telling the truth. The platform relies too heavily on pornography for user retention, and it didn't/doesn't have the manpower or tech to filter out (at scale) the massive amounts of underaged porn and CSAM that comes with being a major adult content platform. Instead of prioritizing child safety and nuking all the porn on the site like Tumblr was forced to do by Apple, the prior admin opted instead to bury the issue even as it continued to grow into a massive albeit invisible problem. They deserve every criticism and attack Eliza Bleu has lobbed at them, regardless of her own actions.

Now Musk finds himself in the exact same bind. Twitter (still) needs porn to hold the site together, yet it's (still) thoroughly infested by CSAM threat agents. It's a tricky situation, and the wrong move sees Apple giving Twitter the Tumblr treatment.

So Eliza Bleu has been tapped to control the narrative to prevent this by using her survivor persona as a platform to convince everyone that Twitter 2.0 has actually addressed that awful CSAM problem. She's the figurative Iraqi information minister, up before the mics.

Observe her reactions to the Forbes article that cites Carolina Christofoletti (an actual researcher, academic, and CSAM expert who is respected in the infosec and OSINT communities). Christofoletti explains that the problem was never addressed at all, that CSAM hashtags are a ridiculous focus of attention, and that the situation has gotten worse. Bleu immediately goes into PR mode, attacking Forbes and Christofoletti in a profanity-laden tweet accusing them of having an agenda.

Not the behavior of someone actually concerned about child porn, is it?

Any reporting or research that contradicts the narrative she is molding is immediately a danger to her task. Bleu has attacked every other piece of in-depth reporting on this issue since her initial tweet claiming that Elon had purged most of the CSAM from Twitter. She is on damage control; her original claims were misleading, and now she will progressively be on the defensive as more of us in OSINT and the media call her lies into question.

Ironically, Eliza's current role makes her one of the biggest protectors of CSAM users on Twitter. Nothing helps them more than Bleu desperately attempting to propagate the narrative that they've been booted off the network when the reality is that they're surging well beyond anything anyone can imagine.

Keep watching her. You'll see the cracks.


"But only post-Musk is CSAM on Twitter becoming a focus of attention in the media."

False. Do queries about this issue prior to Musk's takeover.


The world has become so weird. That elizableu person responded to one of her tweets 'Important to note, Elon Musk himself “liked” a tweet in this thread last night. If this information wasn’t valid, if it wasn’t factual, that wouldn’t have happened.' and turned her story over to 'teslarati.com'



