I'm surprised it's taken this long for them to get somewhat serious about this. I've worked with clients who have hate brigades that report everything they do and publish in an effort to get their pages and ad accounts shut down. I've gone as far as finding and infiltrating Facebook groups that openly talk about this while planning their next attack. I reported the groups for months and no one at FB cared.
Without knowing who your clients are, or why they are being targeted, I have no idea whose side I would be on.
For example, what you are saying could be said by Cassie at Crowdsurf against the #FreeBritney crowd. But given that she's helping the public image of an unconstitutional and abusive conservatorship, I don't WANT FB to do what she wants.
However it could also be said by someone representing the latest person who has been canceled for failing to be sufficiently politically correct. And in that case I'd be sympathetic.
I think not knowing is a better framework from which to work out a neutral way to address misdeeds. We should not take sides and should be impartial when making these decisions (goose/gander).
That's not true at all. Context is the only way to make any useful judgements. A couple examples:
Person A tackles person B to the ground and holds them there against their will. Is that morally acceptable? There's no way to tell.
If A is B's estranged ex-husband and is upset that she hasn't returned his calls, most people would say it's unacceptable behavior. If A is a bystander to a knife attack by B on London bridge, most people would (and did) say that it is justified.
I completely agree. Outside of these toy examples, there really is no way to know the complete truth. But I don't think we should give up on doing the best we can to recover as much context as possible, and we certainly shouldn't fall into some sort of epistemic learned helplessness and try to make all judgements from a position of zero knowledge.
The problem is that anything you allow against the perceived enemy might come back to bite you if you suddenly are perceived to be the enemy. This is especially true when silencing opinions; you're not only taking away their right to speak, you're also taking your right to change your mind.
I fully agree with you that when judging a specific occurrence, the context is really important. Judging completely without context, on the other hand, is a good way to know whether the action is something morally acceptable in general.
In this case, the 'goodness' of the action clearly depends on whether you like the target. So I'd argue that this really is not a good thing in general. It's slightly worse if it hits the 'good guys' and slightly better if it hits the 'bad guys', but it's not good either way.
I'm not sure what you think a law would look like that is completely impartial. The very existence of a law represents a thumb on the scales of otherwise unconstrained human behavior. What specifically should we make laws impartial to?
We don’t have to consider ridiculous extremes. We need to consider our philosophy, mores and ethics to inform laws.
We don’t say, it’s illegal to commit theft, well, unless you’re the government or the judge, then it’s okay because we know you must have good intentions.
Okay, so we should not give exemptions to specific categories of people who are a priori assumed to be good. I think that's fair. Are you concerned that that's the situation in the original context? Or would be, if we somehow knew who the original commenter was talking about?
It would be healthier to not know the identity to avoid introducing unnecessary bias in the decision.
It shouldn’t be like: oh it was Joe the grocer, yeah he’s okay, let ‘im go. Vs, oh it was Ernie the latrine digger, he always makes my skin crawl; throw the book at him!
So if the hate brigades were being launched by a group with a long track record of bad-faith and abusive behavior, you don't think that should inform your decision making?
Why not make a rule to address all brigading? Why target it only at groups you like or dislike? The groups you like and dislike are not going to be concentric with other people's, so keep it consistent.
I'm at a bar; a doctor and a highway construction guy walk in with his buddies. They talk vaxxing. The doctor has his opinion, though he's not a virologist; the crew have diverse, maybe unsettled opinions, but they're giving me their opinions. What, the barmaid throws the doctor or the highway guys out, whichever she disagrees with?
That’s not her job. She’s not there to suss out truth or even narrative. We’re all there for community chit chat.
The WHO and the CDC have changed their minds over and over. Yeah, I get it - things change. Exactly: what's true today may not be true tomorrow, and that is the point. People should be able to have a discussion.
I am not sure why you are getting downvoted, as this is an astute analysis of the situation. I had the same thought: if the person you are replying to is helping promote, say, a cigarette company advertising to tweens, and the brigading group is mothers against child smoking, then I would be inclined to support what that group is doing to stop the marketing campaign. Similarly, if the advertiser is trying to advertise something like The Big Lie, which is actively harmful to US democracy and, well, a lie, then I would also want to support the group trying to report the campaign.
This doesn't cast a value judgement on whether any of this is good or bad, just that you'll never get an objective measure of the situation based on just what the person you are replying to gave us.
No. I am pointing out that based on the info provided and your own bias you might find the clients or the brigading groups that were talked about above to be either in the right or in the wrong. The way that comment was worded, it’s impossible to tell who the real victims are.
Also I think it can be argued that democracy and mob mentality are pretty damn related :)
> you might find the clients or the brigading groups that were talked about above to be either in the right or in the wrong
This is implied in the fact that no details were given, so I just find it incredibly strange that anyone would consider it an "astute comment". I suppose you must be referencing others in the thread who think they (or Facebook) can simply request a few details and efficiently judge correctly right or wrong.
> But given that she's helping the public image of an unconstitutional and abusive conservatorship, I don't WANT FB to do what she wants.
Imo if Facebook wants to be less toxic (no pun intended?) they would want to quell these as well. As long as we are algorithmically deciding what gets views, I'd prefer the bias skew positive and not negative, no matter how justified.
Vaguely - this client was in the fitness niche and made the mistake of upsetting the vegan bodybuilder niche. The vegan bodybuilder group went hard after anyone that said it was harder to put on muscle with just plants. No shadiness, no spam, just a disagreement about building muscle.
Looking at the comments, people are trying to solve a problem that has a simple solution. We don't need to discuss whether a given person's post or comment should be reported, or whether FB should follow the report, etc. We have a simple solution: let FB/Twitter/Youtube stop removing posts/videos that do not violate the law.
I understand that a lot of people will see comments and posts from people they don't like, and some of those comments might be mean, rude, etc. But that's the cost of becoming a public person - in the old days public persons were actors, politicians, etc. Today anyone can become one.
And I don't buy the platforms' "noble" aim of fighting "misinformation", "hate speech", and "conspiracy theories". There are so many examples of misinformation from "trusted" sources (e.g. covid vaccine efficacy [1], [2], Saddam Hussein's nuclear weapons, and so on) and so many conspiracy theories that turned out to be true (starting from Watergate, through Iran-Contra, to the Snowden revelations) that as a society we would be much better off without preventive censorship.
The official "trusted" sources above claimed 95% efficacy while in practice it turned out to be much lower [3], obviously because of "new variants" - as if it were a big surprise that coronaviruses mutate (common knowledge since the 1960s) and covid-19 is just one of them. So the claim that the vaccine would end covid-19 infections was very far-fetched, to put it delicately (avoiding hate words like "lies" or "cheap PR to sell more vaccines").
As for "hate speech" no one is forced to post/comment on Twitter or FB and meeting people who are not nice is just a part of life. Kids at school has to tolerate colleagues (and teachers) they don't like and nobody cares, first caveman also liked less or more other cave inhabitants and had to live with that.
That's the problem with corporations deciding what is right and what is wrong. Or what is considered hate speech.
Most people would agree Nazis are bad (except for the Nazis themselves); then you have anti-vaxxers: bad in my opinion, but clearly not an opinion shared by a large group of people.
We now live in a world where free speech is controlled by tech corps. Which is all well and dandy when their views align with yours but horrific when they don't. The other option would be to not have anyone policing speech, which has been shown to be problematic as well.
The legal system needs to step up and own online speech monitoring and removals.
> The legal system needs to step up and own online speech monitoring and removals.
(I'm guessing you're US-based.) So instead of many poor systems trying to do better, we'll end up with one uniform system working poorly, that just became a nuclear-hot political target implicating the 1st Amendment.
The only arguably positive thing that does is relieve pressure on the platforms, while not allowing any behavioral diversity and making any future moderation adjustments a massive culture-war fight.
It puts the burden of judging what is or isn't protected free speech or hate speech in the hands of actual judges who are appointed for exactly that purpose.
I just don't understand why anyone thinks judges should be adjudicating trust and safety issues, why they think that will lead to better outcomes, or why forcing private actors to carry speech they won't want to (which seems to be the goal of such proposals) is in any way proper.
Not to mention the staffing issues you're going to have, running all these disputes through the courts...
Because either they are a common carrier or they aren't. If they are a common carrier, they shouldn't be deciding, but they are also not liable. If they are not a common carrier, they can do what they want, but that carries some liability. The problem is, they want it both ways: to decide and have no liability.
You were doing okay up to that last disastrous couple sentences. The legal system most emphatically should not engage in monitoring and removal of speech. It's bad enough when private companies do it, and that should be opposed as strongly as possible — but government shouldn't begin to touch the decision of what opinions are and are not allowable.
Try to understand that the economy is structured on a system of laws that facilitate the provision of jobs and goods/services by companies, while minimizing harm and maximizing fairness. Are you anti-regulation in general, or saying something more like Facebook is a special case that for some strange reason should keep hiding from legal scrutiny?
I've addressed how I think it should be handled elsewhere in the thread. I'm just going to point out here that condescension — "Try to understand [that my model of the world is the correct one]" — doesn't help gain an audience for your opinion.
Sorry, just asking questions in a manner I thought was appropriate following your response with the words 'disastrous' and 'emphatic'. Honestly it's unclear how or whether you think this brigading issue should be handled. I see that you think Facebook's actions here should be "opposed as strongly as possible" but excluding use of our legal system(?)
There is no good solution. You either tolerate the necessary consequences of actually being able to speak freely or you curtail those freedoms in order to avoid the consequences of having them.
Actually, I disagree with the bit where I said there's no good solution. Curtailing free speech is never a good solution as far as I'm concerned. Freedom includes the ability to choose to behave poorly. This is a fundamental consequence of having a free society.
Since I was being asked, this is exactly the answer I would have given. We don't have to have a solution for every problem, because living in society together has inconveniences; some of them are simply worth it.
There is a good solution - culture and positive social feedback loops. That's also the hardest thing to get right, dependent on shared values and open communication and people wanting to participate.
When you say “planning their next attack” do you mean a literal attack on your client’s security or do you mean a coordinated facebook brigading type of “attack”?
The truth is that tech companies don't give a shit about working with law enforcement and actively try to avoid ever dealing with them at all.
The ONLY time a tech company works with police is when they are obligated to, by law. It's actually quite a shame, I think it's actively harming our society. Laws without enforcement are worthless.
The only time a tech company should work with police is when they're required to by law, and not always even then. What are we coming to when we side with police before even knowing what law they're enforcing?
From what I can tell, Facebook's internal research on mitigation is leaning towards studying the connections people have to determine the threat profile - not just the individual's posting habits.
Can’t say more, because it’s a guess based on posts and job requirements.
So front/standard moderation is outsourced and more advanced threat detection is looking at large coordinated networks.
It’s a huge pity that the state of the art in modern troll combat is behind NDAs.
I'll also admit that, as poor as security through obscurity is, it's a useful speed bump here.
It’s kinda odd to contemplate how this space will evolve.
> It’s a huge pity that the state of the art in modern troll combat is behind NDAs.
I mean... do _you_ have any ideas about how to fix this sort of thing that would survive being published for the world to see?
I don't mean to offend you or anything, I just want to point out that the second you go public with your rules about how these things are detected, the people you are targeting will adjust their behavior to evade them.
I think that the secrecy is unfortunately a hard requirement until perhaps we could all coordinate into a global "behavioral score" system. And to be honest, that sort of shit is terrifying, so we should probably never do it.
> do _you_ have any ideas about how to fix this sort of thing that would survive being published for the world to see?
I would try to come up with a rule-based system that actually detects the bad behavior they don't want to have. Of course, people are then free to circumvent that system by not behaving badly.
This is pretty much the equivalent of “I would try to make a rocket that just lands on Mars instead of crashing”. All I gotta do is make it take off instead of not taking off.
Indeed, if somebody wants to build a rocket that flies to Mars, then it had better not crash, and the equivalent of not crashing is an algorithm that actually detects the bad behavior. It's kind of strange that you're making fun of that very basic requirement. The problem really is that Facebook's algorithms are like rockets that crash too often (~generate too many false positives) and their "solution" is to ignore the crash death toll for maximum cost cutting (~minimize human intervention).
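To put rough numbers on that false-positive point (everything here is invented for illustration, not Facebook's actual volume or error rate):

    # Back-of-the-envelope, all numbers assumed: why even "rare" false
    # positives overwhelm human review at Facebook's scale.
    posts_per_day = 1_000_000_000  # assumed daily post volume
    false_positive_rate = 0.001    # assume 0.1% of benign posts get misflagged
    misflagged = posts_per_day * false_positive_rate
    print(f"{misflagged:,.0f} benign posts misflagged per day")  # -> 1,000,000

At anything like that scale, reviewing every flag by hand is off the table, which is exactly the cost-cutting pressure described above.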
I'm curious how things look for people who are only friends with people of their extreme political persuasion and only like/follow similarly extreme pages. Does giving some variety of relatively less extreme viewpoints cause a reduction in extreme viewpoints?
This is certainly a desirable change from a user perspective. However, the issue under discussion is coordinated abuse by a group against some entity. I don't see how this addresses that.
Show only what your friends share, in chronological order => bad actors can't influence what gets shown first, and if a friend shares disagreeable coordinated material, unfriend them.
If someone shares SPAM you will see it once, and then it will be below what you check the next time. You can unfriend someone that shares too much SPAM. You can report it too. There's no good way for an adversary to game an algo that decides what to put on your timeline more often when the algo is most-recent-first.
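A minimal sketch of how little logic such a feed needs (the field names and data shapes are made up for illustration):

    from datetime import datetime, timezone

    # A purely chronological feed: ordering depends only on the clock,
    # so there is no ranking signal for brigaders to game.
    def timeline(posts, friends):
        visible = [p for p in posts if p["author"] in friends]
        return sorted(visible, key=lambda p: p["posted_at"], reverse=True)

    posts = [
        {"author": "alice",   "posted_at": datetime(2021, 9, 17, 12, 0, tzinfo=timezone.utc)},
        {"author": "spammer", "posted_at": datetime(2021, 9, 17, 11, 0, tzinfo=timezone.utc)},
    ]
    # Spam appears once, then sinks below whatever your friends post next.
    print([p["author"] for p in timeline(posts, {"alice", "spammer"})])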
There certainly is academic research that touches on these topics. Honestly, I'm sure the academics are moving faster than FB given that this has been a known problem for over a decade, and has only gotten worse over time.
I haven't been involved in this specific issue, but the problem is that academic research tends to be limited unless the academics have access to, say, Facebook's internal data. In which case they're under the NDAs.
I think it is more about detecting highly connected groups and finding what they have in common.
For example, if there is a group where each member connects to most other members and nazism is the common denominator, then it is identified as a nazi group. If you connect to a large part of the group, then you are a potential nazi. If you know someone in that group and most of your other contacts are unrelated, you are probably not a nazi.
It is not that someone is marked as a supernazi and turns everyone he touches into a nazi.
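As a rough sketch of what that kind of connection-based flagging could look like (purely illustrative: the library choice, thresholds, and topic labels are my assumptions, not anything Facebook has published):

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def potential_members(graph, topic_accounts,
                          density_min=0.5, topic_min=0.5, attach_min=0.3):
        """Flag accounts heavily connected to dense, topic-aligned groups."""
        suspects = set()
        for community in greedy_modularity_communities(graph):
            members = set(community)
            # "each member connects to most other members" -> dense subgraph
            if len(members) < 3 or nx.density(graph.subgraph(members)) < density_min:
                continue
            # "the common denominator" -> most members carry the flagged topic
            if len(members & topic_accounts) / len(members) < topic_min:
                continue
            for node in set(graph) - members:
                contacts = set(graph[node])
                if not contacts:
                    continue
                in_group = len(contacts & members)
                # Connected to a large part of the group -> potential member;
                # one contact there among mostly unrelated contacts -> ignored.
                if (in_group / len(members) >= attach_min
                        and in_group / len(contacts) >= attach_min):
                    suspects.add(node)
        return suspects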
> If you connect to a large part of the group, then you are a potential nazi.
IOW
> > My account can get flagged... Not because of what I say, but because of who I know?
Don't worry, when the US implements a social credit system it will be better than China's because it will be piecemeal and run by unelected tech companies instead of the government!
> From what I can tell, Facebook's internal research on mitigation is leaning towards studying the connections people have to determine the threat profile - not just the individual's posting habits.
This is not unreasonable. I don't think for a second it's just organic trolling that's causing them problems. They have taken money from literal nation-states attempting to wage war by informational means and crush their enemies. Genocides in small countries don't begin to sum up the real threat profile here. Facebook could bear responsibility for actively aiding efforts to plunge the globe into hot war, if their paying customers got ambitious and foolish enough.
Small wonder they're trying to get things under control, but under control THEIR way. If they don't mitigate, they're liable to end up in the Hague.
People should not expect pleasant discourse on social media. You have every single statement, opinion, or piece of reasoning scrutinized by robots or vastly under-paid contractors - none of whom understand the nuances of conversation like context, sarcasm, etc.
Social media is a cancer that's growing on society. We need to go back to smaller online presences instead of a "global village" as it were. Too many things to worry about in the world, and too many companies want to sell you fear with advertising.
> Social media is a cancer that's growing on society. We need to go back to smaller online presences instead of a "global village" as it were. Too many things to worry about in the world, and too many companies want to sell you fear with advertising.
I feel like this is a big part of why Discord has gotten hugely popular. Communities are isolated and only semi-publicly accessible; most critically, they're not indexed by Google. On most servers, people are using aliases which are not (directly) linked to their real-life identities, but they're still people you get to know and befriend, unlike Reddit-likes where the people commenting are mostly interchangeable. These things make the internet feel a lot closer to how it was in the 90s and early 00s, where you could talk freely with your friends without worrying if someone with a grudge would take what you wrote and turn it into a mob-driven character assassination.
One of the things about a small community is that there are real social consequences for not staying within the confines of polite behavior. People can get embarrassed, shunned, ignored, etc. Their reputations are harmed, often leading to consequences in terms of work, friends, family, etc.
On Facebook, people who are rude & impolite are rewarded with more engagement. Real consequences are very rare.
You must never leave your house. If you must leave your house, you must not talk to anyone. If you must talk to someone, you must not talk to anyone else. These are the new rules to keep us safe.
Right so those with the privilege of deciding who the good guys are and what constitutes misinformation are current political parties and financial interests. Which is great, because governments have never been responsible for orchestrating their own attacks and campaigns of misinformation. I'm so glad we can trust Facebook to have our best interest at heart.
At least there is some level of accountability and transparency potential for elected officials. I don't see how we would be better off by having a small number of private individuals making these decisions for us.
Governments have set up departments to forward requests for takedowns directly to Facebook, sidestepping whatever due process they usually would require to get a court order:
"Watchdogs say that this coordination between governments and online platforms lacks transparency and operates through an alternative enforcement mechanism that denies due process of law. In the vast majority of cases, the Cyber Unit doesn’t file a court order based on Israeli criminal law and go through the traditional legal process to take down online posts. Instead, the Unit makes appeals to the platform’s content moderation policies and community standards. The enforcement of these policies though can be selective, devoid of different cultural contexts, and flexible to the interests of the powerful."
Recently one of my friends got suspended because he commented something like "LMAO" in our language. This sort of low-quality auditing, combined with the fact that in Taiwan the auditors are severely biased toward China, makes it harmful on its own.
I feel worse for Taiwan now than I've ever felt for Hong Kong, because in 1997 I watched that and knew the current state was an eventuality. I LOVED the time I spent around Taipei. But considering the state of the US and the world, whatever plans China has for Taiwan seem like they just moved up a decade. After watching what just happened in Afghanistan, and us giving up our basically Chinese-border base overnight without telling our allies, no one seriously thinks the Biden Admin will stand up to Chinese action on Taiwan, right?
I want to go back before you can’t tell the difference between the Taiwanese and mainland Chinese.
And you highlight it; China has found a way to lean on the country in a million different ways.
Facebook is too successful. They are so successful that they are mainly focused on implementing features to remove users, rather than to add them.
Also, the 2016 election broke big tech a little, but covid turned big tech into China, at least when it comes to that issue. It's as if anti-lockdown activism was the West's June 4th Tiananmen Square remembrance movement. It also broke Google in much the same way. I have to use Yandex or at least DuckDuckGo when searching for information about any controversial topic now. To even hear what the anti-lockdown crowd has to say I have to go to obscure Telegram channels.
Facebook isn't the worst offender even. I recently tried to have a political discussion on nextdoor and me and a couple of other people just gave up because all our comments kept getting deleted. This is what I call "The lost doggy and kitty internet".
On "The lost doggy and kitty Internet" only the most light topics can be discussed. Only yoga classes and lost kitty notices are allowed. On the lost doggy and kitty internet there is a polite dinner party taking place, with a slaughterbot drone hovering overhead, buzzing around between partygoers listening everywhere intently for wrong think with a hair trigger fire button ready to go if any section of the crowd starts hurting people's brains...
"Harmful coordination by real accounts" appears to mean mass-reporting and brigading. The truth is, that because Facebook's moderation is so unreliable, with innocent comments frequently leading to bans, while actual death threats and incitement to violence are deemed "not in violation of our community standards", mass-reporting and brigading are among the only recourse real users have to get actual harmful content removed. They're not reliable, of course, and they can be used by all sides, but if Facebook is effective at this, we're looking at a situation where Facebook only does something when Zuckerberg is hauled in front of Congress over something. I'm glad I haven't been on Facebook for many years now, but opting out isn't a solution for everyone.
Is there anything to be said for leaving the harmful content there? If the crapflood made people like facebook less, or get more skeptical, or made the company get serious about not shoveling manipulative garbage down people's psyches, then I'd be pleased.
I know "deplatforming" seems like a good idea, and is effective in the short term, it just strikes me as ultimately the wrong level at which to attack anti-social behaviour on "social networks".
"If the crapflood made people like facebook less, or get more skeptical, or made the company get serious about not shoveling manipulative garbage down people psyches then I'd be pleased."
The crapflood just caused people to invade the Capitol building and refuse to get vaccinated. If anything it made them more engaged and less skeptical.
This is great, Facebook should add "Two Minutes Hate" feature to the user timeline to address the spread of misinformation by these enemies of the people.
Facebook most certainly has not. You can get paid very handsomely for facilitating misinformation that wouldn't otherwise catch on without a lot of surreptitious advocacy, preferably astroturfed. The cash value of this has everything to do with how well you can get away with it. It's just market dynamics in action.
They are simply running into externalities, that's all.
>> as it announced a takedown of the German anti-COVID restrictions Querdenken movement.
I don't know if that group used any particularly bad tactics, but on the surface that statement sounds like squashing free speech. IDK about Germany, but I'm sure FB will do that in the US too. All because "it's our private platform and we are not the government, so no First Amendment here".
That's not what happened. Minaj claimed "a friend of a cousin" had a simply impossible side effect. This is pretty clearly a simple case of an urban legend being shortened from "friend of a friend of a friend" and so the rumor always gets passed on as "friend of a friend" no matter how many links there are in the chain.
Did you notice that you shortened it from "Minaj claimed a friend of a cousin..." to "Minaj claimed a friend...?" Hey, you're human too just like all of us. We suck at passing along rumors.
Let's say it's an unsubstantiated rumor (she said it was her (not 'a') cousin's friend). What, every medical rumor gets knocked out? Only the ones they don't like? What makes them the right ones to decide?
It's no different from the Wuhan lab escape theory, which was banned because… I dunno, some Republican happened to like it? Meanwhile lots of virologists believed it should at least be investigated. But no, originally only racists could consider it as a possibility.
We don't know. Maybe she is, maybe she isn't. Regardless, Twitter is not an official record of anything, and additionally they are selective with regard to which inaccuracies they censor.
That said, apparently swollen testes are reported in VAERS as a reaction to the vaccine. Not a great number, but also non-zero.
> Twitter can suspend anyone and anything for any reason. You play on someone else's server you play by their terms.
Yes, but then why should there be laws that shield these services from any legal responsibility for what their users post? The government shouldn't need to protect these businesses and let them off the hook because of the scale of the moderation. It goes both ways. Private companies can accept whatever they want; the government doesn't have to protect them either. They are a private company, after all, and should bear all the risk.
Why does it go both ways? The symmetry isn’t obvious. I can kick you out of my house for whatever reason but I’m not automatically complicit if I let you visit and you pull out a rifle and shoot someone out the window when I wasn’t looking. That is right. And it should be that way.
My personal residence doesn't serve as a chosen communications venue for a significant fraction of the local populace. Network effects are real. Failing to consider them is specious.
That doesn't mean I'm necessarily in favor of preventing these companies from moderating things. It just concerns me to see such obvious aspects disregarded.
Sure. An individual can’t count on a message being carried, yeah we know, it’s a private company and all…
But… the concerted effort to censor anything, including the truth, if it doesn't fit a particular narrative should concern people.
What if Twitter decided, hey, they wanna be on the side of the police, and now anyone reporting anything that goes against the police narrative gets banned, true or made up? Does that sound okay?
It's effed up if people think that that's okay because they are a private company and it's their platform…
Facebook can't even handle extremely basic moderation and spam fighting.
For example, I reported a post today that was a blatant attempt to steal my log-in information. Facebook's response: "it doesn't go against one of our specific Community Standards."
This was for a post that was impersonating Facebook.
I've heard very similar stories from many other people.
Obviously, Facebook just can't handle even the most basic problems to do with moderation. There are so many problems on their platform that go beyond the most obvious attempts at fraud and scams. Yet, if they can't properly handle the most obvious scams, how can we trust them to properly moderate anything at all?
At 2 am last night a bot impersonating a family member added every one of their friends on facebook & sent them messages. I reported it as a fake account; the ticket was instantly closed.
Facebook sent a notification later to them that the account was not impersonating anyone, no action would be taken, and there was no way to appeal.
Interestingly, messenger did splat a bunch of pretty good warnings on top of the DM they sent me: https://i.imgur.com/gigUA7G.png
When Facebook talks about "real accounts" they mean only accounts that are not very obviously fake in a way an algorithm can detect. It's unquestionable that it is easy to set up a fake account on FB, and it always has been. If they tried harder - and they almost certainly will have to, due to political pressure - it would still be pretty easy to find more sophisticated ways to fake an account.
With an advertising model there are always going to be incentives to fake accounts, and disincentives for FB to close any account. The simple way to stop fake accounts would be to introduce even a small cost to have an account, which would make faking accounts not cost-effective. But that's not their model.
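For a sense of scale (the fee is purely hypothetical; the account count is the figure reported in the comment below):

    # Illustrative only: even a token signup fee changes the economics
    # of mass-producing fake accounts. Both numbers are assumptions.
    signup_fee = 1.00      # hypothetical $1 per account
    fake_accounts = 1.3e9  # roughly the fake accounts reported removed in 2020
    print(f"${signup_fee * fake_accounts:,.0f} to rebuild that fake-account base")
    # -> $1,300,000,000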
Regarding fake accounts: at the beginning of 2021 I saw this story in a number of places - https://finance.yahoo.com/news/facebook-disables-1-3-billion.... That in 2020 fb shut down 1.3 billion (with a b) fake accounts. I can no longer find a good number for the total size of fb's membership. I think around this time I saw it reported as 4 billion. Does anyone have such a number? If the four billion number were accurate, then when this story ran fb was admitting that something like 20-25% of the platform up until then was fake.
A common scam is grabbing someone's public profile and cloning it. I would be shocked if even 1% of those reading this couldn't pull that off in a day. From there you just need to figure out how to make money from the scam.
Are you kidding me? Of course they can but it's complicated. They don't want to lose money. They don't want to be accused of censorship for moderating. If they "moderate" certain people too much they might bring regulation. Etc.
Facebook is caught between wanting to capture as many users as possible (many of whom have been converted to low-level political trolls thanks to recent political dogmatism), and wanting to create a family-friendly no-criticism zone for Brands to advertise in.
So they’re going to use this to stop people spreading harmful misinformation, like, say, the conspiracy theory that governments would create vaccine passports, or that a virology lab in Wuhan might have been involved in a virus that came out of Wuhan.
Or that masks are a good idea.
Dissent from the popular opinion is good and healthy, going both ways.
Make whatever strawman analogies you want, but the fact of the matter is that massive online communication platforms like facebook are increasingly censoring dissenting opinions. One day, an opinion that you hold will become "dangerous", and you'll have to choose between conformity or being excluded from our increasingly digital society.
edit: why do we keep giving these companies the benefit of the doubt when they continue to lie about everything?
Don't try to change the subject. My comment is not about facebook, censorship, corporate-run dystopias, or thought-crime exile.
Move the goalposts all you want but the fact is that all opinions are not equal in merit.
If you do actually want to have a good-faith discussion about how to limit the reach of worse opinions while increasing the reach of better opinions I would be happy to hear your suggestions.
Or if you would prefer to discuss how to find agreement on what we consider a "good" or a "bad" opinion, I would start by offering that I think the opinion "face masks do not prevent the spread of airborne respiratory infections" should be considered significantly less reasonable than the opinion "face masks are effective at preventing the spread of airborne respiratory infection".
Our leaders at the start of 2020 explicitly said not to buy or wear masks [1], in contrast to well-established research supporting masks from the 2003 SARS epidemic.
People supporting masks were censored for misinformation.
So you would have been one of those censored at the time, despite being backed by the evidence and correct.
What makes you think such mistakes won't happen again?
I don’t think I’m changing the subject, I am asking you to look at the bigger picture. Your comment may not have been about facebook specifically, but we are in a thread about a new facebook initiative regarding yet another form of censorship. You list an opinion that you say has more merit than another, and fine, let’s say I agree. My problem with looking at things through such a small lens is that “merit” seems pretty subjective, and if we continue to stand by as we let these tech companies decide what merit means, one day they will go too far, and it’ll be too late.
Here’s an example of two opinions that I think are unequal. “the government has the right to confine people who have not broken any laws to their homes” and “the government does not have the right to confine people who have not broken any laws to their homes.” In Australia, the government has decreed that the first opinion has more merit than the second. Should facebook follow suit, and censor anyone in australia who disagrees?
> I am asking you to look at the bigger picture
> My problem with looking at things through such a small lens is that “merit” seems pretty subjective
Ok, the bigger picture with a bigger lens is this: How do you slow the spread of harmful ideas?
You agree that some ideas are "better" than others. I think you would also agree that there is no simple definition over what "better" exactly means. It's complex and often elicits complex discussion.
My point, that you are trying again to skip over, is that presenting any idea as if it is inherently equal in merit to any other idea is fundamentally bad. To be specific, I think this because I believe that good ideas will eventually prove themselves out over time (even if they spread very slowly) while bad ideas will tend to rely on rapid spread to reach critical mass before they are disproven.
Do you just want me to tell you that I think you're right? I don't really see this thread going anywhere productive. You seem to be arguing a purely philosophical position, and if I "changed the subject" to relate your position to TFA, then my bad.
Meanwhile, facebook is using the actions described in TFA to ban anti-lockdown accounts in germany. Just like my hypothetical, but in a different country!
My position remains: opinions do not all have equal merit and because of that should not all be given equal weight. And that there is some level at which an opinion can be so low-merit that comparing it to a conflicting high-merit opinion as if they are equal becomes disingenuous and harmful.
The post that is now flagged was referring to Dr. Fauci's March 8, 2020 statement that "there's no reason to be walking around with a mask." Dr. Fauci made that statement in a context of trying to ensure that enough protective equipment was available for frontline health workers at a time when there were runs on toilet paper in stores.
I believe you are mischaracterizing the argument that was made. Unfortunately, we may no longer view the original post because your opinion has apparently been deemed more correct.
Ok, apparently this is the hill I'm going to die on.
Up through roughly April-May 2020, many, if not most, epidemiologists and virologists believed that masks would not help the situation: they thought respiratory viruses were spread through large droplets produced by symptomatic individuals, and that physical separation, sanitation, and behavior would work as well as trying to convince people to wear useful masks consistently and correctly. (Earlier today, I walked past a woman wearing a bandana tied around her head. Below her nose. Why!?)
After that time, reports began to appear showing coronavirus could be spread asymptomatically, by normal breathing and speech, in an aerosol form that could stay airborne for long times. Under those situations, masks are the only solution.
The "ensure that enough protective equipment was available for frontline health workers" thing was mostly a response to "but it couldn't hurt" thinking.
"Then there is the infamous mask issue. Epidemiologists have taken a lot of heat on this question in particular. Until well into March 2020, I was skeptical about the benefit of everyone wearing face masks. That skepticism was based on previous scientific research as well as hypotheses about how covid was transmitted that turned out to be wrong. Mask-wearing has been a common practice in Asia for decades, to protect against air pollution and to prevent transmitting infection to others when sick. Mask-wearing for protection against catching an infection became widespread in Asia following the 2003 SARS outbreak, but scientific evidence on the effectiveness of this strategy was limited.
"Before the coronavirus pandemic, most research on face masks for respiratory diseases came from two types of studies: clinical settings with very sick patients, and community settings during normal flu seasons. In clinical settings, it was clear that well-fitting, high-quality face masks, such as the N95 variety, were important protective equipment for doctors and nurses against viruses that can be transmitted via droplets or smaller aerosol particles. But these studies also suggested careful training was required to ensure that masks didn’t get contaminated when surface transmission was possible, as is the case with SARS. Community-level evidence about mask-wearing was much less compelling. Most studies showed little to no benefit to mask-wearing in the case of the flu, for instance. Studies that have suggested a benefit of mask-wearing were generally those in which people with symptoms wore masks — so that was the advice I embraced for the coronavirus, too.
"I also, like many other epidemiologists, overestimated how readily the novel coronavirus would spread on surfaces — and this affected our view of masks. Early data showed that, like SARS, the coronavirus could persist on surfaces for hours to days, and so I was initially concerned that face masks, especially ill-fitting, homemade or carelessly worn coverings could become contaminated with transmissible virus. In fact, I worried that this might mean wearing face masks could be worse than not wearing them. This was wrong. Surface transmission, it emerged, is not that big a problem for covid, but transmission through air via aerosols is a big source of transmission. And so it turns out that face masks do work in this case.
"I changed my mind on masks in March 2020, as testing capacity increased and it became clear how common asymptomatic and pre-symptomatic infection were (since aerosols were the likely vector). I wish that I and others had caught on sooner — and better testing early on might have caused an earlier revision of views — but there was no bad faith involved."
Fauci himself told The Washington Post that mask supply was a motive back in July 2020. So, it was a combination of two factors as you rightly point out. Thank you for correcting my omission.
"We didn't realize the extent of asymptomatic spread…what happened as the weeks and months came by, two things became clear: one, that there wasn't a shortage of masks, we had plenty of masks and coverings that you could put on that's plain cloth…so that took care of that problem. Secondly, we fully realized that there are a lot of people who are asymptomatic who are spreading infection. So it became clear that we absolutely should be wearing masks consistently."
It's funny how we look at opinions as reasonable mostly out of convenience to our own foregone conclusions these days instead of statistics from which we derive evidence that substantiates our conclusions. This guy likes masks, he's one of the good ones. Not one of those baddies who don't. Let's ignore the fact we have almost 2 years of global data pertaining to mandates, transmission, and death rates, and decide who we agree with based on which tribe they hail from. Super reasonable.
ever becomes my chosen way of doing so, I can only hope someone censors me. And takes me to a nice, comfortable assisted living facility where I cannot hurt myself or others.
Here is one of the challenges with this space. What can be used to suppress misinformation can also be used to suppress information as well. Doing nothing has been weaponized. Doing something will be as well. And we have plenty of examples of things which have been ruled "harmful misinformation" that later turned out not to be. With consequences for those who posted what later turned out to be mainstream.
A few examples. There is a reasonable possibility that COVID-19 escaped from a lab. Vaccine passports for COVID-19 are likely to become a thing. The conservatorship of Britney Spears is unconstitutional, and has been a vehicle for grand larceny.
So how do we suppress harmful misinformation, such as the claim that the 2020 election was stolen through widespread fraud? And how do we avoid providing tools that can be misappropriated against truths which are inconvenient to people with money and influence?
> So how do we suppress harmful misinformation, such as the claim that the 2020 election was stolen through widespread fraud? And how do we avoid providing tools that can be misappropriated against truths which are inconvenient to people with money and influence?
We recognize that the "we" you're referring to are the "people with money and influence." "Harmful misinformation" from their perspective is information that they dispute either factually or through implication or perspective that could harm them, the people with money and influence.
> So how do we suppress harmful misinformation, such as the claim that the 2020 election was stolen through widespread fraud?
With clear and extensive explanations and transparency. The people who insisted loudly that there was no widespread fraud the moment the election ended in their favor were operating with as little factual, auditable information as the people who were insisting that there was. The people who were insisting on a fraud narrative were of course consuming a lot more misinformation, but both sides were pretending that they were knowledgeable about something they weren't, at all.
First, the "we" that I was referring to was "we as a society" and, more specifically, "we as technologists". As in, how can we create a technology and social norms that both encourage a fact-based narrative while being resistant to political manipulation.
Also, to your specific example of the election: a lot of people who spoke out fairly quickly against the fraud claims were operating on the factual, auditable information that both sides were presenting their data to judges of various persuasions around the country, and the judges were virtually unanimously concluding that there was no case. And then various recounts began coming back, likewise concluding that there was no fraud.
I don't know what standard of evidence you think people should have spoken out at. But that seemed at the time to be a reasonable level of evidence. And it still does. The judges in this case were literally a collection of people chosen to be trustworthy, with varying political alignments, who were making informed decisions on the basis of more data than I have, and consistently came to the same conclusion.
A similar kind of thing for fact checks would be a standard that I could be comfortable with. But it has to be similar. A collection of independent people. Chosen on the basis of methodology. With different political alignments. And it is only when they broadly agree that we are willing to impose rules.
Let’s not pretend Facebook ever was or ever will be some bastion for democratic values.
We shouldn’t expect that not “suppressing information” is ever in their best interest. Or that it would be something new for them. It seems like fantasy to think in those terms.
Any attempt to suppress "harmful misinformation" will be seen as partisan because, well, it pretty much is. There was a huge amount of utter bullshit pushed on social media about the 2016 election being stolen by fraud, including stuff that obviously could not possibly have worked as described or been used to steal the election, and a large proportion of the US population literally believed Russians had hacked the voting tallies. It obviously wasn't harmless either: someone radicalized by Facebook went and shot a Republican senator, and it was only through masses of luck and intrusive medical interventions that he survived. Yet the only thing the mainstream media showed an iota of concern for was the fact that anyone objected to this.
Hell, even in the run-up to the 2020 election the press were pushing the narrative that it'd be impossible to know the results weren't hacked and the election was valid - right up until it became clear Trump had lost, at which point it became absolutely certain they were valid, and the audit chains and processes so robust that only a conspiracy nut would question them. The same kinda happened in reverse in 2016.
I'm pretty sure that the "senator" you are naming was Congressman Steve Scalise. I have not specifically heard that the shooter, James Hodgkinson, believed in conspiracy theories about a stolen 2016 election. But he may have.
However, the big difference between partisan beliefs about the two elections is this. After 2016, Democrats mostly believed that they lost at the polls due to an effective disinformation campaign run by the Russians. After 2020, Republicans believed that they lost due to widespread vote-counting fraud. As https://www.rollcall.com/2021/02/24/partisan-voters-claim-we... says, Democrats and Republicans believe this by almost exactly the same margins.
But what is believed matters. Democrats may have been furious, but they believed in rule of law. Radical Republicans, by contrast, attempted to overturn the election via an insurrection.
I did not have a facebook account and created one recently. No one I added was in the US or remotely connected to US politics, but for some reason facebook thought I would like "Breitbart" and some anti-vaxx-related pages and posts. Keep in mind that I had sent no messages or any other info, and I use facebook inside a Mozilla container; on top of that, I am not even an anti-vaxxer. But still they promoted all those things to me.
Facebook getting to disavow everything they've been paid to do, and accepting payment from a new set of bosses who turned out to have more authority over them than their previous paying customers had.
Basically, entering the witness protection program and selling out those they've worked for in the past. Getting a huge cash-out, reputation laundering, and going away to preside over a dwindling number of hapless and heavily surveilled 'users', reporting on their doings to the authorities.
That is very much the winning endgame for Facebook. They turned out not to be bigger than governments.
Depends on how you define 'winning', really. If 'growth at all costs' is your metric, then sure, Facebook is winning. If 'adherence to original vision' is your metric, then you could argue that Facebook went astray a long time ago.
I find it entirely implausible that facebook, with their vast troves of data and an army of the very best computer scientists money can buy, is incapable of reliably identifying posts and users responsible for mass manipulation on their platform. Unwilling? Sure. Unable? Not a chance.
With the current logic that Facebook is using, "manipulation" can mean any behavior that they don't endorse. For example someone is "manipulating" their network by posting anti Biden memes.
I recognize that I am extrapolating real group behavior to real individual behavior, but I think that extrapolation is warranted given that we've just made the jump from group bot behavior to group people behavior.
On the one hand, I can see how using this to combat organized misinformation could be a good thing. Combating organized vaccine misinformation is the obvious current example. On the other hand, it's a bit dystopian to have a platform used as a public square that cracks down on organizers they don't like.
I'm old enough to remember when stating covid came out of a wuhan lab was "dangerous misinformation" that could get you removed from social media. Not so much "misinformation" anymore...
So, it's true that the Constitution doesn't protect rights from infringement by private businesses, but you're onto something important. The rights of free speech and free assembly are critical to a functioning republic, and their curtailment --- even legally, and by private corporations --- is doing great harm to our tottering system.
I don't want government to force businesses to allow speech, but we need to actively oppose private enterprises when they limit speech, and support alternatives that embrace freedom.
https://archive.is/lJoWA