
From what I can tell, Facebook’s internal research on mitigation is leaning towards studying the connections people have to determine the threat profile - not just the individual’s posting habits.

Can’t say more, because it’s a guess based on posts and job requirements.

So front-line/standard moderation is outsourced, while the more advanced threat detection looks at large coordinated networks.

It’s a huge pity that the state of the art in modern troll combat is behind NDAs.

I’ll also admit that, as poor as security through obscurity is, it’s a useful speed bump here.

It’s kinda odd to contemplate how this space will evolve.



> It’s a huge pity that the state of the art in modern troll combat is behind NDAs.

I mean... do _you_ have any ideas about how to fix this sort of thing that would survive being published for the world to see?

I don't mean to offend you or anything, I just want to point out that the second you go public with your rules about how these things are detected, the people you are targeting will adjust their behavior to evade them.

I think that the secrecy is unfortunately a hard requirement until perhaps we could all coordinate into a global "behavioral score" system. And to be honest, that sort of shit is terrifying, so we should probably never do it.


> do _you_ have any ideas about how to fix this sort of thing that would survive being published for the world to see?

I would try to come up with a rule-based system that actually detects the bad behavior they don't want to have. Of course, people are then free to circumvent that system by not behaving badly.


This is pretty much the equivalent of “I would try to make a rocket that just lands on Mars instead of crashing”. All I gotta do is make it take off instead of not taking off.


Indeed, if somebody wants to build a rocket that flies to Mars, then it had better not crash, and the equivalent of not crashing is an algorithm that actually detects the bad behavior. It's kind of strange that you're making fun of that very basic requirement. The problem really is that Facebook's algorithms are like rockets that crash too often (~generate too many false positives), and their "solution" is to ignore the crash death toll for maximum cost cutting (~minimize human intervention).


how to draw an owl: 1. draw some circles 2. draw the rest of the fucking owl


Sure:

Settings > Timeline

  X chronological
  X only what your friends share


I'm curious how things look for people who are only friends with people of their extreme political persuasion and only like/follow similarly extreme pages. Does giving them some variety of relatively less extreme viewpoints cause a reduction in extreme viewpoints?


Perhaps it isn't the purview of an infrastructure company (Facebook in this case) to attempt to manipulate society?


This is certainly a desirable change from a user perspective. However, the issue under discussion is coordinated abuse by a group against some entity. I don't see how this addresses that.


Only what your friends share, in chronological order => bad actors can't influence what gets shown first, and if a friend shares disagreeable coordinated material, unfriend them.


You just need to spam people then.

Heck - the original fake news phenomenon was Macedonians (?) making websites that looked like news sites to earn click-through money.

People could just do that - “spam from the poisoned well”


If someone shares spam you will see it once, and then it will be below what you check the next time. You can unfriend someone who shares too much spam. You can report it too. There's no good way for an adversary to game an algo that decides what to put on your timeline when the algo is simply most-recent-first.
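For illustration, a friends-only, most-recent-first feed is almost trivially simple, which is the point. A minimal sketch, assuming a hypothetical Post type (nothing here is Facebook's actual data model):

  from dataclasses import dataclass
  from datetime import datetime

  # Hypothetical post type, for illustration only.
  @dataclass
  class Post:
      author_id: int
      created_at: datetime
      body: str

  def build_timeline(posts: list[Post], friend_ids: set[int]) -> list[Post]:
      # Friends-only, most-recent-first: there is no relevance score,
      # so there is nothing for an adversary to game except raw volume.
      friend_posts = [p for p in posts if p.author_id in friend_ids]
      return sorted(friend_posts, key=lambda p: p.created_at, reverse=True)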


How do I keep my friends from sharing memes? There are a lot of them.


unfriend them


That throws out the good of being their friend along with the bad. On Facebook I only have people I know personally, and I want to see their family pictures.


No, I fully agree, which is why I mentioned security through obscurity.

However, I will make the case that something of this magnitude should be available to the public.


There certainly is academic research that touches on these topics. Honestly, I'm sure the academics are moving faster than FB given that this has been a known problem for over a decade, and has only gotten worse over time.


I guarantee that academics have been fighting to get data from FB, and are behind the curve.

I’ve read papers which specifically highlighted this lacuna.


I haven't been involved in this specific issue, but the problem is that academic research tends to be limited unless the researchers have access to, say, Facebook's internal data. In which case they're under the NDAs.


> [...] studying the connections people have to determine the threat profile - not just the individual’s posting habits.

Well that's just TERRIFYING. My account can get flagged... Not because of what I say, but because of who I know?

Talk about a way to unperson someone. Make it so that even associating with them causes the social graph to collapse.


I think it is more about detecting highly connected groups and finding what they have in common.

For example, if there is a group where each member connects to most other members and nazism is the common denominator, then it is identified as a nazi group. If you connect to a large part of the group, then you are a potential nazi. If you know someone in that group and most of your other contacts are unrelated, you are probably not a nazi.

It is not that someone is marked as a supernazi and turns everyone he touches into a nazi.
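To make that concrete, here is a toy sketch of the kind of graph heuristic described above, using off-the-shelf community detection. The graph, the flagged cluster, and the scoring function are all illustrative assumptions, not Facebook's actual method:

  import networkx as nx
  from networkx.algorithms import community

  # Toy friendship graph; edges are mutual connections.
  G = nx.Graph()
  G.add_edges_from([
      ("a", "b"), ("a", "c"), ("b", "c"),  # densely connected cluster
      ("b", "d"), ("c", "d"),
      ("e", "a"),                          # e's lone contact in the cluster
      ("e", "f"), ("e", "g"),              # e's other, unrelated contacts
  ])

  # Detect densely connected communities (modularity-based, one choice of many).
  groups = community.greedy_modularity_communities(G)

  # Pretend moderators flagged the largest community over its common content.
  flagged = set(max(groups, key=len))

  def overlap(user):
      # Fraction of a user's contacts inside the flagged community.
      contacts = set(G.neighbors(user))
      return len(contacts & flagged) / len(contacts) if contacts else 0.0

  print(overlap("b"))  # high: most contacts are inside the cluster
  print(overlap("e"))  # low: one contact inside, the rest unrelated

With the expected partition, "b" scores 1.0 and "e" about 0.33, which matches the distinction above: connecting to a large part of the group is a signal; knowing one member while most of your contacts are unrelated is not.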


I think you might have missed the point.

> If you connect to a large part of the group, then you are a potential nazi.

IOW

> > My account can get flagged... Not because of what I say, but because of who I know?

Don't worry, when the US implements a social credit system it will be better than China's because it will be piecemeal and run by unelected tech companies instead of the government!


User bjt2n3904 has expressed a strong negative reaction to threat profiling. Increase threat profile score by .4


> From what I can tell, Facebook’s internal research on mitigation is leaning towards studying the connections people have to determine the threat profile - not just the individual’s posting habits.

This is not unreasonable. I don't think for a second it's just organic trolling that's causing them problems. They have taken money from literal nation-states attempting to wage war by informational means and crush their enemies. Genocides in small countries don't begin to sum up the real threat profile here. Facebook could bear responsibility for actively aiding efforts to plunge the globe into hot war, if their paying customers got ambitious and foolish enough.

Small wonder they're trying to get things under control, but under control THEIR way. If they don't mitigate, they're liable to end up in The Hague.


> It’s kinda odd to contemplate how this space will evolve.

The same way it did on reddit: sock puppets and chat servers outside reddit where the people radicalize themselves further.

The cherry on that shit cake was when /r/againsthatesubs started posting child porn to try and get subs they don't like banned.



