I'm as much a proponent of automation as anyone else. But I think right now Google is trying to do something way too hard. By looking for "extremist" material, they are basically trying to determine the intention of a video. How can you expect an AI to do that?
Doesn't matter, it's already out in the wild. This year so far, tons of channels whose videos have had ads for years are being instantly demonetized without explanation. If even one word from a video's title or one tag is on their "controversial" shitlist, you're SOL. Knowing YouTube's track record with this stuff, they will continue to be silent and not give a shit.
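To illustrate that "one word on the shitlist" point, here's a toy sketch of how a naive title/tag filter could behave. The blocklist, the function name, and the matching logic are all my own invention, not anything YouTube has published:

```python
# Purely illustrative: a naive keyword filter over titles and tags.
# The blocklist and the logic are hypothetical, not YouTube's real system.
FLAGGED_TERMS = {"war", "shooting", "attack", "controversial"}

def is_ad_friendly(title: str, tags: list[str]) -> bool:
    words = set(title.lower().split()) | {t.lower() for t in tags}
    return words.isdisjoint(FLAGGED_TERMS)

# A harmless basketball tutorial gets caught by the single word "shooting":
print(is_ad_friendly("Improve your shooting form", ["basketball", "tutorial"]))
# False -> demonetized instantly, with no hint of which term triggered it
```

If anything remotely like this is in the loop, one unlucky word is enough, and the uploader never learns which word it was.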
Content creators who don't produce content for 5-year-olds need to start looking somewhere other than YouTube.
Demonetization as a feature comes from the advertisers' desire, not from YouTube itself. Advertisers are very sensitive about what they will allow their brands to be associated with, and they demand the tools to configure their ad campaigns to not display on broad swaths of potentially offensive content. YouTube has had some huge issues with advertiser complaints of ads on offensive content in the past. If anything, YouTube should be praised for continuing to host demonetized content at a loss, since it's not making any advertising revenue for them.
People who make non-advertiser-friendly videos need to figure out some other kind of way to make their money. Patreon is excellent for that.
There are people who make Linux tutorial videos that have been demonetized; it is not just "non-advertiser-friendly" political content.
Further, I do not buy the "not ad-friendly" BS. There are TONS of advertisers that would put ads on some of those channels (and have tried to buy them), but YouTube either refuses them as advertisers or does not give them the proper control to pick and choose which channels they want to put ads on. It just says your video is not ad-friendly and that is that; the advertiser has no input in it.
Finally, advertisers care about eyeballs. They really do not care about the content their ads run against UNTIL they get people complaining or boycotting their product.
Today there is a small number of permanently and perpetually offended people, VERY vocal and VERY loud on social media, who have wielded undue influence over these YT advertisers with their threats of boycotts and outrage. These people have no respect for freedom of expression, and desire nothing more than to shut down the speech of anyone they disagree with.
It would be nice if they let you buy ads on "demonetized" videos at an extreme discount. I'm sure many companies would be more than happy to get cheaper ads and aren't that sensitive to brand association concerns.
Joe Collins was one example I was referring to in my comment. YT has since restored monetization on his videos, but here is the source where he expresses his frustration with YT, not just in the current round but over his experiences through the years:
YouTube is the only option. Smart businesspeople (i.e. most channels over 500k or even 200k subs) have alternate sources of revenue, mainly sponsorships but also Patreon.
I didn't forget Vimeo. YouTube is so dominant that, since everyone else is on YouTube, uploading videos elsewhere is career suicide. No one will switch websites or apps to watch just one creator's content.
One of the keys to YouTube's success is its sub feed. All the videos from all my favorite channels, all in one place. It's extremely convenient. People tend to take the path of least resistance.
Because it's operated on the whims of a third party whose primary concern is ad revenue. It's disappointing that historically important videos are being deleted, but unless they introduce a new program for long-term archival, YT isn't really the place to store them.
But is YouTube really the right platform for such videos? It's a platform made to host videos in order to put advertisers' ads on them. When I think "raw war videos", I think of LiveLeak.
For many users, Youtube is the only video publishing site, and many people get their news through YT. Very few people are aware of the existence of LiveLeak; it is an ineffective platform for spreading awareness.
Would you prefer to stumble across those on Youtube?
It seems better for material which needs to be kept for history, and to remain uncensored by advertising pressure, to be on a site dedicated to that, rather than on a site which most people use for videos of recipes, memes, and great football headers.
Yeah, there does need to be some sort of inhibition against highly traumatizing content; it should certainly not be promoted to people who do not seek it out. But purging news videos of statues being destroyed and ancient buildings being demolished is going too far.
YouTube is now what TV used to be. So yes, you should be able to show this content. However, the platform needs to provide better tools to users and producers to aid in categorizing content.
I saw plenty of different views of folks being injured or killed in recent protests on the television, I saw folks losing their lives in Nice recently on television, I saw a reporter shot point blank in 2015 on television, I saw an old man shot to death from the point of view of the killer earlier this year on television.
And that's not even counting war footage.
I'm not hunting this stuff down; the only protection against watching this crap is a half-second pause after the reporter says "Warning, the following may be considered graphic."
The funny thing is: I don't own a television; I just occasionally go to a diner for breakfast, with whatever garbage news channel everyone in the place likes at any given time. Even brief casual glimpses at any given news channel are bound to get you an eyeful of gore or violence of some sort.
I don't think it's about morals, but more about userbase expectations. YouTube as a product is generally seen as a video archive. They do have the choice to not host it, but they risk a user exodus. Given the competition from Facebook, Snapchat, and the like, that's a risk they can't take. More so when it's Alphabet's only successful social network acquisition.
It's people's expectation for it to be perfect, and the egoic drive to blame someone when something goes wrong. There was no reason for the hype around this story... an AI determinator had a false positive. That's not Google attacking the videos, that's a technical issue, and it needs to have zero feelings involved because the entire process happened in a damned computer incapable of feelings...
But everyone needs to feed their outrage porn addiction...
It's not a technical issue. Software is not yet capable of accurate content detection, and even if it were, it's not clear whether this sort of thing should be automated. It's not like Google can just change a few lines of code and the problem is gone.
> It's not a technical issue. Software is not yet capable of accurate content detection,
Your second sentence is a technical argument, which makes your first a lie. Obviously Google disagreed, which is why they put this system into place. And if they were wrong about that they were wrong for technical reasons, not moral ones.
I mean, you can say there's a policy argument about accuracy vs. "justice" or whatever. It's a legitimate argument, and you can fault Google for a mistake here. But given that this was an automated system it's disingenuous to try to make more of this than is appropriate.
If you just stare at the words and ignore my meaning, sure. But saying this is a technical problem is like saying that climate change is a technical problem because we haven't got fusion reactors working yet.
Then I don't understand what your words mean. Climate change is a technical problem and policy solutions are technical.
My assumption was that you were contrasting "technical" problems (whether or not Google was able to do this analysis in an automated way) with "moral" ones (Google was evil to have tried this). If that's not what you mean, can you spell it out more clearly?
Is there any problem you wouldn't frame as technical, then? If the software isn't anywhere close to capable enough to do this task and YouTube decides to use it anyway, that is a management problem. Otherwise literally every problem is technical and we just don't have the software to fix it yet.
Sure: "Should Google be involved in censoring extremist content?". There's a moral question on exactly this issue. And the answer doesn't depend on whether it's possible for Google to do it or not.
What you guys and your downvotes are doing is trying to avoid making an argument on the moral issue directly (which is hard) and just taking potshots at Google for their technical failure as if it also constitutes a moral failure. And that's not fair.
If they shouldn't be doing this they shouldn't be doing this. Make that argument.
I would argue climate change is a political problem.
Policy solutions are political.
A policy is a deliberate system of principles to guide decisions and achieve rational outcomes. A policy is a statement of intent, and is implemented as a procedure or protocol. - https://en.m.wikipedia.org/wiki/Policy
If you believe climate change is a technical problem then there isn't much point continuing this discussion. Using that logic you could claim that any problem is technical because everything is driven by the laws of physics.
The point is, there will be false positives; there is no reason to get upset and hurt over them...
There is no perfect system. If it's automated, there will be false positives (and negatives); if there is a human involved, you have a clear bias issue; if there is a group of humans involved, you have societal bias to deal with...
There is no perfect system for something like this, so the best answer is to use something like this that gets it right most of the time... then clean up when it makes a mistake. And you shouldn't have to apologize for the false positive; people need to put on their big boy pants and stop pretending to be the victim when there is no victim to begin with...
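To be concrete about why there's no perfect system: any automated flagger boils down to a score and a threshold, and moving the threshold only trades one kind of error for the other. A toy illustration, with scores and ground-truth labels I made up (not real data):

```python
# Toy illustration of the false-positive / false-negative tradeoff.
# Scores and ground-truth labels are invented for the example.
videos = [
    (0.95, True),   # genuinely extremist, scored high
    (0.80, True),
    (0.65, False),  # news report, wrongly scored high
    (0.40, False),
    (0.30, True),   # genuinely extremist, scored low
    (0.10, False),
]

for threshold in (0.9, 0.6, 0.2):
    false_pos = sum(1 for score, bad in videos if score >= threshold and not bad)
    false_neg = sum(1 for score, bad in videos if score < threshold and bad)
    print(f"threshold {threshold}: {false_pos} false positives, {false_neg} false negatives")
```

Raise the threshold and real offenders slip through; lower it and innocent uploads get flagged. The only real questions are where you set it and how you clean up afterwards.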
It isn't the same process being defended, but I clearly didn't claim that it was: the argument used to defend the different processes, however, is the same. This "put on your big boy pants" bullshit is saying that people should accept any incidental harassment because false positives are to be tolerated and no system is perfect, so we may as well just use this one. If the false positives of a system discriminate against a subset of people--as absolutely happens with these filters, which end up blocking people from talking about the daily harassment they experience, or even using the names of events they are attending, without automated processes flagging their posts--then that is NOT OK.
The false positives are not random: they target minorities; these automated algorithms designed to filter hate have also been filtering people trying to talk about the hate they experience on a daily basis. They keep people from even talking about events they are attending, such as Dykes on Bikes. It is NOT OK to tell these people to "put on their big boy pants" and put up with their daily dose of bullshit from the establishment.
Your whole premise is wrong, because the final decisions were made by humans. But even if they weren't, you're still mistaken. If you write a program to do an important task, it is your responsibility to see that it's both tested and supervised to make sure it does it properly. Google wasn't malicious here, but it was dangerously irresponsible.
"Previously we used to rely on humans to flag content, now we're using machine learning to flag content which goes through to a team of trained policy specialists all around the world which will then make decisions," a spokeswoman said..."So it’s not that machines are striking videos, it’s that we are using machine learning to flag the content which then goes through to humans."
"MEE lodged an appeal with YouTube and received this response: 'After further review of the content, we've determined that your video does violate our Community Guidelines and have upheld our original decision. We appreciate your understanding.'
Humans at YouTube made the decisions about removing videos. Then, on appeal they had a chance to change their minds but instead confirmed those decisions. Then, because of public outcry, YouTube decided it had been a mistake. 'The entire process happened in a damned computer incapable of feelings' is inaccurate.
[Disclaimer: I'm not a youtuber, so my knowledge is only 2nd hand]
Aside from, but related to, this story: many people are making a living off of YouTube ad revenue, and the AI is unpredictable in how it will respond in terms of promoting your video content on the front page, as links from other popular videos, and so forth. I think it's also unknown how the AI categorizes the content appropriateness of videos for advertisers, which, if it categorizes them the wrong way, leaves your stuff unmonetizable.
Basically, people are throwing video content up but have no real feedback loop to gauge whether or not they violate the "proper" protocols that the AI rewards. This really is a problem of automation using (presumably) trained statistical rules, where nobody really knows what specifically influences the decisions about their videos.
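From the creator's side it looks something like this toy sketch: the only output is a verdict, and the features and weights behind it are invisible. Everything here (the weights, the threshold, the wording of the verdict) is invented for illustration; it's not YouTube's actual model:

```python
# Sketch of the creator's-eye view of a black-box monetization model.
# The weights, features, and threshold are all made up.
_HIDDEN_WEIGHTS = {"war": 1.5, "news": 0.8, "gameplay": -0.5, "review": -0.2}

def monetization_decision(title: str) -> str:
    score = sum(_HIDDEN_WEIGHTS.get(word, 0.0) for word in title.lower().split())
    # The uploader only ever sees this string, never the score or the weights:
    return "not suitable for most advertisers" if score > 0.5 else "monetized"

print(monetization_decision("Civil war documentary review"))  # flagged, no reason given
print(monetization_decision("New gameplay review"))           # fine
```

There's no way to work backwards from the verdict to the rule, so creators are left guessing at what the model actually rewards.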
It is the people's expectation that it be perfect. Once they have determined that there is something badly wrong going on in the video, destroying it is a violation of U.S. Code § 1519 (destroying evidence with intent). They had better have backups.
And in this case, there should probably be a separately trained group of reviewers to carefully examine these videos. Not the same group that's quickly checking over videos to see if they're pornographic, for instance.
All I am saying is that it takes time for things to propagate. If HQ were in the PH, things would have a faster turnaround.
Do you see "race" in every bit of conversation? If I said white chocolate or black chocolate, do the colors automatically infer race somehow? If I prefer black or white chocolate, what does that mean? Does that mean I'm destroying or consuming black or white?
How do you know there isn't already a separately trained group of reviewers whose only role is to carefully examine videos related to war crimes/terrorism/violence? I suspect there was, and Google/YouTube's senior management just decided to take a harder line on it than they should have.
That's certainly a first step, but I doubt it's a full solution. What's the phrase, "dog whistles"? Phrases and keywords that only a target audience would understand.
Basically every YouTuber I follow has complained about having videos demonetized this week. Subjects ranging from video game reviews to body dysmorphic disorder.
It really seems they've bitten off more than their machine learning algorithms can chew here.
Machine learning is just a way to launder bias. What gets defined as extremist by US companies will favor the West, discounting Western extremism and overplaying non-Western extremism.
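A tiny made-up example of what I mean by laundering: if the annotators only ever labeled one flavor of extremism, the trained filter inherits exactly that blind spot. The data, labels, and scoring rule here are all invented, not drawn from any real system:

```python
# Invented training data: only one kind of extremism was ever labeled positive.
from collections import Counter

training = [
    ("foreign militant propaganda video", 1),
    ("foreign militant recruitment clip", 1),
    ("domestic militia recruitment clip", 0),  # same behavior, never labeled
    ("cooking tutorial", 0),
]

# "Training": count words that appear in positively labeled examples.
extremist_words = Counter()
for text, label in training:
    if label:
        extremist_words.update(text.split())

def looks_extremist(text: str) -> bool:
    # Flag text containing any word seen at least twice in positive examples.
    return any(extremist_words[w] >= 2 for w in text.split())

print(looks_extremist("foreign militant training camp"))   # True
print(looks_extremist("domestic militia training camp"))   # False: the label bias passes straight through
```

The model isn't "deciding" anything; it just reflects whose behavior the labelers chose to call extremism.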