I mean -- I think my main point here, and to the other commenters saying "I'm happy they don't take things like this down" -- is that the dynamic shifts a bit when the explicit reason for a project's creation is malicious, and the entire point of it existing on the platform is for other people to use it maliciously.
The code will continue to exist regardless of whether you see it on GitHub. Script kiddies can just as easily share the source code or binary in a zip file on a forum somewhere, or even on Discord. All you'd accomplish by removing it from GitHub is adding a censorship layer where some GitHub employee or algorithm now needs to determine what's "allowed" on the site. Are you sure you want that?
Even before considering the deleterious effects of censorship, it would simply be more work for everyone and unlikely to benefit anyone. Not to mention you'd lose valuable telemetry for after-the-fact investigations (e.g. if someone is accused of stealing photos from an ex on Discord, and GitHub can positively identify them as having downloaded a tool that steals Discord tokens, investigators could subpoena GitHub for those download records).
If there is a problem here, then hiding the code that exploits the problem does not eliminate it. It's Discord's responsibility to mitigate the scale of risk associated with a stolen token. A program that grabs a token on your machine probably shouldn't be able to use it to exfiltrate all the data from your Discord account. And similarly, it probably shouldn't be so easy for any program running on the machine (as a non-root user) to retrieve such a token in the first place.
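One way a service could "mitigate the scale of risk" of a stolen token is to bind each session token to the device that created it, so a token copied off the machine is useless elsewhere. This is a minimal sketch of that idea in Python, not Discord's actual scheme -- all names (`issue_token`, `device_fingerprint`) are hypothetical, and it doesn't help against malware that replays the token from the same machine:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # server-side signing secret

def issue_token(user_id: str, device_fingerprint: str) -> str:
    """Issue a session token HMAC-bound to a device fingerprint."""
    nonce = secrets.token_hex(8)
    tag = hmac.new(SERVER_KEY,
                   f"{user_id}:{nonce}:{device_fingerprint}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{user_id}:{nonce}:{tag}"

def validate_token(token: str, device_fingerprint: str) -> bool:
    """Recompute the tag; a token presented from a device with a
    different fingerprint fails validation."""
    user_id, nonce, tag = token.split(":")
    expected = hmac.new(SERVER_KEY,
                        f"{user_id}:{nonce}:{device_fingerprint}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

For example, `validate_token(issue_token("alice", "machine-A"), "machine-B")` returns `False`: exfiltrating the token alone isn't enough, the attacker also has to forge the fingerprint.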
Doesn't having it on GitHub inherently promote it, with GitHub acting as an aggregator for stuff like it? Instead of having to search for a forum somewhere, they can simply browse this cool GH topic, where they now have ~40 options at their disposal.
Abusive child porn is easy to find; should we have that on a fun, easy, entry-level site like GitHub too? It's called a slippery slope: if no moral line is drawn, where do we end up?
Abusive child porn is a well defined content type that is objectively classifiable. For better or worse, so is copyrighted content (according to the rules of the DMCA claim process).
"Harmful software" is a much blurrier line. Is a GitHub URL being used as a dropper in an active malware campaign? That will probably get a repository removed. Is the source code for malware published on GitHub? That's not harming anyone in its current form, just like the source code of Popcorn Time isn't pirating movies.
Do you want to ban any content with a readme claiming it can be used maliciously? What if I want to publish a basic keylogger implementation for an open source cybersecurity class? Where's the line between educational content and cyberweapons? And even if it's a weapon, how do you know I don't have permission to install the keylogger on a system, like one belonging to a company paying me to pentest them?
Every time I hear someone use the “slippery slope” argument, what they’re actually doing is making a strawman argument.
I can assure you, script kiddy code on GitHub isn’t going to lead to people uploading kiddy porn on GitHub as well. The two are not in any way related, let alone one being a slippery slope for another.
> The code will continue to exist regardless of whether you see it on GitHub
You can extrapolate this to literally anything - "we should allow hosting CSAM on GitHub, since it's on the Internet anyways and we can't do anything about that"
> If there is a problem here, then hiding the code that exploits the problem does not eliminate it.
There's no problem here. This code only exploits the naivety of whoever was social-engineered into running it. A session token gives access to the account by design; that's how the web works. The only way to steal a token is to have full access to the machine, and at that point there's no possible mitigation. Even if you completely eliminated persistent sessions, which would be a major UX regression, malware could still hook into a running process and steal the active session.
> And similarly, it probably shouldn't be so easy for any program running on the machine (as a non-root user) to retrieve such a token in the first place
What are you even saying? How does Discord/Chrome then read their own session data/cookies? Should we run them as root?
On my machine "%LOCALAPPDATA%\Google\Chrome\User Data\Default\Network\Cookies" doesn't have any special permissions, I'm able to open it up with Notepad spawned straight from my shell.
Maybe I'm misunderstanding NTFS permissions and this is expected, I don't do a lot of Windows, but worst case for the malware is that it has to show a UAC prompt, and if you made someone click "free-discord-nitro.exe" they'll probably click through that too.
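The observation generalizes beyond NTFS: a file written with default permissions is readable by any process running as the same user, no prompt required. A quick illustration in Python, using a temp file as a stand-in for the Cookies database (the real Chrome file is SQLite with encrypted values, which this ignores):

```python
import os
import stat
import tempfile

# Write a file the way most apps do: default per-user permissions, no ACLs.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".cookies") as f:
    f.write("session-token-goes-here")
    path = f.name

# Any other process running as the same user can read it straight back --
# no elevation, no UAC/consent prompt.
with open(path) as f:
    print(f.read())

mode = stat.filemode(os.stat(path).st_mode)
print(mode)  # e.g. -rw------- on POSIX: protects against OTHER users, not
             # other programs running as you
os.unlink(path)
```

The `-rw-------` mode is exactly the point: the permission model distinguishes users, so "free-discord-nitro.exe" running as you is inside the trust boundary.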
Permissions are fake here, especially on Windows: they separate users, not programs. If someone is running code as your user, it can access any data your user can.
On a Mac, if a program (not the user in a file selection dialog) attempted to read a file in ~/Library/Application Support/Google Chrome/, then it would trigger an alert like "[App] from Unknown Developer wants to access files in the ~/Library folder. Allow them?" You'd also need to have manually opened system preferences to have allowed the unsigned app to run in the first place.
And yes, a user could click through that. The primary responsibility is always on the user, within the bounds of what the OS allows them to do (as an extreme, a mobile app certainly cannot access data from another app's keychain or configuration directory -- but that requires a highly restrictive OS).

But the point is that an application should still make an effort to use the best practices the operating system provides for protecting sensitive data. In Discord's case, at least on Mac, it should probably be storing tokens in the Keychain, not the filesystem (maybe it does, idk).

Yes, malware can hook the process, but not without compromising various OS sandboxing mechanisms, which usually requires the user's assistance in clicking past scary warnings (and even going outside the flow of alerts to explicitly disable protections).
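For the Keychain route, the usual low-effort option is the macOS `security` CLI. This sketch only builds the command lines (service/account names are made up for illustration) so it runs anywhere, even though `security` itself is macOS-only:

```python
def keychain_store_cmd(service: str, account: str, secret: str) -> list[str]:
    """Build the macOS `security` invocation that stores a generic password.

    -U updates an existing item instead of failing. On a Mac you'd hand
    this to subprocess.run(); other apps reading the item then trigger a
    Keychain access prompt instead of silently reading a plaintext file.
    """
    return ["security", "add-generic-password",
            "-s", service, "-a", account, "-w", secret, "-U"]

def keychain_read_cmd(service: str, account: str) -> list[str]:
    """Build the matching lookup; -w prints only the secret itself."""
    return ["security", "find-generic-password",
            "-s", service, "-a", account, "-w"]

# On macOS, e.g.:
#   subprocess.run(keychain_store_cmd("discord", "me", token), check=True)
```

This isn't bulletproof either -- a user can approve the Keychain prompt -- but it moves the secret behind an OS-mediated access check rather than plain file permissions.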
It’s a grey area between what is malicious and what isn’t. A lot of people aren’t going to agree.
ytdl is a great example of that. For Google, it’s “stealing” people from their platform by allowing individuals to download content in a way that doesn’t increase engagement and ad views. I don’t personally agree that ytdl is malicious but I do understand how some could make that claim.
Then what about tools that are legitimately intended for research purposes but could still be abused?
The problem with freedoms is they have to work both ways: if you aren’t prepared to allow abuse of that freedom then you certainly aren’t going to allow legitimate but unpopular uses either.