One way for HackerRank to do the right thing would be to sue WorthIT for breach of contract and loss of reputation. Their contract must have included some requirements for due diligence in filing those notices, and these bogus DMCA notices do hurt HackerRank's reputation, especially with its target audience.
The only way to make this stop is to make companies like WorthIT feel financial pain from their actions.
There's also an argument to be made that this is on HackerRank. Just because they're using a contractor to do the dirty work doesn't excuse them from responsibility for this sloppiness.
To his credit, the CEO read my comment and reached out to me (as he noted in the sibling comment). Since that email answers your question, and also adds some useful context, I'll just share that:
> So, I generally have a personal rule against disparaging things that other people made because I know how hard it is to make stuff and how criticism feels. I ignored that rule in this instance only because I've been a victim of DMCA misuse from a company and have learned the hard way that the only real repercussions anyone will face are reputational.
> For context on my experience with HackerRank, I have used both the take-home test product and the real-time test product. It has been almost a year since I have used either, since I left that job.
> I found the real-time product frustrating to use; frequently I could not rely on the candidate seeing the same state I did. I forget exactly what the problem was but candidates were often confused by the console not appearing and things like that. In some instances, either I or candidates were booted out of an interview in progress and had to refresh the page. Usually we worked it out pretty quickly, but losing precious time can make candidates nervous and it's not a great first experience with the company for them. I also found that code execution ran slowly, in both the real-time product and the take-home test -- we weren't running big programs, I'm not sure what was so slow.
> On the take-home side, for context, I was one of several people responsible for the take-home coding test for my department and was also (unofficially) the go-to person to look into assignments that were flagged for plagiarism. I found the plagiarism detection results to be pretty incomprehensible -- sometimes I'd see tests with very high plagiarism scores and no signs of obvious plagiarism, or things like a 90% score with just a few innocuous lines of overlap with the alleged source. I understand that flagging false positives for human review is generally preferred here to false negatives, but my main concern here is that if recruiters within an organization take these numbers at face value, they end up rejecting candidates pretty much randomly, or because the candidate wrote idiomatic code that looks like everyone else's because it's the idiomatic solution in that language.
I will add to this that if recruiters or coding interviewers expect you, by default, to be a liar or a fraud, then they will be looking for evidence, and the slightest misstep instantly becomes a "fail," with the excuse that they'd rather miss a great candidate than risk hiring a bad one. That's the typical passive-aggressive BS you deal with.
The problem is really coding interviews in general, not just HackerRank itself. Who wants to do this type of shit in a browser anyway?
This is how the US Sarbanes–Oxley Act works. During the giant accounting scandals of the late 1990s and early 2000s, corporations were using layers of external vendors to cover up shady or illegal practices. SOX basically says "no" to this now: executives can't hide behind the onion of contractors. We need the same for the DMCA.
You're assuming that this company is actually a separate entity, and not some sketchy relatives of the HackerRank management providing international cover for their bogus DMCA attacks.