Hacker News

Usually when I've submitted a dupe story, if it already exists it just adds a vote to the pre-existing story.

Where the logic breaks (or lets people subvert it) is when different URLs lead to the same story. URLs often contain superfluous parameters that don't change the content served, but just log referrer or layout data (I'm sure everyone reading this site gets how this works). Adding, removing, or changing any of this data seems to pretty much break or confuse whatever dupe-detector logic exists.



Human dupe-detection would be an excellent extension to this process.


Why not just compare the content at the other end of the link with the content of existing links?

It wouldn't be that hard. Whenever a link is submitted, YC's server would visit the link, get the response, strip all HTML tags and whitespace from it, then hash whatever is left. It would then store this hash value with the link. Whenever a new story is submitted, it is likewise hashed, and a check is made for an existing link with the same hash value. If one exists, it's a dupe; if not, allow it.

This would be in addition to the existing URL-string dupe check, of course. It still wouldn't catch everything, but it should eliminate quite a few easy dupes.

If that turns out to have a low success rate, try hashing the page title or maybe the HTTP headers.
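The fetch-strip-hash idea could be sketched roughly like this in Python (a hypothetical sketch, not HN's actual code; the function names and the crude tag-stripping regex are mine, and a real version would need error handling and charset detection):

```python
import hashlib
import re
import urllib.request

def normalized_hash(html: str) -> str:
    """Strip all HTML tags and whitespace, then hash whatever is left."""
    text = re.sub(r"<[^>]*>", "", html)   # crude tag stripping
    text = re.sub(r"\s+", "", text)       # drop all whitespace
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def hash_of_url(url: str) -> str:
    """Visit the link, get the response, and hash the normalized body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return normalized_hash(html)

# On submission: compute the hash, look it up among stored hashes,
# and treat a match as a dupe.
```

Two pages that differ only in markup or whitespace hash identically, which is the point; any dynamic content on the page (ads, timestamps) would defeat it, though.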


A single comment or timestamp would change the hash.

Maybe the <title>, or the contents of the first <h1> or something would be a better proxy.


Yeah, that's what I was thinking when I added that last line.

For some reason I initially wasn't thinking about comments... so the title would be a much better proxy.


Are you suggesting that new submissions route through Mechanical Turk?

LOL.

One approach might be that when humans detect dupes, they could report them: click the "dupe" link, specify the URL(s) of the dupe(s), and submit. The oldest submission "wins", and the data could be used to train a Bayesian dupe detector. I imagine you could start with a URL text match (it's the ends of the string that tend to be different), along with a check of the <title> of the supposed dupe page, and maybe the first 128 characters of the story text or something.

It actually sounds like a fun project.
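The URL-ends observation could start with something as simple as stripping known tracking parameters and trailing slashes before comparing (a hypothetical sketch; the parameter list is an illustrative guess, not HN's actual logic):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Query parameters that usually don't change the content served
# (an illustrative list, not exhaustive).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "ref", "fbclid"}

def normalize_url(url: str) -> str:
    """Canonicalize a URL so trivially different variants compare equal."""
    p = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(p.query)
             if k.lower() not in TRACKING_PARAMS]
    return urlunparse((p.scheme.lower(), p.netloc.lower(),
                       p.path.rstrip("/"), "", urlencode(query), ""))
```

The normalized URL, the <title> check, and a prefix of the story text would each make one feature for such a classifier.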



