
This isn't really the issue. The real issue is that MD5 hashes (though these hashes are SHA1, which has the same problem) are too easily computed; they are practically brute-forceable. I don't need a rainbow table to compute hashes when I can slam out millions in short order using a GPU. You have a good point about needing to know the salt, but getting the salt is generally easy because it's usually stored in the same place as the hashes (and this practice is fine, because hiding the salts doesn't improve security significantly on its own).

This is a major reason to use bcrypt.
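
To make that concrete, a minimal sketch of the cost difference (assuming the third-party Python "bcrypt" package; the password and parameters are made up):

    import hashlib
    import bcrypt  # third-party: pip install bcrypt

    password = b"hunter2"

    # Bare SHA1: essentially one compression-function call, trivially cheap
    # to grind through on a GPU.
    fast_hash = hashlib.sha1(password).hexdigest()

    # bcrypt: the cost parameter (log2 of the iteration count) makes every
    # single guess deliberately expensive, for the attacker as much as for us.
    slow_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    assert bcrypt.checkpw(password, slow_hash)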



The difference is that if it's salted you have to do the work separately for each specific password. Without salting you can test a single generated hash (rainbow table) against all 6.9 million hashes at the same time.
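
Roughly, the difference as a toy sketch (made-up salts and passwords; SHA1 only because that's what the dump used):

    import hashlib

    dump_unsalted = {hashlib.sha1(p).hexdigest() for p in (b"qwerty", b"dragon")}
    dump_salted = [(b"a1b2", hashlib.sha1(b"a1b2" + b"qwerty").hexdigest()),
                   (b"c3d4", hashlib.sha1(b"c3d4" + b"dragon").hexdigest())]
    candidates = [b"password1", b"letmein", b"qwerty"]

    # Unsalted: hash each candidate once, check it against every leaked hash at once.
    for guess in candidates:
        if hashlib.sha1(guess).hexdigest() in dump_unsalted:
            print("cracked:", guess)

    # Salted: the same candidate has to be re-hashed for every distinct salt.
    for salt, target in dump_salted:
        for guess in candidates:
            if hashlib.sha1(salt + guess).hexdigest() == target:
                print("cracked:", guess)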

Not defending the choice - bcrypt is obviously a much better way to go.


The thing is, though, that it's trivial to slam through that set of salted passwords. It's like unsecured Wi-Fi versus WEP: "door unlocked" versus "'No Trespassing' sign."


Let's forget about bcrypt for a second.

What prevents developers from adding a large DB-wide salt (in addition to the normal per-password salt) to every password? Wouldn't that prevent brute-force attacks regardless of the hashing algorithm?
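
Concretely, something like this (a sketch of the construction being asked about; the names are made up and this isn't a recommendation):

    import hashlib
    import os

    # Large, database-wide secret, ideally stored somewhere other than the user table.
    SITE_WIDE_SECRET = os.urandom(1024 * 1024)

    def hash_password(password: bytes, per_user_salt: bytes) -> str:
        # Per-user salt as usual, plus the big site-wide string mixed in.
        return hashlib.sha1(per_user_salt + password + SITE_WIDE_SECRET).hexdigest()

    salt = os.urandom(16)
    stored = hash_password(b"correct horse", salt)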


Random nonces have very little to do with what makes SHA1 insecure and bcrypt secure. Developers have a very weird and totally misplaced faith in the ability of random "salts" to secure passwords.


We're speaking about a very specific attack here: brute force. And I'm speaking about a very specific type of "salt" (which could probably be called something else, since it's not the same as the normal unique-per-password salt): a large, database-wide string of random bytes.

If every password is padded with such a string before hashing, computing the hash would be slower; obviously so, because you would have to process more data. An interesting question is whether this would also make it less parallelizable, by virtue of requiring more data than would fit into a GPU's cache.


None of this makes much sense to me, sorry. Brute-force password cracking has worked on salted passwords since Alec Muffett released Crack in the early '90s. The amount of extra computational power required to hash a password and a salt is negligible.

The only thing "salts" do is prevent rainbow table precomputation, but it's just a quirk of the late '90s and early '00s that "rainbow tables" ever became a mainstream attack method: one bad Microsoft password hash and a series of bad web applications. Long before the unsalted NT (MD4) hash was ever released, people were breaking salted Unix passwords with off-the-shelf tools, on much, much slower computers than we have now.


Computing a hash over 1MB of data is slower than computing a hash over 6-8 bytes of data. Brute-force attacks work by trying different passwords and checking whether, after being salted, they produce the same hash as the one in the database. Therefore, adding a large string to the password before hashing would force the attacker to hash that string too. The question is, can this be pre-computed once, or efficiently parallelized?
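
A rough way to see the raw cost difference (a sketch; absolute numbers vary by machine):

    import hashlib
    import os
    import timeit

    short_input = os.urandom(8)            # roughly a salted password
    long_input = os.urandom(1024 * 1024)   # password padded out to 1MB

    t_short = timeit.timeit(lambda: hashlib.sha1(short_input).digest(), number=100)
    t_long = timeit.timeit(lambda: hashlib.sha1(long_input).digest(), number=100)

    # The long input runs thousands of compression-function calls instead of one.
    print(t_short, t_long)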


You're advocating creating a 1MB "salt" string to slow down hashes? That's the same as simply iterating your hash function enough times to invoke the block function repeatedly.

Just use bcrypt, scrypt, or PBKDF2. People have already figured this problem out.
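
For example, PBKDF2 ships in the Python standard library (a sketch; the iteration count here is only illustrative):

    import hashlib
    import os

    salt = os.urandom(16)
    # The iteration count is the tunable work factor; raise it as hardware gets faster.
    key = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 200000)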


First, I do not advocate anything here. I asked a question.

Second, working with a large string of bits is the same as recursive hashing only if you can pre-compute some small intermediate state of the hash function for that string, independently of the password you're trying to guess. If you can't, you have to process the entire string for every new password tried.
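
Concretely, whether that pre-computation is possible depends on where the big string sits relative to the password. A sketch with Python's hashlib (made-up data): if the string is a prefix, its internal state can be absorbed once and reused via copy(), so the per-guess cost collapses; if the password comes first, every guess has to re-process the whole string.

    import hashlib
    import os

    big_string = os.urandom(1024 * 1024)
    candidates = [b"password1", b"letmein", b"qwerty"]

    # Prefix construction, H(big_string || password): absorb the big string once,
    # then copy the intermediate state for each candidate -- almost free per guess.
    precomputed = hashlib.sha1(big_string)
    prefix_hashes = []
    for guess in candidates:
        h = precomputed.copy()
        h.update(guess)
        prefix_hashes.append(h.hexdigest())

    # Suffix construction, H(password || big_string): no shared prefix to reuse,
    # so the attacker really does hash the full string for every candidate.
    suffix_hashes = [hashlib.sha1(guess + big_string).hexdigest() for guess in candidates]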


I answered your question: using a very large "salt" to force a password hash to run more block functions is a bad idea.

Modern password crackers are extremely fast without precomputing anything.


How much slower would you estimate it being?


1MB of data works out to 16,384 SHA-256 blocks. So that's roughly the slowdown I would expect, minus the time it takes to initialize the algorithm for a particular message.
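
The arithmetic behind that figure (a sketch):

    BLOCK_SIZE = 64                  # bytes per SHA-256 compression-function call
    DATA_SIZE = 1024 * 1024          # 1MB of "salted" input
    print(DATA_SIZE // BLOCK_SIZE)   # -> 16384 block computations per password guess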

That's not that interesting by itself, but it is interesting to think about how this would affect computing the hashes on GPUs.


And how high can you crank the work factor for, say, bcrypt?


Is there a significant time difference in computing the SHA1 hash of 40 bytes versus, say, 128 bytes?


128 bytes is not "large". I was thinking more along the lines of a megabyte or more. There is no question that it will slow down hash computation, because you would need to process more data. The question is, can you efficiently parallelize this on commodity hardware (GPUs)?



