"Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
Genuine question—how would this bug be produced in the first place?
My (limited) experience makes me think that cleartext passwords are somehow hard coded to be logged, perhaps through error logging or a feature that’s intended for testing during development.
I personally would not code a backend that allows passwords (or any sensitive strings) to be logged in any shape or form in production, so it seems a little weird to me that this mistake is considered a “bug” instead of a very careless mistake. Am I missing something?
Let's say you log requests and the POST body parameters that are sent along with them. Oops, forgot to explicitly blank out any fields known to contain passwords. Now they're saved in cleartext in the logs every time the user logs in.
We made this mistake - the trick is determining which fields are outright sensitive, which are sensitive enough that they should be censored but still included in the log, and which are just the rest of the crud.
It turns out that this is non-trivial - when censoring how do you indicate that something was changed, while keeping the output to a minimum? blank/"null" was rejected because it would mask other problems, and "* THIS FIELD HAS BEEN REDACTED DUE TO SENSITIVE INFORMATION *" was rejected for being "too long". Currently we use "XXXXX", which has caused some intern head scratching but is otherwise fine.
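To make the redaction approach concrete, here is a minimal sketch in Python; the field names, the "XXXXX" placeholder, and the logger setup are illustrative assumptions rather than anyone's actual implementation:

    import json
    import logging

    logger = logging.getLogger("request_log")

    # Hypothetical denylist of field names considered sensitive.
    SENSITIVE_FIELDS = {"password", "password_confirmation", "credit_card"}

    def log_request_body(body: dict) -> None:
        """Log a POST body with known-sensitive fields censored."""
        redacted = {
            key: ("XXXXX" if key in SENSITIVE_FIELDS else value)
            for key, value in body.items()
        }
        logger.info("POST body: %s", json.dumps(redacted))

    # The password never reaches the log in cleartext:
    log_request_body({"username": "alice", "password": "hunter2"})

The hard part, as described above, is remembering to add every sensitive field to the denylist - miss one and it leaks.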
Easy: have a framework that validates & sanitizes all your parameters, don't allow any non-declared parameter, and make something like "can_be_logged" a mandatory attribute; then only log those fields & audit them.
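A rough sketch of that allowlist idea, assuming a hypothetical schema in which every parameter must declare a can_be_logged attribute; the framework and field names are made up for illustration:

    import logging

    logger = logging.getLogger("params")

    # Hypothetical parameter declarations: every parameter must be declared,
    # and can_be_logged is a mandatory attribute of the declaration.
    PARAM_SCHEMA = {
        "username": {"can_be_logged": True},
        "password": {"can_be_logged": False},
    }

    def validate_and_log(params: dict) -> dict:
        # Reject anything that was not declared up front.
        unknown = set(params) - set(PARAM_SCHEMA)
        if unknown:
            raise ValueError(f"undeclared parameters: {unknown}")
        # Only log the fields explicitly marked as loggable.
        loggable = {k: v for k, v in params.items()
                    if PARAM_SCHEMA[k]["can_be_logged"]}
        logger.info("request params: %s", loggable)
        return params

The difference from a denylist is that a forgotten declaration fails loudly instead of silently leaking.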
Wouldn't that make it easier for someone that has access to hashed passwords in the case of a database leak? They would just have to submit the username and the hashed password (which they now have).
In this case the client side would have our algorithm (e.g. in JavaScript) plus the private key we use to hash the password. If that's the case, I can't see any difference between giving an attacker the password and giving them the hashed password along with the algorithm and key.
So sure you don't want to log everything in Prod, but maybe you do in Dev. In that case, a bug would be to push the dev logging configuration to Prod. Oops.
If you have the cleartext password at any point in your codebase, then there is no foolproof way to prevent it from being logged unintentionally as the result of a bug. You just have to be extra careful (code review, a minimal amount of code manipulating it, a prod-like testing environment with a log scanner, ...).
Not exactly log files, but I once noticed a C coredump contained raw passwords in strings that had been freed but not explicitly overwritten. Similar to how Facebook "deletes" files by merely marking them as deleted, free() works the same way in C: the memory isn't actually overwritten until something else writes over it.
Aren't coredumps static copies of the memory state at time of termination - usually unplanned? So not really the same thing as having ongoing access to a program's memory; I can't really see a debugging process that would involve viewing memory in a dynamic way, whereas it's somewhat of a concern if coredumps (an important debugging tool) reveal plaintext passwords.
In the past, I've seen logs monitored for high-entropy strings that could be API keys or passwords. However, in a NoSQL/UUID-using environment, this could be really hard to implement.
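One way such monitoring might work is a Shannon-entropy scan over the tokens in each log line; this is only a rough sketch, and the length and entropy thresholds are arbitrary assumptions:

    import math
    import re
    from collections import Counter

    def shannon_entropy(token: str) -> float:
        """Bits of entropy per character of a token."""
        counts = Counter(token)
        total = len(token)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def flag_suspicious_tokens(log_line: str, min_len: int = 16, threshold: float = 3.5):
        """Return tokens that look like secrets: long and high-entropy."""
        tokens = re.findall(r"[A-Za-z0-9+/=_\-]+", log_line)
        return [t for t in tokens
                if len(t) >= min_len and shannon_entropy(t) >= threshold]

As noted, UUIDs and base64-encoded identifiers trip the same detector, which is why tuning it is the hard part.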
Perhaps implement some type of “password canary” - some type of test account(s) with known high-entropy passwords.
Have an automated system send periodic login requests (or any other requests which contain sensitive information that shouldn’t be logged) for this account, and have another system which searches log files for the password.
If it’s ever found, you know something is leaking.
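A rough sketch of that canary loop, assuming the requests library and a plain HTTPS login endpoint; the account name, password, URL, and log path are all placeholders:

    import subprocess

    import requests

    CANARY_USER = "canary-account"                 # dedicated test account
    CANARY_PASSWORD = "V8qL2rN7xZ4pW9sKtY6m"       # known high-entropy value
    LOG_GLOB = "/var/log/app/*.log"                # wherever the app writes logs

    def send_canary_login() -> None:
        requests.post("https://example.com/login",
                      data={"username": CANARY_USER, "password": CANARY_PASSWORD})

    def logs_contain_canary() -> bool:
        # grep exits with status 0 only when it finds a match.
        result = subprocess.run(f"grep -rF '{CANARY_PASSWORD}' {LOG_GLOB}",
                                shell=True, capture_output=True)
        return result.returncode == 0

    send_canary_login()
    if logs_contain_canary():
        print("ALERT: canary password found in the logs")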
I think the joke is that both Github and Twitter are famous for being built on Rails (although Twitter just-as-famously required a move off of Rails in order to scale)
There was a great keynote about this at this year's RailsConf
The argument was essentially that if Twitter had instead chosen a language more natively inclined toward scalability, they would necessarily have hired 10x as many engineers and would not have succeeded at building the product that people use to tell each other, simply, what bar they're at. That braindead-simple thing (which you can probably scale just fine in any language) is ultimately what drove their success... it wasn't any great technological feat that made Twitter successful, it was pretty much just the "Bro" app that people loved.
(The talk was called "Rails Doesn't Scale" and will be available soon, but RailsConf2018 evidently hasn't posted any of the videos yet.)
Where is it showing a password there? I assume this has been fixed because I can't duplicate it on my machine and the screenshot posted on that article doesn't seem to show any plaintext passwords.
Which makes me wonder, is this really a bug or did someone make it look like a "bug"?
Also they say they found no evidence of anyone stealing these passwords, but I wouldn't be surprised if some companies decide not to look too hard just so they can later say "they found no evidence of such an act."
So best practice would be that the cleartext password is never sent to the server, so they could never log it even accidentally. That means the hashing needs to be done client side, probably with JavaScript. Is there any safe way to do that?
nah, that just makes the "hashed password" the equivalent of the cleartext password. Whatever it is your client sends to the server for auth is the thing that needs to be protected. If the client sends a "hashed password", that's just... the password. Which now needs to be protected. Since if someone has it, they can just send it to the server for auth.
But you can do fancy cryptographic things where the server never sees the password and it's still secure - like the entire field of public-key cryptography, Diffie-Hellman key exchange, etc.
You can store things as follows. Store the salted hashed password with its salt server side. When the user wants to login send them the salt and a random salt. Client side hashes the password + salt then hashes that hash with the random value. What am I missing? Probably something since this is something I rolled my own version of when I was a teenager, but it's not immediately obvious to me.
Server stores hashed-password, hash-salt, and random-salt.
Server sends hash-salt, and random-salt to client.
Client uses user password and hash-salt to generate hashed-password.
Client hashes hashed-password using random-salt.
Client sends hashed-hashed-password to server.
Server grabs the stored hashed-password, hashes it using the stored random-salt, and checks for a match against the client's hashed-hashed-password.
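For concreteness, a sketch of that exchange; the hash function and variable names are assumptions, and the objections below still apply:

    import hashlib
    import hmac
    import os

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Registration (server side): store hashed-password and hash-salt.
    hash_salt = os.urandom(16)
    stored_hashed_password = h(hash_salt + b"correct horse battery staple")

    # Login: server sends hash-salt plus a fresh random-salt to the client.
    random_salt = os.urandom(16)

    # Client recomputes hashed-password from the typed password, then hashes
    # it again with the random-salt before sending the result to the server.
    client_hashed_password = h(hash_salt + b"correct horse battery staple")
    client_response = h(random_salt + client_hashed_password)

    # Server does the same with its stored hashed-password and compares.
    expected = h(random_salt + stored_hashed_password)
    assert hmac.compare_digest(client_response, expected)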
--
So the only thing this actually does is avoid sending the text of the password the user typed to the server. But at a technical level, now the hashed-password is the new "password".
Let's say the database is compromised. The attacker has the hashed-password. They make a login request to fetch the random-salt, hash their stolen hashed-password with it and send that to the server. Owned.
Along with being more complicated with no real gain, this also takes the hashing away from the server-side, which is a big negative, as the time that it takes to hash a password is a control method used to mitigate attacks.
Just send the plain-text password over HTTPS and hash it the moment it hits the server. There's no issue with this technique (as long as it's not logged!)
This is true. It does prevent an attacker from reusing a password they recover from your logs. But as others have pointed out a DB breach means all your users are compromised. Thank you.
No, random-salt is not stored permanently but generated at random by the server every time a client is about to authenticate. Alternatively a timestamp would be just as good.
The random-salt has to be stored, at least for the length of the authentication request, because the server needs to generate the same hashed-hashed-password as the client to be able to match and authenticate.
> Alternatively a timestamp would be just as good.
I don't see how that would work at all.
I also don't see the need to go any further in detail about how this scheme will not be better than the current best practices.
A timestamp would work the same way it works in (e.g.) Google Authenticator.
Incidentally, I really resent how it's impossible to have a discussion of anything at all related to cryptography on HN without somebody bringing up the "never roll your own crypto" dogma.
If the ideas being proposed are bad, please point out why, don't just imply that everyone except you is too stupid to understand.
Edit:
I just reread your comment above and you did a perfectly good job of explaining why it's a bad idea, I must have misunderstood first time round: it's a bad idea because now the login credentials get compromised in a database leak instead of a MITM, which is both more common in practice and affects more users at once.
Sorry for saying you didn't explain why it is a bad idea.
The problem with this scheme is that if the database storing the salted hashed passwords is compromised, then an attacker can easily log in as any user. In a more standard setup, the attacker needs to send a valid password to log in, which is hard to reverse from the salted hashed password stored server-side. In this scheme, the attacker no longer needs to know the password, as they can just make a client that sends the compromised server hash salted with the random salt requested by the server.
> Store the salted hashed password with its salt server side.
So now _this_ is effectively just "the password", that needs to be protected, even though you're storing it server side.
If an attacker has it, they can go through the protocol and auth -- I think, right? So you prob shouldn't be storing it in the db.
All you're doing is shuffling around what "the password" that needs to be protected is, still just a variation of the original attempt in top comment in this thread.
The reason we store hashed passwords in the db instead of the password itself is of course because the hashed password itself is not enough to successfully complete the "auth protocol", without knowing the original password. So it means a database breach does not actually expose info that could be used to successfully complete an auth. (unless they can reverse the hash).
I _think_ in your "protocol" the "original" password actually becomes irrelevant; the "salted hashed password with its salt" is all you need, so now _this_ is the thing you've got to protect. But you're storing it in the db, so we lose the very benefit of not storing the password in the db that we were hashing passwords for in the first place!
I guess your protocol protects against an eavesdropper better, but we generally just count on https/ssl for that, that's not what password hashing is for in the first place of course. Which is what the OP is about, that _plaintext_ rather than hashed passwords ended up stored and visible, when they never should have been either.
Crypto protocols are hard. We're unlikely to come up with a successful new one.
But now your password is effectively just HMAC(user_salt, pwd), and the server has to store it in plaintext to be able to verify. Since plaintext passwords in the db are bad, this solution doesn't sound too attractive, unless you were suggesting something else.
"Since if someone has it, they can just send it to the server for auth" unless it's only good for a few moments (the form you type it into constantly polling for a new nonce).
Not really... it's not that simple. You could use the time of day as a seed for the hash, for example. There are tradeoffs to be made, which is partly why they don't do it, but the story isn't as simple as "the hash becomes the password".
And how do you propose to do that when the clocks aren't synchronized? Clock drift is exceptionally common. Not everyone runs NTP or PTP - probably even fewer use PTP. On desktop/laptop clients it's typically configurable whether or not to attempt clock sync, and I've never seen the level of synchronization documented for PCs. High-precision PTP usually requires very expensive hardware, not something to be expected of home users or even a startup, depending on the industry.
The point was you could do similarly here. Just have a margin of like 30 seconds (or whatever). I never said you have to do this to nanosecond precision.
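A sketch of what a time-seeded response with that kind of margin could look like - essentially a TOTP-style construction; the shared-secret handling and the window size are illustrative assumptions:

    import hashlib
    import hmac
    import time

    WINDOW_SECONDS = 30  # accepted margin, as suggested above

    def time_seeded_response(secret: bytes, timestep=None) -> str:
        """HMAC over the current 30-second window, so old responses expire."""
        if timestep is None:
            timestep = int(time.time()) // WINDOW_SECONDS
        return hmac.new(secret, str(timestep).encode(), hashlib.sha256).hexdigest()

    def verify(secret: bytes, response: str) -> bool:
        now = int(time.time()) // WINDOW_SECONDS
        # Accept the previous and next window to absorb modest clock drift.
        return any(hmac.compare_digest(response, time_seeded_response(secret, t))
                   for t in (now - 1, now, now + 1))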
Only if the server only keeps around the hash -- which is why I said there are trade-offs to be made. The point I was making was that the mere fact that you're sending a hash does not trigger the "hash-becomes-password" issue; that's a result of secondary constraints imposed on the problem.
Makes sense, and then you're getting into something akin to SSH key pairs, and I know from experience that many users can't manage that especially across multiple client devices.
There are probably ways to make it reasonable UX, but they probably require built-in browser (or other client) support.
Someone in another part of this thread mentioned the "Web Authentication API" for browsers, which I'm not familiar with, but is possibly trying to approach this?
It ties in with the Credential Management API (a way to have the browser store login credentials for a site, a much less heuristic-based approach than autocomplete on forms), and the basic principle is: generate a key pair and pass back the public key to be sent to the server during registration; on login, the server generates a challenge value for the client to sign. IIRC the JS code never sees the private key - only the browser does.
I believe you could use a construction like HMAC to make it so that during authentication (not password setting events) you don't actually send the token. But if someone is already able to MITM your requests, what are the odds they can't just poison the JavaScript to send it in plaintext back to them?
I think their goal is to still use https, but stop anything important from leaking if a sloppy server-side developer logs the full requests after TLS decryption (as Twitter did here).
No, there fundamentally isn't, because you can't trust the client to actually be hashing a password. If all the server sees is a hash, the hash effectively is the password. If it's stolen, a hacker can alter their client to send the stolen hash to the server.
If a hash is salted with a domain it won't be usable on other websites. You should additionally hash the hash on the server, and if you store the client hashes, you can update the salts on next sign-in. A better question is why clients should be sending unhashed passwords to servers in the first place.
https://medium.com/the-coming-golden-age/internet-www-securi...
This discussion is only relevant with an attacker that can break tls. A hash that such an attacker couldn't reverse might be slow on old phones so there is a tradeoff.
Also, hashed passwords shouldn't be logged either.
>That means the hashing needs to be done client side, probably with JavaScript. Is there any safe way to do that?
No [0,1...n]. Note that these articles are about encryption, but the arguments against javascript encryption apply to hashing as well.
Also consider that no one logs this stuff accidentally to begin with. If the entity controlling the server and writing the code wants to log the passwords, they can rewrite their own javascript just as well as they can whatever is on the backend. There's nothing to be done about people undermining their own code.
> consider that no one logs this stuff accidentally to begin with
It's possible. You create an object called Foo (possibly a serialized data like a protobuf, but any object), and you recursively dump the whole thing to the debug log. Then you realize, oh, when I access a Foo, sometimes I need this one field out of the User object (like their first name), so I'll just add a copy of User within Foo. You don't consider that the User object also contains the password as one of its members. Boom, you are now accidentally logging passwords.
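A sketch of how that plays out, with hypothetical class names:

    import dataclasses
    import logging

    logging.basicConfig(level=logging.DEBUG)

    @dataclasses.dataclass
    class User:
        first_name: str
        password: str       # nobody remembers this field is in here

    @dataclasses.dataclass
    class Foo:
        request_id: int
        user: User          # added later "just for the first name"

    foo = Foo(request_id=42, user=User("Alice", "hunter2"))

    # The innocent-looking debug dump now writes the password to the log:
    # Foo(request_id=42, user=User(first_name='Alice', password='hunter2'))
    logging.debug("processing %r", foo)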
Any user object on the server should only ever have the password when it is going through the process of setting or checking the password, and it should be coming from the client and not stored. So your case of logging the user would only be bad at one of those times. Otherwise, as in the case of a stored user, you should just have a hashed password and a salt in the user object.
Creating a User object that holds a password (much less a password in plaintext) seems next level stupid to begin with, but fair enough, I guess it could happen.
> Also consider that no one logs this stuff accidentally to begin with.
It can happen if requests are logged in middleware, and the endpoint developer doesn't know about it. It's still an extremely rookie mistake though, regardless of whether it was done accidentally or on purpose.
As others have stated, you'd just be changing the secret from <password> to H(<password>). The better solution is using asymmetric cryptography to perform a challenge-response test. E.g. the user sets a public key on sign up and to login they must decrypt a nonce encrypted to them.
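A sketch of the general idea, here using a signature over a server nonce rather than decrypting one; it assumes the third-party cryptography package, and the key handling is simplified for illustration:

    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Sign-up: the client generates a key pair and registers only the public key.
    client_private_key = Ed25519PrivateKey.generate()
    server_stored_public_key = client_private_key.public_key()

    # Login: the server issues a random nonce...
    nonce = os.urandom(32)

    # ...the client proves possession of the private key by signing it...
    signature = client_private_key.sign(nonce)

    # ...and the server verifies against the registered public key.
    try:
        server_stored_public_key.verify(signature, nonce)
        print("authenticated")
    except InvalidSignature:
        print("rejected")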
That would transfer it from something you know (a password) to something you have (a device with SSL cert installed) which are meant to protect against different problems.
Hmm, why should passwords (hashed or not) be stored in logs though? I don't see a reason for doing that. You could unset them (and/or other sensitive data) before dumping the request into the logs.
Wouldn't it be better to never even send the password to the server, but instead perform a challenge-response procedure? Does HTTP have no such feature built in?
Being simplistic, perhaps an automated test with a known password on account creation or login (e.g. "dolphins") and then a search for the value on all generated logs.
For people that know more about web security than I: Is there a reason it isn't good practice to hash the password client side so that the backends only ever see the hashed password and there is no chance for such mistakes?
Realize the point of hashing the password is to make sure the thing users send to you is different from the thing you store. You'll still have to hash the hashes again on your end, otherwise anyone who gets access to your stored passwords could use them to log in.
In particular, the point is to make it so that the thing you store can't actually be used to authenticate -- only to verify. So if you're doing it right, the client can't just send the hash, because that wouldn't actually authenticate them.
But at least, with salt, it wouldn't be applicable to other sites, just one. Better to just never reuse a password though. Honestly sites should just standardize on a password changing protocol, that will go a long way towards making passwords actually disposable.
I don't think a password changing protocol would help make passwords disposable. Making people change passwords often will result in people reusing more passwords.
No, the point is for password managers. The password manager would regularly reset all the passwords... until someone accesses your password manager and locks you out of everything!
Ultimately, what the client sends to the server to get a successful authentication _is_ effectively the password (whether that's reflected in the UI or not). So if you hash the password on the client side but not on the server, it's almost as bad as saving clear text passwords on the server.
You could hash it twice (once on the server once on the client) I suppose, but I'm not entirely sure what the benefit of that would be.
A benefit would be that under the sort of circumstance in the OP, assuming sites salted the input passwords, the hashes would need reversing in order to acquire the password and so reuse issues could be obviated. But I don't think that's really worth it when password managers are around.
I'm imagining we have a system where a client signs, and timestamps, a hash that's sent meaning old hashes wouldn't be accepted and reducing hash "replay" possibilities ... but now I'm amateurishly trying to design a crypto scheme ... never a good idea.
Assuming that you are referring to browsers as the client here. One simple reason is that client-side data can always be manipulated, so it does not really make any difference. It might just give a false sense of safety but does not change much.
In the case of multi-tier applications, where LDAP or AD is probably used to store the credentials, the back end is the one responsible for doing the hashing.
I can't think of a good reason not to hash on the client side (in addition to doing a further hash on the server side -- you don't want the hash stored on the server to be able to be used to log in, in case the database of hashed passwords is leaked). The only thing a bit trickier is configuring the work factor so that it can be done in a reasonable amount of time on all devices that the user is likely to use.
Ideally all users would change their passwords to something completely different in the event of a leak. But realistically this just doesn't happen -- some users refuse to change their passwords, and others just change one character. If only the client-side hash is leaked rather than the raw password, you can greatly mitigate the damage by just changing the salt at the next login.
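A sketch of that two-stage hashing; the algorithms and iteration counts are illustrative, and the per-user salt is assumed to be fetched by the client before it hashes:

    import hashlib
    import hmac
    import os

    # Client side: a slow, salted hash so the raw password never leaves the device.
    def client_hash(password: str, user_salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), user_salt, 100_000)

    # Server side: hash the received value again before storing it, so a database
    # leak does not directly yield something that can be replayed at login.
    def make_server_record(client_hashed: bytes):
        server_salt = os.urandom(16)
        stored = hashlib.pbkdf2_hmac("sha256", client_hashed, server_salt, 100_000)
        return server_salt, stored

    def verify(client_hashed: bytes, server_salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", client_hashed, server_salt, 100_000)
        return hmac.compare_digest(candidate, stored)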
If you don't have control over the client, it's a bad idea: your suggestion means the hash itself would be the password, and it wouldn't be necessary for an attacker to know the original password.
For one, you expose your hashing strategy. Not that security by obscurity is the goal; but there's no real benefit. Not logging the password is the better mitigation strategy.
We schedule log reviews just like we schedule backup tests. (Similar stuff gets caught during normal troubleshooting, but reviews are more comprehensive.)
It only takes one debug statement leaking to prod - it has to be a process, not an event.
Create a user with an extremely unusual password and create a script that logs them in once an hour. Use another script to grep the logs for this unusual password, and if it appears fire an alert.
Security reviews are important but we should be able to automate detection of basic security failures like this.
It would also be a good idea to search for the hashed version of that user’s password. It’s really bad to leak the unencrypted password when it comes in as a param, but it’s only marginally better to leak the hashed version.
This only works if you automate every possible code path. If you're logging passwords during some obscure error in the login flow then an automated login very likely won't catch it.
Log review is done for every single project at my workplace too (Walmart Labs). So I don't think this is a novel idea. And it does not stop there. Our workplace has a security risk and compliance review process which includes reviewing configuration files, data on disk, data flowing between nodes, log files, GitHub repositories, and many other artifacts to ensure that no sensitive data is being leaked anywhere.
Any company that deals with credit card data has to be very very sure that no sensitive data is written in clear anywhere. Even while in memory, the data needs to be hashed and the cleartext data erased as soon as possible. Per what I have heard from friends and colleagues, the other popular companies like Amazon, Twitter, Netflix, etc. also have similar processes.
It's novel to me; I've never worked anywhere that required high-level PCI compliance or that scheduled log reviews. Ad-hoc log review, sure. I think it's a fantastic idea regardless of PCI compliance obligations.
We just realised the software I'm working on has written RSA private keys to the logs for years. Granted, it was at debug level and only when using a rarely-used functionality, but still.
We also do log reviews, but 99% of the time they simply complain about the volume rather than the contents.
Do you enable debug logging in production? In our setup we log at info and above by default, but then have a config setting that lets us switch to debug logging on the fly (without a service restart).
This keeps our log volume down, while letting us troubleshoot when we need it. This also gives us an isolated time of increased logging that can be specifically audited for sensitive information.
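With Python's standard logging module, for example, the on-the-fly switch can be as simple as the sketch below; how the flag actually gets flipped (config watcher, admin endpoint, signal handler) is left out:

    import logging

    logger = logging.getLogger("app")
    logging.basicConfig(level=logging.INFO)   # default: info and above

    def set_debug_logging(enabled: bool) -> None:
        """Flip debug logging on the fly, without restarting the service."""
        logger.setLevel(logging.DEBUG if enabled else logging.INFO)

    logger.debug("not emitted")   # below the default threshold
    set_debug_logging(True)       # e.g. triggered by a config change
    logger.debug("now emitted")   # visible while troubleshooting, and auditable
    set_debug_logging(False)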
(We caught ourselves doing it 4-5 months back, and went through _everything_ checking... It was only a random accident that brought it to the attention of anyone who bothered to question it, too... Two separate instances, by different devs, of 'if (DEBUG_LEVEL = 3){ }' instead of == 3 - both missed by code reviews as well...)
I know it's irrational but I really dislike Yoda notation. Every time I encounter one while reading code I have to take a small pause to understand them, I don't know why. My brain just doesn't like them. I don't think I'm the only one either, I've seen a few coding styles in the wild that explicitly disallow them.
Furthermore any decent modern compiler will warn you and ask to add an extra set of parens around assignments in conditions so I don't really think it's worth it anymore. And of course it won't save you if you're comparing two variables (while the warning will).
I don't think "Yoda notation" is good advice. How do you prevent mistakes like the following with Yoda notation?
if ( level = DEBUGLEVEL )
When both sides of the equality sign are variables, the assignment will succeed. Following Yoda notation provides a false sense of security in this case.
As an experienced programmer I have written if-statements so many times in life that I never ever, even by mistake, type:
if (a = b)
I always type:
if (a == b)
by muscle memory. It has become a second nature. Unless of course where I really mean it, like:
One way to not write any bugs is to not write any code.
If you must write code, errors follow, and “defence in depth” is applicable. Use an editor that serves you well, use compiler flags, use your linter, and consider Yoda Notation, which catches classes of errors, but yes, not every error.
These kinds of issues are excellent commercials for why the strictness of a language like F# (or OCaml, Haskell, etc), is such a powerful tool for correctness:
1) Outside of initialization, assignment is done with the `<-` operator, so you're only potentially confused in the opposite direction (assignments incorrectly being boolean comparisons).
2) Return types and inputs are strictly validated, so an accidental assignment (which returns a `void`/`unit` type) would not compile where a bool is expected.
3) Immutable by default, so even if #1 and #2 were somehow compromised the compiler would still halt and complain that you were trying to write to something unwriteable
Any of the above would terminally prevent compilation, much less hitting a code review or getting into production... Correctness from the ground up prevents whole categories of bugs at the cost of enforcing coding discipline :)
Note these are only constant pointers. Your data is still mutable if the underlying data structure is mutable, (e.g. HashMap). Haven't used Java in a few years, but I made religious use of final, even in locals and params.
I don't really see how unless you've never actually read imperative code before; either way you need to read both sides of the comparison to gauge what is being compared. I'm dyslexic and don't write my comparisons that way and still found it easy enough to read those examples at a glance.
But ultimately, even if you do find it harder to parse (for whatever reason(s)) that would only be a training thing. After a few days / weeks of writing your comparisons like that I'm sure you'll find is more jarring to read it the other way around. Like all arguments regarding coding styles, what makes the most difference is simply what you're used to reading and writing rather than actual code layout. (I say this as someone who's programmed in well over a dozen different languages over something like 30 years - you just get used to reading different coding styles after a few weeks of using it)
Often when I glance over code to understand what it is doing I don't really care about values. When scanning from left to right it is easier when the left side contains the variable names.
Also I just find it unnatural if I read it out loud. It is called Yoda for a reason.
But again, none of the problems you've described are unteachable. Source code itself doesn't read like how one would structure a paragraph for human consumption, but us programmers learn to parse source code because we read and write it frequently enough. Just like how one might learn a human language by living and speaking in a country that speaks it.
If you've ever spent more than 5 minutes listening to arguments and counterarguments regarding Python whitespace vs C-style braces - or whether the C-style brace should append your statement or sit on its own line - then you'd quickly see that all these arguments about coding styles are really just personal preference based on what that particular developer is most used to (or pure aesthetics about what looks prettiest, but that's just a different angle of the same debate). Ultimately you were trained to read
if (variable == value)
and thus equally you can train yourself to read
if (value == variable)
All the reasons in the world you can't or shouldn't are just excuses to avoid retraining yourself. That's not to say I think everyone should write Yoda-style code - that's purely a matter of personal preference. But my point is arguing your preference as some tangible issue about legibility is dishonest to yourself and every other programmer.
There are always assumptions being made, no matter what you do. But "uppercase -> constant" is such a generic and cross-platform convention that it should always be followed. This code should never have passed code review for this glitch alone.
Rust has an interesting take on that: it's not a syntax error to write "if a = 1 { ... }", but it'll fail to compile because the expression "a = 1" evaluates to nothing (the unit type) while "if" expects a boolean, so it generates a type-check error.
Of course a consequence of that is that you can't chain assignments (a = b = c), but it's probably a good compromise.
Well, in Rust you couldn't have = return the new value in general anyway, because it's been moved. So even if chained assignment did work, it'd only work for Copy types, and really, a feature that only saves half a dozen characters, on a fraction of the assignments you do, and that can only be used a fraction of the time, doesn't seem worth it at all.
Unless you follow a zero warning policy they are almost useless. If you have a warning that should be ignored add a pragma disable to that file. Or disable that type of warning if it's too spammy for your project.
I've been working on authentication stuff for the last two weeks and the answer is "more than you'd like".
But luckily it's something we cover in code reviews and the logging mechanism has a "sensitive data filter" that defaults to on (and has an alert on it being off in production.)
From the email I received:
"During the course of regular auditing, GitHub discovered that a recently introduced bug exposed a small number of users’ passwords to our internal logging system, including yours. We have corrected this, but you'll need to reset your password to regain access to your account."
"[We] are implementing plans to prevent this bug from happening again" sure makes it sound like this bug is still happening. Should we wait a couple of days before changing passwords? Will it end up in this log right now, just like the old one?
That sounds more like "We're adding a more thorough testing and code-review process for our password systems to prevent developers from accidentally logging unhashed passwords in the future".
No, it sounds like a reasonable bugfixing strategy. Identify the bug, identify the fastest way to resolve it, then once it's fixed figure out how to ensure it never happens again, and what to do if it does.
I think you read "prevent this bug from happening again" to mean "prevent this particular problem from happening one more time", while the blogpost probably means something like "prevent this class of bug from occurring in the future"
"Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."
Exact same thing that GitHub did just recently.