
Actual Twitter blog post: https://blog.twitter.com/official/en_us/topics/company/2018/...

"Due to a bug, passwords were written to an internal log before completing the hashing process. We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again."

Exact same thing that GitHub did just recently.



Genuine question—how would this bug be produced in the first place?

My (limited) experience makes me think that cleartext passwords are somehow hard-coded to be logged, perhaps through error logging or a feature that’s intended for testing during development.

I personally would not code a backend that allows passwords (or any sensitive strings) to be logged in any shape or form in production, so it seems a little weird to me that this mistake is considered a “bug” instead of a very careless mistake. Am I missing something?

EDIT: Thank you very much in advance!


Let's say you log requests and the POST body parameters that are sent along with them. Oops, you forgot to explicitly blank out any fields known to contain passwords. Now they're saved in cleartext in the logs every time the user logs in.


Even logging a username field is likely to catch a bunch of false positives of users entering their passwords in the username input.


We made this mistake - the trick is determining which fields are fully sensitive, which are sensitive enough that they should be censored but still included in the log, and which are just the rest of the cruft.

It turns out that this is non-trivial - when censoring how do you indicate that something was changed, while keeping the output to a minimum? blank/"null" was rejected because it would mask other problems, and "* THIS FIELD HAS BEEN REDACTED DUE TO SENSITIVE INFORMATION *" was rejected for being "too long". Currently we use "XXXXX", which has caused some intern head scratching but is otherwise fine.
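A minimal sketch of that kind of censoring filter, in Python (the field names and the dict-based request are just for illustration, not our actual setup):

    SENSITIVE_FIELDS = {"password", "password_confirmation", "token", "secret"}

    def censor(params: dict) -> dict:
        # Return a copy of the request params with sensitive values masked.
        # "XXXXX" signals that a value was present but redacted, without
        # masking other problems the way a blank value or "null" would.
        return {k: "XXXXX" if k.lower() in SENSITIVE_FIELDS else v
                for k, v in params.items()}

    print(censor({"username": "alice", "password": "hunter2"}))
    # {'username': 'alice', 'password': 'XXXXX'}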


Easy: have a framework that validates & sanitizes all your parameters, doesn't allow any non-declared parameter, and makes something like "can_be_logged" a mandatory attribute; then only log those & audit them.
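A rough sketch of that allowlist idea in Python (the Param class and field names are hypothetical, not any particular framework):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Param:
        name: str
        can_be_logged: bool  # mandatory: every declared parameter must say so

    # Only declared parameters are accepted at all; undeclared ones are rejected.
    LOGIN_PARAMS = [Param("username", can_be_logged=True),
                    Param("password", can_be_logged=False)]

    def validate_and_log_view(raw: dict) -> dict:
        declared = {p.name: p for p in LOGIN_PARAMS}
        unknown = set(raw) - set(declared)
        if unknown:
            raise ValueError("undeclared parameters rejected: %s" % unknown)
        # Build the log line only from parameters explicitly marked loggable.
        return {name: value for name, value in raw.items()
                if declared[name].can_be_logged}

    print(validate_and_log_view({"username": "alice", "password": "hunter2"}))
    # {'username': 'alice'}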


I'd replace redacted fields with [redacted], or maybe U+2588


You prevent a lot of these problems by hashing passwords as soon as possible (i.e., on the client).


Wouldn't that make it easier for someone that has access to hashed passwords in the case of a database leak? They would just have to submit the username and the hashed password (which they now have).


You're right, but the attacker won't get the user's original password that they probably reuse elsewhere.

If it's just your authentication system hashes that are compromised, the damage can be contained.


In this case the client side will have our algorithm (i.e. in JavaScript) plus the key we use to hash the password. If that's the case, I can't see any difference between giving an attacker the password or giving them the hashed password along with the algorithm and key.


While there is merit to clientside hashing, you should always hash serverside as well, lest a leak prove catastrophic.


In the context of production, why would you need to log anything other than X-Forwarded-For/X-Real-IP, timestamp, and the endpoint that was hit?


Remember that the context is a bug.

So sure you don't want to log everything in Prod, but maybe you do in Dev. In that case, a bug would be to push the dev logging configuration to Prod. Oops.

If you have the cleartext password at any point in your codebase, then there is no foolproof way to prevent logging it unintentionally as the result of a bug. You just have to be extra careful (code review, minimal amount of code manipulating it, a prod-like testing environment with a log scanner, ...)


Because when fatal exceptions happen you want to know what the request was. It helps debug what went wrong.


Not exactly log files, but I once noticed a C coredump contained raw passwords in strings that had been free'd but not explicitly overwritten. Similar to how Facebook "deletes" files by merely marking them as deleted, free() works the same way in C: the memory isn't actually overwritten until something else writes onto it.


But if you have access to the program's memory you have access to all the POST requests anyway.


Aren't coredumps static copies of the memory state at time of termination - usually unplanned? So not really the same thing as having ongoing access to a program's memory; I can't really see a debugging process that would involve viewing memory in a dynamic way, whereas it's somewhat of a concern if coredumps (an important debugging tool) reveal plaintext passwords.


You're getting a lot of what I would consider bad responses.

There are ways (each with downsides) to mitigate the risk of logging requests.

HMAC with a time component will render the data useless before long. Essentially OTP. Downside: client time needs to be accurate.

Negotiate a shared key a la NTLM. Downside: more round trips; essentially establishing encrypted transport inside encrypted transport (HTTPS).
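For the first option, a minimal sketch of what "HMAC with a time component" could look like, assuming client and server share a per-user secret and tolerate one 30-second step of clock drift (Python):

    import hashlib, hmac, time

    def time_hmac(shared_secret: bytes, step: int = 30) -> str:
        # Client side: MAC the current 30-second window; the value is
        # useless to anyone reading the logs once the window has passed.
        window = int(time.time()) // step
        return hmac.new(shared_secret, str(window).encode(), hashlib.sha256).hexdigest()

    def verify(shared_secret: bytes, presented: str, step: int = 30) -> bool:
        # Server side: accept the current or previous window to absorb drift.
        now = int(time.time()) // step
        return any(hmac.compare_digest(
                       presented,
                       hmac.new(shared_secret, str(w).encode(), hashlib.sha256).hexdigest())
                   for w in (now, now - 1))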


Careless mistakes are probably one of the most common types of bug you’ll find in the wild


In the past, I've seen logs monitored for high-entropy strings that could be API keys or passwords. However, in a NoSQL/UUID-using environment, this could be really hard to implement.


Perhaps implement some type of “password canary”: test account(s) with known high-entropy passwords.

Have an automated system send periodic login requests (or any other requests which contain sensitive information that shouldn’t be logged) for this account, and have another system which searches log files for the password.

If it’s ever found, you know something is leaking.
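A sketch of how that could be automated in Python (the login endpoint, account, and log paths here are made up):

    import glob, urllib.parse, urllib.request

    CANARY_PASSWORD = "correct-horse-battery-staple-8kQz"  # known high-entropy canary

    def send_canary_login():
        # Run on a schedule so the canary password regularly flows through the stack.
        data = urllib.parse.urlencode({"username": "canary@example.com",
                                       "password": CANARY_PASSWORD}).encode()
        urllib.request.urlopen("https://example.com/login", data=data)

    def scan_logs_for_canary(pattern="/var/log/app/*.log"):
        # Any hit means something in the pipeline is writing passwords to disk.
        leaks = []
        for path in glob.glob(pattern):
            with open(path, errors="ignore") as fh:
                if CANARY_PASSWORD in fh.read():
                    leaks.append(path)
        return leaks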


And regularly check for that password on haveibeenpwned and other breached password databases.


Do you trust the database to not have been hijacked to capture checked passwords?

Better advice is to delete accounts you don't use. If deletion isn't possible (refusing it is illegal in the EU now), scramble the private data and the password.

Download the databases yourself and check them locally.

Changing passwords regularly also limits the damage.


Log line -> high entropy check -> false positive uuid check -> alerts

I’m not seeing how it would be a challenge in a UUID-based environment, unless there’s a nuanced detail I’m missing.
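A sketch of that pipeline in Python (the entropy threshold and token pattern are arbitrary choices):

    import math, re
    from collections import Counter

    UUID_RE = re.compile(r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                         r"[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

    def shannon_entropy(token):
        counts = Counter(token)
        return -sum((c / len(token)) * math.log2(c / len(token)) for c in counts.values())

    def suspicious_tokens(log_line, min_len=12, threshold=3.0):
        # High-entropy tokens that are not UUIDs: candidates for an alert.
        tokens = re.findall(r"[A-Za-z0-9+/_\-]{%d,}" % min_len, log_line)
        return [t for t in tokens
                if not UUID_RE.match(t) and shannon_entropy(t) > threshold]

    print(suspicious_tokens("user=9b2e4d1c-0f3a-4e7b-9c2d-1a2b3c4d5e6f pw=kJ8fQ2mZx7Lp0Rw3"))
    # ['kJ8fQ2mZx7Lp0Rw3'] - the UUID is filtered out, the password-looking token is flagged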



Of course it could have! No API is foolproof


I think the joke is that both Github and Twitter are famous for being built on Rails (although Twitter just-as-famously required a move off of Rails in order to scale)


There was a great keynote about this at this year's RailsConf

The argument was essentially that if Twitter had instead chosen a language more natively inclined toward scalability, they would have necessarily hired 10x as many engineers and would not have succeeded at building the product people use to simply tell each other what bar they are at. That braindead-simple thing (which you can probably scale just fine in any language) is ultimately what drove their success... it wasn't any great technological feat that made Twitter successful; it was pretty much just the "Bro" app that people loved.

(The talk was called "Rails Doesn't Scale" and will be available soon, but RailsConf2018 evidently hasn't posted any of the videos yet.)


Sounds like the same kind of thing that happened with APFS encryption passwords recently, too.

https://www.mac4n6.com/blog/2018/3/30/omg-seriously-apfs-enc...


Where is it showing a password there? I assume this has been fixed because I can't duplicate it on my machine and the screenshot posted on that article doesn't seem to show any plaintext passwords.


Which makes me wonder, is this really a bug or did someone make it look like a "bug"?

Also they say they found no evidence of anyone stealing these passwords, but I wouldn't be surprised if some companies decide not to look too hard just so they can later say "they found no evidence of such an act."


So best practice would be that the cleartext password is never sent to the server, so they could never log it even accidentally. That means the hashing needs to be done client side, probably with JavaScript. Is there any safe way to do that?


nah, that just makes the "hashed password" the equivalent of the cleartext password. Whatever it is your client sends to the server for auth is the thing that needs to be protected. If the client sends a "hashed password", that's just... the password. Which now needs to be protected. Since if someone has it, they can just send it to the server for auth.

But you can do fancy cryptographic things where the server never sees the password and it's still secure, like the entire field of public key cryptography, Diffie-Hellman key exchange, etc.


But wouldn't random salting at least mitigate the disclosure of the password, which people might reuse elsewhere?

edit: considering someone eavesdrops on the connection, otherwise that's a whole different kind of vulnerability


But then you have to store the password instead of a hash of it because it would change each time thanks to the salt. A much worse situation.


You can store things as follows. Store the salted hashed password with its salt server side. When the user wants to log in, send them the salt and a random salt. The client side hashes the password + salt, then hashes that hash with the random value. What am I missing? Probably something, since this is something I rolled my own version of when I was a teenager, but it's not immediately obvious to me.


So let me make sure we're on the same page...

--

Server stores hashed-password, hash-salt, and random-salt.

Server sends hash-salt, and random-salt to client.

Client uses user password and hash-salt to generate hashed-password.

Client hashes hashed-password using random-salt.

Client sends hashed-hashed-password to server.

Server grabs the stored hashed-password and hashes it using the stored random-salt to check for a match against the client's hashed-hashed-password.

--

So the only thing this actually does is not share the text of the password that the user typed to the server. But at a technical level, now the hashed-password is the new "password".

Let's say the database is compromised. The attacker has the hashed-password. They make a login request to fetch the random-salt, hash their stolen hashed-password with it and send that to the server. Owned.

Along with being more complicated with no real gain, this also takes the hashing away from the server-side, which is a big negative, as the time that it takes to hash a password is a control method used to mitigate attacks.

Just send the plain-text password over HTTPS and hash it the moment it hits the server. There's no issue with this technique (as long as it's not logged!)
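For completeness, a sketch of that server-side flow, assuming the bcrypt package (Python):

    import bcrypt

    def store_password(cleartext: str) -> bytes:
        # At signup: hash immediately, keep only the salted hash.
        return bcrypt.hashpw(cleartext.encode(), bcrypt.gensalt())

    def check_password(cleartext: str, stored_hash: bytes) -> bool:
        # At login, with the password received over HTTPS; never log `cleartext`.
        return bcrypt.checkpw(cleartext.encode(), stored_hash)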


This is true. It does prevent an attacker from reusing a password they recover from your logs. But as others have pointed out a DB breach means all your users are compromised. Thank you.


No, random-salt is not stored permanently but generated at random by the server every time a client is about to authenticate. Alternatively a timestamp would be just as good.


The random-salt has to be stored, at least for the length of the authentication request, because the server needs to generate the same hashed-hashed-password as the client to be able to match and authenticate.

> Alternatively a timestamp would be just as good.

I don't see how that would work at all.

I also don't see the need to go any further in detail about how this scheme will not be better than the current best practices.

Never. Roll. Your. Own. Crypto. https://security.stackexchange.com/questions/18197/why-shoul...


A timestamp would work the same way it works in (e.g.) Google Authenticator.

Incidentally, I really resent how it's impossible to have a discussion of anything at all related to cryptography on HN without somebody bringing up the "never roll your own crypto" dogma.

If the ideas being proposed are bad, please point out why, don't just imply that everyone except you is too stupid to understand.

Edit:

I just reread your comment above and you did a perfectly good job of explaining why it's a bad idea, I must have misunderstood first time round: it's a bad idea because now the login credentials get compromised in a database leak instead of a MITM, which is both more common in practice and affects more users at once.

Sorry for saying you didn't explain why it is a bad idea.


You have to store the salt somehow, because you need to check that the salted, hashed password matches.


The problem with this scheme is that if database storing the salted hashed passwords is compromised, then an attacker can easily log in as any user. In a more standard setup, the attacker needs to send a valid password to log in, which is hard to reverse from the salted hashed password stored server-side. In this scheme, the attacker no longer needs to know the password, as they can just make a client that sends the compromised server hash salted with the random salt requested by the server.


Very true, I had not considered that possibility.


> Store the salted hashed password with its salt server side.

So now _this_ is effectively just "the password", that needs to be protected, even though you're storing it server side.

If an attacker has it, they can go through the protocol and auth -- I think, right? So you prob shouldn't be storing it in the db.

All you're doing is shuffling around what "the password" that needs to be protected is, still just a variation of the original attempt in top comment in this thread.

The reason we store hashed passwords in the db instead of the password itself is of course because the hashed password itself is not enough to successfully complete the "auth protocol", without knowing the original password. So it means a database breach does not actually expose info that could be used to successfully complete an auth. (unless they can reverse the hash).

I _think_ in your "protocol" the "original" password actually becomes irrelevant; the "salted hashed password with its salt" is all you need, so now _this_ is the thing you've got to protect. But you're storing it in the db, so we lose the very benefit of not storing the password in the db that we were hashing passwords for in the first place!

I guess your protocol protects against an eavesdropper better, but we generally just count on https/ssl for that; that's not what password hashing is for in the first place, of course. Which is what the OP is about: _plaintext_ rather than hashed passwords ended up stored and visible, when they never should have been either.

Crypto protocols are hard. We're unlikely to come up with a successful new one.


It's unclear to me how your random salt would work. From my understanding, you're suggesting something like:

register: send (username, user_salt, HMAC(user_salt, pwd))

login: send (username). retrieve user_salt. retrieve a server_salt generated randomly. send HMAC(server_salt, HMAC(user_salt, pwd))

But now your password is effectively just HMAC(user_salt, pwd), and the server has to store it in plaintext to be able to verify. Since plaintext passwords in the db are bad, this solution doesn't sound too attractive, unless you were suggesting something else.


Nope, that's what I was suggesting and I see now where it's weak.


"Since if someone has it, they can just send it to the server for auth" unless it's only good for a few moments (the form you type it into constantly polling for a new nonce).


The server would not be able to verify a changing hash without knowing the password


Or you could just use PAKE or SRP.


Not really... it's not that simple. You could use the time of day as a seed for the hash, for example. There are tradeoffs to be made, which is partly why they don't do it, but the story isn't as simple as "the hash becomes the password".


If the client knows to use the time of day then an attacker also does.

This is exactly the same: the seeded hash is the password.


Then how does the server check that it's valid?


The time of day is known to both the client and the server right? So they check to see that they get the same hash.


And how do you propose to do that when the clocks aren't synchronized? Clock drift is exceptionally common. Not everyone runs NTP or PTP; probably even fewer use PTP. On desktop/laptop clients it's typically configurable whether to attempt clock sync, and I've never seen the level of synchronization documented for PCs. High-precision PTP usually requires very expensive hardware, not something to be expected of home users or even a startup, depending on the industry.


Well how do you think TOTP works?


TOTP works by having huge margins of errors (minutes worth). The original post is suggesting using time of day as seed.


The point was you could do similarly here. Just have a margin of like 30 seconds (or whatever). I never said you have to do this to nanosecond precision.


But the password is only known to the client?


Only if the server only keeps around the hash -- which is why I said there are trade-offs to be made. The point I was making was that the mere fact that you're sending a hash does not trigger the "hash-becomes-password" issue; that's a result of secondary constraints imposed on the problem.


Makes sense, and then you're getting into something akin to SSH key pairs, and I know from experience that many users can't manage that especially across multiple client devices.


There are probably ways to make it reasonable UX, but they probably require built-in browser (or other client) support.

Someone in another part of this thread mentioned the "Web Authentication API" for browsers, which I'm not familiar with, but is possibly trying to approach this?


Web Auth API (authn) does try to make it usable.

It ties in with the Credential Management API (a way to have the browser store login credentials for a site; a much less heuristic-based approach than autocomplete on forms). The basic principle is: generate a key pair and pass the public key back to be sent to the server during registration; on login, a challenge value is generated for the client to sign. IIRC the JS code never sees the private key, only the browser sees it.


How does Web Auth API and Credentials Management API address the "manage across multiple client devices" issue?


Useless unless browsers get their act together and encrypt their autocomplete data. I would never trust any API loosely associated with it.


I believe you could use a construction like HMAC to make it so that during authentication (not password setting events) you don't actually send the token. But if someone is already able to MITM your requests, what are the odds they can't just poison the JavaScript to send it in plaintext back to them?


I think their goal is to still use https, but stop anything important from leaking if a sloppy server-side developer logs the full requests after TLS decryption (as Twitter did here).


Couldn't you hash it client-side, then hash it again server-side?


How is that any different to only hashing server-side?


Password reuse wouldn't be as big of an issue if each site hashed the password a different way


No, there fundamentally isn't, because you can't trust the client to actually be hashing a password. If all the server sees is a hash, the hash effectively is the password. If it's stolen, a hacker can alter their client to send the stolen hash to the server.


If a hash is salted with a domain it won't be usable on other websites. You should additionally hash the hash on the server, and if you store the client hashes, you can update the salts on next sign-in. A better question is why clients should be sending unhashed passwords to servers in the first place. https://medium.com/the-coming-golden-age/internet-www-securi...


This discussion is only relevant with an attacker that can break TLS. A hash that such an attacker couldn't reverse might be slow on old phones, so there is a tradeoff.

Also, hashed passwords shouldn't be logged either.



>That means the hashing needs to be done client side, probably with JavaScript. Is there any safe way to do that?

No [0,1...n]. Note that these articles are about encryption, but the arguments against javascript encryption apply to hashing as well.

Also consider that no one logs this stuff accidentally to begin with. If the entity controlling the server and writing the code wants to log the passwords, they can rewrite their own javascript just as well as they can whatever is on the backend. There's nothing to be done about people undermining their own code.

[0]https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...

[1]https://tonyarcieri.com/whats-wrong-with-webcrypto


> consider that no one logs this stuff accidentally to begin with

It's possible. You create an object called Foo (possibly serialized data like a protobuf, but any object), and you recursively dump the whole thing to the debug log. Then you realize, oh, when I access a Foo, sometimes I need this one field out of the User object (like their first name), so I'll just add a copy of User within Foo. You don't consider that the User object also contains the password as one of its members. Boom, you are now accidentally logging passwords.
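A sketch of how innocuous that looks in code, in Python (class and field names made up):

    import logging
    from dataclasses import dataclass

    logging.basicConfig(level=logging.DEBUG)

    @dataclass
    class User:
        first_name: str
        password: str      # nobody remembers this field is in here

    @dataclass
    class Foo:
        request_id: str
        user: User         # added later "just for the first name"

    foo = Foo("abc123", User("Alice", "hunter2"))
    logging.debug("handling %r", foo)  # the dataclass repr recursively includes user.password
    # DEBUG:root:handling Foo(request_id='abc123', user=User(first_name='Alice', password='hunter2'))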


Any user object on the server should only ever have the password when it is going through the process of setting or checking the password, and this should be coming from the client and not stored. So, your case of logging the user would only be bad at one of those times. Otherwise like in the case of a stored user you should just have a hashed password and a salt in the user object.


Ok.

Creating a User object that holds a password (much less a password in plaintext) seems next level stupid to begin with, but fair enough, I guess it could happen.


> Also consider that no one logs this stuff accidentally to begin with.

It can happen if requests are logged in middleware, and the endpoint developer doesn't know about it. It's still an extremely rookie mistake though, regardless of whether it was done accidentally or on purpose.


As others have stated, you'd just be changing the secret from <password> to H(<password>). The better solution is using asymmetric cryptography to perform a challenge-response test. E.g. the user sets a public key at sign-up, and to log in they must decrypt a nonce encrypted to them.
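A minimal sketch of that idea, using a signature rather than decryption and assuming Ed25519 from the Python cryptography package:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Signup: client generates a keypair and registers only the public key.
    client_key = Ed25519PrivateKey.generate()
    server_stored_pubkey = client_key.public_key()

    # Login: server issues a random nonce, client signs it, server verifies.
    nonce = os.urandom(32)                      # generated by the server
    signature = client_key.sign(nonce)          # happens on the client
    try:
        server_stored_pubkey.verify(signature, nonce)  # happens on the server
        print("authenticated")
    except InvalidSignature:
        print("rejected")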


Instead of trying to hash the password, just use SSL so the whole request is encrypted. But that doesn't fix servers accidentally logging passwords.

Maybe there could be a standard way to signal the beginning and end of a password string so logging software can redact that part.


You could do client ssl certs and just skip the password. It would be more work for the user though.


That would transfer it from something you know (a password) to something you have (a device with SSL cert installed) which are meant to protect against different problems.


Hmm, why should passwords (hashed or not) be stored in logs though? I don’t see a reason for doing that. You could unset them (and/or other sensitive data) before dumping requests into logs.


They shouldn’t. It was an unintentional bug


They shouldn't. It was a mistake.


Probably logging the HTTP/S requests, which included usernames & passwords in plaintext.


Wouldn't it be better to never even send the password to the server, but instead performing a challenge-response procedure? Does HTTP have no such feature built in?


So the us commander in chief can now be impersonated on twitter? I am shocked!


> implementing plans to prevent this bug from happening again

How does one do that?


Being simplistic, perhaps an automated test with a known password on account creation or login (e.g. "dolphins") and then a search for the value on all generated logs.


Is it there on the GitHub blog? Any links would be appreciated


From what I've read it only applied to a small number of users and they were notified by email.


For people that know more about web security than I: Is there a reason it isn't good practice to hash the password client side so that the backends only ever see the hashed password and there is no chance for such mistakes?


Realize the point of hashing the password is to make sure the thing users send to you is different than the thing you store. You'll still have to hash the hashes again on your end, otherwise anyone who gets access to your stored passwords could use them to log in.


In particular, the point is to make it so that the thing you store can't actually be used to authenticate -- only to verify. So if you're doing it right, the client can't just send the hash, because that wouldn't actually authenticate them.


But at least, with salt, it wouldn't be applicable to other sites, just one. Better to just never reuse a password though. Honestly sites should just standardize on a password changing protocol, that will go a long way towards making passwords actually disposable.


I don't think a password changing protocol would help make passwords disposable. Making people change passwords often will result in people reusing more passwords.


No the point is for password manager. The password manager would regularly reset all the password.... until someone accesses your password manager and locks you out of everything!


If by protocol you mean a standard, consistent API that can be used by password managers to update passwords automatically, then I completely agree.


Ultimately, what the client sends to the server to get a successful authentication _is_ effectively the password (whether that's reflected in the UI or not). So if you hash the password on the client side but not on the server, it's almost as bad as saving clear text passwords on the server.

You could hash it twice (once on the server once on the client) I suppose, but I'm not entirely sure what the benefit of that would be.


A benefit would be that under the sort of circumstance in the OP, assuming sites salted the input passwords, the hashes would need reversing in order to acquire the password and so reuse issues could be obviated. But I don't think that's really worth it when password managers are around.

I'm imagining we have a system where a client signs, and timestamps, a hash that's sent meaning old hashes wouldn't be accepted and reducing hash "replay" possibilities ... but now I'm amateurishly trying to design a crypto scheme ... never a good idea.


> meaning old hashes wouldn't be accepted and reducing hash "replay" possibilities

How would the server even verify the hash, then?


Verify the signature, check the time, use the hash as if it were the password to re-hash and compare with DB?


I think there is value in that. I would still be sure to hash it a second time on the server.

My guess is that this isn't popular because of the added client side complexity.

I'm also curious if anyone has considered or implemented this idea.


Ah answered elsewhere, if the client sends the hash and you log the hash then you still have a problem. The user should change passwords.

Although I think this still improves the situation if the password is reused. I.E. I can't use the logged hashed password on other sites.


Assuming that you are referring to browsers as the client here. One simple reason is that client-side data can always be manipulated, so it does not really make any difference. It might just give a false sense of safety but does not change much.

In case we are talking about multi-tier applications, where LDAP or AD is probably used to store the credentials, the back end is the one responsible for doing the hashing.


I can't think of a good reason not to hash on the client side (in addition to doing a further hash on the server side -- you don't want the hash stored on the server to be able to be used to log in, in case the database of hashed passwords is leaked). The only thing a bit trickier is configuring the work factor so that it can be done in a reasonable amount of time on all devices that the user is likely to use.

Ideally all users would change their passwords to something completely different in the event of a leak. But realistically this just doesn't happen -- some users refuse to change their passwords, and others just change one character. If only the client-side hash is leaked rather than the raw password, you can greatly mitigate the damage by just changing the salt at the next login.


If you don’t have control on the client, it’s a bad idea: Your suggestion means the password would be the hash itself, and it wouldn’t be necessary for an attacker to know the password.


For one, you expose your hashing strategy. Not that security by obscurity is the goal; but there's no real benefit. Not logging the password is the better mitigation strategy.


>Due to a bug

>Write passwords to a log

Security level - Twitter.


It's funny, I wonder if hearing about that github bug made them check if they had committed the same mistake... only to find that they did :-)


I think I, and everyone here, should check as well. If capable, security-minded companies can make such a mistake, so can you.


We schedule log reviews just like we schedule backup tests. (Similar stuff gets caught during normal troubleshooting, but reviews are more comprehensive.)

It only takes one debug statement leaking to prod - it has to be a process, not an event.


Why not automate this?

Create a user with an extremely unusual password and create a script that logs them in once an hour. Use another script to grep the logs for this unusual password, and if it appears fire an alert.

Security reviews are important but we should be able to automate detection of basic security failures like this.


It would also be a good idea to search for the hashed version of that user’s password. It’s really bad to leak the unencrypted password when it comes in as a param, but it’s only marginally better to leak the hashed version.


This only works if you automate every possible code path. If you're logging passwords during some obscure error in the login flow then an automated login very likely won't catch it.


True, but it is more effective than doing nothing.


But it's not a choice of doing this or nothing. It's a choice of doing this or something else. That something else may be a better use of your time.


Log review is an awesome idea. Do you mind divulging your workplace?


Log review is done for every single project at my workplace too (Walmart Labs). So I don't think this is a novel idea. And it does not stop there. Our workplace has a security risk and compliance review process which includes reviewing configuration files, data on disk, data flowing between nodes, log files, GitHub repositories, and many other artifacts to ensure that no sensitive data is being leaked anywhere.

Any company that deals with credit card data has to be very very sure that no sensitive data is written in clear anywhere. Even while in memory, the data needs to be hashed and the cleartext data erased as soon as possible. Per what I have heard from friends and colleagues, the other popular companies like Amazon, Twitter, Netflix, etc. also have similar processes.


It's novel to me; never worked anywhere that required high level PCI compliance or that scheduled log reviews. Adhoc log review, sure. I think it's a fantastic idea regardless of PCI compliance obligations.


We just realised the software I'm working on has written RSA private keys in the logs for years. Granted, it was at debug level and only when using a rarely-used functionality, but still.


For whatever it's worth, I do security assessment (pentesting and the like).

Checking logs for sensitive data is a routine test when given access, at least.

Being given that access is disappointingly not routine though.


We also do log reviews, but 99% of the time they simply complain about the volume rather than the contents.

Do you enable debug logging in production? In our setup we log at info and above by default, but then have a config setting that lets us switch to debug logging on the fly (without a service restart).

This keeps our log volume down, while letting us troubleshoot when we need it. This also gives us an isolated time of increased logging that can be specifically audited for sensitive information.


Yep, glad I read this thread. We were making the same simple mistake.


We aren't.

Now.

(We caught ourselves doing it 4-5 months back, and went through _everything_ checking... It was only a random accident that brought it to the attention of anyone who bothered to question it, too... Two separate instances by different devs of 'if (DEBUG_LEVEL = 3){ }' instead of == 3 - both missed by code reviews too...)


This is why you should turn on compiler warnings and heed them. It would have caught this.


And consider “Yoda Notation”[0], which some people find annoying, but I found an easy hurdle to clear:

  if ( 3 = DEBUGLEVEL ) 
wouldn’t pass the parser because you can’t assign to an rvalue.

[0] https://en.wikipedia.org/wiki/Yoda_conditions


I know it's irrational but I really dislike Yoda notation. Every time I encounter one while reading code I have to take a small pause to understand them, I don't know why. My brain just doesn't like them. I don't think I'm the only one either, I've seen a few coding styles in the wild that explicitly disallow them.

Furthermore any decent modern compiler will warn you and ask to add an extra set of parens around assignments in conditions so I don't really think it's worth it anymore. And of course it won't save you if you're comparing two variables (while the warning will).


I don't think "Yoda notation" is good advice. How do you prevent mistakes like the following with Yoda notation?

  if ( level = DEBUGLEVEL )
When both sides of the equality sign are variables, the assignment will succeed. Following Yoda notation provides a false sense of security in this case.

As an experienced programmer I have written if-statements so many times in life that I never ever, even by mistake, type:

  if (a = b)
I always type:

  if (a == b)
by muscle memory. It has become second nature. Unless of course I really mean it, like:

  if ((a = b) == c)


FWIW I'm pretty sure both the devs who did this and both the other devs who code reviewed it would claim the same thing...

Like other people are saying - the toolchain should have caught this. And it should have, I don't remember how it'd been disabled...


One way to not write any bugs is to not write any code.

If you must write code, errors follow, and “defence in depth” is applicable. Use an editor that serves you well, use compiler flags, use your linter, and consider Yoda Notation, which catches classes of errors, but yes, not every error.


One of the sides is (should be) a CONSTANT. And you can't assign a value to a constant.


Why should one of the sides be a constant? There is plenty of code where both sides are variables.


What about this?

if (env('LOG_LEVEL') = 3) {}

Would throw and everything would be ok. Otherwise, use constants.

Also, lint against assignment in if/while conditions. If you want to assign in those conditions, disable linting for the line and make it explicit.


And if you write F# or Java code?


These kinds of issues are excellent commercials for why the strictness of a language like F# (or OCaml, Haskell, etc), is such a powerful tool for correctness:

1) Outside of initialization, assignment is done with the `<-` operator, so you're only potentially confused in the opposite direction (assignments incorrectly being boolean comparisons).

2) Return types and inputs are strictly validated so an accidental assignment (returning a `void`/`unit` type), would not compile as the function expects a bool.

3) Immutable by default, so even if #1 and #2 were somehow compromised the compiler would still halt and complain that you were trying to write to something unwriteable

Any of the above would terminally prevent compilation, much less hitting a code review or getting into production... Correctness from the ground up prevents whole categories of bugs at the cost of enforcing coding discipline :)


This is why I love F#, but if you jump between F# and C# your muscle memory will suffer.


In Java you usually use `.equals()` to test equality, or if your argument is a boolean value:

    if (myVar) {
        //
    }
Instead of `myVar == true/false`.

The accidental assignment is much less common due to the way equality is tested in Java.

Also, `null` comparisons being assigned will fail to compile (assuming var is a String here):

    TestApp.java:6: error: incompatible types: String cannot be converted to boolean
        if (var = null) {


But in Java it’s much easier to make the error of using == instead of equals if you're always jumping between languages.


Sure, but that is such a common mistake that all Java IDE's warn you when you try to use == for Strings and normal non-number objects.


For java code, use final so that you have constants.


Note these are only constant pointers. Your data is still mutable if the underlying data structure is mutable, (e.g. HashMap). Haven't used Java in a few years, but I made religious use of final, even in locals and params.


Good point regarding mutable data. But since we were talking about loglevels, I don't think it's a problem there.


Yep - I pointed out that I used to do this in Perl back in '95 or so. At least one of the devs wasn't born then, none of them had ever used Perl.

(I'm not even sure how they'd ended up with a Grails configuration that'd let them do this anyway...)


Thanks for sharing. I am a less experienced programmer and have never seen this before. The name is so wonderful.


Apt day for discussing Yoda condition :)


Yoda makes code more confusing to read at a glance so I would recommend against it.


I don't really see how unless you've never actually read imperative code before; either way you need to read both sides of the comparison to gauge what is being compared. I'm dyslexic and don't write my comparisons that way and still found it easy enough to read those examples at a glance.

But ultimately, even if you do find it harder to parse (for whatever reason(s)) that would only be a training thing. After a few days / weeks of writing your comparisons like that I'm sure you'll find is more jarring to read it the other way around. Like all arguments regarding coding styles, what makes the most difference is simply what you're used to reading and writing rather than actual code layout. (I say this as someone who's programmed in well over a dozen different languages over something like 30 years - you just get used to reading different coding styles after a few weeks of using it)


Consistency is king.

Often when I glance over code to understand what it is doing I don't really care about values. When scanning from left to right it is easier when the left side contains the variable names.

Also I just find it unnatural if I read it out loud. It is called Yoda for a reason.


But again, none of those problems you've described are unteachable. Source code itself doesn't read like how one would structure a paragraph for human consumption. But us programmers learn to parse source code because we read and write it frequently enough to learn to parse it. Just like how one might learn a human language by living and speaking in countries that speak that language.

If you've ever spent more than 5 minutes listening to arguments and counterarguments regarding Python whitespace vs C-style braces - or whether the C-style brace should append your statement or sit on its own line - then you'd quickly see that all these arguments about coding styles are really just personal preference based on what that particular developer is most used to (or pure aesthetics about what looks prettiest, but that's just a different angle of the same debate). Ultimately you were trained to read

    if (variable == value)
and thus equally you can train yourself to read

    if (value == variable)
All the reasons in the world you can't or shouldn't are just excuses to avoid retraining yourself. That's not to say I think everyone should write Yoda-style code - that's purely a matter of personal preference. But my point is arguing your preference as some tangible issue about legibility is dishonest to yourself and every other programmer.


In this specific case DEBUGLEVEL should be a constant anyways, and thus assignment should fail, no? Also kind of denoted by being all caps.


Conventions cause assumptions.


There are always assumptions being made, no matter what you do. But "uppercase -> constant" is such a generic and cross-platform convention that it should always be followed. This code should never have passed code review for this glitch alone.


Which language would stop/warn you assigning the value of a constant to a variable? Doesn't "var = const" just work in most languages?


It's yoda condition, so const = var would fail.


Yes! I've been doing this in C for years. It's a little weird to read at first, but every now and then it really saves you.


Yeah, exactly. This error shouldn't ever happen, period. All modern development tools give big fat warnings when you do this.


“Should” is a bad word. If you are basing a conclusion off of a “should,” you are skating on thin ice.


This is just one of the many reasons why I like Python.

    > if a = 3:
           ^
    SyntaxError: invalid syntax


Rust has an interesting take on that, it's not a syntax error to write "if a = 1 { ... }" but it'll fail to compile because the expression "a = true" returns nothing while "if" expects a boolean so it generates a type check error.

Of course a consequence of that is that you can't chain assignments (a = b = c), but it's probably a good compromise.


Well in Rust you couldn't have = return the new value in general anyway, because it's been moved. So even if chained assignment did work, it'd only work for Copy types, and really, a feature that only saves half a dozen characters, on a fraction of the assignments you do, that can only be used a fraction of the time, doesn't seem worth it at all.


People (at least me) ignore warnings quite often; they aren't a safe haven if you ask me.


Hey no problem, just add -Werror to your compiler flags (C/C++/Java) or '<TreatWarningsAsErrors>true</TreatWarningsAsErrors>' to your csproj (C#).


This! Treat every warning as a failure, ideally in your CI system so people can't forget, and this problem (ignoring warnings..) goes away.

You will have a better, more reliable, and safer codebase once you clean up the legacy mess and turn this on..


I agree. Having worked in a project with warnings as errors on (c++) I found it annoying at first but it made me a better coder in the long run.

Plus you get out of the habit of not reading output from the compiler because there are so many warnings...


Unless you follow a zero warning policy they are almost useless. If you have a warning that should be ignored add a pragma disable to that file. Or disable that type of warning if it's too spammy for your project.


I'm curious, how often do you actually need to print out the password in a development context?


I've been working on authentication stuff for the last two weeks and the answer is "more than you'd like".

But luckily it's something we cover in code reviews and the logging mechanism has a "sensitive data filter" that defaults to on (and has an alert on it being off in production.)


seems like a bug in your platform


What developer in their right mind would ever log a password in the first place? Are we devolving as a profession?


Can be more accidental. e.g. dumping full POST data in a more generic way (e.g. on exceptions) that happens to also be applied on the login page.


Wasn't this GitLab, not GitHub?


From the email I received: "During the course of regular auditing, GitHub discovered that a recently introduced bug exposed a small number of users’ passwords to our internal logging system, including yours. We have corrected this, but you'll need to reset your password to regain access to your account."


"[We] are implementing plans to prevent this bug from happening again" sure makes it sound like this bug is still happening. Should we wait a couple of days before changing passwords? Will it end up in this log right now, just like the old one?


That sounds more like "We're adding a more thorough testing and code-review process for our password systems to prevent developers from accidentally logging unhashed passwords in the future".


No, it sounds like a reasonable bugfixing strategy. Identify the bug, identify the fastest way to resolve it, then once it's fixed figure out how to ensure it never happens again, and what to do if it does.


I think you read "prevent this bug from happening again" to mean "prevent this particular problem from happening one more time", while the blogpost probably means something like "prevent this class of bug from occurring in the future"


~~Sounds more like "we fixed this bug, and will ignore the processes that led to it happening" bullshit to me.~~


And this comment sounds more like "DAE hate twitter."

Their response is acceptable and textbook. Doesn't really seem like the appropriate place to wage the battle.


Yeah, bad kneejerk response on my part. Sorry.



