
We schedule log reviews just like we schedule backup tests. (Similar stuff gets caught during normal troubleshooting, but reviews are more comprehensive.)

It only takes one debug statement leaking to prod - it has to be a process, not an event.



Why not automate this?

Create a user with an extremely unusual password and create a script that logs them in once an hour. Use another script to grep the logs for this unusual password, and if it appears, fire an alert.

Security reviews are important but we should be able to automate detection of basic security failures like this.
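
A minimal sketch of that canary check; the login URL, log paths, and alert step are all hypothetical placeholders you'd swap for your own environment, and it assumes Python 3 with grep available:

    # canary_check.py -- log in a canary user, then grep the logs for its password.
    # URL, log paths, and the alert mechanism below are hypothetical; adjust to your setup.
    import subprocess
    import urllib.parse
    import urllib.request

    CANARY_USER = "canary-7f3a"
    CANARY_PASSWORD = "zq9!VeryUnusualString!x4"   # should never legitimately appear in logs
    LOG_FILES = ["/var/log/app/app.log", "/var/log/nginx/access.log"]

    def canary_login():
        # Hit the normal login endpoint so the credentials flow through the real code path.
        data = urllib.parse.urlencode({"user": CANARY_USER, "password": CANARY_PASSWORD}).encode()
        urllib.request.urlopen("https://example.internal/login", data=data, timeout=10)

    def password_leaked():
        # grep -lF exits 0 if the fixed string is found in any of the files, 1 otherwise.
        result = subprocess.run(["grep", "-lF", CANARY_PASSWORD, *LOG_FILES],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout

    if __name__ == "__main__":
        canary_login()
        leaked, files = password_leaked()
        if leaked:
            # Replace this print with PagerDuty/Slack/email in practice.
            print("ALERT: canary password found in: " + files)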


It would also be a good idea to search for the hashed version of that user’s password. It’s really bad to leak the unencrypted password when it comes in as a param, but it’s only marginally better to leak the hashed version.
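
If the application uses a fast unsalted digest you can precompute it and add it to the grep patterns; with salted schemes like bcrypt the value isn't predictable up front, so you'd grep for the hash as stored in the user table instead. A rough sketch of the unsalted case:

    # Extend the canary check with common unsalted digests of the canary password.
    # (With salted schemes like bcrypt, grep for the hash as stored in your user table.)
    import hashlib

    def digest_patterns(password: str):
        pw = password.encode()
        return [hashlib.md5(pw).hexdigest(),
                hashlib.sha1(pw).hexdigest(),
                hashlib.sha256(pw).hexdigest()]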


This only works if you automate every possible code path. If you're logging passwords during some obscure error in the login flow, then an automated login very likely won't catch it.


True, but it is more effective than doing nothing.


But it's not a choice of doing this or nothing. It's a choice of doing this or something else. That something else may be a better use of your time.


Log review is an awesome idea. Do you mind divulging your workplace?


Log review is done for every single project at my workplace too (Walmart Labs). So I don't think this is a novel idea. And it does not stop there. Our workplace has a security risk and compliance review process which includes reviewing configuration files, data on disk, data flowing between nodes, log files, GitHub repositories, and many other artifacts to ensure that no sensitive data is being leaked anywhere.

Any company that deals with credit card data has to be very, very sure that no sensitive data is written in the clear anywhere. Even while in memory, the data needs to be hashed and the cleartext erased as soon as possible. From what I've heard from friends and colleagues, other big companies like Amazon, Twitter, Netflix, etc. have similar processes.
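
As a rough illustration of the "hash early, erase the cleartext early" idea, here's a sketch; Python gives no real guarantees against intermediate copies, so treat this as an illustration rather than a PCI control:

    # Hash a card number held in a mutable buffer, then overwrite the buffer.
    # Caveat: Python may still leave copies around; this only shows the intent.
    import hashlib

    def hash_and_wipe(pan_buffer: bytearray) -> str:
        digest = hashlib.sha256(pan_buffer).hexdigest()
        for i in range(len(pan_buffer)):   # zero the cleartext as soon as it's used
            pan_buffer[i] = 0
        return digest

    card = bytearray(b"4111111111111111")  # test PAN; never log or print this
    token = hash_and_wipe(card)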


It's novel to me; I've never worked anywhere that required high-level PCI compliance or that scheduled log reviews. Ad hoc log review, sure. I think it's a fantastic idea regardless of PCI compliance obligations.


We just realised the software I'm working on has written RSA private keys in the logs for years. Granted, it was at debug level and only when using a rarely-used functionality, but still.


For whatever it's worth, I do security assessments (pentesting and the like).

Checking logs for sensitive data is a routine test, at least when we're given access.

Being given that access is disappointingly not routine though.


We also do log reviews, but 99% of the time they simply complain about the volume rather than the contents.

Do you enable debug logging in production? In our setup we log at info and above by default, but then have a config setting that lets us switch to debug logging on the fly (without a service restart).

This keeps our log volume down while letting us troubleshoot when we need to. It also gives us an isolated window of increased logging that can be specifically audited for sensitive information.
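
One common way to do that flip without a restart is a signal handler (a config watcher or admin endpoint works too); a minimal sketch using Python's standard logging, where the trigger mechanism is just an assumption:

    # Toggle between INFO and DEBUG at runtime with a signal, no restart needed.
    # Send `kill -USR1 <pid>` to flip the level (Unix only).
    import logging
    import signal

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("app")

    def toggle_debug(signum, frame):
        root = logging.getLogger()
        new_level = logging.DEBUG if root.level == logging.INFO else logging.INFO
        root.setLevel(new_level)
        log.warning("log level switched to %s", logging.getLevelName(new_level))

    signal.signal(signal.SIGUSR1, toggle_debug)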



