Hmm, I don't have much in the way of links; but I can brain-dump some of my accrued personal opinions, for what they're worth:
* Relatively early in your project, incorporate a "real" logging system, and start leaning on it. In C++, I have most recently been using 'spdlog'[1] fairly happily; a minimal setup sketch follows this item. It accumulates messages in memory and writes them out from a dedicated logging thread at a configurable frequency. That approach helps keep logging from getting in the way of your main program's performance, but it has downsides (see below).
** You want logging to be easy to add wherever you need it. It should be about as easy as adding a literal "printf" statement; minimize the barrier to making use of it.
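For concreteness, a minimal async spdlog setup along those lines might look something like this (logger name, path, queue size, and flush interval are just placeholders):

    #include <chrono>
    #include "spdlog/spdlog.h"
    #include "spdlog/async.h"
    #include "spdlog/sinks/basic_file_sink.h"

    void setup_logging() {
        spdlog::init_thread_pool(8192, 1);  // queue capacity (messages), one logging thread
        auto log = spdlog::basic_logger_mt<spdlog::async_factory>("main", "logs/main.log");
        spdlog::flush_every(std::chrono::seconds(3));  // periodically force buffered output to disk
        log->info("engine started");
    }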
* Be able to slice/dice your logs by category of information, not just debug/warning/error. I find myself often making special "SYSTEM_XYZ_LOG(...)" macros, for different systems. Then, with a compiler flag I can enable/disable output for different facets of the program. Some of those I might leave on, and others are so specialized (or performance-impacting) that I only enable them when needed.
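A sketch of what I mean, with made-up names (ENABLE_PHYSICS_LOG, PHYSICS_LOG) and assuming a "physics" logger was registered elsewhere:

    #include "spdlog/spdlog.h"

    // Build with -DENABLE_PHYSICS_LOG to turn this facet on; otherwise the
    // statements compile to nothing.
    #ifdef ENABLE_PHYSICS_LOG
    #define PHYSICS_LOG(...) spdlog::get("physics")->info(__VA_ARGS__)
    #else
    #define PHYSICS_LOG(...) (void)0
    #endif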
* It can be nice to have several log files, one per system (eg, in a video game: graphics messages go to one file, physics to another, UI to another, etc). However, it can also be useful to have a "combined" log that shows everything intermingled, so you can see the overall timing of things without having to correlate timestamps across files. Ideally your logging system supports having both.
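With spdlog you can get both by giving each system's logger its own file sink plus a shared "combined" sink, roughly like this (file names are placeholders):

    #include <memory>
    #include <string>
    #include "spdlog/spdlog.h"
    #include "spdlog/sinks/basic_file_sink.h"

    void setup_per_system_loggers() {
        // One shared sink that sees everything, interleaved.
        auto combined = std::make_shared<spdlog::sinks::basic_file_sink_mt>("logs/combined.log");
        for (std::string name : {"graphics", "physics", "ui"}) {
            auto own = std::make_shared<spdlog::sinks::basic_file_sink_mt>("logs/" + name + ".log");
            auto logger = std::make_shared<spdlog::logger>(name, spdlog::sinks_init_list{own, combined});
            spdlog::register_logger(logger);
        }
    }

The "_mt" sinks are the thread-safe variants, which matters since the combined sink is written to by every system.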
* Along with the above, develop an ergonomic way of viewing various logs at once. My approach is pretty simplistic: I have a bunch of XTerm windows, and some of them are displaying logs, sometimes filtered variously (eg: `tail -F foo.log | grep -C5 interesting-pattern`). You could get fancier with tmux or something.
** Aside: the general skill of being able to wrangle lots of data from various text files is a good one to develop — not just for logging. I find this much easier to do in a Unix-style environment than on Windows.
* There is a tradeoff between logging synchronously (eg: writing to STDERR, unbuffered) and accumulate-then-flush (such as a buffered STDOUT, or a logger-thread approach like spdlog's). With the latter, if your program crashes, you might not see the most recent (and thus most relevant!) log messages. I usually have some "write to stderr right friggin' now" function for special cases when I need it. However, if the crash happens while you are running under a debugger, you might be able to step the debugger along and let the logging thread dump out whatever is not yet flushed to disk. I have had good success with that; I just have to remember to do it.
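The "right now" helper can be as dumb as this sketch (log_now is a made-up name):

    #include <cstdio>

    // Bypasses any buffering/queueing; for the rare "I need to see this even
    // if we crash on the next line" situation.
    inline void log_now(const char* msg) {
        std::fprintf(stderr, "%s\n", msg);
        std::fflush(stderr);
    }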
* When you generate a lot of logging messages, there is a tradeoff between "flush to disk to free up memory" and "just allocate more memory". With the latter approach, if you are generating huge amounts of logs, you may end up using too much memory on the system; but in some cases it is faster. I've had good enough luck with the former approach and a "plenty big" memory buffer.
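If you use spdlog's async logger, this tradeoff shows up as the queue size plus the overflow policy: the queue is a fixed size (so memory is capped), and when it fills you either block the caller or drop the oldest messages. Roughly (names and paths are placeholders):

    #include <memory>
    #include "spdlog/spdlog.h"
    #include "spdlog/async.h"
    #include "spdlog/sinks/basic_file_sink.h"

    void setup_bounded_logging() {
        spdlog::init_thread_pool(32768, 1);  // "plenty big" but bounded queue
        auto sink = std::make_shared<spdlog::sinks::basic_file_sink_mt>("logs/app.log");
        auto logger = std::make_shared<spdlog::async_logger>(
            "app", sink, spdlog::thread_pool(),
            spdlog::async_overflow_policy::block);  // or overrun_oldest, to drop rather than stall
        spdlog::register_logger(logger);
    }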
* If you are logging across multiple machines, you usually need to correlate events via timestamps. So, make sure your clocks are synchronized, or at the very least be aware of this issue (can be a pain).
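One small thing that helps (it does not fix clock skew, but it removes timezone confusion when comparing files): log timestamps in UTC with sub-second precision. With spdlog I believe that looks roughly like the following; double-check against your version:

    #include "spdlog/spdlog.h"

    void setup_timestamps() {
        // ISO-ish UTC timestamps with milliseconds; %n is the logger name, %v the message.
        spdlog::set_pattern("[%Y-%m-%dT%H:%M:%S.%e] [%n] %v", spdlog::pattern_time_type::utc);
    }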
* Don't be afraid to allow big sizes for your log files, at least when debugging. Storage is cheap, grep is fast. Depends on the scope of your project, of course.
* There's a difference between the logging for your codebase in general, and logging for very specific debugging purposes. I find it handy to have a special log for the latter case, which is generally empty; when I am investigating something, I can write to that log and monitor it separately, so I don't have to sift through a lot of noise. It is just the "debugging the current problem" log, and I delete those log statements once I am done. This is essentially the same as using ad-hoc "printf" statements, but through the real logging system, with the benefits that affords.
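A sketch of that, with a made-up "scratch" logger and placeholder values:

    #include "spdlog/spdlog.h"
    #include "spdlog/sinks/basic_file_sink.h"

    void investigate(int frame, double dt) {  // hypothetical spot I'm poking at
        // Normally nothing writes here, so the file stays empty until I add
        // temporary statements like this one.
        static auto scratch = spdlog::basic_logger_mt("scratch", "logs/scratch.log");
        scratch->info("suspect values: frame={} dt={}", frame, dt);  // delete when done
    }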
* Ideally, your log system should not construct its message unless it is actually wanted. Eg: if your log level is "warnings or worse", then LOG_INFO("foo={}", someValue) should not perform any string building work. This seems fairly common today, but some logging APIs don't get this right.
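spdlog checks the level before doing any formatting, so it gets this right; if you are wrapping something that doesn't, the general pattern is just an explicit guard (LOG_INFO and g_logger are made-up names here):

    #include <memory>
    #include "spdlog/spdlog.h"

    extern std::shared_ptr<spdlog::logger> g_logger;  // made-up global for the sketch

    // No formatting work happens unless the level is actually enabled.
    #define LOG_INFO(...)                                      \
        do {                                                   \
            if (g_logger->should_log(spdlog::level::info))     \
                g_logger->info(__VA_ARGS__);                   \
        } while (0)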
* In C/C++, logging should go through a macro, so it can be compiled out (or compiled out beyond a given severity level) depending on your build. spdlog supports this, and it is fairly easy to write your own, typically.
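With spdlog that's the SPDLOG_ACTIVE_LEVEL mechanism: statements below that level compile away entirely. For example:

    // Define the compile-time level before including spdlog.
    #define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_INFO
    #include "spdlog/spdlog.h"

    void example(int w, int h) {
        SPDLOG_DEBUG("per-frame detail that costs nothing in this build");  // compiled out
        SPDLOG_INFO("resized to {}x{}", w, h);  // compiled in, still filtered by runtime level
    }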
* A nice-to-have feature is being able to log only the first N occurrences of a duplicate message, when desired. Sometimes (especially when things go wrong), your program will produce an outrageous amount of the same log message, which just drowns out the useful information and can use a lot of memory. Some logging systems have explicit support for this concept. You can also roll your own (eg: by adding a timer or counter guarding some logging).
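A roll-your-own version can be a tiny counter-guarded macro (LOG_FIRST_N is a made-up name):

    #include <atomic>
    #include "spdlog/spdlog.h"

    // Logs only the first n times a given call site is hit.
    #define LOG_FIRST_N(n, ...)                                         \
        do {                                                            \
            static std::atomic<int> _seen{0};                           \
            if (_seen.fetch_add(1, std::memory_order_relaxed) < (n))    \
                spdlog::warn(__VA_ARGS__);                              \
        } while (0)

    // Usage: LOG_FIRST_N(10, "texture {} missing, using fallback", texName);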
* Another nice-to-have feature is a notion of pushing/popping scopes for logging (eg: log4j's "NDC" concept). In your code, this would correspond to lexical scopes. In the log statements, it would come across as some sort of "toplevel>outer>inner>" prefix. This is one thing I wish spdlog had; a roll-your-own sketch follows below.
** Aha! In digging up the docs for NDC, I found this[2], which does mention a chapter for your reading list: "Patterns for Logging Diagnostic Messages", part of the book "Pattern Languages of Program Design 3" edited by Martin et al. I cannot vouch for it.
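Here's the roll-your-own sketch: a thread-local stack of scope names plus an RAII guard, with the current stack prepended to each message (all names here are made up):

    #include <string>
    #include <vector>

    thread_local std::vector<std::string> t_log_scopes;

    // Push a scope name on construction, pop it on destruction.
    struct LogScope {
        explicit LogScope(std::string name) { t_log_scopes.push_back(std::move(name)); }
        ~LogScope() { t_log_scopes.pop_back(); }
    };

    inline std::string scope_prefix() {
        std::string out;
        for (const auto& s : t_log_scopes) { out += s; out += '>'; }
        return out;
    }

    // Usage inside some function:
    //   LogScope scope("physics");
    //   LOG_INFO("{}step dt={}", scope_prefix(), dt);  // -> "toplevel>physics>step dt=..."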
And, as mentioned variously in this thread, logging is just one tool in the toolbox. Don't forget about performance counters, event traces, writing good tests, the debugger, etc. Good logging has some up-front cost, but it is generally worth it, IME.