Andre LaMothe's prior book, Tricks of the Game Programming Gurus, was literally life-changing for me.
I was in my late junior or early senior year of high school when it came out. My stepfather had a 386/20 and later a 486/33 at home, along with a Borland C compiler and a generic 700-page "Learn C" book, and I had worked all the way through the book. But I couldn't for the life of me figure out how in the world to bridge the gap between the extremely slow, "high res" 16-color graphics libraries that came with the compiler, on the one hand, and what Wolfenstein and Doom were doing, on the other, both of which I was utterly entranced by.
And then I saw LaMothe's book on a random shopping trip to... Software Etc, I think? I'd never seen anything like it. And I knew I had to have it, immediately.
After getting that book, I dove headlong into relatively fast VGA C programming in mode 13h (320x200x256). I spent the afternoons of my senior year of high school writing relatively fast texture mapping routines and trying to get full-screen, 30+ fps interactive scenes and levels running, which I think I mostly did. I had to write my own paint program, too, for 256-color palettized textures. It was thrilling.
Thanks largely to my time with that book, when I was introduced to the internet during the first week of a Computer Science program at college, I was primed to dive into all the awesome open source C game libraries and tools (like Allegro and DJGPP) that I found online, and I was making commercial games and working in the guts of the Quake and Quake 2 code bases two short years later. (The book and then the internet were not, however, great for my college career.)
I know there are corny parts of the book, and maybe things that weren't as cutting edge as they claimed to be. It doesn't teach you how to actually write Doom, of course.
But prior to the widespread rollout of the internet, it's hard to get across just how inaccessible most of the knowledge in that book was, at least for a high school kid like me. It really was like turning on a light switch when I got it. Sometimes something is just in the right place at the right time for someone, and that's what that book was for me.
"So do we all have to keep reinventing these wheels, but only after a production outage?"
Lotta cynical replies, and mine is going to sound like one of them at first, but I actually mean it in a relatively deep and profound way: Time is hard. You can even see it in pure math, where Logic is all fun and everyone's having a great time being clever and making all sorts of exciting systems and inferences in those systems... and then you try to build Temporal Logic and all the pretty just goes flying out the door.
Even "what if the reply takes ten seconds" is the beginning. By the very nature of the question itself I can infer the response is expected to be small. What if it is large? What if it might legitimately take more than ten seconds to transfer even under ideal circumstances, but you need to know that it's not working as quickly as possible? Is your entry point open to the public? How does it do with slowloris attacks [1]? What if your system simply falls behind due to lack of resources? The difference between 97% capacity and 103% capacity in your real, time-bound systems can knock your socks off in ways you'd never model in an atemporal system that ignored how long things take to happen.
Programming would be grungy enough even if we didn't have these considerations, and I'm not even scratching the surface on the number of ways that adding time as a real-world consideration complicates things. Our most common response is often just to ignore it. This is... actually often quite rational: a lot of the failure cases can be feasibly addressed by various human interventions. E.g., while writing your service to be robust to "a slow internal network" might be a good idea, there's also a sense in which the only real solution is to speed up the internal network. But still, time is always sitting there crufting things up.
One of my favorites is the implicit dependency graph you accidentally start creating once your business systems guys start doing "daily processes" of this and that. We're going to do a daily process to run the bills, but that depends on the four daily dumps that feed the billing process all having been done first. By the way, did you check that the dumps are actually done, and not still in progress as you're trying to use them? And those four daily dumps each have some other daily processes behind them, and if you're not very careful you'll create loops in those processes, which introduces all sorts of other problems... in the end, a set of processes that wouldn't be too difficult to deal with in perfect atemporal logic land becomes something very easy to sleepwalk into a nightmare with: your dump is scheduled to run between 2:12 and 2:16, and it damned well better not fail for any reason, in your control or out of it, or we're not doing billing today. (Or even the nightmare world where your dump is scheduled to run after 3pm but before 1pm every day... that is, these dependency graphs don't have to get very complicated before literally impossible constraints start to appear if you're not careful!) Trying to explain this to a large number of teams at every level of engineering capability (frequently going all the way down to "a guy who distrusts and doesn't like computers and who, against his will, maintains a spreadsheet, which is also one of the vital pillars of our business") is the sort of thing that may make you want to consider becoming a monk.
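To show how little it takes before those literally impossible constraints appear, here's a toy sketch in Python (the job names, run windows, and the two checks are all invented for illustration): it looks for loops in the dependency graph of daily processes, and for run windows that contradict the ordering the dependencies require.

    from graphlib import TopologicalSorter, CycleError  # stdlib, Python 3.9+

    # Invented daily jobs: each lists what it depends on and a run window given
    # as (earliest start, latest start) in minutes after midnight.
    jobs = {
        "billing":        {"deps": {"dump_a", "dump_b", "dump_c", "dump_d"}, "window": (132, 136)},  # 02:12-02:16
        "dump_a":         {"deps": {"extract_orders"}, "window": (60, 120)},
        "dump_b":         {"deps": set(),              "window": (60, 120)},
        "dump_c":         {"deps": set(),              "window": (60, 120)},
        "dump_d":         {"deps": set(),              "window": (200, 240)},  # 03:20-04:00... oops
        "extract_orders": {"deps": set(),              "window": (30, 90)},
    }

    # 1) Loops: a cycle among "daily processes" means no valid order exists at all.
    try:
        order = list(TopologicalSorter({k: v["deps"] for k, v in jobs.items()}).static_order())
    except CycleError as exc:
        raise SystemExit(f"impossible schedule, dependency cycle: {exc.args[1]}")

    # 2) Contradictory windows: if a job's window closes before a dependency's
    #    window even opens, the job would have to run before the thing it needs
    #    -- the "after 3pm but before 1pm" kind of impossibility (and that's
    #    ignoring durations, which only make real schedules tighter than this).
    for name in order:
        for dep in jobs[name]["deps"]:
            if jobs[name]["window"][1] < jobs[dep]["window"][0]:
                print(f"{name} must start by minute {jobs[name]['window'][1]}, "
                      f"but its dependency {dep} cannot start before minute {jobs[dep]['window'][0]}")

Run as-is, the cycle check passes but the window check flags billing against dump_d, which is exactly the kind of thing nobody notices until the morning billing doesn't go out.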