> Code isn't worth optimizing until it either becomes a bottleneck
This is a well-known fallacy. There's no guarantee your performance problems have a single bottleneck. In fact, more often than not, the entire program is poorly thought out, and the only way to fix it is a full rewrite, with all the risks that entails.
> When I was teaching, I often used this metaphor: suppose you’re writing some system, you decide that you should avoid premature optimization, so you take the usual advice and build something simple that works. In this metaphor let’s pretend that your whole program is a sort. So you choose a simple sort that works. Bubble Sort. You try it out and it functions perfectly. Now remember Bubble Sort is a metaphor for your whole program. Now we all know that Bubble Sort is crap, so you have to eventually change to Quicksort. Hoare likes you more now. So how do you get there? Do you just, you know, “tune” the Bubble Sort? Of course not, you’re screwed, you have to throw it all out and do it over. OK, except the greater-than test, you can keep that. The rest is going in the trash.
> But you got valuable experience, right? No, you didn’t. Anything you learned about the Bubble Sort is worthless. Quicksort has entirely different considerations.
> The point here is that a small bit of analysis up front could have told you that you needed a O(n*lg(n)) sort and you would have been better served doing that up front. This does not mean you have to microtune the Quicksort up front. Maybe down the road you’ll discover that part of the sort (remember this is a metaphor) should be written in ASM because it’s just that important. Maybe you won’t. There will be time for that. But getting the right key choices up front was not premature. There is a suitable amount of analysis that is appropriate at each stage of your product.
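To make the asymptotic gap in the metaphor concrete, here's a minimal sketch (my own, not from the article) counting comparisons for a bubble sort versus a simple quicksort. Note that the two implementations share essentially nothing beyond the greater-than/less-than test, which is the article's point: you can't "tune" one into the other.

```python
import random

def bubble_sort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def quicksort(xs):
    """O(n lg n) on average: partition around a pivot, recurse."""
    if len(xs) <= 1:
        return list(xs), 0
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    sorted_left, c_left = quicksort(left)
    sorted_right, c_right = quicksort(right)
    # 2*len(xs) comparisons were made building the partitions above.
    return sorted_left + mid + sorted_right, c_left + c_right + 2 * len(xs)

data = [random.randrange(10_000) for _ in range(2_000)]
bubble_result, bubble_cmps = bubble_sort(data)
quick_result, quick_cmps = quicksort(data)
print(bubble_cmps, quick_cmps)  # roughly n^2/2 vs. a small multiple of n*lg(n)
```

For n = 2000, the bubble sort performs about two million comparisons while the quicksort does on the order of tens of thousands, which is the gap a small bit of up-front analysis would have flagged.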
https://ricomariani.medium.com/hotspots-premature-optimizati...