> most line-of-business software needs to optimise for changeability and correctness ("programming over time") not performance
These are not mutually exclusive; in fact, more often than not, they are correlated.
Maybe the most important aspect of performance is making things small: small code, small data structures, a small number of executed instructions. Writing small code is what "thinking about the instruction cache" essentially means, btw.
And as it turns out, the smaller the code, the less room there is for bugs, the more of it you can understand at once, and the easier it is to get good test coverage, all of which is good for correctness. As for changeability, the smaller the code, the smaller the changes. The same applies to data.
Now, some optimization techniques can make the code more complicated, for example parallelization, caching, some low-level optimizations, etc., but these represent only a fraction of what optimizing for performance is. And no serious performance-conscious programmer will reach for them without proper profiling/analysis.
Then there are things that make the code faster with limited impact (positive or negative), and this is what the article is about. Functionally, if/for is not really different from for/if, but one is faster than the other, so why pick the slow one? And even if the compiler optimizes that for you, why rely on the compiler when you can do it properly at no cost? Just like looping over 2D arrays: it is good to know that there are two ways of doing it, and while they look equivalent, one is fast and one is slow, so that you don't pick the slow one by accident.
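To make the two patterns concrete, here is a minimal Rust sketch (function names and data are made up for illustration). The first pair contrasts for/if, where a loop-invariant condition is re-checked every iteration, with if/for, where it is hoisted out. The second pair shows the two 2D traversal orders over a row-major flat array: they compute the same sum, but the "slow" version strides through memory instead of walking it sequentially.

```rust
// for/if: the `double` flag never changes inside the loop,
// yet the branch is evaluated on every iteration.
fn sum_scaled_for_if(data: &[i64], double: bool) -> i64 {
    let mut total = 0;
    for &x in data {
        if double {
            total += 2 * x;
        } else {
            total += x;
        }
    }
    total
}

// if/for: the invariant branch is checked once, then each arm
// runs a tight loop with no branching in the body.
fn sum_scaled_if_for(data: &[i64], double: bool) -> i64 {
    let mut total = 0;
    if double {
        for &x in data {
            total += 2 * x;
        }
    } else {
        for &x in data {
            total += x;
        }
    }
    total
}

// A 2D grid stored row-major in a flat slice: index = r * cols + c.
// Fast order: the inner loop walks contiguous memory (cache-friendly).
fn sum_row_major(grid: &[i64], rows: usize, cols: usize) -> i64 {
    let mut total = 0;
    for r in 0..rows {
        for c in 0..cols {
            total += grid[r * cols + c];
        }
    }
    total
}

// Slow order: same result, but each access jumps `cols` elements
// ahead, so on large grids nearly every access misses the cache.
fn sum_col_major(grid: &[i64], rows: usize, cols: usize) -> i64 {
    let mut total = 0;
    for c in 0..cols {
        for r in 0..rows {
            total += grid[r * cols + c];
        }
    }
    total
}
```

An optimizing compiler will often hoist the invariant branch itself (loop unswitching), but as the comment says, writing the hoisted form directly costs nothing and doesn't depend on the optimizer firing.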
Is for/if faster because the loop gets started right away, whereas the ifs have to check their conditions constantly on top of whatever action they are supposed to execute?
Loops are fastest when they fit in the processor's instruction cache (and preferably only touch data that fits in the data cache). Similarly, code is fastest when it executes the fewest instructions. In the first example, the walrus(Option) function is designed to be executed unconditionally, only to return early when there is no walrus. That's an unnecessary function call that can be removed by changing the method signature (in Rust, because it has non-nullable types; in other languages you would need the null check anyway for safety reasons).
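Here is a hedged sketch of that signature change in Rust; the `Walrus` type and function names are invented for illustration, not taken from the article. In the "before" version the function is always called and checks for absence itself; in the "after" version the signature demands a walrus, so the absent case is decided once at the call site and the call never happens on that path.

```rust
// Hypothetical type, just for the example.
struct Walrus {
    weight_kg: u32,
}

// Before: called unconditionally, early-returns when there is no
// walrus. On the `None` path the entire call was wasted work.
fn feed_optional(walrus: Option<&Walrus>) -> u32 {
    match walrus {
        None => 0, // early return: nothing to do, but we paid the call
        Some(w) => w.weight_kg + 5,
    }
}

// After: the non-nullable signature guarantees a walrus exists,
// so the function body carries no check at all.
fn feed(w: &Walrus) -> u32 {
    w.weight_kg + 5
}

// The absence check moves to the caller, where it happens once and
// can skip the call entirely.
fn caller(maybe: Option<&Walrus>) -> u32 {
    match maybe {
        Some(w) => feed(w), // the only path that pays for the call
        None => 0,
    }
}
```

The point is the type system, not cleverness: `&Walrus` cannot be null, so the "no walrus" branch is unrepresentable inside `feed`, which is exactly the instruction-count reduction the comment describes.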