The short answer is: absolutely not, even when you're sure it should. Even something as simple as a naive byteswap function can generate surprisingly suboptimal code depending on the compiler. If you really want to be sure, you're going to have to check. (And if you want to check, a good tool is, of course, Compiler Explorer.)
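As a concrete version of that byteswap example, here is a sketch of the classic shift-and-mask implementation. Some compilers recognise this pattern and emit a single `bswap` instruction; others don't, which is exactly the kind of thing worth pasting into Compiler Explorer. (The function name is mine, not from any particular library.)

```c
#include <stdint.h>

/* Naive 32-bit byteswap via shifts and masks. Whether this collapses
   to one bswap instruction depends on compiler, version, and flags. */
uint32_t byteswap32(uint32_t x) {
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) << 8)  |
           ((x & 0x00FF0000u) >> 8)  |
           ((x & 0xFF000000u) >> 24);
}
```

Compilers that do spot the idiom usually only do so when the masks and shifts follow it exactly; a small rearrangement can break the pattern match.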
No. Compilers (correctly) prefer correctness over speed. They can optimise the "obvious" things, but they can't account for domain knowledge, for inefficiencies spread across distant parts of the program, or for code that "might" alter some global state. They can only make optimisations where they can prove there are no side effects, because they have to err on the side of caution.
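A minimal sketch of that caution, assuming a hypothetical function `threshold()` whose body lives in another translation unit: the optimiser sees only the declaration, must assume each call could touch global state, and therefore can't hoist it out of the loop, even if you know it always returns the same value.

```c
#include <stddef.h>

/* Imagine this is declared in a header and defined elsewhere: the
   optimiser can't see the body, so it must assume side effects. */
int threshold(void);

long count_below(const int *a, size_t n) {
    long count = 0;
    for (size_t i = 0; i < n; i++) {
        /* threshold() is re-called on every iteration; the compiler
           may not cache it, because the call might mutate state. */
        if (a[i] < threshold())
            count++;
    }
    return count;
}

/* Definition included here only so the example links and runs. */
int threshold(void) { return 10; }
```

With link-time optimisation the compiler may see the body and hoist the call after all, which is another reason the only reliable answer is to inspect the generated code.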
They will only give you micro-optimisations, which can cumulatively add up, but the burden of holistic program efficiency is still very much on the programmer.
If you're emptying a swimming pool with a glass, the compiler will optimise the glass size and your arm movements, but it won't ask "are you emptying the correct pool?" or "should you be using a pump instead?" A correct answer to either of those questions could be 100,000 times more impactful than anything the compiler can answer on its own.
Some of the moves seemed to change what an individual function might do. For example, they suggested pulling an `if` out of a function into the calling function.
Could the compiler figure that out? My gut says maybe; perhaps if it started by inlining the callee. But inlining usually happens based on heuristics, so this seems like an unreliable strategy even if it would work at all.
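To make that refactor concrete, here is a sketch of what "pulling the `if` into the caller" might look like, with invented names and a counter standing in for the real work. In the "before" version the flag is re-checked (and a call made) for every element; in the "after" version the disabled case skips the loop body entirely. A compiler can only do this transformation itself if it inlines the callee and then unswitches the loop, which it may or may not decide to do.

```c
#include <stddef.h>

static long events_logged = 0;  /* stand-in for real logging work */

/* Before: the callee checks the flag on every single call. */
static void log_event(int enabled, int event) {
    if (!enabled)
        return;
    events_logged++;
    (void)event;
}

static void process_before(const int *events, size_t n, int enabled) {
    for (size_t i = 0; i < n; i++)
        log_event(enabled, events[i]);  /* n calls, n branches */
}

/* After: the if is hoisted into the caller, so when logging is off
   the loop's call-and-branch cost disappears altogether. */
static void process_after(const int *events, size_t n, int enabled) {
    if (!enabled)
        return;
    for (size_t i = 0; i < n; i++) {
        events_logged++;
        (void)events[i];
    }
}
```

Both versions do the same thing observably; the point of the refactor is that the second makes the per-iteration cost obvious to humans and compilers alike.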
It's a real issue in most compiled languages too if you're not careful (and there's a sort of opposite issue as well: too few functions causing unnecessary bloat and also killing performance).