Depends how you look at it. Single-thread perf is pretty much stagnant. Only recently has Ryzen budged multi-core perf, and we've yet to see how long that growth path can last.
I agree that the cloud has made computing power more accessible, but there are known limits to scalability. Moreover, using distributed computing (Spark, etc.) kills efficiency (see all those posts about a laptop beating a medium-sized big data cluster).
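For a sense of what those posts benchmark, here is a toy single-machine baseline in the same spirit: a streaming word count over a local file with no framework in the way (the file path argument is hypothetical, just for illustration). The point is that there is no serialization, shuffling, or cluster coordination to pay for.

```python
# Toy single-machine baseline in the spirit of the "laptop vs cluster" posts:
# a streaming word count with no framework overhead (file path is hypothetical).
import collections
import sys

def word_count(path):
    counts = collections.Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            counts.update(line.split())
    return counts

if __name__ == "__main__":
    for word, n in word_count(sys.argv[1]).most_common(10):
        print(n, word)
```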
If I am not misremembering, "all those blog posts" were written by Frank McSherry, one of the cofounders of Materialize and the developer of differential dataflow :-)
Very few data processing workloads are bottlenecked by single-thread performance. Similarly, as the memory, thread count, and storage capacity of modern servers have continued to track Moore's law, the number of applications that can fit within the same fixed cost has also grown.
In terms of efficiency, the big question is "does an incremental gain in efficiency matter for this application?" If we're talking about a < 10x change in performance/cost, the answer will be no for most teams. Consider how many of these big data applications are implemented in Python or other interpreted languages.
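To put rough numbers on that last point, here is a hypothetical micro-benchmark (sizes made up, assumes NumPy is installed) comparing a pure-Python sum against a vectorized one. The gap is usually well beyond 10x, which is exactly the kind of inefficiency teams already accept for convenience.

```python
# Hypothetical micro-benchmark: interpreted loop vs. vectorized sum.
# The exact ratio depends on hardware, but it is typically far above 10x.
import time
import numpy as np

data = np.random.rand(10_000_000)

start = time.perf_counter()
total_py = sum(float(x) for x in data)   # interpreted, element by element
py_secs = time.perf_counter() - start

start = time.perf_counter()
total_np = data.sum()                    # vectorized C loop
np_secs = time.perf_counter() - start

print(f"pure Python: {py_secs:.3f}s, NumPy: {np_secs:.3f}s, ratio ~{py_secs / np_secs:.0f}x")
```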