Some funds that tried to recruit me were really interested in classical generative models (ARMA, GARCH, HMMs with heavy-tailed emissions, etc.) extended with deep components to make them more flexible. Pyro and Kevin Murphy's ProbML vol II are good starting points for learning more about these topics.
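To make that concrete, here is a minimal Pyro sketch of one such classical model: a Bayesian AR(1) with Student-t (heavy-tailed) noise, fit with NUTS on synthetic data. The model structure, priors, and data are my own illustrative assumptions, not anything specific to those funds.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def ar1_student_t(y):
    # Priors on the AR coefficient, noise scale, and tail heaviness (assumed).
    phi = pyro.sample("phi", dist.Normal(0.0, 1.0))
    sigma = pyro.sample("sigma", dist.HalfNormal(1.0))
    nu = pyro.sample("nu", dist.Gamma(2.0, 0.1))
    # Heavy-tailed AR(1) likelihood: y[t] ~ StudentT(nu, phi * y[t-1], sigma)
    with pyro.plate("time", len(y) - 1):
        pyro.sample("obs", dist.StudentT(nu, phi * y[:-1], sigma), obs=y[1:])

# A short synthetic series stands in for the "scarce data" setting.
torch.manual_seed(0)
y = 0.1 * torch.cumsum(torch.randn(100), dim=0)
mcmc = MCMC(NUTS(ar1_student_t), num_samples=500, warmup_steps=500)
mcmc.run(y)
mcmc.summary()  # posterior means and credible intervals, not point estimates
```

The "deep components" typically enter by replacing a piece of this, e.g. swapping the linear mean `phi * y[:-1]` for a small neural network, while keeping the probabilistic skeleton and the posterior inference.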
The key is to understand that in some of these problems, data is relatively scarce, and it is really important to quantify uncertainty.
I know next to nothing about this. How do people make use of forecasts that don't come with uncertainty estimates? It seems like that's the most important part. Why hasn't Bayesian statistics taken over completely?
Bayesian inference is costly and adds a significant amount of complexity to your workflow. But yes, I agree, the way uncertainty is handled is often sloppy.
Maximum likelihood estimates are very frequently atypical points in the posterior distribution: in high dimensions, the posterior mode can sit far from the typical set where almost all of the posterior mass lies. It is unsettling to hear that people are using these point estimates without computing the entire posterior.
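A quick way to see why the mode is atypical: for a standard d-dimensional Gaussian the density peaks at the origin, yet nearly all the mass concentrates in a thin shell of radius about sqrt(d). A small NumPy sketch (my own illustrative example, not from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (1, 10, 100, 1000):
    samples = rng.standard_normal((10_000, d))
    radii = np.linalg.norm(samples, axis=1)
    print(f"d={d:>4}: mean radius {radii.mean():7.2f} "
          f"(sqrt(d) = {np.sqrt(d):7.2f}), min radius {radii.min():.2f}")
# Once d is large, no sample lands anywhere near the mode at the origin, so
# summarizing a posterior by its peak (MAP/MLE) can be badly misleading.
```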