This is so cool. What is the second-order effect of model training becoming democratized, and of local models becoming the norm? Agentic tasks are already handled well by current AI, as long as you know what you're doing and can stress the agent against tests, a spec, etc.
I am thinking that one effect is:
- it will become normal for a meta-model to train a model specific to a particular task/product (rough sketch of the idea below).
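To make that concrete, here is a minimal sketch, assuming a Hugging Face stack: a strong general "teacher" model writes synthetic examples for one narrow task, and a small local model is fine-tuned on them. The model names, the prompt format, and the generate_examples() helper are illustrative assumptions, not any real product's pipeline.

```python
# Sketch only: a "meta-model trains a task-specific model" loop.
# A teacher model produces synthetic examples for one narrow task,
# then a small local student model is fine-tuned on them.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

TEACHER = "meta-llama/Llama-3.1-70B-Instruct"  # hypothetical "meta-model"
STUDENT = "Qwen/Qwen2.5-0.5B"                  # small model to specialize

def generate_examples(n: int) -> list[str]:
    """Return n synthetic training examples for the target task.

    Stubbed out here; in practice this would prompt TEACHER with a
    description of the product's task and collect its completions.
    """
    return [f"Q: example task input {i}\nA: example task output {i}" for i in range(n)]

def main() -> None:
    tokenizer = AutoTokenizer.from_pretrained(STUDENT)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    # Build a tiny synthetic dataset from the teacher's outputs.
    texts = generate_examples(256)
    dataset = Dataset.from_dict({"text": texts}).map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )

    # Fine-tune the small student on the task-specific data.
    model = AutoModelForCausalLM.from_pretrained(STUDENT)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="task-specialized-student",
            per_device_train_batch_size=4,
            num_train_epochs=1,
        ),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("task-specialized-student")

if __name__ == "__main__":
    main()
```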
Also, differently, I'm quite sure that AGI is not reachable on this current path (useful though it is), but some algorithmic improvements might crack ubiquitous, trainable AGI. That probably includes some kind of embodiment to provide world models and emotions (which are essential to embodied survival and success).
Personally, I think usable AI is more valuable than simply more intelligence. Many of the labs are pushing toward models that score 1% better on Codeforces and AIME if you let them think and use tools for hours, instead of more user-friendly models with better coding habits, like writing shorter, more modular code.
Totally this. But the corporate labs have incentives to keep researching, given their investors and staffing load, so they have to show work.
I guess one nice advantage of the backwardness here is that economic opportunities exist for those who can solve pain points in using existing intelligence. In practice, older models often do almost as well at agentic tasks, and they can probably be pushed further.
Still, AGI should make a lot of this redundant, and it will then be more about the intelligence than the tooling. But the opportunity exists now. We may not have widespread AGI for another 8-10 years, so there's plenty of money to be made in the meantime.
Ya definitely, that makes total sense. It feels to me that the labs currently have great researchers, who only care about making models perform better on raw intelligence, and then incompetent applied AI engineers / FDEs whose only suggestion for making agents more usable is better prompting to paper over bad habits (something like the snippet below).
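For illustration, here is a brief sketch of that "just prompt away the bad habits" approach, using the openai Python client as an example stack; the guideline text and model choice are assumptions, not anyone's actual setup.

```python
# Sketch: the fix lives in a system prompt rather than in the model itself.
from openai import OpenAI

CODING_HABITS = (
    "Write short, modular functions. "
    "Prefer editing existing files over creating new ones. "
    "Do not add speculative abstractions or unused helpers."
)

client = OpenAI()

def run_agent_step(task: str) -> str:
    """One agent turn with the habit guidelines bolted on via the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": CODING_HABITS},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content
```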