This is why I’ve always been skeptical of runaway superintelligence. Where does a brain in a vat get the map to go where there are no roads? Where does it get its training data? It is not embodied so it can’t go out there and get information and experience to propel its learning.
Giving an AI the ability to self-modify would just be a roundabout way of training it on itself. Repeatedly compress a JPEG and you don’t get the “enhance” effect from Hollywood. You get degraded quality and compression artifacts.
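If you want to see that generational loss directly, here's a quick sketch (assumes Pillow is installed; the filename and quality setting are just placeholders):

```python
# Re-encode the same image 100 times at the same quality setting.
# Assumes Pillow is installed and some photo.jpg exists; both the
# filename and the quality value are placeholders.
import io
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
for _ in range(100):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)   # lossy re-encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")       # decode the degraded copy

img.save("photo_100_generations.jpg")
# Each pass can only preserve or destroy detail, never add it; block
# artifacts and color shifts slowly accumulate, nothing gets "enhanced".
```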
> Where does a brain in a vat get the map to go where there are no roads? Where does it get its training data? It is not embodied so it can’t go out there and get information and experience to propel its learning.
An AI in a vat that can't do that is obviously useless. It's the ML equivalent of a computer running purely functional software: it just sits there and heats up a bit (though technically that's a side effect).
Conversely, any AI that's meant to be useful will be hooked up to real-world inputs somehow. Might be general Internet access. Might be people chatting with it via a REST API. Might be a video feed. Even if the AI exists only to analyze and remix outputs of LLMs, those LLMs are prompted by something connected to the real world. Even if it's a multi-stage connection (AI reading AI reading AI reading AI... reading stock tickers), there has to be a real-world connection somewhere - otherwise the AI is just an expensive electric heater.
Point being, you can assume every AI will have a signal coming in from the real world. If such an AI can self-modify, and if it identifies that signal (or has it pointed out) as a source of new information, it could grow based on it and avoid re-JPG-compressing itself into senility.
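Here's a toy way to see the difference between a model chewing only on its own output and one that keeps some real signal in the mix. The fitted Gaussian standing in for a "model", and all the numbers, are illustrative assumptions, not anyone's actual training setup:

```python
# Toy sketch of "training on itself" vs. mixing in a real-world signal.
import numpy as np

rng = np.random.default_rng(0)

def run(generations=1000, n=50, fresh_fraction=0.0):
    mu, sigma = 0.0, 1.0                      # current "model": a Gaussian
    for _ in range(generations):
        own = rng.normal(mu, sigma, n)                          # its own output
        fresh = rng.normal(0.0, 1.0, int(n * fresh_fraction))   # real-world signal
        data = np.concatenate([own, fresh])
        mu, sigma = data.mean(), data.std()   # "retrain" on the mixture
    return sigma

print("0% real input :", run(fresh_fraction=0.0))   # variance tends to collapse toward 0
print("10% real input:", run(fresh_fraction=0.1))   # hovers near the true value of 1
```

With no outside signal the estimation noise compounds generation after generation and the distribution withers; even a modest trickle of real data keeps pulling it back toward reality.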
Input from the real world probably isn't enough. It seems to me a genuinely threatening intelligence needs the ability to create feedback loops through the real world, just like humans do.
Unless a given class of LLMs is run only once and then forgotten, there already is a feedback loop through the real world - the output of the LLM is used for something, and influences the next input to a greater or lesser degree.
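Rough sketch of what that loop looks like; `call_llm` is a hypothetical stub, not any particular API:

```python
# The model's answer changes some state in the world, and that state
# shapes the next prompt - a feedback loop through the real world.

def call_llm(prompt: str) -> str:
    return f"(model response to: {prompt!r})"   # placeholder stub

world_state = "initial conditions"
for step in range(5):
    prompt = f"Current situation: {world_state}. What next?"
    answer = call_llm(prompt)
    # The answer gets acted on (published, traded on, executed, ...),
    # which changes the world, which in turn shapes the next prompt.
    world_state = answer
```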