> Do you actually believe that a technology that has access to vast amounts of human-generated data and can imitate our language well enough to fool us a lot of the time will suddenly scale into or somehow spark itself into an actual intelligence with its own goals and ideas and a drive to survive and reproduce? Or that something that has no body or senses with which to experience the world can evolve itself?
Not in its current state. Have you looked at things like LangChain at all? I think people are overlooking the integration piece of this pie. People are already using it to write code that can then be recompiled and run. People are already integrating plugins and training it to call functions instead of responding with human language. We're really only a few iterations away from it being able to use what it has learned from human language to emulate "decision making", and that may or may not be reasonable, depending on how it's built or trained.
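Since "training it to call functions instead of responding with human language" may sound abstract, here's a rough sketch of what that loop looks like using the OpenAI Python SDK's function-calling interface (the older 0.x `ChatCompletion` API); the `get_weather` tool and the prompt are made up purely for illustration:

```python
import json
import openai

openai.api_key = "sk-..."  # your API key

# A hypothetical tool the model is allowed to "call" instead of answering in prose.
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 21})

functions = [{
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    functions=functions,
    function_call="auto",  # the model decides whether to emit a call or plain text
)

msg = resp["choices"][0]["message"]
if msg.get("function_call"):
    # The model returned structured arguments instead of human language;
    # the application dispatches them to real code.
    args = json.loads(msg["function_call"]["arguments"])
    print("tool result:", get_weather(**args))
else:
    print("plain reply:", msg["content"])
```

The model's output here is structured arguments for the application to act on, not prose for a human to read; swap the toy tool for real APIs (or wrap it in a LangChain agent that loops on the results) and you have the integration piece being described.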
Despite what people keep saying about "it's just an LLM!", it should be clear to people that language is a human abstraction for meaning, and clearly GPT-4+ can produce pretty meaningful results in the language department. If you give it camera feeds, access to APIs, knowledge of Burp Suite and Kali Linux, and so on, things do start to look a little scary.
Edit: and by "give it", I don't mean ChatGPT, I mean whatever the nth iteration down the line is. I think we're safe for now. :)
> Despite what people keep saying about "it's just an LLM!", it should be clear to people that language is a human abstraction for meaning, and clearly GPT-4+ can produce pretty meaningful results in the language department.
Yes -- it produces results that are meaningful to us, or rather we infer meaning from the results, because our brains apply the best model they have and read intelligence and meaning into language. But the language itself comes from data sets of human-produced material, not from any intelligence or consciousness we imagine the LLM to have.