> You’re looking at the world from a very anthropocentric pov. Sight, sound, touch, feel, taste are all human senses, but they’re all just one thing: ingesting information. An AI can ingest information… that’s just a fact… so… what are we talking about here?
I think my comment was misunderstood, so let me try to break it down a little. Let's remember that this was in the context of "there's nothing about AI in general that limits it to learning only from prior data":
- Senses are used to ingest information, and processors turn that information into usable data. The density of the information ingested, the speed at which it is processed, and the nature of how the processing occurs are all vastly different. To break it down further: I'm stating that we don't yet have sensors anywhere near as capable as human senses, and that even if we did, without a human brain to process the data you will get a different output. Again, see photography for more on this. And we have not even begun to scratch the surface of touch or taste. I understand the touch problem is (one small part of) why general-purpose personal robots are not yet viable. I argue that we are a LONG way off from computers being able to interpret the world in a fashion similar to humans.
I believe our sensory capacity is a large (but not complete) part of what it means to be a living animal.
- Emotion still appears to be exclusive to living things, not machines. It's unclear what would be necessary for this to change. This is a limiting factor in computers being able to understand the world, "social interactions, and the unique context of each moment," which was the claim in question.
- As far as I'm aware, no LLM today exhibits true reasoning or morality. While LLMs are certainly impressive in their ability to recall information from compressed data, and even generate streams of text that look like reasoning, they are still simply decompressing stored data. Morality today is implemented as content filters and fine-tuning of this statistical model.
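To make that last point concrete, here's a minimal sketch of "morality as a content filter" layered on top of a generator. Everything in it (the blocklist, the function names, the canned completion) is a hypothetical illustration, not any actual vendor's pipeline:

```python
# Sketch of "morality as a content filter": the filter wraps the model;
# the model itself has no notion of right or wrong. Blocklist and function
# names are purely illustrative.

BLOCKLIST = {"counterfeit money", "pick a lock"}  # illustrative phrases

def generate(prompt: str) -> str:
    # Stand-in for the statistical model: just "decompresses stored data".
    return f"Model completion for: {prompt}"

def moderated_generate(prompt: str) -> str:
    # The "moral judgment" happens here, outside the model,
    # as plain string matching on the input.
    if any(phrase in prompt.lower() for phrase in BLOCKLIST):
        return "I can't help with that."
    return generate(prompt)
```

Swap the string matching for a learned classifier and you get the common production pattern, but the point stands: the refusal logic is bolted on outside the statistical model.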
> Also, we have absolutely no idea how the brain works. Current AI was developed off of modern theories on how the brain works. Saying that AI doesn’t represent how the brain works is ridiculous because the whole story of AI was that we developed a theory of how the brain worked, modeled it through tech, and it worked way better than we thought it would.
It makes me really sad when people say this, because it's incredibly disingenuous. There are certainly more questions than answers when it comes to the brain, but we _do_ understand quite a lot. It's not surprising to me that people who are focused on technology and AI would anthropomorphize machines, and then claim that (because they aren't aware of how the brain works) "we don't know how the brain works." I had similar beliefs, as a software engineer. But, after watching my partner attend medical school and residency, it's become clear that my own knowledge is far from the sum of humanity's knowledge in this area.
You're absolutely right that LLMs borrow concepts from neuroscience, but they are still a VERY long way from "recreating the brain." I genuinely find it sad that people think they are no smarter / better than an LLM. Keep in mind no LLM has even passed a Turing test yet. (No, I'm not talking about the Facebook comments section - I'm talking about a test where someone knowingly communicates with a machine and a human through text, and through targeted questions and analysis of the answers, is unable to accurately determine which is which.)
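For reference, that version of the test can be written down as a protocol. The responder and interrogator functions below are placeholders; only the structure matters:

```python
import random

# Sketch of the Turing-test protocol described above: an interrogator
# questions two unlabeled text channels (one human, one machine) and must
# guess which is the machine. Responder functions are placeholders.

def human_answer(question: str) -> str:
    return "a human-style answer"

def machine_answer(question: str) -> str:
    return "a machine-style answer"

def run_trial(interrogator) -> bool:
    """True if the interrogator correctly picks out the machine."""
    channels = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(channels)  # the interrogator never sees the labels
    transcripts = [respond("What did your breakfast smell like?")
                   for _, respond in channels]
    guess = interrogator(transcripts)  # index 0 or 1
    return channels[guess][0] == "machine"

def accuracy(interrogator, trials: int = 1000) -> float:
    return sum(run_trial(interrogator) for _ in range(trials)) / trials

# An interrogator who can't tell the channels apart sits near 50% (chance).
blind = lambda transcripts: random.randrange(2)
```

The machine only "passes" when even a motivated interrogator's accuracy stays stuck near the 50% chance line.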
Here's some more food for thought: Can LLMs sleep? Can they dream? What does that look like? Can they form opinions? Can they form meaningful, fulfilling (to themselves) relationships?
> Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
So there's a lot here that I disagree with. You start out by pointing out how much more information humans ingest, but there's no reason why the amount of information ingested should lead to a fundamentally different kind of organism. In the exact same way, I eat a lot more food than an amoeba, but we're both still living organisms. Scale doesn't make a difference.
The idea that human emotions are somehow distinct from thoughts needs to be proven to me. IMO emotions are just thoughts that happen too quickly for language. This discretization of the human experience is unnecessary, and like I said before, it would be immediately challenged in a philosophy setting. So would your claim that humans exhibit some kind of reasoning or morality that's distinct and unique. Modern philosophy is quite clear that this is bullshit. I was just reading Nietzsche today and I can feel him rolling over in his grave right now.
Also, a whole branch of machine learning, reinforcement learning, centers on simulating emotions: if the AI does something good, it's rewarded; if it does something bad, it's punished. We created the whole algorithm by simulating Freud's pleasure principle, so who are we to say that the simulation is any different from the real thing?
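That reward/punishment loop is easy to write down. This is a toy value-learning sketch with made-up action names and learning rate, not any production training setup:

```python
import random

# Toy sketch of the reward/punishment loop: the agent's learned value for
# an action is nudged toward the reward it receives. Action names and the
# learning rate are invented for illustration.

values = {"share_cookie": 0.0, "steal_cookie": 0.0}  # learned preferences
ALPHA = 0.1  # learning rate

def reward_for(action: str) -> float:
    # The environment's stand-in for "pleasure" and "pain".
    return 1.0 if action == "share_cookie" else -1.0

def step():
    action = random.choice(list(values))
    reward = reward_for(action)
    values[action] += ALPHA * (reward - values[action])  # nudge toward reward

for _ in range(500):
    step()
# After training, values["share_cookie"] sits near +1.0 and
# values["steal_cookie"] near -1.0: the agent "prefers" the rewarded act.
```

Whether that numeric nudge is pleasure, or merely a simulation of it, is exactly the question being argued.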
> It's not surprising to me that people who are focused on technology and AI would anthropomorphize machines, and then claim that (because they aren't aware of how the brain works) "we don't know how the brain works." I had similar beliefs, as a software engineer.
Well, I'm actually much more knowledgeable about the humanities than I am about tech, and IMO the tech world is at the forefront of making our abstract philosophical understanding of the brain concrete. Neural networks and LLMs are the most successful method of creating cognition so far. I'm sure we'll find that there's a lot more to do, but this could very well be the fundamental algorithm of the brain, and I don't see any reason to discount that by saying what you've been saying in this comment thread.
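For anyone who hasn't seen it, the brain-inspired abstraction in question is small enough to write out in full: an artificial "neuron" is a weighted sum of inputs pushed through a nonlinearity, and networks stack many of these. A drastically simplified sketch with hand-picked weights:

```python
import math

# The brain-inspired abstraction in miniature: an artificial "neuron"
# computes a weighted sum of its inputs plus a bias, then squashes the
# result through a sigmoid to get a "firing rate" in (0, 1).

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

# Hand-wired soft OR gate: fires if either input is on.
on = neuron([1.0, 0.0], [4.0, 4.0], -2.0)   # > 0.5, the neuron "fires"
off = neuron([0.0, 0.0], [4.0, 4.0], -2.0)  # < 0.5, it stays quiet
```

Modern networks stack millions to billions of these units; what that stack does and doesn't capture is the open question in this thread.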
Adding a couple of sources to my earlier comment: Caltech researchers note that while our brain can process thoughts at only 10 bits / second, our sensory systems process 1 billion bits / second: https://www.caltech.edu/about/news/thinking-slowly-the-parad...
And you can get started on some of the differences between LLMs and brains here: https://buttondown.com/apperceptive/archive/how-llms-are-and...