It seems like you've misunderstood what I'm trying to say. I'm saying your statement
>But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks.
is very wrong. Atari games aren't even in the same class of difficulty as household chores, so the two can't be compared. You've essentially agreed with my point while saying that I'm speculating.
>Atari game agents are trivial if you hard-code / do traditional brute-force search AI because there is no noise in observation and no noise in control, and the control is very simple (usually just up down left right, no torques or anything physically complex)
I never said that they are trivial. The point I'm making, again, is that you can't claim we can brute-force even narrow household chores. They have a real level of complexity - friction (which is a huge problem), elasticity, and even air flow can disrupt the actions - and current systems lack the computing power to account for everything. We, on the other hand, have something called intuition (which I'd encourage everyone interested in AGI to properly read up on, starting with Brain Games S4: Intuition, which is on Netflix).
And it seems like you don't consider brute-forced solutions to be proper solutions. I agree, as would anyone with common sense who has read a couple of Wikipedia articles. But RL is not exactly brute forcing as we usually think of it, although it might look like it. We all employ brute-force learning in our own lives, to whatever extent, although our feedback and thought processes are much more complex, so we feel we are acting purely on intelligent deductions made in our brains. We still need a few 'brute force' attempts, though given how few iterations we need, you can't really call them that.
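To make the distinction concrete: here is a toy sketch of why noise-free, discrete-action settings (like an Atari emulator) are amenable to brute-force search while noisy physical control is not. Everything here is hypothetical - a made-up grid `step` function standing in for a deterministic emulator, not anyone's actual method.

```python
import itertools

ACTIONS = ["up", "down", "left", "right"]  # Atari-style discrete controls

def step(state, action):
    """Deterministic toy transition: state is an (x, y) position."""
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[action]
    return (x + dx, y + dy)

def brute_force_plan(start, goal, horizon):
    """Exhaustively try every action sequence up to `horizon` steps.

    Feasible only because transitions are noise-free: 4**horizon
    sequences, each exactly replayable. Add observation/control noise
    or continuous torques (household robotics) and this enumeration
    no longer works - the same sequence won't reproduce the same state.
    """
    for seq in itertools.product(ACTIONS, repeat=horizon):
        state = start
        for a in seq:
            state = step(state, a)
        if state == goal:
            return list(seq)
    return None

plan = brute_force_plan(start=(0, 0), goal=(2, 1), horizon=3)
print(plan)  # a valid 3-step sequence reaching (2, 1)
```

RL, by contrast, doesn't enumerate sequences; it samples trajectories and uses the feedback signal to bias future attempts, which is closer to the trial-and-error-with-learning we do ourselves.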
I suggest you read some literature too, and please point out where I'm speculating.
Almost. I'm saying the difficulty of household tasks is so much greater that they belong to a different class of problems, and the two cannot be equated with a comparative adjective.
1. DeepMind's reinforcement learning paper: http://www.readcube.com/articles/10.1038/nature14236?shared_...