This is a deep, significant post (pardon pun etc).
The author is clearly informed and takes a strong, historical view of the situation. Looking at what the really smart people who brought us this innovation have said and done lately is a good start imo (just one datum of course, but there are others in this interesting survey).
DeepMind hasn't shown anything breathtaking since their AlphaGo Zero.
Another thing to consider about AlphaGo and AlphaGo Zero is the vast, vast amount of computing firepower these applications mobilized. While it was often repeated that ordinary Go programs weren't making progress, this wasn't true - the best amateur programs had gotten to about 2 Dan amateur using Monte Carlo tree search. AlphaGo added CNNs for its evaluation function and enormous amounts of compute for its training process, and got effectiveness up to best in the world, 9 Dan professional (maybe 11 Dan amateur, for pure comparison). [1]
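For context on what that combination looks like mechanically, here is a minimal, hypothetical sketch of a tree search guided by a learned policy/value network. The `net.predict` interface is invented for illustration; this shows the general AlphaGo-style idea (a PUCT-like selection rule using network priors), not DeepMind's actual code.

```python
import math

# Minimal sketch (not DeepMind's code): a tree search whose move priors and
# position values come from a learned network instead of hand-written rules.
# `net.predict(state)` is a hypothetical stand-in for the CNN described above.

class Node:
    def __init__(self, prior):
        self.prior = prior          # P(s, a) suggested by the network
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}          # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT-style rule: balance the network's prior against observed values.
    total = sum(ch.visits for ch in node.children.values())
    def score(ch):
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.value() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def expand_and_evaluate(node, state, net):
    priors, value = net.predict(state)   # network replaces a hand-tuned eval
    for move, p in priors.items():
        node.children[move] = Node(p)
    return value

def backup(path, value):
    # Propagate the evaluation back up the visited path.
    for n in reversed(path):
        n.visits += 1
        n.value_sum += value
        value = -value                    # alternate perspective each ply
```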
AlphaGo Zero was supposedly even more powerful, learning without human intervention. BUT it cost petaflops upon petaflops of compute, expensive enough that they released a total of only ten or twenty AlphaGo Zero games to the world, labeled "A great gift".
The author conveniently reproduces the chart of power versus results. Look at it, consider it. Consider the chart in the context of Moore's Law retreating. The problems of AlphaZero generalize, as described in the article.
The author could also have dived into the troubling questions of "AI as ordinary computer application" (what do testing, debugging, interface design, etc. mean when the app is automatically generated in an ad-hoc fashion?) or "explainability". But when you can paint a troubling picture without these gnawing problems even appearing, you've done well.
>DeepMind hasn't shown anything breathtaking since their AlphaGo Zero
They went on to make AlphaZero, a generalised version that could learn chess, shogi, or any similar game. The chess version beat a leading conventional chess program with 28 wins, 0 losses, and 72 draws.
That seemed impressive to me.
Also, they used loads of compute during training but not so much during play (roughly 5,000 TPUs for training versus 4 TPUs during the match).
Also it got better than humans in those games from scratch in about 4 hours whereas humans have had 2000 years to study them so you can forgive it some resource usage.
It's not like humanity really needs another chess-playing program 20 years after IBM solved that problem (but now utilizing 1000x more compute power). I just find all these game-playing contraptions really uninteresting. There are plenty of real-world problems of much higher practicality to be solved. Moravec's paradox in full glow.
The fact that it beat Stockfish 9 is not what is impressive about AlphaZero.
What was impressive was the way Stockfish 9 was beaten. AlphaZero played like a human player, making sacrifices for position that Stockfish thought were detrimental. When it played as white, the fact that it mostly started with the queen pawn (despite the king pawn being "best by test") and the way AlphaZero used Stockfish's pawn structure and tempo to basically remove a bishop from the game was magical.
Yes, since it's a game, it's "useless", but it allowed me (and I'm not the only one) to be a bit better at chess. It's not world hunger, not climate change, it's just a bit of distraction for some people.
PS: I was among the people thinking that genetic algorithms + deep learning were not enough to emulate human logical capacities; the AlphaZero vs Stockfish games made me admit I was wrong (even if I still think it only works inside well-defined environments).
Playing like a human, for me, also means making human mistakes. A chess-playing computer playing like a 4000-rated "human" is useless; one that can be configured to play at different Elo levels is more interesting, although most engines can do that already, with no ML needed, nor huge amounts of computing power.
> What was impressive was the way Stockfish 9 was beaten.
Without its opening database and without its endgame tablebase?
Frankly, the Stockfish vs AlphaZero match was the beginning of the AI Winter in my mind. The fact that they disabled Stockfish's primary databases was incredibly fishy IMO and is a major detriment to their paper.
Stockfish's engine is designed to only work in the midgame of chess. Remove the opening database and remove the endgame tablebase, and you're not really playing against Stockfish anymore.
The fact that Stockfish's opening was severely gimped is not a surprise to anybody in the Chess community. Stockfish didn't have its opening database enabled... for some reason.
I think for most people the research interest in games of various sorts is not simply a desire for a better and better game contraption, a better mousetrap. Rather, the thinking is: "playing games takes intelligence, so what can we learn about intelligence by building machines that play games?"
Most games are also closed systems, and conveniently grokkable ones, with enumerable search spaces, which gives us easily producible measures of the contraptions' abilities.
Whether this is the most effective path to understanding deeper questions about intelligence is an open question.
But I don't think it's fair to say that deeper questions and problems are being foregone simply to play games.
I think most 'games researchers' are pursuing these paths because neither they themselves nor anyone else has put forth another suggestion that makes them think, "hmm, that's a really good idea, that seems like it might be viable, and there is probably something interesting we could learn from it."
This is so true, I can't understand why people miss this. The games are just games. It's intelligence that is the goal.
And comparing AlphaGo Zero against those "other chess programs that existed for 30 years" is exactly missing the point as well.
Those programs were not constructed with zero knowledge. They were carefully crafted by human players to achieve the result. Are we also going to count all the brain processing power and the time spent by those researchers learning to play chess? AlphaGo Zero did not need any of that, besides knowledge of the basic rules of the game. Why compare compute requirements for two programs that have fundamentally different goals and achievements? One is carefully crafted by human intervention. The other learns a new game without prior knowledge...
It shows something about the game, but it's clear that humans don't learn the way AlphaZero does, so I don't think AlphaZero illuminated any aspect of human intelligence.
I think that, fundamentally, the goal of research is not necessarily human-like intelligence, just any high-level general intelligence. It's just that the human brain (and the rest of the body) has been a great example of an intelligent entity that we could source a lot of inspiration from. Whether the final result will share technical and structural similarity with a human (and how much), the future will tell.
In principle you are right. In practice we will see. My bet is that attempts focused on the human model will bear more fruit in the medium term, because we now have huge capability for observation at scale, which is v. exciting. Obviously ethics permitting!
Not sure if I am reading you correctly but to me you basically are saying "we have no idea but we believe that one day it will make sense".
Sounds more like religion and less like science to me.
I guess we could argue until the end of the world that no intelligence will emerge from more and more clever ways of brute-forcing your way out of problems in a finite space with perfect information. But that's what I think.
But humans could learn in the same way that AlphaZero does. We have the same resources and the same capabilities, just running on million-year-old hardware. Humans might not be able to replicate the performance of AlphaZero, but that does not mean it is useless in the study of intelligence.
The problem is that outside perfect-information games, most areas where intelligence is required have few obvious routes to letting the computer learn by perfectly simulating strategies and potential outcomes. Cases where "intelligence" is required typically entail handling human approximations of a lot of unknown and barely-known possibilities with an inadequate dataset. Advances in approaches to perfect-information games, which can be entirely simulated by a machine that knows the ruleset (and may actually be perturbed by adding inputs of human approaches to the problem), might be at best orthogonal to that particular goal. One of the takeaways from AlphaGo Zero massively outperforming AlphaGo is that even very carefully designed training sets for a problem fairly well understood by humans might actually retard system performance...
I totally agree with you and share your confusion.
On the topic of the different algorithmic approaches, I find it so fascinating how different these two approaches actually end up looking when analyzed by a professional commentator. When you watch the new style with a chess commentator, it feels a lot like listening to the analysis of a human game. The algorithm has very clearly captured strategic concepts in its neural network. Meanwhile, with older chess engines there is a tendency to get to positions where the computer clearly doesn't know what it's doing. The game reaches a strategic point where the things it's supposed to do are beyond the horizon of moves it can compute by brute force, so it plays stupidly. These are the positions where, even now, human players can still beat otherwise better-than-human old-style chess engines.
The thing is, you can learn new moves/strategies in these games that were never thought of before, but you still don't understand anything about intelligence at all.
It's not like the research on games comes at the expense of other, more worthy goals. It is a well-constrained problem that lets you understand the limitations of your method. Great for making progress. AlphaZero didn't just play chess well, it learned how to play chess well (and could generalize to other games). I'd forgive it 10000 times the resources for that.
I'd say getting better sample efficiency is a bigger deal. It isn't like POMDPs are a huge step away theoretically from MDPs. But if you attach one of these things to a robot, taking 10^7 samples to learn a policy is a deal breaker. So fine, please keep using games to research with.
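To make the sample-count point concrete, here is a toy tabular Q-learning loop; the `env` interface is a hypothetical gym-style stand-in, not a real library. Every pass through the loop is one interaction with the environment, which is cheap in a simulator and prohibitive on a physical robot.

```python
import random
from collections import defaultdict

# Toy illustration of the sample-count problem: each iteration is one
# environment interaction. 10**7 of them is fine in simulation, but on a
# physical robot it means enormous wall-clock time and hardware wear.
# `env` is a hypothetical environment object, invented for this sketch.

def q_learning(env, steps=10_000_000, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(float)                      # (state, action) -> value
    state = env.reset()
    for _ in range(steps):                      # each step = one real sample
        if random.random() < eps:
            action = env.sample_action()
        else:
            action = max(env.actions(state), key=lambda a: Q[(state, a)])
        next_state, reward, done = env.step(action)
        best_next = max((Q[(next_state, a)] for a in env.actions(next_state)),
                        default=0.0)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = env.reset() if done else next_state
    return Q
```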
This. Learning to play a game is one thing. Learning how to teach computers to learn a game is another thing. Yes chess programs have been good before, but that's missing the point a little bit. The novel bit is not that it can beat another computer, but how it learned how to do so.
It's Deep Blue, not Big Blue. The parameters used by its evaluation function were tuned by the system on games played by human masters.
But it's a mistake to think that a system learning by playing against itself is something new. Arthur Samuel's draughts (chequers) program did that in 1959.
It's not that it's new, it's that they've achieved it. Chess was orders of magnitude harder than draughts. The solution for draughts didn't scale to chess, but AlphaZero showed that chess was ridiculously easy for the same approach once it had mastered Go.
Both Samuel's checkers program and Deep Blue used alpha-beta pruning for search, plus a heuristic evaluation function. Deep Blue's heuristic function was necessarily more complex because chess is more complex than draughts. I think the reason master chess games were used to tune Deep Blue instead of self-play was the existence of a large database of such games, and because so much of its performance came from being able to look ahead so far.
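For readers unfamiliar with that kind of search, here is a minimal alpha-beta sketch; `evaluate`, `legal_moves`, and `apply` are hypothetical game-specific hooks standing in for the heuristic evaluation and move generation discussed above, not any engine's real API.

```python
# Minimal alpha-beta pruning sketch of the kind of search both programs used.
# The three callables are hypothetical game-specific hooks; in Deep Blue the
# evaluation was the hand-designed (and machine-tuned) heuristic.

def alphabeta(state, depth, alpha, beta, maximizing,
              evaluate, legal_moves, apply):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alphabeta(apply(state, m), depth - 1,
                                       alpha, beta, False,
                                       evaluate, legal_moves, apply))
            alpha = max(alpha, best)
            if alpha >= beta:   # cutoff: opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for m in moves:
            best = min(best, alphabeta(apply(state, m), depth - 1,
                                       alpha, beta, True,
                                       evaluate, legal_moves, apply))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```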
I guess there are reasons why researchers build chess programs: it is easy to compare performance between algorithms. When you can solve chess, you can solve a whole class of decision-making problems. Consider it as the perfect lab.
What is that class of decision-making problems? It's nice to have a machine really good at playing chess, but it's not something I'd pay for. What decision-making problems are there, in the same class, that I'd pay for?
Consider it as the perfect lab.
Seems like a lab so simplified that I'm unconvinced of its general applicability. Perfect knowledge of the situation and a very limited set of valid moves at any one time.
Strongly disagree. There are a lot of approximation algorithms and heuristics in wide use - to the tune of trillions of dollars, in fact, when you consider transportation and logistics, things like ASIC place & route, etc. These are all intractable perfect-information problems that are so widespread and commercially important that they amplify the effect of even modest improvements.
Indeed, there are a few problems that you will be hard pressed to solve even with perfect information. But that is only a question of computational power, or an issue when the problem does not allow efficient approximation (not in APX or co-APX).
The thing is, an algorithm that can work with fewer samples while robustly tolerating mistakes in datasets (also known as imperfect information) will be vastly cheaper and easier to operate. Less tedious sample-data collection and labelling.
Working with lacking and erroneous information (without a known error value) is necessarily a crucial step towards AGI, as is extracting structure from such data.
This is the difference between an engineering problem and research problem.
Perhaps a unifying way of saying this is: it's a research problem to figure out how to get ML techniques to the point they outperform existing heuristics on "hard" problems. Doing so will result in engineering improvements to the specific systems that need approximate solutions to those problems.
I completely agree about the importance of imperfect information problems. In practice, many techniques handle some label noise, but not optimally. Even MNIST is much easier to solve if you remove the one incorrectly-labeled training example. (one! Which is barely noise. Though as a reassuring example from the classification domain, JFT is noisy and still results in better real world performance than just training on imagenet.)
> Perfect information problem solving is not interesting anymore.
I guess in the same way as lab chemistry isn't interesting anymore ? (Since it often happens in unrealistically clean equipment :-)
I think there is nothing preventing lab research from going on at the same time as industrialization of yesterday's results. Quite on the contrary: in the long run they often depend on each other.
Poker bots actually deal with a (simple) game of imperfect information. It is not the best test because short memory is sufficient to win at it.
The real challenge is to devise a general algorithm that will learn to be a good poker player within thousands of games, strategically, from just a bunch of games played. DeepStack required 10 million simulated games. Good human players outperform it at intermediate training stages.
And then the other part is figuring out actual rules of a harder game...
I think chess may actually be the worst lab. Decisions made in chess are done so with perfect knowledge of the current state and future possibilities. Most decisions are made without perfect knowledge.
This is not what the terminology "perfect knowledge" means. Perfect knowledge (more often called "perfect information") refers to games in which all parts of the game state are accessible to every player. In theory, any player in the game has access to all information contained in every game state up to the present and can extrapolate possible forward states. Chess is a very good example of a game of perfect information, because the two players can readily observe the entire board and each other's moves.
A good example of a game of imperfect information is poker, because players have a private hand which is known only to them. Whereas all possible future states of a chess game can be narrowed down according to the current game state, the fundamental uncertainty of poker means there is a combinatorial explosion involved in predicting future states. There's also the element of chance in poker, which further muddies the waters.
Board games are often (but not always) games of perfect and complete information. Card games are typically games of imperfect and complete information. This latter term, "complete information", means that even if not all of the game state is public, the intrinsic rules and structure of the game are public. Both chess and poker are complete, because we know the rules, win conditions and incentives for all players.
This is all to say that games of perfect information are relatively easy for a computer to win, while games of imperfect information are harder. And of course, games of incomplete information can be much more difficult :)
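A toy data-structure sketch of that distinction may help; the classes below are hypothetical illustrations, not any real game engine's API. In chess a player's observation is the entire state, while in poker it deliberately hides the opponents' hands.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: "perfect information" means observation == full state;
# "imperfect information" means each player sees only a slice of the state.

@dataclass
class ChessState:
    board: List[str]                 # full 8x8 board, visible to both sides

    def observation(self, player: int):
        return self.board            # perfect information: everyone sees everything

@dataclass
class PokerState:
    hands: List[List[str]]           # private hole cards per player
    community: List[str] = field(default_factory=list)

    def observation(self, player: int):
        # imperfect information: a player sees their own hand and the public
        # cards, but never the opponents' hands.
        return {"my_hand": self.hands[player], "community": self.community}
```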
A human might not be able to, but a computer can. Isn't the explicit reason research shifted to using Go the fact that you can't just number crunch your way through it?
AlphaGo Zero did precisely that. Most of its computations were done on a huge array of GPUs. The problem with Go is that look-ahead is more of a problem than in Chess, as Go has roughly between five and ten times as many possible moves at each point in the game. So Go was more of a challenge, and master-level play was only made possible by advances in computer hardware.
When you can solve chess, you can solve a whole class of decision-making problems
If this were true, there would be a vast demand for grandmasters in commerce, government, the military... and there just isn’t. Poker players suffer from similar delusions about how their game can be generalised to other domains.
> Poker players suffer from similar delusions about how their game can be generalised to other domains.
Oh that's so true
Poker players in real life would give up more often than not, whenever they didn't know enough about a situation or didn't have enough resources for a win with high probability.
Those traits seem to me like something most people desperately need... Everyone being confident in their assessment of everything seems like one of the major problems of today's population.
I think batmansmk doesn't mean "when X is good at chess, X is automatically good at lots of other things", but "the traits that make you a good chess player (given enough training) also make you good at lots of other things (given enough training)".
I might suspect (but certainly cannot prove) that the traits that make a human good at playing chess are very different from the traits that make a machine good at playing chess, and as such I don't think we can assume that the machine skilled-chess-player will be good at lots of other things in an analogous way to the human skilled-chess-player.
And Gaius's point stands against this argument as well: chess is seen as such a weak predictor that playing a game of chess or requesting an official Elo rating isn't used for hiring screening, for instance.
I suspect that chess as a metagame is just so far developed that being "good at chess" means your general ability is really overtrained for chess.
Second world chess champion Emanuel Lasker spent a couple years studying Go and by his own report was dejected by his progress. Maybe he would have eventually reached high levels, but I've always found this story fascinating.
True, but I'd phrase it the other way around. The traits that make you (a human) good at general problem solving are also the traits that make you a good chess player. I do suspect, though, that there are some Chess-specific traits which boost your Chess performance but don't help much with general intelligence. (Consider, for example, the fact that Bobby Fischer wasn't considered a genius outside of his chosen field.)
Tell me about it. The brightest minds are working on ads, and we have AI playing social games.
Can AI make the world better? It can, but it won't, since we are humans, and humans will weaponize technology every chance they get. Of course some positive uses will come, but the negative ones will be incredibly destructive.
Just because you haven't seen humongous publicity stunts involving practical uses of AI doesn't mean they aren't being deployed. My company uses similar methods to warn hospitals about patients with a high probability of imminent heart attacks and sepsis.
The practical uses of these technologies don't always make national news.
I'm sure you would also have scoffed at the "pointless, impractical, wasteful use of our brightest minds" to make the Flyer hang in the air for 30 yards at Kitty Hawk.
Exactly. To my not-very-well-informed self, even AlphaGo Zero is just a more clever way to brute-force board games.
Side observers are taking joy in the riskier plays that it made -- reminded them of certain grandmasters, I suppose -- but that still doesn't mean AGZ is close to any form of intelligence at all. Those "riskier moves" are probably just a way to more quickly reduce the problem space anyway.
It seriously reminds me more and more of religion, the AI area these days.
>Also it got better than humans in those games from scratch in about 4 hours whereas humans have had 2000 years to study them so you can forgive it some resource usage.
Most humans don't live 2000 years. And realistically they don't spend that much of their time or computing power on studying chess. Surely a computer can be more focused at this, and the 4h are impressive. But this comparison seems flawed to me.
You're right, though the distinction with the parent poster is that AlphaGo Zero had no input knowledge to learn from, unlike humans (who read books, listen to other players' wisdom, etc). It's a fairly well known phenomenon that e.g. current era chess players are far stronger than previous eras' players, and this probably has to do with the accumulation of knowledge over decades, or even hundreds of years. It's incredibly impressive for software to replicate that knowledge base so quickly.
Not so much from the accumulation of knowledge, because players can only study so many games. The difference is largely because there are more people today, they have more free time, and they can play against high-level opponents sooner.
Remember, people reach peak play in ~15 years, but they don't necessarily keep up with advances.
PS: You see this across a huge range of fields, from running and figure skating to music; people simply spend more time and resources getting better.
But software is starting from the same base. To claim it isn't would be to claim that the computers programmed themselves completely (which is simply not true).
Sure, there is some base there, and a fair bit of programming existed in the structure of the implementation. However, the heuristics themselves were not programmed in, and this is very significant. The software managed to reproduce and beat the previous best (both human and the previous iteration of itself) completely by playing against itself.
So, in this sense, it's kind of like taking a human, teaching them the exact rules of the game and showing them how to run calculations, and then telling them to sit in a room playing games against themselves. In my experience from chess, you'd be at a huge disadvantage if you started with this zero-knowledge handicap.
> In my experience from chess, you'd be at a huge disadvantage if you started with this zero-knowledge handicap.
One problem is that we can't play millions of games against ourselves in a few hours. We can play a few games, grow tired, and then need to go do something else. Come back the next day, repeat. It's a very slow process, and we have to worry about other things in life. How much of one's time and focus can be used on learning a game? You could spend 12 hours a day, if you had no other responsibilities, I guess. That might be counter productive, though. We just don't have the same capacity.
If you artificially limited AlphaGo to human capacity, then my money would be on the human being a superior player.
All software starts with a base of 4 billion years of evolution and thousands of years of social progress and so on. But AlphaZero doesn't require knowledge of Go on top of that.
Stockfish is not designed to scale to supercomputing clusters or TPUs, and AlphaZero wasn't designed to account for how long it takes to make a move, so a fair fight was hard to arrange.
There's discussion here https://chess.stackexchange.com/questions/19366/hardware-use...
AlphaZero's hardware was faster, and Stockfish was a year-old version with non-optimal settings. It was still an impressive win, but it would be interesting to do it again on a more level playing field.
> Also it got better than humans in those games from scratch in about 4 hours whereas humans have had 2000 years to study them so you can forgive it some resource usage.
Few would care. Your examiner doesn't give you extra marks on a given problem for finishing your homework quickly.
Just because alpha zero doesn't solve the problem you want it to doesn't mean that advancements aren't being made that matter to someone else. To ignore that seems disingenuous.
I'm sure the same could be said for early computer graphics before the GPU race. You don't need Moore's Law to make machine learning fast, you can also do it with hardware tailored to the task. Look at Google's TPUs for an example of this.
If you want an idea of where machine learning is in the scheme of things, the best thing to do is listen to the experts. _None_ of them have promised wild general intelligence any time soon. All of them have said "this is just the beginning, it's a long process." Science is incremental and machine learning is no different in that regard.
You'll continue to see incremental progress in the field, with occasional demonstrations and applications that make you go "wow". But most of the advances will be of interest to academics, not the general public. That in no way makes them less valuable.
The field of ML/AI produces useful technologies with many real applications. Funding for this basic science isn't going away. The media will eventually tire of the AI hype once the "wow" factor of these new technologies wears off. Maybe the goal posts will move again and suddenly all the current technology won't be called "AI" anymore, but it will still be funded and the science will still advance.
It's not the exciting prediction you were looking for I'm sure, but a boring realistic one.
> Funding for this basic science isn't going away.
What makes this 3rd/4th boom in AI different?
In the previous AI winters, funding for this science went from plentiful to very little.
I'm skeptical of your statement, with respect of course, because it doesn't have anything to back it up other than that it produces useful technologies. Wouldn't this imply that the previous waves of AI that experienced a winter (expert systems, and whatever else) didn't produce useful enough technologies to keep their funding?
I'm currently in the camp that there is an AI Winter III coming.
> _None_ of them have promised wild general intelligence any time soon.
The post talks about Andrew Ng's wild expectations on other things, such as the radiologist tweet. While that's not wild general intelligence, what the main article points to, and what I'm also thinking of, is the outrageous speculation. Another one is Tesla's self-driving: it doesn't seem to be there yet, and perhaps we're hitting the point of over-promising like we did in the past, and then an AI winter happens because we've found the limit.
The previous AI winters were funded by speculative investments (both public research and industry) with the expectation that this might result in profitable technologies. And this didn't happen - yes, "the other previous AI which experience AI Winter (expert system, and whatever else) didn't produce useful enough technologies to have funding", the technologies developed didn't work sufficiently well to have widespread adoption in the industry; there were some use cases but the conclusion was "useful in theory but not in practice".
The current difference is that the technologies are actually useful right now. It's not about promised or expected technologies of tomorrow, but about what we have already researched: known capabilities that need implementation, adoption, and lots of development work to apply them in lots and lots of particular use cases. Even if the core research hits a dead end tomorrow and stops producing any meaningful progress for the next 10 or 20 years, the obvious applications of neural-networks-as-we're-teaching-them-in-2018 work sufficiently well, and are useful enough, to deploy in all kinds of industrial applications. The demand is sufficient to employ every current ML practitioner and student even in the absence of basic research funding, so a slump is not plausible.
I've recently had a number of calls from recruiters about new startups in the UK in the AI space, some of them local and some of them extensions of US companies. Some of them were clearly less speculative (tracking shipping and footfall for hedge funds) while others were certainly more speculative sounding. The increase of the latter gives me the impression that there is a bit of speculation going on at the moment.
A lot of this is because there is a somewhat mis-informed (which we will be polite and not call 'gullible') class of investors out there, primarily in the VC world, that thinks that most AI is magic pixie dust and so 'we will use AI/DL' and 'we will do it on the blockchain' has become the most recent version of 'we will do it on the web' in terms of helping get funding. Most of these ventures will flame out in 6-12 months and the consequences of this are going to be the source of the upcoming AI winter OP was talking about.
Strangely enough, he didn't speak at all about Waymo self-driving cars that are already hauling passengers without a safety driver. Given that he needs to hide the facts that go against his narrative, I don't really think that what he is convinced of will become reality.
In a very confined area. He mentions similar issues with Tesla's coast-to-coast autopilot ride: The software is not general enough yet to handle it. That seems to be the case for Waymo as well.
And how is this a failure of AI?
The most optimistic predictions for when we would see autonomous cars were in the 2020s.
Instead we have autonomous cars hauling people on the streets without any safety driver since 2017. And if everything goes according to their plan, they will launch a commercial service by the end of the year in several US cities.
To me it seems a resounding success, not a failure.
> The most optimistic predictions for when we would see autonomous cars were in the 2020s.
Sure, keep moving timelines. It's what makes you money in the area. I am sure when around mid-2019 hits, it will suddenly be "most experts agree that the first feasible self-driving cars will arrive circa 2025".
> BUT it cost petaflops upon petaflops of compute, expensive enough that they released a total of only ten or twenty AlphaGo Zero games to the world
Training is expensive, but inference is cheap enough for AlphaZero-inspired bots to beat human professionals while running on consumer hardware. DeepMind could have released thousands of pro-level games if they wanted to, and others have: http://zero.sjeng.org/
I am 100% in agreement with the author on the thesis: deep learning is overhyped and people project too much.
But the content of the post is in itself not enough to advocate for this position. It is guilty of the same sins: projection and following social noises.
The point about increasing compute power however, I found rather strong. New advances came at a high compute cost. Although it could be said that research often advances like that: new methods are found and then made efficient and (more) economical.
A much stronger rebuttal of the hype would have been based on the technical limitations of deep learning.
> A much stronger rebuttal of the hype would have been based on the technical limitations of deep learning.
I'm not even sure how you'd go about doing that. You could use information theory to debunk some of the more ludicrous claims, especially ones that involve creating "missing" information.
One of the things that disappoints me somewhat with the field, which I've arguably only scratched the surface of, is just how much of it is driven by headline results which fail to develop understanding. A lot of the theory seems to be retrofitted to explain the relatively narrow result improvement and seems only to develop the art of technical bullshitting.
There are obvious exceptions to this and they tend to be the papers that do advance the field. With a relatively shallow resnet it's possible to achieve 99.7% on MNIST and 93% on CIFAR10 on a last-gen mid-range GPU with almost no understanding of what is actually happening.
There's also low-hanging fruit that seems to have been left on the tree. Take OpenAI's paper on parametrization of weights, so that you have a normalized direction vector and a scalar. This makes intuitive sense for anybody familiar with high-dimensional spaces since nearly all of the volume of a hypersphere lies around the surface. That this works in practice is great news, but leaves many questions unanswered.
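For concreteness, the reparametrization being described looks roughly like this: instead of learning a raw weight vector w, learn a direction v and a scalar g with w = g * v / ||v||. This is only a sketch of the general weight-normalization idea; the paper's training details differ.

```python
import numpy as np

# Rough sketch of the reparametrization described above (general idea only):
# the norm of w is controlled entirely by the scalar g, decoupled from the
# direction v, which matters because almost all of a hypersphere's volume
# lies near its surface in high dimensions.

def weight_normalized(v, g):
    return g * v / np.linalg.norm(v)

rng = np.random.default_rng(0)
v = rng.normal(size=256)     # direction parameter
g = 1.7                      # learned scale
w = weight_normalized(v, g)

print(np.linalg.norm(w))     # exactly |g|, independent of v's magnitude
```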
I'm not even sure how many practitioners are thinking in high-dimensional spaces or are aware of their properties. It feels like we get to the universal approximation theorem, accept that as evidence that networks will work well anywhere, and then just follow whatever the currently recognised state-of-the-art model is and adapt it to our purposes.
> A much stronger rebuttal of the hype would have been based on the technical limitations of deep learning.
Who's to say we won't improve this though? Right now, nets add a bunch of numbers and apply arbitrarily-picked limiting functions and arbitrarily-picked structures. Is it impossible that we find a way to train that is orders of magnitude more effective?
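To be concrete about what "add a bunch of numbers and apply limiting functions" means: a layer is essentially a weighted sum followed by a squashing nonlinearity. A toy NumPy sketch follows; the shapes and the choice of ReLU are arbitrary, which is rather the commenter's point.

```python
import numpy as np

# A layer = weighted sum + squashing nonlinearity. The particular nonlinearity
# and layer structure are conventional choices, not derived from first
# principles - hence the hope that better-founded training schemes exist.

def layer(x, W, b):
    return np.maximum(0.0, W @ x + b)     # ReLU: one arbitrary-but-popular choice

rng = np.random.default_rng(0)
x = rng.normal(size=16)
W1, b1 = rng.normal(size=(32, 16)), np.zeros(32)
W2, b2 = rng.normal(size=(4, 32)), np.zeros(4)

out = layer(layer(x, W1, b1), W2, b2)     # two rounds of sum-and-squash
print(out)
```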
To me, it's a bit like the question "Who's to say we wont find a way to travel faster than the speed of light?", by which I mean that in theory, many things are possible, but in practice, you need evidence to consider things likely.
Currently, people are projecting and saying that we are going to see huge AI advances soon. On what basis are these claims made? Showing fundamental limitations of deep learning is showing we have no idea how to get there. No idea how to get there yet, indeed - just as we have no idea how to do time travel yet.
Overhyped? There are cars driving around Arizona without safety drivers as I type this.
The end result of this advancement to our world is earth shattering.
On the high compute cost: there is an aspect of that being true, but we have also seen advances in silicon to support it. Look at WaveNet, which needs on the order of 16k passes through a DNN per second of audio yet is offered at scale and at a competitive price - that kind of proves the point.
The brain most likely has much more than a petaflop of computing power and it takes at least a decade to train a human brain to achieve the grandmaster level on an advanced board game. In addition, as the other comment says, they learn from hundreds or thousands of years of knowledge that other humans have accumulated and still lose to AlphaZero with mere hours of training.
Current AIs have limitations but, at the tasks they are suited for, they can equal or exceed humans with years of experience. Computing power is not the key limit since it will be made cheaper over time. More importantly, new advances are still being made regularly by DeepMind, OpenAI, and other teams.
Sure, but have you heard about Moravec's paradox? And if so, don't you find it curious that over the 30 years of Moore's-law exponential progress in computing, almost nothing has improved on that side of things, and we just kept playing fancier games?
Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.
What do you think of recent papers and demos by teams from Google Brain, OpenAI, and Pieter Abbeel's group on using simulations to help train physical robots? Recent advances are quite an improvement over those from the past.
I'm skeptical, and side with Rodney Brooks on this one. First, reinforcement learning is incredibly inefficient. And sure, humans and animals have forms of reinforcement learning, but my hunch is that it works on an already incredibly semantically relevant representation and utilizes the forward model. That model is generated by unsupervised learning (which is way more data efficient). Actually I side with Yann LeCun on this one; see some of his recent talks. But Yann is not a robotics guy, so I don't think he fully appreciates the role of a forward model.
Now, using models for RL is the obvious choice, since trying to teach a robot a basic behavior with RL alone is just absurdly impractical. But the problem here is that when somebody builds that model (a 3D simulation), they put in a bunch of stuff they think is relevant to represent reality. And that is the same trap as labeling a dataset: we only put in the stuff which is symbolically relevant to us, omitting a bunch of low-level things we never even perceive.
This is a longer subject, and HN is not the place to cover it, but there is also something about complexity. Reality is not just more complicated than simulation, it is complex, with all the consequences of that. Every attempt to put a human-filtered input between the AI and the world will inherently lose that complexity, and ultimately the AI will not be able to immunize itself to it.
This is not an easy subject and if you read my entire blog you may get the gist of it, but I have not yet succeeded in verbalizing it concisely to my satisfaction.
I was thinking just that when reading the paragraphs about the Uber accident.
There's absolutely nothing indicating that future progress is not possible, precisely because of how absurd it seems right now.
In retrospect it might seem that the Japanese were partially right in pursuing "high performance" computing with their fifth generation project [1], but the AlphaZero results are impressive beyond the computing performance achieved. It was a necessary element but not the only one.
We very well might be in a deep-learning 'bubble' and the end of a cycle... but I don't think this time around it's really the end for a long-while, but more likely a pivot point.
The biggest minds everywhere are working on AI solutions, and there's also a lot going on in medicine/science to map brains; if we can merge neuroscience with computer science we might have more luck with AI in the future...
So we could have a drought for a year or two, but there will be more research, and more breakthroughs. This won't be like the AI winters of the past where the field lay dormant for 10+ years, I don't think.
Moore's law (or at least, the diminishing one) is not relevant here, because these are not single-threaded programs. Google got roughly 8x from their TPUv2 -> v3 upgrade; parallel matrix multiplies at reduced precision are a long way away from any theoretical limits, as I understand it.
The first generation TPUs used 65536 very simple cores.
In the end you have only so many transistors you can fit, and there are options for how to arrange and use them.
You might support very complex instructions and data types and then have four cores. Or you might support only 8-bit ints and very, very simple instructions, and use 65536 cores.
In the end what matters is the joules to get something done.
We can clearly see that we have big improvements by using new processor architectures.
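As a toy illustration of the 8-bit trade-off mentioned above (a sketch only; real accelerators are far more sophisticated), quantizing to int8 costs a little precision but makes each multiply-accumulate far cheaper per joule, which is why you can afford so many more of them.

```python
import numpy as np

# Toy sketch of the int8 trade-off: quantize, do the multiply-accumulates as
# integers (cheap in silicon), then rescale back to floats. Compare against
# the full-precision result to see the small error introduced.

def quantize(x):
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)
x = rng.normal(size=64).astype(np.float32)

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

y_int8 = (Wq.astype(np.int32) @ xq.astype(np.int32)) * (w_scale * x_scale)
y_fp32 = W @ x

print(np.max(np.abs(y_int8 - y_fp32)))   # small error, much cheaper arithmetic
```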
[1] https://en.wikipedia.org/wiki/Go_ranks_and_ratings