I went through Stanford CS when those guys were in charge. It was starting to become clear that the emperor had no clothes, but most of the CS faculty was unwilling to admit it.
It was really discouraging. Peak hype was "The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World" (1983), by Feigenbaum and McCorduck. (Japan in the 1980s ran the Fifth Generation Computer Systems project, which attempted to build hardware to run Prolog fast.)
Trying to use expert systems for medicine lent an appearance of importance to something that might work for auto repair manuals. It's mostly a mechanization of trouble-shooting charts.
It's not totally useless, but you get out pretty much what you carefully put in.
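The "mechanization of trouble-shooting charts" point can be made concrete with a toy forward-chaining rule matcher; everything here (the rule table, the symptom names) is hypothetical, just to show that what comes out is exactly what was put in:

```python
# Toy rule-based diagnostic in the style of a trouble-shooting chart.
# Each rule pairs a set of required observations with a conclusion;
# all rules and symptom names below are made up for illustration.
RULES = [
    ({"engine_cranks": False, "lights_dim": True}, "dead_battery"),
    ({"engine_cranks": True, "fuel_gauge_empty": True}, "out_of_fuel"),
    ({"engine_cranks": True, "fuel_gauge_empty": False}, "check_spark_plugs"),
]

def diagnose(observations):
    """Return the first conclusion whose conditions all match,
    like walking down the branches of a printed chart."""
    for conditions, conclusion in RULES:
        if all(observations.get(key) == value for key, value in conditions.items()):
            return conclusion
    return "no_rule_matched"

result = diagnose({"engine_cranks": False, "lights_dim": True})
```

A case no rule author anticipated falls straight through to `no_rule_matched`, which is the "you get out what you carefully put in" problem in miniature.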
Not exactly an expert system, but during my PhD I contributed to a natural-language parsing/generation system for Dutch, written mostly in Prolog with some C++ for performance. The only statistical component was a maximum-entropy (maxent) ranker for disambiguation and fluency ranking.
No statistical dependency parser came near it accuracy-wise until BERT/RoBERTa + biaffine parsing.
Oh yeah, the good hand-crafted grammars are really good. For my PhD I worked in a group that was deep in the DELPH-IN/ERG collaboration, and they did some amazing things with that.
To be fair, the performance of rules, Bayesian networks, or statistical models wasn't the problem (relative to existing practice). De Dombal showed in 1972 that a simple Bayes model was better than most ED physicians at triaging abdominal pain.
The main barrier to scaling was workflow integration due to the lack of electronic data and, where data did exist, poor interoperability (still a problem today). The other barriers were maintenance and performance monitoring, which remain issues today in healthcare and other industries.
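The kind of simple Bayes model de Dombal used can be sketched in a few lines; the priors, likelihoods, and symptom names below are illustrative toy numbers, not his actual data:

```python
# Minimal naive-Bayes sketch in the spirit of de Dombal's abdominal-pain
# system. Assumes conditional independence of symptoms given the diagnosis.
# All probabilities below are hypothetical, for illustration only.
import math

PRIORS = {"appendicitis": 0.3, "nonspecific_pain": 0.7}
LIKELIHOODS = {  # P(symptom | diagnosis)
    "appendicitis": {"rlq_pain": 0.8, "nausea": 0.6, "fever": 0.5},
    "nonspecific_pain": {"rlq_pain": 0.2, "nausea": 0.3, "fever": 0.1},
}

def posterior(symptoms):
    """Return normalized P(diagnosis | symptoms) under naive Bayes."""
    scores = {}
    for dx, prior in PRIORS.items():
        log_p = math.log(prior)  # work in log space to avoid underflow
        for s in symptoms:
            log_p += math.log(LIKELIHOODS[dx][s])
        scores[dx] = math.exp(log_p)
    total = sum(scores.values())
    return {dx: p / total for dx, p in scores.items()}

probs = posterior(["rlq_pain", "fever"])
```

The model side really is this small; as noted above, the hard part was never the arithmetic but getting the inputs out of paper charts and the outputs back into the clinical workflow.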
I do agree the Fifth Generation project never made sense, but as you point out, they had developed hardware to accelerate Prolog, wanted to show it off, and overused the tech. Hmmm, sounds familiar...
The paper of Ueda they cite is so lovely to read, full of marvelous ideas:
Ueda K. Logic/Constraint Programming and Concurrency: The hard-won lessons of the Fifth Generation Computer project. Science of Computer Programming. 2018;164:3-17. doi:10.1016/j.scico.2017.06.002 open access: https://linkinghub.elsevier.com/retrieve/pii/S01676423173012...
The early history of AI/cybernetics seems poorly documented. There are a few books, some articles, and some oral histories about what was going on with McCulloch and Pitts. It makes one wonder what might have been with a lot of things: if Pitts had lived longer, had been able to get out of the rut he found himself in at the end (to put it mildly), and hadn't burned his PhD dissertation. But perhaps one of the more interesting comments directly relevant to all this lies in this fragment from a "New Scientist" article[1]:
> Worse, it seems other researchers deliberately stayed away. John McCarthy, who coined the term “artificial intelligence”, told Piccinini that when he and fellow AI founder Marvin Minsky got started, they chose to do their own thing rather than follow McCulloch because they didn’t want to be subsumed into his orbit.
> The early history of AI/cybernetics seems poorly documented.
I guess it depends on what you mean by "documented". If you're talking about a historical retrospective, written after the fact by a documentarian / historian, then you're probably correct.
But in terms of primary sources, I'd say it's fairly well documented. A lot of the original documents related to the earlier days of AI are readily available[1]. And there are at least a few books from years ago that provide a sort of overview of the field at that moment in time. In aggregate, they provide at least a moderate coverage of the history of the field.
Consider also that the term "History of Artificial Intelligence" has its own Wikipedia page[2], which strikes me as reasonably comprehensive.
[1]: Here I refer to things like MIT CSAIL "AI Memo series"[3] and related[4][5], the Proceedings of the International Joint Conference on AI[6], the CMU AI Repository[7], etc.