Prolog, based on logic programming rather than imperative programming or FP, is another relevant example here. Prolog can be very concise and clear for a specific set of problems, but reasoning about the time complexity of an algorithm implemented in it is often difficult at best.
Sorry, apparently there was a "true" implementation in Haskell in one of the answers, though I am not sure how Haskell's lazy evaluation is handled there.
I think you are correct, perhaps even being too careful in your claims.
I think functional and imperative programming, in practice, tend to have different kinds of constraints: different things are easy to reason about in each, and they encourage different reasoning methods, approaches, and ways of proving properties about programs.
I think that for learning functional programming, going all-in on purity, never using side effects or mutation, can be useful. But for practical programming, functional programming is not a virtue in and of itself. Outside of a learning context, one should always consider the trade-offs of the specific case one is facing. If a project has settled on purely functional programming as its chosen trade-off, then continuing with pure FP may be fine; but even in such a case, a function that is purely functional from the outside but uses imperative programming internally, for instance for the sake of optimization, can make sense.
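As a small sketch of that last point (in Rust, with an invented helper name, `median`): the function below is referentially transparent from the caller's perspective, but uses local mutation and in-place sorting internally.

```rust
// Externally pure: same input always yields the same output, and the
// caller's data is never mutated. Internally imperative: a local clone
// is sorted in place for efficiency.
fn median(values: &[i32]) -> Option<i32> {
    if values.is_empty() {
        return None;
    }
    // Local mutation, invisible to the caller.
    let mut sorted = values.to_vec();
    sorted.sort_unstable();
    Some(sorted[sorted.len() / 2])
}

fn main() {
    let data = vec![3, 1, 2];
    assert_eq!(median(&data), Some(2));
    assert_eq!(data, vec![3, 1, 2]); // the input is untouched
    println!("ok");
}
```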
There are different advantages and disadvantages to pure FP, to imperative programming, and to various kinds of mixes of the two.
Overall, my rough opinion is that it does not make sense to be dogmatic about functional programming, except perhaps while learning FP.
https://stackoverflow.com/questions/7717691/why-is-the-minim... is arguably a relevant example, though a proper in-place implementation in Haskell might be fairer for comparison. I am not certain whether Haskell's lazy evaluation would help or hinder the analysis; OCaml or SML might be better languages for implementing and analysing an in-place variant.
While I only watched 25-50% of the linked talk by Casey Muratori, spread out here and there, and fast-forwarded through the rest, I did not like his talk, and that reflects on this blog post as well. Casey Muratori obviously spent a lot of time on it, but programming and computer science is a huge field, and it is possible to spend a lifetime on even a part of one aspect of it.
In that talk, Casey Muratori refers to Simula; a PDF of it can be found at https://www.mn.uio.no/tjenester/it/hjelp/programvare/simula/... . You may want to run an OCR tool on that PDF; ocrmypdf, for instance, is available in a number of Linux distributions. I am not sure whether it is the same version of Simula as the one being discussed, but it does have the "connection" statement, at PDF page 56, whose first syntactic element is "inspect". That does look vaguely similar to the pattern matching of ML, but AFAICT it does not support a number of significant features that many love about modern pattern matching, such as nested patterns. Does it have field bindings as part of a pattern, and matching against specific constant values, or only matching on the class type? I am not sure whether it supports exhaustiveness checking. Does it mandate a finite number of possibilities, to help exhaustiveness checking? The "connection" statement also has two variants. AFAICT, it is a primitive enough abstraction that one can get close to its functionality in C++ with "switch" together with a type cast, and it is a far cry from what Standard ML (later?) supported. In that light, it might not be surprising that it was not included in C++.
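As a rough sketch of the modern features in question, here is an invented example in Rust showing nested patterns, matching against a specific constant value, and compiler-checked exhaustiveness:

```rust
// Nested patterns reach two levels deep in a single arm, a constant (0)
// is matched directly inside a constructor, and omitting any case would
// be a compile error because the compiler checks exhaustiveness.
fn depth(v: Option<Option<i32>>) -> &'static str {
    match v {
        Some(Some(0)) => "inner zero",  // nested pattern + constant value
        Some(Some(_)) => "inner value",
        Some(None) => "inner none",
        None => "none",
    }
}

fn main() {
    assert_eq!(depth(Some(Some(0))), "inner zero");
    assert_eq!(depth(Some(None)), "inner none");
    assert_eq!(depth(None), "none");
    println!("ok");
}
```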
> Changing the order of clauses does not change the meaning of the program, because Hope's pattern matching always favors more specific patterns over less specific ones.
This is different from modern pattern matching, where clause order (AFAIK generally, across modern languages) does matter.
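For instance, in Rust (and similarly in most modern languages) match arms are tried top to bottom and the first matching arm wins, so reordering clauses changes the meaning, unlike Hope's most-specific-first rule:

```rust
// The specific arm must come before the wildcard. If the two arms were
// swapped, the wildcard would also match 0 and the `0` arm would become
// unreachable (the compiler warns about this).
fn classify(n: u32) -> &'static str {
    match n {
        0 => "zero",
        _ => "nonzero",
    }
}

fn main() {
    assert_eq!(classify(0), "zero");
    assert_eq!(classify(7), "nonzero");
    println!("ok");
}
```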
I am not sure that Casey Muratori did a good job of researching this topic, but I am also not sure how much I can fault him, since the topic is complex and huge and may require a lot of research. Researching the history of programming languages is difficult, since it requires both a high technical level and a focus on history. One could probably fill several full-time university positions just researching, documenting and describing the history of programming languages. And the topic is a moving target, with such professionals having to have a good understanding of multiple languages and of programming language theory in general, and preferably also some general professional software development experience.
All in all, the data types and pattern matching of the 1970s might be extremely different from the discriminated unions and pattern matching of the 1990s. C++ also does not have garbage collection, which complicates the issue. Rust, for instance, which also lacks garbage collection, has different binding modes for the bindings in pattern matches.
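A sketch of what those binding modes look like (the function names here are invented for illustration): matching through a reference makes bindings borrow by default (Rust's "match ergonomics"), while matching by value moves the bound data out.

```rust
// Matching through `&Option<String>`: the default binding mode is by
// reference, so `s` is a `&String` and nothing is moved out of `v`.
fn name_len(v: &Option<String>) -> usize {
    match v {
        Some(s) => s.len(),
        None => 0,
    }
}

// Matching by value: `s` is moved out of the Option.
fn into_name(v: Option<String>) -> String {
    match v {
        Some(s) => s,
        None => String::new(),
    }
}

fn main() {
    let v = Some(String::from("hello"));
    assert_eq!(name_len(&v), 5);
    // `v` is still usable here, since `name_len` only borrowed it.
    assert_eq!(into_name(v), "hello");
    println!("ok");
}
```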
It is important to note that subtyping and inheritance are different. And even FP languages can use subtyping.
I think both Casey Muratori and Graydon Hoare (if he has not already read it) could be interested in reading the book Types and Programming Languages, even though that book is old by now and may not contain a lot of newer advancements and theory. I also think that Casey Muratori could have benefited (in regards to this talk, at least) from learning and using Scala and its sealed traits for pattern matching; if I recall correctly, one of Scala's objectives was to attempt to unify OOP and FP. I do agree that OOP can be abused, and personally I am lukewarm on inheritance, especially as direct modelling of a domain, as discussed in the talk, without deeper thought about whether such an approach is good relative to other options and trade-offs. But subtyping, as well as objects that can be used as a kind of "mini-module", is typically more appealing than inheritance IMO. "Namespacing" via objects is also popular.
Some theory and terminology also discuss "open" and "closed" types.
And, after all, Haskell has type classes, which are not OOP but are relevant for ad-hoc polymorphism (is Casey Muratori familiar with type classes or ad-hoc polymorphism?). Rust has traits, not quite the same as type classes but related, and Scala has various kinds of implicits in this area. Rust also has "dyn" traits, not so commonly used, but available.
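A small sketch of that distinction (the trait and types here are invented): a plain trait bound gives compile-time, monomorphized dispatch, roughly in the spirit of a type-class constraint, while a `dyn` trait object dispatches at run time through a vtable.

```rust
trait Describe {
    fn describe(&self) -> String;
}

struct Point { x: i32, y: i32 }
struct Circle { r: i32 }

impl Describe for Point {
    fn describe(&self) -> String { format!("point ({}, {})", self.x, self.y) }
}
impl Describe for Circle {
    fn describe(&self) -> String { format!("circle r={}", self.r) }
}

// Static dispatch: monomorphized per concrete type, like a constraint.
fn show_static<T: Describe>(t: &T) -> String { t.describe() }

// Dynamic dispatch through a trait object.
fn show_dyn(t: &dyn Describe) -> String { t.describe() }

fn main() {
    // `dyn` allows heterogeneous collections of anything implementing
    // the trait, at the cost of a vtable indirection.
    let shapes: Vec<Box<dyn Describe>> =
        vec![Box::new(Point { x: 1, y: 2 }), Box::new(Circle { r: 3 })];
    for s in &shapes {
        println!("{}", show_dyn(s.as_ref()));
    }
    assert_eq!(show_static(&Point { x: 1, y: 2 }), "point (1, 2)");
    assert_eq!(show_dyn(&Circle { r: 3 }), "circle r=3");
}
```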
> What do they mean by "n+k patterns"? I guess it's the second line, but I don't get what might be wrong with it. Could anyone explain what is the issue there? Why aren't these n + k patterns allowed any more in Haskell 2010?
HN is a censorship haven and all, but I'd like to point out just one thing:
>Compare this with languages like Zig, Rust, and Python that have 1 compiler and doesn't have any of the problems of C++ in terms of interop and not having dialects.
For Python, this is straight up just wrong.
Major implementations: CPython, PyPy, Stackless Python, MicroPython, CircuitPython, IronPython, Jython.