Depending on your background, learning either will be beneficial in that you'll learn about a C-like memory model. My experience is that Go is way easier to get started with (both in terms of the language and the libraries). It has a certain concurrency model baked into the language (which in Rust is available in the standard library). It's kind of a lower-level Python. Also, I'd say that if you're not familiar with either language, Go programs are probably easier to read, which can make a difference if you're not working alone on a project.
Rust takes a bit more effort before you get initial results, but once you get there, you work in a language that gives you expressive ways to create abstractions (this is a matter of taste, maybe), generally generates faster code, and does not incur garbage-collection overhead. There are also elegant ways to write asynchronous code.
The price is that you spend more time learning about structuring programs in a way that the compiler accepts. Once you are past that point, you'll be at least as productive as in Go.
I used to dabble with Go and liked it, but once I had to write more code, I found it more tedious compared to Rust. But as I said, Go is more readable to the uninitiated; that was one of its design goals.
Rust is also poison for the mind. Sweet, sweet poison.
I’m currently back to Python and losing my mind over error handling, as well as enforcing correct usage of my library API at the call site. Doing these things well (best?) is baked into Rust’s DNA (Result, Option, the newtype pattern, the typestate pattern, …). It’s painful to go without once you’ve seen the light.
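For what it's worth, a couple of those patterns can be approximated in Python. Here is a minimal sketch (all names here are my own, not anything standard) of a Result-style sum type with dataclasses plus the newtype pattern via `typing.NewType`, so mypy flags a bare `int` where a `UserId` is expected:

```python
from dataclasses import dataclass
from typing import Generic, NewType, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err(Generic[E]):
    error: E

# A poor man's Result<T, E>: callers must branch on the variant,
# there is no exception to forget to catch.
Result = Union[Ok[T], Err[E]]

# Newtype pattern: UserId is a distinct type to mypy, so passing a
# plain int where a UserId is expected is flagged statically.
UserId = NewType("UserId", int)

def find_user(user_id: UserId) -> Result[str, str]:
    users = {UserId(1): "alice"}
    if user_id in users:
        return Ok(users[user_id])
    return Err(f"no user with id {user_id}")

res = find_user(UserId(1))
if isinstance(res, Ok):
    print(res.value)   # alice
else:
    print(res.error)
```

It's nowhere near exhaustive `match` checking in Rust, but it at least makes the error path visible in the signature.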
I love pydantic and use it whenever I can, but that decorator seems a bit misguided, no? It requires type hints anyway, at which point mypy would catch type mismatches statically, before runtime. That would seem strictly more useful. Maybe I’m missing an aspect of what the decorator solves, though.
Pydantic enforces types when you're working with Pydantic objects. However, it doesn't help with functions that accept vanilla types as parameters. `validate_arguments` checks that the caller's arguments match the function's type hints.
This is helpful because a static analyzer like mypy won't catch the wrong type being passed in all situations.
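To illustrate the point without depending on pydantic itself, here is a toy, stdlib-only sketch of the idea (pydantic's real decorator also does coercion and much more; the names below are mine): a decorator that checks arguments against the annotations at call time, catching a wrong type that a dynamically built call hides from mypy.

```python
import functools
import inspect
from typing import get_type_hints

def check_arguments(func):
    """Toy version of the validate_arguments idea: raise TypeError at
    call time when an argument does not match its annotated type."""
    sig = inspect.signature(func)
    hints = get_type_hints(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(
                    f"{name} should be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

@check_arguments
def repeat(text: str, times: int) -> str:
    return text * times

print(repeat("ab", 3))      # ababab

# mypy cannot see through a dynamically built call like this,
# but the runtime check still fires:
args = ["ab", "3"]          # wrong type slips past static analysis
try:
    repeat(*args)
except TypeError as e:
    print(e)                # times should be int, got str
```

The `*args` unpacking is exactly the kind of call site where static analysis gives up and a runtime check earns its keep.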
I have to apologize, I misread the context of "people whining..", which was in fact about those who don't even use the language. If this was not intended to be aggressive, sorry.
Funny though that I did get triggered by it. Out of the Go community I've heard way too often "you don't really need xyz", when they mean "we're not going to support xyz, here's why, and if you disagree, we respectfully ask you to look elsewhere".
To some extent yes, it's certainly influenced by Smalltalk. I just wish the author hadn't focused that much on control flow in that paragraph.
Ruby afaik also supports live programming up to a point (REPL), but it's still not the same liveness as in Self or Smalltalk.
The author's notion of OO is intentionally narrow. I don't know about Simula, but C++ is still reasonably close to Algol, compared to what Smalltalk and Self bring to the table (the author also mentions programming environments vs. text files).
If you accept that narrow definition at least for the scope of the article, it makes sense.
It's exceptionally narrow, in that it rules out the vast majority of languages that even purists would agree are OO.
E.g. in Ruby you cannot take the value of anything and get anything but an object (integers are objects, true is an object, nil is an object), yet Ruby is not an OO language by the article's definition because it fails the part about conditionals.
Even though you can do this in Ruby (probably buggy, just threw it together) - it's just not idiomatic and the language has syntactic sugar for "less OO" forms:
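The original Ruby snippet isn't reproduced here, but the idea translates to any language with dynamic dispatch. As a rough Python sketch of the same Smalltalk-style trick (my own names, purely illustrative): branching becomes a message sent to a boolean object, with each boolean class supplying its own answer, so no `if` appears in the dispatching code at all.

```python
class STrue:
    # The "true" object answers the message by running the first block.
    def if_true_if_false(self, then_block, else_block):
        return then_block()

class SFalse:
    # The "false" object runs the second block; no `if` in sight here.
    def if_true_if_false(self, then_block, else_block):
        return else_block()

def st_bool(value):
    # Bootstrap from a native truth value into the object world.
    return STrue() if value else SFalse()

# Callers pass thunks so only the chosen branch is evaluated,
# mirroring Smalltalk's ifTrue:ifFalse: blocks.
parity = st_bool(4 % 2 == 0).if_true_if_false(
    lambda: "even",
    lambda: "odd",
)
print(parity)   # even
```

Which only underlines the point: it works, but nobody would call it idiomatic outside Smalltalk or Self.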
So it might make sense if you accept that narrow definition, but I don't think many people will find that narrow definition to make sense. I certainly don't.
Simula itself was almost a superset of Algol-60. It's pretty much Algol-60 cleaned up a little bit and with a Java-like object model bolted on top. C++ is a direct descendant of that, sometimes even syntactically - e.g. the keywords "class", "new", and "virtual" all come from Simula where they had largely the same meaning. It's close enough to our mainstream OO languages today that Simula code can be easily understood, at least so long as it doesn't use the async features:
TIL that the Clojure rationale doesn't mention or motivate lazy sequences, which are all over the place in Clojure.
That laziness (in the collection APIs, duh) was a major reason I stopped using the language after 1 or 2 years of toying around. Maybe the concept was too foreign for me, but objectively lazy seqs don't compose with other language features like dynamic scoping.
Yet I'm happy I tried it, learned a bunch of stuff along the way.
They have their uses. I went through a similar process as you did, and these days I take care to realize lazy sequences in most places (for example using `into` or `mapv` instead of `map`), mostly to get localized exceptions in case they happen. But I am also very happy to have lazy sequences when I need them (transducer pipelines with data that doesn't fit in memory).
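Not Clojure, but Python generators show the same failure mode in miniature, which may make the "localized exceptions" point concrete for readers who don't know Clojure: a lazy pipeline defers any error until someone finally consumes it, far from the code that built it (and likewise it may run after whatever dynamic context surrounded its creation is gone). A hedged sketch, with `mapv`-style eagerness as the contrast:

```python
def parse_all_lazy(items):
    # Lazy, like Clojure's map: nothing runs until consumption.
    return (int(x) for x in items)

def parse_all_eager(items):
    # Eager, like mapv / into: errors surface right here, at the call.
    return [int(x) for x in items]

data = ["1", "2", "oops"]

lazy = parse_all_lazy(data)        # no error yet; the bad input hides
try:
    total = sum(lazy)              # blows up far from parse_all_lazy
except ValueError as e:
    print("failed during consumption:", e)

try:
    parse_all_eager(data)          # blows up at the call site instead
except ValueError as e:
    print("failed at the call site:", e)
```

Realizing eagerly trades the ability to handle unbounded data for a stack trace that points at the code actually responsible.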