> After all, there already is implicit trust in the system described, and no decentralization. Or perhaps I haven't understood correctly.
I also don't have all the answers, but thinking more about it, maybe it is the case that even though there is implicit trust, the third-party freight carriers might not want to depend on a centralised system wholly managed by Walmart? This could be due to the opaqueness of such a system.
I am speculating here, but maybe they made every carrier and vendor a client and node of the blockchain network, from which they can add their own transactions as they happen. The entire chain would then carry an accurate representation of the transactions from every carrier, making it easy to collate in almost real time.
> I don't think anyone who buys an NFT wants that NFT; they just hope to flip it for more money than they spent.
Is this not just how financial assets in general work? Anyone buying an Amazon share is not buying it because they want that share. They buy it because they hope that some time down the line (weeks, months, years) they can flip it for more money than they spent.
Yes. But the intrinsic value of a piece of art is tied to someone wanting to own that art for the enjoyment of it. The intrinsic value of a stock is tied to the earning power of the company you own a small piece of. NFTs have no earning power.
NFTs cannot pay dividends and do not have boards of governance.
A share without votes and without dividends would be worth $0.
(Also, many NFT shills and/or shovel salesmen fervently insist that they're selfless patrons of the arts, rather than simply trying to make Ponzi cash.)
The most obvious example is pervasive type inference. It's not really dynamic, but languages with it look a lot closer to dynamic languages, and the removal of repetitive type information is one of the benefits of dynamically typed languages. It gives them the clean, simple feel that is so attractive to many programmers.
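For instance, here is a minimal Go sketch of what that looks like in practice: the `:=` form reads almost like a dynamic language, yet the variable is just as statically typed as the fully annotated one.

```go
package main

import "fmt"

// sumCounts shows that an inferred variable is still statically typed.
func sumCounts() int {
	// Fully annotated declaration: the type appears on both sides.
	var counts map[string]int = map[string]int{"a": 1}

	// Inferred with := — no type annotation, but counts2 is
	// still statically a map[string]int, checked at compile time.
	counts2 := map[string]int{"b": 2}

	return counts["a"] + counts2["b"]
}

func main() {
	fmt.Println(sumCounts()) // prints 3
}
```

Mistyping either map (say, assigning a string value) is still a compile error, which is exactly the "static guarantees with a dynamic feel" trade-off described above.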
Also, over the past 20 years what you've seen is big advances in the 'semi-static' world like the JVM, where statically typed languages gain the ability to eval code, redefine their own code on the fly, reflect on themselves, are garbage collected, etc. This is kind of a middle ground. It's worth remembering that when Python and Ruby were new, there weren't really any great options if you wanted lightweight syntax with garbage collection. Nowadays there is Kotlin, and you can write code that looks very similar to, say, Ruby, but which often has the performance of entirely statically typed languages.
> This is kind of a middle ground. It's worth remembering that when Python and Ruby were new, there weren't really any great options if you wanted lightweight syntax with garbage collection. Nowadays there is Kotlin and you can write code that looks very similar to say Ruby, but which often has the performance of entirely statically typed languages.
Lisps and MLs offered this before Ruby and Python got popular. Unless you don't consider them great options?
Well, getting into the question of why Lisps and MLs didn't take off is maybe too big a topic for this thread, but clearly the market didn't feel they were great options and still doesn't. Clojure remains a niche language for example.
I suspect one issue is that Lisp never seemed to be well supported on Windows and never came out of the box on Linux, except perhaps for Guile, but Guile never reached any kind of critical mass despite being promoted by the GNU project. Maybe one reason is the lack of learning materials. Even today, although Guile has an initially pretty and appealing website, clicking "tutorials" reveals a complete lack of interest in growing that community - there is only one single tutorial, which is about how to embed Guile as a scripting language into a C program!
I remember learning Python in the 1990s. The learning materials were excellent. Java was also famous for extensive tutorials and learning materials (they've lost that in recent years, the modern Java docsites are just piles of specifications, but when Java was interested in growth they had it). In the end these things matter more than the exact nature of a runtime or type system.
That's a good point. I never really understood why GNU didn't use more Lisp, as the idea of "C for when performance is needed, Lisp for the rest" sounds great, but if tutorials weren't there that explains it. I learned programming later (~2010) and I remember Python being very easy to learn and install on Windows.
Some Lisps were probably passable options in the early 90s, but as the sibling comment says, the lack of documentation and tutorials makes it not all that beginner friendly, along with the dizzying array of what to even choose if you want to use "a" Lisp.
No ML was a realistic workable option back then at all. Haskell is fine now, but was barely introduced in 1993. OCaml has become a workable option almost entirely due to the gargantuan effort of Jane Street, but again, it wasn't then. Standard ML remains terrible. Library support is sparse, there is almost no community outside of academics, no consistent implementation of the standard basis, compilers are wildly different from each other. The REPLs tend to be great, but it's very difficult to get from a set of source files to a portable executable, whereas with Python and Ruby, just writing the source files already gets you that.
That's fair for Lisp. I thought having a standard made it better and that the main choice was between Common Lisp and Scheme, but from what I see, different people use different standards for Scheme.
> OCaml has become a workable option almost entirely due to the gargantuan effort of Jane Street, but again, it wasn't then.
What do you mean by this? I'm aware of dune and opam, but people were using C and C++ without equivalents, and without problems, before. Python's package management and building is also not that good, even today. I don't have a strong grasp of the history of OCaml, so maybe they released Core, Base and Async really early compared to Batteries and Lwt? But outside of that, basic OCaml with a makefile doesn't sound worse than C/C++.
> Standard ML remains terrible. Library support is sparse, there is almost no community outside of academics, no consistent implementation of the standard basis, compilers are wildly different from each other.
That's fair, my point is more that I don't really understand why no big company ever picked it. Considering how much companies invested in their tooling (Google with Java, Go, Dart, Python, C++; Facebook with PHP and C++; etc.), they could have made something great.
> it's very difficult to get from a set of source files to a portable executable, whereas with Python and Ruby, just writing the source files already gets you that.
Depends on your definition of portable executable. I've always found deploying and distributing Python and Ruby painful, at least to end users. The best in class experience here for me is Go, it's great for end users, great for servers, cross compilation works well.
Not really. It’s a terrible approach that permits some really, truly awful patterns that the designers should have known better than to permit. But it’s not really “go is dynamic!”
They are "duck typed", so an object automatically implements an interface if it has methods with the right signatures to satisfy the interface.
It's handy if you need to add interfaces onto code you don't control (say, library code) as you can avoid a whole layer of adapter types / functions that you need in true statically typed languages.
But I'm curious if it's a great idea in the long run to have interfaces being "accidentally" implemented by objects. It doesn't seem like it would stand up well to refactoring or various other scenarios.
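To make the "interfaces onto code you don't control" point concrete, here is a small Go sketch (the `LibraryLogger` type is a hypothetical stand-in for third-party code):

```go
package main

import "fmt"

// LibraryLogger stands in for a type from a library we don't control.
type LibraryLogger struct{}

func (LibraryLogger) Log(msg string) string { return "lib: " + msg }

// Logger is our own interface, declared after the fact. LibraryLogger
// satisfies it implicitly because the method signature matches —
// no adapter type and no "implements" declaration needed.
type Logger interface {
	Log(msg string) string
}

func run(l Logger) string { return l.Log("hello") }

func main() {
	// Compiles because the structural match is checked statically.
	fmt.Println(run(LibraryLogger{})) // prints "lib: hello"
}
```

The flip side, as noted above, is that any type with a matching `Log(string) string` method satisfies `Logger` whether or not its author intended that.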
Then it’s the age-old war between nominal and structural typing. The latter can be really comfortable and productive, but as you note, it may be less safe (e.g. just because I have a next method doesn’t mean I want to make it an iterable or something like that; it can easily break class invariants).
Isn't this more "structurally typed" rather than "duck typed"? From what I've seen, duck typing implies dynamism, while structural typing implies static types. Go is clearly statically typed. So no, Go interfaces are not dynamically typed, they are structurally typed, which is static.
Dynamic didn’t really go anywhere, but more appropriate examples that have now pervaded the language are await in general and IDisposable for ref struct. If it has a GetAwaiter it can be awaited, with no interface. If it has a Dispose() it can be disposed with no interface.
Interestingly, this sort of structural typing was already present in C# 1.0 with the foreach statement allowing anything with a GetEnumerator() method that returns something that looks like an IEnumerator.
This was discussed in 2011 in a StackOverflow question [0], including a link to a blog post by one of the language designers [1].
As far as I understand it, it went to wherever there are still poor souls writing COM interop stuff, primarily. Luckily I've never had to touch that, so outside of Dapper for SQL I've never really had cause to use dynamic. I imagine there are Windows shops using it heavily, though.
Rust traits have the opposite behavior. If it says it’s an iterable, it must quack like an iterable.
Something that quacks like an iterable cannot be used as an iterable, unless the trait is explicitly stated — unlike python, where it simply has to quack.
Typeclasses/traits are not duck typing. A type statically needs to implement an abstract data type (with its methods and invariants). This is more of a categorization/composition effort.
AFAIK Rust traits were directly inspired by the typeclass system in Haskell, which came about as a type-safe way to do the kind of ad-hoc polymorphism that you can do with interfaces in object-oriented languages like Java. Here's [1] a good talk by SPJ:
What feature of Rust traits makes them "dynamic" as opposed to the traditional notion of typed interfaces? The latter has been part of static languages for quite a while.
I'm probably way in over my head with making a good case for how this resembles dynamic languages, but what I had in mind was trait objects. You can box a struct and refer to it as a trait object (aka type erasure). Thereafter, the receiver doesn't care _what_ concrete implementation it gets, just that it implements a certain trait.
What you describe are existential types, a feature that every statically-typed language with parametric polymorphism has, either through some explicit syntactic feature (Rust traits, Haskell type classes, ML modules), or even without, as from basic logic and the Curry–Howard correspondence you can always encode an existential using universals.
Some static languages had existentials even before they got parametric polymorphism (Go interfaces). This is why we could do so much in Go without generics!
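A minimal sketch of that claim in Go: an interface value erases the concrete type, so one slice can hold many different shapes, and the consuming code only knows "there exists some type with `Area()`". This is how heterogeneous collections were written before Go added generics (the `Shape` types here are illustrative, not from any library).

```go
package main

import "fmt"

type Shape interface {
	Area() float64
}

type Square struct{ Side float64 }
type Circle struct{ R float64 }

func (s Square) Area() float64 { return s.Side * s.Side }
func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// totalArea never sees the concrete types: each element of shapes
// is an existential "some type implementing Shape".
func totalArea(shapes []Shape) float64 {
	var sum float64
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

func main() {
	fmt.Println(totalArea([]Shape{Square{Side: 2}, Circle{R: 1}}))
}
```

No type parameters appear anywhere, yet the function works over arbitrarily many concrete types, which is the "existentials without parametric polymorphism" point.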
Rust traits are not subtyping, Rust's type system is principal (although with subtyping you could achieve the same thing as described by the OP, e.g. in C++, C#, etc, which only have subtyping but no existentials exposed as a syntactic feature).
Thanks. Just a quick look through, and it makes more sense now. There are also a few places where you comment saying a borrow occurred before the borrow actually occurs (in others the comment appears afterwards, as I would expect), i.e. the comments are in the wrong place / inconsistently placed.
Obviously it doesn't affect the output of the code, but as a tutorial it's probably better to correct that.
EDIT: further nitpicks, I would advise against calling the borrower the "borrowing owner" and other such terms; owner has a very specific meaning when it comes to memory management, etc.
Thanks for the feedback. I just quickly went through the code examples and tried to sanitise them as much as I can now. As soon as I have more free time on my hands, I will revisit and update the naming based on your feedback. Thanks!