I'm not sure how you deduced that I'm actually a vi user, but it's interesting to see that Vim has similar support, though with separate linear undo and redo commands, g- and g+. Undo persistence and 'undofile' seem orthogonal to that, though; I'm even less sure why you're bringing that up.
Also a reminder that it works with mconnect [1], an implementation in Vala.
I'm linking a fork since the original repo hasn't seen any recent commits (at least on master; it looks like the owner has been trying to rewrite it, first in Rust and now in Go).
I quite liked typingclub [0]. It's separated into levels and from time to time includes typing-based minigames. It introduces new letters every couple of levels and makes you practice with words that only include what you've learned so far.
Is there any real use case for having filenames with newlines? Every time I remember that we have to design programs around that, I wonder why it's possible in the first place.
The only invalid character in a path is \0 (which would of course terminate the string immediately), and a particular filename additionally cannot contain /, or be "." or "..". It doesn't even have to be Unicode -- literally any other byte is allowed.
EOT, or Ctrl-D, only has significance when typed into a tty. Once it has been turned into a character it is as harmless as any other byte value; it doesn't end anything by itself.
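To make that concrete, here's a small Python sketch (assuming a typical Linux filesystem -- some filesystems, e.g. APFS, are stricter about the bytes they accept):

```python
import os
import tempfile

# Sketch: on a typical Linux filesystem, any byte except NUL (\0) and '/'
# is legal in a filename -- including a newline and a literal EOT (0x04).
d = tempfile.mkdtemp()
weird = b"line one\nline two\x04"  # newline plus a Ctrl-D byte
with open(os.path.join(d.encode(), weird), "w") as f:
    f.write("hello")

# One file exists, but any naive line-oriented listing of this directory
# would see two "lines" for it.
names = os.listdir(d.encode())
print(names)  # [b'line one\nline two\x04']
```

The EOT byte sits in the name as inertly as any other; only the newline trips up tools that assume one filename per line.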
Doesn't the article show that newline isn't harmless at all?
Of course EOT doesn't end anything by itself -- neither does 0x0a end a line by itself. All of that happens through code that interprets those characters in a particular way, so talking about the "danger" of a character in the absence of any code that operates on it is meaningless. In the presence of code, on the other hand, "harmless" in the strict sense means "there exists no code that will act up when presented with this character" -- which the article shows to be false.
I think it is good that it is so flexible, because you never know what kind of data you may want represented on your filesystem. I would rather that there be as few restrictions as possible.
There are cases where you will encounter lots of nasty filenames, especially if you are handling user generated content, like scraping from YouTube or Instagram.
It doesn't directly help UTF-8, since all the bytes it uses to encode non-ASCII characters have the high bit set.
It might directly help with UTF-16; I'm not sure.
But the general idea of "block only a few specific characters (\0 and /) and allow all the rest" does help UTF-8. If the designers had said something like "only ASCII letters, numbers, dashes, and underscores", that would have blocked UTF-8, and we might have ended up with something like URL hostnames, where you use Punycode to encode non-ASCII into ASCII.
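A quick Python check of the byte-level claim -- every byte UTF-8 uses for a non-ASCII character has the high bit set, so multibyte sequences can never collide with '/' or NUL; UTF-16, by contrast, embeds NUL bytes even for plain ASCII:

```python
# Every byte in a UTF-8 multibyte sequence is >= 0x80, so it can never
# be mistaken for '/' (0x2F) or NUL (0x00).
for ch in "éß漢字":
    for b in ch.encode("utf-8"):
        assert b & 0x80

# UTF-16 is a different story: even pure ASCII gains embedded NUL bytes,
# which Unix would treat as the end of the filename.
print(b"\x00" in "abc".encode("utf-16-le"))  # True
```

So the "anything but \0 and /" rule happens to be exactly compatible with UTF-8's design, and exactly incompatible with UTF-16's.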
The point is that Unix behaviour is to treat filenames as byte strings, so no particular encoding is mandated by the kernel or by most tools. That made the transition to UTF-8 fairly painless.
Failing to filter untrusted inputs, or to escape and handle them correctly, is how you write insecure software. Guarantees about arbitrary input don't change that (unless they're very strict, in which case they amount to filtering inputs anyway).
Why does that make it easier to write insecure software? Which is easier: dealing with bytes, only two of which have special meaning (/ and \0), or dealing with a ton of different character classes, each of which you have to think about and code for? The second case happened with URLs, so there are all sorts of weird rules about where you can have a ? (in this section but not that one), plus percent-encoding and Punycode and the like.