
Pretty much every point raised in this post(?) is correct, current and relevant even 26 years later.

POSIX is a monolith and really deserves to be improved. It's been around forever, yes. It will probably keep on being around forever, yes.

    Take the tar command (please!), which is already a nightmare where lower-case `a' means "check first" and upper-case `A' means "delete all my disk files without asking" (or something like that --- I may not have the details exactly right). In some versions of tar these meanings are reversed. This is a virtue?
Raise your hand if you've never broken Grep because the flags you gave it didn't work. Anyone? Congratulations, you've worked on a single version of grep your entire life. Have a cookie.

Pretty much the only consistent grep flag I know is -i. There's never been a standard for naming and abbreviating flags, which means that for EACH program you will have to learn new flags.

This becomes truly terrible when you get around to, say, git and iptables. Have you ever tried to read git documentation? It is the most useless godawful piece of nonsense this side of the Moon.

There's Google now, which means that the fundamental design issues of POSIX will probably never get addressed. "Just google it and paste in from stackoverflow" is already standard, and people are already doing that for 5-10-year-old code/shell commands. What about 10 years from now, will googling best DHCP practices still find that stupid post from 2008 that never got actually resolved? How about 20 years?

I have honestly no idea how to even start fixing the problem. A proper documentation system would be a start.



> Have you ever tried to read git documentation? It is the most useless godawful piece of nonsense

I ran "man git" for the first time ever.

https://www.man7.org/linux/man-pages/man1/git.1.html

Heey, that's actually pretty good! I don't think it's "godawful". In the second sentence it recommends starting with gittutorial and giteveryday, for a "useful minimum set of commands".

https://www.man7.org/linux/man-pages/man7/gittutorial.7.html

https://www.man7.org/linux/man-pages/man7/giteveryday.7.html

I must admit, I still occasionally (regularly?) search for "magic incantations", particular combinations of flags for sed, git, rsync, etc. But the man pages are my first go-to, and they usually do the job as a proper documentation system. It's better than most software I've worked with outside (or on top) of the OS, with their ad-hoc, incomplete or outdated docs.


The issue with git is that no matter how well documented, the user interface is horribly designed. For starters, how many different things does "git checkout" do, and how many of them actually reflect an intuitive meaning of "checking out" ?


> the user interface is horribly designed

I see this type of remark against git quite often on HN and I think it's exaggerated.

I agree some of the porcelain commands are misleading and overloaded as convenience functions, such as checkout; however, a decent chunk of it is in line with the underlying data structure. Nothing is perfect, and git is pretty damn good - horribly designed? no, could do with some breaking porcelain rewrites? sure.


The git UI is absolutely horribly designed, as demonstrated by the mercurial or darcs UIs which, while completely different, were significantly easier to discover, intuit and remember.

> git is pretty damn good

UI-wise, it really is not.


The concept of the git staging area is utterly superfluous. All local changes that are propagated to the version control system should go directly into a durable commit object, and not to a pseudo-commit object that isn't a commit, and that can be casually trashed.

That commit object could be pointed at by a separate COMMIT_HEAD pointer. If you have a COMMIT_HEAD different from HEAD, then you have a commit brewing. Finalizing the commit means moving HEAD to be the same as COMMIT_HEAD. (At that time, there is a prompt for the log message, if one hasn't been already prepared.)

Your "staged" changes are then "git diff HEAD..COMMIT_HEAD", and not "git diff --cached".

Speaking of which, why the hell is "cached" a synonym for "in the index"? Oh, because the index holds a full snapshot of everything. But that proliferation of terminology just adds to the confusion.

I can't think of any other area of computing in which "cache" and "index" are mixed up.
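At least newer git accepts `--staged` as a synonym for `--cached`, so you can avoid the "cache" terminology entirely:

```shell
# these two commands are equivalent; "--staged" is just the clearer spelling
git diff --cached
git diff --staged
```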


I like the staging area, though–at any given time I always have code that I do not want to put in a commit object (perhaps I changed some build flags, or my IDE touched some files I don't care for, or…) However, I do agree 100% that all the terminology is pretty bad.


The sin of Git's staging area is that Git forces it into the default interaction path - requiring you to take it into consideration whether or not you're interested in only committing some of the changes. Git should default to including all changes, and only if you direct it to (e.g. by explicitly running `git add`) should you have to take into consideration the notion that some changes are staged and others aren't.


I'm generally also one of the git sceptics - though I loved staging for a while for the fine-grained control. `git add --all` just might not do the right thing - for paranoids like me. I just recently learned that you can skip staging by appending the paths of the changed files you want to commit after `git commit -m "awesome commit"` - neat.


You can skip the explicit staging using

  git commit --patch
then interactively select the specific changes by diff hunk.

It combines with --amend.
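Both tricks side by side, for reference (file names here are placeholders):

```shell
# commit only the named files, regardless of what is or isn't staged:
git commit -m "awesome commit" -- file1.c file2.c

# or pick individual hunks interactively (also combines with --amend):
git commit --patch
```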


Would you like the staging area less if it was an object of exactly the same type as a commit, referenced by CHEAD (commit head) instead of HEAD?


Only the workspace can be built and tested, so the workspace is what should be committed. We should be stashing anything we don't want to test and commit yet.


I'm trying to keep out of this fight, but how are you planning on just stashing one hunk without staging?


The stash is another feature that should use the regular commit representation. Well, somehow. Yes, the stash is different in that it preserves (or can preserve) uncommitted changes, as well as staged changes, and it can tell these apart.

However, if the staging area is replaced by a CHEAD commit ("commit head") whose parent is HEAD, then the problem of "stashing the index" completely goes away. You don't stash staged changes because they are already committed into the staging commit CHEAD.

That said, the stash feature could work with this CHEAD. Stashing the staged changes sitting in CHEAD could propagate them into the stash somehow (such as by a reference to that commit). Then CHEAD is reset to point to HEAD, and the changes are gone. A single stash item consisting of work tree changes and staged changes could simply be an object that references two commits: a commit of working changes committed just for the stash, and a reference to the CHEAD which existed at that time. It could be that one is the parent of the other. So that is to say, a commit is made of the working copy changes, parented at the CHEAD. The stash then points to the SHA of that commit.

Intuitively, I know this would work, because in the existing Git, I could easily implement this workflow instead of using the stash. Given a tree of local changes, I could "stage" some of them by creating a commit with "git commit --patch". Then "stash" the rest of them into another commit with "git commit -a". Then, create a branch or tag for that two-commit combo, and finally get rid of it with "git reset --hard HEAD^^". Later, I could easily recover the changes from that branch, either by cherry picking, or doing a hard reset to them or whatever.
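Concretely, that manual workflow looks something like this (the branch name `stash-combo` is my own invention):

```shell
# "stage" some hunks as a real commit (interactive step, shown for context):
#   git commit --patch -m "staged part"
# commit the remaining working-tree changes on top of it:
git commit -a -m "working part"
# keep a handle on the two-commit combo, then remove it from this branch:
git branch stash-combo
git reset --hard HEAD~2
# recover later via cherry-pick, or: git reset --hard stash-combo
```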

Speaking of which, an example of how stashes are limiting because they aren't commits, think of how you can't do:

   git reset --hard stash@{0}  # wipe it all away and make it like this stash
You can't do that because a "reset --hard anything" cannot reproduce a state where you have outstanding working copy changes and/or an uncommitted index, but "stash apply" or "stash pop" are saddled with that requirement.

The requirement of reproducing working changes and staged ones from a stash represented as a two-commit combo is very simple. You cherry pick one normally and make it the CHEAD (the aforementioned special head for pointing to a commit being staged). Then the other one is cherry-picked with -n, so it is applied as local changes.


“git stash push --patch” lets you choose hunks to copy into the new stash and clean out of the workspace. It’s pretty similar to “git add --patch” for choosing hunks to stage.



Git has weird terminology. Though a lot of commercial SCMs are also a bit strange. Example: Perforce which has depots and shelves. At least with git, I can create a branch without waiting 2 weeks for the IT department.


Perforce shelves make total sense. "Shelving" something means literally what the command does - setting them aside and saving them.


It “sort of” makes sense. When a company shelves a movie, they’re probably never finishing it. When I shelve my code, I’m probably coming back to it at some point soon. For example, I worked at a place that would have everyone “shelve” all their code before code review. In that context it didn’t make sense to me.


Single data point, but I used hg for three years at work and never warmed up to it the way I've warmed up to git (and that's the "porcelain" too, I've never done a deep dive into the plumbing).


> I see this type of remark against git quite often on HN and I think it's exaggerated.

Indeed, in more erudite forums with smarter users, more level-headed, less biased opinions of git circulate.


In recent versions of git (since 2.23) the two main `git checkout` functions have been split into two newly supported dedicated commands: `git switch` and `git restore`.

Of course the next step is unlearning `git checkout` muscle memory and moving to using `git switch` and `git restore` more regularly.
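The mapping, for anyone retraining their fingers (a sketch; all of these exist as of git 2.23):

```shell
# "git checkout my-branch" becomes:
git switch my-branch
# "git checkout -b new-branch" becomes:
git switch -c new-branch
# "git checkout -- file.txt" (discard local changes) becomes:
git restore file.txt
# "git checkout HEAD~1 -- file.txt" (grab an old version) becomes:
git restore --source=HEAD~1 file.txt
```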


It's the command that does "Reset working directory/Discard changes/Revert to last commit"! You'd think that's what "git reset" would do, but of course not.


> You'd think that's what "git reset" would do, but of course not.

Ahem, the command for "Reset working directory/Discard changes/Revert to last commit" is "git reset --hard".

That's the one I use.

"git checkout -f" does the same thing, but only because their different functionality coincides when there are no other arguments. When given a non-HEAD commit-id or branch-id they do different things.
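The divergence is easy to see once you hand them a commit:

```shell
git reset --hard HEAD~1   # moves the current *branch* back one commit
git checkout -f HEAD~1    # leaves the branch alone and detaches HEAD instead
```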


That’s a beautiful example of the problems of git :-)


Some time ago, some UI designer asked, on HN, for what open source program should they build a UI to establish their reputation. I suggested "git". That was rejected as too hard. They just wanted to put eye candy on a command, not have to rethink its relationship with the user.


A few years back, a designer waded into the middle of the echochamber on some HN thread about inculcating people from other disciplines. They wrote that "as a designer" they did not consider Git (GitHub?) to be thoughtfully put together or well-suited for the kind of work they do or something like that. It was a short comment, about as long as that, and there was no flaw or faux pas or even anything incendiary about it. HN wasn't having it, though, and downvoted it mercilessly. (There were no responses to say why it had been downvoted; the subthread dead-ended there.) It's things like this that remind me of the now-infamous comment in the Dropbox thread.

I didn't think at the time to bookmark it with my "hn sucks" tag, and over the past year or two, I've tried several times to find it again, for reasons similar to[1], but I've been unable to.

1. https://news.ycombinator.com/item?id=22991033


It was a very good idea. Though perhaps establishing a reputation is not a good starting point.


Well they were a UI designer and not a UX designer.


The fundamental assumption is that people will either ask how to do something, or read the documentation/manual. It's not that we'll try to figure out how something works by experimenting.

When I first started using UNIX/Linux after learning the DOS shell, I never said that using commands like rm or mkdir were not intuitive and that it should be like using del or md instead. I just learned the different commands by reading through documentation.


And what if you don't have anyone to ask? The article very clearly states why the documentation is useless for a beginner.

Other OSes have a concept called forgiveness that allows you to easily reverse a change you made explicitly so that you can experiment with it and figure things out for yourself. The problem is that Unix fundamentally doesn't allow you to figure things out by yourself. You absolutely need either a manual (that you will never find if you haven't already been told how to find it), or you need to have someone you can ask questions to.


Is it fair to think that because they made some choices early on, those choices forever blemish its value, even if said choices are later addressed?

Much of the complaints about checkout have been split to other commands in newer versions. Does this make Git still invalid in your opinion?


I've had a lot of issues with git UI but git checkout seems among the more sane ones. Compared to how, say, git add can remove a file...


git has added `git restore` and `git switch`, intended to replace `git checkout` :)

https://git-scm.com/docs/git-restore https://git-scm.com/docs/git-switch


The one thing that all the negative commentary fails to acknowledge is that, even in the face of this somewhat overstated inconsistency across all these command line tools and applications, it is quite simple for the knowledgeable and motivated to wrap the more complex invocations in simplified scripts or, at the other extreme, a completely functional native GUI.

They also fail to acknowledge that contemporary unix, aka Linux in its many derivations and flavors, is entirely malleable at the source code level by its users. That is a feature provided by literally no other operating system that is deployable at scale, and is, in fact, the singular feature that drives its adoption -- not only is it 'free', you can hack it together in any fashion you damn well please, and you can use it to build peer-grade native applications, typically with little more than a tip of the hat as 'overhead'.

tl;dr: Some folks might miss the point because they are not sufficiently motivated to engage the *nix world with the degree of articulation required to tap into its less than casually accessible capabilities.


Sigh. Why are there still lots, heaps, and tons of horrible, inhumane, broken legacy technology still around in active use? Because its users/proponents are "knowledgeable and motivated" enough to keep pushing through. Sort of a Stockholm syndrome of computing, really.

My brain is really quite small compared to all the knowledge about computers that is out there. And my willpower too is very limited. So I would rather learn things I'd rather like to know, and be motivated to do things I'd rather get done instead of spending those precious resources of mine (and time! I will die in less than 25,000 days, that's a pretty small amount of time, you know) on something of dubious value.


>>"It is very easy to be blinded to the essential uselessness of them by the sense of achievement you get from getting them to work at all."


It's one thing to learn something like physics for dumb engineers. Or thermodynamics. Mechanical dynamics. Differential equations. Where it's hard to get your brain wrapped around it. But there's light at the end of the tunnel.

Vs obtuse half broken shit people created out of whole cloth and refuse to fix.


They are still around because, through historical accident, they are what everyone knows and uses.

Making a special snowflake that fits your brain better is good for you, but not necessarily anyone else.

Making something good for everyone will, almost inevitably, become a design-by-committee monstrosity that is as problematic as the tool being replaced.

The truth is, I am skeptical that these tools can be replaced by something that requires no effort to learn. At least, for the tools we already have, if you don't want to learn them, you can roll the dice and copy / paste from google overflow.


> Making a special snowflake that fits your brain better is good for you, but not necessarily anyone else.

Yes, and that's why the parent's argument that "you can hack it together in any fashion you damn well please, and you can use it to build peer-grade native applications, typically with little more than a tip of the hat as 'overhead'" is not a convincing argument. Yes, you can learn a lot about it and tinker with it and "harness its power", as opposed to using something that's less flexible, but is already pretty damn ergonomical, and is much more accessible and easy to learn about (maybe to the point you're not even realizing you're learning).


Some of it is pretty good. But some of it is, or at least was, so legendarily bad that it inspired this:

https://git-man-page-generator.lokaltog.net/


> I ran "man git" for the first time ever.

> But the man pages are my first go-to

It's not strictly a logical contradiction, but doesn't make much sense either.


You got me there. I should have said, man pages are my go-to for POSIX commands.

For Git, I usually turn to online documentation (at https://git-scm.com/docs) or, more often than not, search for keywords and, yes, end up at StackOverflow.

That supports the root-parent comment's point, that there are "fundamental design issues of POSIX", if users new and experienced must resort to such channels. It also implies that "man" as the default documentation system is not sufficiently meeting our needs.


Completely agree. One of the problems is of course the freedom of choice a Unix system gives you. Instead of a single shell with a single set of commands, people can pick and mix. For beginners it's a nightmare but for power users it's, in general, very empowering.

Getting help on Unix commands, particularly in Linux, has always been a mess. On most Linux distros, typing "help" will give you help about the shell built-ins. Then, discovering "man", you soon find that the bundled GNU utils of course would rather you use their "info" system, which in turn may refer to a web page(!) for info.

I remember coming from the Amiga to Linux: I would not have gotten far without word-of-mouth help (and helpful computer magazine articles explaining a lot of the particularities of Unix). The Amiga, on the other hand, was a cheap home computer with an exceptionally thick manual detailing every single command clearly and succinctly.

The Open Group has the POSIX util spec published online[0] and also allows free downloads of it for personal use. Since I discovered it, I find myself using it much more often than man pages. I've made a little alias in bash that launches Dillo with the appropriate command page.

[0] https://pubs.opengroup.org/onlinepubs/9699919799/utilities/c...
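The alias is nothing fancy; as a function it's roughly this (the name `posixman` is made up, and the URL pattern is just how the pages at [0] happen to be laid out):

```shell
# open the POSIX spec page for a utility, e.g.: posixman grep
posixman() {
    dillo "https://pubs.opengroup.org/onlinepubs/9699919799/utilities/$1.html"
}
```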


> For beginners it's a nightmare

I've taught undergraduate students some basic shell use for being able to compile their C programs. It's not really that bad. You have them use bash and you teach them some basic syntax and a few shell-usable programs, including man. You tell them that there are a lot of things the shell can do that we won't be discussing, so they have to be careful not to use arbitrary symbols and to double-quote names. And I also tell them that bash is just one kind of shell, and that some systems have other shells by default, but we are working on a system which defaults to bash. I then tell them to check with "echo $SHELL" if they're on another system than the one we are working on, and if they don't see "/bin/bash" or "/usr/bin/bash" then they should ask someone for help.

That's enough to satisfy newbies in my experience.


> I then tell them to check with "echo $SHELL" if they're on another system than the one we are working on, and if they don't see "/bin/bash" or "/usr/bin/bash" then they should ask someone for help.

Some systems will be weird. EG my default shell is bash, but my interactive shell for all my terminal emulators is fish. So "echo $SHELL" returns "/bin/bash", even from fish.

Of course I know this, I set it up this way deliberately so that only actual interactive shells (or correctly shebanged scripts) would be fish and everything else could be bash. But it would definitely confuse a beginner!


Hmm... interesting. But - echo $SHELL should tell you what the current shell is, not what the default shell is.


$SHELL should typically be set to reflect the configured login shell, I.E. the shell specified in /etc/passwd. Or, as POSIX[0] puts it: "This variable shall represent a pathname of the user's preferred command language interpreter." The currently running shell is not necessarily the "preferred" shell, for example if you're doing "xterm -e zsh" or running a specific ksh script when you normally prefer (log in to) bash.

Being vaguely defined, it is of course open to interpretation and might vary from system to system, thus being a prime example of the kind of frustrating Unix gotchas that spawned the original article.

[0] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1...

Edit: To tie in to my earlier comment - where should I look for info on this variable in Linux? I know it exists, but how do I find out more about the behavior? 'apropos "\$SHELL"' gives me nothing. 'apropos SHELL' gives me a list of commands which doesn't include much of interest. Digging deeper, there's login(1), which briefly touches the subject, and environ(7), which gives a good description of it but of course dives in head-first and starts off by describing an array of pointers. Not overly helpful for the novice.


No, it tells you what program image file is going to be used when a program "shells out" or spawns a shell to run a shell command (e.g. via the ! command in mail or the :shell command in nvi).

It is a way for applications to know where the shell to invoke is, not a way for a user to find out what program xe is using to run commands. echo $0 is more informative on that score.
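All three answers side by side:

```shell
echo "$SHELL"         # the login ("preferred") shell, from /etc/passwd
echo "$0"             # how the current shell was invoked
ps -p $$ -o comm=     # the program actually running this session
```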


I didn't find it that hard. Just realize the different backgrounds and cultures that all these projects originate from. They all have their own ways of doing it.

man pages work pretty well for a lot of stuff that is POSIX. C interfaces and basic CLI programs you'll all find documented in man (with documentation beyond what is specified in POSIX).

The GNU folks have tried to push their info system, so maybe you'll find more details there for their commands. I guess most people use man because it's quick to access and a single page is easy to grep through. If that doesn't help, I'll just do a web search.


Been using “man” for years, understand its section system and have made a few man pages for various utility programs I’ve made.

The man page for built-ins is never the quick reference I want, so I generally end up falling back to google for that stuff rather than try and figure out where it’s documented.

Anyway, TIL “help” is a command I can try.

I haven’t used it in a while, but “cht.sh” is basically what I expect “man” to give me, but comprehensively, for everything, and by the program author(s).

Public repos with easily searchable source and a good README are even better tho, of course.


I think the availability of "help" depends on what shell you're using. It's a builtin in bash.


In fish it launches a help web page if you're in a graphical session (but still has a fallback without).


> Completely agree. One of the problems is of course the freedom of choice a Unix system gives you.

This is because Unix (or large parts of the environment) evolved over time rather than being designed at the beginning.

We started with the Bourne shell, then got the C shell, with the Korn shell splitting the difference between the two. Bash then came along taking lessons from each of those three, and then Z shell.

That's a few decades' worth of changes.


The 4 things I love most in Powershell, from the point of view of a maintainer of scripts, are the relative verbosity of commands (at least I don't have to go hunting for obscure acronyms, or recursive puns such as yacc, when I read code), auto-completion, auto-documentation of scripts (when I have to change something), and the object pipe (as a maintainer I hate awk and regular expressions in general).

The only defect is they did not go with Yoda speak (Get-<TAB> is a much worse filter than NetA<TAB> when searching for Get-NetAdapter, for instance).

It's still a bit green on Linux environments, but it already beats many of the alternatives.


You know, it's funny, the topic in this article is the forced abandonment of VMS.

Microsoft at one time had a video interview on their "Virtual Academy" with one of the PowerShell creators, and he talks about how they kept trying to create a "unixy" tool for managing Windows machines, and it never felt or worked right. So then they looked at VMS and realized it was the perfect inspiration.

Most of what you love about PowerShell comes from VMS.


Which also shouldn't be surprising, given that VMS was a huge inspiration for (and lent some key figures to) early NT kernel development. In some ways modern Windows is "son of VMS".


A friend of mine's company got bought because they had an Ethernet solution for VMS. The company that bought them wanted it ported to NT. He said it was 'almost trivial'.


I agree completely. Powershell is a pleasure to write in and read, and the various modules for managing different services make my life way easier.

For anyone who's been put in charge of managing Zoom for their organization, may I recommend: https://github.com/JosephMcEvoy/PSZoom


> the relative verbosity of commands (at least I don't have to go hunting for obscure acronyms, or recursive puns such as yacc, when I read code)

"yacc" is not a command. It's an executable file that's read and executed. In fact, most of the things in a shell script are not commands, but files that are loaded and executed. If someone built a shell with everything built in it'd be a bloated monster full of inconsistencies and incompatibilities.

OTOH, PowerShell has "commands" or aliases named after Unix executables, such as "curl", that don't replicate the switches one would expect from the curl program.

> It's still a bit green on Linux environments, but it already beats many of the alternatives.

Maybe for Windows transplants. In general, not at all.


> Maybe for Windows transplants. In general, not at all.

care to elaborate?

In my experience, Powershell is so much nicer than bash that even on my work mac I tend to use it when doing stuff for me (not going to force it on my team)


Powershell originated on Windows and garnered a fan base on the platform. It's not much appreciated outside that niche and most people who like it are people who use or used it on Windows.


I get where you are coming from

I find it odd that you consider Windows and PowerShell a niche, when it seems to have been enthusiastically embraced by the community


yacc is not a recursive abbr, it stands for Yet Another Compiler Compiler.


One of my pet peeves on a similar note is the inconsistency between ssh and scp on whether it's -p or -P that signifies the target port.

Apart from the obvious compatibility and legacy factor, I think a major reason is that by the time someone has both enough knowledge and experience to formulate a proper solution and have felt the pain-points, they're already deep enough that they've internalized that this is The Way It Is and are somewhat comfortable with it, those annoying flags aside.

We tend to settle on the lowest common denominator, because consistency and time-to-ready trumps any actual improvements. For example, I'd so much prefer vim bindings in tmux but stopped using customizations like that completely since it turns out it's less of a hassle to just get used to the crappy standard ones instead of customizing it on every new host I start up.
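(For reference, the customization in question is tiny; these are real tmux options, just ones you'd have to re-deploy on every host:)

```
# ~/.tmux.conf
setw -g mode-keys vi    # vi-style keys in copy mode
set -g status-keys vi   # vi-style keys at the command prompt
```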

If you can't get your friends off Facebook, good luck getting engineers off POSIX.


Argh yes, scp -P vs ssh -p. I get bitten by this regularly as we have something running on a non-standard port.


Two points about bad documentation:

The documentation (and syntax, or lack thereof) of "tc" are significantly worse than git's. Unfortunately, the network management tool you are supposed to use these days, ip, is made by the same people, though somewhat less bad.

Second, take git documentation with humor: https://git-man-page-generator.lokaltog.net/


I don't find `man git` to be bad at all. Git is complex, and its man pages do the right thing by referring to sources for basics info like giteveryday, and referring to in-depth guides as well. Individual man pages are also pretty good, see `man git-rebase`. It starts with an overall explanation of what rebase does, with examples, and then covers configuration options and flags. It's a lot of stuff, but it's pretty good as far as documentation goes.

GNU packages often have documentation that's bad in the typical "Linux docs are bad" way. Try `man less`. First, it commits a grave sin in having a totally useless one-line summary, which reads "less - opposite of more". Funny, but totally useless and doesn't even remotely suggest what the command does (if you know what more is on UNIX, you surely know what less does). Or `man grep`. It's a reference page, very good for knowing what all the options do, but with no useful everyday examples, and with gems like these:

> Finally, certain named classes of characters are predefined within bracket expressions, as follows. Their names are self explanatory, and they are [:alnum:], [:alpha:], [:cntrl:], [:digit:], [:graph:], [:lower:], [:print:], [:punct:], [:space:], [:upper:], and [:xdigit:].

Self explanatory? Yes, if you used grep in the 90s. Is alnum alphanumeric or all numbers? Is alpha short for alphabet, as in what most people would intuitively call letters? What's xdigit? Extra digits? Except digits? Oh, it's hex digits. Pretty obvious that periods and commas are in punct... but also + * - {} are punct, among other stuff.
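If you'd rather probe than guess, a quick shell session clears it up:

```shell
echo 'abc123'  | grep -oE '[[:alnum:]]+'   # alphanumeric: prints abc123
echo 'abc123'  | grep -oE '[[:alpha:]]+'   # letters only: prints abc
echo 'beef-42' | grep -oE '[[:xdigit:]]+'  # hex digits: prints beef, then 42
echo 'a+b'     | grep -oE '[[:punct:]]+'   # yes, + is [:punct:]: prints +
```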

`man tar` is extremely comprehensive, an impressive reference, but very hard to figure out if you've never used tar.

I've been recently looking at FreeBSD documentation for common commands, and the source code as well. Both are so much better than the usual GNU versions you find on Linux.


> I don't find `man git` to be bad at all. Git is complex, and its man pages do the right thing by referring to sources for basics info like giteveryday, and referring to in-depth guides as well. Individual man pages are also pretty good, see `man git-rebase`. It starts with an overall explanation of what rebase does, with examples, and then covers configuration options and flags. It's a lot of stuff, but it's pretty good as far as documentation goes.

Having used both Mercurial and git, it is my general experience that Mercurial has a much better documentation system. Git's documentation has improved, but mostly only in the more-well-used commands; when you want to reach for more exotic stuff, you start to find that the documentation is too full of jargon.

As a recent example, I wanted to get a list of files managed by git. Since I know mercurial best, I wanted the equivalent of hg manifest. Its documentation is thus:

> hg manifest [-r REV]

> output the current or given revision of the project manifest

> Print a list of version controlled files for the given revision. If no revision is given, the first parent of the working directory is used, or the null revision if no revision is checked out.

This is unusually bad documentation for mercurial--the short description and command name are reliant on jargon, and it's not aliased to "hg ls" or something like that. Okay, how about the equivalent git command? git ls-files looks promising. Here's its short description:

> git-ls-files - Show information about files in the index and the working tree

But its description is, um:

> This merges the file listing in the directory cache index with the actual working directory list, and shows different combinations of the two.

Mercurial suffers from a bit of jargon, but reading its description would enlighten you as to what it does without understanding the jargon. Git's documentation here starts with jargon, and then doubles down on it so that the more I read, the less sure I am about what it actually does. [In the end, by actually running it, I did verify that it's basically the equivalent of hg manifest].
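To make the comparison concrete, here's a throwaway-repo sketch (file names are made up for illustration):

```shell
# Build a scratch repo and stage two files
cd "$(mktemp -d)"
git init -q .
touch a.txt b.txt
git add a.txt b.txt

# The "hg manifest" equivalent: by default, git ls-files prints
# the entries in the index, i.e. everything git is tracking
git ls-files
```

By default `git ls-files` shows the cached (index) entries; flags like `-m` (modified) and `-o` (others, i.e. untracked) select the "different combinations" the description alludes to, though you'd never guess that from the description itself.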

Now both mercurial and git have a glossary (help glossary), but I've never seen anyone actually point a newbie to either one. Of course, here you can also see the world of difference in the documentation quality. Mercurial's glossary entry for the jargon term "manifest" says:

> Each changeset has a manifest, which is the list of files that are tracked by the changeset.

Now compare git's glossary entry for "index":

> A collection of files with stat information, whose contents are stored as objects. The index is a stored version of your working tree. Truth be told, it can also contain a second, and even a third version of a working tree, which are used when merging.

... and that is why people like me say that git suffers from poor documentation.


Git's documentation tends to tell you how it does what it does, without telling you what it's trying to accomplish. It's the equivalent of the classic beginner comment:

a += 5; // Add 5 to a

Entirely accurate, and totally useless.


> There's never been a standard for naming and abbreviating flags, which means that for EACH program you will have to learn new flags.

How is this different than web pages or GUI apps? Everyone is different and a button that does one thing in one app/page does something different in another.

Have you tried to read GUI help files? They are written for 5 year olds and provide nothing you need as a dedicated user. Have you had to inspect the DOM of a website to try to intuit what something does or does not do?

At least with command line apps you usually have a --help or man page.


GUIs have "discoverability" and "affordances". I haven't read a single help file for any GUI application in 20 years (including apps on Windows, Linux, and Android) and somehow I can navigate and use them perfectly fine.

That's simply impossible with CLIs; you need to at least read a "How to Get Started Immediately" note.


PowerShell does an okay job at command line discoverability in my experience. When using a cmdlet I'll think "I hope there's an argument for X" and then I can hit tab after '-' and cycle through all the available arguments. As another post mentioned, this unfortunately falls down a bit with cmdlet names themselves because they start with the verb instead of the noun: Get-<tab> isn't helpful the way NetAdapter-<tab> would be.


Fish is similar in this way, if you're not on Windows and thus don't have access to PowerShell.


PowerShell is available for Linux.

There's also Elvish, Nushell, and a few other attempts in a similar vein.


I take 'similar vein' as a grand euphemism.


Besides Linux, PowerShell is also available on Unix.


Unix is a family of OSes that includes Linux in particular.


Cycling through tabs is not effective. PSReadLine supports Ctrl+Space completion: just type - and then Ctrl+Space and it will show a menu with ALL arguments. The same works if you start typing an argument (e.g. -P<Ctrl+Space> shows all params starting with P).


Ctrl-Space on PowerShell is outstanding, as long as your devs are actually properly commenting their scripts (I assume your in-house PowerShell is put into modules that get installed on user machines).


It has no relation to comments.


grep was standardized around 1990 by POSIX.2. In the last 25 years, I haven't had any problems with the POSIX compatible flags of grep, so maybe those points are not so relevant.

Using grep for '$' without being aware that grep patterns are regular expressions ((g)lobal search for (re)gexp, and (p)rint), and redirecting the output to the printer without first seeing what it might be (e.g. with .. | head -50) is pretty stupid.

Consider that this person's idea of solving the problem of "move occurrences of $ character to a different location within the line" in a bunch of files was to begin by searching for lines containing those $ characters and sending that to a printer. What? How is the hard copy going to help? Are you going to sit there manually typing in those paths and looking for those line numbers, to do the edit? If that is really the VMS way, who wants anything to do with it?


> Using grep for '$' without being aware that grep patterns are regular expressions

There's always `fgrep` or, IIRC, the POSIX-compliant `grep -F`.

More often than not, people don't want regular expressions.
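A sketch of the difference, since `$` is exactly where this bites (the file path is made up):

```shell
printf 'price is $5\nplain line\n' > /tmp/dollar.txt

grep '$' /tmp/dollar.txt     # '$' is the end-of-line anchor: matches BOTH lines
grep -F '$' /tmp/dollar.txt  # fixed-string search: only "price is $5"
grep '\$' /tmp/dollar.txt    # or escape it to match a literal dollar sign
```

`fgrep` and `grep -F` are equivalent; POSIX standardizes the `-F` spelling.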


I learned it as meaning—

Global Regular Expression Parser

that seemed plausible enough I never questioned it.


> POSIX is a monolith and really deserves to be improved.

Care to explain what's intrinsically wrong with monoliths? I'd have thought that the most important point of a solution is whether or not it solves the problem, not the architecture by which it solves the problem.


I'm not sure it is a monolith. It is a set of standards and that's about it.

But, as far as "what's wrong with monoliths" goes, the biggest issue, IMO, is security. The more code you have, the more likely you are to run into security issues. By their nature, most security problems end up granting all access that a given program has. A monolith, by its nature, usually has a LOT of permissions and a LOT of code.

Of course, this only matters when security matters. If you are making an app that isn't exposed to the internet then by all means make it a monolith. Otherwise, the best thing you can do for security's sake is to push for microservices with as limited a permission set as possible. That makes it so the exposed surface area is relatively small if any one microservice is compromised. (It's about risk management).

This is also why microkernels are so interesting to me. It's the same problem, a compromised kernel driver can do a whole lot of damage. So how do you solve that? By keeping the "root" kernel at a bare minimum and force drivers to run in user space as much as possible. That keeps drivers with security holes from giving an attacker full system control.


> I have honestly no idea how to even start fixing the problem. A proper documentation system would be a start.

Have you ever heard about OpenBSD?


> POSIX is a monolith and really deserves to be improved.

I want it to be improved but I fear it is becoming irrelevant. There are very few OSes left to be compatible with...


It's depressing to think that the Cambrian explosion that led to a variety of hardware, software, operating systems, and web browsers, and great freedom and power for the end user, is gradually being culled into a monoculture of walled gardens, and the end users are just getting screwed.


Linux is considered a walled garden? Really?

I mean yeah, there were more operating systems before, some of which were open.. but I'm not convinced it's necessarily bad to have one open system win.

If it didn't, I'm pretty sure there would be a lot more people using windows servers, which I think would've been far worse for the open community.


To be clear, I wasn't calling Linux a walled garden. But I was talking about overall trends. For example, there are some efforts to push Linux in this direction, most recently with some centralized app/package stores.

Also Linux Foundation was setup and is funded by big corporations like Microsoft, Google, etc in order to find ways to exert influence over Linux's growth.


> Linux is considered a walled garden? Really?

Kinda, yeah. At least in the Desktop space it seems like it desperately wants to be and Canonical in particular works to push it in that direction. For instance, it is highly discouraged to install software from outside your distro's repository.


Talking about Canonical (which advocates Snaps as a supplement to the distro's repo) and "it is highly discouraged to install software from outside your distro's repository" in the same breath is rather odd.

As is thinking that Linux of all OSes is in any way a walled garden.


Snap is very Canonical-centric. You cannot set up your own snap store, automatic updates are mandatory, etc. It is for all intents and purposes a second Ubuntu repo with even stricter control.


Multiple repos using different applications then.

And the general proliferation of Appimages, Flatpak, Nix, Guix, Docker containers, and of course local building of software all tell against the "using software from outside the distro's repos is discouraged" representation.


Of those listed, only AppImage is as easy to publish and install as your average Windows software (Flatpak is a not-so-close second with significantly more limitations). And then you get prominent FOSS developers like Drew DeVault saying that those distribution methods are terrible ideas because they are dangerous. The way things work in the Linux Desktop and its community are just not conducive to simply passing around software without middlemen the way it has been in real personal computing systems since the 80s.


You can make assertions like this if you like, but they're simply untrue. Windows is a nightmare to install software on, while on Linux one usually has multiple easy-to-install-and-keep-updated options (AppImage being the worst choice, because it is the most Windows-distribution-like, and requires the application itself to check for updates etc.).

You can also distribute binaries on Linux easily enough. There's just no general reason to want to do so.


git has quite extensive documentation, and it is, in principle, really useful. The problem is that it suffers from what Geoffrey Pullum called "Nerdview" <https://languagelog.ldc.upenn.edu/nll/?p=276>: It is written from the perspective of the author of the program, rather than the user, and therefore it is easiest to understand if you are already thoroughly familiar with the underlying architecture of git.


When I'm on a strange system for the first time and I need grep, the first thing I do is run grep --help or man grep to see what I'm working with.



