No thank you. I appreciate the power, speed, simplicity and flexibility of UNIX/GNU style text tools. I-Also-Don't-Want-To-Be-Locked-Into-This-Ridiculous-Syntax-Nightmare.
When I read I-Also-Don't-Want-To-Be-Locked-Into-This-Ridiculous-Syntax-Nightmare, bash and sed and cut and the like are indeed the first things that come to mind. I sometimes feel kind of bad for once having spent time learning them, only to later find out there are many alternatives, nearly all of them with a gentler learning curve. Many of these tools also have zero discoverability, meaning you effectively get locked into learning their syntax, and then another syntax for the next tool. Whereas in PS you can at least try common words and tab completion, and sometimes get there before having to reach for 'how do I x in y'.
It's a matter of recognizing your use case. If you're going to write a program that you expect to maintain for years, sure, go ahead and make it as verbose as possible. Unix tools support this with long-form flags (usually prefixed with -- rather than -). On the other hand, if you're doing exploration and iterating interactively on the fly (which bash is best at) then you want very terse syntax to keep lines short.
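To make the terse-vs-verbose tradeoff concrete, here is a small sketch with GNU grep (the file path and contents are invented for illustration):

```shell
# Invented sample file, purely for illustration
printf 'Timeout=30\nretries=5\n' > /tmp/demo.conf

# Terse flags: quick to type while iterating interactively
grep -ic 'timeout' /tmp/demo.conf

# Long-form equivalents: self-documenting in a script you maintain for years
grep --ignore-case --count 'timeout' /tmp/demo.conf
```

Both commands do the same thing; the long-form version needs no trip to the man page when someone rereads the script years later.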
Ever try this on large files? A lot of PowerShell commands I make like this can take minutes to run when a combination of Linux commands and Awk might take a couple of seconds.
Yes: the memory ceiling is huge, WinRM is buggy and unreliable, performance is "variable", handling of dates and times is totally inconsistent, and being .NET it's UTF-16 internally so streams are painful to work with. Escape characters, argh. Variable scoping is horrible, and most of the API you have to hit to get stuff done wraps ancient bits of weird Win32 and DCOM and WMI, and thus has inconsistent error handling and a limited ability to actually do what it is told.
While PowerShell has gotten a lot faster in the most recent versions, it's still pretty slow for anything involving computation; most of the time my equivalent Python code beats the pants off of it.
The expressiveness is nice, but oftentimes modules won't support it or require weird ways of using the data to get the performance you want (mostly by dropping out of the pipe.)
The choices around Format- vs Out- vs Convert- are Very Confusing for new people, the "object in a shell but also text sometimes" way of displaying things is weird, and until recently things like -NoTypeInformation or managing file encodings were just pointlessly weird.
The module support and package management is still entirely in the stone ages and I regularly see people patching in C# in strings to get the behavior they want.
"Larger" modules tend to get Really Slow - the azure modules especially are just an example of how not to do it.
The way it automatically unwraps collections is cool, but gets weird when your output can be one item or many, and you might find yourself defensively casting things to lists or w/e.
The type system in general is nice to get started with, but declaring a type does not lock it in, so a later assignment can just break your entire world.
There's still a lot to love about the language. When you are getting things done in a Windows environment, it's great for gluing together the various pieces of the system, but I find the code "breaks" more often than equivalent Python code.
My philosophy about any shell language is that if performance is a concern, then you should probably use a real programming language for it. Using shell scripting to handle batch processing tasks just creates dependencies on unmaintainable code.
If performance is too bad, you can only use the shell for toy examples, which means there's no reason to have a real shell at all and you might as well go back to COMMAND.COM or equivalent. It's taking the idea of Unix-style scriptable shells and Improving the implementation to the point of unusability.
It's also inhibited by typical corporate crapware, but even with a pretty barebones/vanilla config, PowerShell takes a full four to ten seconds just to open a new session on my work computer.
PowerShell isn't even fast enough for basic interactive usage, never mind batch processing.
Simple tools with simple rules will outlast most of the code we'll all build.
Picking things like grep, awk, and sed means your knowledge will be widely applicable going forward, and because many people have cared about their performance, both in the past and going forward, your shell work can be pretty fast.
I agree that this is something missing on classic UNIX shells: typed output streams.
I had this discussion a while back on HN, though I can't find it ATM (I wish there was a comment history search function). I am far from the first one who thought of that, and there are a few implementations of this idea.
Searching for that comment, I came across relevant stories:
Typed output streams aren’t missing from Unix. The authors of Unix clearly describe, repeatedly, that Unix tools use plain text as the common format, and that everything should look like a file.
You’re right that standard Unix tools don’t have a concept of types in streams, but that decision got made deliberately. Types and formats got left as output details.
Analogously my Macbook Air doesn’t have a fan, by design, not by accidental omission.
Right, that was an unfortunate choice of words. It's not missing, but leads to (IMO) a proliferation of under-specified text-based data and stream exchange formats.
Having human-readable text as the lowest common denominator is a laudable goal. Shell scripting would however probably be improved if most tools offered alternative typed streams, or something similar. I am not convinced Powershell's approach is the best, but their approach is at least interesting.
Picking nits, but the decision to use plain text as the common data format in Unix does not rule out structured text (e.g. CSV, TSV, XML). Nor does it imply “human-readable.”
The original decision was about not proliferating specialized or proprietary binary formats, which was more of a norm back in the ‘70s than today. The goal was to make small single-purpose tools that communicated through a common interface (files) in a standard format (plain text). Unix succeeded and continues to succeed at that.
Nothing about those design decisions precludes tools using binary formats under Unix — image processing, for example. It just precludes using standard text-oriented tools on those formats.
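As a tiny illustration of structured-but-plain text, a TSV stream passes through the standard text tools untouched (the file and data here are made up):

```shell
# Made-up TSV data with a header row
printf 'name\tcolor\napple\tred\nplum\tpurple\ncherry\tred\n' > /tmp/fruit.tsv

# Count colors: structured text, but still just text to cut/sort/uniq
tail -n +2 /tmp/fruit.tsv | cut -f2 | sort | uniq -c | sort -rn
```

Nothing here knows it is handling "structured" data; the structure is a convention layered on plain text, which is exactly the point being made above.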
I wonder what happens, though, if you ask ChatGPT, "Is there a better toothpaste than Colgate"? The instant this question is not answered truthfully, ChatGPT becomes garbage.
They'll spend a lot of effort scrubbing any facts that they don't like or don't suit their agenda and narrative. Whether that's the agenda of advertisers, the powers that be, or the "privileged" classes won't matter.
You think Google's search results are opaque now?? Wait till it's hidden behind a 2 trillion parameter neural network.
And by extension of that, "fake news" will be targeted, and the all-knowing AI knowledge base will be used to determine what is false or true. This will be the end of dissent. When they say we won't own anything, and we'll be happy, they also mean knowledge.
This will be a game of cat and mouse between the search companies and the regulators. It has to be clear it is an advertisement of sorts, at least in most countries, however, there are different ways to do this. It will put a significant number of lawyers' kids through university.
I did once in my last job, because we were told AWS was sunsetting SimpleDB. We were able to swap to DynamoDB with relatively little fuss (though obviously not none, no migration is entirely painless) because we abstracted it so well. Turns out that several years later, SimpleDB is still up and running, but it was probably a good migration anyway.
Probably a bigger benefit though, is being able to use a different DB in production (probably big scale) vs on your personal machine while developing.
I did it once in my career. We had one legacy PHP project designed to work with MySQL. I was in charge of bringing it back to life, and I decided to host it on Heroku for the sake of simple deployment. While Heroku does have MySQL addons, I opted for Postgres.
So we just switched the underlying connection and fixed a dozen places (mostly reports) that used plain SQL queries with MySQL-specific syntax. The ORM library handled the rest just perfectly.
I think it's fairly common to swap out your core datastore when you take your prototype / MVP and bring it to production, or the first time your product experiences hypergrowth. At that point the codebase, schema and team are still small enough that this is feasible. In fact I suspect every Firestore app that hits any degree of scale swaps out the DB for at least a subset of the app.
So I agree with you with this one important exception.
I've seen it with postgres-compatible DBs and Cassandra-compatible DBs. Lots of databases build their frontends this way so that people can switch from postgres/cassandra to their product when they need a bigger DB or better performance. I've never seen anyone swap any other kind of database.
I've been on a team that ported a production system from MySQL to PostgreSQL, because we wanted the ability to add columns without downtime (this was a few years ago before MySQL gained the ability) and we wanted to build features using trigram indexes.
TS’s type system is incredible, but it only takes one line of dodgy JS to remove any guarantee that the value you think is a number isn’t actually a string.
Sadly this only applies to these high-end ones; I got a cheap Epson one and found out, after moving it to another country, that the cartridges are indeed vendor- and region-locked.
EcoTank printers don't use cartridges. You fill the ink tanks with ink bottles. They've come down in price; you can buy an EcoTank printer for ~$229. A couple of other companies have followed Epson in offering these kinds of printers. Sadly Brother's version requires a proprietary cartridge tank.
I looked it up and you are 100% correct. I now object to the existence of the term, as it clearly attempts to imply that legitimate activity is close to black market activity, but this is the correct use of the term as it exists. (Thank you for correcting me; it is far better to object to the right thing.)
Your skin-deep assessment of the Apple model so misses the point. What's that old saying? "Less is more". Apple understands this; the Android ecosystem doesn't.
Get-Content .\example.csv | ConvertFrom-Csv | Where-Object -Property color -eq red
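For comparison, a rough awk equivalent of that pipeline, assuming a CSV with a header row and no quoted commas (the contents of example.csv are invented here):

```shell
# Invented sample data standing in for example.csv
printf 'name,color\napple,red\nplum,purple\n' > /tmp/example.csv

# Find the "color" column from the header, then print rows where it equals "red"
awk -F, 'NR==1 {for (i=1; i<=NF; i++) if ($i=="color") c=i; next} $c=="red"' /tmp/example.csv
# -> apple,red
```

Unlike ConvertFrom-Csv, this does not handle quoted or embedded commas; it is just the happy-path text version.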