After 20 years of experience and a CS degree, I see software engineering as a constant struggle against accidental complexity. Like quicksand, every movement you make pulls you deeper, even swimming in the right direction. And like entropy, it governs all things (no subfield is free of complexity). It even seems impossible to give it a meaningful, useful definition, perhaps by necessity. All is dark.
But now and then, something beautiful happens. Something that used to be dreadful becomes "solved". Not in the strict mathematical sense, but some abstraction or tool eliminates an entire class of issues, and once you know it you can barely imagine living without it. That's why I keep coming back to it, I think.
As a species, I think we are in the infancy stages of software engineering, and perhaps CS as well. There's still lots of opportunity to find better abstractions, big & small.
I'm an Engineering Manager, and I think I have a similar role just applied to people processes rather than code. One nuance though - a lot of the time I suspect it's deliberate complexity designed to obfuscate how little people actually do.
Well, maybe. It's projection, because I certainly don't make simple processes myself a lot of the time, but I do try to optimize them afterwards. I have a few decades of seeing people implement processes that I've had to use, and then had to simplify as I moved into more senior roles. I've had people push back quite forcefully when I've pointed out they do things like writing reports that no one reads or gathering data that teams ignore. People often fight for added complexity because their perception is that it's important, and that means they must be important because they're the one in control of it.
There is an element of projection because there is in most things people talk about; I'm speaking about this through my filters and biases after all. But it's grounded in a fair chunk of experience.
Maybe you are saying the same thing, but couldn't that be better explained by those people being afraid of being made obsolete? Or at least, afraid of having to retrain?
This was really well written and I agree with you completely. Though I am not so optimistic that, as a species, we have much runway left to get meaningfully farther out of that infancy.
As tech progresses and those abstractions become substantially more potent, it only amplifies the ability of small groups to use them to massively shape the world to their vision.
On the more benign side of this is just corporate greed and extraordinary amplification of wealth inequality. On the other side is authoritarian governments and extremist groups.
Perhaps, but generally annoying millions of technology people tends not to end well for firms. Usually the market simply evolves to better match the fiscal conditions.
To get perspective (we know what worked), here are some abstractions that have held up for 50+ years:
In Unix, a file is a simple stream of bytes (if you wonder what else it might be, compare Multics' segments). Separate processes that may be connected using simple standard I/O streams [pipes] (vs. everything being a DLL in Multics) — and the concept of the shell itself (policy vs. mechanism separation: http://www.catb.org/esr/writings/taoup/html/ch01s06.html ).
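To make the composition concrete, here is a minimal Python sketch (assuming a Unix system with `printf`, `sort`, and `uniq` on the PATH) that builds the shell pipeline `printf 'b\na\nb\n' | sort | uniq` mechanically from separate processes connected by byte streams:

```python
import subprocess

# Each program only sees a stream of bytes on stdin/stdout, so
# arbitrary processes can be glued together with pipes.
p1 = subprocess.Popen(["printf", r"b\na\nb\n"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["sort"], stdin=p1.stdout, stdout=subprocess.PIPE)
p3 = subprocess.Popen(["uniq"], stdin=p2.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let SIGPIPE propagate if a downstream stage exits
p2.stdout.close()
out = p3.communicate()[0]
print(out)  # b'a\nb\n'
```

None of the three programs knows about the others; the "mechanism" (byte streams) is separated from the "policy" (what to connect to what).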
HTML attributes are all strings, so JavaScript's type coercion in general (it doesn't just apply to ==) was exactly this: a way to avoid explicit conversions and make values act semantically equal without having to think about types.
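A hedged illustration of the burden coercion was meant to remove, sketched in Python (the attribute names are hypothetical): attribute values arrive as strings, so without implicit coercion every numeric use needs an explicit cast.

```python
# Attribute values arrive as strings, as they do from HTML.
attrs = {"width": "100", "tabindex": "0"}

# Without implicit coercion, the raw string is not equal to the
# number, even though the values are "semantically equal":
print(attrs["width"] == 100)       # False: "100" is not 100
# ...so every use site needs an explicit conversion:
print(int(attrs["width"]) == 100)  # True
```

JavaScript's `==` performs that conversion automatically, which is the convenience (and the footgun) being debated here.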
A strict ban has always felt to me like we're leaving behind useful functionality.
I disagree. I worked as a protocol designer and implementor for years before people settled on the message queue as the universal abstraction. At the bottom end, dumping serialized objects into TCP connections gets you most of the way. And at the top end there is so much leverage around locality, addressing, and transport that we are leaving a lot on the table.
Message queues aren't bad at all, but they come with additional complexity (most of it operational) and a set of limiting assumptions. So my frustration is that they are now the default answer for everything, and we're ignoring this lovely design space, one that becomes increasingly important when talking about scale.
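For concreteness, the "dumping serialized objects into TCP connections" bottom end can be as small as a length-prefixed frame. A minimal Python sketch (deliberately omitting retries, addressing, persistence, and backpressure — exactly the concerns a message queue adds):

```python
import json
import socket
import struct

def send_obj(sock, obj):
    # Frame each JSON-serialized object with a 4-byte big-endian length.
    data = json.dumps(obj).encode()
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_obj(sock):
    # Read the length prefix, then exactly that many payload bytes.
    (n,) = struct.unpack("!I", sock.recv(4))
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return json.loads(buf)

# A socketpair stands in for a real TCP connection here.
a, b = socket.socketpair()
send_obj(a, {"event": "build_done", "ok": True})
print(recv_obj(b))
```

Everything a queue layers on top of this (durability, fan-out, redelivery) is a design choice, not a given — which is the design space the comment says we stopped exploring.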
GP here. I agree, git is the best example in the spirit of my comment.
Maybe the reason it wasn’t pointed out is precisely because it’s so obviously good that it’s no longer a conscious choice, and then we forget life before it. Even those of us who experienced it.
Build tools that enforce hermeticity (a build step cannot depend on files not declared as dependencies) and hash file contents (as opposed to using timestamps). This eliminates whole classes of complaints against make.
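A sketch of the hash-based staleness check in Python (function names are hypothetical, not any particular build tool's API): only declared inputs are consulted, and content hashes rather than timestamps decide whether to rebuild.

```python
import hashlib

def file_hash(path):
    # Content hash of one declared input.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def needs_rebuild(declared_inputs, recorded_hashes):
    # recorded_hashes maps declared input path -> hash from the last
    # successful build. A timestamp-only change hashes identically,
    # so it triggers no rebuild; a content change always does. Files
    # that were never declared are never consulted, which is the
    # hermeticity part.
    return any(file_hash(p) != recorded_hashes.get(p)
               for p in declared_inputs)
```

Touching a file (changing only its mtime) leaves the hash unchanged, so make-style spurious rebuilds disappear, as do silently stale builds after a clock skew.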
1. Total abandonment of the desktop as a platform, and the massive hurdles to distributing desktop software
2. Moving to the cloud and using Electron wrappers, because not even MS can be bothered to make native apps on their shitty platform
3. Making Windows so shit that even hardcore power users can't debloat it.
The moat of Windows is gone. Games, office work, all the classic arguments, have basically vanished in the last 5-10 years. The only surprise is why more people don’t get in the life rafts, when the ship is listing at 45 degrees. Is it because there’s still an army of workers and institutional inertia trained in Active Directory?
Windows persists in the workplace where the cost to replace it is significantly higher than keeping it, and keeping it doesn't cost much to begin with. Part of that cost would be training, yes.
The other part is finding compliant equivalents for the rest of the software they use. If the MFA, VPN, chat, email, etc. are all already vetted and designed to be compatible, there's no way they'd want to switch. Many policies regarding proprietary information disclosure are also built off this ecosystem and the certifications Microsoft's cloud already has.
4. Putting Mac users in charge of the UI who are genuinely incapable of understanding how they are breaking continuity.
That's like staffing a neurosurgery department with dentists. Or a dental clinic with neurosurgeons, it does not matter, you can have decades of experience working with a drill in the head area and still be the wrong person for the job.
Continuity with what, exactly? IME Windows has been a mishmash of GUI frameworks, to the point that you teleport through time whenever you click around Control Panel, since... the XP era? I mean, I don't disagree with you in principle, but the timing is like saying horse carriages aren't keeping up with cars because they're designed by car users. The Satya era can be good or bad depending on who you ask, but that's Microsoft as a company; Windows as a product has had no coherence for a decade+, and that's generous.
> Is it because there’s still an army of workers and institutional inertia trained in Active Directory?
Yes, that is a huge driver of inertia. I've had to battle that in so many different companies now, and it is absolutely aggravating. That on top of comments about how Linux sucks from someone who either has never used it, or has only used it on a server and thinks that is all Linux has to offer, are absolutely soul destroying.
It's much worse than you think. Press coverage leading to manual intervention is at best a bandaid over a major wound: a flaw built into how independent software distribution works today.
The old model, where the user decides which software or apps to run on their machine, has basically already been replaced by a whitelist system managed by companies with no interest in, or obligation to, approving developers. If you're an individual, an open source developer, or god forbid reside outside the USA, you face a combination of L1 support doom loops, unjustifiably high recurring prices, kafkaesque and ever-changing requirements, and internal inconsistencies. Windows is the worst, but all platforms (except Linux) suffer from this, and you can and will get hurt, delayed, and gaslit. If you haven't, it's just a matter of time.
I have been blocked for 6 months now on a DigiCert code signing cert renewal for my app Payload, which will never get any media attention. The app doesn't matter though; the approval process is per-entity (usually, a company). The point is that nobody gives a shit, because they have a monopoly/cartel and they start the validation process after they take your money.
If you are not an app publisher, the best way I can describe it is the "pre-Let's Encrypt" era of SSL certs, but more expensive, stricter, and more ambiguous. In fact, I've never gone through a worse approval process in my life, and that includes applying for residency in two countries, business licenses, manual tax filings, etc.
Some countries (and the EU in general) are already doing things about this. Owning the app store means you are a monopoly, and now the only question is whether you're illegal under local laws, which vary.
You can/should write your congressman (or whatever they are called in your country) and get better laws in place.
You are not wrong that regulation is desperately needed, and that the EU is doing good things. However, even the EU, which is doing the right thing on an antitrust, pro-competition basis, fundamentally succumbs to the same misconception: that middlemen are necessary at all. The EU doesn't care about the App Store model; it cares about the App Store monopoly. It is right about that, but the solution isn't alternative app stores. It's much simpler: the solution is NO app store.
More specifically, it used to be feasible to distribute software between me (the developer) and my customers (the users) without a mandatory gatekeeper that looks at me and decides whether I'm worthy, am from the right country, have good intentions, etc. This is currently necessary on all desktop and mobile platforms except Linux. There is exactly one gatekeeper per platform (the platform owner who controls your device), except Windows, which effectively has 3-4 CAs, a number shrinking every year due to mergers and private equity ownership.
Software curation and reputation systems can be good, whether based on whitelists (say, Steam) or blacklists (say, antivirus). I can see some use cases for them, but they should be under user control. What we have now is worse than a fearmongering Stallman rant. It's incredibly bad, both pragmatically and philosophically.
That's the idea! "Allow" the user to install any apps they choose. (I put "allow" in quotes, to emphasize how bizarre it is that a few platform vendors get to decide what all of humanity is "allowed" to do with their computing.)
GP here. I agree in spirit, but there's a technical difference between "approved to distribute" and "approved in an App Store". Specifically, you can distribute software for Windows and Mac outside of their stores, but you still need a code signing cert, which means you're at their mercy. This is the model Google recently wanted to transition Android to: keeping the APK path (no App Store) but gatekeeping developers through signature enforcement, etc.
Why not just have the Secure Enclave in the ID card and use NFC to communicate with it? Think about it, you literally have dozens of computers between you and the provider. Routers, middleboxes, load balancers, servers etc, all insecure or untrusted, but somehow my device needs to have their special rootkit and hardware DRM. A separate device that can be provisioned with ID is the least to ask. If the government doesn’t trust me with my device, fine, but then return the favor - I don’t trust them either. Both governments and corporations that are gonna use this have long track records of invasive, often illegal spying - whereas my track record is letting people mind their own business.
This is exactly what the ID cards I'm talking about are. You tap them to the phone or a desktop reader and it works. You just invented something that already exists.
eIDAS just takes this one step further and gives you an option to not have to carry your card with you. But if you refuse to have an attested phone, then you pay those 20EUR to get the ID card (which you probably need for other uses as well) and move on with your life.
> This is exactly what the ID cards I'm talking about are. You tap them to the phone or a desktop reader and it works. You just invented something that already exists.
Great, thanks for clarifying. Please be mindful that not everyone is a domain expert and we're all (hopefully) trying to learn.
Now, do you know whether ID cards will work with the proposed German system for e2e online ID verification? My understanding from the comments was that they don't, and providers are free to require the app-based version.
In Sweden we have an app-based system now (BankID), and afaik there are no alternatives that work reliably. You have to buy an American phone every few years to participate in basic societal functions. However, the government is "looking into" decoupling digital identity from (1) banks and (2) mandatory hardware manufacturers (iOS/Android).
Rust is a language for fast prototyping? That’s the one thing Rust is absolutely terrible at imo, and I really like the production/quality/safety aspects of Rust.
> The problem arises when Bob encounters a problem too complex or unique for agents to solve.
It's actually worse than that: the AI will not stop and say "too complex, try again in a month with the next SOTA model". Rather, it will give Bob a plausible-looking solution that Bob cannot identify as right or wrong. If Bob is working on an instant-feedback problem, that's OK: he can flag it, try again, ask for help. But if the error can't be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already been promoted by then, and Bob's replacement gets to deal with it. In either case, Bob cannot be trusted any more than the LLM itself.
When he said we needed more time to do this properly, he was labelled slow. They pushed him to use AI all day long and said at the all-hands that there will be programmers who use AI, and those who don't will be left behind. So he said fuck doing it right for the project, let me do it right for myself.
Now he's got his promotion, and they will hire 3 people in a cheaper location to handle the various issues that are coming up (the product will always have bugs, you see). Given his excellent speed of delivery, they will report to him.
It isn't. Bob has a different problem: there are millions of Bobs with access to the same tools, meaning the value of Bob's labor is commodity-priced. That may be good for some Bobs and bad for others.
> So if Bob can do things with agents, he can do things.
Yes, but how does he know if it worked? If you have instant feedback, you can use LLMs and correct when things blow up. In fact, you can often try all the options and see which works, which makes it "easy" in terms of knowledge work. If you have delayed feedback, costly iterations, or multiple variables changing underneath you at all times, understanding is the only way.
That's why building features and fixing bugs is easy, and system-level technical decision making is hard. One has instant feedback; the other can take years. You could make the "soon" argument, but even better models are still subject to their training data, which is minimal for year+ delayed-feedback and multivariate problems.
That’s… one 9 of reliability. You could argue the title understates the problem.
> You don't need every single service to be online in order to use GitHub.
Well, that's how they want you to use it, so it's an epic failure of their intended use story. Another way to put it: "if you use more GitHub features, your overall reliability goes down significantly and unpredictably".
Look, I have never been obsessed with nines for most types of services. But cloud service providers certainly used them as major selling/bragging points, until that got boring and old because of LLMs. Same with security. And GitHub is so far upstream that downstream effects can propagate and cascade quite seriously.
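To put rough numbers on the "more features, lower reliability" point: if a workflow depends on k independent services, each with availability p, the chance everything works at once is p**k. A quick sketch with illustrative figures (not GitHub's actual numbers):

```python
# Combined availability of k independent dependencies, each at p.
p = 0.999  # three nines per service (illustrative)
for k in (1, 3, 6):
    print(f"{k} services: {p ** k:.4f}")
```

Three nines per service already drops below 99.5% once a workflow touches six of them, which is why failures feel "significant and unpredictable" to heavy users even when each individual service looks fine.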
> And if this simpler solution was actually better for the company, it should be highlighted[…]
Simpler than what? The reason this phenomenon is so pervasive in the first place is that people can't know the alternatives. To a bystander (i.e., managers), a complex solution is proof of a complex problem. And a simple solution? Well, anyone could have done that, right?
If we want to reward simplicity we have to switch reference frame from output (the solution), to input (the problem).
I'm (also) an EM; I've been a pure EM in some roles in my career, and I really struggle to understand these pain points that many people bring up. Isn't it a manager's job to know what their reports are focused on over a period of time? Shouldn't they be aware of the projects the team is working on? And as EMs, and most probably previously engineers, shouldn't they already know why simple solutions are good?