One of the papers I share around a lot is "Nobody ever gets credit for fixing problems that never happened" (2002)[1]. I like it because it's not purely about software, so the examples resonate better with some exec-level people on other teams I work with.
Really you can. You look at the engineers who create steaming piles, and you look at the ones who don't. Over a year or two, the difference is easy to spot. For people who care to spot it.
If there's no competent front-line technical management who can successfully make this simple comparison, then, sure, in that case the team may be fucked.
It's easy to gloss over this assessment but ultimately this needs to be a key decision point for where you choose to work. No matter how well you manage complexity as an IC or a lower tier leader, if your upper tier of leaders don't value it, it won't last. Simplicity IME is not a "tail that wags the dog" concept. It's too easy to stomp out if nobody in power cares.
Yes, I should have added "...this way" because I meant that to address GP's claim of the metric-based numerical measurement.
In general, I agree that you can and should judge (not necessarily measure) things like simplicity and good design. The problem is that the business does want the "increased this by 80%, decreased that by 27%" stuff, and simplicity does not lend itself to that approach.
I think this is often true and it's the limiting factor that prevents complexity from spiraling out of control. But there's also a certain type of engineer who generates Rube Goldberg code that actually works, not robustly, but well enough. A mad genius high on intelligence and low on wisdom, let's say. This is who can spin complexity into self reward.
Measure no, but only engineers care about that (and I'm not even saying that they're right, engineers care a whole lot too much about hard data). You can show alternative solutions, estimate, make assumptions, even make up numbers and boom, you have "data" to show you improved things. You don't even have to lie: you can be very open that these are assumptions and made-up numbers, that it's just a story, what's important is that people come out with confidence that thanks to you, things are better by a bit/a lot/enormously.
You can. GitHub is about to hit zero nines of uptime[0].
But feedback like that is far too late to be useful.
Maybe (principal or senior) engineers should be the ones to judge, and be trusted by management that their foresight is worth pushing the deadline?
You can't. You can hypothesize about the counterfactual in which you shipped a "steaming pile of complexity," but you definitionally cannot measure something that does not exist.
Yes, and ironically there are promotion ladders that explicitly call out "staff engineers identify problems before they become an issue". But we all know that in reality no manager is ever going to fix problems preemptively, even if they agree with someone's prediction that a thing really is going to become a problem.
I once used the analogy of the PM coming to the shop with a car that had a barely running engine and broken windows, and he's only letting me fix the windows.
His response: "I can sell a good looking car and then charge them for a better running engine"...
I've found simplicity rarely earns promotions because it's invisible on a P&L, and executives respond to hard numbers. In one role I converted a refactor into a business case with a 12-month cost model, instrumented KPIs in Prometheus and Grafana, and ran a canary that cut MTTR by 60% and reduced on-call pages by two-thirds. Companies reward new features over quiet reliability, so expect slower feature velocity for a quarter while you amortize the simplification. If you want the promotion, make a one-page spreadsheet tying the change to SLO improvements, on-call hours saved, and dollar savings, then own the instrumentation so the numbers are undeniable.
The "time to market" folks finally have everything they could hope for, let's see all of that business value they claim is being missed due to pesky things like security, quality, and scalability checks.
Thanks for the sane take. This article is engagement-porn for every engineer who ever looked at a system they didn't understand and declared they could do it much simpler. It's not because people love to promote complexity-makers, soothing as that thought might be.
> "Reduced incidents by 80%", "Decreased costs by 40%", "Increased performance by 33% while decreasing server footprint by 25%"
My experience is no one really gets promoted/rewarded for these types of things or at least not beyond an initial one-off pat on the back. All anyone cares about is feature release velocity.
If it's even possible to reduce incidents by 80%, then either your org had a very high tolerance for basically daily issues, which you've now reduced to weekly, or they were already infrequent enough that 80% fewer takes you from 5/year to 1/year... which is imperceptible to management and users.
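The arithmetic behind that is easy to check in a couple of lines (a quick sketch; the daily/weekly and per-year rates are illustrative numbers, not data from any real org):

```python
# Going from ~daily incidents (365/yr) to ~weekly (52/yr):
daily_to_weekly = 1 - 52 / 365          # ~0.86, i.e. roughly 86% fewer incidents

# An 80% reduction applied to an already-low rate of 5 incidents/year:
low_rate_after = 5 * (1 - 0.80)         # ~1 incident/year

print(f"daily -> weekly: {daily_to_weekly:.0%} reduction")
print(f"5/yr after an 80% cut: {low_rate_after:.0f}/yr")
```

Either way the absolute change is a handful of incidents a year, which is the point: nobody above you feels the difference.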
> All anyone cares about is feature release velocity.
And at the same time it's impossible to convince tech illiterate people that reducing complexity likely increases velocity.
Seemingly we only get budget to add, never to remove. And as for silver bullets: if Big Tech promises a [thing] you can pay for that magically resolves all your issues, management seems enchanted and throws money at it.
You can reduce a single type of incident by 80%. The overall incident rate for this particular type wasn't high enough to kill your company, but it's still a big number on your promotion packet.
> I'm convinced someone out there has figured out the playbook for turning cloud credits into serious money.
Yeah, the cloud provider figured that out. They shell out credits fairly freely to almost anyone who asks, knowing that one of two things will happen:
1) The recipient won't have a clue what to do with them, and it doesn't cost them much of anything at all to have offered them.
2) The recipient will find a way to turn them into profit, get themselves tied in to their selected vendor, and then become a profit center for the cloud platform.
Don't get me wrong, it is nice to have the credits. But it is a sales tactic - their cost to acquire you as a customer is just their actual underlying costs on whatever resources you spend. Win-win if you can make something of it, but the reason you aren't finding canned playbooks on what to do with them is that if you really knew how to turn them into profits, you never would have needed them in the first place.
So stop trying to minmax your profits, and just go play with the services. Learn everything you can, spend those credits to self-educate, and now you have more experience and skills to offer your next gig, which will probably pay better than starting a side project anyway.
"Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity."
One post is fine. Four posts over two days, when you have never otherwise engaged with the community here, is really sketchy.
Do you know for a fact that your suggestion was the reason it was changed?
If so, a bullet point might be in order. If not, you are being quite presumptuous by claiming it.
Either way, be careful about putting lists of details in your work. If you are at a junior level, it might be appropriate. But at higher levels, when I see lists of details, I question their seniority because it often looks like something that should be a day-to-day task has gotten called out as a large accomplishment.
What a concise explanation of 'survivor bias'. Well done!
The problem is that every bad idea had someone behind it saying it was a great project, and the number of such bad ideas vastly outnumbers the actual success stories. To be fair, if the point is to say "Don't listen to the haters", that remains a good point.
The issue here is that the people commenting on whether something is a good or bad idea usually don't have the necessary insight to give useful comments either way. But with certain trendy topics, many people still feel the need to express their shallow opinions. That is especially true on HN, because many like-minded people will chime in, upvote and increase visibility as long as they themselves feel validated, irrespective of whether what was said is true or not.
In fact I'd love to see an inverse to this list, i.e. shit people celebrated here that failed miserably. Although failure as a business can have many causes and need not be due to the core business idea. It's probably much harder to get this data than searching early HN threads for high-value IPOs. You'd have to search for popular threads, then track down the companies and find out what eventually happened to them.
Fair point! For personal blogs or privacy-focused sites, blocking crawlers is a valid choice. But for most businesses, being invisible to the AI-driven search era is a major disadvantage. AIO Checker helps you ensure your site is being seen exactly how you intended—whether that's fully optimized or properly restricted via robots.txt.
You know what, Muhammad? You were actually right to push on this.
Your comment made me run an audit on the codebase, and I actually found a critical IDOR vulnerability. The backend was validating the Stripe payment status, but not tying the sessionId to the specific URL requested. Someone could have used one $4.99 payment to infinitely unlock reports for any URL.
It's patched and secured at the server level now.
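For anyone curious what that class of fix looks like, here's a minimal sketch of the server-side check. The field names (`payment_status`, a `report_url` stored in the Checkout session's metadata) are assumptions for illustration, not the actual implementation; the key idea is that the paid session must be bound to the specific resource being unlocked, not merely be "paid":

```python
def can_unlock(session: dict, requested_url: str) -> bool:
    """Server-side authorization for unlocking a report.

    `session` stands in for a Stripe Checkout Session retrieved
    server-side by its sessionId. Checking only payment_status is
    the IDOR described above: one paid session would unlock reports
    for *any* URL. The fix also compares the URL the session was
    created for against the URL being requested.
    """
    paid = session.get("payment_status") == "paid"
    bound_to_url = session.get("metadata", {}).get("report_url") == requested_url
    return paid and bound_to_url


# A session paid for one URL must not unlock a different one:
paid_session = {"payment_status": "paid",
                "metadata": {"report_url": "https://example.com"}}
print(can_unlock(paid_session, "https://example.com"))    # True
print(can_unlock(paid_session, "https://victim.example")) # False
```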
Good instincts. Seriously. And keep up the good work with Rust and your LibreUI project, that's impressive for 15.
Anthropic is in the business of using your data to train future releases. There is no contract in place to protect your data, especially for free users. SaaS subscriptions come with contracts. They are not the same.
How is believing that Microsoft is honest about how they use your private GitHub code, and that they don't use it to train Copilot, any different from believing Anthropic when you opt out? Every company I listed is training models for their business; I'm not saying they are using your data.
Any company that doesn’t have an enterprise contract with Anthropic and uses Claude Code is an idiot.
But if you really want to have that warm and fuzzy, you can always use Claude Code via an AWS account and Bedrock hosted Anthropic models. I assure you that AWS (former employer) is not using your data when you use Claude with Bedrock/Anthropic to train their models. Amazon may be evil. But they are not stupid.
>Any company that doesn’t have an enterprise contract with Anthropic and uses Claude Code is an idiot.
I understand that working for Amazon will have given you the typical unjustified sense of intelligence and authority, and entirely insular sense of the world, that people tend to have when they work for FAANG, but you need to do your best to fight against it, dude.
You don’t know about every organisation. You don’t know their risk profiles. Are you saying that the two-person bootstrapped spare-time side project is the creation of two “idiots” because they don’t have an enterprise agreement with Anthropic? What about the organisations where the code is more an incidental aspect of the organisation than the secret sauce? You know that this is the vast, vast majority of organisations, right? Do you genuinely think that your code is so precious that anyone else having access to it (let alone munged up in an LLM) will be in any way detrimental to the business? That is very, very, very rarely the case. We’re all capable of reading ‘Designing Data-Intensive Applications’, I assure you.
Read my initial reply, where I said your information is already out there with the 100+ SaaS products the average company uses.
I agree with you: Anthropic couldn't care less about a two-person vibe-coded startup that will never go anywhere, or a random CRUD app.
But the OP was concerned about big company things. So they should have big company enterprise agreements.
FWIW, I’ve been working for 30 years across ten companies, and only 6 of those years were with any company you have probably heard of: General Electric when it was still an F10 company, and Amazon - it was my 8th company.
I don’t consider myself “ex-FAANG”; it was a job I got at 46 with every intention of only staying four years. I hate large companies and would rather get a daily anal probe with a cactus than go back to one (Google/GCP was a serious option a year ago). My bread and butter before I went into consulting was small, less-than-100-person companies. Even we had enterprise agreements then.
(For those doing the math: I worked in the cloud consulting department at AWS, ProServe. Everyone who works in that department is a “blue badge” full-time RSU-earning employee. Google has a similar department.)
Copyright law is not a contract, no. It is statutory. Contracts require agreement and "consideration" in order to be enforceable; copyright does not require these things. So to your second question, I'd argue yes, it was violated. But IANAL, so the courts would need to answer that question.
"Reduced incidents by 80%", "Decreased costs by 40%", "Increased performance by 33% while decreasing server footprint by 25%"
Simplicity for its own sake is not valued. The results of simplicity are highly valued.