If you actually wire two 120V circuits that way and one breaker trips, the other half will backfeed 120V through the load into the tripped circuit. So even with its breaker off, that circuit is still live. Very bad. Much better to use a two-pole 240V breaker that picks up both hot legs in the panel.
IME one thing that makes this choice a very difficult one is oncall responsibilities. The thing that incentivizes code owners to keep their house in order is that their oncall experience will be a lot better. And you're the only one who is incentivized to think this way. Management certainly doesn't care. So by delegating the choice to management you're signing up for a whole bunch of extra work in the form of sleepless oncall shifts.
If someone is making the kind of mistakes that cause oncall issues to increase, put that person on call. It doesn't matter if they can't do anything about the page; call them every time they cause someone else to be paged.
IME too many don't care about on call unless they are personally affected.
> If someone is making the kind of mistakes that cause oncall issues to increase
The problem is that identifying the root cause can take a lot of time, and often the "mistakes" can't be cleanly traced back to an individual.
So whoever is oncall just takes the hit (i.e., waking up at 3am and having to do the work). That someone may or may not be the original progenitor of said mistake(s).
Framed less blamefully, that's basically the central thesis of "devops": the notion that owning your code in production is a good idea, because then you're directly incentivized to make it good. It shouldn't be a punishment, just standard practice: if you write code, you're responsible for it in production.
Optimal in what sense? In the java shops I've worked at it's usually viewed as a pretty optimal situation to have everything in one language. This makes code reuse, packaging, deployment, etc much simpler.
In terms of speed, memory usage, runtime characteristics... sure there are better options. But if java is good enough, or can be made good enough by writing the code correctly, why add another toolchain?
> But if java is good enough, or can be made good enough by writing the code correctly,
"Writing code correctly" here means stripping out 95% of the language's capabilities and writing in something that looks like C without structs or a standard library (because objects will be heap allocated, with cross-thread synchronization and GC overhead).
It's good enough for some tiny algo, but not good enough for anything serious.
It's good enough for the folks who choose to do it that way. Many of them do things that are quite "serious": databases, Kafka, the LMAX Disruptor, and reams of performance-critical proprietary code have been and continue to be written in Java. It's not low effort; you have to be careful, get intimate with the garbage collector, and spend a lot of time profiling. It's a totally reasonable choice if your team has that expertise, you're already a Java shop, etc. I no longer choose Java for new code; I prefer Rust. But neither choice is correct or incorrect.
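To make the "careful, GC-aware" style above concrete, here is a minimal sketch (class and field names are my own illustration, not from any project mentioned in the thread) of the flat, allocation-free layout people mean: parallel primitive arrays instead of a list of small heap objects, so the hot path allocates nothing and the GC has nothing to trace per entry.

```java
// Illustrative "struct of arrays" layout: a List<Order> of small objects
// would put every entry on the heap and create GC pressure; parallel
// primitive arrays keep the data flat and allocation-free on the hot path.
public class PriceBook {
    private final long[] orderIds;
    private final double[] prices;
    private int size;

    public PriceBook(int capacity) {
        this.orderIds = new long[capacity];
        this.prices = new double[capacity];
    }

    // No allocation per call: just writes into preallocated arrays.
    public void add(long orderId, double price) {
        orderIds[size] = orderId;
        prices[size] = price;
        size++;
    }

    // Tight loop over a primitive array: no boxing, no iterator objects.
    public double totalNotional() {
        double total = 0.0;
        for (int i = 0; i < size; i++) {
            total += prices[i];
        }
        return total;
    }
}
```

This is the "C without structs" flavor the parent comment complains about: you give up most of the object model on the hot path in exchange for predictable memory behavior.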
> Databases, Kafka, the LMAX Disruptor, and reams of performance-critical proprietary code have been and continue to be written in Java.
Those have a fairly low performance bar, and they mostly became popular on the back of Java-hype-era investment; Rust either didn't exist yet or had a weak ecosystem at the time.
> you would know it should not be mixed like in a dj set, and you would not optimize your dj algorithm for it.
Yet the computer program happily tried to do it anyway. It would be much better to fail with a clear error message than to try to proceed and emit garbage.
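A minimal sketch of that fail-fast idea (the `Mixer` class and its parameters are hypothetical, not from any real audio library): validate the inputs up front and throw with a clear message, rather than silently mixing incompatible streams into garbage.

```java
// Hypothetical example: mixing two audio buffers. Rather than proceeding
// with mismatched inputs and emitting garbage, refuse loudly and say why.
public class Mixer {
    public static float[] mix(float[] a, int aRate, float[] b, int bRate) {
        if (aRate != bRate) {
            throw new IllegalArgumentException(
                "Sample rates differ (" + aRate + " vs " + bRate
                + "); resample explicitly before mixing.");
        }
        if (a.length != b.length) {
            throw new IllegalArgumentException(
                "Buffer lengths differ (" + a.length + " vs " + b.length + ").");
        }
        // Inputs are known-compatible; a simple sum is now safe.
        float[] out = new float[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = a[i] + b[i];
        }
        return out;
    }
}
```

The error message does the work here: it names the mismatch and tells the caller what to do about it, instead of leaving them to debug corrupted output.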
How many products are actually like that? If I could easily replace GitHub, Datadog/Sentry/whatever, Cloudflare, AWS, or Tailscale, that would be great. In my view building and owning is better than buying or renting, especially when it comes to data: it would be much better for me to own my telemetry data, for example, than to ship it off to another company. But I don't think you (or anyone) will be vibecoding replacements for these services anytime soon. They solve big, hard problems.
GitHub is on the chopping block as a tool (it's sticky as a social network). The other stuff, not so much.
The things that are going away are tools that provide convenience on top of a workflow that's commoditized. Anything where the commercial offering provides convenience rather than capabilities over the open source offerings is gonna get toasted.
Even at recent levels of uptime, I think it would be very difficult to build a competing product that could function at the scale of even a small company (10 engineers). How would you implement Actions? Code review comments/history? Pull requests? Issues? Permalinks? All of these things have serious operational requirements. If you just want some place to store a git repository, any filesystem you like will do. But when you start talking about replacing GitHub, that's a different story altogether, and TBH I don't think building something that appears to function the same is even the hard part; it's the scaling challenges you run into very quickly.
The future is narrow, bespoke apps custom-tailored for exactly one single user's use case.
An example: if the user only ever works with .jpg files, then you don't need to support any of the dozens of other formats an image program would support.
I cannot stress enough how many software users out there are only using 1-10% of a program's capability, yet they have to pay for a team of devs who maintain 100% of it.
"The future" is fiction. It's a blank canvas where you can make a fingerpainting of any fantasy you like. Whenever people tell me about "the future" I know they're talking absolute rubbish. And I also like your fantasy! But it probably won't happen.
I call it "Psychics for Programmers." People will scoff at psychics and fortune telling and palm reading, but then the same people will listen to Elon or some founder or VC and be utterly convinced that that person is a visionary and can describe the future.
It's just reading the room. People hate having to use their computers through the lens of quasi-robot humans (saying that as one of those robots). They hate having to pay monthly just so dumb features and UI overhauls can be pushed on them.
They just want the software to do the few things they need it to do. AI labs are falling over themselves to remove the gates keeping regular people from using their computing devices the way they want to. And the progress there in the last few years is nothing short of absolutely astounding.
> the progress there in the last few years is nothing short of absolutely astounding
Yet, all the astounding progress notwithstanding, I don't have a suite of bespoke tools replacing the ones I depend on. I cannot say "hey claude, make me a suite of bespoke software infrastructure monitoring and operational tooling tailored to my specific needs" and expect anything more than a giant headache and wasted time. So maybe we just need to wait? Or maybe it's just not actually real. My view is unless you show me a working demo it's vaporware. Show me that the problem is solved, don't tell me that it might be solved later sometime.
And what exactly is preventing you from building bespoke software for "infrastructure monitoring and operational tooling tailored to your specific needs"?
I could certainly imagine building myself some sort of dashboard. It would seem like a prime use case.
You want to hear about a problem solved? Recently I extended a tool that snaps high-resolution images to a pixel art grid, adding a GUI. I added features to remove the background, to slice individual assets out of it automatically, and to tile them in 9-slice mode.
Could I have realistically implemented the same bespoke tool before AI? No.
> And what exactly is preventing you from building bespoke software for "infrastructure monitoring and operational tooling tailored to your specific needs"?
Let's say I emit roughly 1TB of telemetry data per day: logs, metrics, etc. That's roughly what you might expect from a medium-sized tech company, or from a specific department (say, security) at a large one. There is going to be a significant infrastructure investment to replicate Datadog's function in my organization, even if I only use a small subset of their product. It's not just "building a dashboard"; it's building all the infrastructure to collect, normalize, store, and retrieve the data to even be able to draw that dashboard.
The dashboard is the trivial part. The hard part is building, operating, and maintaining all the infrastructure. Claude doesn't do a very good job helping with this, and in some sense it actually hinders.
EDIT: I'm not saying you shouldn't take ownership of your telemetry data. I think that's a strategically better end result (and potentially a better one from the user's perspective too). But it's a mistake to trivialize the effort of that undertaking. Claude is not going to vibeslop it for you.
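For a sense of the scale implied by a 1TB/day figure, here is a quick back-of-envelope (the numbers and the `Envelope` class are my own illustration): that volume works out to double-digit megabytes per second of sustained ingest, before any replication or indexing overhead.

```java
// Back-of-envelope: sustained ingest rate implied by 1 TB of telemetry/day.
public class Envelope {
    public static void main(String[] args) {
        double bytesPerDay = 1e12;       // 1 TB/day (decimal terabyte)
        double secondsPerDay = 24 * 3600; // 86,400 s
        double mbPerSec = bytesPerDay / secondsPerDay / 1e6;
        // Roughly 11.6 MB/s, around the clock, before replication,
        // indexing, or retention copies multiply it further.
        System.out.printf("~%.1f MB/s sustained ingest%n", mbPerSec);
    }
}
```

A dashboard query is trivial next to keeping a pipeline like that healthy 24/7.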
I agree, that does not seem like a smart undertaking. I was thinking more of a dashboard within the existing software, or above it.
For my use case I wanted bespoke software to work with pixel art, but obviously I would not try to build Photoshop or Aseprite from scratch. I needed only specific functionality, and I was able to build it in a way that fits my workflow better than any existing software could.
I was able to build it with Claude Code and Codex. Maybe the implementation is sloppy, I did not care to check. The program works, and it's like a side project to my side project. It would not have been possible in the past, I would have needed to work with what Aseprite offers out of the box.
I'm basically ignorant of this entire space (I have mostly worked on SaaS products), so please forgive the question if it's too naive. But as someone (the first?) who has just experienced this new and rare way of bringing a design to life: are there any obvious process/tooling/whatever improvements you noticed that might make it less risky, and therefore less rare? Reading your blog posts, the Crowd Supply materials, the Xous docs, etc., the burning thought at the front of my mind has been "there needs to be a lot more of this". Is there a path towards that?
There's actually a whole space of shared-mask tapeouts. You might have heard of TinyTapeout [1]/LibreLane [2] and the general concept of "MPW" masks - multi-project wafer masks. These effectively share cost among hundreds of developers, bringing the cost of a tape-out down.
If you're lucky enough to have an affiliation with certain institutions, there are programs that basically give academics the experience I had for a nominal fee. TSMC has a FinFET program [3] which powers Soclabs [4] to provide an environment that exceeds Baochip's capabilities. If you look through [4], notice the block that says "Users' HW circuits" - that's basically what my logic is on Baochip. The problem with these is you need to be an academic, I think there isn't a clear path to commercialization, and of course there are lots of NDAs. China also has a program called "One Student One Chip" [5] where students can tape out quite sophisticated SoCs as part of their coursework.
It's probably just a matter of time before these academic programs yield a commercially compelling chip, and then that would pave a path for a transition program from the academic program to industry.
Another option: if Baochip is quite successful, it could itself serve as a "proof point" that encourages other companies to allow hitchhikers. When the co-designed IP works, it's a sales upside for the company, so there is some incentive alignment.
The trick is figuring out how to mitigate the possibility that the IP doesn't work, and bridging the gap between people with ideas and people with tape-out experience. I'm lucky in that in my first jobs out of college I did a deep dive into silicon, even designing custom transistors and standard cells for a bespoke nanophotonics PDK that I helped develop, so I have the shared language to communicate with both classic chip companies and the open source community.
There's an enormous cultural gap between the chip community and the open source community, but everyone's curiosity in this thread, and participation in this dialog with questions like yours, helps close that gap and thus manifest more hitchhiking opportunities in the future.
I read that line and thought "so, the solution is code review?". What has to happen to your processes that code review is not only missing, but unironically claimed to be the solution?
I know there are some companies that never did code review, but this is Amazon. They should know better.
This is going to end either with seniors rubber-stamping absolutely everything without even reading it, or with seniors blocking most of the slop for no overall productivity gain.
Or, if review is actually done, I think there will be a productivity loss. Juniors with the help of AI can generate more code than seniors have time to review in a full working day, so the seniors won't have time left for any other work.
No, it’s Amazon. If a senior blocks the slop, they will be told they are not "disagreeing and committing" enough. If bugs get through, they will be told they didn’t "dive deep" enough. It’s the classic Amazon blame game: someone gets left holding the bag for impossible asks (hint: it’s never the person doing the asking), and then gets PIP'd and fired.
Top people definitely do if they feel like it; why the heck shouldn't they? There is no shortage of work for them. But it's fine if the company, via its actions, signals it doesn't even want to retain its top talent. Just market forces and all that.
If you’re a senior at Amazon and your whole job becomes reviewing slop, well, you can likely get another job which does not revolve around reviewing slop. The current market is not great, but it’s disproportionately painful for juniors.