Hacker News | mandus's comments

Good thing git was designed as a decentralized revision control system, so you don’t really need GitHub. It’s just a nice convenience


As long as you didn't go all in on GitHub Actions. Like my company has.


Do you think you'd get better uptime with your own solution? I doubt it. It would just be at a different time.


Uptime is much, much easier at low scale than at high scale.

The reason for buying centralized cloud solutions is not uptime, it's to save yourself the headache of developing and maintaining the thing.


It is easier until things go down.

Meaning the cloud may go down more frequently than small-scale self-hosted deployments, but downtimes are on average much shorter on the cloud. A lot of money is at stake for cloud providers, so GitHub et al. have far more resources to throw at fixing a problem than you or I do when self-hosting.

On the other hand, when things go down self-hosted, it is far more difficult or expensive to have on-call engineers who can actually restore services quickly.

The skill needed to understand and fix a problem is scarce, so it takes semi-skilled talent longer to do so, even though the failure modes are simpler (but not simple).

The skill difference between setting up something that works locally and setting up something that works reliably is vast. Talent capable of the latter is scarce and hard to retain.


My reason for centralized cloud solutions is also uptime.

Multi-AZ RDS absolutely gives higher availability than anything I'd manage myself.


Well, just a few weeks ago we weren't able to connect to RDS for several hours. That's way more downtime than we ever had at the company I worked for 10 years ago, where the DB was just running on a computer in the basement.

Anecdotal, but ¯\_(ツ)_/¯


An anecdote that repeats.

Most software doesn’t need to be distributed. But we’re stuck in a growth paradigm where we build everything on principles that can scale to worldwide, low-latency accessibility.

A UNIX pipe gets replaced with a $1200/mo. maximum IOPS RDS channel, bandwidth not included in price. Vendor lock-in guaranteed.


“Your own solution” should be that CI isn’t doing anything you can’t do on developer machines. CI is a convenience that runs your Make or Bazel or Just or whatever you prefer builds, that your production systems work fine without.

I’ve seen that work first hand to keep critical stuff deployable through several CI outages, and it also has the upside of making it trivial to debug “CI issues”, since you can run the same target locally.
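A minimal sketch of that setup, where the CI config's only job is to invoke a script that runs anywhere (the `ci.sh` name and its contents are hypothetical stand-ins for a real Make/Bazel/Just entry point):

```shell
#!/bin/sh
# Sketch: CI is a thin wrapper around one entry point that any machine can run.
set -eu

# The build/test logic lives in the repo, not in the CI vendor's YAML.
cat > ci.sh <<'EOF'
#!/bin/sh
set -eu
echo "running tests"
echo "building artifact"
EOF
chmod +x ci.sh

# The CI job would run exactly this line; so can any developer machine
# when the CI service is down.
./ci.sh
```

The CI vendor's config file then shrinks to "check out the repo, run `./ci.sh`", which makes the vendor easy to swap out.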


> should be that CI isn’t doing anything you can’t do on developer machines

You should aim for this but there are some things that CI can do that you can't do on your own machine, for example running jobs on multiple operating systems/architectures. You also need to use CI to block PRs from merging until it passes, and for merge queues/trains to prevent races.


Yeah agreed, CI infra provides tons of value.

Ended up expanding this little quip into a blogpost to refer to in the future, feedback welcome! https://tech.davis-hansson.com/p/ci-offgrid/


Yes, this, but it’s a little more nuanced because of secrets. Giving every employee access to the production deploy key isn’t exactly great OpSec.


Every Linux desktop system has a keychain implementation. You can of course always use your own system if you don't like that. You can use different keys, and your developers don't need access to the real key until all the CI servers are down.


Yes. I've quite literally run a self-hosted CI/CD solution, and yes, in terms of total availability, I believe we outperformed GHA when we did so.

We moved to GHA b/c nobody ever got fired ^W^W^W^W leadership thought eng running CI was not a good use of eng time. (Without much question into how much time was actually spent on it… which was pretty close to none. Self-hosted stuff has high initial cost for the setup … and then just kinda runs.)

Ironically, one of our self-hosted CI outages was caused by Azure — we have to get VMs from somewhere, and Azure … simply ran out. We had to swap to a different AZ to merely get compute.

The big upside to a self-hosted solution is that when stuff breaks, you can hold someone's feet to the fire. (Above, that would be me, unfortunately.) With GitHub? Nobody really cares unless it is so big, and so severe, that they're more or less forced to, and even then the response is usually lackluster.


Compared to 2025 GitHub, yeah, I do think most self-hosted CI systems would be more available. GitHub has been going down weekly lately.


Aren't they halting all work to migrate to Azure? That does not sound like an easy thing to do, and it seems quite likely to cause unexpected problems.


I recall the Hotmail acquisition and the failed attempts to migrate the service to Windows servers.


Yes, this is not the first time GitHub has tried to migrate to Azure. It's the fourth time or something.


Doesn’t have to be an in-house system; just basic redundancy is fine. E.g. a simple hook that pushes to both GitHub and GitLab.
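You don't even need a hook for the simplest version: git supports multiple push URLs on one remote, so a single `git push origin` updates both hosts. A sketch (the repo URLs below are placeholders):

```shell
#!/bin/sh
# Sketch: mirror every push to two hosts via multiple push URLs on one remote.
set -eu

git init -q demo
git -C demo remote add origin git@github.com:example/repo.git

# Adding push URLs replaces the implicit one, so re-add GitHub first,
# then add GitLab. "git push origin" now updates both.
git -C demo remote set-url --add --push origin git@github.com:example/repo.git
git -C demo remote set-url --add --push origin git@gitlab.com:example/repo.git

# Shows one fetch URL and two push URLs.
git -C demo remote -v
```

If one host is down the push to it fails, but the other mirror still has everything, so you can keep working against it.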


It's fairly straightforward to build resilient, affordable and scalable pipelines with DAG orchestrators like tekton running in kubernetes. Tekton in particular has the benefit of being low level enough that it can just be plugged into the CI tool above it (jenkins, argo, github actions, whatever) and is relatively portable.


I mean yes. We've hosted internal apps that have four nines reliability for over a decade without much trouble. It depends on your scale of course, but for a small team it's pretty easy. I'd argue it is easier than it has ever been because now you have open source software that is containerized and trivial to spin up/maintain.

The downtime we do have each year is typically also on our terms, not in the middle of a work day or at a critical moment.


10:08:19 up 2218 days, 22:11, 4 users, load average: 0.00, 0.00, 0.00

It just workz [;


With a build system that can run on any Linux machine, and is only invoked by the CI configuration? Even if all your servers go down, you just run it on any developer's machine.


Reproducible builds have a pretty good track record for uptime :-)


Then your CI host is your weak point. How many companies have multi-cloud or multi-region CI?


This escalator is temporarily stairs, sorry for the convenience.


Tbh, I personally don't trust a stopped escalator. Some of the videos of brake failures on them scared me off of ever going on them.


You've ruined something for me. My adult side is grateful but the rest of me is throwing a tantrum right now. I hope you're happy with what you've done.


I read a book about elevator accidents; don't.


With people properly using them or not?

I am fairly certain that the vast majority come from improper use (bypassing safety measures, like riding on top of the cabin) or from something going wrong during maintenance.


elevator accidents or escalator accidents?


elevators. for escalators, make sure not to watch videos of people falling in "the hole".


I am genuinely sorry about that. And no, I am not happy about what I've done.


Not really comparable at any compliance- or security-oriented business. You can't just zip the thing up and sftp it over to the server. All the zany supply-chain security stuff needs to happen in CI and not be done by a human, or we fail our dozens of audits.


While true, the mistake we made was to centralize these services. Just imagine if git were centralized software with millions of users connecting through a single domain. I don't care how much easier or flashier it would be; I much prefer to struggle with the current incarnation rather than deal with headaches like these. Sadly, progress towards decentralized alternatives for discussions, issue tracking, patch sharing and CI is rather slow (though they all do exist), because no big investor backs them.


Why is it that we trust those zany processes more than each other again? Seems like a good place to inject vulnerabilities to me...


Hi! My name is Jia Tan. Here's a nice binary that I compiled for you!


This isn't really a trust issue. People tend to take shortcuts and commit serious mistakes in the process. Humans are incredibly creative (no, LLMs are nowhere close). But for that, we need the freedom to make mistakes without serious consequences. Automation exists to take away the fatigue of trying to not commit mistakes.


I'm not against automation at all. But if all of the devs build it and get one hash and CI runs it through some gauntlet involving a bunch of third party software that I don't have any reason to trust and out pops an artifact with a different hash, then the CI has interfered with the chain of trust between myself and my user.

Maybe I've just been unlucky, but so far my experience with CI pipelines that have extra steps in them for compliance reasons is that they are full of actual security problems (like curl | bash, or like how you can poison a CircleCI cache using a branch nobody reviewed and pick up the poisoned dependency on a branch which was reviewed but didn't contain the poison).

Plus, it's a high value target with an elevated threat model. Far more likely to be attacked than each separate dev machine. Plus, a motivated user might build the software themselves out of paranoia, but they're unlikely to securely self host all the infra necessary to also run it through CI.

If we want it to be secure, the automation you're talking about needs to runnable as part of a local build with tightly controlled inputs and deterministic output, otherwise it breaks the chain of trust between user and developer by being a hop in the middle which is more about a pinky promise and less about something you can verify.
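The "deterministic output" part can be checked mechanically: if the build is reproducible, two independent builds must produce byte-identical artifacts, so their hashes match. A toy sketch (the `build` function here is a deterministic stand-in for a real compile step):

```shell
#!/bin/sh
# Toy sketch of verifying a reproducible build: independently built artifacts
# must hash identically, or something injected nondeterminism.
set -eu

# Stand-in for a real build; a real one would need pinned inputs,
# a fixed toolchain, normalized timestamps, etc.
build() { printf 'artifact-contents-v1' > "$1"; }

build dev-machine.bin   # what a developer builds locally
build ci-machine.bin    # what CI produces from the same commit

# If these two lines show different digests, the CI pipeline has
# interfered with the chain of trust between developer and user.
sha256sum dev-machine.bin ci-machine.bin
```

With that property, CI stops being a trusted middleman and becomes just another party that can be cross-checked.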


I don’t use GitHub that much. I think the “oh no, you have centralized on GitHub” point is a bit exaggerated.[1] But generally, thinking beyond just pushing blobs to the Internet, “decentralization”, as in software that lets you do everything Not Internet Related locally, is just a great thing. So I can never understand people who scoff at Git being decentralized just because “um, actually you end up pushing to the same repository”.

It would be great to also have the continuous build and test and whatever else you “need” to keep the project going as local alternatives as well. Of course.

[1] Or maybe there is just that much downtime on GitHub now that it can’t be shrugged off


The issue is that GitHub is down, not that git is down.


Aren’t they the same thing? /sarc


You just lose the "hub" of connecting others and providing a way to collaborate with others with rich discussions.


All of those sound achievable by email, which, coincidently, is also decentralized.


Some of my open source work is done on mailing lists through e-mail

It's more work and slower. I'm convinced half of the reason they keep it that way is because the barrier to entry is higher and it scares contributors away.


Well it does prevent brigading.


Wait, email is decentralised?

You mean, assuming everyone in the conversation is using different email providers. (i.e. not the company-wide one, and not Gmail... I think that covers 90% of all email accounts in the company...)


Email at a company is very not decentralized. Most use Microsoft 365, also hosted in azure, i.e. the same cloud as github is trying to host its stuff in.


365 is not hosted in Azure. Some of the admin portals and workflows are, but the normal-employee-facing applications and APIs have their own datacenters.


For sure.

You can commit, branch, tag, merge, etc and be just fine.

Now, if you want to share that work, you have to push.


You can push to any other Git server during a GitHub outage to still share work, trigger a CI job, deploy etc, and later when GitHub is reachable again you push there too.

Yes, you lose some convenience (GitHub's pull request UI can't be used, but you can temporarily use the other Git server's UI for that).

I think their point was that you're not fully locked in to GitHub. You have the repo locally and can mirror it on any Git remote.


For sure, you don’t have to use GitHub to be that shared server.

It is awfully convenient, web interface, per branch permissions and such.

But you can choose a different server.


If your whole network is down, and you also don't want to connect the hosts with an Ethernet cable, you can even just push to a USB stick.
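That really is just a bare repo on the mount point acting as the remote. A sketch (`/tmp/usbstick` below stands in for a real mount like `/media/usb`):

```shell
#!/bin/sh
# Sketch of "push to a USB stick": a bare repo on removable media is a
# perfectly valid git remote, reachable over the local filesystem protocol.
set -eu

# The "stick": a bare repo at the mount point.
git init -q --bare /tmp/usbstick/repo.git
git -C /tmp/usbstick/repo.git symbolic-ref HEAD refs/heads/main

# Your working repo, with a commit made during the outage.
git init -q work
git -C work -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "work done during the outage"
git -C work branch -M main
git -C work remote add usb /tmp/usbstick/repo.git
git -C work push -q usb main

# A colleague mounts the same stick and clones from it.
git clone -q /tmp/usbstick/repo.git colleague
git -C colleague log --oneline
```

When GitHub comes back, `git push origin main` from either clone syncs everything up again.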


I'm on HackerNews because I can't do my job right now.


I'm on HN because I don't want to do my job right now.


I work in the wrong time zone. Good night.


SSH also down


My pushing was failing for reasons I hadn't seen before. I then tried my sanity check of `ssh git@github.com` (I think I'm supposed to throw a -t flag there, but never care to), and that worked.

But yes ssh pushing was down, was my first clue.

My work laptop had just been rebooted (it froze...) and the CPU was pegged by security software doing a scan (insert :clown: emoji), so I just wandered over to HN and learned of the outage at that point :)


SSH is as decentralized as git - just push to your own server? No problem.


Well, sure, but you can't get any collaborators' commits that were only pushed to GitHub before it went down.

Well you can with some effort. But there's certainly some inconvenience.


SSH works fine for me. I'm using it right now. Just not to GitHub!


Curious whether you actually think this, or was it sarcasm?


It was sarcasm, but git itself is a decentralized VCS. Technically speaking, every git checkout is a full repo in itself. GitHub doesn't stop me from having the entire repo history up to the last pull, and I can still push either to the company backup server or to a coworker directly.

However, since we use github.com for more than just git hosting, it is a SPOF in most cases, and we treat the outage as a snow day.


Yep, agreed - Issues being down would be a bit of a killer.


There is also PDM now, I recently learned, which is supposed to be a more modern alternative. An alternative to Poetry, that is; pipenv is yesterday's solution.

https://pdm.fming.dev/


Ah, that uses PEP-582-style package directories, that's interesting! Though I don't know if we need an alternative to Poetry, I'd rather we just standardized on one tool at this point.


That is actually a nice analogy! No one, not even pro Formula 1 drivers, would want to drive an F1 car in regular traffic. Much the same with these languages (APLs, Lisps, Forth, etc.); they have their place and role, but are better not used in regular open source or commercial code.


That is absolutely false. I have seen absolutely horrible "clever" code hacks in Java production code - absolutely opaque and extremely complicated.

Lisp is definitely a good choice for 'regular open source or commercial code'.


I was holding my old iPhone 3 in my hands a few days ago (one of my kids has collected a few phones around the house over the years). It felt so good, light and small. I really long for a phone in that form factor again; the big phones of today are just silly.


My guess is it's a proportionally low number. I looked into matlisp when I did my PhD in scientific computing; none of the other people I worked with had any interest in it. I couldn't pursue it, since I would have had to reimplement or interface to each and every library myself, a workload I couldn't justify at the time.

So, although lisp might have been great, in the end I stayed with Python/C++. Guess I'm not the only one with similar experiences.


There are still days when I miss fvwm. Not because it was that great, but because I used it for so long that my fingers still remember the keyboard shortcuts I configured. But I moved to OS X and just had to adapt. I guess moving on isn't always the most important thing; staying productive is. If that means staying with some old software, I'm all for it.

