
Yes. It definitely gave me 'Get the Facts' vibes, the Microsoft Linux-smearing campaign from circa 2004 ([1]).

1. https://en.wikipedia.org/wiki/Criticism_of_Linux#Criticism_b...


First of all, kudos for trying to find new ways to fund open source. Those are needed.

This approach might work well for issues that require a lot of thought but produce a simple fix that can be easily merged.

This might go horribly wrong if outside contributors try to implement a major feature in a project they don't have a good mental model of, and produce a large patch that gets rejected by maintainers or ends up being rewritten in code review, which would cost the maintainers more time than if they had implemented the feature themselves in the first place.

It would be interesting to follow!


Got to start somewhere; remember that you can always specialise in a project and pick up repeat work.

You could gate certain jobs behind certain experience levels as well.

I agree this will be interesting to follow.


Bloomberg maintains an excellent dashboard that tracks vaccination in the US and worldwide ([1]). It shows that California is one of the slowest states, with only 37% of its received vaccine doses administered. The top five states are North Dakota (77%), West Virginia (74%), Oregon (61%), South Dakota (61%) and Texas (60%).

1. https://www.bloomberg.com/graphics/covid-vaccine-tracker-glo...


As a Republican in Washington State, I have a respect for Oregon's Democrats - they work hard for their citizenry.


I don't know that the percentage of received vaccine administered is quite the right metric: I tend to think doses administered per capita is the key number.

But whatever number you look at, California is a disgrace. We've fucked this colossally. The only states that are worse than we are are poor, rural, and low education.


> The only states that are worse than we are are poor, rural, and low education.

So are all of the top states, to be fair, excepting maybe Oregon (those are somewhat loose terms at a state level).


It's not really true that all of them are: Alaska and North Dakota are actually quite wealthy per capita thanks to petroleum. Connecticut is also one of the states handling vaccination quite well, and it's quite urban. But a number of the states doing well with vaccinations are quite poor and rural, which just makes it all the more damning that California is doing so terribly.

As far as I can tell, it's not anything particularly to do with any demographic factor: other big states like New York and Texas are doing much better than California. States that are both deeply red and deeply blue are doing much better than California. California is just... failing. We're underperforming every state that might reasonably be a comparator.


Excuse me?

While my account is relatively young (13 days), it's not one day old. You can also look at my comments over that time. So the accusation is clearly off.


Snaps are tied to the proprietary Ubuntu store, and they are not available on all Linux flavors. For instance, I don't see snapd on Alpine Linux:

  $ apk search snapd
  $
Not cool, I think.


An Alpine Linux core dev appears to maintain the Alpine package for Certbot. I don't see it being deprecated anytime soon.

The snap is very different and relies on systemd, which isn't commonly found on Alpine.


Copying their requirements as text (original grammar preserved, including GB and Gb confusions).

It's a good glimpse of what unlimited cloud budget does to companies. I quote:

"""

* 40x i3.Metal instances (64 vCPU's, 512 Gb ram (more ram, of course, better for those)), 14 Tb nVME e/a (Scylla cluster)

* 70-100x (96 vCPUs, 768 GB RAM) - 4 Tb nVME e/a (PSQL cluster, could use more RAM and CPU, but not easy to get)

* 300-400 various other instances, less picky, generally 8-16 cores w/ 32-64 Gb of ram as available

* Internal traffic ~300-400 Gb of traffic/minute

* External traffic ~100-120 Gb of traffic/minute

"""


Wait, is this a joke? This sounds like the worst-built infrastructure I've ever heard of, where they had more money than engineering sense.

Also, Gbpm.


I don't believe it's a joke. It still might be a deliberate lie (a random twit is not a real source of knowledge), but so far this is consistent with what we already know. See https://news.ycombinator.com/item?id=25769730


By my calculations, this is somewhere between 11,680 and 18,560 vCPUs and 83,840 to 122,880 GB of RAM they're asking for.

Seems rather excessive for a mid-tier social networking website.
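
For reference, a quick back-of-the-envelope check of those ranges, using the instance counts and sizes from the quoted list above:

  $ # vCPUs, low / high estimate
  $ echo $((40*64 + 70*96 + 300*8)) $((40*64 + 100*96 + 400*16))
  11680 18560
  $ # GB of RAM, low / high estimate
  $ echo $((40*512 + 70*768 + 300*32)) $((40*512 + 100*768 + 400*64))
  83840 122880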


What's that total up to in "expected AWS bill per week", say?


It's at least 6 digits a month for the first item alone, not including bandwidth, IPs, NAT gateways, or load balancers.
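
A very rough sketch of why, assuming an on-demand i3.metal rate of roughly $4.99/hour (an assumption based on us-east-1 list pricing; reserved or spot would be cheaper) and about 730 hours in a month:

  $ # 40 instances x hourly rate x hours per month
  $ echo "40 * 4.99 * 730" | bc
  145708.00

So roughly $145k/month before the PSQL cluster, the 300-400 other instances, and any of the networking charges.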


What language is this thing written in that it requires this kind of computing power? Is it an Electron app? /jk


It’s the server side


I know, it was supposed to be a joke but I guess I didn't manage to get it across that way.


I thought it was clearly a joke, implying they had Electron running server-side, maybe even running their app through Cypress in production for time-travel debugging?

But I actually do want to know the language(s) and framework(s).


Recently, I migrated my personal dev laptop from Ubuntu to Alpine Linux. It took a day, but everything works now, including hidpi stuff.

No big issues so far and I am in the process of migrating my home server to Alpine.


Debian policy is very sane (no network access during build), but it does seem like modern software just assumes that the Internet is always available, and all dependencies (including transitive) are out there.

The assumption is a bit fragile, as proven by the left-pad incident ([1]). Whatever the outcome of the discussion in Debian, I hope it keeps the basic policy in place: not relying on things outside of its immediate control during package builds.

1. https://evertpot.com/npm-revoke-breaks-the-build/


Debian is incredibly conservative about versioning/updates and faces a lot of pressure to move faster. I hope they keep the same pace or even slow down.

The world will keep turning.


> Debian policy is very sane (no network access during build)

openSUSE has that policy, too. And I'm pretty sure the same applies to Fedora.

You don’t want to rely on external dependencies during build that you can’t control.

That would be a huge security problem.


The whole "download during build" thing is a minor issue; k8s, for example, puts all their dependencies in the /vendor/ directory, and AFAIK many toolchains support this or something like it. And even if they don't, this is something that can be worked around in various ways.

The real issue is whether or not to use that vendor directory, or to always use generic Debian-provided versions of those dependencies, or some mix of both. This is less of a purely technical issue like the above, and more of a UX/"how should Debian behave"-kind of issue.
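
For anyone unfamiliar with the vendoring mechanism mentioned above, a minimal sketch of how it looks with Go tooling (these are generic Go commands, not a claim about how k8s or Debian actually drive their builds):

  $ go mod vendor          # copy all (transitive) module dependencies into ./vendor/
  $ go build -mod=vendor   # build strictly from ./vendor/, no network access needed

Once vendor/ is committed, a "no network during build" policy is easy to satisfy.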


I don't think that aspect of Debian Policy is in any danger of changing, nor should it.


It’s also not very Debian-specific. It applies to openSUSE as well, for example.


Key quote: "We have become extremely dependent on conda and conda-forge. We must think of their sustainability."

Then it talks through the steps being taken to reduce dependency on Anaconda Inc. While the company has shown a lot of goodwill in the past, there's always the future: it could get acquired, and the new owners might not be as well-meaning as the current ones.


That sounds great, but it's a very fragile state of things.

While it certainly makes the lives of Iranian developers easier, it does not make it a good idea to put their code there: laws change, and sometimes quickly.


Laws change and there are also a bunch of other ways to get banned from your code on GH. And once that happens, you have nowhere to go.

Much easier to migrate to a Gitlab instance. And they know this! Which is why it's so fun to see Github dancing around these issues lately. Finally some healthy competition.

I'd love to know how many times MS have tried to buy Gitlab. :D


[flagged]


Can you please not post in the flamewar style to HN? We're trying for something else here.

https://news.ycombinator.com/newsguidelines.html


You seem to forget about Issues, Milestones, Projects, Wiki docs and more.


Probably a naive question, but is there some way for all these elements to be held in a git repo as well? Then you could just move all your issues/trackers to another platform.


There are some approaches, e.g. https://github.com/dspinellis/git-issue

But I don't think there's much standardization around this type of git usage, and I'm not sure how efficient it would be for large repos.
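
To illustrate the general idea only (this is not git-issue's actual on-disk format, which I haven't checked): issues can live as plain files committed next to the code, so every clone carries them.

  $ mkdir -p .issues/open
  $ echo "Crash on startup when the config file is missing" > .issues/open/0001.md
  $ git add .issues && git commit -m "issue 0001: startup crash"

The obvious downside is the lack of a web UI and of any standard format that other tools understand.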


It can probably be done, but some projects have thousands of issues and thousands of PRs with dozens of comments in every thread.

It doesn't seem like a good idea to make your git repo store all that data.


And users. GH, GL, BB are kind of dev social networks. Project assets can be archived/mirrored easily with tools or API scripts, but there is no way to link them back to live users. The community needs to be rebuilt at the new place, and that is a lot of effort.


On the other hand, hopefully the more contact Iranians have with the outside world, the more they will petition their government for peaceful relations with other countries. Obviously GitHub access isn't going to make the difference, but rather lots of these kinds of things in aggregate.


You make a good point, I'm surprised to see it downvoted.

It's always a risk to put IP in a bucket you may lose access to just because of politics.


Git is a distributed system, so even if you lose access there isn't a huge data loss.


Unfortunately, GitHub is 50% git and 50% proprietary features that you don't control and can't neatly export to other platforms. All these git hosts are walled gardens. It's a sad state of affairs, but not really limited to git (Gmail is a walled garden despite the email standard, messaging apps, etc.).


GitHub has some great management tools for reviewing code and integrating with various services. But so do Gitlab, Bitbucket... and I'm sure there are more. They aren't 1-for-1 replacements, but they do exist. I'd personally recommend against using a ton of integrations that tightly bind you to any one service.


Yes, they all have great tools on top of git, but they are all different and hard to transfer between platforms.


Yes, making daily on-premise backups would mitigate the risk of losing source code.

That applies to everyone, not just Iranian developers: set up daily backups of all your code.
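
A minimal sketch of one way to do that (the repo URL and backup path here are hypothetical): keep a mirror clone on a machine you control and refresh it from cron.

  $ git clone --mirror git@github.com:example/project.git /backups/project.git
  $ crontab -l
  0 3 * * * git --git-dir=/backups/project.git remote update --prune

The mirror contains all branches and tags, so it can later be pushed to any other host.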


Even more than that: as long as one person has the repo cloned, you can bootstrap the entire project again, since any single clone has the entire project history up to the most recent point it was fetched. Git is neat that way.


That's not necessarily true. If your organization has tens of repositories with multiple important branches in each, odds are that at least some of those branches would be lost.

Proper backups of all repos are an answer, of course.


The way we use git, master has everything that is production with short-lived feature branches for development work. Not needing to worry about git backups is perhaps the least of the benefits of this approach (and no real drawbacks as far as I can tell).


That's true if you use it the way its creator does, but not so much when you replace git send-email with GitHub's proprietary extensions (issues, etc).


Thankfully git is a distributed VCS. You just push the latest version somewhere else.
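
For example (the GitLab remote here is hypothetical):

  $ git remote add backup git@gitlab.com:example/project.git
  $ git push backup --all && git push backup --tags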


Yes, as answered in another branch, it's possible and reasonable to set up continuous / daily backups if you're using a hosted Git service (GitHub or not). This will mitigate the risk of losing access to the code.

It's not advisable for these Iranian developers to use any GitHub-specific features, such as issues, wikis, or CI, because losing them would cause disruption / knowledge loss.

And then there is little reason to use GitHub specifically, instead of something else.


You don't understand: there is no need to set up backups. Every user has a full "backup" of the repository (unless using sparse checkouts or other niche configs).


It really depends on how small or large your organization is.

If it's a single repo with a single branch, sure. No need for explicit backups.

If it's tens of repos with multiple important branches in each, then it would be very dangerous to assume that developer machines have all of them.


This is true for the source control aspects of Git, but not all of the project management aspects of GitHub. (wikis, gists, gh-pages: yes. issues, pull requests: no)


Does Iran not have its own online git service after such a long time? I imagine it's not too difficult to set up a barebones git hosting service (without the hub functions, obviously).


I suspect that getting the same level of availability and trust that folks have in GitHub, 100% inside Iran using local hosting providers / infrastructure, is actually a bit difficult.

I think it's kinda hard to compete with the big boys with limited resources / footprint, even with / perhaps because of sanctions.


Right? Unless someone was actively squashing sites down, it doesn't take much to attach a CRUD app to a directory of git repos.

