Some years ago I thought it would be interesting to develop a tool to make a Python script automatically install its own dependencies (like uvx in the article), but without requiring any other external tool, except Python itself, to be installed.
The downside is that there are a bunch of seemingly weird lines you have to paste at the beginning of the script :D
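To give a rough idea of what that boilerplate can look like, here is a minimal sketch of the general approach (not the actual pysolate code; the package list and helper name are made-up examples): the script checks its own dependencies at startup and installs any missing ones with the running interpreter's pip before continuing.

# Hypothetical bootstrap header pasted at the top of a script
# (an illustration of the idea, not pysolate's actual code).
import importlib.util
import subprocess
import sys

REQUIREMENTS = ["requests"]  # example dependency; import name == package name here

def _ensure(packages):
    """Install any missing packages using the running interpreter's pip."""
    missing = [p for p in packages if importlib.util.find_spec(p) is None]
    if missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", *missing])

_ensure(REQUIREMENTS)

# The actual script starts here and can rely on its dependencies.
import requests  # noqa: E402
print(requests.__version__)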
Helping improve the spec and all is great, but being 100% honest, as a user, I would rather have a type checker I can bend to my needs. As you said, some code patterns in a dynamic language like Python are difficult, or even impossible, to type-check without custom code. Type checkers are becoming more popular than ever, and this implicitly means that these code patterns are going to be discouraged. On one hand, I believe the dynamism of Python is core to the language. On the other, I would never want to write any collaborative piece of software without a type checker anymore. Therefore, to get the benefits of a type checker, I am occasionally forced to write worse code just to please it.
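To give a concrete (hypothetical) example of the kind of pattern I mean: attributes created at runtime are perfectly normal Python, but a type checker cannot see them without custom support, so you end up restructuring the code just to please the tool.

# A common dynamic pattern that static type checkers struggle with:
# attributes are created at runtime from data.
class Record:
    def __init__(self, fields: dict[str, object]) -> None:
        for name, value in fields.items():
            # These attributes only exist at runtime; a type checker has
            # no way to know that r.host or r.port will be present.
            setattr(self, name, value)

r = Record({"host": "localhost", "port": 8080})
print(r.host, r.port)  # works at runtime, but flagged by a type checker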
Considering how fast uv and ruff took off, I am sure you are aware of the impact your project could have. I understand that supporting plugins is hard. However, if you are considering adding support for some popular libraries, IMHO, it would be really beneficial for the community if you could evaluate the feasibility of implementing things in a somewhat generic way, which could be then maybe leveraged by third-party authors.
Out of curiosity, do you have experience with other languages that have type system plugins that you'd hope could be used as inspiration for something in Python?
I don’t have any such experience (short of a macro system, which requires code generation or runtime support) and it always makes me curious when people ask for type system plugins whether this is a standard feature in a type system I’ve never used.
To add to the complexity, you have to worry about not just which language you're analyzing, but also which language the type-checker is implemented in.
So if we were to do this for ty, we would have to carefully design the internal data types and algorithms that we use to model Python code, so that they're extensible in a robust way.
But we would also have to decide what kind of Rust plugin architecture to use. (Embed a Lua interpreter? dlopen plugins at runtime? Sidecar process communication over stdin/stdout?)
Solvable problems, to be sure, but it adds to the amount of work that's needed to support this well — which in turn affects our decisions about whether/when to prioritize this relative to other features.
Can you either give some additional details on the code patterns you're talking about, or link to some 'typical' examples? I do appreciate the flexibility of being able to just write code without being overly sensitive to jumping through typing hoops, but I can't think of any place where I've actually used algorithms or specific code patterns that rely on untyped-ness to actually work at run time. I'd be very interested in trying to work through what is actually required to consider these code patterns as well-typed.
IMO creating custom rules is problematic - when projects import external code, rule conflicts become inevitable. C++'s type system might be complex, but at least there's consistency across header files within a project.
Regarding type checkers: while I don't love optimizing code just to make them run faster, most Python patterns can be implemented in statically checkable ways without much compromise. The benefits typically outweigh the costs. Python's dynamic features are powerful but rarely essential for everyday tasks.
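As a hypothetical illustration of that point, the usual dynamic-attribute idiom can be rewritten with declared fields (for example as a dataclass) at the cost of a little ceremony, and then any type checker can follow it:

# A statically checkable version: the fields are declared up front,
# so accesses like r.host and r.port are visible to the type checker.
from dataclasses import dataclass

@dataclass
class Record:
    host: str
    port: int

r = Record(host="localhost", port=8080)
print(r.host, r.port)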
This was brought up in one of the previous discussions on HN [1], and people found that this project does indeed seem to have copied from the original coreutils. There were some names of variables/constants taken from the original code [2]. To be clear, I am not implying that they are violating copyright (as someone else said, not doing a clean-room implementation does not necessarily imply violating the original license). However, I find it very sad that they replaced the license and are effectively damaging the GNU project. (It is also a bit sad to see your comment, which expresses a perfectly valid concern, down-voted.)
I wonder what is the official position of the GNU project about this though.
More than the limit, what seems absurd to me is that businesses seem to be allowed to refuse cash as a form of payment. I am reading the translated version of the article, and it says:
> A seller can decide not to accept cash above a certain amount. Or not accepting cash at all.
Why would not accepting cash at all be allowed? What is the alternative? Mastercard, Visa, etc.? If, for example, a supermarket chain stops accepting cash and starts accepting only Visa, and for whatever reason I do not have a Visa credit card, or worse, I am banned by Visa, then I cannot buy groceries anymore?
Also, I am not exactly sure everybody has access to a credit card. Maybe I am missing something here, but this seems a bit stupid/crazy to me.
Not every detail of a business's operations needs to be regulated by the government. I can think of many undue costs this could impose on random small businesses: Stripe + an iPhone should be more than enough for some random business working at some event, but this forces them to carry a change box, manually write down every transaction/receipt, risk theft, etc. Always comes with the best intentions though.
It is tiring that this keeps happening. These media outlets publish misleading titles and abstracts implying that Telegram is not secure, knowing very well that most people don't even read the articles. I assumed good faith for the first ~100 times, but it is always the same :D
Anyway, wouldn't it be better to link to the academic source ( https://mtpsym.github.io/ ), or at least to remove the clickbait titles?
It's tiring to see people claim Telegram is secure, e.g. "because it hasn't been hacked yet" :D These people don't realize Telegram is front-doored by design; it leaks 100% of your chats to the Mark Zuckerberg of Russia, just like Facebook Messenger leaks 100% of its messages to the Mark Zuckerberg of the USA.
I did not claim Telegram to be secure; that has nothing to do with what I said. Moreover, saying that something "is secure" does not make much sense without specifying secure against what.
Assuming you are in good faith, I will try to explain better: The title of the article states there are vulnerabilities in the encryption protocol.
According to RFC 4949 a vulnerability is:
> A flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy.
Clearly, stating that there are vulnerabilities in Telegram's encryption protocol raises concerns, a lot of confirmation bias among Telegram haters, and leaves people who only read the titles with the feeling that Telegram encryption is vulnerable to attacks.
However, among the 4 flaws reported by the researchers, 3 are not exploitable ("This attack is mostly of theoretical interest", "Luckily, it is almost impossible to carry out in practice", "Luckily, this attack is also quite difficult to carry out, as it requires sending billions of messages to a Telegram server within minutes") and the other one is about reordering encrypted messages.
Therefore, a fairer headline, which would undoubtedly raise less interest, could be "Researchers found a way to change the order of your Telegram messages, even if they still cannot read them", or "Researchers found some purely theoretical or almost impossible to carry out vulnerabilities in Telegram's encryption protocol".
And don't even get me started on the fact that literally everybody, including expert security researchers, feels entitled to bash Telegram for having rolled their own crypto at every chance they get.
>leaves people who only read the titles with the feeling that Telegram encryption is vulnerable to attacks.
I agree with you that these attacks are not so severe that they completely broke Telegram. But it is living proof Telegram authors don't have the know-how on how to implement secure protocols. If you heard some bridge builder had replaced every third bolt with fifty zip-ties, you wouldn't be defending the bridge; you'd want to know who the f is overseeing that project, and to ensure the entire design was being reconsidered and that qualified engineers were working on the fixes.
This set of vulnerabilities isn't an indication that Telegram's encryption is bound to have a breaking vulnerability. It's saying they don't have the qualifications to protect the data we know sits on their servers effectively in plaintext. And I'm saying effectively, because sure, it's encrypted, but the database key sits in RAM, 4cm away from the CPU, and is one privilege escalation vulnerability away from compromise.
Your use of the term "Telegram hater" does a disservice to everyone, because you're lumping together people with no tech background parroting headlines and legitimate concerns from people who've actually spent time looking into this on a technical level.
> But it is living proof Telegram authors don't have the know-how on how to implement secure protocols
I strongly disagree with this claim. Can you back your claim with some evidence? The vulnerabilities shown here are mostly purely theoretical; I don't see how this goes to show that Telegram engineers are incompetent.
What I see is that Telegram engineers chose to ignore what the Computer Security academic community regards as best practices, and this has led to an infinite amount of criticism (including from the authors of the vulnerabilities we are discussing). Despite this, in the ~8 years since launch, the only serious vulnerability I am aware of was discovered, and immediately patched, right after Telegram was first launched.
This set of 4 vulnerabilities isn't the issue with Telegram. Vulnerabilities can often be patched. The issue is the fundamental way Telegram functions.
Finally, I'm a bit puzzled, you seem to be "open minded" yet your post didn't even touch on this massive issue of failure to provide E2EE for groups, desktop clients, or anything by default. Were you unaware of it? Or would you argue the endless list of competition that actually does E2EE properly (Signal, Wire, Threema, Element...), over-do security?
You're also not even remotely interested in agreeing with the academic community, but instead just observe and basically imply: "no breaches have been made public, therefore it must be secure". How familiar are you with the field of computer security, do you know how security is quantified?
Let's recap what is happening here, because we are going a bit off-track with this discussion.
My original post was about the fact that I am tired of media outlets making borderline denigratory titles all the time about Telegram.
You replied, stating that I claimed that "Telegram is secure", which I did not do. Then, I tried to clarify my original post.
Then you claim that these vulnerabilities show that "Telegram authors don't have the know-how on how to implement secure protocols". I asked you to back your claim, because I don't see how the discovery of a bunch of "almost impossible to carry out in practice" vulnerabilities might imply that Telegram's engineers are incompetent.
To which you reply that "Telegram isn't end-to-end encrypted by default". Now, unless I am missing something obvious here, you just stated a fact that has no relevance whatsoever to your former claim. The claim to prove was "Trivial vulnerabilities discovered --> Telegram authors are incompetent". Now, if you changed your mind, and want instead to argue that they are incompetent because they did not implement e2ee by default, it's a totally different discussion and has no relation at all to my original post, nor to the article we are commenting on (imo).
> Finally, I'm a bit puzzled, you seem to be "open minded" yet your post didn't even touch on this massive issue of failure to provide E2EE for groups, desktop clients, or anything by default. Were you unaware of it?
I am aware of how Telegram works. But why do you suggest I should have talked about this? It is totally unrelated to my original point.
> Or would you argue the endless list of competition that actually does E2EE properly (Signal, Wire, Threema, Element...), over-do security?
I never stated such a thing.
> You're also not even remotely interested in agreeing with the academic community
It's not that I am not interested in agreeing with them; I am openly criticizing the behaviour of some of its members. It's a different thing. But this is also a different discussion, and maybe I should not have included that comment.
> "no breaches have been made public, therefore it must be secure".
I did not claim this.
> How familiar are you with the field of computer security, do you know how security is quantified?
Please do not patronize me.
Finally, I am not interested in having a discussion that is unrelated to the topic of the article, or to my original comment about it (because it would be too long and tiring). However, if you want to know my opinion on all these related issues that you brought up, you can read what I wrote about it here: https://germano.dev/whatsapp-vs-telegram/ (even though this does not talk about Signal or other open source e2ee messengers).
>Now, if you changed your mind, and want instead to argue that they are incompetent because they did not implement e2ee by default, it's a totally different discussion and has no relation at all to my original post, nor to the article we are commenting on (imo).
No, I didn't change my mind. The incompetence is all around. Both the presence of these vulnerabilities AND the fact that Telegram's E2EE is practically non-existent tell of the incompetence. The vulnerabilities here are not the major problem; the major problem is that focusing on the vulnerabilities is seeing the trees without the forest.
If every time there is a discussion about Telegram's issues we only focus on the narrow set of already fixed vulnerabilities, there's never room to discuss the elephant in the room: that the whole game is rigged. The backdoor is massive, right in front of us, and nobody's doing anything to fix it. These security issues do not matter until the glaring hole is fixed.
>Please do not patronize me.
That wasn't my intention. I was genuinely interested. Because if you look at the infosec bubble on Twitter with big names like Matt Green, JPA et al. they all know about these issues yet don't even bother to name them. It's like the uncle you never talk about.
Given that you wrote your article before Signal had even desktop clients, I don't think it's even remotely up to date to vouch for any kind of fruitful discussion. But! Let me know if you update it at some point, I'm sure I'd like to read it then!
> there's never room to discuss the elephant in the room: that the whole game is rigged. The backdoor is massive, right in front of us, and nobody's doing anything to fix it
I am tempted to take the bait, and ask you what would be this massive backdoor, which nobody has time to discuss. If I am guessing right, you are still referring to "no default E2EE".
In that regard, I would encourage you to consider that not everybody has the same security requirements, and many people are fine trusting Telegram and with the security it provides.
Personally, I cannot wait for Matrix to become more widely adopted, and for the UI/UX of its clients to become remotely comparable to Telegram's.
Anyway, since it doesn't seem our discussion is going anywhere, maybe it's time to stop.
Thank you for the chat, I liked how we managed to stay polite even though we completely disagree :)
> Given that you wrote your article before Signal had even desktop clients, I don't think it's even remotely up to date to vouch for any kind of fruitful discussion
Yeah, I intentionally did not want to compare it to Signal (because the article was already too long that way).
>many people are fine trusting Telegram and with the security it provides.
So here's my concern: They would not be fine with waking up one morning with their entire message history out in the open after a massive hack. Surely you can't argue Telegram will never be hacked. Facebook has had multiple data breaches and I've never heard anyone be happy about that. This is what I've had to be second hand witness to https://www.wired.com/story/vastaamo-psychotherapy-patients-... I've seen the devastation someone's most private life out in the open does to them. I can't think of many things more terrifying than that.
There's a reason I made TFC (my work) E2EE by default. There's a reason Signal, Wire, Threema, Element, WhatsApp, and Session all felt they didn't want to be liable for user data.
>Personally, I cannot wait for Matrix to become more widely adopted, and for the UI/UX of its clients to become remotely comparable to Telegram's.
Yeah, Element is improving and will get there, and Signal's polishing the UX, hopefully adding usernames etc. by the end of the year.
>Thank you for the chat, I liked how we managed to stay polite even though we completely disagree :)
It may seem an advantage to use the latest cutting edge features of a single platform, instead of using well established standards which are compatible with every browser. Sure, you are going to leave out some small minority of users, but you gain access to many new features.
However, you are helping push the web become an increasingly centralized place, controlled by just a few entities, with interests which are very different from yours.
You may think that there is no harm in doing so. Most people use Chrome anyway. And what difference can one more web app make?
However, it is exactly this laziness by skilled developers, who are the only ones that understand the problem, which brought us to the current situation. There is no way to fix this problem if the people that understand it do not take a stand.
Next time your manager asks you if you can have that sweet feature, instead of saying "sure, we just need to drop support for Firefox", please consider trying to explain what the consequences are in the long term.
I know this isn't easy for many people, who do not feel comfortable questioning orders or plans. However, this is our responsibility. Nobody else is going to care, if we do not care.
Then in October they added: "Insertable streams is worth prototyping".
Also: "I will remind people that this isn't the place for advocating for what gets implemented in Firefox. This is something that the media team needs to work out."
then "closed 26 Oct 2020"
But, hey, it's June 2021, and you just got a new UI redesign which nobody wanted (I guess, except the managers who invented it) and which uses up more vertical space. (Hint: setting browser.proton.enabled to false in about:config helps at the moment.)
Even the iOS version of Firefox got a UI redesign. One has to mind the priorities! The color of the shed is always the biggest impact a manager can bring!
Firefox has serious performance issues; I had to switch to Chromium.
I really want to support Firefox in my development, but their tooling just isn't presented in a rational footprint. When I inspect a Vue proxy object, I don't want to see all the setters and getters.
They are losing mind share because neither the user-facing components nor the dev-facing ones have a well-considered presentation or performance.
Most of the money is paid to the same managers in both cases. The total company expenses in the same period mostly aren't dependent on the nature of the changes implemented. The managers just make their managerial decisions about what to set as the goals.
And apparently the board gives that a thumbs up. That's the scope of the problem: "look, we make the UI changes" is the "color of the shed", an easy-to-understand illusion of "something" being done.
If there wasn't one manager at Mozilla who, during the pandemic with all the working from home being done, said to himself "maybe we should look into our WebRTC and video codec stack", that is just sad. No excuse, that's just plain bad management.
Money is fungible; employees are not. Whether others at Mozilla are tasked with implementing this API, or Mozilla (or someone else) contracts Igalia to do it, the employees responsible for and qualified to work on UI are still going to get paid and will still have other work to do.
As far as I know Mozilla is not a foundation for charity towards unemployed developers. So their goal is not finding work for their existing employees, their goal is improving the browser in meaningful ways. They're free to lay off and hire people to do this.
Also honestly any developer worth keeping could figure out the task at hand given sufficient time. So to talk about developers as if there's a developer who can only alter the rendering of tabs and what not is kinda silly.
Actually, Apple has many developers on this team, and Firefox is an open source project.
If we could create a feeling that companies that claim to be developer friendly make sure that FF is also compatible, it would be a huge win for all involved.
Firefox is an open source project, but pushing large changes upstream is difficult (and this is true of pretty much any project). Even if Apple had the patches, Mozilla might not take them.
While at first glance it would seem strange to expect Apple to make these changes, I don't feel it's unreasonable from any perspective. Apple should hold a long-term interest in keeping the web diverse; Safari will never reasonably hold a majority/plurality marketshare, so their second highest priority from a revenue/ux perspective should be "sticking to the standards" and helping toward ensuring web developers don't take on a "Chrome and nothing else" stance.
Granted, it's also understandable that Apple officially working on Firefox would be "article on The Verge" level of news, and even an armchair commentator would be able to connect the dots from what they're working on to predicting FaceTime was coming to the web. Though, isn't that what Jobs originally promised? Open source protocol and such?
At the end of the day, I'm sick and tired of the prevailing hyper-endstage-capitalist excuse of "they won't make a billion dollars from doing this, so not only will they not do it, but they SHOULDN'T". It's everywhere on HN, and it's actual brain worms. Corporate decisions shouldn't only be analyzed through this lens; there's a far broader humanistic lens that codifies a higher standard that we absolutely can reasonably hold all companies to; not from a legal sense (HackerNews isn't a court and your votes are not a jury decision, some armchair commentators need to be reminded), not even from a general population public relations sense; but from a viewpoint that Ethics is not a democracy. There are some ethical positions that won't make money, aren't required by law, and aren't even popular, but are nonetheless crucially important to avoiding a Blade Runner-esque corpo-cyberpunk future (or, with some very legitimate issues, species extinction, or at least to achieving and maintaining a high standard of living for most of our species).
The sibling argument that Firefox's CEO making a ton of money is reprehensible is... I mean, jeeze, they make awesome software, open source, freedom respecting, privacy respecting, and manage to pay their leadership & employees well? Isn't that the dream? That should be the goal, not derided. There's a middle ground between hyper-endstage-brain-worm-capitalism and "all software is developed by starving monks in a monastery". I understand it's hard to believe this, because it isn't an extreme; it's easy to let gravity drag your ethical viewpoint to an extreme on the left or right in this age of outrageous social media, but neither extreme on any ethical dimension is conducive to a positive future for humanity.
A significant number of websites ignore Safari on mobile, not because it's Safari, but because it's mobile. Not necessarily with a big banner that says "Please use a desktop", but rather with a half-assed layout.
Within the US, Safari and Chrome mobile have roughly equivalent marketshare, recently with an edge to Safari. Globally, Chrome mobile is significantly larger than Safari mobile.
None of that actually matters though; Firefox, Safari, and Edge all deploy advanced analytics blocking features which distort their marketshare. In many instances, these blockers self-report their browser as Chrome, as a "blend in with the crowd" strategy.
When it was reasonable for Apple to do so, Apple distributed Safari for Windows. I don't remember exactly anymore; it could even have been before Chrome on Windows existed? "Apple's Steve Jobs first announced Windows PC support in Safari 3.0 at Macworld Expo in 2007."
"Google Chrome first release: 2008-09-02." "We've used components from Apple's WebKit and Mozilla's Firefox, among others" (1)
If there's ever a reason for Apple to be more involved in a Windows browser, that's still an option.
But implementing some functionality in a third browser on Windows... why should they? The two that already have the feature are made by the competing companies.
There is an interesting inverse correlation between Mozilla's CEO salary and the number of users Firefox has.
I think it's normal to pay a competitive salary too, but the salary should reflect one's impact on the company. Looking at the current state of Firefox, I can't imagine why their CEO is compensated as they are.
What is their revenue source? And what is the reason cash flows from that revenue source if not Firefox? If Firefox did not exist, Mozilla would have no reason to be considered by anyone about anything.
Exactly: they could equally well just skin Chromium and keep on setting Google as the default search engine. That's exactly my point: Firefox isn't the revenue source, the auxiliary services around it are.
The salary of their CEO has absolutely no bearing on it being open source. None. Look at Tim Apple's salary, and yet Darwin, Webkit, CUPS, and other projects are open source.
I can count the number of developers I've worked with over the last 5 years that care about it working in any browser other than Chrome on one finger.
This idea that only Chrome matters is absolutely coming from the bottom up and when you point out something broken in Safari the first response from them is "Does it work in Chrome?" before they even look at it because they themselves don't even test in a second browser.
That mirrors my experience. It's not POs or PMs that hear about some new niche browser feature only supported by Chrome. It's devs that want to play with the latest toys and kind of look at you weird if you use Safari.
There's an annoying assumption from other devs that I must be using Safari out of ignorance. They quickly get over it, but it's a problematic first impression thing when working with new teams.
> This idea that only Chrome matters is absolutely coming from the bottom up and when you point out something broken in Safari the first response from them is "Does it work in Chrome?" before they even look at it because they themselves don't even test in a second browser.
I would have thought at least iOS Safari would be a major consideration for anyone due to the ubiquity of iOS devices.
> I would have thought at least iOS Safari would be a major consideration for anyone due to the ubiquity of iOS devices.
If web developers give any consideration to iOS, it usually results in a comparison of Mobile Safari to IE 6.
In reality, Google Chrome’s unilateral provisioning of unratified features drives developers to dismiss competing products as obsolete. In this way, Google Chrome advances the “extend” phase of technological dominance while well-intentioned and overworked web developers implement the “extinguish” phase.
iOS leads Android on two fronts:
1) Smartphone use, both in general and for web browsing, and
2) Spending on smartphones
These have been true long enough and to a large enough degree that they're usually taken as assumed, baseline facts by anyone involved in mobile software products.
These two are why companies not only care about them, but, in fact, iOS' numbers are so good on both that it can be tempting to go iOS-first for many products, if you have to choose only one platform, even if your demographics don't skew iOS.
iOS devices are used more than Android devices, and their owners spend a lot more on average. There are probably several reasons for this and it's unclear which is dominant, but in the end, it doesn't really matter why, if you're just chasing the market.
I test mostly on Firefox and Epiphany; I figure if it works there it's going to work just about everywhere.
Safari is a different beast because I don't have a Mac and its support for a lot of standards is pretty dismal. It's like the IE6 of browsers these days.
I keep the JS simple though and for CSS I keep around a few handy LESS functions so I can get some basic stuff on crap browsers. Stuff like:
.opacity(@default, @percent) {
-webkit-opacity: @default;
-khtml-opacity: @default;
-moz-opacity: @default;
-ms-opacity: @default;
-o-opacity: @default;
opacity: @default;
// ms-filter *SHOULD* work on IE8 & 9 but ... doesn't always
// for me? WTF... anyway (filter should also work). This
// should be listed before filter to be safe
-ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=@{percent})";
filter: alpha(opacity=@percent); /* support: IE8 oh god we're all gonna die*/
}
This way I don't rely on some framework like Bootstrap, and I can write fairly simple stylesheets. I used to transpile compliant and legacy sheets and serve different URLs depending on user agent strings, but that didn't work well and was generally crap, so: one it is.
Don't worry, when I transpile I strip my unprofessional comments.
> Next time your manager asks you if you can have that sweet feature, instead of saying "sure, we just need to drop support for Firefox", please consider trying to explain what the consequences are in the long term.
If the future of the web relies on developers groveling at the feet of a manager, then there's no fight or discussion to be had, because the web has already unequivocally lost. The only thing that's happening is a discussion about whether to parade on the corpse or not.
> If the future of the web relies on developers groveling at the feet of a manager,
I see a lot of this 'devs v suits' type language used on HN, with the implication being that the developers are principled stewards of technology suffering under the cosh of KPI-obsessed MBAs.
What causes this? The majority of product managers I've met have technical backgrounds, and they have also had to cut corners to keep their product roadmaps on track.
From what I've seen, it doesn't matter if they have a technical background. Hubris operates the same way in people: it serves to blind them to all but their own ambitions as they lose a more complete picture of reality in favour of expressing their egos.
At least in cases that line up. I doubt it's so universal. I've seen something quite similar happen first hand. To the point that I'm pretty beside myself about it. Hard to understand if you don't just assign it to them steamrolling anything but their ego. It's the only way you could just let core functions in your core product falter and not have a plan for it.
That said, I don't think that applies to excluding Firefox in this particular case. It doesn't sound permanent, and it sounds like it just hinges on FF catching up their available APIs to suit.
Not only that, but developers have at least as much incentive to push to avoid cross-platform implementations. More work, more complexity, bugs, maintenance, etc., and many (most?) do all of their dev and testing on Chrome anyway.
Web monoculture simply has a set of labor/$ incentives built in. It's the default, and it's hard (and probably getting harder) to appreciate the long term system-wide risk that accumulates by allowing one company to control web standards.
I don't see it as a dev vs. suits issue at all. If anything, in my experience it's people who remember Internet Explorer and people who don't.
Apple's choice here was likely to not release the product at all, or use an open standard that Firefox doesn't yet support, and allow them to support it over time.
Using features that not all browsers have implemented _yet_ isn't always bad for the open web. If the feature is important, the other browsers prioritize it.
Why put all this responsibility on developers? I'm pretty sure none of my former managers could have been swayed by talking about the long-time independence of the web. Usually, the most pressing issue was fixing bugs in prod and delivering features on time.
Firefox is doing quite well with the standards. Chrome is implementing things beyond the agreed-upon standards. Which to some extent has to happen in order to advance standards, but that only works if the changes are agreed upon or at least not disagreed upon by other implementations. These days Chrome is forging ahead even in the face of disagreements (usually on grounds of privacy or security).
And to be clear, Firefox is behind on the relevant standard here. Though even then, it's more nuanced than that: Mozilla is ok with prototyping it even though they would prefer for it to use a more secure mechanism -- see https://mozilla.github.io/standards-positions/#webrtc-insert...
From what I can tell, Mozilla is in the place of playing catch-up because the other players chose to forge ahead without resolving their objections.
[Ok, "our objections". I work for Mozilla. Not in a relevant area until recently, but it looks like I will be doing some very relevant work starting as soon as I close this damn tab.]
The vast majority of the people that use this on the web will not care about your story.
I don’t mean that as an insult; I’m happy there are folks like you with passion in this space.
If you're old enough, you still wake up in the middle of the night sweating about IE 6 or 7 bugs that HAD to be solved with brute force even though the feature worked just fine in Firefox and Chrome. After years of struggle, most of the world uses a very compliant and continuously upgraded browser.
It's not about not taking the win. It's about taking the short term win (by contributing to the monoculture dominance) at the expense of a long term loss -- why expect Google to maintain the Web's current advantages when it no longer serves their purpose to? Especially since the writing is already on the wall.
It was a combo of being a fresh/new dev on my part and IE6/7, but I recall spending (wasting) days and days of my life working around IE issues.
I know we don't want another situation of one browser dominating the web but Chrome (and Firefox) improved building for the web so much. I don't know if people forget or weren't around for the IE days but it was absolutely terrible and a life-waste.
This might come as a surprise but not everyone sees the world from a "freedom" perspective. In terms of practical/day to day experience we won something that works over something that didn't.
I agree with you, but think that there are different definitions of freedom. I love open source software but I don't think it's a right or that all software should be OSS. I like the freedom to keep the source of my apps closed if I wish. If I choose to use a closed source browser, I'm not giving up any freedom in my mind, I'm making a choice and a deal.
It's not a "freedom" perspective. Once you get an overlord, it's just a matter of time until it starts abusing you.
Or, in other words, it's just a matter of time until it's Chrome keeping you up at night eating all of your productivity to avoid some defect. It probably won't be a rendering bug, but there will be something there.
IE "worked" too. The main problems came from having to develop for multiple browsers with incompatible implementations. If you restricted yourself to IE, it was quite painless. Any quirk you'd encounter daily was documented, and at the time there were quirks in all implementations anyway.
It does not come as a surprise, but it is saddening, because freedom is the most important thing in life.
Without freedom, life is miserable, and it saddens me that people seem to be valuing freedom less and less. One day they will look around and ask "how did we get here", and people like me will just shrug and say "you should have listened".
> you are helping push the web become an increasingly centralized place
Does it? Open source standardization is a good thing. I'm still not sure why the html/css/js engine should be the exception. No one is calling for competition for the QR code, the torrent protocol, or the billion other very dominant open source projects.
It's not really open source standardization though. Google is in charge, make no mistake. Yes people can take it and tweak it, but the main feature changes are dictated by Google.
This is Apple, the inventor of walled garden tech for consumers. I wouldn't have been surprised if they had some proprietary extensions in Safari that made it so Safari was the only browser that could do it.
What the open source world calls "freedom of choice", the rest of the world calls "waste of time". I'd argue it does more people more good to have a de facto standard based on a close duopoly (Google + Apple, webkit/blink) to code websites and devices against, rather than the clusterfuck that is the WHATWG, W3C, etc. process. The existence of Gecko is nice for Mozilla but a time sink for developers and users, who at the end of the day just want to look up restaurant menus or buy tickets or check their email instead of fiddling with browser idiosyncrasies.
If Mozilla moved to some Webkit/Blink/Chromium derivative like everyone else, the world could standardize on that renderer and they are still free to "innovate" on the browser UI/chrome surrounding that engine and differentiate themselves that way.
As it is, Gecko adds nothing to the web ecosystem anymore and wastes everyone's time.
What a deeply ignorant perspective. Too infuriating to ignore. Firefox users constitute <1% of my traffic. If you had any context for how widely browsers diverged on webrtc features (especially video), you'd realize that ff support could easily add months and months to dev time. I'm sure it hasn't escaped your notice that apple makes no mention of safari in their announcement.
my webrtc-based video conference app doesn't currently support ff and never will unless compatibility with chrome's implementation comes around. My manager would have me committed if I tried to pull that shit, and I'm already notorious for refusing to do things on principle. Suggesting that we don't support firefox because I'm lazy? No, vendors force us to choose, and if the alternative is NO video? Here on earth, where we're trying to cultivate a competitive advantage and survive as a business venture, that's an incredibly easy choice.
edit: other commenters have made the point much more elegantly than I but I leave my words here as a testament to how infuriated I am at this condescending suggestion.
I configure FF to use a different user agent, as a security measure. It's a short hop from browsing the Internet with FF to disabling browser identification.
For a long time I did the same because some websites would refuse to load when they thought you weren't running Chrome even though it would have worked just fine in FF.
You praise HN because we can have "meaningful" discussions, yet in the next sentence you ask people not to upvote an article just because the author is not great at communicating his thoughts, and makes some grammar mistakes, completely ignoring the "meaning" of the article.
You feel the need to "urge the people" not to upvote it. However, wouldn't it be more "meaningful" to explain why you do (or do not) agree with what is said, instead of just complaining about grammar?
I personally found the article interesting, even though it's poorly written. Instead of ordering others around, what about urging yourself to refrain from commenting unless you have something meaningful to add to the discussion ;)
The goal of modernizing coreutils is great, and doing it in rust is even better. However, it makes me very sad that this is licensed under the MIT licence.
Being licensed under the GPL is an essential part of the GNU project and their philosophy. Of course everyone is free to do as they please. However, IMHO, if one appreciates the GNU project and the ideals it stands for, then, maybe, it could be preferable not to rewrite parts of it under weaker licences that go directly against their mission!
Your argument seems to imply that GNU was the progenitor of the utilities. A quick sample of the first 10 utilities from the first column of http://maizure.org/projects/decoded-gnu-coreutils/ on my machine showed 80% of the utilities were copied from existing sh utilities. Sure, GNU has added features, but there is nothing unseemly about creating new utilities using a different licence that are largely compatible - that is exactly how GNU coreutils came to life in the 90s.
arch - GNU
chgrp - Descended from chgrp introduced in Version 6 UNIX (1975)
comm - Descended from comm in Version 2 UNIX (1972)
dd - Descended from dd introduced in Version 5 UNIX (1974)
du - Descended from du introduced in Version 1 UNIX (1971)
factor - Descended from sort in Version 4 UNIX (1973)
head - Descended from head included in System V (1985)
join - Descended from join introduced in Version 7 UNIX (1979)
ls - Spiritually linked with LISTF from CTSS (1963)
They'll stop doing that if most code is GPLv3. If most code is GPLv3, we're all better off – but while people are still prepared to work around GPL-rejecting companies, those companies can still reject the GPL.
E.g. they don't get to sue you for using it and not opening your own software that links to it, since you can buy a license allowing you to do just that.
That's also why some GPL software is dual licensed. GPL for the masses, and a proprietary license allowing you to do whatever without needing to follow the GPL if you can afford it.
Ah! You mean a special kind of proprietary license for them. That makes sense.
I thought you meant they would just use copyrighted code that wasn't under a GPL (which would be just as illegal and probably more dangerous in terms of enforcement).
Do they forbid using the code, or incorporating the code into their own works? I hear of places forbidding the use of GPL code (even in-house) where they use proprietary software like Microsoft Word with no worries.
No, they might not. Just like Microsoft, they might demand that you don't release your plugins. Additionally, GNU software gives you the alternative of releasing your plugins. But you don't have to, and they can't make you do it.
Yeah I really agree here. Especially when it comes to coreutil implementations. I've seen multiple implemented under MIT/BSD and it always makes me a little sad.
I have access to an incredibly high quality easy to use router firmware in the form of OpenWrt by virtue of GPLd coreutils. I wish FOSS developers weren't so afraid of it these days
Agreed. It would also be good of this Rust project to state that all of the utils are original works and no one has peeked at the C sources. Perhaps the maintainers of the C versions should do the same in reverse.
https://github.com/uutils/coreutils/search?q=gnu
"This is the way it was implemented in GNU split."
I dunno, maybe there are cases where they copy more than behavior... but it's interesting to look through.
It's pretty easy to ask someone "Hey, can you go take a look at GNU split and see how they handle this case?" and then implement your own clean-room solution afterwards.
Alternately, to solve the problem yourself and mention it to someone and have them say "that's how GNU split does it, so why not?"
‘Clean-room reverse engineering’ refers to the practice of having one person look at disassembled source code, spec sheets, actual behaviour, etc. of some program; describe that behaviour to a second person; and having the second person implement the described behaviour.
> Perhaps the maintainers of the C versions should do the same in reverse.
Nitpick: they don't need to, they can just attribute the Rust developers if needed. They can also just take MIT code and publish it under GPL (reverse is not true).
I didn't mean re-licensing it: the parts that are under MIT stay under MIT, but the patches on it have a different license. That is completely legal, and the license doesn't forbid this. (ianal, yadda yadda...)
A similar thing happened to OpenOffice / LibreOffice: [0]
> OpenOffice uses the Apache License, whereas LibreOffice uses a dual LGPLv3/Mozilla Public license.
> For some legal reasons, then, anything OpenOffice does can be incorporated into LibreOffice, the terms of the license permit that. But if LibreOffice adds something, take font embedding, for example, OpenOffice can’t legally incorporate that code.
Richard Stallman and others that believe the GPL is generally the best license are not against ever using the MIT license. Stallman has been willing to be pragmatic and support tactical choices of other licenses besides the GPL when licensing.
So it is worth having some deeper thoughts about what the implications are for different licenses of coreutils. When does using coreutils create a derivative work that requires GPL?
Tactical considerations exist, but even so, a copyleft license might have better long term results in promoting software Freedom. The FSF supported a non-copyleft license for Ogg Vorbis, because at the time it seemed reasonable that this was the best way to fight the (at the time) patent-encumbered MP3 format. But in practice it had little benefit:
"However, we must recognize that this strategy did not succeed for Ogg Vorbis. Even after changing the copyright license to permit easy inclusion of that library code in proprietary applications, proprietary developers generally did not include it. The sacrifice made in the choice of license ultimately won us little."[0]
I think this is more of a reflection of mainstream OSS culture in general. Rust development currently is heavily tied to proprietary services (e.g. GitHub required for contributions, issues, crates.io login, CI, etc., Discord and Slack communities rather than Matrix, Zulip) and likes to license things under MIT. I think the lack of awareness/care for the big picture of fighting for software freedom is simply not there just as it is missing from mainstream OSS culture. Because convenience and network effect are king.
You do have a point, but in case you are saddened by this phenomenon, let me just point out that we live in a VERY different age compared to when GNU coreutils were born. Nowadays you only get a few minutes -- or hours if you are lucky -- to answer a ton of fundamental questions like "where do we host code?" and "how do we communicate on this project?" or "how do we do CI/CD?" etc.
The people in the past had all the time in the world to tinker and invent. Maybe I am mistaken though; the past is usually looked at through rose-tinted glasses, right?
But the fact remains: nowadays answering the above questions is beyond my pay grade: in fact it's beyond anyone's pay grade. Services like GitHub are deemed a commodity and questioning that status quo is a career danger.
I really do wish we start over on most of the items you enumerated. But I am not paid to do it. In fact I am paid to quickly select tools and never invent any -- except when they solve a pressing business need and are specific enough for the organization; in that case it's not only okay but a requirement.
Beyond anything else however, we practically have no choice. If I don't host a new company project on GitHub I'll eventually be fired and replaced with somebody who will.
I would also add onto your point and say that one reason why Rust is easy to get into is because of the convenience that these semi-proprietary platforms provide.
We here on HN have both the time and ability to set up things like CLI Git, and Matrix. But for a new language, forcing people onto esoteric (& superior) platforms makes them less likely to use them.
It would be nice if Matrix and self-hosted Git were the default, but when acquiring users/programmers is your goal, Rust doesn't have that luxury.
Agreed. Maybe they will tighten things up in the future but when you are after adoption and getting as much help as you can, it's indeed a luxury to be morally idealistic.
Rust uses GitHub, but could easily switch to a self-hosted platform if Microsoft became opposed to Rust's goals (and yes, Microsoft is a Rust sponsor, but not an essential one). Cargo has support for alternate registries built in.
Some of the community is on Reddit and Discord, but most technical discussion takes place on Discourse and Zulip. The official "user questions" forum is a Discourse instance.
Most subcommunities forming around Rust projects use Zulip instances.
The Rust community uses proprietary services when convenient, but it's hardly dependent on them.
Disagree, just look at bors and the use of Azure CI. It would be a huge PIA to switch.
> hardly dependent on them
I wouldn't say hardly, I think it's more like kinda. GitHub's network effect is pretty strong. Compare the number of contributors to golang for example which is hosted on Google code.
> it could be preferable not to rewrite parts of it under weaker licences that go directly against their mission
+1. It's very sad to see user freedom being thrown away as a goal and replaced by software that becomes unpaid labor for FAANGs. Especially since the SaaS takeover.
GPL does not protect against SaaS provider creating derivative as there is no further distribution of binary or source code. Only AGPL (and other more strict licenses) addresses SaaS derivatives issue.
In the case of coreutils, there is really no issue with presence or absence of GPL. cp is just cp and nobody is going to hang a proprietary extension off the side.
> I'm willing to forgo cloud backups and some usability to have default encryption for all my conversations, which I think is something Signal provides.
Indeed, it does.
> None of these apps are perfect, it comes down to what combination of trade-offs works best for you.
That is exactly the take-home message :)
Anyway, I mentioned that Telegram does not support e2ee for group chats here:
> WhatsApp nowadays has end-to-end encryption enabled by default for all chats, while Telegram has not enabled it by default and does not support it on group chats.
Note however that group chats are even more difficult to handle securely, because in theory you are supposed to verify the identity of every participant.
The downside is that there are a bunch of seemingly weird lines you have to paste at the beginning of the script :D
If anyone is curious, it's on PyPI (pysolate).