Hacker News new | past | comments | ask | show | jobs | submit | jackfraser's comments | login

This is typical woke-capitalist garbage.

Real juice is just fine to drink - in moderation! Before exercise it's a great boost: vitamins, easy energy, and it's an excellent bribe. It's also trivial to get juice boxes with extremely simple ingredient lists, so you can be reasonably sure you're not feeding your kid anything that kids haven't been consuming for millennia.

Should kids have it every day? No, probably not! They need to be used to drinking water as their fundamental beverage. "Fruity water" as a replacement for normal water simply inculcates them with the idea that plain water is gross and not good enough, and that they should demand some enhanced alternative. It does nothing to stem the habit-based dietary issues people end up with - after all, if you're used to fruity water and the only available options are juiceboxes and water, there's no way you'll be satisfied with the latter.

Increasingly it looks like there's a push to serve our kids things even less organic than we used to. Beyond Meat or maggot burgers, fruity water, it's like there's a war on normal, real products. Why are we assuming these new untested alternatives are somehow better or, at least, not fraught with most of the failure modes of their predecessors?


Aren't we just reinventing the wheel, though? Got your structured data format, now you need parsers (tons available for XML, incl SAX, DOM parsers, SimpleXML, Nokogiri...) a schema and validation tools (XSD), a templating mechanism (XSLT), a query language (XPath), ...

JSON was a reaction to the verbosity of XML, but a better reaction would have been to work harder on our text editors, so that working with XML would be just as easy as working with JSON in terms of the number of keystrokes needed. Better parser interfaces that help you treat the data format more like it's part of the language would also help (i.e. SAX and DOMDocument suck to work with, but SimpleXML is almost idiomatic).
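To illustrate the point (a sketch in Python, using the standard-library ElementTree as a stand-in for SimpleXML): with a reasonably idiomatic parser interface, pulling a value out of XML is barely more work than the JSON equivalent.

```python
import json
import xml.etree.ElementTree as ET

xml_doc = "<user><name>Ada</name><age>36</age></user>"
json_doc = '{"name": "Ada", "age": 36}'

# With an idiomatic interface, the XML access...
user_x = ET.fromstring(xml_doc)
name_x = user_x.findtext("name")

# ...is about as terse as the JSON access.
user_j = json.loads(json_doc)
name_j = user_j["name"]

print(name_x)  # Ada
print(name_j)  # Ada
```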


Verbosity certainly is an issue with XML, but far from the only one. IMO the main problem of XML is that it was designed as a markup language, but then misused as a data structure serialization language. When used as a markup language, the distinction between attributes and children is meaningful. When serializing data structures, the dichotomy breaks down. For most subfields of a larger data structure, it's not obvious whether to serialize that subfield as an attribute or as a child. Contrast with JSON, which only consists of obvious data structures.
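To make the dichotomy concrete (a Python sketch with stdlib parsers; both XML spellings are equally defensible, while the JSON form is the only natural one):

```python
import json
import xml.etree.ElementTree as ET

# Two equally plausible XML serializations of the same record...
as_attrs = ET.fromstring('<point x="1" y="2"/>')
as_children = ET.fromstring('<point><x>1</x><y>2</y></point>')

# ...which need different access code (and attributes are always strings):
x1 = int(as_attrs.get("x"))
x2 = int(as_children.findtext("x"))

# In JSON there is only one obvious spelling:
x3 = json.loads('{"x": 1, "y": 2}')["x"]

assert x1 == x2 == x3 == 1
```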


> not obvious whether to serialize that subfield as an attribute or as a child.

Attributes are just strings, generally for metadata. I'd probably serialize an object from another language more verbosely. This is where an important distinction needs to be made: the XML format you use for config files or for data exchange from your app to others should not necessarily be just a serialized object from the most convenient form inside your application. If you care about the operators of the app, you'll allow them a more concise format for that kind of thing, and use XSD or your own internal mechanisms to turn that into an object you want to actually work with.

It's the sort of problem people have once, abstract away, and move on from.


It's also the sort of problem that just doesn't exist at all if you use a real data serialization format from the get-go.

I'm so frustrated with our collective attitude of building abstractions upon abstractions upon abstractions, without ever stepping back and realizing that we're using the wrong tool to begin with.


This difficulty is of your own making. An attribute is 'metadata' about the element. A child element is precisely that: another element.


The fact that you put "metadata" in quotes illustrates the problem.


> JSON was a reaction to the verbosity of XML, but a better reaction would have been to work harder on our text editors so that working with XML would be just as easy as working with JSON in terms of the numbers of keystrokes needed.

Isn't that only solving half the problem? XML is also pretty difficult to read


It's not even the important half.

If I have data that I need to send somewhere, and I can create the format for it, that's really easy to do.

The problem, every time, is the reverse: receiving some piece of data and trying to figure out what parts of it I care about. Both XML and JSON allow for schema definitions, but in both cases it fundamentally requires me, as a consumer, "grokking" what is being sent, and the verbosity of XML simply makes that harder. Working with either is not _that_ hard (though I have run into XML in the wild that is so large a payload, yet so poorly designed, that there is no good way to process it; I can't stream it via SAX without writing my own state-handling mechanism, and I can't just deserialize it into an object without massive memory issues at scale). The difficulty really is in containing it in my mind, and JSON simply facilitates that better due to its simplicity and explicitness (yes, explicitness: in XML it's not clear whether a child element should occur once or many times; in JSON it's obvious).
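For what it's worth, the streaming pain can be softened without a hand-rolled SAX state machine; a Python sketch using stdlib `iterparse`, which lets you process and discard one element at a time (the `<record>` element name here is made up for illustration):

```python
import io
import xml.etree.ElementTree as ET

# A stand-in for a huge payload; imagine millions of <record> elements.
payload = io.BytesIO(b"<dump><record id='1'/><record id='2'/></dump>")

seen = []
for event, elem in ET.iterparse(payload, events=("end",)):
    if elem.tag == "record":
        seen.append(elem.get("id"))
        elem.clear()  # drop the element's contents so memory stays bounded

assert seen == ["1", "2"]
```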

Per the OP: I cringe every time I see YAML. Pain to write, pain to read; I have to have tooling every time or I get whitespace issues.


What schema definition is there for JSON?


JSON Schema (no points for imaginative names)

https://json-schema.org/
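For a taste, a minimal (made-up) schema that requires an object with a string `name`:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "name": { "type": "string" }
  },
  "required": ["name"]
}
```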


> XML is also pretty difficult to read

I’d say this is schema-dependent. If you’re talking about plist files, sure; those are ugly and unintuitive. But on the whole I find XML far easier to read than JSON. With closing tags, what you lose in terseness is made up for with scannability: it’s easier to understand the document hierarchy at a glance, and find your place again after editing. Whereas with JSON I often have to match curly braces in my head, or add comments like `// /foo` which isn’t even possible outside of JS proper or a lax-parser environment.


Look. I am just a web guy.

But why is XML so freaking great? We can’t even tell if whitespace is significant or not. If a schema says it’s insignificant then that’s that!

https://www.oracle.com/technetwork/articles/wang-whitespace-... That alone is TERRIBLE! (Same problem with YML.) Why should I bother with that? JSON can encode strings, hashes, arrays etc. in a way that’s instantly interoperable with JS and is far far more unambiguous.

What exactly is so great about XML that you can't do with JSON in a better way? Schemas can be stored in JSON. XPath can be specified for JSON. Seriously, I never got the appeal of XML except that it was first.


Some of XML's biggest achievements lie in written documentation formats (DocBook, DITA), where fine-grained markup control is needed and the presentation of the content is secondary to semantic features like footnotes, indexing, etc. These are the formats that professional technical writers turn to when Markdown, Word docs or PDF won't quite do the trick.

For a lot of data, XML isn't the right form and buries too much data in hierarchy and tag soups - but it's flexible enough to make it into whatever you want, and since XML was buzzworded and XML libs were some of the easiest things to reach for in the 90's, it got pushed into every role imaginable.


Two specific things about JSON: elements with the same name overwrite each other; and even though parsers are generally good about it, items are not required to be returned in the order they appear in the file.

Oh and there's no comments.
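Both points are easy to see with Python's stdlib parser: duplicate keys silently overwrite (RFC 8259 only says names "SHOULD be unique", so other parsers are free to differ), though `object_pairs_hook` lets you intercept them:

```python
import json

# Last duplicate wins in Python's parser; the JSON spec leaves
# the behavior of duplicate names essentially undefined.
assert json.loads('{"a": 1, "a": 2}') == {"a": 2}

# object_pairs_hook sees every pair, so duplicates can be rejected:
def reject_dupes(pairs):
    keys = [k for k, _ in pairs]
    if len(keys) != len(set(keys)):
        raise ValueError("duplicate key")
    return dict(pairs)

try:
    json.loads('{"a": 1, "a": 2}', object_pairs_hook=reject_dupes)
except ValueError as e:
    print(e)  # duplicate key
```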


One thing that bugs me about JSON is that it can't easily [0] represent general graph-structured data because its notion of identity is too limited. XML can represent general graphs trivially.

[0] https://realprogrammer.wordpress.com/2012/08/17/json-graph-s...
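A sketch of the XML side in Python; the `id`/`ref` attribute names follow the classic DTD ID/IDREF convention, and a second pass resolves the references, here recovering a cycle that plain JSON can't express without inventing an out-of-band convention:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<graph>'
    '<node id="a"><edge ref="b"/></node>'
    '<node id="b"><edge ref="a"/></node>'
    '</graph>'
)

# First pass: collect nodes by id; second pass: resolve edge references.
edges = {n.get("id"): [] for n in doc.iter("node")}
for n in doc.iter("node"):
    for e in n.iter("edge"):
        edges[n.get("id")].append(e.get("ref"))

# "a" and "b" point at each other -- a genuine cycle.
assert edges == {"a": ["b"], "b": ["a"]}
```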


XML wasn't meant as replacement for JSON, but for HTML without vocabulary-specific parsing rules (eg. SGML DTDs).


As far as I can make out, JSON's popularity grew from it being JavaScript, which is, as we know, the bee's knees. There was no big thinking behind the whole thing, and the role it plays today was certainly not the intended role (otherwise I cannot explain the non-standard date formatting that is handled differently everywhere). JSON is more akin to Java RMI than to XML, imho.


I think that XML is often used for stuff that it shouldn't be used for. It could be almost OK (there are still a few problems though) for stuff containing text with other stuff inside that may in turn contain text, and so on. For other stuff, JSON or RDF or TSV or other formats can be good.


> JSON was a reaction to the verbosity of XML,

JSON was a reaction to the simplicity of having a data format that JS, ubiquitous on the web, could load via eval (which, once JSON was established, was largely abandoned because it is ludicrously unsafe, but the momentum was already there.)
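The hazard translates directly; a Python sketch of the same failure mode (in JS the original pattern was `eval('(' + text + ')')`): `eval` happily executes whatever arrives on the wire, while a real JSON parser only accepts data.

```python
import json

payload = "__import__('os').getcwd()"  # "data" that is actually code

# eval-based "parsing" executes it -- arbitrary code execution:
result = eval(payload)  # runs os.getcwd()
print(type(result))     # <class 'str'>

# A real parser rejects it outright:
try:
    json.loads(payload)
except json.JSONDecodeError:
    print("not JSON")
```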


Agreed. There's tons of employment for people that can work with microcontrollers, FPGAs, ASICs, bespoke control hardware, etc. which are often harder to get into than traditional software engineering.


Tends to pay a lot less, though.


Not a chance. Consider that we have working quantum annealers (the D-Wave machines), but they cost ~20 million dollars; and there are no real working gate model machines beyond a few qubits that don't seem to be particularly useful (i.e. nobody's done anything meaningful with them).

The best bet you have for now if you're interested in quantum computing is to check out D-Wave Leap https://www.dwavesys.com/take-leap and see if you can make an annealer do something useful via their cloud service. If you're solving tough optimization problems it's apparently useful.


Good to know, thanks for the link.


Lame. Is there any real reason to do this? Does the code take a lot of maintenance, aimed as it is against a protocol from 1971? Is there a reason to cut people off from easy interoperability with links on the older parts of the web, many of which surprisingly do still work?

FTP sucks, sure, we get it. No reason to use it now. Still, Google seems to have a mission of deprecating the old Web, from their search results that push that kind of content down, to their browser deprecations of FTP and Flash and Java applets. How is one supposed to even see the old parts of the web anymore?


The code is old, and hard to secure. It lives in the browser rather than a subprocess, so is unsandboxed.

The FTP protocol itself isn't a nice binary protocol - it was designed for humans to type by hand, so has a lot of flexibility, leading to a lot of corner cases in the code.

There is also the fact that the flexibility of FTP allows the browser to attack other devices on the local network. For example, I could navigate an iframe to FTP://evil_payload@127.0.0.1:3389, allowing me to send a possible exploit to your machine, bypassing firewalls.

Considering how few people use it, and the risks it still poses to everyone, I can see why they want to get rid of it.


There is also the fact that the "modern browser" is no longer a browser but a program which runs remote code with local privileges and sometimes with elevated privileges. The main reason they want to deprecate ftp is the same reason they used for other protocols: it is much easier to control 1 protocol (https) instead of 10 (http, ftp, rss, ntp etc.). Especially when they decide which certificate is trusted in their browser (which browser (engine) happens to be the only one used by the majority).


So you're saying ftp support could have simply been disabled by default where people who need it can simply turn it on? Also, the attack you're describing is a "cross protocol" attack for which modern browsers (at least firefox) have mitigations in place.


If you read the OP it says that their support for FTP is already pretty limited. Doesn't support encrypted FTP (FTPS) or proxies. Kinda goes against the effort they're making to push encrypted connections with HTTP.

They'd have to spend time and effort getting their implementation up to scratch.

Kinda makes sense to scrap the FTP support as there's not much of a user base and leave it to dedicated software like FileZilla/WinSCP/CyberDuck etc. etc.


I'm honestly surprised that FTP is still supported in web browsers. FTP is as non-web as you can go without resorting to Gopher - plus the protocol is Ancient and Broken (and plaintext [shudder]). This shouldn't have been a part of any browser, ever.


There are still sites that have ftp downloads. Sucks to have to install a dedicated client where you could just click and download in the browser.


There are still sites that have magnet: downloads. Sucks to have to install a dedicated client where you could just click and download in the browser. Where's the difference, except for "we've always done this, and we've never done that, therefore this good, that bad"?


Why, of course. To assert dominance! And maintain influence.

It's not about "the old parts"; it's more like Google's elitist engineers do a stats count, notice only 0.1% of users use a feature, and ask why spend time maintaining it. Awww, they'll get over it. So what if 0.1% of billions is a million people; it's not like Chrome makes money anyway. What will people do? Boycott Google products over FTP support?

Please support alternatives to Google products. This is how AT&T and Comcast became the way they are: a long slippery slope of not really caring about x% of users.


I see you also looked it up after seeing https://news.ycombinator.com/item?id=20710511

Say what you will about fascism, at the very least it sure has a hell of an aesthetic


[flagged]


"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

Particularly, we might add, when the alternative is turning the f-word on somebody.

https://news.ycombinator.com/newsguidelines.html


Downcast for iOS - more configurable than the built-in app, unobtrusive, lots of nice features, clean UI.

No problems.


Can you explain to a layman what's odd about it?


Mersenne Twister was maybe the first in a class of random number generators that has lots of state -- an order of magnitude more than previous designs. Each time a number is pulled from it, some of the state is stirred, and the next number comes from mostly other bits, and stirs others. They have to be fast, so you can't touch too many bits per number extracted; taking about the same time for each number is nice, too.

So, one measure of generators is how many numbers you pull before you get the same sequence again. MT's cycle is very long, so in practice you never see a repeat, even if you see the same number many times. (In many simpler generators, seeing 3 then 8 means next time you see 3, the next number will be 8. A great deal of simulation was done with such generators.) The numbers from an MT satisfy many different measures of apparent randomness.

Monte Carlo investigations consume very many numbers. They might use the numbers in a more or less periodic way, so that any match-up between cycles in the problem and cycles in the generator can skew the results. The main MT cycle is very long, so any skewed results probably point to lesser cycles as the bits stirred are later encountered again. But it's hard to imagine a way to detect such cycles deliberately from the bits you get out. Encountering a process that finds them accidentally is amazing.
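The scale involved is easy to see from Python, whose `random` module is an MT19937 Mersenne Twister: the internal state is 624 32-bit words plus an index, backing a period of 2^19937 − 1.

```python
import random

# CPython's random module uses the Mersenne Twister (MT19937).
# getstate() returns (version, internal_state, gauss_next); the
# internal state is 624 words of generator state plus an index.
state = random.getstate()[1]
assert len(state) == 625  # 624 state words + current index

# Compare with a classic LCG, whose entire state is one number:
#   x_{n+1} = (a * x_n + c) % m
```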


Fascinating, thanks. Not sure I understood it all, but I appreciate the reply.


You could probably run something on D-Wave Leap and get actual quantum noise or something if you like


> It's time for real world identity linked accounts for participating in online conversations.

You might be able to implement this in a small, affluent European country, but nowhere else. Consider that in the USA, it is a standard talking point of the left that the right's continual push to require ID to _vote_ is a racist policy; it seems we're simply to accept that large swathes of people will not have any ID, whether they're legally present in the country or not.

How do you suggest solving that problem, so as to have some decent ID on file on which to base the online ID?

And, once you've solved that, how are you going to scale it to impoverished countries that barely have the infrastructure to have any internet access at all, never mind vetting everyone's access to it?

Once you have this ID system, how are you going to mandate that all websites validate access against it? Enforce it at the ISP level? How are people going to offer wifi in coffee shops, etc? Force everything through a big proxy? The scale and expense of the infrastructure involved in doing this is immense. Government IT efforts seem to fail more often than not these days - witness the failures to improve the IRS, or the recent Canadian government payroll system implementation scandal with IBM.

> Second, it reduces the need for online censorship beyond what is required by law.

In America, you have absolute freedom of speech, save of course for speech that is treasonous (i.e. divulging classified information). There is no requirement to censor by law. If someone wants to issue speech you consider hateful, it is not illegal for them to do so. This is not true in many other places, and unfortunately it may change in the USA at some point in our lifetimes, ending one of the greatest freedoms anyone has ever possessed, out of fear that freedom is somehow what leads to our oppression, rather than the giving of too much power to the state. Ironic; even more ironic that the left is the group most eager to side with big corporations to push this narrative. I never thought I'd see that; not after Occupy Wall Street, but that spirit is thoroughly dead.

> So much of the hateful garbage posted online is only posted in the first place because of anonymity. Remove that, we probably don't get that posting in the first place so we can avoid the messy issue of enforced top down censorship.

Right, because self-imposed censorship, where we all go through the day with a fake smile on our faces, only saying the right things, as though what we feel safe saying on LinkedIn covers all legitimate human dialogue. Say the wrong thing and disappear into a Kafkaesque nightmare, where you can never get the complete list of things you can't say, because such a list would be even more dangerous than whatever you were planning to say in the first place...

No, friend. You're wrong.

Anonymity and free speech are the incredible power to bring light to darkness. They're what enables us to be more than the sum of our parts; what lets those unfortunate in appearance have equal footing to those who are attractive, what lets those who are disabled keep up with athletes. It's what lets us find corruption and root it out more effectively than ever before, and it is this spirit of taking down evil people that is going to be crushed by ideas like yours, not the actual evils that lurk within the hearts of men.


Divulging classified information is not a crime unless you hold a security clearance.

