Hacker News | syg's comments

This kind of elision is implemented.


Yeah this is the bug. My bad, will fix.


Well I'm trying to make it suck less.



To be more precise, aligned to whatever size guarantees that field writes don't tear. Pointer-aligned is a safe bet. 4-byte alignment should be okay too on 64-bit architectures if you use pointer compression like V8 does.

What kind of types did you have in mind? Machine integers and "any" (i.e., a JS primitive or object)?

And yes, in browsers this will be gated by cross-origin isolation.
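Today's typed arrays already enforce this kind of alignment on views over shared memory; a small illustration (the buffer size and offsets are arbitrary for the example):

```javascript
// TypedArray views over a SharedArrayBuffer must be aligned to their
// element size; misaligned views are rejected outright, which is part of
// what makes non-tearing element accesses possible.
const sab = new SharedArrayBuffer(16);
const ok = new Int32Array(sab, 4, 1); // byteOffset 4 is 4-byte aligned
let misaligned;
try {
  misaligned = new Int32Array(sab, 2, 1); // byteOffset 2: not a multiple of 4
} catch (e) {
  misaligned = e; // RangeError
}
```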


If the memory layout is fixed and fields are untyped then every field must be at least 8 bytes to potentially hold a double precision floating point value. There would clearly be value in adding typing to restrict field values to 1 or 2 or 4 byte integers to allow packing those fields. But I can see that it would add complexity.
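The space difference can be illustrated with today's typed arrays standing in for field storage (the arrays here are just a model, not the proposal's layout):

```javascript
// Four small integer fields fit in 4 bytes when typed as 8-bit values,
// but untyped "any" fields would each need an 8-byte slot so that any
// field could hold a double.
const typedFields = new Int8Array(4);  // 1 byte per field
const anyFields = new Float64Array(4); // 8 bytes per field
typedFields.byteLength; // 4
anyFields.byteLength;   // 32
```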


Only if your implementation holds doubles without boxing them. V8 boxes doubles, but JSC and SpiderMonkey do not.


The ability to do unordered operations on shared memory is important in general to write performant multithreaded code. On x86, which is very close to sequentially consistent by default (it has something called TSO, not SC), there is less of a delta. But the world seems to be moving towards architectures with weaker memory models, in particular ARM, where the performance difference between ordinary operations and sequentially consistent operations is much larger.

For example, if you're protecting the internal state of some data structure with a mutex, the mutex lock and unlock operations are what ensures ordering and visibility of your memory writes. In the critical section, you don't need to do atomic, sequentially consistent accesses. Doing so has no additional safety and only introduces performance overhead, which can be significant on certain architectures.
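The same shape can be sketched with today's Atomics on a SharedArrayBuffer; the lock layout and helper name here are illustrative, not a real library:

```javascript
// The acquire/release operations are the sequentially consistent atomics;
// the accesses to the protected state inside the critical section are
// ordinary, non-atomic reads and writes.
const sab = new SharedArrayBuffer(8);
const lock = new Int32Array(sab, 0, 1); // 0 = unlocked, 1 = locked
const data = new Int32Array(sab, 4, 1); // state protected by the lock

function withLock(fn) {
  // Acquire: the CAS is what provides ordering and visibility.
  while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) { /* spin */ }
  try {
    fn(); // plain accesses to `data` are safe here; no atomics needed
  } finally {
    Atomics.store(lock, 0, 0); // release: publishes the plain writes
  }
}

withLock(() => { data[0] += 1; }); // ordinary write under the lock
```

In a real multithreaded setting the spin would use `Atomics.wait`/`Atomics.notify` rather than busy-waiting, but the ordering argument is the same.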


Author here. I hear your feedback about unsafe blocks. Similar sentiment is shared by other delegates of the JS standards committee.

The main reason it is there today is to satisfy some delegates' requirement that we build in guardrails so as to naturally discourage authors from creating thread-unsafe public APIs and libraries by default. We're exploring other ideas to try to satisfy that requirement without unsafe blocks.


There's already a precedent of ownership on transferred objects. Why not have an `.unsafe(cb)` method on the structs? Error if you don't have ownership, then use the callback to temporarily acquire ownership. At least to me, it's more intuitive and seems idiomatic.
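A rough model of that idea in plain JS; the class, method names, and ownership flag are all hypothetical, not the proposal's API:

```javascript
// Hypothetical: a struct-like object that only grants raw field access
// through a callback, and only while the current thread owns it.
class SharedBox {
  #owned = true;
  #fields = { x: 0 };
  unsafe(cb) {
    if (!this.#owned) throw new Error('thread does not own this struct');
    return cb(this.#fields); // temporarily grant raw field access
  }
  transfer() { this.#owned = false; } // models handing ownership away
}

const box = new SharedBox();
box.unsafe(f => { f.x = 42; });
const seen = box.unsafe(f => f.x); // 42
box.transfer();
let afterTransfer;
try { box.unsafe(f => f.x); } catch (e) { afterTransfer = e; }
```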


Bike-shedding, but you should consider renaming them from "unsafe" to "volatile" or some other word that expresses that they are not unsafe to the user/browser/OS. They are only changeable by other threads.

The word "unsafe" will be picked up as meaning "can infect your computer"; we can already see examples of that kind of messaging.


Exactly right. `arr[-1]` means `arr["-1"]` and already does something.
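Concretely, in today's semantics:

```javascript
// A negative index is just an ordinary string property key on the array
// object, not an element access.
const arr = [10, 20, 30];
arr[-1] = 'surprise';
arr.length;  // still 3: no array element was added
arr['-1'];   // 'surprise': the same property as arr[-1]
```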


It is also a breaking change to use new syntax and functions, since old browsers do not support new features. From this perspective `arr[-1]` seems a fair breaking change.


No, because changing browsers to interpret `arr[-1]` as `arr[arr.length - 1]` breaks existing sites that expect `arr[-1]` to be interpreted as `arr['-1']`: that is, the value stored on object `arr` at key name '-1'.

Changing browsers to interpret `arr.get(-1)` as `arr[arr.length - 1]` doesn't affect any old code using `arr[-1]`.

It's not about supporting old browsers. It's about supporting old code.


I think you're confusing your application with the language itself.

Adding new syntax and functions to the language is not a breaking change. Old code will continue to work.

If you start using these new features in your application, and it no longer works on old browsers, then sure that's a breaking change. But that's a choice for you to make. The language is still backwards compatible.


`indexes[haystack.indexOf(needle)] = true`

There's a valid example of code that would be broken (`indexOf` returns `-1` as "not found"). Is it a good way of solving whatever the author was trying to do? Probably not, especially now that sets exist. Is it code you might conceivably find on hundreds of sites across the past decades of the world wide web? You bet.
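Running that pattern today shows why the write is not harmless to redefine:

```javascript
// When the needle is missing, the -1 write lands on a string-keyed
// property, not an array element. Redefining arr[-1] as
// arr[arr.length - 1] would silently change what this code does.
const haystack = ['a', 'b'];
const indexes = [];
indexes[haystack.indexOf('z')] = true; // indexOf returns -1
indexes.length; // 0: no element was created
indexes['-1'];  // true: stored as a string-keyed property
```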

Yes, we could introduce another "use strict". But we only just got rid of the one via ESM (which enforces strict mode). That was a one-off hacky solution to a hard problem coming off the end of a failed major version release of the language (look up ECMAScript 4 if you get a chance). We don't want to see a repeat of that.


WeakRef and FinalizationRegistry will ship in Chrome 84.
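A minimal usage sketch of the two APIs (the object and token names are arbitrary):

```javascript
// WeakRef holds a target without keeping it alive; FinalizationRegistry
// schedules a callback some time after the target has been collected.
let obj = { payload: 'data' };
const ref = new WeakRef(obj);
const registry = new FinalizationRegistry(held => {
  // Runs at some unspecified point after the target is garbage-collected.
  console.log('finalized:', held);
});
registry.register(obj, 'obj#1');
const alive = ref.deref().payload; // 'data' while a strong reference exists
obj = null; // now collectible; a later deref() may return undefined
```

Note that finalization timing is deliberately unspecified, so the callback is suitable for cleanup, not for program logic.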


I didn't know it was this close to production finally, thank Jebus!


Early error behavior is proposed to be deferred (i.e. made lazy), not skipped. Additionally, it is one of many things that require frontends to look at every character of the source.

I contend that the text format for JS is in no way easy to implement or extend, though I can only offer my personal experience as an engine hacker.
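For example, a duplicate lexical binding is an early error that an engine can only report by scanning the whole function body, even if the function is never called:

```javascript
// The duplicate `let x` makes this a SyntaxError at parse time; nothing
// in the evaluated string ever runs.
let parseError;
try {
  eval('function f() { let x; let x; }');
} catch (e) {
  parseError = e; // SyntaxError, raised before any code executes
}
```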


If early error is deferred then it's no longer early... that's all I meant by skipped. It still is a semantic change that's unrelated to a binary AST.


Indeed it's a semantic change. Are you saying you'd like that change to be proposed separately? That can't be done for the text format for the obvious compat reasons. It also has very little value on its own, as it is only one of many things that prevents actually skipping inner functions during parsing.


The gzip point aside (which is not an apples-to-apples comparison as gzipping a big source does not diminish its parse time), I see the response of "JS devs need to stop shipping so much JS" often. My issue with this response is that multiple parties are all trying to work towards making JS apps load faster and run faster. It is easy enough to say "developers should do better", but that can be an umbrella response to any source of performance issues.

The browser and platform vendors do not have the luxury of fiat: they cannot will away the size of modern JS apps simply because they are causing slowdown. There can be engineering advocacy, to be sure, but that certainly shouldn't preclude those vendors from attempting technical solutions.

