Personally I find the advantage of using ES2015 over ES5 marginal for most use cases, so I actually went back to writing "traditional" ES5 JS with require.js imports, and I only use a JSX->JS transpiler for the React part of my code. This helps me stay "closer" to my code and reduces the complexity of my build chain.
Many things that ES2015 provides are nice of course and the code looks a bit cleaner, but apart from a few real innovations most changes seem to be syntactic sugar.
Also, I found that each step in my build chain made it more complicated to build and maintain my code, especially for other developers. I eventually even abandoned Gulp (which in my opinion tries to reinvent Unix pipes but does it all wrong) in favor of a simple Makefile that chains a few build commands and uses inotifywait to watch the filesystem for changes in order to automatically rebuild the code during development.
Another thing I do which may shock many JS people is to actually check the build directory of my setup into version control, because this makes deployment much easier and ensures that I will always have a working version of the code in the repository, even if some external dependencies should change in the future. This also eliminates installing extensive tooling on my production servers, which is itself a large burden and creates many security issues (for a simple setup consisting of Babel, React, require.js and a few support libraries, npm downloads about 350 MB of source files onto the machine).
I agree, and I think ES5 even has a benefit as a language: it's extremely simple. The trajectory of ES2015 seems to be, basically: add as much cool stuff as possible. Thus increasing the syntactic surface area, and the amount of stuff to learn.
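To make the "mostly sugar" point concrete, here are a few ES2015 features next to their ES5 equivalents (an illustrative sketch, not an exhaustive list):

```javascript
// Template literals: sugar for string concatenation
var name = 'world';
var es5Greeting = 'hello, ' + name + '!';
var es6Greeting = `hello, ${name}!`;

// Destructuring: sugar for property access
var point = { x: 1, y: 2 };
var es5X = point.x;
var { x: es6X } = point;

// Arrow functions: sugar for function expressions (modulo `this`)
var es5Doubled = [1, 2, 3].map(function (n) { return n * 2; });
var es6Doubled = [1, 2, 3].map(n => n * 2);
```

Each ES2015 line desugars mechanically to the ES5 line above it, which is exactly what a transpiler does.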
With a slightly more clever and more concise definition of `createElement`, I also have no need for JSX.
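Something along these lines, for example (a hyperscript-style sketch with a hypothetical `h` helper, not React's actual API; in a real React setup `h` would just delegate to `React.createElement`):

```javascript
// h(tag, props, ...children) -- concise enough that JSX buys little
function h(tag, props) {
  var children = Array.prototype.slice.call(arguments, 2);
  return { tag: tag, props: props || {}, children: children };
}

// Nested calls read fine with plain indentation:
var tree =
  h('ul', { className: 'todo-list' },
    h('li', null, 'buy milk'),
    h('li', null, 'write ES5'));
```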
I'm fine with that. My editor understands the indentation.
And... such an enormous benefit... there is no compilation step.
Right now at work, our code base takes 20 seconds to compile with Babel. Enough said.
The feeling, after being used to all this transpilation business, of writing code that's just already ready to serve, is very nice. I can even work on it with nothing but a text editor and a web browser.
Unfortunately everyone thinks I am crazy for preferring this.
The dream that designers, being comfortable with HTML, would be able to mess around with JSX is, as far as I can tell, unrealistic anyway, because it's always full of React-specific weird stuff that the designer doesn't understand and doesn't want to mess with.
You should separate your Presentational and Container components [0]. Presentational components can be pure and stateless, containing no/minimal React-specific weird stuff - that should go into the Containers.
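The shape of that split, as a framework-agnostic sketch (hypothetical names, plain functions standing in for components): the presentational part is a pure function of its props, while the container owns the data fetching and wiring.

```javascript
// Presentational: props in, markup-ish output out; no state, no fetching
function UserList(props) {
  return props.users
    .map(function (u) { return '<li>' + u.name + '</li>'; })
    .join('');
}

// Container: owns data concerns and hands plain props down
function UserListContainer(fetchUsers, render) {
  return fetchUsers().then(function (users) {
    return render({ users: users });
  });
}
```

The presentational component stays trivially testable and free of framework-specific weirdness; everything stateful lives in the container.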
I've started going back to straight ES5 for my side projects. It removes the annoying "compile" step, which I find pretty dumb for an interpreted language. For deployment, of course, I concat and minify. But the speed with which I can iterate using vanilla Javascript is refreshing in this day and age.
I would be interested in a compilation step if it gives me something like PureScript, that is, real, serious benefits from the compiler: algebraic data types, type checking, type classes, and so on.
But ES2015 is just a bunch of syntax sugar. Nice syntax sugar, but still. Okay, async/await is significantly useful... but you still have to understand how the promises work under the hood... and I can live with raw promise coding.
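For instance, these two functions are equivalent; async/await is sugar over exactly the chaining you'd write by hand (illustrative sketch with hypothetical `getUser`/`getPosts` callbacks):

```javascript
// "Raw promise coding": explicit chaining
function fetchUserRaw(getUser, getPosts) {
  return getUser().then(function (user) {
    return getPosts(user.id).then(function (posts) {
      return { user: user, posts: posts };
    });
  });
}

// The same logic with async/await
async function fetchUserAwait(getUser, getPosts) {
  const user = await getUser();
  const posts = await getPosts(user.id);
  return { user: user, posts: posts };
}
```

Both return a promise for the same value, which is why understanding promises remains a prerequisite either way.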
There is a strong case for transpiling if you have a node backend. Presently we use ES2015 features on the backend that won't work in the front-end code without a transpiler. It's not really inconvenient, but it would be nice to use many of the ES2015 features in both places. Shared libraries in particular can be built using ES2015 features instead of settling for the lowest common denominator. We also use make in our build tooling.
Yes, I am a big fan of checking in the build directory. We work on lots of little one-off apps that need to last for 5-8 years. Having the build dir in version control makes it so much easier to fix things and update content years down the road. It insures us against things like npm (insert your package manager here) going away. Which I know sounds ridiculous, but try to re-download some essential Flash/ActionScript library from 2009, in 2016.
Just wondering how you deal with all the extra noise in the diff. What happens when multiple people are working on the project, are you not constantly having to deal with merge conflicts? I feel like checking in the build directory makes the diffs pretty much useless.
> Just wondering how you deal with all the extra noise in the diff. What happens when multiple people are working on the project, are you not constantly having to deal with merge conflicts? I feel like checking in the build directory makes the diffs pretty much useless.
Why would you ever bother merging a build artifact? Those files should always be rebuilt fresh before committing. Just quickly do whatever to clear it from the conflict queue, merge the source as necessary, then rebuild. Commit the rebuilt version.
I honestly assumed this was common practice and everyone did it this way. It also makes it easy for someone to clone the repo and run the app without having to download a bunch of npm stuff and figure out how to get the build tool working.
To be fair, if you do it right then npm should be the only dependency, and then installing all other dependencies (locally) and running the build should be handled completely by npm and its scripts. If this is done well then installing and building can be as simple as running two or even one npm commands. There's not really much of a 'figuring out' stage, if you know npm and its common patterns.
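For illustration, a hypothetical package.json might wire the whole thing up so that `npm install` followed by `npm run build` is all a newcomer needs (script names and tools here are examples, not a prescription):

```javascript
// Hypothetical "scripts" section of a package.json
var pkg = {
  scripts: {
    // npm puts node_modules/.bin on PATH when running scripts,
    // so bare tool names work without absolute paths
    build: 'browserify src/index.js -o dist/bundle.js',
    watch: 'watchify src/index.js -o dist/bundle.js -v'
  }
};
```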
If the solution to the perceived complexity of the build is to just commit the built artefacts to source control, it's possible that's a warning sign that the build process itself needs to be better organised and simplified, and make better use of the build tools so they help rather than hinder a developer coming to the project for the first time. That's what they're there for.
It's kind of surprising to read about the same experience and even reaching exactly the same conclusions.
Anyway, I've recently experimented with checking in the build directory only in production/staging branches to avoid constant merge conflicts and repository bloat. My approach is to have the build-excluding lines present in the .gitignore file in development branches and to comment them out in production/staging branches. So far it works well, but it's sometimes quite confusing to other people on the team.
I guess you mean the node_modules folder, where in fact this makes a lot of sense. What OP means is the build folder to which the frontend part of project is compiled.
I dunno about checking in the build directory. Having a working version of the build in the repo isn't necessary when your dependency versions are locked in npm-shrinkwrap.json; then your build is completely reproducible. As for eliminating extensive tooling on your prod server, I believe a common practice is to build on your dev laptop and rsync the build directory to the server.
Do you ever run into merge conflicts in the build directory?
> Do you ever run into merge conflicts in the build directory?
Those files should always be rebuilt fresh before committing. Just quickly do whatever to clear it from the conflict queue -- probably some version of "take mine" or "take theirs". Merge the actual source files as necessary, then rebuild. Commit and just blindly clobber whatever existed in the build directory, because you just rebuilt it and you know it's the correct version.
>Another thing I do which may shock many JS people is to actually check the build directory of my setup into version control, because this makes deployment much easier and ensures that I will always have a working version of the code in the repository, even if some external dependencies should change in the future. This also eliminates installing extensive tooling on my production servers, which is itself a large burden and creates many security issues (for a simple setup consisting of Babel, React, require.js and a few support libraries, npm downloads about 350 MB of source files onto the machine).
Kudos. I can't remember how many 'open source/Free' git repos I've cloned that won't compile or have problems compiling. Personally I think it is NOT open source unless the user can compile and get an exact copy of the software that is in the app store.
I don't understand why so many people use Gulp and file-system watchers during development. You have a web server, why not use it?
In our apps, when you request /app.js or /app.css or whatever, it goes through the tools (Browserify and SASS in our case) and delivers the transpiled versions on the fly.
We use a little NPM package called Staticr [1] to declare the pipelines so that the on-the-fly version used during development is identical to the production-time one. In production, we simply run Staticr against the pipelines, which produces the necessary minified files.
I agree so much with this. ES2015 is a YAGNI, not a crucial feature. Just because you can doesn't mean you should!
The most simple and flexible way I have found for a setup is Webpack without any of the fancy stuff. Really small configuration file, only support CommonJS modules and CSS imports. Everything else has proven to be unnecessary sugar in my experience.
I do use ES2015 in CLI apps though. Just make sure to lock the Node.js engine at version 4 in the manifest file.
I personally have NPM install packages local to the project and check them in. This allows for managing the dependencies and reproducible builds without the hassle of files that change every commit.
After wrestling with creating a nice gulpfile for the past two days (essentially wasting two days), I think this is the right route. My biggest gripe with gulp is that creating reusable pipelines even with lazypipe is unnecessarily difficult and can make debugging build scripts (!) hard.
This one is rather simple but includes steps for building and optimizing JS and CSS. It also has support for different build environments (Gitboard has a Chrome version and a web version).
I used to go with Makefiles maybe two years ago, but there's really no point in using anything other than npm scripts if you already have a Node.js set of tools (and therefore an npm manifest).
One great thing about npm scripts is that every time you run a command Node.js will export the binaries path to your $PATH so you don't have to manually add absolute paths.
The cost of transpiling es2015 isn't very visible here, because the "vanilla es6 TodoMVC" example used [1] doesn't rely heavily on ES6 only features, such as generator functions etc., that aren't just sugar on top of ES5.
Such features are expected to decrease the performance of the generated code significantly and should be taken into account when a transpilation step is involved.
I was confused at first as well, but I guess it doesn't hurt if this generates some more discussion (with the help of paul irish and his fame :D).
Fun fact: I was just about to migrate all of my JS to ES6. Might have to investigate further which transpiler to choose instead of blindly doing what the masses seem to do.
We've been using Closure to minify our code for years. It's extremely slow (I think it takes a minute or so on our codebase), so we only do it for deploys.
It produces smaller code than minifiers such as Uglify, even though we're not using its coding conventions (Closure is designed to optimize code written in a certain way that allows it to eliminate dead code paths).
We're using Browserify — can you use Rollup with it? Closure's slowness isn't really a problem for us since we only use it for deploys, not while developing.
Edit: I see, Rollup is a competitor to Browserify. Looks nice, if somewhat immature. Maybe we'll be able to use it.
Transpiling is temporary. Soon browsers will catch up and many of these tools will no longer be needed. Unless, of course, everyone wants to start using ES7 when browsers actually support ES6. JavaScript is maturing, this is good news. We are lucky so many tools have become available.
Transpilers are here to stay. The plan of a new version every year for ES20xx means that browser javascript support and what devs actually write has to be decoupled, and once everyone is on that workflow, it's hard to see any reason why we'd go back unless javascript stagnates.
Transpiling is here to stay... there is no acceptable ROI for 'move fast and break things'. It's for organizational and economic reasons, not technical ones. Even if the browsers catch up, it can still take a year or more to roll out updated software in many companies. It doesn't seem like you're aware of the fact that in an enterprise, 99% of users aren't allowed to install and update software on their own.
Perhaps, but I run an entertainment site targeted towards consumers. You'd think that we'd never need to support IE8 or even IE9, right?
Well no, about 10% of our traffic is from people sitting on those two browsers at work. Some of those people are even still running Windows XP, forcing us to use SAN instead of SNI certs.
There's a few traffic spikes in the day. Pre-8am, when people check the site before work. 12pm when people are fiddling around at work, then post 5pm when people are home from work, then 10pm before people sleep.
We even have a web app, but nope, people prefer to use the desktop version even at work.
>Well no, about 10% of our traffic is from people sitting on those two browsers at work. Some of those people are even still running Windows XP, forcing us to use SAN instead of SNI certs.
This depends on many things. If your 90% brings enough profit, then this 10% could perhaps be dismissed, as it might not be worth the developer cost and time to keep supporting them (that needs an "opportunity cost" calculation).
There's also the fact that that 10% is only gonna go down, never up again.
"Transpiling" may never go away, but it will cease to be the first thing everyone reaches for when web assembly comes out and reaches the point you can use it. It's the first thing right now since it's pretty much the only thing. Web assembly will pick up a lot of the use cases. Something like Coffeescript will probably still target JS by design, something like Emscripten will tend to target web assembly, and where the in between will end up is anybody's guess.
1. JScript added conditional compilation directly in the browser, which was IE-only (embrace, extend, extinguish). TypeScript compiles to cross-browser JavaScript (which does none of the above)
2. TC39 is supposed to be working in a "pave the cowpaths" (1) mode. Before new features get integrated into EcmaScript, TC39 looks into what the community is already doing (existing cowpaths), then integrates that into the language. Not only that but TC39 can learn from the mistakes of TS and Flowtype when they add type system support in EcmaScript. We are in dire need of one - and thanks to Microsoft's and Facebook's explorations, we now know what kind of type system would work for JS.
(1) For example, we got arrow functions in ES2015 thanks to CoffeeScript.
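Arrow functions are a good example of sugar with one real semantic difference: lexical `this`. A quick sketch of the cowpath that got paved (hypothetical `Counter` constructors for illustration):

```javascript
// ES5: `this` must be captured manually
function CounterES5() {
  var self = this; // the classic workaround
  this.count = 0;
  this.tick = function () { self.count += 1; };
}

// ES2015: an arrow function captures `this` lexically
function CounterES6() {
  this.count = 0;
  this.tick = () => { this.count += 1; };
}
```

In both versions the `tick` method keeps working even when detached from the object, which is the behavior CoffeeScript's `=>` popularized.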
Not necessarily - specs could come out as standardization of existing behaviour.
More importantly, the things ES6 solves are kind of rudimentary (modules, classes, => syntax, actual collections); the stuff after that is nice to have but provides diminishing value.
Even if the browsers fully support ES6, you'll still want to use optimizing compilers to statically dead-strip unused code and remove the need for separate module loading.
It's worth mentioning that I spoke to Guy Bedford (author of JSPM) after Sam put out this article. He thinks he can get JSPM's times down pretty easily and he's putting some effort into it this week.
I wouldn't let anyone interested in trying out JSPM be put off by these numbers just yet. JSPM v0.17beta-6 hasn't reached fully stable yet, and that's what's being used to generate these numbers.
The problem with Webpack IMO is that it kinda feels like Grunt sometimes where you find yourself editing a giant nested config.
But it is by far the most flexible tool for the job and setting it up without all the fancy stuff is pretty straightforward.
I think one could transpile to javascript compatible with the Google Closure compiler which does dead code elimination. Someone's probably already on it.
Have they figured out a solution to the string/property access problem preventing us from passing all JS through Closure compiler? (To those who aren't aware, Closure compiler is already an excellent compiler stack, but it requires you to write a specific subset of JavaScript[1], so real-world JS written without specifically targeting Closure is generally not compatible. Closure compiler is not currently useful to a JavaScript developer who depends on the npm ecosystem.)
[1] e.g. write foo.bar instead of foo['bar'], so Closure compiler can do name mangling and dead code elimination.
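An illustrative sketch of why this matters (hypothetical names): the compiler can consistently rename a property it only ever sees in dotted form, but a computed string access pins the original name in place.

```javascript
var obj = { tooltipText: 'hi' };

// Renamable: the compiler sees every dotted use and can safely
// turn `tooltipText` into something short like `a` everywhere.
function dotted(o) {
  return o.tooltipText;
}

// Not renamable: the property name is assembled at runtime, so
// renaming `tooltipText` in the declaration would break this lookup.
function computed(o, key) {
  return o[key];
}
```

Since npm code is full of computed accesses like `computed(obj, 'tooltip' + 'Text')`, Closure must either leave those names alone or silently break them, which is why its advanced optimizations need code written with its conventions in mind.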
Edit, since I currently sit at zero points: my point is that you can't dead-code-eliminate your dependencies, which is the whole point of using Closure compiler instead of whatever other toolchain.
Can you explain by what you mean by "requires"? We're using Closure to minify our code, and it works fine. We know that it's not able to do all the optimizations it can (it was made to follow Google's specific conventions, after all), but it still produces smaller code than minifiers such as Uglify, so it is useful.
The Closure compiler is definitely at its best when you have all optimizations enabled and are JSDoccing like they intend. While you get some benefit out of the box, it's not the full power of it.
That page says there are a number of features they aren't interested in supporting. Is there an actual list somewhere? I don't want to start using this then discover things I wanted to use just aren't supported.
This confused me as well. The point of browserify is that it groks npm versions and wrangles your dependencies. Whether you want to transpile or uglify etc. is orthogonal to whether you need browserify/webpack, isn't it?
There wouldn't be a bundle for Babel to transpile if the code wasn't Browserified first. I don't know much about Closure compiler but apparently it handles both the transpilation step and the bundling step.
I think that closure and other transpilers do "bundle", but only in the sense that they resolve explicit ES6 imports. They don't attempt to grok versions or dependencies the way browserify/webpack do.
Knee-jerk reaction: when I read "There are a lot of tools to compile ES2015 to ES5", I had to take a second to realize what was being said.
Getting all the Lego pieces of JS webdev scattered on the floor straight in one's head is sometimes a pain when one's job only has one venturing into the front end every couple months, thus having to relearn all the acronyms and such.
> when one's job only has one venturing into the front end every couple months, thus having to relearn all the acronyms and such.
To be fair, in my experience that holds true for any technology you only touch once every couple months (especially when you then quickly move on to other things again).
Nowadays any moderately complex JS application needs to be transcompiled anyway. If so, why not do it with TypeScript? It supports the ES6 stuff and, along with that, provides unique features such as optional typing, which can be very helpful for large projects and large/distributed teams (due to static typing) and generally for building a maintainable code structure.
All of those are only intended to reduce file size or improve backwards compatibility; you don't strictly need them, they just make things faster and work in a wider range of browsers (notably IE 8 and co).
Unlikely. Unless we were to discard support for all old browsers, and all new browsers updated automatically and supported the same features across devices and OSes.
Those things aren't needed to get javascript to work properly, most such "hacks" are necessary because of the DOM, proprietary standards and the ridiculous variety of devices web content needs to be displayed on. Even if you had an alternative language, those problems would still remain.
It's interesting how these tools pretty much ignore the issue of source maps. If you want your code minified and tree-shaken, no problem; but if you want the source maps split into an external map file and still have your debugger actually work, well, good luck.
Nothing really, but a lot of users may be stuck on older browsers. With government agencies or banks or the like, they tend to not upgrade until the last day that their current setup is supported. The cost of upgrading infrastructure is far too great to be justified by non-tech savvy higher ups who fail to understand security risks of not regularly upgrading systems.
So, developers are stuck programming for some ancient godawful version of Internet Explorer that barely even supports ES5.
But that's them. I want to develop ES6 in my browser natively and offer my app to people with modern browsers. I don't care about those stuck in the past. Why don't the browsers support it? Browsers support WebGL today, where I can run advanced 3D graphics in my browser. Government workers on Netscape 4.7 won't be able to run that either, but browsers still ship it. Why not ES6? Surely WebGL is more complex to implement...
Browsers are adding support now: https://kangax.github.io/compat-table/es6/. Upcoming Chrome releases will have 90%+ compatibility if you only care about bleeding edge browsers.
Yes, and as a corollary to my comment above I do have my own apps that I write that are entirely in ES2015 (aka ES6) and haven't found any features that I really want to use that aren't implemented by the big browsers (Chrome, Firefox, Safari, Edge). No transpiling needed; it seems like transpiling is only really needed if you want to support legacy versions of Internet Explorer.
Only Internet Explorer lags behind, but I'm not opposed to putting a warning when somebody visits with Internet Explorer when its my own little app.
Actually, I firmly believe that this applies to big apps as well. From my experience, if there is an app a user needs and it requires the user to upgrade their browser? They will upgrade their browser. Most don't upgrade because nobody asks them to. If a need arises in a corp environment to use an app with the simple requirement of a modern browser - they will upgrade it. If the apps always "allow" sub par browser support and bend over backwards for it - there will never be a reason for people to upgrade.