The argument here is that React has permanently won because LLMs are so heavily trained on it and default to it in their answers.
I don't buy this. The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
Compilation and bundling made a lot more sense before browsers got ES modules and HTTP/2. Today you can get a long way without a bundler... and in a world where LLMs are generating code that's actually a more productive way to work.
Telling any LLM "use Vanilla JS" is enough to break them out of the React cycle, and the resulting code works well and, crucially, doesn't require a round-trip through some node.js build mechanism just to start using it.
Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way. The problems React solves are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split second.
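For a sense of scale, one of those hand-rolled state-syncing routines might be nothing more than a tiny subscribe/notify store. A minimal sketch (not any particular library's API):

```javascript
// A hand-rolled state store of the kind React usually saves you from
// writing: hold state, merge updates, notify subscribers on change.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    get: () => state,
    set(patch) {
      state = { ...state, ...patch };
      listeners.forEach((fn) => fn(state));
    },
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // returns an unsubscribe function
    },
  };
}
```

A subscriber would typically be a render function that writes the new state into the DOM.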
> The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
Having a build step more than pays for itself just in terms of detecting errors without having to execute that codepath. The friction is becoming less and less as the compilation step is increasingly built into your project/dependency management tool and is increasingly fast (helped by the trend towards Rust or Go now that the required functionality is relatively well understood).
> The problems React solves are mostly around helping developers write less code and avoid having to implement their own annoying state-syncing routines. LLMs can spit out those routines in a split second.
An LLM can probably generate the ad hoc, informally-specified, bug-ridden, slow implementation of half of React that every non-React application needs very quickly, sure. But can the LLM help you comprehend it (or fix bugs in it) any faster? That's always been the biggest cost, not the initial write.
The problem with React apologetics is that you need to only take a cursory look at literally every production app written in React to see it's terrible and must be abandoned in the long-term.
To see how fast a properly engineered app can be if it avoids using shitty js frameworks just look at fastmail. The comparison with gmail is almost comical: every UI element responds immediately, where gmail renders at 5 fps.
Well yeah, most software is bad. In fact it's so bad that it's almost unbelievable.
We're all used to it and that's fine. But it's still bad. We're still wasting, like, 10,000x more resources than we should to do basic things, and stuff still only works, like, 50% of the time.
GMail is becoming the Lotus Notes of the 21st century. It uses half a gigabyte of RAM for every tab. God forbid you need to handle several accounts, e.g. for monitoring DMARC reports across domains.
And IT IS SLOW, despite your experience, which is highly dependent on how much hardware you can throw at it.
> [most used web framework, powering innumerable successful businesses]
> [literally unusable]
It's gotten a lot of critique over the years for its complexity, the same way Next.js has. I've also seen a frickload of render loops, and in some cases I think Vue just does hooks better (Composition API) and state management better (Pinia, closer to MobX than Redux). Meanwhile their SFC compiler doesn't seem to support TypeScript types properly, so if you try to do extends and need to create wrapper components around non-trivial libraries (e.g. PrimeVue) then you're in for a bunch of pain.
I don't think any mainstream options are literally unusable, but they all kinda suck in subtly different ways. Then again, so did jQuery for anything non-trivial. Most back end options also kind of suck, just in different ways (e.g. Spring Boot version upgrades across major versions and how verbose the configuration is, the performance of Python and its dependency management at least before uv), and the same could be said for DBs (PostgreSQL is pretty decent, MariaDB/MySQL has its hard edges) and pretty much everything else.
Doesn't mean that you can't critique what's bad in hopes of things maybe improving a bit (that Spring Boot config is still better than Spring XML config). GMail is mostly okay as is, then again the standards for GUI software are so low they're on the floor - also extends to Electron apps.
My friend, it renders at 15 fps on a literal supercomputer. It takes 30 seconds to load. The time between clicking a button and something happening is measured in seconds. It may be successful, but it is not good.
The problem is that you’ve (and we all have) learned to accept absolute garbage. It’s clearly possible to do better, because smaller companies have managed to build well functioning software that exceeds the performance of Google’s slop by a factor of 50.
I’m not saying RETVRN to plain JS, but clearly the horrid performance of modern web apps has /something/ to do with the 2 frameworks they’re all built on.
Tried a cleared-cache load: open and usable in 3 seconds, loading my work inbox, which is fairly busy and not clean.
I'm not sure what FPS has to do with this? Have you some sort of fancy windows 11 animations extension installed that star wipes from inbox to email view and it's stuttering??
I click an email and it shows instantly; the only thing close to "low FPS" is that it loads in some styles for a calendar notification and there's a minor layout shift on the email.
What / how are you using it that you apparently get such piss poor performance?
> clearly the horrid performance of modern web apps has /something/ to do with the 2 frameworks they’re all built on.
Nonsense. Apps from all frameworks and none show the same performance issues, and you can find exceptionally snappy examples from almost all frameworks too. Modern webapps are slow because the business incentives are to make them slow, the technology choices are incidental.
Enshittification: I've been using Gmail for decades and it was significantly faster and more responsive in the past. It still works fine tbh, but it did work better. Whether or not something is successful has little to do with its quality or performance these days.
There was also a time where once a website or application loaded, scrolling never lagged. Now when something scrolls smoothly it's unusual, and I appreciate it. Discord has done a really good job improving their laggy scroll, but it's still unbelievably laggy for literal text and images, and they use animation tricks to cover up some of the lag.
That's not quite true. There are a number of languages which compile to JavaScript, e.g., Elm, and provide an API for interacting with the DOM, as well as some kind of FFI.
Couldn’t you use WebAssembly? I think (?) GP’s point is that it would make more sense to use a different language that compiles to WebAssembly. (Or transpile to Javascript I guess, but I don’t know why you’d do that.)
WebAssembly still doesn't have direct DOM bindings. That's at least two levels deeper and several more standards to go after the very basic Wasm GC that was only just recently standardized. For the moment you basically have an FFI/IPC bridge across which you send TypedArray buffers, attempt to UTF-8 decode them, and then JSON.parse the result on the JS side. (We don't even have strings agreed upon yet, mostly just arrays of bytes. Wasm Strings is a proposed standard still in progress.)
Anyone doing serious HTML rendering with WebAssembly today A) has a build step, B) still has a bunch of JS to do memory buffer FFI/IPC and decoding/encoding, C) is usually using some form of Virtual DOM in the Wasm side and the JS side is some version of JSON-driven React/Preact-Lite. It is not today more efficient than React's build process nor React's runtime experience.
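The buffer-and-decode dance described above looks roughly like this on the JS side. A sketch with an illustrative pointer/length convention; real bindings differ:

```javascript
// JS side of a hypothetical Wasm bridge: the module hands back a pointer
// and length into its linear memory; we decode those bytes as UTF-8 and
// JSON.parse the result into a usable JS value.
function readJsonFromWasm(memory, ptr, len) {
  const bytes = new Uint8Array(memory.buffer, ptr, len);
  const text = new TextDecoder("utf-8").decode(bytes);
  return JSON.parse(text);
}
```

Every render update from the Wasm side pays this decode/parse cost, which is part of why this path isn't beating React's runtime today.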
_I'm gonna narrow in on the bit about compilation steps_.
Anyone shipping production code will one way or another have some kind of build step, whether that's bundling, minification, typechecking, linting, fingerprinting files, etc. At that point it makes little difference if you add a build step for compilation.
I'm sympathetic to not wanting to deal with build processes; I try to avoid them where I can in my side projects. The main web project I've been working on for the last year has no build step and uses Vanilla JS & web components. But it's also not a consumer-facing product.
I think there's friction for sure, but I just can't see this being an issue for most cases where a build step is already in place for other concerns. And developers are fairly familiar with build steps, especially if you do anything outside the web in C/C++ or Java/C# or Rust or whatever.
For release but not for development.
It's sufficient for the build step to take a long time and you start to notice the friction.
The web/browser should not rely on bundlers and compilation steps overall. This should remain optional.
Hot reloads in a modern bundler like Vite will typically be instantaneous. Normally in development, only dependencies are bundled, and the files you write are served as-is (potentially with a per-file compilation step for e.g. JSX or TypeScript). That means that when you save a file, the bundler will run the compiler over that single file, then notify the hot-reload component in the browser to re-fetch it. That would be quick even if it were done in JavaScript, but increasingly bundlers use parts written in Go or Rust to ensure that builds happen ever more quickly.
If you've got a huge project, even very quick bundlers will end up slowing down considerably (although hot reload should still be pretty quick because it still just affects individual files). But in general, bundlers are pretty damn quick these days, and getting even quicker. And of course, they're still fully optional, even for a framework like React.
Not really optional for React since it relies so heavily on JSX...
You can write React without it, but then is it React? What about the libraries you may want to import, or code that an LLM will generate for you?
There should be something better.
There is an extra thing that the people complaining about the compilation step in React are missing: using C++, for example, if you find an issue, you have to fix the issue, rebuild the thing, then run the thing and *do all the steps required to get your state to duplicate the issue*, just to check you fixed the issue. With React and the other JS-inspired frameworks and adjacent tooling, you just have to save the file.
With a bundler like Vite or tsx (the CLI tool, not the same as the .tsx file extension), you really do just save the file, and everything reloads, usually instantaneously. That said, TS is now supported by default in Node.js, Deno, and Bun, so if you're doing server-side stuff, you probably don't need a bundler at all, at least for development.
ESM is good enough that Vite is not necessary anymore. TypeScript never was, but it won't take long for some people to come gaslight anybody not claiming it is the best thing to ever happen to web development.
ESM is good, but if you've got more than a few dozen files in a project, you're going to start running into performance issues with complex waterfalls where the browser can't start running a file until it's downloaded all the files it depends on, at which point it finds more files to download, and so on. Lots of projects don't need that amount of complexity, but the ones that do still need a bundler. And even if you don't have a project that big, a bundler can provide a lot of convenience by being able to, say, import different assets and have the bundler automatically provide the relevant URLs for use in the application. You're always trading off convenience for complexity here, but I find Vite and similar tools hit the sweet spot where they make my life easier without bringing in much complexity or overhead. Your experience might vary of course.
Similarly with TypeScript, having worked with and without it, I get so much from it that is a no-brainer for me. But maybe I'm just in the pocket of Big TypeScript and this is more of that gaslighting you were worried about... ;)
Exactly, and I wouldn't miss a chance to give React some crap; when I was learning Java or Swift, the compilation times seemed horrendous. Web developers have it very good with fast incremental compilation, hot reload, etc.
I don't buy it either. The reality is that the people who do hiring don't understand the problems they are working on and which tech stack is appropriate. They might not understand or even like React, but they are going to pick it because they know that they can hire other people who understand it. We will end up with lots of projects in 5-10 years where people will ask "why the hell did you use React for this?" ...actually, that's the reality now!
I also think the pitfall that might exist here is the base assumption that developers are letting the LLMs make architecture decisions, either by not addressing the issue at all and just prompting for end results, or by not making the choice before asking the LLM.
E.g., if most developers are telling their LLMs “build me a react app” or “I want to build a website with the most popular framework,” they were going to end up with a react app with or without LLMs existing.
I’m sure a lot of vibecoders are letting Jesus take the wheel, but in my vibecoding sessions I definitely tend to have some kind of discussion about my needs and requirements before choosing a framework. I’m also seeing more developers talking about using LLMs with instructions files and project requirement documents that they write and store in their repo before getting started with prompting, and once you discover that paradigm you don’t tend to go back.
Yup. The central argument seems to include an assumption that LLMs will be the same tomorrow as today.
I'd note that people learn and accumulate knowledge as new languages and frameworks develop, despite there being established practices. There is a momentum for sure, but it doesn't preclude development of new things.
Not quite. The central argument is that LLMs tomorrow will be based on what LLMs output today. If more and more people are vibe-coding their websites, and vibe-coding predominantly yields React apps, then the training data will have an ever larger share of React in it, thus making tomorrow's LLMs even more likely to produce React apps.
I share your optimism. Once you move up a conceptual layer (from writing code to guiding an LLM), the lower level almost becomes interchangeable. You can even ask the LLM to translate from one language/framework to another.
While I tend to agree, I think there's still an undercurrent of React-like paradigms being strongly preferred in the training data. So assuming LLMs continue to get much better, if you were to build a simple UI toolkit with an LLM, there's a strong chance that over time, with accretion, you will end up remaking React or some other framework unless you're particularly opinionated about direction.
I think that while it may be easier to develop with LLMs in languages and frameworks the LLM may “know” best, in theory, models could be trained to code well in any language and could even promote languages that either the sponsoring company or LLM “prefers”.
Yea, and models now are so good that the difference between writing React or Svelte code is moot. Maybe 2 years ago choosing React just because an LLM would be better at it would make sense, but not today.
(For the AI-sceptics, you can read this as models are equally bad at all code)
Fwiw - I'm hoping it can break out too. But one of the biggest challenges is that last bit, "asking it to use vanilla JS". We see this all the time in developer relations: getting developers to ask for a specific thing, or even have it on their mind to think about using it, is one of the biggest hurdles.
> Frameworks are abstractions over a platform designed for people and teams to accelerate their teams new work and maintenance while improving the consistency and quality of the projects. [...] I was just left wondering if there will be a need for frameworks in the future? Do the architecture patterns we've learnt over the years matter? Will new patterns for software architecture appear that favour LLM management?
Are you saying that frameworks might become less important because LLMs can just generate boilerplate code instead? Or do I misunderstand? Personally, if the vibe-engineering future that some executives are trying to foist on us means that I'll be reading more code than I write directly, then I want that code to be _doubly_ succinct.
Maybe in a distant future, but why are we so obsessed with the anti-framework sentiment? We don't shy away from a framework when coding in Node, PHP, Java…
Is there something about the web — with its eternal backwards compatibility, crazy array of implementations, and 3 programming languages — that seems like it's the ideal platform for a framework-free existence?
Maybe if we bake all of the ideas into JavaScript itself, but then where does it stop? Is PHP done evolving? Does Java, by itself, do everything as well as you want out of Spring?
The direct semantics of JSX are "transform this syntax into this nested sequence of function calls and this layout of arguments". That's been the case since nearly the beginning. The only real semantics "fights"/"changes"/"React-specifics" are in the compiler options in Babel and TypeScript: what the function is named and how you import it. Enough other libraries that aren't React use JSX that it is easy to see what the generic approach looks like and to find ideas for runtime configuration of "jsx function name" and an import strategy that isn't just "import these hardcoded names from these hardcoded React modules".
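Concretely, the classic transform rewrites an element into a call to whatever factory function is configured. A generic `h` (a common non-React convention) is enough to show the shape; the output object here is illustrative:

```javascript
// A generic JSX factory: the compiler rewrites <div id="x">hi</div>
// into h("div", { id: "x" }, "hi"). Nothing React-specific about it.
function h(tag, props, ...children) {
  return { tag, props: props ?? {}, children };
}

// What the compiled output of <div id="x">hi</div> evaluates to:
const vnode = h("div", { id: "x" }, "hi");
```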
> The direct semantics of JSX are "transform this syntax into this nested sequence of function calls and this layout of arguments".
Not exclusively. SolidJS, for example, transforms the syntax into string templates with holes in them. The "each element is a function call" approach works really well if those calls are cheap (i.e. with a VDOM), but if you're generating DOM nodes, you typically want to group all your calls together and pass the result to the browser as a string and let it figure out how to parse it.
For example, if you've got some JSX like:
<div>
  <div>
    <span>{text}</span>
  </div>
</div>
You don't want that to become nested calls to some wrapper around `document.createElement`, because that's slow. What you want is to instead do something like this:
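A sketch of the template-based approach (illustrative names, not SolidJS's actual compiled output):

```javascript
// The whole JSX block collapses into one static HTML string, parsed once
// into a <template> and cloned per instance; the {text} hole is then
// filled directly on the clone. Browser-only sketch.
const TEMPLATE_HTML = "<div><div><span></span></div></div>";

function createInstance(text, doc = globalThis.document) {
  const tmpl = doc.createElement("template");
  tmpl.innerHTML = TEMPLATE_HTML; // one native parse for the whole block
  const root = tmpl.content.firstChild.cloneNode(true);
  root.firstChild.firstChild.textContent = text; // fill the {text} hole
  return root;
}
```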
This lets the browser do more of the hard parsing and DOM-construction work in native code, and makes everything a lot more efficient. And it isn't possible if JSX is defined to only have the semantics that it has in React.
> You don't want that to become nested calls to some wrapper around `document.createElement`, because that's slow.
It's really not slow. It might seem slow if you're using React's behavior, which re-invokes the "render function" any time anything changes. But eventually they get reconciled into the DOM, which creates the elements anyway. And most other code bases are not based on this reconciliation concept. So I don't think that's a given.
It's significantly slower than letting the browser do the work for you. Obviously performance isn't the only concern, but (a) as a user, I don't want people wasting my CPU cycles unnecessarily, and (b) I've worked on applications where every single millisecond counted, and the createElement approach would have made a material difference to the performance of the application overall.
Also, there's no reconciliation happening here. In SolidJS, as well as in Vue in the new Vapor mode, and Svelte, the element that is returned from a block of JSX (or a template in Svelte) is the DOM element that you work with. That's why you don't need to keep rerendering these components - there's no diffing or reconciliation happening, instead changes to data are directly translated into updates to a given DOM node.
But even if you don't need to worry about subsequent re-renders like with VDOM-based frameworks, you still need to worry about that initial render. And that goes a lot quicker if you can treat a JSX block as a holistic unit rather than as individual function calls.
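The "changes to data are directly translated into updates to a given DOM node" part can be sketched as a tiny signal. This is a toy, far simpler than Solid's or Svelte's real implementations:

```javascript
// A toy signal: reads return the current value; writes notify every
// subscriber directly -- no diffing, no reconciliation pass.
function createSignal(value) {
  const subscribers = new Set();
  const read = () => value;
  read.subscribe = (fn) => subscribers.add(fn);
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn(value));
  };
  return [read, write];
}
```

A subscriber here would be a function that writes straight to one DOM node, e.g. `(v) => { span.textContent = v; }`, wired up once during the initial render.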
`document.createElement` performance has come a long way in the last few years. The HTML string parser is incredibly well optimized in every browser, but the performance difference between smashing a string into `innerHTML` and the `document.createElement` approach has shrunk a lot, especially in the time since React started doing so much VDOM work to avoid both tools as much as possible.
The difference shrinks even further with `<template>`/HTMLTemplateElement: with its secondary/content `document`, `document.createElement` plus `document.importNode` is faster for cloning+adoption of a `template.content` into the main document than string parsing.
I've got a work-in-progress branch in a library of mine using JSX to build HTMLTemplateElements directly with `document.createElement`, and right now `document.createElement` is the least of my performance concerns, and there is no reason to build strings instead of elements.
(ETA: There are of course reasons to serialize elements to strings for SSR, but that's handy enough to do with a DOM emulator like JSDOM rather than need both an elements path and a string path.)
We're talking about the behavior of standardized JSX. Different frameworks have different approaches. This supports the only point I'm trying to make here, which is that there's no expected standard behavior of JSX to standardize on.
The library [0] I wrote that uses JSX converts expression attributes into parameter-less lambdas before providing them as function parameters or object properties. This is a different behavior than React's build tools or any of TypeScript's JSX options. But it's not inconsistent with the spec.
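A hypothetical sketch of that lambda-wrapping transform (the linked library's actual output may differ): an attribute expression like `count={a + b}` compiles to a thunk rather than an eagerly evaluated value, so the consumer can re-evaluate it on demand:

```javascript
let a = 1, b = 2;

// Instead of evaluating eagerly -- { count: a + b } -- the transform
// wraps the expression in a parameter-less lambda:
const vnode = { tag: "counter", props: { count: () => a + b } };

// A later mutation is visible the next time the thunk is called.
a = 10;
```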
More libraries than React work just fine with the existing Babel and/or TypeScript JSX options. snabbdom is a big one that comes to mind that isn't React/Preact, but there are plenty more.
The space that the Babel/Typescript JSX options describe is a constructive space for more than just React.
JSX is not really needed. We have templates. Besides, it really is a DSL with a weird syntax.
I'm doubtful it will ever become an ES standard. And for good reasons.
That should be left to the different frameworks to handle.
If you use them raw, yes. They are just the building block you can build upon.
And that's a really good building block. You can create your own parsers. I am doing exactly this for a framework that has yet to be released, full disclosure.
Makes HTML clearly HTML, and JavaScript fully JavaScript. No bastardization of either one into a chimera.
And the junction of the two is why the custom parser is required. But it is really light from a dev experience.
What about the value of abstraction to readability and maintainability? Do you really want to be stuck with debugging/upgrading and generally working with such low-level vanilla JS code when elegant abstractions are so much more efficient?
Abstraction for its own sake, especially with js frameworks, doesn't make anything more readable or maintainable. React apps are some of the most spaghetti style software I've ever seen, and it takes like 10 steps to find the code actually implementing business logic.
Some of that is the coding standards rather than the framework. I think Dan Abramov did a bang-up job on React, but his naming conventions and file structure are deranged.
Unfortunately there isn't any one preferred alternative convention. But if you ignore his and roll your own it will almost certainly be better. Not great for reading other people's code but you can make your own files pretty clear.
What "naming conventions and file structures" are you referring to? I don't think Dan ever really popularized anything like that for _React_.
If you're thinking of _Redux_, are you referring to the early conventions of "folder-by-type" file structures, i.e. `actions/todos.js`, `reducers/todos.js`, `constants/todos.js`? If so, there's perfectly understandable reasons why we ended up there:
- as programmers we try to "keep code of different kinds in different files", so you'd separate action creator definitions from reducer logic
- but we want to have consistency and avoid accidental typos, especially in untyped plain JS, so you'd extract the string constants like `const ADD_TODO = "ADD_TODO"` into their own file for reuse in both places
To be clear that was never a requirement for using Redux, although the docs did show that pattern. We eventually concluded that the "folder-by-feature" approach was better:
which is what we later turned into "Redux slices", a single file with a `createSlice` call that has your reducer logic and generates the action creators for you:
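The slice shape, hand-rolled here as a sketch (not Redux Toolkit's actual `createSlice`, which also generates types and wires in Immer):

```javascript
// Hand-rolled sketch of the "slice" idea: one unit that owns its action
// type, action creator, and reducer, so nothing is spread across
// actions/, reducers/, and constants/ folders.
function createTodosSlice() {
  const ADD_TODO = "todos/addTodo";
  return {
    addTodo: (text) => ({ type: ADD_TODO, payload: text }),
    reducer(state = [], action) {
      switch (action.type) {
        case ADD_TODO:
          return [...state, { text: action.payload, done: false }];
        default:
          return state;
      }
    },
  };
}
```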
Do they do this notably worse than, say, a Spring Boot API or a Vue frontend? I don't think this is a React thing. Those spaghetti projects would be so with or without React.
I've been leaning more on web components as an abstraction here, once an LLM can take care of their boilerplate they're a pretty nice way to modularize frontend code.
> The argument here is that React has permanently won because LLMs are so heavily trained on it and default to it in their answers.
I can't find the author making that argument. Can you point to where they're declaring that React has permanently won?
> The big problem with React is that the compilation step is almost required - and that compilation step is a significant and growing piece of friction.
This is orthogonal to what the article is addressing.
> Call me a wild-eyed optimist, but I'm hoping LLMs can help us break free of React and go back to building things in a simpler way
If you didn't read the article, I think you should, because this is generally the conclusion the author comes to: that in order to break out of React's grip, LLMs can be trained to use other frameworks.
> If the industry continues its current focus on maintainability and developer experience, we’ll end up in a world where the web is built by LLMs using React and a handful of libraries entrenched in the training data. Framework innovation stagnates. Platform innovation focuses elsewhere. React becomes infrastructure—invisible and unchangeable.
So I guess I'm in agreement with the author: let's actively work to make that not happen.
I think a more interesting (and significant) question is whether there can ever be a new programming language.
Like, if you really believe that in the future 95% of code will be written by LLMs, then there can never be a Python 4, because there would be no humans to create new training data.
To me, this is evidence that LLMs won't be writing 95% of code, unless we really do get to some sort of mythical "AGI" where the AI can learn entirely from its own output and improve itself exponentially. (In which case there still wouldn't be a Python 4; it would be some indecipherable LLM-speak.) I'll believe that when I see it.
My hunch is that existing LLMs make it easier to build a new programming language in a way that captures new developers.
Most programming languages are similar enough to existing languages that you only need to know a small number of details to use them: what's the core syntax for variables, loops, conditionals and functions? How does memory management work? What's the concurrency model?
For many languages you can fit all of that, including illustrative examples, in a few thousand tokens of text.
So ship your new programming language with a Claude Skills style document and give your early adopters the ability to write it with LLMs. The LLMs should handle that very well, especially if they get to run an agentic loop against a compiler or even a linter that you provide.
When LLMs write and maintain code, does the programming language they use even matter? Anyway, the inputs to LLMs are all in natural language, and what we get is what we wanted built.
Is it better to specify the parameters and metrics (aka non-functional requirements) that matter for the application, and let LLMs decide? For that matter, why even provide that? Aren't the non-functional requirements generally understood?
It is the specifics that would change: scale to 100K monthly users, keep infrastructure costs below $800K, or integrate with existing Stripe APIs.
> Most programming languages are similar enough to existing languages that you only need to know a small number of details to use them: what's the core syntax for variables, loops, conditionals and functions? How does memory management work? What's the concurrency model?
I think that’s correct in terms of the surface-level details but less true for the more abstract concepts.
If you’ve tried any of the popular AI builders that use Supabase/PostgREST as a backend, for instance Lovable, you’ll see that they are constantly failing because of how unusual PostgREST is. I’m sure these platforms have “AI cheat sheets” to try to solve this, but you still see constant problems with things like RLS, for instance.
Is it not pseudocode? It doesn't have to be something with strict syntax or very limited keywords, but maybe the compiler/linter (LLM) could point out when you are being ambiguous, or not defining how something should be done if several alternatives are possible.