The author makes the argument that in the age of LLMs, more type-safe languages will be more successful than less type-safe ones. But how does that support the claim that Go is more suitable than JavaScript? TypeScript is more type-safe than Go: Go doesn't validate nil pointers, it doesn't enforce that fields are set when initializing structs, and it has no support for union types. All of those things can cause runtime errors in Go that are caught at compile time in TypeScript.
Not sure, but I gave it a shot weeks ago and finally started building something in Rust, a project I've wanted to build for years. In maybe 12 hours of total effort I've probably done several months' worth of engineering (considering I only touch this project in my spare time). Every time I pick up Rust I fight it for hours because I don't do any Rust in my day job, but the LLM picks up the Rust-nuance slack wherever I fall short, so I can focus on the key architectural details I've been obsessing over for years now.
A gross mischaracterization of the author's point (the word "type" doesn't even appear in the article). The author focuses on the cost of interpreted languages, which he describes as "memory hungry" and computationally expensive.
That is like saying the kernel/sandbox hypervisor can access those things. The point is that the sandboxed code cannot. In browsers, code from one origin cannot access those things from another origin unless explicitly enabled with CORS.
MCP doesn't force models to output JSON; quite the opposite. Tool-call results in MCP are text, images, audio: the things models naturally output. The whole point of MCP is to make APIs digestible to LLMs.
An SVG doesn't need to support scripting. When you load an SVG through an <img> tag, for example, no <script>s run either (scripts only run if you use <iframe>, <object>, or inline the SVG in HTML). When you serve the SVG (or the HTML it is inlined in) with a CSP that doesn't allow inline scripts, no scripts run. It's totally possible to render an SVG without scripts (most SVGs don't contain any), and various mechanisms for this are already implemented in browsers.
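For instance, serving the page (or the SVG itself) with a response header along these lines (a minimal sketch; real policies are usually more fine-grained) prevents any scripts, inline or external, from running:

```http
Content-Security-Policy: script-src 'none'
```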
No shit? I bet that's what I meant when I said "SVG inline needs to support scripting" then?
>It's totally possible to render an SVG without scripts (most SVGs do not contain scripts) and various mechanisms for this are already implemented in browsers.
Yes, it is totally possible to render an SVG without scripts, and it is also possible to render one with them. Hence, when I say something like "if Safari's SVG implementation meant that SVG favicons were open to either XML exploits or scripting exploits", that IF is a really important indicator: if they did it as an inline SVG, but now it is sitting inside the browser chrome with heightened permissions, it would be a problem. Furthermore, the XML exploits available in the browser chrome might also be more deadly.
But why would they do this? Hey, I don't know. I have noticed that sometimes people do dumb things, including browser developers, or they miss edge cases because they don't realize they exist.
I also noticed that one of the comments on what had been implemented mentioned support for SVG favicons as a data URI. If an SVG favicon were implemented that way, it might very well hit the edge case where the data URI exists as an "inline" image. It seems unlikely, because a data URI should normally sit in an img tag, but I have also seen unlikely or unexpected behavior with data URIs before, so I'd consider it a possible place for things to go wrong.
To add anecdotally: based on logging on my portfolio site, all major US players (OpenAI, Google, Anthropic, Meta, CommonCrawl) appeared to respect robots.txt as they claim to (can't say the same of Alibaba).
Sometimes I do still get requests with their user agents, but generally from implausible IPs (residential IPs, or "Google-Extended" from an AWS range, or the same IP claiming to be multiple different bots, ...), and never from the bots' actual published IP addresses (which I did see before adding robots.txt). That makes me believe it's some third party, either intentionally trolling or using the larger players as cover for their own bots.
Using residential IPs is standard operating procedure for companies that rely on collecting information via web scraping. You can rent residential egress IPs. Sometimes this is done in a (kind of) legit way by companies that actually subscribe to residential ISPs. Mostly it's done by malware hijacking consumer devices.
Microsoft contributes a lot of web-standard implementations upstream to Chromium. They are not just letting Google do all the work, as your comment makes it sound. They could have chosen to do the same with Firefox, so the decision to fork Chromium rather than Firefox must have had other reasons.
The problem with DNT was that there was no established legal basis governing its meaning, and some browsers just sent it by default, so corporations started arguing it was meaningless: there was no way to tell whether it indicated a genuine request or was merely an artefact of the user's browser choice (which may be meaningless as well if they didn't get to choose their browser).
As the English version of that page says, it's been superseded by GPC, which has more widespread industry support and is trying to get legal adoption, though I'm seeing conflicting statements about whether it has any legal meaning at the moment, especially outside the US. The described effects in the EU seem redundant given what the GDPR and the ePrivacy Directive establish as the default behavior: https://privacycg.github.io/gpc-spec/explainer
That's basically how goroutines work in Go. You opt into concurrency with the `go` keyword; execution is blocking by default. In JS it's the reverse: concurrent by default, and you opt into blocking with the `await` keyword. (Except in Go you get true parallelism for CPU-bound tasks too, while in JS concurrency only helps with I/O.)
Both have their pros and cons. I've seen problems in Go codebases where some I/O operation blocks the main thread because it's not obvious from the call stack that something would best be run concurrently, and that's easy to ignore until it gets worse (at which point it's annoying to debug).
Any public information eventually is priced into the stock price by the market.
Say you buy the stock even though you didn't read the DEI statement, but other people who bought the stock before you had read it. Their purchases drove up the stock price, so you had to pay more for the stock. You got defrauded of the delta. Especially if it now comes out, the price goes down, and you take losses.
I think the kind of teams that always stay on top of the latest TypeScript version and use the latest language features are also more likely to stay on top of the latest Node versions. In my experience, TypeScript upgrades actually need migrations/fixes for new errors more often than Node upgrades do.
Teams that don't care about the latest V8 and Node features and always stay on LTS probably also care less about the latest and greatest TypeScript features.
I work on a large app, Notion, that's TypeScript on both the client and the server.
We find TypeScript much easier to upgrade than Node. New Node versions change the runtime performance characteristics of the app, and sometimes regress complex features like async hooks or introduce memory leaks. We tend to have multi-week rollout plans for new Node versions, with side-by-side deploys to check metrics.
TypeScript, on the other hand, someone can upgrade in a single PR: once you get the types to check, you're done and you merge. We just got to the latest TS version last week.