One of the core things Node.js got right was streams. (Anyone remember substack’s presentation “Thinking in streams”?) It’s good to see them continue to push that forward.
I think there are several reasons. First, the abstraction of a stream of data is useful when a program does more than process a single realtime loop. For example, adding a timeout to a stream of data, switching from one stream processor to another, splitting a stream into two streams or joining two streams into one, and generally all of the patterns one finds in the Observable pattern, in unix pipes, and in event-based systems more generally, are modelled better with push- and pull-based streams than with a tight realtime loop. Second, streams win for the same reason that looping through an array with map or forEach is often favored over a for loop, for loops over while loops, and while loops over goto statements: they reduce the amount of human-managed control-flow bookkeeping, which is precisely where humans tend to introduce logic errors. And lastly, it almost always takes less human effort to write and maintain stream-processing code than to write and maintain a realtime loop against a buffer.
Streams have backpressure, making it possible for a downstream consumer to tell an upstream producer to throttle its output. This avoids many of the problems queuing theory warns about, such as unbounded buffering.
That also happens automatically; it's abstracted away from the users of streams.
A stream is not necessarily always better than an array; of course it depends on the situation. They are different things. But if you find yourself with a flow of data that you don't want to buffer entirely in memory before you process it and send it elsewhere, a stream-like abstraction can be very helpful.
Why is an array better than pointer arithmetic and manually managing memory? Because it's a higher level abstraction that frees you from the low level plumbing and gives you new ways to think and code.
Streams can be piped, split, joined, etc. You can do all these things with arrays, but you'll be doing a lot of the bookkeeping yourself. Also, streams have backpressure signalling.
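For a concrete picture, here's a minimal Node.js sketch (the file names are placeholders): pipeline() wires the stages together and handles the backpressure bookkeeping that you'd otherwise manage by hand with an array and your own loop.

```ts
// Minimal sketch: pipe a file through a transform and out to another file.
// pipeline() propagates errors and backpressure between the stages for us.
import { createReadStream, createWriteStream } from "node:fs";
import { Transform } from "node:stream";
import { pipeline } from "node:stream/promises";

const upcase = new Transform({
  transform(chunk, _encoding, callback) {
    // If the writable side downstream is slow, the read side is paused
    // automatically -- no manual pause()/resume() or watermark juggling.
    callback(null, chunk.toString().toUpperCase());
  },
});

await pipeline(
  createReadStream("input.log"),   // placeholder input path
  upcase,
  createWriteStream("output.log")  // placeholder output path
);
```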
Backpressure signaling can be handled with your own "event loop" and array syntax.
Manually managing memory is in fact almost always better than what we are given in Node and Java and so on. We succeed as a society in spite of this, not because of this.
There is a point of diminishing returns, say, the difference between virtual and physical memory addressing, but even then it is extremely valuable to know what is happening, so that when your magical astronaut code doesn't work on an SGI, you know why.
This sounds like a breath of fresh air as a disenchanted Spotify user. My only hesitation is that I’ve lost touch with collecting music. I used to rip CDs and download music and curate a library etc, but I’ve lost my collection and collecting habits since adopting streaming. How do people collect music nowadays? Is there a legit way (fairly compensating artists) to do it?
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request (see the sketch below).
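A minimal sketch of the shape difference I mean; the endpoint URLs, method string, and payloads here are made up for illustration:

```ts
// Hypothetical RPC-style call: one URL, always POST, the operation named in the body.
await fetch("https://api.example.com/rpc", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ method: "getUser", params: { id: 123 } }),
});

// Hypothetical REST-style call: the resource has its own route,
// and the HTTP verb carries the operation.
await fetch("https://api.example.com/users/123", { method: "GET" });
```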
It is true that if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it's a standard, familiar OpenAPI-style JSON-over-HTTP API, but I'll reserve a 5% probability that it's actually HATEOAS and I'll have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS, it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
People in the real world don't refer to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. As it stands, in the world outside of this thread, nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
Wow, this looks awesome. Been using Temporal, but this fits so perfectly into my stack (Postgres, Pydantic), and the first-class support for DAG workflows is chef's kiss. Going to take a stab at porting over some of my workflows.
Seems like you should be correct. A shadcn button is just react, tailwind, and @radix/react-slot. But if you simply create a new shadcn Next.js template (i.e. pnpm dlx shadcn@latest init) and add a button, the "First Load JS" is ~100kB. Maybe you could blame that on Next.js bloat and we should also compare it to a Vite setup, but it's still surprising.
Yeah, but my point is that you download the runtime and core of React/Tailwind just once for the whole web page, so those should be removed from the test, or at least there should be a comparison that includes both cases.
You only need a couple of images on your webpage and that runtime size soon becomes irrelevant.
So the question is: how much overhead does React/Tailwind CSS add beyond that initial runtime size? If I have 100 different buttons, is it suddenly 10,000 kilobytes? I think not. This is the most fundamental issue with all the modern web benchmarking results. They benchmark sites that don't reflect reality in any sense.
These frameworks are designed for content-heavy websites, where performance means a completely different thing. If every button added that much overhead, of course it would be a big deal. But I don't think they add that much overhead.
> Yeah, but my point is that you download the runtime and core of React/Tailwind just once for the whole web page, so those should be removed from the test, or at least there should be a comparison that includes both cases.
You think a test that is comparing the size of apps that use various frameworks should exclude the frameworks from the test? Then what is even being tested?
Actual overhead when the site is used in reality? How much overhead are those 100 different buttons creating? What is the performance of state management? What is the rendering performance on complex sites? How much size overhead do modular files add? Does .jsx contribute more than raw HTML to page size? The library runtime bundle size is mostly meaningless, unless you want to serve a static website with just text. And then you should not use any of these frameworks.
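If someone wanted to test that claim, here's a rough sketch (hypothetical build output paths, assuming two builds of the same app, one with a single button and one with a hundred): compare the gzipped JS and look at the marginal cost per button rather than the one-time runtime.

```ts
// Back-of-the-envelope check: sum gzipped .js sizes of two hypothetical builds
// and report the marginal cost per extra button.
import { readFileSync, readdirSync } from "node:fs";
import { gzipSync } from "node:zlib";
import { join } from "node:path";

function gzippedDirSize(dir: string): number {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".js"))
    .reduce((sum, f) => sum + gzipSync(readFileSync(join(dir, f))).length, 0);
}

const one = gzippedDirSize("dist-1-button/assets");       // hypothetical path
const hundred = gzippedDirSize("dist-100-buttons/assets"); // hypothetical path
console.log(`1 button: ${one} B, 100 buttons: ${hundred} B`);
console.log(`marginal cost per extra button: ${((hundred - one) / 99).toFixed(1)} B`);
```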
With OpenAI models, my understanding is that token output is restricted so that each next token must conform to the specified grammar (i.e. a JSON schema), so you're guaranteed to get either a function call or an error.
Edit: per simonw’s sibling comment, ollama also has this feature.
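For what it's worth, a rough sketch of what that looks like against the Chat Completions endpoint, constraining output to a JSON schema; the model name and schema here are placeholders, not a recommendation:

```ts
// Sketch: request output constrained to a JSON schema, so decoding can only
// emit tokens that keep the JSON valid against the schema.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // placeholder model
    messages: [{ role: "user", content: "Extract the city from: 'I live in Oslo'." }],
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "city_extraction", // placeholder schema name
        strict: true,
        schema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
          additionalProperties: false,
        },
      },
    },
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content); // e.g. {"city":"Oslo"}
```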
Ah, there's a distinction here between the model and the model framework. The ollama inference framework supports token output restriction. Gemma in AI Studio also does, as does Gemini (there's a toggle in the right-hand panel), but that's because both of those models are being served via an API where the functionality is present in the server.
The Gemma model by itself does not, though, nor does any "raw" model, but many open libraries exist for you to plug into whatever local framework you decide to use.
I’m missing something. If WebAuthn is “ssh for the web” then why would it matter if Bob was phished and logged into the fake crypto portal running on the raspberry pi? It’s not like the attacker now knows his private key. Is the danger that Bob also would share his crypto wallet keys with the fake site or something?
The attacker is now logged in on the real crypto portal as Bob. The SSH equivalent would be connecting to a malicious server with SSH agent forwarding enabled.
I suppose you can completely skip dummy sites when phishing for passkeys since the user doesn't know the password and therefore you don't need him to enter said password anywhere (which is why you needed a dummy site in the first place).
The attacker has access to whatever the passkey was protecting. It's like asking who cares about password phishing. And FWIW a crypto portal in front of something like Coinbase can obviously do a lot of damage since most people do not keep their crypto in their own personal cold storage.