Hacker News | fulafel's comments

Yes, in the headlines the agencies playing adversaries to the common folk are definitely mainly Chinese... /s

Depends on how you do the accounting. Are you counting only inference costs, or are you amortizing next-gen model development costs? "Inference is profitable" is oft repeated and rarely challenged. Most subscription users are low-intensity users, after all.
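
A toy illustration of how the accounting choice flips the conclusion (every number below is made up purely for the arithmetic):

    # Hypothetical annual figures, in billions of dollars: illustrative only.
    revenue = 4.0
    inference_cost = 2.5   # serving cost alone
    model_dev_cost = 6.0   # next-gen model development, amortized over 3 years

    print(revenue - inference_cost)                       # +1.5: "inference is profitable"
    print(revenue - inference_cost - model_dev_cost / 3)  # -0.5: the amortized view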

Isn't this a wrongly editorialized title? "Reported by Shaheen Fazim on 2026-02-11", so it's more like a 7-day.

It refers to how many days the software has been available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug

The term has long since been watered down to mean any vulnerability (since everything was a zero-day at some point before the patch release, which I guess is those people's logic? idk). Fear inflation and shoehorning seem to happen to any scary/scarier/scariest attack term. It might be easiest not to put too much thought into media headlines containing 0day, hacker, crypto, AI, etc. Recently I saw non-remote "RCEs" and "supply chain attacks" that weren't about anyone's supply chain copied happily onto HN.

Edit: fwiw, I'm not the downvoter


Its original meaning was days since software release, without any security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier: a week after general release, three days, same-day. The ultimate was 0-day software, software which was not yet available to the general public.

In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.

It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".


Wikipedia: "A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it."

This seems logical, since by the etymology of zero-day it should apply to the release (= disclosure) of a vuln.


> It refers to how many days the software has been available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug

"Zero-day vulnerability" and "zero-day exploit" refer to the vulnerability, not the vulnerable software. Hence, by common sense, the availability refers to the vulnerability info or the exploit code.


I think the implication in this specific context is that malicious people were exploiting the vuln in the wild prior to the fix being released

Between IRC and Discord/Slack we had XMPP, which almost made it, but then Google etc. killed support for it.

Python mainly uses reference counting for garbage collection, and the cycle-breaking full-program GC can be manually controlled.

With RC, each "kick in" of the GC is usually a small amount of work, triggered by an object's reference count going to 0. In this program's case I'd guess you won't hear any artifacts.
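
A minimal sketch of that manual control, using only the stdlib gc module (the render loop is a hypothetical stand-in for the synth's audio code):

    import gc

    gc.disable()  # turn off the cycle collector; refcounting still frees objects

    def render_block(n):
        # Plain refcounted allocations: freed deterministically when the last
        # reference goes away, with no collector pause involved.
        return [0.0] * n

    for _ in range(1000):
        block = render_block(256)  # the previous iteration's block is freed
                                   # right here, by refcounting alone

    gc.collect()  # run the cycle collector explicitly, at a moment we choose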


This seems to conflate different things.

Interpreted is not a problem from the predictable-behaviour point of view; you may just get less absolute performance. And with Python you can do the heavy lifting in numpy etc., which are native code. That is what is done here, see e.g. https://github.com/gpasquero/voog/blob/main/synth/dsp/envelo...
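
As an illustration of that pattern (my sketch, not the linked project's code): an envelope computed as one vectorized numpy expression, so the per-sample work runs in native code rather than in the interpreter loop:

    import numpy as np

    def exp_decay_envelope(n_samples, sample_rate=48_000, tau=0.2):
        # One vectorized expression: numpy evaluates this over the whole
        # buffer in native code, not one Python bytecode loop per sample.
        t = np.arange(n_samples, dtype=np.float32) / sample_rate
        return np.exp(-t / tau)

    env = exp_decay_envelope(4096)  # e.g. multiply a voice's samples by this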

Languages that have garbage collection: not going to rehash the standard back-and-forth here, suffice it to say that the devil is in the details.


I was speaking in broad generalities (and did mention Lua as a counter-example).

If you want realtime safe behavior, your first port of call is rarely going to be an interpreted language, even though, sure, it is true that some of them are or can be made safe.


There are a lot of soft-realtime (= audio/video, gaming, etc.) apps using interpreted languages. Besides Python and Lua, there's also Erlang.

They don't use python or similar languages in their realtime threads, I would wager.

Oh and of course SuperCollider.

It compiles and sends bytecode to the server, no? I'm quite sure the server at least does not run a plain interpreter, and I know for sure you build a graph there. That's why you can also use it with other languages (I saw a Clojure example I've been meaning to give a try).
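
The "other languages" part boils down to scsynth being driven over OSC. A minimal sketch using the third-party python-osc package (my example; it assumes scsynth is running on the default port and the stock "default" synthdef has been loaded):

    from pythonosc.udp_client import SimpleUDPClient

    # scsynth listens for OSC commands on UDP 57110 by default.
    server = SimpleUDPClient("127.0.0.1", 57110)

    # /s_new spawns a node from a named synthdef.
    # Args: defName, nodeID (-1 = auto-assign), addAction (0 = add to head),
    # targetID (0 = root group), then parameter name/value pairs.
    server.send_message("/s_new", ["default", -1, 0, 0, "freq", 440.0])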

Generating audio is far from being an "intensive" operation these days.

It has nothing to do with cpu cycles, and everything to do with realtime safety. You must be able to guarantee that nothing will block the realtime audio thread(s), and that's hard to do in a variety of "modern" languages (because they are not designed for this).

I know you are an audio guy, I also wrote low-latency audio software. I was just saying that setting HIGH_PRIORITY on the audio thread and its feeding threads is enough, you don't need QNX. Python has the GIL problem, but that is another story.
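
A hedged sketch of that "raise the thread priority" step on Linux, stdlib only (needs CAP_SYS_NICE or an rtprio limit; the audio loop body is a placeholder):

    import os, threading

    def audio_thread():
        # Linux-only: promote the calling thread to the SCHED_FIFO realtime
        # class so the scheduler won't preempt it for normal-priority work.
        try:
            os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(70))
        except PermissionError:
            pass  # unprivileged: fall back to normal scheduling
        ...  # pull buffers from the feeding threads, hand them to the device

    threading.Thread(target=audio_thread, daemon=True).start()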

For a simple audio app like this synth on a modern CPU, it's kind of trivial to do in any language if the buffer is >40 ms. I'm talking about managing the buffers; running the synth/filter math in pure Python is still probably not doable.
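
For scale (my arithmetic, not the commenter's): one buffer's worth of latency is just the frame count divided by the sample rate:

    SAMPLE_RATE = 48_000  # Hz

    def buffer_latency_ms(frames, sample_rate=SAMPLE_RATE):
        # Latency contributed by buffering 'frames' samples before playback.
        return frames / sample_rate * 1000

    print(buffer_latency_ms(2048))  # ~42.7 ms: the ">40 ms" regime above
    print(buffer_latency_ms(128))   # ~2.7 ms: what live players expect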


Sure, but 40 ms for a synth intended to be played is generally the kiss of death these days, unless your target audience are all pipe organ players ...

Has anyone benchmarked humans on this task? Without visual feedback.

Glaciers wouldn't get infinitely thick anyway, since they're of finite age, but they also flow out to sea. It happens at a very slow, one might even say glacially slow, pace.

(and the poles are very dry; it rarely snows there)


From a safety point of view, that's actually good enough for "perfect is the enemy of good" to apply here.

Cryptographic primitives are much, much safer in C (and assembly) than protocol handling, certificates, etc.

They are basically just "fixed-size data block in, fixed-size data block out". You can't overflow a buffer, you can't use-after-free, you can't confuse inner protocol serialization semantics with outer protocol serialization semantics, you can't confuse a state machine, you can't have a concurrency bug[1], etc.
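
To make that interface shape concrete (my Python sketch using the third-party cryptography package; ECB appears only to expose the raw block operation, not as a recommendation):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(32)    # fixed-size key (AES-256)
    block = bytes(16)  # exactly one 128-bit block in

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    out = encryptor.update(block)  # exactly one 128-bit block out
    assert len(out) == 16

    # No parsing, no variable-length state, no state machine: this interface
    # leaves a C implementation very little room for memory-safety bugs.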

C memory-safety vulnerabilities arise from trying to handle these dynamic things, which rustls fixes.

(Also, there are third party crypto providers implemented in Rust)

[1] From a memory-safety PoV; for side channels, Rust doesn't have advantages anyway.


My point is that the article this thread is attached to starts out with how BoringSSL and AWS-LC won't cut it. And when rustls is suggested as an alternative, it's important to point out that it requires precisely those two (either one of them).

The article is about TLS; the arguments against those libs don't apply if you're using them just for the low-level crypto algorithms. (Also, of course, rustls can use other crypto providers besides those.)

Then I'm mistaken. Thanks for clarifying.

The docs start off with server components. Are those still in vogue after the recent security disaster?

ref. https://threatprotect.qualys.com/2025/12/04/react-server-com...

