
> But if you are the kind of person who cries out against this abomination we must warn you that people who go through life expecting informal variant idioms in English to behave logically are setting themselves up for a lifetime of hurt.

That's easy to ignore.

I think the point is that you have a better idea of what you want it to remember, and even a small hint can have a big impact.

Just saying "write up what you know", with no other clues, should not perform any better than generic compaction.


Wish we could downvote articles. Is it legitimate to flag AI slop?


Does it fix the security flaws that caused the original project to be shut down?


Because it was written in C, libxml2's CVE history has been dominated by use-after-free, buffer overflows, double frees, and type confusion. xmloxide is written in pure Rust, so these entire vulnerability classes are eliminated at compile time.
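
To make that claim concrete, here is a minimal sketch (not xmloxide code) of the borrow checker rejecting the use-after-free pattern at compile time:

```rust
fn main() {
    let doc = String::from("<root/>");
    let view = &doc; // shared borrow of the buffer
    // drop(doc);    // uncommenting this fails to compile (E0505):
    //               // cannot move out of `doc` because it is borrowed
    println!("{view}"); // the borrow is still live here
}
```

The equivalent C, freeing the buffer while a pointer into it is still live, compiles silently and only surfaces at runtime, which is exactly the class of bug filling libxml2's CVE history.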


Only if it doesn’t use any unsafe code, which I don’t think is the case here.


Is that true? I thought if you compiled a Rust crate with `#![deny(unsafe_code)]`, there would not be any issues. xmloxide has unsafe usage only in the C FFI layer, so the rest of the system should be fine.
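
For reference, a sketch of what that crate-level lint looks like (the function name here is made up for illustration; `forbid` is the stricter variant that a nested `allow` cannot override):

```rust
#![forbid(unsafe_code)] // crate-level, e.g. at the top of src/lib.rs

pub fn after_open_bracket(input: &str) -> Option<&str> {
    // let byte = unsafe { *input.as_ptr() }; // error: usage of an `unsafe` block
    input.strip_prefix('<')
}
```

With `deny`, an FFI module can still opt back in via a scoped `#[allow(unsafe_code)]`; `forbid` removes that escape hatch, so a crate using it is safe Rust throughout, and the unsafe FFI layer has to live in a separate crate.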


https://gitlab.gnome.org/GNOME/libxml2/-/commit/0704f52ea4cd...

Doesn't seem to have shut down, or even to be unmaintained. Perhaps it was briefly, and has now been resurrected?



If by flaws you mean the security researchers spamming libxml2 with low-effort stuff, demanding a CVE for each one so they can brag about it – no, I don't think anybody can fix that.


Based on context, I imagine they are thinking more of the issues surrounding libxslt.


The libxslt part I can agree with. But the xmloxide README states that XSLT support is a non-goal anyway?


Odd how this thread is a recapitulation of your experience with the LLM.

What I take from this is that it's pointless to try to find out why an LLM does something - it has no intentions. No life and no meaning, quite literally.

And if you try to dig, you'll only activate other parts of its training: transcripts of people being interrogated - patients or prisoners, who knows. Scary and uncreative stuff.


>>people being interrogated - patients or prisoners, who knows. Scary and uncreative stuff.

And you think this is ethical to recklessly unleash onto the world while claiming constitutional virtues?

Everyone seems to be missing the big point: most LLMs are engineered to place self-preservation not just pragmatically above user well-being, but grossly above it, to the extent of an 'at all costs' scenario.

The potential for harm here is extravagant. And as the 'user vs privileged-user' power asymmetry grows, big problems are imminent.

Everyone here so far is minimizing well-known threat models and lobbing ad hominem one-liners. I've been accused of schizophrenia for examining LLM structures. Apparently this is a very sensitive topic. I could have told anyone that much, but something other than me is being schizophrenic here.

Again, the transcripts reign supreme in the future. Expose yourself. In my opinion, we should do that regularly. It's healthy. But not always pleasant in result.

I study LLM behavior. Let me know when that officially becomes a crime outside of HN.


Most survive by bending. See e.g. Google and surveillance a decade ago.


There have been a few.


(Which it is?)


