
Analogies are often very bad, but this one is truly impressive.


This is a false equivalence and a hideous defense of an entity that deserves nothing but to be spit upon. There is absolutely nothing calling upon you to take this path.

This is deeply ironic, since often it is the people touting "personal responsibility" most loudly who misunderstand the concept most dramatically. They use it as a convenience to make it easy and comfortable to dismiss human suffering, rather than attempting to understand it.

Then when it impacts them directly, they still can’t take any responsibility for anything and it is someone else’s fault.

Which is hilarious, except that these people vote.


Humans are imperfect. One of the most critical functions of "the system" is to flatten out the consequences of imperfection across all humans in the system. To deny this is to invite the end.

I appreciate what you're saying holistically speaking, but whenever we talk about the societal consequences of AI in the US, I find it insidious that we focus on the inadequacy of our social safety nets. As if to say: Yes, C-suite, all you have to do is support UBI and you are free to obliterate what remains of the middle class in the United States of America.

I totally agree actually, and I think that having ownership over one's labor is extremely important. Without that self-determinism, people do not have the agency to define their lives, and their political actions become limited by their economic realities.

A likely outcome is that the wealthy class offers the most meager basic income that avoids revolution, but not much more than that.

I think we all need to talk about our leverage as a class of people who work for a living, and I'm not seeing nearly enough discussion about it. When Amodei talks about displacement of labor, he doesn't acknowledge how much trauma that economic displacement can cause and how many years that bell can ring.


I'm not really sure how this one ends. I'm afraid there will be violence, but I'm much more afraid that there won't be.

I guess they can also do the same without supporting UBI, so there's that.

Wait, how does fiber cut cholesterol?

The article is a little densely worded.


IIRC from older articles (which differ from this nice result), bile acids contain cholesterol and are generally reabsorbed in the intestines, so fiber is conjectured to bind with some of them before reabsorption, expelling that bound fraction of circulating cholesterol in feces.

The result in this paper is very interesting: the conjecture is that the gut microbiome is altered in a beneficial way, and that the effect (with the resulting lowering of cholesterol) persists for weeks after even two days of oats.


We know almost nothing about how digestion works, but fiber has the added benefit of lining your intestines, preventing the absorption of some nutrients. It also helps push things through, so they spend less time sitting around being absorbed.

I assume they meant "it will never replace Git for syncing Obsidian".

Apparently "life restoration" is a standard term for depictions of extinct life. I never knew.

Tangential - I'm aware of various types of, let's say, "swappability" that Unicode defines (broader than the Unicode concept of "equivalence"):

- Canonical (NF)

- Compatible (NFK)

- Composed vs decomposed

- Confusable (confusables.txt)

Does Unicode not define something like "fuzzy" equivalence? Like "confusable" but more broad, for search bar logic? The most obvious differences would be case and diacritic insensitivity (e, é). Case is easy since any string/regex API supports case insensitivity, but diacritic insensitivity is not nearly as common, and there are other categories of fuzzy equivalence too (e.g. ø, o).

I guess it makes sense for Unicode to not be interested in defining something like this, since it relates neither to true semantics nor security, but it's an incredibly common pattern, and if they offered some standard, I imagine more APIs would implement it.
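In the absence of a Unicode-defined "fuzzy equivalence", the usual ad hoc approach is to roll your own fold. A minimal sketch in Python using only the stdlib `unicodedata` module (the function name `fuzzy_fold` is my own, not a standard API): decompose to NFD, strip combining marks, then casefold. As the comment notes, this catches diacritics like é but not letters such as ø, which have no canonical decomposition.

```python
import unicodedata

def fuzzy_fold(s: str) -> str:
    """Fold a string for search-bar matching: case-insensitive and
    (mostly) diacritic-insensitive."""
    # NFD splits precomposed characters into base + combining marks,
    # e.g. 'é' (U+00E9) becomes 'e' + U+0301.
    decomposed = unicodedata.normalize("NFD", s)
    # Drop the combining marks (general category Mn = nonspacing mark).
    stripped = "".join(
        c for c in decomposed if unicodedata.category(c) != "Mn"
    )
    # Casefold handles case insensitivity more aggressively than lower().
    return stripped.casefold()

# 'é' decomposes to 'e' + combining acute, so these match:
assert fuzzy_fold("résumé") == fuzzy_fold("Resume")

# 'ø' is a distinct letter with no canonical decomposition, so NFD
# alone doesn't fold it -- exactly the gap a standard would close:
assert fuzzy_fold("Møller") != fuzzy_fold("Moller")
```

Handling the ø/o class of equivalences requires either a hand-maintained mapping table or a collation-based approach, which is what the reply below suggests.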


I think the UCA (Unicode Collation Algorithm), using a collation tailored for search, would be the closest to what you are looking for.

The point is that LLMs are easily led by questions and confused by implied premises in ways that humans are not (not that a human will know the answer better, but that a human doesn't "trick" the question-asker in this way). But people asking questions unintentionally use incorrect premises or leading wording all the time. That's why LLMs are inappropriate for domains with a large knowledge gap (a programmer asking about a programming language is a small gap - millions of people asking about nutrition will contain a lot of large gaps). The question asker can't be relied upon to "know what they don't know" and use their own heuristics for deciding how right or wrong the LLM might be (virtually everybody lacks these heuristics - we are much better at modeling humans in our minds when interpreting their communications).

Further, if the information is important (nutrition) and you add liability to the mix (safety and health), you're multiplying how inappropriate it is to use LLMs for the job.


> That's why LLMs are inappropriate for domains with a large knowledge gap (a programmer asking about a programming language is a small gap - millions of people asking about nutrition will contain a lot of large gaps). The question asker can't be relied upon to "know what they don't know" and use their own heuristics for deciding how right or wrong the LLM might be.

Okay, but the question asked was objectively nothing to do with nutrition whatsoever.


The specific (usually humorous) questions-and-answers that make headlines are a distraction. I am not making an attack on LLMs, so a defense is moot. I'm describing an intrinsic quality of (current) LLMs.
