Hacker News | EgregiousCube's comments

Yeah.

If you look a little more closely, you'll see their current project is to establish the "major questions doctrine," which ultimately reduces executive power by stopping Congress from giving it all to the executive. It looks pro-POTUS when it reduces the power of executive agencies, and it looks anti-POTUS when it reduces the power of executive orders. It's really about resetting what powers Congress can delegate.


If so, that's great. Congress has long been too complacent, content to wait for its party's turn to exploit presidential overreach.


It is not. The conservative justices are working to create an imperial presidency with no checks, except on major economic issues that threaten to harm themselves.

And even in this ruling, three of them dissented, arguing the tariffs should stand.


This is true; there is additionally a valid argument that there is security benefit to locking down the bootloader. I don’t like locked down bootloaders, but I get the argument.


Yes, locked bootloaders secure the profits of the manufacturers who want to run crapware on your device for their benefit.

The hardware is theoretically yours, but they won't allow you to use it the way you want. It's shocking.


He did change it when paraphrasing, just now :-)

I'm sure it'll be paraphrased to another company in another 30 years.


Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.

It'd be different if one was signing up to an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.


> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is too modest but is super smart at math. For many people, LLMs are smarter than any friend they know, especially at the K-12 level.

You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.

This effect may force companies to simply ban chatbots from certain conversations.


The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.

I don't think that it's a good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.


Alternately, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die with no warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.


It's possible (and in fact required by law) that a journalist against whom a search warrant is issued is suspected of aiding in the leak or committing a crime, though. I don't think we yet know that she's not in that category; only that she claims she was told she wasn't the focus of the probe and was not currently formally accused of a crime.


Why would paying everyone $300M across the board be healthier than using it as a tool to (attempt to) attract the best of the best?


The article you linked shows a 12-13% autism-positive rate over N~100 cases in the UK, and it doesn't distinguish, at least in the free abstract, between minor/moderate/severe cases or comorbidities among that population.

I agree that we should be kind to individuals and that understanding an individual's problems can help with that. That said, this paper does not appear to provide convincing evidence that autism is a major contributor to homelessness.


I was pretty careful not to say that autism causes homelessness; rather, that a significant portion of the homeless have autism.

The abstract says the same thing.


It looks like it's a third-party UI, her Mastodon client, using the description metadata in a way that makes that metadata look like part of the post.

Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.


Thanks for the explanation, that makes a lot of sense. I'll bet that when it's not a sensitive topic, this totally goes unnoticed by a lot of users. Frustratingly, I would imagine that the response from most people would just be that the LLM summarizations / metadata tagging should be censored in "sensitive cases," but will otherwise be accepted by the user base.


If anything there's an interesting angle in the facts of this story about a new form of "mansplaining," but it's the algorithm doing "robosplaining" for the human race.


But what was the SLA?

