If you look a little more closely you'll see their current project is to establish the "major questions doctrine," which ultimately reduces executive power by stopping Congress from giving it all to the executive. It looks pro-POTUS when it reduces the power of executive agencies, and it looks anti-POTUS when it reduces the power of executive orders. It's really about resetting what powers Congress can delegate.
It is not. The conservative justices are working to create an imperial presidency with no checks, except on major economic issues that threaten to harm them directly.
And even this ruling had three of them dissenting, arguing the tariffs should stand.
This is true; there is additionally a valid argument that there is security benefit to locking down the bootloader. I don’t like locked down bootloaders, but I get the argument.
Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.
It'd be different if one was signing up to an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.
If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is too unconfident but is super smart at math. For many people LLMs are smarter than any friend they know, especially at K-12 level.
You can make the warning more shrill but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math / medical / legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.
This effect may force companies to simply ban chatbots from certain conversations entirely.
The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.
I don't think that it's a good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.
Alternately, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die with no warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.
It's possible (and in fact the law) that the journalist against whom a search warrant is issued is suspected of aiding in the leak or committing a crime, though. I don't think we yet know that she's not in that category; only that she claims that she was told that she wasn't the focus of the probe and was not currently formally accused of a crime.
The article you linked shows a 12-13% autism-positive rate over N≈100 cases in the UK - and, at least in the free abstract, it doesn't distinguish between minor, moderate, and severe cases, or report comorbidities within that population.
I agree that we should be kind to individuals and that understanding an individual's problems can help with that. That said, this paper does not appear to provide convincing evidence that autism is a major contributor to homelessness.
It looks like it's a third-party UI, her Mastodon client, using the description metadata in a way that kind of makes it look like that metadata is part of the post.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
Thanks for the explanation, that makes a lot of sense. I'll bet that when it's not a sensitive topic, this totally goes unnoticed by a lot of users. Frustratingly, I would imagine that the response from most people would just be that the LLM summarizations / metadata tagging should be censored in "sensitive cases," but will otherwise be accepted by the user base.
If anything there's an interesting angle in the facts of this story about a new form of "mansplaining," but it's the algorithm doing "robosplaining" for the human race.