
> Supposedly OpenAI had the same terms as Anthropic (according to SamA).

He said human responsibility. Anthropic said human in the loop.

And Anthropic refused to say any lawful purpose would be allowed reportedly.



> The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that.

You learned this where?


I’m reading between the lines of the involved parties’ various statements, but there’s also this: https://x.com/UnderSecretaryF/status/2027594072811098230

> I’m reading between the lines of the involved parties’ various statements

You should have said this.

> https://x.com/UnderSecretaryF/status/2027594072811098230

Thank you.


It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those. And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.

From the referenced tweet:

> who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

And the US government should have precisely none of that, regardless of whether they’re red or blue.


> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

I don't think that's the case. Amodei is worried that AI is extraordinarily capable, and that our current system of checks and balances is not yet adequate to set the proper constraints so that the law is correctly enforced. Here's an excerpt from his statement [1]:

> Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

Let's do this thought exercise: how long would it take you, using Claude Code, to write some code to crawl the internet and find all the postings of the HN user nandomrumber under all their names on various social media, and create a profile with the top 10 ways that user can be legally harassed? Of course, Claude would refuse to do this, because of its guardrails, but what if Claude didn't refuse?

[1] https://www.anthropic.com/news/statement-department-of-war


And that’s where the authoritarian in you is shining through.

You see, Obama droned more combatants than anyone before or after him, but he always left a legal paper trail and went by the book (except perhaps in some cases; search for Anwar al-Awlaki).

One can argue whether the rules and laws (secret courts, proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.

You folks just blurt “me ne frego” (“I don’t care”) like some latter-day Mussolini and think you’re being patriotic.

SMH


> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.

> And the US government should have precisely none of that, regardless of whether they’re red or blue.

This is a pretty hot take. Anthropic says "You can't break the law and kill people or do mass surveillance with our technology," and your response is, in effect, "fuck that, the government should break whatever laws and kill whoever it pleases."

I hope you A: aren't a U.S. citizen, and B: don't vote.

If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally, to violate my neighbors' rights, you can be damn sure I'm going to stop selling the government my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms; instead of acting like adults and letting them, the government decided to double down on being the dumbest people in the room, act like toddlers, and throw a massive fit about the whole thing.


> It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those.

No. Altman said human responsibility. Anthropic said human in the loop.

> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.

All but confirmed was not confirmed.


I don’t understand your first comment. At that point, Altman’s tweet didn’t exist yet, and is immaterial to the reading of Anthropic’s and Hegseth’s statements.

To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.

We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.



The Department of Defense demanded contracts that would allow any lawful use. Anthropic refused to allow mass domestic surveillance, or fully autonomous weapons until such systems could be made more reliable.[1]

[1] https://www.anthropic.com/news/statement-department-of-war


Anthropic's statement specified mass domestic surveillance. Not all domestic surveillance. And fully autonomous weapons with today's systems. Not automatic targeting. And not never.

Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.[1]

[1] https://news.ycombinator.com/newsguidelines.html


My apologies, I looked at the guidelines.

> Given that Airport security wasn't implemented until after (and because of) 9/11

You were confused because this belief was false.


> Isn't "non-employing business" a euphemism of sorts for "Uber driver"?

It was older than Uber. But it includes Uber drivers now.


> I'm really surprised Substack thinks Australia's social media laws apply to them.

Why would they not?


Because the laws, as I understood them, apply to platforms that enable social interaction with strangers. (Watching YouTube is ok - logging into YouTube is not.) Whereas I understand Substack to be essentially a one-way channel.

Substack has comments.

The law restricted accounts. Not reading. But this was not what you said surprised you.


Yeah but I was unable to even read the article.
