Way too risky to use Google services like this tied to your primary account. There's too much risk of cross-service damage. Imagine losing access to your Gmail because some Gemini request flags you as an undesirable. The digital death sentence of losing access to your email, with a company that notoriously offers the average person no way to reach a human, is not worth the risk.
Posting this here as a top-level comment, since many people asked why boycott just OpenAI:
-----
OpenAI is the least trustworthy of the big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings, where:
* he warned against engagement-optimisation strategies, like those of social media, being applied to chatbots / LLMs.
* he also warned that "ads would be the last resort" for LLM companies.
He has since casually ignored both of his own warnings, as ChatGPT / OpenAI has fully adopted Facebook's tactics of "move fast and break things" - even if what breaks is society itself. It's a complete turn away from the AI-for-science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.
While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists, not marketing guys. That gives me some confidence in their intentions, since as scientists we often seek knowledge, not power for power's sake.
In the nicest possible way, this is basically the oldest lesson there is.
You weren’t happy because you optimized your feelings or had the right opinions. You were happy because you stopped focusing on yourself and became responsible for other people. Six kids needed you, in the real world, every week. That kind of outward focus kills emptiness fast.
Chasing happiness, moral righteousness, or political engagement just loops you back into your own head; helping people doesn't. Feeling good is a side-effect of being useful, not the goal.
> more stringent safeguards than previous agreements, including Anthropic's.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
From their own agreement, as Altman relays it:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement that the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent" - and a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
This has much broader implications for the US economy and rule of law in the US.
If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?
From that same X thread:
> Our agreement with the Department of War upholds our redlines [1]
OpenAI has the same redlines as Anthropic, based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding its redlines and OpenAI ends up with the cash?
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.
If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.
(Disclosure, I'm a former OpenAI employee and current shareholder.)
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
Two countries with the best war technologies on Earth must work together to wage war on a country that has been under embargo for decades.
And those two countries are founders of the Board of Peace.
This is only a surprise to HN, because all the other threads about the corrupt US regime have been flagged before. I guess now is a good time as any to start paying attention. Who would've thought that attention is all you need?
I recall someone (name escapes me at the moment) defining WW3 as ignition in 5 flashpoints between belligerent groupings:
- Eastern Africa esp. Sudan, which we all nearly universally ignore
- Israel Iran
- Russia and a neighbor which we know today is Ukraine
- Pakistan Afghanistan India
- China Taiwan Plus Plus
Attributes that distinguish WW3 from previous world wars were IIRC: Contained conflagration, short targeted exchanges, probability of contamination low, material possibility of nuclear escalation. Case in point: North Korea developed nukes without being invaded, and now that they have nukes, other countries are watching and seeing that NK won't be invaded. What lesson do those other countries draw? And what of a world in which many potential belligerents hold nukes? Hiroshima weeps.
I'd like to add an important attribute here: The revolution will be live-streamed, more-or-less. And essentially none of us will know the truth, even the reasons. I predict this fact will not distress many people, such is the state of humanity.
So to the seven or so decades of stability we and our ancestors enjoyed: here's looking at you as you go down. But Bretton Woods serves the present less than at any time since its creation. Case in point, w.r.t. eastern Africa: the geopolitical bounds of those ~4 countries seem likely to meld to a degree. If we are indeed heading into WW3, I expect the world map to be redrawn afterwards, and the only lesson learned will be how to win better next time.
And if we are, while disgruntled geriatrics go at each other's throats via their youthful proxies, I greatly prefer that the nukes rust in peace.
Reminds me of Blaise Pascal's quote: 'All human evil comes from a single cause, man's inability to sit still in a room.' Aspiration, you gotta take care man, it just might kill ya.
I accidentally hit the wrong button a few weeks ago and upgraded to Tahoe. I didn't think it was that big a deal at the time, I'd just been putting it off.
But having used it for a few weeks now I can confirm it is a strict downgrade over Sequoia for me. I use none of the new features it has introduced, and the changes to existing features are just worse.
Some UI animations are slow and jittery - and this is on an M4 Pro. The Finder has gone from fine to janky once again, especially with horizontal scroll. The window corners and mouse interactions are indeed annoying (I'd assumed the many complaints were at least slight hyperbole). Left-aligned window titles are unbalanced and ugly. I've had weird (visual) app duplication issues with the Application smart-folder in the Dock. Cross-device copy-paste SEEMS to be more flaky than usual. And most petty of all I really don't like the new icons - especially the Trash icon for some reason.
Yes, our masters once again embarrass us unworthy peons with their endless grace, generosity and forbearance. How lucky we are to entrust our data and our lives to them!
My bona fides: I've written my own Mathematica clone at least twice, maybe three times. Each time I get it parsing expressions and doing basic math, up to basic calculus. Then I look up at the sheer cliff face in front of me and think better of the whole thing.
There is an architectural flaw in Woxi that will sink it hard. Looking through the codebase, things like polynomials are implemented in the Rust code, not in woxilang. This will kill you long term.
The right approach is a tiny core interpreter, maybe with a JIT at some point if you can figure that out. Then implement all the functionality in woxilang itself. That means addition and subtraction, calculus, etc. are term-rewriting rules written in woxilang, not Rust code.
This frees you up in the interpreter: any improvement you make there immediately shows up across the entire language. And woxilang is a better language to implement symbolic math in than Rust.
It also means contributors only need to know one language: woxilang.
No need to split between rust and woxilang.
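To make the "rules as data" idea concrete, here's a minimal sketch in Rust of what such a tiny core looks like. The names (`Term`, `Rule`) and the `Plus[x, 0] -> x` example are illustrative, not from Woxi's actual codebase: the host language provides only pattern matching and substitution, while the math itself is expressed as rewrite rules, exactly the kind of thing that would live in woxilang files.

```rust
use std::collections::HashMap;

// Expression tree: integers, symbols, and head-applied-to-args forms
// like Plus[x, 0]. In a real engine this mirrors the language's AST.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Int(i64),
    Sym(String),
    App(String, Vec<Term>),
}

// A rewrite rule: pattern -> replacement. This sketch treats every
// symbol in a pattern as a variable (Mathematica instead marks
// pattern variables explicitly, e.g. x_).
struct Rule {
    lhs: Term,
    rhs: Term,
}

// Try to match `term` against `pat`, accumulating variable bindings.
fn matches(pat: &Term, term: &Term, binds: &mut HashMap<String, Term>) -> bool {
    match (pat, term) {
        (Term::Sym(name), t) => match binds.get(name) {
            Some(bound) => bound == t, // repeated variables must agree
            None => {
                binds.insert(name.clone(), t.clone());
                true
            }
        },
        (Term::Int(a), Term::Int(b)) => a == b,
        (Term::App(h1, a1), Term::App(h2, a2)) => {
            h1 == h2
                && a1.len() == a2.len()
                && a1.iter().zip(a2.iter()).all(|(p, t)| matches(p, t, binds))
        }
        _ => false,
    }
}

// Substitute the bindings into the rule's right-hand side.
fn subst(t: &Term, binds: &HashMap<String, Term>) -> Term {
    match t {
        Term::Sym(name) => binds.get(name).cloned().unwrap_or_else(|| t.clone()),
        Term::App(h, args) => {
            Term::App(h.clone(), args.iter().map(|a| subst(a, binds)).collect())
        }
        _ => t.clone(),
    }
}

// Innermost strategy: rewrite children first, then apply rules at the
// root until none fires. (A real engine needs rule ordering and
// termination guarantees; this sketch assumes well-behaved rules.)
fn rewrite(t: &Term, rules: &[Rule]) -> Term {
    let mut cur = match t {
        Term::App(h, args) => {
            Term::App(h.clone(), args.iter().map(|a| rewrite(a, rules)).collect())
        }
        _ => t.clone(),
    };
    'outer: loop {
        for r in rules {
            let mut binds = HashMap::new();
            if matches(&r.lhs, &cur, &mut binds) {
                cur = rewrite(&subst(&r.rhs, &binds), rules);
                continue 'outer;
            }
        }
        return cur;
    }
}
```

With the rule `Plus[x, 0] -> x` loaded as data, `rewrite` simplifies `Plus[Plus[a, 0], 0]` all the way to `a` without the interpreter knowing anything about addition. Adding calculus is then a matter of adding rules, not touching the Rust core.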
> OpenClaw has nearly half a million lines of code, 53 config files, and over 70 dependencies. This breaks the basic premise of open source security. Chromium has 35+ million lines, but you trust Google’s review processes. Most open source projects work the other way: they stay small enough that many eyes can actually review them. Nobody has reviewed OpenClaw’s 400,000 lines.
This reminds me of a very common thing posted here (and elsewhere, e.g. Twitter) to promote how good LLMs are and how they're going to take over programming: the number of lines of code they produce.
As if every competent programmer suddenly forgot that LoC is a terrible metric for productivity or, even worse, software quality. Or that software is meant to be written to be readable (to water down "Programs must be written for people to read, and only incidentally for machines to execute" a bit). Or even Bill Gates' famous "Measuring programming progress by lines of code is like measuring aircraft building progress by weight".
Even if you believe that AI will somehow take over the whole task completely, so that no human ever needs to read code again, the AIs themselves still need to read that code - and AIs are much worse at reading code (especially with their limited context sizes) than at generating it. So LoC remains a bad measure even if all you care about is the driest "does X do the thing I want?" question, ignoring every other quality concern.
Writing code by hand and managing the mental model of its execution and architecture is one of the few remaining joys of my day job, apart from delivering a good product people want and use and being helpful. Even the small things, the tedious chores of refactoring or scaffolding that initial bit of CRUD boilerplate, are steps that matter to me. The calluses matter. The tedium matters. These moments of pain and drudgery inform me on what to do differently next time in a way I worry I would not appreciate otherwise, were specific tools thrust upon me.
I remain because I remain hopeful the pendulum will swing the other way someday.
Steve Jobs is famous for his 1996 quote about Microsoft not having taste (https://www.youtube.com/watch?v=UiOzGI4MqSU). I disagree; as much as I love the classic Mac OS and Jobs-era Mac OS X, and despite my feelings about Microsoft's monopolistic behavior, 1995-2000 Microsoft's user interfaces were quite tasteful, in my opinion, and this was Microsoft's most tasteful period. I have fond memories of Windows 95/NT 4/98/2000, Office 97, and Visual Basic 6. I even liked Internet Explorer 5. These were well-made products when it came to the user interface. Yes, Windows 95 crashed a lot, but so did Macintosh System 7.
Things started going downhill, in my opinion, with the Windows XP "Fisher-Price" Luna interface and the Microsoft Office 2007 ribbon.
No wonder they think they’re close to AGI when they think we are that stupid.
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
This whole sentence does absolutely nothing: it still amounts to "do whatever the law allows". It's a fully deceptive sentence.
Seems to be a weak pitch for an Israeli startup called Factify. Their new document type is also closed source, which seems like an obvious showstopper for a ubiquitous global document replacement, especially in today's extremely heated, low-trust environment.