I think the most valuable thing here is not jumping to negative assumptions about people, something I wish the article followed more closely in its other points. Virtually anyone who has a very different perspective from the group will face friction, and handling that friction gracefully doesn't come naturally to most people. People can get stuck in a pattern of handling the friction poorly, but the group as a whole also has an opportunity for grace and understanding that can defuse the problem, if that is something it values.
I'm someone who is good in those situations, and what I've learned is that no matter how much you disagree, there's always something you can agree on. If you're stuck in disagreement, zoom out and try to move back to a position you both can agree on.
It's also important not to compromise on values you find personally fundamental for the sake of "finding common ground". How far to go depends on the matter being discussed. Assume good faith, attempt to find common understanding by zooming out, but stand firm once you have zoomed out as far as you feel comfortable. If you push past that, you run the risk of validating insane or dangerous behavior or opinions.
When I say find common ground, I mean things that you both already agree on, e.g. it's bad to kill people, it's good to help people in need.
It wasn't my intention to advocate for 'compromising on values'; rather, I think the best way to conduct any discussion is to be honest, and that starts with being honest about your values.
I think the whole point of my method is to identify who is compromising their values. For example, someone who agrees that "it's good to help people" but then disagrees with socialized healthcare shows that somewhere on the imaginary line between helping people and socialized healthcare, that person flips their opinion, which is incredibly helpful information in a debate.
Yes, this. Zooming out doesn't mean moving away from my values, but moving away from the disagreement, toward facets that we agree on. Then build rapport on that, and figure out what causes the difference in opinion.
I'm out of the loop on Claude, hasn't it always been possible to use the Anthropic API with a tool like OpenClaw, paying per request? Is this limitation just for using your monthly subscription account?
I find it a little bizarre that people have this expectation. You can still pay for compute and use it the way you want by paying for the product that actually fits that use. Subscription products like this are not marketed or intended as API access, but Anthropic offers API access alongside them if that's what you want. I'm still not entirely clear why people insist on using their subscription this way, so let me know if I'm missing something.
> I find it a little bizarre that people have this expectation.
Well, enough people complained that Anthropic reversed their stance. Additionally, their primary competitor doesn't have any compute restrictions, which should help clarify why this decision was made.
As someone who has been building ML/AI tools (@ MS & Apple) for almost 25 years, I can say that much of the value of the underlying model comes from the harness. Why shouldn't I be able to use the exact same compute with my own bespoke harness when the compute cost is the same?
The Claude Code team continues to push out half-baked features that literally hamper my ability to use their tools.
If I'm paying $200/month for compute, I should be able to use it however I like.
I'm inclined to agree at that price. Is there something the $200/month subscription gets you that the API doesn't? I still don't know why creating an API key and loading it up with $200 every month is an unfavorable option. Do you expect it would cost more? You might even end up paying less, especially if you can find ways to make it more efficient given that you are using a bespoke harness. I still feel like I'm missing something. If the API cost a lot more for the same amount of usage, that would make sense to me, but that has never been my experience with other providers; I don't have experience with the Anthropic API specifically.
> You don't see how ridiculous that is? No other SOTA model company has these restrictions
You cannot use a ChatGPT subscription with a CLI tool; if you want to build your own harness, you have to go through the API. I'm unsure about Gemini. Claude Code seems to be a special case: because it is itself a CLI tool, it is much easier to build a custom harness around, but it's not surprising or unusual that it would have restrictions.
Subscription products normally have terms of use that limit how you use them, shaped by the infrastructure they rely on. A harness is often tuned to usage that fits the constraints of the service, and the backend that supports the tool is engineered for that usage. A custom harness could easily bypass that tuning and become unsustainable.
On top of that, the API tends to be a much more flexible product to use directly. I can understand why you'd have more expectations paying for the Max product, but this doesn't sound unusual or unreasonable to me.
> You cannot use a ChatGPT subscription with a CLI tool
Okay, I'm done here. You obviously have no idea how this works (I have a ChatGPT subscription that I use with Codex).
I know you're new to HN, but when someone says they've literally built subscription tools at both Microsoft and Apple for over 25 years, you might want to stop and reconsider whether you're missing something. You are.
It sounds like you might be overthinking it. Slop is pretty noticeable at a glance, and it doesn't really matter whether it's AI slop or human slop. On the other hand, I've probably enjoyed an article or two that was made partially or entirely by AI, and I'm not sure what the downside is.
Honestly, that's the only way I've ever been able to trust the output. Once you go beyond the scope of one file it really degrades. But within a single file I've seen amazing results.
Are you not supposed to include as many _preconditions_ (in the form of test cases, or function constraints like the "assert" macro in C) as you can in your prompt describing the input for a particular program file, before asking the AI to analyze it?
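For example, something like this sketch (the function name and bounds here are hypothetical, just to illustrate the kind of constraints I mean):

    #include <assert.h>

    /* Hypothetical parser entry point. The asserts spell out the
       preconditions you'd also state in the prompt before asking
       the AI to analyze the file. */
    int parse_record(const char *buf, int len) {
        assert(buf != NULL);   /* input buffer must exist */
        assert(len > 0);       /* at least one byte of input */
        assert(len <= 4096);   /* records never exceed one page */
        /* ... actual parsing would go here ... */
        return 0;
    }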
Please read my reply to one of the authors of Angr, a binary analysis tool. Here is an excerpt:
> A "brute-force" algorithm (an exhaustive search, in other words) is the easiest way to find an answer to almost any engineering problem. But it often must be optimized before being computed. The optimization may be done by an AI agent based on neural nets, or a learning Mealy machine.
> Isn't it interesting which is more efficient: neural nets or a learning Mealy machine?
...Then I describe what a learning Mealy machine is. And then:
> Some interesting engineering (and scientific) problems are:
> - finding an input for a program that hacks it;
> - finding machine code for a bipedal robot's controller that makes it able to work in factories.
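(For context, since the description itself isn't excerpted: a Mealy machine is a finite-state machine whose output depends on both the current state and the current input. A toy sketch in C, with made-up transition tables; a learning variant would infer these tables from observed input/output traces instead of hard-coding them:)

    #include <stdio.h>

    #define STATES 2
    #define INPUTS 2

    /* Transition and output tables indexed by [state][input]. */
    static const int next_state[STATES][INPUTS] = {{0, 1}, {1, 0}};
    static const int out_tab[STATES][INPUTS]    = {{0, 0}, {0, 1}};

    int main(void) {
        int inputs[] = {1, 1, 0, 1};
        int s = 0;  /* start state */
        for (int i = 0; i < 4; i++) {
            printf("in=%d state=%d out=%d\n",
                   inputs[i], s, out_tab[s][inputs[i]]);
            s = next_state[s][inputs[i]];
        }
        return 0;
    }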
I'm positive there are use-cases for this tool, but after several years of working with LLMs, I find hallucinations have become a non-issue. You start to get a sense of the likely gaps in their knowledge, just as you would with a person.
Questions about application settings, for example: where to find a particular setting in a particular app. The LLM has a sense of how application settings are generally structured, but the answer is almost never spot on. I just prefix these questions with "do a web search" or provide a link to the documentation, and that is usually enough to get a decent response along with citations.
Piloting an OSS project is all the work of any other business and more. It is a more challenging path, and all your decisions are out in the open for scrutiny. He made a decision to put his family first, and I respect that. There does seem to be an alternative to selling out: entrusting the project to other people committed to OSS. The beauty of OSS is that this path is still available.
> We have libraries like SQLite, which is a single .c file that you drag into your project and it immediately does a ton of incredibly useful, non-trivial work for you, while barely increasing your executable's size.
I'm not sure why you believe this is more secure than a package manager. At least with a package manager there is an opportunity for vetting. The size claim is also trivially misleading: if your executable depends on it, it increases its effective size.
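To make that concrete, the drop-in workflow the parent describes is roughly the sketch below (compiled as cc main.c sqlite3.c -o demo, using the real amalgamation files sqlite3.c/sqlite3.h): everything in sqlite3.c that the linker keeps ends up in your binary, which is exactly the "effective size" point.

    /* main.c -- compiled together with the sqlite3.c amalgamation */
    #include <stdio.h>
    #include "sqlite3.h"

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        sqlite3_exec(db, "CREATE TABLE t(x);", NULL, NULL, NULL);
        sqlite3_close(db);
        return 0;
    }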
The often-repeated "wisdom of the crowds" justification is misapplied to online betting markets. Like people, crowds can either be wise or unwise depending on the situation. Famous experiments like guessing how many gumballs are in a jar work because each person who can see the jar has a source of valid information, and in aggregate that can be surprisingly accurate.
You can't assume that the majority of individuals participating in betting markets have a source of valid information. Given the destructiveness of these markets to both individuals and society, the aggregate wisdom of the individuals participating in these markets is highly doubtful. Any meager value above more traditional forecasting does not justify the cost, corruption and a loss of trust in institutions.
Please show the realized dollar benefit to society versus, in response to the OP's statement that the results don't "justify the cost, corruption and a loss of trust in institutions", a breakdown of the costs/negatives to society that result from those factors.
This isn't big oil (yet); you can't just externalize all the downside and say the product is a net benefit.
Pish tosh, my dear sir, it's simply common-sense that there are oodles of people out there with secrets that would be completely ethical to distribute and would undeniably better all humankind, but they're sitting on them purely because they haven't figured out how to make a profit from it. /s
In other words, the overlap between secrets that are all three of the following is too small to justify the idea that prediction markets are a net benefit by default:
1. valuable
2. not already known
3. lacking any existing reward mechanism (e.g. patents)