HenriNext's comments

Having 4 separate limits that are all opaque and can suddenly interrupt work is not OK.

We don't know what the limits are, what conditions change them dynamically, and we cannot monitor our usage against them.

1. 5-hour limit

2. Overall weekly limit

3. Opus weekly limit

4. Monthly limit on the number of 5-hour sessions


Claude Code can already show diffs in JetBrains IDEs and VSCode (the '/ide' command connects the CLI/TUI to a plugin/extension running on the IDE side).

It can also access the IDEs' real-time errors and warnings, not just compile output ('ideDiagnostics' tool), see your active editor selection, cursor position, etc.


- Forking VSCode is very easy; you can do it in 1 hour.

- Anthropic doesn't use the inputs for training.

- Cursor doesn't have $900M ARR. That was the raise. Their ARR is ~$500M [1].

- Claude Code already supports the niceties, including "add selection to chat", accessing the IDE's realtime warnings and errors (built-in tool 'ideDiagnostics'), and using the IDE's native diff viewer for reviewing edits.

[1] https://techcrunch.com/2025/06/05/cursors-anysphere-nabs-9-9...


The cost of the fork isn't creating it, it's maintaining it. But maybe AI could help :/


The cost of a VSCode fork is that Microsoft has restricted the extension marketplace for forks. You have to maintain a separate one; that is the real dealbreaker.



Eclipse maintains a public repo.


Forking Linux is very easy; you can do it in 1 hour.


Thanks, one more clarification please. The heading of point #3 seems to mention Google Workspace: "3. Login with Google (for Workspace or Licensed Code Assist users)". But the text content only talks about Code Assist: "For users of Standard or Enterprise edition of Gemini Code Assist" ... Could you clarify whether point #3 applies with login via Google Workspace Business accounts?


Yes it does.


Claude Code was not designed from the ground up to be only an autonomous agent, but it can certainly act as one.

- It has non-interactive CLI functionality (with the -p "prompt" option) in addition to the default interactive TUI, making it easy to integrate into workflows.

- It has turn-key GitHub integration (https://github.com/anthropics/claude-code-action).

- It has an internal task-tracking system that uses ReadTodo/WriteTodo tools to write JSON task lists to `$HOME/.claude/tasks/`, enabling it to stay on track better than most other tools.

- It has excellent and customisable context compaction.

- And it has a flexible permission system that can turn all permission questions into auto-accept when running in sandboxed environments.

Together those features enable it to be just as autonomous as any GitHub AI bot action hype thing (even though that might not have been its original or primary use).
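The non-interactive mode and permission flag mentioned above can be wired into a workflow script. A minimal sketch: only the `-p` and `--dangerously-skip-permissions` flags come from the comments here; the wrapper function and its names are hypothetical.

```python
import shutil
import subprocess

def build_claude_cmd(prompt: str, auto_accept: bool = False) -> list[str]:
    """Build a non-interactive Claude Code invocation (hypothetical helper)."""
    cmd = ["claude", "-p", prompt]  # -p runs a single prompt without the TUI
    if auto_accept:
        # Only sensible inside a sandboxed environment.
        cmd.append("--dangerously-skip-permissions")
    return cmd

cmd = build_claude_cmd("fix the failing tests", auto_accept=True)
# Run only if the CLI is actually installed on this machine.
if shutil.which("claude"):
    subprocess.run(cmd, check=True)
```

The `shutil.which` guard keeps the sketch harmless on machines without the CLI.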


Yeah, my primary usage pattern for it is purely autonomous for new feature development. I have Claude iterate on a prompt for itself a lot, asking me questions as it goes, then after, I can just say generic things like "Do the thing", "Continue", "Check the repo" and it does the thing, based on R/W Todo and my larger scale todo list for implementation. Also, Claude does have a github action (not that I've tried it though).


It has a fine-grained permissions configuration file. And every permission question has three answer options: "yes", "yes and don't ask again", "no". And it has the option '--dangerously-skip-permissions'. Out of all the 20+ AI code tools I've tried/used, Claude Code has the best permission options.
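A sketch of what that permissions configuration file can look like (Claude Code reads allow/deny rules from `.claude/settings.json`; the specific rule strings below are illustrative, not a recommendation):

```json
{
  "permissions": {
    "allow": [
      "Read(src/**)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm -rf *)"
    ]
  }
}
```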


AI Studio uses your API account behind the scenes, and it is subject to normal API limits. When you sign up for AI Studio, it creates a Google Cloud free-tier project with a "gen-lang-client-" prefix behind the scenes. You can link a billing account at the bottom of the "get an API key" page.

Also note that AI studio via default free tier API access doesn't seem to fall within "commercial use" in Google's terms of service, which would mean that your prompts can be reviewed by humans and used for training. All info AFAIK.


> AI Studio uses your API account behind the scenes

This is not true for the Gemini 2.5 Pro Preview model, at least. Although this model API is not available on the Free Tier [1], you can still use it on AI Studio.

[1] https://ai.google.dev/gemini-api/docs/pricing


> AI studio via default free tier API access doesn't seem to fall within "commercial use" in Google's terms of service, which would mean that your prompts can be reviewed by humans and used for training. All info AFAIK.

Seconded.


Interesting idea. But LLMs are trained on a vast amount of "code as text" and a tiny fraction of "code as AST"; wouldn't that significantly hurt the result quality?


Thanks, and yeah, that is a concern; however, I have been getting quite good results from this AST approach, at least for building medium-complexity webapps. On the other hand, this wasn't always true... the only OpenAI model that really works well is the o3 series. Older models do write AST code but fail to do a good job because of the exact issue you mention, I suspect!
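To illustrate the disparity discussed above: here is the same statement in the "as text" form models mostly see versus the "as AST" form they rarely see, using Python's stdlib `ast` module purely for illustration.

```python
import ast

source = "total = price * quantity"  # code as text

# Code as AST: the same statement as a syntax tree.
tree = ast.parse(source)
print(ast.dump(tree, indent=2))
```

The dump shows an `Assign` node whose value is a `BinOp` with a `Mult` operator, a much sparser representation in training corpora than the one-line source string.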


Same experience. Especially the "step" comments about the changes performed are super annoying. Here is my prompt-rule to prevent them:

"5. You must never output any comments about the progress or type of changes of your refactoring or generation. Example: you must NOT add comments like: 'Added dependency' or 'Changed to new style' or worst of all 'Keeping existing implementation'."


Curious... which 'bring your own keys' style competitors have venture funding?


Is HuggingFace venture funded? Because they have an Apache 2.0 licensed competitor (but it's not very active at a glance).

MSTY is the first one that comes to mind though. And if you're willing to stretch your idea of "what competes with OpenWebUI" I know half a dozen startups that let you pass in some set of keys and "build a multi-agent system" in a GUI usually alongside some pared down chat windows.


> I know half a dozen startups that let you pass in some set of keys and "build a multi-agent system"

Could you give names of those startups?

And yeah, Hugging Face is very much venture funded -- they had a modest $4.5B valuation in their last round... (I just didn't know they had a competing product).


Lobechat comes to mind.


Thanks. Do you have a source/reference for the venture funding? (Perplexity searched 101 sites and concluded that it didn't find any public information about funding)

