There's a lot of room for more efficient token usage in Claude Code, so I wouldn't be surprised if they soon launch a Claude Code-specific model.
In my experiments, it was enormously wasteful with tokens, doing things like re-reading every Python script in the current folder just to make sure all comments were up to date, or re-reading an R script to make sure all brackets were closed correctly. Surely that's where a good chunk of the waste comes from?
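Bracket matching in particular is a purely mechanical check, so there's no reason to spend model tokens on it. As a minimal sketch (ignoring brackets inside strings and comments for brevity), a stack-based checker like this could run locally before any file ever reaches the model:

```python
def brackets_balanced(code: str) -> bool:
    """Return True if (), [], and {} are balanced in `code`.

    A purely mechanical check that costs zero model tokens.
    Caveat: this naive sketch doesn't skip brackets inside
    string literals or comments.
    """
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in code:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack


print(brackets_balanced("f <- function(x) { x[1] }"))  # True
print(brackets_balanced("plot(x"))                     # False
```

A cheap deterministic pre-check like this could gate which files actually need a model pass at all.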