I use GitHub Copilot Chat right now. First I use Ask mode to ask it a question about the state of the codebase, outlining my current understanding: "I'm trying to do X; I think the code currently does Y." I include the few source files I'm talking about. I correct any misconceptions the LLM may have about the plan and suggest stylistic changes to the code. Then, once the plan seems correct, I switch to Agent mode and ask it to implement the change on the codebase.
Then I'll look through the changes and decide if they're correct. Sometimes I can just run the code to decide. Any compilation errors get pasted right back into the chat in Agent mode.
Once the feature is done, I commit the changes. Repeat for each feature.
I also do the same. I am on the $200 Max Pro plan. I often let the plan go to a pretty fine level of detail, e.g. describing exactly what test conditions to check and what exact code conventions to follow.
Do you write this to a separate plan file? I find myself doing this a lot, since after compaction Claude's code starts to drift from the plan.
Do you also get it to add to its to-do list?
I also find that having the o3 model review the plan helps catch gaps. Do you do the same?
Yes, it can't switch between Edit and Ask/Agent without losing context, but Ask <-> Agent is no problem. You can also change to your custom chat modes (https://code.visualstudio.com/docs/copilot/chat/chat-modes) without losing context. At least that's what I just did in VSCode Insiders.
Also, I am using tons of markdown documents for planning, results, research, etc. This makes it easy to get new agent sessions (or yourself) up to speed on the context.
Yes. I think it used to be separate tabs, but now chat/agent mode is just a toggle. After discussing a concept, you can just switch to agent mode and tell it to "implement the discussed plan."
GitHub Copilot models are intentionally restricted, which unfortunately makes them less capable.
I'm not the original poster, but regarding workflow, I've found it works better to let the LLM create one instead of imposing my own. My current approach is to have 10 instances generate 10 different plans, then I average them out.
This was my answer as well. And I think it just highlights all the serious dangers for the "API wrapper companies" compared to the foundation model companies.
User experience is definitely worth something, and I think Cursor had the first great code integration, but then there is very little stopping the foundation model companies from coming in and deciding they want to cut out the middleman if so desired.
Amp Code is also very good; about two weeks ago they released their Oracle feature, which leverages o3 to do reviews (https://ampcode.com/news/oracle). Amp leans closer to Claude Code than other solutions I've seen so far, and the team there is really leaning into the agentic approach.
I'm watching the changes on Kilo Code as well (https://github.com/Kilo-Org/kilocode). Their goal is to merge the best from Cline & Roo Code and then sprinkle their own improvements on top.
My problem with Claude Code versus Cursor is that with Cursor I could "shop around" the same context with different foundation model providers, often finding bugs or getting insights this way.
Sometimes one model would get stuck in its thinking, and submitting the same question to a different model would resolve the problem.
It allows you to have CC shoot out requests to o3, 2.5 Pro, and more. I was previously bouncing around between different windows to achieve the same thing. With this I can pretty much live in CC with just an editor open to inspect or manually edit files.
Cline is absolutely fantastic when you combine it with Sonnet 4. Always use plan mode first and always have it write tests first (have it do TDD). It changed me from a skeptic to a believer and now I use it full time.
I use Roo Code (a Cline fork) and spend roughly $15-30/mo by subscribing to GitHub Copilot Pro for $10/mo, which gives unlimited use of GPT-4.1 via the VS Code LM API plus a handful of premium credits a month (I use Gemini 2.5 Pro for the most part).
Once I max out the premium credits I pay-as-you-go for Gemini 2.5 Pro via OpenRouter, but I always try to one-shot with GPT-4.1 first for regular tasks. If I'm certain a task is asking too much, I use 2.5 Pro to create a Plan.md and then switch to 4.1 to implement it, which works 90% of the time for me (web dev, nothing too demanding).
With the different configurable modes Roo Code adds to Cline, I've set up the model defaults so switching between them is zero effort. I've also been playing around with custom rules so Roo could best guess whether it should one-shot with 4.1 or create a plan with 2.5 Pro first, but I haven't nailed that down yet.
Looking at Cline, wondering what the real selling points for Roo Code are. Any chance you can say what exactly made you go with Roo Code instead of Cline?
Cline has two modes (Plan and Act), which work pretty well, but Roo Code has five modes by default (Code, Ask, Architect, Orchestrator, Debug), and it's designed so that users can add custom modes. E.g. I added a Code (simple) mode with instructions about the scale/complexity of tasks it can handle, so it can decide whether to pass a task to Code for a better model. I also changed the Architect mode to evaluate whether to redirect the user to Code or Code (simple) after generating a plan.
Roo Code just has a lot more config exposed to the user, which I really appreciate. When I was using Cline I would run into minor irritating quirks that I wished I could change but couldn't, whereas with Roo the odds are pretty good there's some knob you can turn to modify that part of your workflow.
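For anyone wanting to try the custom-mode setup: Roo Code reads project-level custom modes from a `.roomodes` file in the workspace root. Here's a rough sketch of what a mode like my Code (simple) could look like; the exact field names and allowed `groups` values are from memory, so double-check the current Roo Code docs before relying on this:

```json
{
  "customModes": [
    {
      "slug": "code-simple",
      "name": "Code (simple)",
      "roleDefinition": "You are a coding assistant for small, well-scoped tasks: single-file edits, trivial bug fixes, boilerplate. If the task spans multiple files or needs design decisions, say so and recommend handing off to the Code mode instead of attempting it.",
      "customInstructions": "Prefer minimal diffs. Do not refactor surrounding code unless asked.",
      "groups": ["read", "edit", "command"]
    }
  ]
}
```

The idea is that the cheap default model handles this mode, and the `roleDefinition` itself tells it when a task is above its pay grade, which is how the "redirect to Code" behavior falls out.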
No doubt, but we're 10 years on. If we'd carried on down the path of swappable storage, we'd probably also have solved these minor UX things by now; no USB flaps on modern waterproof phones, for example.
Thank you all very much for the feedback, it gives me a new perspective on things. We wanted to show a real case, rather than animations, because we thought it would be clearer for our patients; but we are probably desensitized to watching stuff like this. Would you prefer to see a 3D animation instead, or something else?
I actually really prefer the videos of real people doing the thing! I've literally never seen a video of how to floss - even at the dentist they show you how on a little model.
I watched all the patient videos and found them helpful. There's no substitute for seeing examples with a real mouth.
The interdental brush video is a bit more "intense" than the rest. Can't be helped: you need to show someone with teeth gaps. Perhaps move that one down in the list so newcomers start with a more gentle video?
Another perspective: I don't mind the real videos. They are helpful. It might be easier for some to watch if the subjects had fairly nice teeth, though. I think animations would be less helpful.
We want clean, healthy, attractive teeth and mouths to look at, rather than e.g. the interdental-brushing mouth that triggers disgust even if it's realistic. Ideally, use models with healthy teeth in the videos.
Thank you for the feedback, that's a point that several commenters have brought up.
The problem with the interdental brushing video specifically is that we can't show how to use larger brushes on young healthy patients, as they don't have the spaces for it. But I will think about how we can improve that video (the comment above suggested moving it down in the page, to start with the 'gentler' videos).
This exactly. I don't think the average person is as comfortable as a medical professional at staring at videos/images of messed-up teeth, injuries, disease, etc. It's not exactly what we want to look at when learning.
Also, while I'm at it, I'd suggest maybe putting an hour or two of research into how to make content… exciting? I know you're a dentist and a software engineer, not a YouTuber, but it's worth looking up a bit about what YouTubers and entertainers know about how to hold an audience's attention. Just a few small changes can probably result in a 1.5-3x improvement in the number of people who make it to the end of a video.
Another perspective, I don't feel like these informational videos need to be exciting. For this, I feel like 'just the facts' are a breath of fresh air.
Maybe exciting is the wrong word, but compelling is a better one.
For example, just the order of how you present information matters. Compare these two approaches:
1. "If you don't floss enough, then <BadThing> may happen. Here's tips on how to floss: A, B, C."
2. "Here's tips on how to to floss: A, B, C. Btw, this can help prevent <BadThing>."
The first is better. "Boring" information ceases to be boring and instead becomes compelling when you have a strong reason to want to know the information. Thus, it's important to hook people by giving them that motivational reason to watch/listen before you jump right into a video or article. Otherwise, you will likely only retain viewers who already arrive with their own personal motivations.
Testosterone supplementation is very unlikely to be causative for what you're running into. Test is incredibly mild in psychological effect (unless you're deficient). This kind of stereotype comes from trenbolone, but I think you shouldn't underestimate the intersection of steroid users and cocaine users. It's bigger than you might intuitively guess.
Aggression certainly isn't limited to tren, high testosterone & DHT levels in general increase it. This is well understood.
Using TRT for its prescribed effect (baseline "normal" range) likely won't have any impact on this but taking it to go above normal levels certainly could.
All else being equal, it's still a potentially meaningful stimulus. There's no way it doesn't translate if you're training in a way that stimulates hypertrophy or strength increases; it's an offset upwards, sure, but the muscles will still respond to the work.