A loop I've found that works pretty well for bugs is this:
- Ask Claude to look at my current in-progress task (from Github/Jira/whatever) and repro the bug using the Chrome MCP.
- Ask it to fix it
- Review the code manually; it's usually pretty self-contained and easy to verify it does what I want
- If I'm feeling cautious, ask it to run "manual" tests on related components (this is a huge time-saver!)
- Ask it to help me prepare the PR: this refers to instructions I put in CLAUDE.md so it gives me a branch name, commit message, and PR description based on our internal processes.
- I do the commit operations, PR and stuff myself, often tweaking the messages / description.
- Clear context / start a new conversation for the next bug.
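The tail end of that loop (branch name and commit message in a house format, with the commit done by hand) can be sketched with plain git. The ticket id, branch slug, and message format below are made-up stand-ins for whatever your CLAUDE.md conventions would actually produce:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > app.txt
git add app.txt
git commit -qm "chore: initial commit"

# Hypothetical convention: ticket id plus a short slug for the branch,
# and a conventional-commit style subject referencing the ticket.
git checkout -qb fix/PROJ-123-modal-focus
echo "focus fix" >> app.txt
git add app.txt
git commit -qm "fix(PROJ-123): restore focus after closing the modal"
git log --oneline -1
```

Pushing the branch and opening the PR with the generated description stays a manual step, which makes it easy to tweak the wording first.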
On a personal project where I'm less concerned about code quality, I'll often do the plan->implementation approach. Getting pretty in-depth about your requirements obviously leads to a much better plan. For fixing bugs it really helps to tell the model to check its assumptions, because that's often where it gets stuck and creates new bugs while fixing others.
All in all, I think it's working for me. I'll tackle 2-3 day refactors in an afternoon. But obviously there's a learning curve and having the technical skills to know what you want will give you much better results.
I switched to Zed from a tmux/nvim setup. I think Zed is the first editor I've tried that has a good enough Vim mode for me to switch and keep my built-up muscle memory.
I tried switching to Zed from vim and the IntelliJ suite with IdeaVim, and I was disappointed that I couldn't just use my .vimrc. Have they fixed that yet? It's currently the only blocker for me.
I think there should perhaps be a law that any corporation automatically has a new class of un-tradeable VOTING shares, worth 50% of the overall vote, held by the employees. Everybody with an employment contract with this company is entitled to 1 vote, no more, no less; whether they're the janitor or the CEO.
Employees of a company are the ones most affected by the company's decisions; it's only fair that they have a say.
How much would a vote be worth in dollars? There would be a market for those votes: not just a spot market in cash or an internal market trading in vacation days, but one reflected in salary, benefits, company policy, and so on.
Couldn’t you just make the voting anonymous to make sure that buying votes isn’t possible? Why wouldn’t I just take your money and still vote however I like?
A law like this just means getting full time employment becomes that much more difficult and the vast majority of people working for a company will be non-voting contractors without benefits. The existing employees would even vote for changes that make full time hiring more difficult in order to avoid diluting their own votes.
It would obviously need to be accompanied with rigorous enforcement of employee classification. I know there would be a bunch of possible ways to game this, so there are a lot of other rules we'd need to add but I didn't want to make my comment too long.
Also, I wouldn't necessarily make a distinction between the full-time employees vs the part-time ones.
I think you’ll find that won’t actually work in practice. Many contract workers are not independent freelancers but actually employees of a different company who contracts the work out as a whole.
For example, a courier company like UPS employs all of its workers but the packages it delivers are for other companies who contract with UPS to do the work. If you force all businesses to employ their own couriers then UPS can’t even exist as a company and small businesses that depend on courier services would simply be unable to function at all.
Both can be true at the same time. Similar to the early days of the Internet, the dot-com bubble eventually popped, but the Internet (and dot-coms, for that matter) didn't go away.
What people are saying is that this mad race to throw cash at anything that has "AI" in it will eventually stop, and what will remain are the useful things.
Yeah, I saw "AI winter" mentioned elsewhere in the thread...
IMO there is a real qualitative difference between AI and crypto in terms of the durable impact it's going to have on the world. Does that mean I've bought into the AI hype? Maybe. But I think the signs are there.
AI winters are a recurring phenomenon, not a myth. Like the dotcom bust, they involve a collapsing hype bubble and reduced speculative investment in the field, while the technologies that were big during the preceding hype cycle continue to be important and to develop. In the case of AI winters, though, those technologies often stop being thought of as AI and get referred to by a name for the specific technique, frequently a different one than they were known by during the hype cycle: the "expert systems" of the last hype cycle are largely "business rules engines" now.
Googling, there seem to have been two AI winters, the first (late 1970s - early 1980s) when people first figured AI was overhyped, and the second (late 1980s - early 1990s) with the collapse of expert systems. I don't think we are about to get one now - more like AI spring leading to AGI summer.
> If I can't get the materials to repair my building in a hurry, I go outside and I wait. Or I stay inside and I wait. And if I can't do that for my Venusian balloon city, I slowly sink into a zone that melts lead and bakes me alive. And if I get the materials after it has started sinking, repairing it won't reinflate the balloon and have it rise again, because some significant fraction of the air has leaked out.
It's more similar to a boat than a house. If your boat has a leak, you need to repair it very quickly or it ends up at the bottom of the ocean. Yet we've managed to do it relatively reliably.
Sure. Do that when you're in the middle of an ocean tens of millions of miles wide. It's not as if you can just dive down to the bottom of the ocean there, mine some bauxite, take it back up to your sinking ship, refine it, manufacture new repair materials for the boat, then repair it, is it?
No, you have to have it shipped from a coast tens of millions of miles away. So again, where are they manufactured, and how long do they take to get there? Can any of this shit even be made in the vicinity of Venus, where transit times might be non-absurd? There are no recoverable materials on the planet itself, are there?
My recent experience with getting an app deployed from Gitlab to a kubernetes cluster on DigitalOcean was exactly like this. There were like 3 or 4 different third-party technologies I was expected to set up with absolutely no explanation of what problem they're solving, and there was a bunch of steps where I had to supply names or paths as command-line arguments with no guidance on what these values should contain (is it arbitrary? Does it need to match something else?)
Mind you, I have relatively good Docker experience (I've written Dockerfiles and run a pretty extensive Docker Compose based home server with ~15 services), so I'm not new to containers at all. But man, the documentation for all these tools was worse than useless.
One area where it really shines for me is personal projects. You know, the type of projects you might get to spend a couple hours on once the kids are in bed... Spending that couple hours guiding Claude to do what I want is way quicker than doing it all myself. Especially since I do have the skills to do it all myself, just not the time. It's been particularly effective around UI stuff since I've selected a popular UI library (MUI) that I don't use in my day job; I'd have to keep looking up documentation, but Claude just bangs it out very easily.
One thing where it hasn't shone is configuring my production deployment. I had set this project up with a docker-compose, but my selected CI/CD (Gitlab) and my selected hosting provider (DigitalOcean) seemed to steer me more towards Kubernetes, which I don't know anything about. Gitlab's documentation wanted me to set up Flux (?) and at some point referred to a Helm chart (?)... All words I've heard, but their documentation is useless to newcomers ("manage containers in production!": yes, that's obviously what I'm trying to do... "Getting started: run this obscure command with 5 arguments": wth is this path I need to provide? what's this parameter? etc.) I honestly can't believe how complex the recommended setup is, to ultimately run 2 containers that I already have defined in ~20 lines of docker-compose...
Claude got me through it. Took it about 5-6 hours of trying stuff, build failing, trying again. And even then, it still doesn't deploy when I push. It builds, pushes the new container images, and spins up a new pod... which it then immediately kills because my older one is still running and I only want one pod running... Oh well, I'll just keep killing the old pod until I have some more energy to throw at it to try and fix it.
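For what it's worth, one common cause of that "new pod immediately killed" symptom is the deployment strategy fighting a single-replica setup. A minimal sketch of the usual fix, assuming a Deployment (the name `my-app` and the image are hypothetical placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 1
  strategy:
    type: Recreate          # stop the old pod before starting the new one,
                            # instead of the default RollingUpdate
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest
```

`Recreate` accepts a brief downtime between pods, which is usually fine for a personal project; the default `RollingUpdate` instead tries to keep the old pod serving until the new one is ready, which needs room for two pods at once.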
TL;DR: it's much better at some things than others.
Totally. Being able to start shipping from the first commit with something like Pico.css and then adding features individually helps get things out of the design stage.
Some folks seem to like Docker Swarm before reaching for Kubernetes as well, and I've found it's not bad for personal projects for sure.
AI will always return the average of its corpus given the chance (or no clear direction in the prompt). I usually let my opinions rip and tell it to avoid building me a stack temple to my greatness. It often comes back with a nice lean stack.
I usually avoid or minimize JavaScript libraries because of their brittleness, and because their complexity can eat up more of the AI's context and attention mapping the abstractions, versus something it knows incredibly well.
Python is great, though its web stuff is still emerging. FastAPI is handy, and putting something like Pico/HTMX/Alpine.js on the front end seems reasonable.
Laravel is also really hard to overlook sometimes when working with LLMs on quick things; there's so much working code out there that it can really get a ton done for an entire production environment with all of the built-in tools.
Happy to learn about what other folks are using and liking.
I tend to have auto-accept on for edits, and once Claude is done with a task I'll just use git to review and stage the changes, sometimes commit them when it's a logical spot for it.
I wouldn't want to have Claude auto-commit everything it does (because I sometimes revert its changes), nor would I want to YOLO it without any git repo... This seems like a nice tool, but for someone who has a very different workflow.
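That review-with-git loop can be sketched like this (the repo and file are throwaway stand-ins, and the "AI edit" is simulated with an echo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "original" > app.txt
git add app.txt
git commit -qm "base"

# Simulate an edit made with auto-accept on, then review it
echo "ai change" >> app.txt
git status --short   # lists the touched files
git diff             # inspect the actual change

# Reject it wholesale; `git add -p` would instead stage it hunk by hunk
git restore app.txt
cat app.txt          # prints "original" again
```

Staging hunk by hunk with `git add -p` is what makes this nicer than an all-or-nothing revert: you keep the good parts of a change and drop the rest.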
"Checkpoints for Claude Code" uses git under the hood, but stores its history in a .claudecheckpoints folder so it doesn't mess with your own git. It adds itself to .gitignore.
It auto-commits the changes made locally through MCP, with a generated commit message.
As someone who doesn't use CC, auto-commit seems like it would be the easiest way to manage changes. It's easy enough to revert or edit a commit if I don't like what happened.
It's also very easy to throw away actual commits, as long as you don't push them (and even then not so difficult if you're in a context where force-pushing is tolerable).
True, but it's harder to reject changes in one file, make a quick fix, etc. I like to keep control over my git repo as it's a very useful tool for supervising the AI.
Yeah, I basically have Claude commit via git regularly, and the majority of the other features described here can be done via git. I agree it's a neat idea for someone, though.
Yeah, I think it's pretty clear to a lot of people that LLMs aren't at the "build me Facebook, but for dogs" stage yet. I've had relatively good success with more targeted tasks, like "Add a modal that does this, take this existing modal as an example for code style". I also break my problem down into smaller chunks, and give them one by one to the LLM. It seems to work much better that way.
I can already copy paste existing code and tweak it to do what I want (if you even consider that "software engineering"). The difference being that my system clipboard is deterministic, rather than infinitely creative at inventing new ways to screw up.