For someone who is new to this: does it technically work like TURN and STUN, with a relay communicating over outbound ports? Wouldn't that run into scaling issues?
I think that is true of any distribution channel. Once people figure out that it works, everyone ends up bombarding that channel until it saturates.
Vibe coding is not helping either, I guess. Now it is even cheaper to create assets for the distribution channel.
Kata Containers are the right way to do sandboxing on K8s. They are very underappreciated and, timing-wise, very well placed: with EC2 supporting nested virtualization, my guess is there is going to be wide adoption.
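For anyone who hasn't tried it, the switch is tiny. A minimal sketch using the kubernetes Python client, assuming the Kata runtime is already installed on the nodes and registered as a RuntimeClass named "kata" (the class name, pod name, and image here are illustrative assumptions, not anything Kata itself mandates):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-agent"),
    spec=client.V1PodSpec(
        # The only change vs. a normal pod: schedule it onto the Kata runtime,
        # so it runs inside a lightweight VM instead of sharing the host kernel.
        runtime_class_name="kata",
        containers=[
            client.V1Container(
                name="agent",
                image="python:3.12-slim",
                command=["sleep", "infinity"],
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The workload itself doesn't change at all; that one runtime_class_name field is the whole switch, which is part of why adoption could be quick once nested virtualization is broadly available.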
This weekend, I found an issue with Microsoft's new Golang version of sqlcmd. I ran Claude Code and fixed the issue, which I wouldn't have done if agent tooling did not exist. The fix was contributed back to the project.
I think it is about who is contributing, their intention, and various other nuances. I would still say it is a net good for the ecosystem.
Did you actually fix the issue, or did you fix the issue and introduce new bugs?
The problem is the asymmetry of effort. You verified you fixed your issue. The maintainers verified literally everything else (or are the ones taking the hit if they're just LGTMing it).
Sorry, I am sure your specific change was just fine. But I'm speaking generally.
How many times at work have I looked at a PR and thought, "this is such a bad way to fix this that I couldn't have come up with something so comically bad if I tried"? And naturally I couldn't say this to my fine coworker, whose zeal exceeded his programming skills (partly because someone else had already approved the PR after "reviewing" it...). No, I had to simply fast-follow with my own PR, containing a squashed revert of his change plus the correct fix, so that his change didn't introduce race conditions into parallel test runs.
And the submitter of course has no ability to gauge whether their PR is the obvious trivial solution, or comically incorrect. Therein lies the problem.
This is why open source projects need good architecture and high test coverage.
I'd even argue we need a new type of test coverage: something that traces back from the asserts to see which parts of the code are actually constrained by the tests, a sort of differential mutation analysis.
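Something in that spirit, as a toy Python sketch: swap one operator at a time, re-run the suite, and count the mutants no assert kills. The survivors are exactly the code the tests don't constrain. (The operator table, module name, and test command are illustrative; real mutation-testing tools such as mutmut are far more thorough.)

```python
import ast
import subprocess
import sys
from pathlib import Path

# Operator swaps to try; real tools have a much larger mutation set.
SWAPS = {ast.Add: ast.Sub, ast.Sub: ast.Add, ast.Lt: ast.GtE, ast.Gt: ast.LtE}

def mutants(source: str):
    """Yield one mutated copy of the module per swappable operator."""
    nodes = list(ast.walk(ast.parse(source)))
    for idx, node in enumerate(nodes):
        op = getattr(node, "op", None) or getattr(node, "ops", [None])[0]
        swap = SWAPS.get(type(op))
        if swap is None:
            continue
        tree = ast.parse(source)            # fresh tree to mutate
        target = list(ast.walk(tree))[idx]  # same walk order, same index
        if hasattr(target, "op"):
            target.op = swap()              # BinOp / AugAssign
        else:
            target.ops[0] = swap()          # Compare
        yield ast.unparse(tree)

def surviving_mutants(module: str, test_cmd: list[str]) -> int:
    """Count mutants the suite fails to kill: code the asserts don't constrain."""
    path = Path(module)
    original = path.read_text()
    survivors = 0
    try:
        for mutated in mutants(original):
            path.write_text(mutated)
            # A surviving mutant means the tests still pass despite a behavior change.
            if subprocess.run(test_cmd, capture_output=True).returncode == 0:
                survivors += 1
    finally:
        path.write_text(original)           # always restore the original
    return survivors

if __name__ == "__main__":
    print(surviving_mutants("mylib.py", [sys.executable, "-m", "pytest", "-q"]))
```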
I think the problem is that determining who is contributing, their intention, and those other nuances takes a human's time and effort. And at some point the number of contributions becomes too large to sort through.
If it's not human effort, it costs tokens, lots of tokens, that need to be paid for by somebody.
The LLM providers will be laughing all the way to the bank because they get paid once by the people who are causing the problem and paid again by the person putting up the "barriers, processes, and mechanisms" to control the problem. Even better for them, the more the two sides escalate, the more they get paid.
So open source development should be more like job-hunting and hiring, where humans feed AI-generated resumes into AI resume filters which supposedly choose reasonable candidates to be considered by other humans? That sounds... not good.
If you used Claude to fix the issue, built and tested your branch, and only then submitted the PR, the process is not much different from pre-LLM days.
I think the problem is bug-bounty or reputation chasers letting LLMs write the PRs _without_ building and testing. They seek output, not outcomes.
I agree, but that's assuming the project accepts AI-generated code, of course, especially given the legal questions around accepting commits written by an AI trained on god knows what dataset.
We have been doing this lately: when we hit a roadblock with open source, we run Claude Code to fix the OSS issue and contribute the fix back. We genuinely put effort into testing it thoroughly.
We don't want to bother maintainers, so they can focus on more important issues. I think a lot of long-tail issues and bugs in OSS can be addressed this way.
We leave it up to the maintainers to accept the PR or not; since we test the changes thoroughly, our own problem is solved either way.
And are you sure that you fixed it without creating 20 new bugs? To the reader this could suggest that you never understood the bug, so how can you be sure that you've done anything right?
How do you make sure you don't create bugs in the code you write without an LLM? I imagine for most people, the answer is a combination of self-review and testing. You can just do those same things with code an LLM helps you write and at that point you have the same level of confidence.
Yes, that's the fundamental tradeoff. But if the amount of time you save writing the code is higher than the amount of extra time you need to spend reading it, the tradeoff is worth it. That's going to vary from person to person for a given task though, and as long as the developer is actually spending the extra time reading and understanding the code, I don't think the approach matters as much as the result.
This is the fundamental problem. You know what you know, but the maintainer does not, and cannot possibly take the time to find out what every single PR author knows before accepting their work. AI breaks every part of the web of trust that is foundational to knowing anything.
Using an LLM as an assistant isn’t necessarily equivalent to not understanding the output. A common use case of LLMs is to quickly search codebases and pinpoint problems.
Code complexity is often the cause of more bugs, and complexity naturally comes from more code. As they say: the best code I ever wrote was no code.
It is probably not bots. The author's reach is pretty good; he actually has loyal fans in India. You can see the same when he shows up on a podcast or a talk.
I think there are a lot of Indian developers who are on Hacker News as well as on GitHub and other forums.
What you can also do is add separate frontend and backend users to the proxy, so the agents never get the actual DB user and password. You can make the credentials throwaway, and even just-in-time, if you want.
Traditionally this was called database activity monitoring, which kind of fell out of fashion, but I think it is going to come back with the advent of agents.
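A hedged sketch of the throwaway, just-in-time user idea against Postgres; the role naming, TTL, and read-only grant are illustrative choices, not a standard scheme. The point is that the admin credentials never leave the proxy side:

```python
import secrets
from datetime import datetime, timedelta, timezone

import psycopg2

def mint_agent_role(admin_dsn: str, ttl_minutes: int = 15) -> tuple[str, str]:
    """Create an auto-expiring, read-only role for one agent session."""
    user = f"agent_{secrets.token_hex(4)}"  # generated, so safe to inline in SQL
    password = secrets.token_urlsafe(24)
    expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    with psycopg2.connect(admin_dsn) as conn, conn.cursor() as cur:
        # psycopg2 binds %s client-side, so this works in a utility statement too.
        cur.execute(
            f"CREATE ROLE {user} LOGIN PASSWORD %s VALID UNTIL %s",
            (password, expiry.isoformat()),
        )
        cur.execute(f"GRANT SELECT ON ALL TABLES IN SCHEMA public TO {user}")
    return user, password  # hand these to the agent; the admin DSN stays in the proxy
```

Expired roles can no longer log in; actual cleanup (DROP OWNED / DROP ROLE) can run as a periodic job.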
I believe creativity is a skill. Someone who is creative and knows how to approach a complex problem is someone who creates value.
At the end of the day, vibe coding replicates and accelerates your thoughts and ideas by 100x. It removes technical complexity and mundane tasks from your workflow.