debarshri's comments | Hacker News

For someone who is new to this: technically, does it work like TURN and STUN, with a relay communicating over outbound ports? Wouldn't this run into scaling issues?
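
To make the question concrete, here is a toy Go sketch of what "relay over outbound ports" would mean: both peers dial out to a public relay, which splices the two outbound connections together, so neither side needs an open inbound port. The port and the naive first-come pairing are made up for illustration.

    package main

    import (
        "io"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":3478") // hypothetical relay port
        if err != nil {
            panic(err)
        }
        for {
            a, err := ln.Accept() // peer A dials out to the relay
            if err != nil {
                continue
            }
            b, err := ln.Accept() // peer B dials out too (naive pairing)
            if err != nil {
                a.Close()
                continue
            }
            go func(a, b net.Conn) {
                defer a.Close()
                defer b.Close()
                go io.Copy(a, b) // shuttle bytes in both directions
                io.Copy(b, a)
            }(a, b)
        }
    }

Every byte crosses the relay in this model, which is exactly why I wonder about scaling.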

I think it is true of any distribution channel: once people figure out that it works, everyone ends up bombarding that channel until it saturates.

Vibe coding is not helping either, I guess. Now it is even cheaper to create assets for the distribution channel.

I think the same thing happened with Product Hunt.


Product Hunt was launched with a deliberately non-straightforward approach: how it actually works behind the curtain was never meant to show on the surface.

Add in unemployment rates for devs... everyone wants to turn a side hustle into their job.

Kata Containers are the right way to do sandboxing on K8s. They are very underappreciated and, timing-wise, very well placed. With EC2 supporting nested virtualization, my guess is there is going to be wide adoption.

I am pretty sure Apple containers on macOS Tahoe are Kata Containers.

This weekend, I found an issue with Microsoft's new Golang version of sqlcmd. I ran Claude Code and fixed the issue, which I wouldn't have done if agent tooling didn't exist. The fix was contributed back to the project.

I think it comes down to who is contributing, their intention, and various other nuances. I would still say it is a net good for the ecosystem.


Did you actually fix the issue, or did you fix the issue and introduce new bugs?

The problem is the asymmetry of effort. You verified you fixed your issue. The maintainers verified literally everything else (or are the ones taking the hit if they're just LGTMing it).

Sorry, I am sure your specific change was just fine. But I'm speaking generally.

How many times have I at work looked at a PR and thought "this is such a bad way to fix this I could not have come up with such a comically bad way if I tried." And naturally couldn't say this to my fine coworker whose zeal exceeded his programming skills (partly because someone else had already approved the PR after "reviewing" it...). No, I had to simply fast-follow with my own PR, which had a squashed revert of his change, with the correct fix, so that it didn't introduce race conditions into parallel test runs.

And the submitter of course has no ability to gauge whether their PR is the obvious trivial solution, or comically incorrect. Therein lies the problem.


This is why open source projects need good architecture and high test coverage.

I'd even argue we need a new type of test coverage: something that traces the asserts back to see which parts of the code the tests actually constrain, a sort of differential mutation analysis.
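
A toy Go sketch of the idea, assuming a single target file and the stock go test runner: flip an operator, rerun the suite, and flag mutants that no assert catches. Survivors mark code the tests do not actually constrain. (Naively flipping every '+' byte will also hit strings and comments; a real tool would mutate the AST.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const path = "target.go" // assumed file under test
        src, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        original := string(src)
        defer os.WriteFile(path, []byte(original), 0644) // always restore

        for i := 0; i < len(original); i++ {
            if original[i] != '+' {
                continue // toy mutation: only flip '+' to '-'
            }
            mutant := original[:i] + "-" + original[i+1:]
            os.WriteFile(path, []byte(mutant), 0644)
            // A mutant the whole suite still passes marks code no assert constrains.
            if exec.Command("go", "test", "./...").Run() == nil {
                fmt.Printf("surviving mutant: '+' -> '-' at byte %d\n", i)
            }
        }
    }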


This could have happened before AI agents too, but yes, it's another step in that direction.

I think the problem is that determining who is contributing, their intention, and those other nuances takes a human's time and effort. And at some point the number of contributions becomes too much to sort through.

I think building enough barriers, processes, and mechanisms might work. I don't think it needs to be human effort.

If it's not human effort, it costs tokens, lots of tokens, that need to be paid for by somebody.

The LLM providers will be laughing all the way to the bank because they get paid once by the people who are causing the problem and paid again by the person putting up the "barriers, processes, and mechanisms" to control the problem. Even better for them, the more the two sides escalate, the more they get paid.


So open source development should be more like job-hunting and hiring, where humans feed AI-generated resumes into AI resume filters which supposedly choose reasonable candidates to be considered by other humans? That sounds... not good.

If you used Claude to fix the issue, built and tested your branch, and only then submitted the PR, the process is not much different from pre-LLM days.

I think the problem is where bug-bounty or reputation chasers are letting LLMs write the PRs, _without_ building and testing. They seek output, not outcomes.


Genuinely interested in the PR, if you would kindly care to link it.


View the project's open pull requests, and compare usernames.

https://github.com/microsoft/go-sqlcmd/pulls


That's the positive case IMO: a human (you) remains responsible for the fix. It doesn't matter if AI helped.

The negative case is free-running OpenClaw slop cannons that could even be malicious.


I agree, but that's assuming the project accepts AI-generated code, of course, especially given the legal questions around accepting commits written by an AI trained on god knows what dataset.

We have been doing this lately: when we hit a roadblock with open source, we run Claude Code to fix the OSS issue and contribute the fix back. We genuinely put effort into testing it thoroughly.

We don't want to bother maintainers, so they can focus on more important issues. I think a lot of long-tail issues and bugs in OSS can be addressed this way.

We leave it up to the maintainers whether to accept the PR, but our problem is solved either way, since we thoroughly test the changes.


And are you sure that you fixed it without creating 20 new bugs? To the reader this could suggest that you never understood the bug, so how can you be sure that you've done anything right?

How do you make sure you don't create bugs in the code you write without an LLM? I imagine for most people, the answer is a combination of self-review and testing. You can just do those same things with code an LLM helps you write and at that point you have the same level of confidence.

It’s much harder to understand code you didn’t write than code you wrote.

Yes, that's the fundamental tradeoff. But if the amount of time you save writing the code is higher than the amount of extra time you need to spend reading it, the tradeoff is worth it. That's going to vary from person to person for a given task though, and as long as the developer is actually spending the extra time reading and understanding the code, I don't think the approach matters as much as the result.

Pretty sure I did not create bugs, because I validated it thoroughly; I had to deploy it into production in a fintech environment.

So I am confident in, as well as convinced about, the change. But then, I know what I know.


This is the fundamental problem. You know what you know, but the maintainer does not, and cannot possibly take the time to find out what every single PR author knows before they accept it. AI breaks every part of the web of trust that is foundational to knowing anything.

Using an LLM as an assistant isn’t necessarily equivalent to not understanding the output. A common use case of LLMs is to quickly search codebases and pinpoint problems.

Code complexity is often the cause of more bugs, and complexity naturally comes with more code. It is not uncommon. As they say, the best code I ever wrote was no code.

If the test coverage is good it will most likely be fine.

It is probably not bots. The author's reach is pretty good; he actually has loyal fans in India. You can see the same when he shows up on a podcast or a talk.

I think there are a lot of Indian developers who are on Hacker News as well as on GitHub and other forums.


Why do all the comments look exactly like paid comment spam?

On second look, it could be spam. This is disappointing.

Also, just to add to this: for compile once, run everywhere (CO-RE) to work, you need a BTF-enabled kernel.
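
A quick way to check, assuming the standard sysfs path: CO-RE loaders read kernel type info from /sys/kernel/btf/vmlinux.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // CO-RE loaders read type info from the kernel's BTF blob here.
        if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err != nil {
            fmt.Println("no kernel BTF: compile-once/run-everywhere eBPF won't load")
            return
        }
        fmt.Println("kernel BTF present: CO-RE relocations can resolve at load time")
    }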

Exactly, and that's one more reason I went with a userspace proxy — no kernel deps, runs anywhere, way easier to debug.

We do something similar in Adaptive [1].

What you can also do is add a frontend and a backend user to the proxy; then agents never get the actual DB user and password. You can make the frontend credentials throwaway, or even just-in-time if you want (rough sketch below).

Traditionally this was called database activity monitoring, which kind of fell out of fashion, but I think it is going to come back with the advent of agents.

[1] https://adaptive.live
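
A rough Go sketch of the frontend/backend user idea; the addresses, ports, and handshake comment are all hypothetical and elide the actual database wire protocol.

    package main

    import (
        "io"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":6432") // proxy port agents connect to
        if err != nil {
            panic(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(client net.Conn) {
                defer client.Close()
                // A real proxy terminates the client's auth here, checks the
                // throwaway "frontend" user, then opens its own session to the
                // database with the "backend" credentials it alone holds, so
                // the agent never sees the real user/password.
                backend, err := net.Dial("tcp", "db.internal:5432") // assumed DB
                if err != nil {
                    return
                }
                defer backend.Close()
                go io.Copy(backend, client)
                io.Copy(client, backend)
            }(client)
        }
    }

The point is that only the proxy process ever holds the backend credentials, so rotating or expiring them never touches the agent.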


intensification = productivity for me.

I believe creativity is a skill. Someone who is creative and knows how to approach a complex problem is someone who creates value.

At the end of the day, vibe coding kind of replicates and accelerates your thoughts and ideas 100x. It removes technical complexity and mundane tasks from your workflow.


Hi, the link goes directly to the registration. Maybe you can change it to the landing page instead.


Thanks, good catch! Updated to the homepage.

edit: looks like I can't edit it anymore?

