Hacker News | RhythmFox's comments

Having used agents some, I think 'addictive behavior' is really the closest thing to the feeling it gives me as well. I don't find it engaging my critical-thinking brain; in fact it often subverts that in favor of 'get the next dopamine hit faster' behavior (i.e. just rerun it, leading to the metaphor the OP is using). It takes a conscious effort for me to get back out of that cycle and start thinking about the fine details of what the code really does, or why I wanted it to do that in the first place. I have called it 'smoking vibes' and 'chasing rAInbows' in my sillier moments. It really does feel good... too good :P

How does it 'count almost everything as gambling'? They just said 'non-deterministic' output is gambling-like; that is not 'almost everything'. Most computation that you use on a day-to-day basis (depending on how much you use AI now, I suppose) is in all ways deterministic. Using probabilistic algorithms is not new, but your point is not clicking...

Working with humans is decidedly not deterministic, though. And the discussion here is comparing AI coding agents and humans.

That starts to get into very philosophical territory about whether human action is deterministic or not. I think keeping to the fact that the artifacts (i.e. code) we are working from will have deterministic effects (unless we want them not to) is exactly the point. That is what lets chaotic human brains communicate with machines at all. Adding more chaos to the system doesn't strike me as obviously an improvement.

Almost everything is non-deterministic to some degree. Huge amounts of machine learning, most things that have some timing element to them in distributed systems, anything that might fail, anything involving humans, even actual running computation, given that bit flips can happen. At what point does labelling everything that has some random element "gambling" become pointless? At best it'll be entirely different to how others use the term.
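To make the timing point concrete, here is a toy sketch of my own (not from the thread): no randomness is needed beyond scheduling jitter for the same program to produce different outputs run to run. The service names are made up for illustration.

```typescript
// Two "services" with jittered latency race to respond; the output
// of the program depends on which one's timer fires first, so the
// program is non-deterministic even though each branch is simple.
async function firstResponder(): Promise<string> {
  const jitter = () =>
    new Promise<void>((resolve) => setTimeout(resolve, Math.random() * 10));
  return Promise.race([
    jitter().then(() => "service-a"),
    jitter().then(() => "service-b"),
  ]);
}
```

Run it a handful of times and you will see both winners, which is exactly the "some random element" territory the comment is gesturing at: no one would call this gambling, yet it is not deterministic either.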

Then the training itself is the legal question. This doesn't seem all that complicated to me.

A small price to pay for human hands to never be sullied digging through cold food to find things again. Progress.


This isn't strictly better to me. It captures some intuitions about how a neural network ends up encoding its inputs over time in a 'lossy' way (doesn't store previous input states in an explicit form). Maybe saying 'probabilistic compression/decompression' makes it a bit more accurate? I do not really think it connects to your 'synthesize' claim at the very end to call it compression/decompression, but I am curious if you had a specific reason to use the term.


It's really way more interesting than that.

The act of compression builds up behaviors/concepts of greater and greater abstraction. Another way you could think about it is that the model learns to extract commonality, hence the compression. What this means is that because it is learning higher-level abstractions AND the relationships between those abstractions, it can ABSOLUTELY learn to infer or apply things way outside its training distribution.


ya, exactly... i'd also say that when you compress large amounts of content into weights and then decompress via a novel prompt, you're also forcing interpolation between learned abstractions that may never have co-occurred in training.

that interpolation is where synthesis happens. whether it is coherent or not depends.


I mean, actually not a bad metaphor, but it does depend on the software you are running as to how much of a 'search' you could say the CPU is doing among its transistor states. If you are running an LLM then the metaphor seems very apt indeed.


It's 'wild' to this person because it challenges their opinion on Musk and Tesla, I have to guess. This is a classic 'it is bad reporting because it does not agree with my worldview' take, aka 'fake news'.


He also points out a pointless type check in a type checked language...

Your name is very accurate I must say.


That type check is honestly not pointless at all. You can never be certain of your inputs in a web app. The likelihood of that parameter being something other than an ArrayBuffer is non-zero, and you generally want code coverage for that kind of unexpected input. TypeScript's guarantees are compile-time only; the types are erased at runtime. TypeScript doesn't complain without a reason.
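A minimal sketch of the point, with a hypothetical handler (not the code under discussion): the static annotation promises an ArrayBuffer, but untrusted data crossing the wire can still violate it, so the runtime guard does real work.

```typescript
// Hypothetical upload handler: the parameter is *typed* as ArrayBuffer,
// but TypeScript types are erased at runtime, so a caller feeding it
// deserialized network input can still hand us anything.
function handleUpload(payload: ArrayBuffer): number {
  // Runtime guard: this check is the only thing standing between
  // us and a confusing crash deeper in the code on bad input.
  if (!(payload instanceof ArrayBuffer)) {
    throw new TypeError("expected an ArrayBuffer");
  }
  return payload.byteLength;
}

// The compiler says this can't happen; at runtime it absolutely can.
const untrusted: unknown = JSON.parse('"not a buffer"');
try {
  handleUpload(untrusted as ArrayBuffer);
} catch (e) {
  console.log((e as Error).message); // "expected an ArrayBuffer"
}
console.log(handleUpload(new ArrayBuffer(8))); // 8
```

The `as ArrayBuffer` cast stands in for all the ways real data sneaks past the type checker: `JSON.parse` results, `any`-typed middleware, untyped third-party callers.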


Not only that but they get to use an individual who they have philosophical differences with. You can say it was 'good security practice', tarnish his reputation, and get to switch the narrative to something sympathetic to yourself all in one go. Very convenient for them.

I think they make a lot of overly strong claims here, even though there are plenty of alternative explanations possible. The mere fact that 3 people had AWS root access during this period but they only identify one and never question that it could have been one of the others is telling. They reallllly want you to just take it as obvious that 1) all these actions were taken by 1 individual and 2) that individual was malicious. Then they sprinkle in enough nasty sounding activities and info about Andre to get you to draw the conclusion that he is bad, and did bad things, and they had to do these things the way they did.

Using what reads like a business strategy email as a 'nefarious backstory' is such bad faith. I bet if you got access to all the board's emails you would see a ton of proposals for ways to support RubyGems that may not all sound great in isolation. They are being just transparent enough to badmouth Andre while hiding any motivations on their end as purely 'security' related.


> In my opinion they are now deliberately making the community angry.

This is one thing I think hasn't been talked about explicitly enough within the community (that I see, at least): Ruby Central seems to be actively trolling the 'other side' of this situation. It reads to me like they know they have the lawyer power to defend their castle and are enjoying pissing down on people and telling them it's raining. Oh, and you should enjoy that because it means there will be flowers soon... or something.

I think the dialogue of 'are they acting in good faith' only works insofar as they even care about the rest of the Ruby community at all. If they are indeed bad actors (motivated purely by greed, ambition, ego, etc.) then they are never going to come clean, and they would let the whole Ruby community die before they admit defeat or wrongheadedness. My favorite term for these types of actors is SCUM - Sufficiently Clever and Uncaring Malefactors.

