dehsge's comments | Hacker News

Members-only comment blogs, where you need an invite to comment, also solve the problem. You need to know a real human to get access.


> Members only comment blogs.

There, sadly, needs to be some gatekeeping and then it can work.

For example, I've been a member for years of a petrolhead forum that works like that: it's for a fancy car brand with lots of "tifosi" (and you don't necessarily want all those would-be owners on the forum). To join, you must be introduced by other members who have met you in real life and who confirm that you showed up with a car of that brand.

If you're not a "confirmed owner", you can only access the forum in read-only mode.

It's not 100% foolproof but it does greatly raise the bar.

It's international too: people do travel, and they organize meetups / see each other at cars-and-coffee events, etc.

Or take a real extreme, maybe the most expensive social network: the Bloomberg terminal. People and companies paying $30K or so per seat each year probably aren't going to let employees hook an LLM up to chat for them and risk screwing their reputation. Although, I take it, you never know.

It is the way it is but gatekeeping does exist and it does work.


That might raise the initial barrier, but it assumes every user behaves appropriately.

All it takes is one invited user to open the door to bots.


Because you can trace back to the initial user who invited the bots. You can then cull that user's whole invite tree: every invite given by the user who added bots.


Yes but I think bots can be very good, and many people have legitimate online-only relationships. It gets hairy quickly, with real users getting culled and bots slipping through.

Also, if the bots are smart, they'll add real people too and take them down with them.


Yeah, that's the trade-off of this implementation. Lobste.rs already uses it: https://lobste.rs/about#invitations The comments are considerably better. I'm not even a member, but I get more out of reading those comments than HN's, and I've worked at multiple YC companies. This place is not what it used to be.


Blog admin sees who invited the bots and recursively kicks that account and any invited by it.


I invite myself multiple times in addition to other real humans. Then I use my duplicate accounts to invite bots.


I'm assuming there's tracking on the invites. So a recursive kick on X and all who X invited would still do the trick. If an IP address appears more than 5 times in an invite tree, ban the /24, or the ASN if it's not from a friendly country, for 10 minutes or some other reasonable timeframe.
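The recursive kick can be sketched in a few lines (a toy sketch; the `invites` mapping and function name are assumptions, not any real site's schema):

```python
def ban_subtree(invites, user):
    """Ban `user` and, recursively, everyone they invited.

    `invites` maps each user to the list of users they invited
    (an illustrative schema, not any real site's data model).
    Returns the set of banned accounts.
    """
    banned = {user}
    for invitee in invites.get(user, []):
        banned |= ban_subtree(invites, invitee)
    return banned
```

For the IP heuristic, you'd additionally count addresses while walking the tree; the ban itself is just this subtree traversal.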


Getting unique IPs in any country you want is trivial for anyone but people building toy bots.

How far up the tree do you kick? Going too far up makes it so malicious people can "sabotage" by botting to get a huge swath of legitimate users banned.

Going too shallow means I just need to create N+1 degrees of separation between myself and my bot accounts.


That's fine. If it is understood that you might be permanently banned because someone you invite starts doing bad stuff, maybe you'll be careful about who you invite.


Inviting people who invited bots could also hurt your "social credit" score in various ways.

Your tree could for instance be pruned - you can still invite people, but the people you invited can no longer invite people.

Not a lot of sites have tried this and failed. Those which have tried to be even a little bit clever about it have succeeded pretty well (Advogato was a really early example).

What there have been are sites which dropped such restrictions after a while, because they would rather have a big number to show to investors than real people. Many have even run the fake accounts themselves (e.g. Reddit).


Then we go back to torrent sites.

Invite only. You get a number of invites per year, etc. And once a year there's an open door or so.


>Where you need an invite to comment also solves the problem. You need to know a real human to get access.

BitTorrent trackers, as backward as they are, have performed this experiment for us, and the lesson we're supposed to learn is that this does not work. Someone, somewhere, eventually has an incentive to invite the wrong sort, which, because of social-network graph math, eventually means "soon". Once that happens, that bot will invite 10 trillion other bots.


Actually it does work for those invite-only trackers, especially in niche fields.

Unlike most public trackers, which are either dead or on life support, member-only and invite-only sites are still kicking.

And you are personally responsible for your invitees.


Absolutely. If anything, private torrent trackers and NZB indexers are proof that it works overwhelmingly well.

The few I'm part of all have a real community (like in the net of old), civil conversation, and verified, quality materials being shared. Almost everybody behaves and doesn't abuse the invite system, because nobody wants to lose their access to such a wonderful oasis among the slop web. It's a great motivator to stay decent and follow the rules. When things go bad, it's usually not because of malice, but because someone got their account stolen. Prune the invitee tree and things are mostly under control again.


You haven't really paid attention, I guess.

Entire trees of invitees, going back months and years, are pruned: mercilessly, indiscriminately, and self-servingly for the few people privileged enough to be above suspicion. And if you're unlucky enough to be on the wrong side of it, there's nothing like an appeals process.

>and doesn't abuse the invite system

That's wild.

>When things go bad, it's usually not because of malice,

I never said it was malice. It's because the system itself is pathologically flawed and there's no way to make it work.


Compilers can never guarantee error-free output for non-trivial programs. This is outlined in Rice's theorem. It's one of the reasons we have observability/telemetry as well as tests.
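The obstruction can be seen with the classic diagonal argument that Rice's theorem generalizes (a sketch; `halts` and `make_diagonal` are illustrative names): any claimed total decider for "does this program halt" can be handed a program built to contradict its own verdict, and the same construction rules out deciding any non-trivial semantic property, "error-free" included.

```python
def make_diagonal(halts):
    """Given a claimed decider halts(f) -> bool (True iff calling f()
    would terminate), build a program that does the opposite of its
    verdict about itself, so no total decider can be correct on it."""
    def g():
        if halts(g):
            while True:   # the decider said we terminate, so loop forever
                pass
        return            # the decider said we loop, so terminate at once
    return g
```

Feeding in the trivially wrong decider `lambda f: False` yields a `g` that returns immediately, contradicting the verdict; the argument shows every candidate decider fails on its own diagonal program.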


That's fine, but this also applies to human written code and human written code will have even more variance by skill and experience.


There are some numbers that are uncomputable in Lean. You can approximate them in Lean; however, those approximations may still be wrong. Lean's `noncomputable` machinery is very interesting.
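A sketch of what that looks like in Lean 4 with Mathlib (`half` and `halfApprox` are illustrative names): definitions over the reals often carry no evaluation algorithm and must be marked `noncomputable`, while a rational stand-in stays computable but is only an approximation of any irrational value it replaces.

```lean
import Mathlib.Data.Real.Basic

-- Real-number arithmetic in Mathlib carries no evaluation algorithm,
-- so this definition must be marked noncomputable:
noncomputable def half : ℝ := 1 / 2

-- A computable stand-in works over ℚ, but a finite approximation of
-- an irrational constant is necessarily inexact:
def halfApprox : ℚ := 1 / 2
```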


Most math books do not provide solutions. Outside of calculus, solutions in advanced mathematics are left as an exercise for the reader.


The ones I used for the first couple of years of my math PhD had solutions. That's a sufficient level of "advanced" to be applicable in this analogy. It doesn't really matter though - the point still stands that _if_ solutions are available you don't have to use them and doing so will hurt your learning of foundational knowledge.


There are other bounds at play here that are often not talked about.

AI runs on computers. Consider the undecidability captured by Rice's theorem: whether the compiled code of a non-trivial program is error-free cannot be decided in general. Even an AI can't guarantee its compiled code is error-free. Not because it couldn't write code that solves a problem, but because the code it writes is bounded by these externalities. Undecidability in general makes the dream of generative AI considerably more challenging than how it's being sold.


LLMs are bounded by the same limits computers are. They run on computers, so a prime example of a limitation is Rice's theorem: any "AI" that writes code is unable (just like humans) to determine, in general, whether its output is or is not error-free.

This means a multi-agent coding workflow with no human in the loop may or may not produce error-free code.

LLMs are also bounded by runtime complexity. Could an LLM find the shortest Hamiltonian path between two cities in polynomial time?
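For scale, the only generally known exact approach is brute force over orderings, which blows up factorially (a toy sketch; the function name and distance table are illustrative):

```python
from itertools import permutations

def shortest_hamiltonian_path(dist, start, end):
    """Brute force: try every ordering of the intermediate cities.

    `dist` is a symmetric dict-of-dicts of pairwise distances. With n
    cities this examines (n-2)! orderings, which is why no known
    polynomial-time algorithm exists for the general problem.
    Returns (total_cost, route).
    """
    middle = [c for c in dist if c not in (start, end)]
    best = None
    for perm in permutations(middle):
        route = (start, *perm, end)
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if best is None or cost < best[0]:
            best = (cost, route)
    return best
```

With 4 cities this checks 2 orderings; with 20 it would check 18! ≈ 6.4 × 10^15, which is the point of the question.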

LLMs are also bounded by in-model context: could an LLM create and use a new language with no context for it in its model?


There may still be some variance at temperature 0. The outputted code could still have errors. LLMs are still bounded by the undecidable problems of computability theory, like Rice's theorem.


LLMs and their output are bounded by Rice's theorem. This is not going to ensure correctness; it's just going to validate that the model can produce a result whose correctness is undecidable.


Errr, checking correctness of proofs is decidable.

