Hacker News | Springtime's comments

I get the sense the point of the HN rule is to preserve unique human expression, regardless of where someone's communication skills are at a given point. Like, I periodically see articles on HN with stale turns of phrase and signs of poor LLM use (which becomes distracting while reading), and then the author sometimes mentions in the HN comments that they used an LLM to 'help' with their post based on a list of points they wanted to communicate. Yet when it's relied on that heavily, it smothers the author's own voice.

If an opinion/idea is being communicated in the voice of another then something unique to that user has been lost. Like, if I had the germ of a premise, told someone else about it, found their way of expressing it clearer, and then copied how they'd expressed it, I think I'd at least be crediting them. Otherwise our own growth in self-editing and clarity will just atrophy, and the internet will become a soup of homogenized ways of expressing things.


Hmm, the user joined in 2019 but had no submissions or comments until just 40 minutes ago (at least judging by the lack of a second page?) and all the comments are on AI-related submissions. Giving the benefit of the doubt, it'd have to be a very dedicated lurker or a dormant account they remembered they had.

Edit: oh, just recalled dang restricted Show HNs the other day to only non-new users (possibly with some other thresholds). I wonder if word got out and some are filling accounts with activity.


There has been a shift in the AI accounts: they use Show HN less now. This started before dang's comment; I assume they saw the earlier posts about the increase in quantity / decrease in quality.

I suspect that they are trying to fake engagement prior to making their first "show" post as well.


Fair enough — I've been lurking since 2019 and picked a bad day to start commenting on everything at once. Not a bot, just overeager. I'll pace myself.


Your account posted dense, opinionated, structured paragraphs mere minutes apart—sometimes in the same minute—across multiple story submissions. Even allowing for my own sometimes lengthy replies, it would be infeasible to both instantly form structured opinions and type them out in that time. Two of your posts were made in the same minute, with a combined word count of 146.

It feels like it'd take someone superhuman to come across different stories, form such opinions, and type and submit both of these in that timeframe, short of queuing up comments to post rapid-fire.

Conspicuously, too, as another commenter pointed out, every single comment of yours uses an em dash, which, even for someone like me who occasionally uses them (hey look, they're in this reply), doesn't show up in every single comment. Idk, if I were being seriously accused of botting I'd put more reasoning into my response about it.


Lol. I know at least a few high-karma accounts who post at the same frequency, but they post about anti-AI and anti-tech topics instead on the big tech social media sites where anti-tech opinions dominate. I guess this exempts them from scrutiny? I love these witch hunts.


They post 146 words in a single minute across multiple different submissions with similarly structured posts? I know there are users who post frequently in other communities I'm familiar with, but not in that kind of timeframe with such paragraph density or structure.


Yes. Any of the really heated political threads on this site are full of posters like this. I don't want to dox anyone, but since at least 2019-ish I've found posters that spend hours a day posting huge amounts of content in large bursts on this site.

COVID was ridiculous as I presume a lot of anxious people were stuck at home able to do nothing but post.


Does this account for them reading the article, though? There are pre-existing opinions that would be easy to rapidly post in topical discussions, but here one also has to consider the time it takes to parse technical submissions.

They read this article, called out a specific discrepancy, then commented on an arXiv paper in a 70-odd-word post, then in the same minute posted another 70-odd-word post on a different technical submission. Maybe, like you suggest, they're just wired differently.


Yeah, there are plenty of very online people who are great at just posting things and doing the minimum amount of diligence to contribute their thought. Like I said, just hang around in any of the high energy threads on this site and you'll see a lot of it.


And it'd catch those using LLMs who don't post-process the output to swap out such known watermarks. Not sure if the RFC is meant as a joke, though.


Yes, it's also confirmed on the OP's blog linked in the post.


As a heads-up to others who feel similarly about whether something is worth spending time on: there isn't a problem with speculating that something is AI-produced if there are indicators of insufficient human authorship, but that's a big if. When incorrect, such comments themselves become noise.

In its worst form, which I've now seen many times in other communities, users claim submissions are AI for things that are provably not, merely to dismiss points of view they disagree with by invoking knee-jerk voters who have a disdain for generative AI. I've also seen it from users I suspect feel intimidated by artwork from established traditional artists.

Thankfully on HN it hasn't reached that level, but I have seen some here, for instance, still think that use of em dashes with no surrounding spaces is definitive proof, pointing to a style guide without realizing other established style guides have always said to omit the spaces (e.g. the Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them.

What one hopes for with curated communities is that people exercise discriminating taste at the submission and voting level. In my own case I'm looking for an experience shaped by those who have seen a lot of things, find only particular things compelling, and are eager to share them. Compare that to some submission that reaches the front page, say popular programming-language docs, which just provides another basis for rehashed discussion (and, cynically, grows karma, since the poster knows such generalized submissions do this).


Filtering is a valid way of improving signal. If there were a better, reliable heuristic for identifying users who post low-effort content, the user would be considering that instead.

If someone in a chatroom, for example, is being spammy with their messages at the expense of one noticing posts one finds more relevant, then blocking them isn't about considering them some filthy pleb but about improving one's own experience. If the user being filtered never becomes aware of it, there's no reason to be offended, either.

Edit: also I wasn't the one to downvote you if that makes any difference.


My system has been working pretty well: using some extension or another with mute functionality, when I see a person post an extremely low-quality comment I look at two or three pages of their comment history. If there is no comment of value in that set, I mute the user. The board gets better each day.


Are you doing that here? What extension(s) do you use for it?


HN is already heavily moderated. Low-effort posters and spammers get downranked immediately, based on their behavior. OP is simply intolerant and unable to function in a social setting.

Minimum karma and account age filters are discriminatory, anti-social features that should not exist on any social site. The people asking for such features are intolerant jerks, no different from ageists or ableists. They are parasites, because they want the people who are not intolerant jerks to do their filtering for them, and keep the site alive by doing so.

What would happen if every single user enabled their minimum karma filter?


This thread is evidence that some are unhappy with the state of a core HN feature due to users posting what they judge to be low effort content, so it does get through.

The comments here are about possible mitigations. Based on this feedback dang has apparently now restricted new accounts from posting Show HN threads, so globally now there is a form of filtering users from being seen by others based on a heuristic.

Your initial comment gives the impression that the poster wanting to improve their odds of seeing higher-effort content is passing judgement on the posters themselves, as though they're conceited ('filthy masses', 'your royal highness'), when they're merely considering one approach to reducing noise in their feed.

I myself, in this very comment chain, have already posted that I disagree that filtering by karma would help, due to gaming issues, but I don't see the problem with the user's goal.


>What would happen if every single user enabled their minimum karma filter?

Hacker News would be a much better place.

In fact, filter stories as well as users. I want to filter out any story with fewer than three upvotes and any flagged comments. That would improve quality tremendously.


How would any new user earn karma in that system? How would any story get upvoted?

Again, this system can only work if there are at least _some_ people that are willing to upvote newbies and read new posts.

It sounds like what you want isn't a community with collaborative filtering, like Hacker News, but a newsletter with editors, like Slashdot for example.


People will need to participate otherwise there won't be any new content. I see it as just like vouching, except someone has to vouch for green accounts. A slightly more equitable (and easier to implement) version of lobste.rs' invite tree.

What I want is for green accounts not to be abused as much as they are. The number of noxious, vitriolic troll alt accounts and bots is getting ridiculous. That is almost entirely the fault of established users of course, but there's no way to deal with them poisoning the well without affecting new users.


I think you missed @sltkr's point. HN wouldn't just have less new content; it would fail to develop new users. That kind of stagnation is how sites like this die.

Aggressively filtering to raise the average post quality is a sugar rush, with the metaphorical long-term consequences of type 2 diabetes. Things start out feeling great, but an accelerated death is effectively guaranteed.


Given the choice, I would prefer the quiet dignity of death by stagnation over the toxic hell of cancer and metastasis.


That dev made many videos about its creation and their motivations, though, and along with their personality I think people would be understanding.


Yeah, live streaming it would be a good option, I thought of that too.

Not sure I understand your 2nd argument though?


> Not sure I understand your 2nd argument though?

Sorry, I meant that in the context of that original dev, their earnest fixation/obsession with their creation came across in their personality, which I think made people sympathetic.


It would be fine as a personal filter, but used globally it would incentivize karma gaming. You can get high karma from reposts of past popular submissions (an author who was in prison and reached the front page even half-joked/half-resented once about how many common Wikipedia articles land on the front page for the nth time).


I thought this would be more about stylometry, but it's mostly about users literally posting the same identifiable information across multiple services, including, in one example, their age, dog's name, and profession.

It's all classic dox-profiling technique: even things like spelling differences serving as regional signals, and a commonality in the specific topics being discussed.

It's why one has to think about what is being posted to which community if using different identities, rather than posting the same things across all of them. Though any such effort would be a waste if reliant on some non-public info that later was exposed in a database breach which tied together previously unrelated profiles.
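The cross-service linking described above can be sketched in a few lines. This is a purely illustrative toy, assuming self-disclosed attributes have already been scraped into dictionaries; every field name and value here is made up:

```python
# Toy sketch of cross-service profile linking via shared self-disclosed
# attributes. All data, field names, and profiles are hypothetical.

def link_score(profile_a: dict, profile_b: dict) -> float:
    """Fraction of fields present in both profiles whose values match."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return 0.0
    matches = sum(1 for key in shared if profile_a[key] == profile_b[key])
    return matches / len(shared)

# Two accounts on different services that posted the same details:
forum_user = {"age": 34, "dog": "Rex", "profession": "welder", "spelling": "colour"}
chat_user = {"age": 34, "dog": "Rex", "profession": "welder", "handle": "rexfan"}

print(link_score(forum_user, chat_user))  # 1.0: every overlapping field matches
```

Real profiling tools weight rarer attributes more heavily (a dog's name narrows the candidate pool far more than an age does), but the basic shape is just this intersection.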


I’m curious if an LLM-based defense for this could be made. Like a browser plugin that warns you if you type identifiable information (like occupation) into a text field, and highlights turns of phrase that are “unusual” enough to be identifiable.
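A crude non-LLM version of that plugin idea can be approximated with pattern matching before a comment is submitted. This is a toy sketch; the word list and regexes are invented for illustration and would need to be far broader in practice:

```python
import re

# Toy pre-submit checker that flags potentially identifying details in a
# draft comment. Patterns and the occupation list are illustrative only.
OCCUPATIONS = {"welder", "nurse", "lawyer", "teacher"}
PATTERNS = {
    "age": re.compile(r"\b(I'm|I am)\s+\d{2}\b"),
    "location": re.compile(r"\bI live in\s+[A-Z][a-z]+"),
}

def flag_identifiers(text: str) -> list[str]:
    """Return the categories of identifying info found in the draft."""
    flags = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    words = {word.strip(".,").lower() for word in text.split()}
    if words & OCCUPATIONS:
        flags.append("occupation")
    return flags

print(flag_identifiers("I'm 34, a welder, and I live in Alaska."))
# ['age', 'location', 'occupation']
```

The "unusual turns of phrase" half of the idea is harder: it needs a frequency model of ordinary language to measure against, which is where an LLM (or even a plain n-gram corpus) would come in.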


or something which just inserts random untrue details about you every now and again, like they do in Alaska, where I live.


I'm afraid that if you do this, you won't just stand out among regular users, you'll positively shine for such LLM systems.


Ah yes I remember seeing you at the Alaskan local underwater basket weavers meetup, you know the one for our profession.


Related: this[1] current article/thread about privacy-preserving age verification.

The author here seems to be commenting specifically on the type of anonymity-breaking age assurance widely being rolled out along with the vaguely justified social media bans. Given technology that could prove an age threshold while preserving anonymity, I'd be curious how their thoughts would change.

For example, we never saw people critiquing the naive kind of 'Are you over 18?' prompts on ye olde Reddit or adult sites, precisely because those didn't break anonymity or leak any trackable identifiers.

[1] https://news.ycombinator.com/item?id=47229953
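The shape of an anonymity-preserving age check can be sketched as an attribute-only token: an issuer attests "over 18" without binding any identity to the token. Real schemes use blind signatures or zero-knowledge proofs; the shared-secret MAC below stands in purely to show the flow, and every name here is hypothetical:

```python
import hashlib
import hmac
import secrets

# Toy sketch of an attribute-only age token. A real deployment would use
# blind signatures or ZK proofs so the issuer can't link tokens to users;
# an HMAC with a key shared by issuer and verifier is a stand-in only.
ISSUER_KEY = secrets.token_bytes(32)  # held by the hypothetical issuer

def issue_token() -> tuple[bytes, bytes]:
    """Issuer verifies the user's age out-of-band, then returns (nonce, tag).
    The nonce is random, so the token carries no identity."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, b"over18:" + nonce, hashlib.sha256).digest()
    return nonce, tag

def verify_token(nonce: bytes, tag: bytes) -> bool:
    """The site learns only 'holder is over 18', not who the holder is."""
    expected = hmac.new(ISSUER_KEY, b"over18:" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_token()
print(verify_token(nonce, tag))          # True
print(verify_token(nonce, b"\x00" * 32)) # False: forged tag rejected
```

The point of the contrast in the comment above is exactly this: the site's check succeeds or fails on the attribute alone, with nothing like a government ID or face scan crossing the wire.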


I'm in the same boat as OP.

The question I'd ask myself is: who would _I_ trust to implement privacy-preserving verification?

The only answer I can come up with right now is: myself. I would trust myself.

