
2026/03/09 4:26
"Ask HN: Please restrict posting by new accounts"
RSS: https://news.ycombinator.com/rss
Please do so. And, forgive me if I speak heresy, but there has to be more proof of work (friction) to create accounts. I was shocked at how easy it is for something like ChatGPT Atlas to create new accounts on the fly.

The problem is that we might lose some gold. Not infrequently, I've seen the author or a significant party to a story chime in through a fresh green account, having been alerted one way or another that the story was posted here. And usually when they do, it's very interesting. As such, I would find it detrimental if they had to jump through so many hoops that they don't bother, or the process takes so long that the thread dies before they can participate.

Honest question: what are the alternatives to HN? Because if new account restrictions create enough friction, you lose legitimate users who periodically rotate accounts for privacy reasons. At some point the annoyance tips toward just lurking, and a forum where only old accounts talk is a stagnant forum, given enough time.

Responding from a new account is different from posting from a new account. You aren't vetting people by making accounts have a minimum age to post articles; that will just cause people to make accounts before they need them. Reddit has subreddits where you need a minimum karma to post, and that karma is typically earned through upvotes on your comments, possibly in someone else's moderated subreddit.

I think the right people will stick around. There is a certain kind of individual who has the patience to understand that a system restricting new accounts from posting is a good thing. Recently, there have been a lot of posters who come here from the open web just to try to slant opinion.

The SA Forums model does accomplish the goal of filtering out noise, but then you're stuck with a stagnant community of "the right people." I believe HN's success is in large part due to not presuming to have a good idea of what "the right people" are. This doesn't mean it doesn't have a strong sense of what bad behavior is. It clearly does.

I am only that kind of individual when I'm inclined to post unconstructively, not that I know it at the time. When I'm feeling constructive, friction is likely to make me take my constructive energies elsewhere.

Seems like restricting posts but not comments from a fresh account would thread that needle pretty well?

I'd suggest: new accounts are read-only for at least a week. Then they can comment (rate limited at first, gradually relaxed) and vote, and after some additional amount of time and/or karma they can submit a post. Maybe some of these mechanisms are already in place? Bots can probably game this too, but drive-by bots maybe won't be patient enough.

Immediate comment privileges are really important. Lots of examples, but to give a silly one: someone pastes their clipboard without realizing it includes their API key or their email. Good Samaritans should be able to say, "Hey, I just caught something." And, as another commenter mentions, if someone shares your work, you should be able to comment on that thread without delay. This is the only reason I got myself an HN account: someone posted a link to a blog post of mine, and I happened to see the increased traffic on my VPS. (And I stuck around after; a few posts are interesting enough. All the AI stuff isn't, and there is too much of that, unfortunately.)
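For concreteness, here is a minimal sketch in Python of the graduated-privilege proposal above. Every threshold below (the one-week read-only window, the karma and age gates, the per-day comment limits) is an invented illustration, not anything HN actually implements:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Account:
        created: datetime
        karma: int

    READ_ONLY_PERIOD = timedelta(days=7)    # hypothetical
    SUBMIT_MIN_AGE = timedelta(days=30)     # hypothetical
    SUBMIT_MIN_KARMA = 25                   # hypothetical

    def can_comment(acct: Account, now: datetime) -> bool:
        # New accounts are read-only for the first week.
        return now - acct.created >= READ_ONLY_PERIOD

    def comment_rate_limit(acct: Account, now: datetime) -> int:
        # Allowed comments per day: tight at first, relaxed with age.
        age_days = (now - acct.created).days
        if age_days < 14:
            return 5
        if age_days < 60:
            return 20
        return 100

    def can_submit(acct: Account, now: datetime) -> bool:
        # Submitting stories requires both account age and some karma.
        return (now - acct.created >= SUBMIT_MIN_AGE
                and acct.karma >= SUBMIT_MIN_KARMA)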
It seems easy enough to circumvent: "We're launching our product in 2 weeks, so let the AI create and 'warm up' 20 new HN users so they're ready to shill." It's really not a problem that can be solved easily :(

If someone is going to put that much effort into it, let them. I think the ideas here are to try to pick some low-hanging fruit and see if that works well enough. You'll never block all AI-generated accounts, but you may not have to in order to get the desired effect. And if someone wants to plant 20 new accounts and grow them out with karma votes so that they can game the voting, there are probably other ways to detect that.

Requiring accounts to be a certain age does not help and will only affect legitimate users. The slopsters will simply create accounts, wait a bit, and start posting then. Actually, cross the "will" out: they are already doing this to avoid the green smell. This account replied to me today, 4 months old but only started posting today: https://news.ycombinator.com/user?id=BelVisgarra

Oh damn, that's the one who posted the Ask HN about the verified job portal on the front page today. Either this is some shilling still in build-up, or it's an actual human being with severe LLM-slop-impersonation derangement syndrome.

If that were to happen, I'd also suggest that comments from fresh accounts should have URLs deleted or disabled. Even something like… Example[.]com

But don't worry, HN has been thoughtful about links from new accounts for months and months (can't speak for longer, but maybe/probably). The effort could well be duplicative unless I'm unaware of some more granular detail.

I'm surprised posts aren't restricted a bit more. Maybe that's just my old-school "lurk moar" mentality, but I feel like I really need to understand the vibes of a community before I start to contribute posts to it.

Yeah, exactly. Thirteen years ago, I was a lurker. No account, because why would I make an account just to read? But when I wanted to say something badly enough, I made an account. (I think the first thing I did was post an Ask HN about functional programming, so "no posting for X time" might have turned me away.)

Totally. I don't think the solution is changing the dynamic but flagging; this site self-moderates quite well, quite apart from dang and tomhow's great work.

I rotate accounts on "social media" (mostly Reddit and Hacker News; the others don't interest me) every few weeks or months to make sure not too much of my post history accumulates in one account. I would dislike it very much if there were high friction to create new accounts. On the other hand, my behavior is probably a major outlier.

Same, though I'm also surprised by how easily I can make new accounts for this site. But I love that. I hope it doesn't require me to jump through a bunch of hoops in the future.

I think the problem is that you can be tracked by your email when you sign up for a new account, so I am not sure how this can be helpful. That matters when you're hiding from the website. It doesn't matter if you're just trying to hide such things from the public.
not too much of my post history accumulates in one account

I'm curious to hear what benefits you think can be gained from avoiding this.

I do the same. It simply means there's less accidental leakage / self-doxing that could be pieced together if you (or an LLM) read every comment on the account. Suggestion: pick a long-term account, dump the comments, and see what an LLM could figure out about the target.

I do it sometimes just to restrict my own pride in the account. I get a buzz from upvotes, and that upsets me on a deeper level.

Same, but also for the opposite reason: a new account gives me a chance to do better. If I post lame comments, I accept the lameness of the posts attached to a particular user name, and the hesitation I feel about posting more lame comments decreases. With a fresh identity, I am more likely to avoid lame posting, sort of like how you avoid going out in the mud in brand-new sneakers. A sort of repentance; being born again in the digital realm.

I think yours might be extreme. But I think the anonymity here is widely appreciated, and frankly it necessarily relies on easy creation of accounts. People share things that they often wouldn't. And somehow the culture remains mostly civil. It's a pretty fantastic forum, IMHO. Changing the rules would surely change the vibe, so to speak.

I appreciate the anonymity. Posting as a throwaway is often useful to distance the poster from $work or $ex or other situations, yet still contribute to a conversation. But will it continue under all the login-ID surveillance laws coming up?

Never got banned for it, though my "rotations" tend to be "a few weeks every year". Even if they did ban me: the account was going to be deleted in a short while regardless, so that fear isn't present for what's essentially a longer-lasting throwaway.

My intuition is that increasing the difficulty of account creation favors motivated actors and disincentivizes organic participation, because:
1. Ideologically and/or economically motivated actors will just see it as a cost of doing business.
2. Ordinary sign-up friction is more likely to make HN appear ordinary to anyone who stumbles upon it.
3. Sign-up friction is a moat. The strength of HN is moderation of what gets in.

I was going to suggest emotional leetcode, but LLMs do well on this. When given a conversation about Alice and Suzy having a one-upmanship exchange (my husband is rich, my kid is a genius) and asked what emotions they are feeling, and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g., they're feeling insecure, competitive, envious).

That type of question could also turn people off. We already have too many discussions where people are quick to jump to conclusions and attribute intent, rather than asking basic questions.

I echo this sentiment for all social media platforms today... At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam and AI slop on Instagram, X and Facebook for years.

Seems to be a general problem, right? The standard solution is using an email to register an account, maybe a Cloudflare captcha, and then using good network logging to group accounts by IP and chain-banning abusive accounts when they are caught by other mechanisms.
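The "dump the comments" suggestion above is easy to make concrete with the public HN Algolia search API (a real, documented endpoint); what the LLM would then infer from the dump is out of scope here, and the username is a placeholder:

    import json
    import urllib.request

    def dump_comments(username: str, pages: int = 3) -> list[str]:
        """Fetch a user's recent HN comments via the Algolia search API."""
        texts = []
        for page in range(pages):
            url = ("https://hn.algolia.com/api/v1/search_by_date"
                   f"?tags=comment,author_{username}&hitsPerPage=100&page={page}")
            with urllib.request.urlopen(url) as resp:
                hits = json.load(resp)["hits"]
            if not hits:
                break
            # comment_text contains HTML; good enough for pasting into an LLM.
            texts.extend(h.get("comment_text") or "" for h in hits)
        return texts

    print("\n---\n".join(dump_comments("some_user")))  # hypothetical account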
I was thinking of setting up a system to highlight sock-puppeteers and other consistent rule-violating accounts, as a 'fun project' that might improve the HN experience. But it strikes me that the HN staff probably already does something like this, that they may not welcome a side-loaded project of this sort, and that it would require some automated crawling of HN (which again may be unwelcome). Finally, I don't actually have experience in this area. Is this something that would be welcomed, or unwanted? My initial thought is to set up a devoted account like "sock_puppet_detector" and, using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'.

I'm wary about new accounts such as yours wanting to censor and shape discourse by antagonizing people who hold diverse views that differ from your own. The HN culture has shifted drastically over the past 5 years.

To be clear, I wouldn't filter people just because they have different views than me (the goal is to automate the detection, to avoid the effort of reading all the comments; I should mostly not be in the loop). But I have come across accounts that openly admit to being sock-puppets (e.g. https://news.ycombinator.com/item?id=47242156). These sorts of accounts I would highlight. Likewise for guideline-abusers. I don't really know what heuristic you would use to detect rules abuse, but I imagine there are at least some clear violations that could be detected. Finally, I think I'd make one account for sock-puppets, another for guideline-abusers, etc., so people can 'subscribe' to whatever degree of 'highlighting' they want.

Agree, HN can't be immune to what happens in the programming world. It would be great, though, if we had a way to mute or hide accounts. That way each HN user would be able to clean their own feed of articles.

That works for me so long as it's not the main solution. I personally don't want to curate; I'd rather just partake in a sanely moderated forum. That's my understanding of what HN has been; it's just facing a new challenge with AI spam.

That's sad; there have been some really neat things shared that way. But you gotta do what ya gotta do.

Just new ones for now. I don't want to make HN harder for legit new users, but I do think a bit of community participation is reasonable before posting a Show HN, so it isn't just a box on some "how to promote your project" checklist.

It's really hurting the brand. I can't remember the last time I bothered to even check that index. I used to check it all the time.

/newest is pretty grim, too. Go there and click any link, and odds are you won't even need to read the contents to know it's AI generated, because you'll immediately be met by one of:
- A landing page that looks exactly like every single AI-generated landing page ever; I don't even need to describe it, you already know what it looks like
- An article or blog post headed by an image with the Gemini logo in the corner
- A GitHub repository with CLAUDE.md or AGENTS.md and/or 50 large commits made in the span of a day
I'd estimate that more than half of new submissions now fall into one of the above categories. There's almost no shot at getting hand-authored posts some views (I tried with one of mine recently). I felt like I submitted it, and a moment later there were like 20 new, very obviously AI-generated posts ahead of it.

It does seem, anecdotally, that Show HN is being used less since the recent analytics posts that made it to the front page.
Reddit has tried this approach and, IMO, it has failed. A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever. The spammers, on the other hand, know how the rules work and so will just build their bots to work around them (waiting 30 days, farming karma). The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors; who else would jump through hoops just to participate on a web forum?

It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling for automating their communities in a more nuanced way. Hacker News has three advantages. First, it is moderated by the same people who build the tooling, so the incentives are aligned. Second, it is an enormous source of soft power for a venture capital firm with the resources, incentives, and likely the competence and capacity to keep it running smoothly. Third, the scale is smaller and is not tied to hardline revenue constraints like CPM, user LTV and DAU-maximization, which restrict what Reddit can do.

It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.

Not to mention that Reddit mass-removed experienced moderators when the moderators protested Reddit removing their access to good third-party tooling. That's the day the site started its death spiral.

I quit moderating because it was destroying my mental health. Getting called a fascist and rehashing how "no, your libertarian politics are fine, but can you please just start your own sub" in a long, drawn-out, hateful back-and-forth gets exhausting after the 200th person who comes to the bicycling subreddit and feels they should be allowed to endorse harming cyclists with their vehicles. Everyone got mad at spez for having the audacity to fuck with these kids, and there is a point there, but after living with it, I could see myself doing the same damn thing.

There needs to be a distinction between creating a post and replying. IMO, new accounts should be restricted from creating new posts, or at least certain kinds of new posts. Replying shouldn't be restricted; that is how users interact with each other and learn the etiquette of HN.

100%. Not sure what the solution is, but I have lost interest in Show HNs these days. Part of it is that when someone posted before, it usually meant they had spent a fair amount of time thinking and had found it worthwhile to spend energy on the project. This was a nice first filter for bad ideas, and it no longer exists. Even for posts that are interesting to me, I get the feeling that they're not worth looking at because they were probably made using LLMs. Nothing against them, but I personally thought of Show HNs as doing something for the love of it, the end result being a bonus.

I'm not sure that using LLMs means a project wasn't made with love.
It just makes programming accessible to more people; essentially it's still just a tool. It does take the handcraft out of it; in that sense an LLM-made tool is more akin to IKEA furniture compared to a handcrafted work of art (though I struggle to call even hand-made Electron crap a work of art, lol). But yeah, I know what you mean; they are usually half-finished solutions.

Why do you keep posting here? Asking seriously. You open a new account, immediately get it banned, then move on to the next. Doesn't that get boring? For your first-ever comment, you are breaking multiple rules. Please review the Guidelines and FAQ.

I furthermore wish that posting an LLM-generated comment (i.e., passing it off as your own) were worthy of an instant ban, because I see this sort of behavior from non-green accounts as well. EDIT: I meant (but totally forgot) to qualify that my "proposal" would only apply when the LLM-ness is self-evident; idk, make up a "reasonable person" standard or something. Presumably, the moderators would err on the side of letting things slide. Even so, many comments I've seen are simply impossible for any reasonable person to claim as "human-written"; the default ChatGPT style is simply too distinct.

I disagree with this policy. Some people can really benefit from using LLMs to help them write, e.g. non-native speakers. LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has obviously taken more care than the majority of people on HN take before commenting. And if you don't like the way something is written? Just downvote it. That's true whether or not it's partially or wholly written by an LLM.

I think your comment was generated by an LLM and hereby vote for your immediate and permanent instant ban.

I think that your comment was generated by Eliza, and hereby vote for you to get a karma boost for being Legit Old School, then an immediate and permanent instant ban. I'm joking, of course. If your comment had been generated by Eliza, it would have started with "How do you feel about 'I think your comment...'" :)

I've seen people admit it. I've even seen a commenter say that they were an agent. We can at least handle those cases.

Then nobody would admit it, so the problem persists. Except maybe for fully automated accounts. Those should of course be banned anyway.

Many HNers strongly argue that it's absolutely impossible to distinguish between AI text and non-AI text. Some of it seems to be a knee-jerk reaction to the occasional one-sided stories of people who were accused of using LLMs and fired from their jobs. And some of it seems to be just hedging, so that we don't develop a culture that could penalize their LLM-generated posts or code. My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.

Hmm, some LLM text is hard to detect, sure. Some is also horribly easy.
If the text is full of:
- Overly positive commentary and encouragement
- Constant use of bullet-point lists, bolding and emoji
- That quaint forced "funniness", like a misplaced attempt at being lighthearted
- A lot of blah-blah that just misses the point
- Writing that is not concise and to the point, but also not super long
then that really screams ChatGPT to me. I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style, it's a lot harder to detect, but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect, IMO.

Well, just the bullet points, but in this case I thought they were warranted. ChatGPT uses them anywhere and everywhere.

The moderators are supposed to just know it when they see it? It's that black and white to you? Or are lots of false positives a price we have to pay?

Yeah, it's weird; there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. The author claimed he wrote it manually. (Which is honestly even more concerning!) Maybe there could be a dedicated 'flag botspam' button? Then again, it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?

Maybe there could be a dedicated 'flag botspam' button?

We already have flagging and downvoting? Abusing the flag button by reporting LLM-generated posts and comments (which are not breaking any current guidelines) seems like a good way to get your flags ignored.

Flagging isn't only for breaking the guidelines. From the FAQ: "What does [flagged] mean? Users flagged the post as breaking the guidelines or otherwise not belonging on HN." In other words, submissions get flagged that users believe don't belong on HN. LLM-written submissions can be one such case.

I would be worried that the reason for the flag wasn't immediately obvious. Maybe a drop-down for the rule being violated would help.

Yeah, it's weird; there was one case where I thought it was AI but wasn't sure. Several other comments pointed it out, too. The author claimed he wrote it manually. (Which is honestly even more concerning!)

I find the above comment concerning, so I ask: to what degree is the above commenter calibrated to ground truth[1]? How would they know? How would we know? It seems to me that comments like the above are overconfident in the worst ways. It's only going to get harder as people continue to model their writing on LLM style.
[1]: https://en.wikipedia.org/wiki/Calibrated_probability_assessm...

You know it's bad when reading "you're absolutely right..." causes you to oscillate between wanting to laugh and wanting to violently destroy the computer.

Something we need to remember: AI was trained on every public internet comment, the vast majority of which are legit terrible.

The biggest tell that someone is using AI is multiple paragraphs saying the same point over and over again. Even trolls are more succinct.

In some fraction of cases, it's really obvious. I would argue that those cases are really the ones that cause an LLM-specific harm, i.e., that make people feel like they aren't exclusively among fellow humans. If someone posts something that doesn't clearly read LLM-ish but is otherwise terrible, it's not really different from the same terrible thing written by hand. I don't think anyone who objects to LLM comments is really demanding a super-low false-negative rate.
Just get rid of the zero-effort stuff. For example, recently I've seen a lot of comments from new accounts that are just sycophantic towards TFA and try to highlight or summarize a specific idea or two, but don't really demonstrate any original thought (just, like, basic reading comprehension and an ability to express agreement). And they'll take a paragraph to do so, where a human with the same level of interest in the material might just say "good post" (granted, there's an argument to be made for excluding that, too).

Sorry, I updated my original comment; I meant to qualify it to only those cases where it's blatantly obvious. Obviously a lot of ambiguous comments will slip through as a result, but I agree with you that false negatives are better than false positives.

Your comments use em dashes. Many would claim those are vastly overrepresented in AI language, and thus an account overusing them is blatantly AI. I don't think your account is AI just from these few comments, but I would like to point out that most rubrics one might use to determine what is obviously AI might end up including the way you talk. If there were a truly accurate tell, some algorithm you could feed a few sentences into and have it tell you "yep, this is 100% AI", then yeah, sure, use that. I don't know how you could realistically build that machine, especially when it comes to the generation of text.

For what it's worth, there are modern LLM detectors with extremely low false-positive rates. The tech has advanced quite a bit since the ZeroGPT days. Personally I've gotten very good results from Pangram Labs. You still can't directly ban people, though, because false positives are always possible.

Is that false-positive rate from your own testing, or the author's claims? What is the source of ground truth?

Your comments use em dashes. Many would claim those are vastly overrepresented in AI language and thus an account overly using them are blatantly AI.

I've always found this funny. Doesn't macOS's default text substitution enable (annoying-to-me) things like em dashes, smart quotes, etc.? People accuse everything of being LLM-generated these days. That'd be a tough rule to enforce.

I am more annoyed by the anti-AI luddites filling the comments with low-value complaints than I am by quality content written partially by an LLM. Those low-value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself... it's boring.

In every single article's comments now, there's always someone coming out of the woodwork to post "This article is written by an LLM." These comments are about as useless as "The website's color scheme is annoying" and "The website breaks the [back button | scrollbar]" (which, by the way, are not allowed per the HN guidelines[1]). If anything should be banned, it's low-effort "This is AI" commentary. It adds absolutely zero to the conversation.
1: https://news.ycombinator.com/newsguidelines.html "Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
I'd argue that whether or not the article (or reply) was written by AI is a tangential annoyance at this point.

I have commented once or twice on articles being AI-generated. I don't do it when I think the writer used AI to clean up some text; I do it when there are paragraphs of meaningless or incorrect content. Formats, name collisions or back-button breakage are tangential to the content of the article. Being AI-generated isn't. And it does add to the overall HN conversation, by making it easier to focus on meaningful content and not AI-generated text. Basically, if the writer didn't do a good job checking and understanding the content, we shouldn't bother to either.

It's much more than a "tangential annoyance", and it adds a lot to the conversation: among other things, it establishes a norm that AI-generated blogspam is, well, spam and unwelcome. Blogging, sharing blog posts, reading them, commenting on them: these are all acts of human communication. Farming any of these steps out to an LLM completely breaks down the social contract involved in participating in an online forum like this. What's the point? It's the exact same effect that's playing out in many other areas where LLMs are encroaching: bypassing the "human effort" step has negative side effects that people who are only looking at the output are ignoring. I actually find your opinion so infuriating that it's taking all my composure not to reply with something nastier. If you guys want to spend your time reading shitty LLM spam posts with shitty LLM comments, why don't you find another site to do it on instead of destroying this one?

As a heads-up to others who feel similarly about whether something is worth spending time on: there isn't a problem with speculating that something was produced by AI if there are indicators of insufficient human authorship, but that's a big if. If incorrect, such comments themselves become noise. In its worst form, which I've now seen many times in other communities, users claim submissions are AI for things that are provably not, merely to dismiss points of view the poster disagrees with, by invoking calls to action from knee-jerk voters who have a disdain for generative AI. I've also seen it expressed by users who, I expect, feel intimidated by artwork from established traditional artists. Thankfully on HN it hasn't reached that level, but I have seen some here, for instance, still treat the use of em dashes with no surrounding spaces as definitive proof, pointing to a style guide, without realizing other established style guides have always said to omit the spaces (e.g. the Chicago Manual of Style). This just leads to falsely confident assessments and more unnecessary comment chains responding to them. What one hopes for with curated communities is that people have discriminating taste at the submission and voting level. In my own case, I'm looking for the experience of people who have seen a lot of things, only find particular things compelling, and are eager to share them; compared to, say, some submission of popular programming-language docs that reaches the front page and just provides another basis for rehashed discussion (and, cynically, the poster knows such generalized submissions do this and grow karma).
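As an aside, the "default ChatGPT styling" tells listed upthread (bullets, bolding, emoji, stock phrasing) are mechanical enough to sketch as a crude screening heuristic. Everything here, from the phrase list to the weights, is invented for illustration, and false positives are exactly the failure mode the surrounding comments worry about, so at most this selects comments for a human look:

    import re

    EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")
    STOCK_PHRASES = ("it's not just", "let's dive in", "great question")  # invented list

    def slop_score(text: str) -> int:
        lines = text.splitlines()
        score = 0
        # Bullet-heavy structure, markdown bolding, emoji, stock phrasing.
        score += 2 * sum(1 for l in lines if l.lstrip().startswith(("-", "*", "•")))
        score += 2 * text.count("**")
        score += 3 * len(EMOJI.findall(text))
        lowered = text.lower()
        score += 4 * sum(lowered.count(p) for p in STOCK_PHRASES)
        return score

    # e.g. treat slop_score(comment) >= 8 as "worth a second look", never an auto-ban.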
it establishes a norm that AI-generated blogspam is, well, spam and unwelcome.

It is welcome, though. Being on the front page regularly is evidence that people enjoy it or find it informative. You may feel that others shouldn't be ALLOWED to enjoy it, but that's just your opinion and is almost always tangential to the actual topic. Worse, you seem to believe that it needs to be labeled to help you identify it. Why? If it's good enough that you need help to spot it, then it's obviously of sufficiently high quality.

Hey, I'm not a fan of LLM slop articles and blogspam either, and if I could hold back the tide, I'd try to. But I'm just saying that pointing it out each and every time is going to become its own form of spam. We're quickly entering a world where 99+% of what is written online, be it blogs, amateur news, or actual professional journalism, is LLM-generated. You hate it, I hate it, but it's coming. The state of journalism is already in shambles, and the line must go up, so "everything written by AI" is sadly inevitable. Posting every time to remind people of that? By the end of 2026 you might as well have a bot commenting on every article that it's probably LLM-generated. I argue it adds no signal to the conversation.

I still think it has strong normative value. Maybe at some point, when norms have become firmly established, these comments will be pointless and spammy, but I don't think we're anywhere close to that point yet. A lot of blogging is essentially self-expression, and that stuff won't be taken over by LLMs (it would defeat the whole point). Other blogging is done with some kind of sales, promotional or brand purpose, and the extent to which LLMs will dominate it will depend on how we as a society react (see the AI art battles), since if people react negatively, it becomes counterproductive.

Perhaps it would be better to have comments that praise apparently human-written text? I understand where you're coming from. I've been posting complaints about LLM-written articles almost as long as I've been here. (My analysis is definitely more complex than a search for blacklisted Unicode characters or words.) But I've let off on that, partly because I agree the guideline is meant to encompass that kind of criticism (same with my comments about initial page content not rendering without JavaScript, honestly), but largely because it just seems futile. It's better material for a blog post than HN comments (and would be less repetitive).

I very much agree. The number of comments I see complaining about "it's not this, it's that" and other "LLMisms" definitely frustrates me more than the original content.

I was thinking the same thing, but didn't want to post my complaint about other commenters because I think that's against the rules too?

I think a steelman interpretation of the parent is that entirely LLM-generated projects should be disallowed. There are a lot of submissions on Show HN that seem completely vibe-coded to me (like, including the README), which is a very different situation, IMO, from someone who simply used Claude to write some—or even most—of the code. When even the human-facing portion of a submission is LLM-generated, it bothers a lot of people (myself included).

Agreed. Having some level of human input makes a submission at least meaningful. If the entire repo and all the text are generated by an LLM, does it really matter that a human is the one posting the link? It's functionally indistinguishable from automated spam.
Without engaging in more ad hominem (which is wrong, by the way), what's the issue with labeling AI content as what it is?

It's one thing to have an AI label. It's another to completely derail a conversation with a likely false AI accusation. Example: https://news.ycombinator.com/item?id=47122272 You have to scroll a few pages before the actual article is discussed. "This was LLM generated" is likely to float to the top of an article's comments. That's where the best comments about the article deserve to go, not an off-topic comment. An AI label should be much less obtrusive.

You have to scroll a few pages before the actual article is discussed.

Or you could collapse the one thread containing those comments.

what's the issue with labeling AI content with what it is

1. Your guess is not always correct.
2. Over time, AI content will get harder to guess, until it is indistinguishable from human content.
3. You're not helping anyone by posting "this is AI". Maybe it is, maybe it isn't, but it's not helpful. It just adds to the noise.

I'm not suggesting anyone post "this is AI"; the submitter should vouch that it's AI or eventually get banned for spamming. Ideally there could be a label on the submission that states it's AI.

Ideally there could be a label on the submission that states it's AI

A lot of people tried for #politics and that didn't work. I doubt you'll get #ai.

The guidelines haven't even been updated to say that AI-generated posts and submissions aren't permitted, even though that has been the policy for a couple of years now, if one searches for postings by the moderators. So outsiders and new HN users have no reason to know it's not allowed. I'm sure there are reasons for it, but the inaction is all very mysterious from an outsider perspective.

Other than this probably being challenging to enforce fairly, I think I agree: if you had strong proof of an account largely or completely posting comments/stories/whatever adulterated by an LLM, that is probably ban-worthy, like you said.

I think all submissions to HN should be submitted via snail mail, and must be handwritten. That would solve the problem. /heavy sarcasm That being said, my mother used to insist on hand-written cover letters from job applicants. Her rationale: it takes effort, so it weeds out the applications from people who are just randomly spraying out applications for jobs they are not qualified for.

I think you need (at least) one exception to that rule. We have many people here whose first language is not English, and this is an English-only forum. For at least some of those people, an AI translation may give better clarity than their own attempt at writing in English. So I would propose, in the ideal world where we could perfectly enforce the rules we chose, that the rule be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.

For now there is already a pretty effective mechanism in place: downvote and/or flag those comments that you think are across the line in that sense. But in principle I agree with you; the rule for me is "if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read".

This might be well intended to restrict bot posting, but it also silences dissent. HN is one of the few places left on the internet where dissenting voices can post.
A dissenting voice already has to work against the hivemind; adding more restrictions will increase the echo-chamber effect.

I'm very wary of this request, though I understand it. I've been reading HN daily since around 2014. My involvement was purely passive (i.e., I have been a lurker) because I really didn't think I had much to contribute that wasn't already stated better by others. I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week. While I think a minimum post count or reputation metric could perhaps reduce AI-generated posts, introducing friction also makes it harder for real people to contribute anything meaningful. Furthermore, what does it matter if it's "AI generated"? Is some AI content OK? What's the pass/fail threshold on human- vs. AI-generated text? I made a Show HN post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?

I don't care if the code is generated; I care if the content is. I don't want to read another "No complexity. No fuss. No buzzwords." "It's not just a tool, it's a lifestyle." It's sooooo boring...

If you're going to spend 3 hours making a post, why not just write it yourself in the first place and avoid the issue and the reputational damage?

Just write the text yourself; not many people enjoy reading AI-generated posts, even edited ones.

There is an epistemic silver lining. This is in fact a Red Queen's race that cannot be won. So in the end the only solution is to evaluate the text on its own merits, without reference to the writer's status, because that status can no longer be reliably detected. For a public feed like this one, the only alternative is to ignore it. The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.

One way I could imagine a human-only HN evolving in the coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter's posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast; maybe there are some controls at the top level that let you see more content "lower down the tree" if you're OK with a lower SNR. Latency to get a post widely distributed grows, but I don't see that as a massive problem.

In my recent experience, local meetups and groups are unexpectedly more prone to self-promotion and low-effort spamming. Local groups have a problem where members admit their friends, or pressure others into inviting friends, who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of social bond that makes it harder to downvote or flag their posts. Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on your victims.

"Cannot be won", "only solution", "only alternative": sorry, no, that's too black and white. There are other solutions, even if they will only work for a couple of days/months/years. We can relentlessly bully anyone using phrases like "Red Queen's race" unironically.
Measly human resistance against the vapid strip-miners of semantic value.

You mean that you don't believe we are in co-evolution with AI? Because otherwise it is a Red Queen's race, and it is a useful frame for understanding. For example, we can make it a race between symbiotes. If you are Sisyphus, the fact that the hill is infinite is useful when planning your day.

The thing is, I can read something that's really terribly written and still extract useful information from it. (Suppose, for example, an LLM was directed to synthesize information from sources I wouldn't have thought of; or a submission simply makes me aware of a blind spot I had. Or I look up documentation and find something that's incredibly verbose and full of marketing-speak, but the code samples look reasonable and can be verified by testing and/or cross-reference.)

Agreed. Merit is the only fair solution. If OP noticed a garbage post, that means they evaluated a post on merit and decided it was garbage. So it works. We have genAI generating videos, and the quality sucks compared to human-produced and human-filmed content. People call it out, and nobody is going to watch a genAI movie at the theater or binge a genAI TV show. Merit-based filtering. GenAI for music is not as good as human-generated music either. Not a single AI song from Suno or Udio has reached the Top 40. Not even one. 100% of the songs are human, because they are evaluated on merit. We have SWE and agentic benchmarks to evaluate coding LLMs on merit. Disclaimer: I am a new account.

This comment uses a lot of big words, but it's full of fallacies. The HN user base is not perfect at detecting LLM content, but a lot of it does get flagged and downvoted eventually. About once a day I'll click on a link, realize it's AI slop, and go back to HN to flag it, only to discover that it's already flagged. If you turn on showdead, you can see all of the comments from LLM bots that have been discovered and shadowbanned. The fallacy in the comment above is simple: it takes the current situation, extrapolates to an extreme future, then applies the extrapolated prediction back onto the current situation. The current situation does not match the extreme future predicted. A lot of LLM content is easily spotted, and a lot of it is a waste of time to read; therefore it's right to police and ban it, even if imperfectly.

Earlier today I found something that struck me as awful slop, but I was hesitant to flag the submission because, as far as I could tell, it got the facts right (I didn't try to verify some details of who was involved with what, but I was familiar with the proposals the article was discussing).

So in the end the only solution is to evaluate the text on its own merits

This falls apart as soon as you realize that evaluating the text requires far more effort than generating it. If you're spending 2 minutes reading text that took 2 seconds to generate, you've already lost.

That just means you can only evaluate a smaller fraction of the data. If your goal is to do more than sample it, you've already lost.

I'm somewhat keen to adopt ATProto's feed generators and/or labeller concepts to create an alternative /new and comment prioritizer.

The fire hose of data will inevitably become ever more fecal. We can only walk away from it or be more careful about the pearls we pluck out. It ends well only if we get better at pearl detection.

I'm not sure we can.
Imagine an AI that 1) creates multiple accounts, 2) spews huge numbers of comments, 3) has the accounts cross-upvote, and then 4) gets enough karma on multiple accounts to earn downvote privileges. That AI now controls the conversation. Anything it doesn't like, it can downvote to death. I mean, I'm sure that HN has a "voting ring" detector, but an AI could do this on a scale too large to register as one cohesive group. And I think HN has a "downvote brigading" detector, but if the AI had enough different accounts, I'm not sure that would trigger, either. The best chance to detect it is on volume (or perhaps on too many accounts coming from the same IP address or block). But if the AI were patient, I'm not sure even that would work. That's depressing. I don't want HN to become a bot playground, with humans crowded out. But I'm not sure we can stop it, if it were done on a large enough scale.

I was thinking of setting up a system to highlight sock-puppeteers and other consistent rule-violating accounts, as a 'fun project' that might improve the HN experience. I've asked dang in another thread if he has any objections, but am curious to hear other input as well: is this something people would want? Obviously it would not change the comments that are actually on HN; it would just call out 'bad' contributors more explicitly. I don't actually have experience in this area, so no promises that I'll be able to build it quickly, or take the best approach in the initial implementation. My initial thought is to set up a devoted account like "sock_puppet_detector", use the infrastructure from https://hackersmacker.org/, and add any likely sock-puppets as 'foes'. Then anyone can install hackersmacker and add "sock_puppet_detector" as a friend to see sock-puppets highlighted. Likewise for rules violators.

I don't understand how this is supposed to solve anything, and I've seen it suggested as a solution multiple times. If you restrict comments to older accounts, all it's going to do is make the bot creators speculatively open and proactively age accounts for future use.

I would argue that we shouldn't let the perfect be the enemy of the good. Adding a cost to commenting that requires aging accounts might discourage fly-by-night operations and "experiments".

This already happens now. Go look through a few of the "Show HN" authors: you'll inevitably see several accounts that are 50-100 days old with a karma of 1, to avoid a green label. The OP is talking about posts, not comments. The simplest solution might be to prevent someone from posting a "Show HN" until they've earned twenty-five or fifty karma, to demonstrate that they've been actively participating on Hacker News rather than using it solely to promote themselves.

I have seen accounts that were dormant for years suddenly start posting frequently, all with slop. (I don't know if this represents people having an epiphany about AI use, or accounts being compromised, or just what.)

This leads inevitably to karma-farming bots who upvote each other's submissions, à la Reddit. It's a speed bump at best.

I wish for karma-based filters too, if we ever get filters. I want to see posts only from accounts with {x}+ karma points.

That would be fine as a personal filter, but used globally it would incentivize karma gaming. You can get high karma from reposts of past popular submissions (an author who was in prison and reached the front page even half-joked/resented once about how many common Wikipedia articles land on the front page for the nth time).
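The "voting ring detector" speculated about above can also be sketched. Nothing is publicly known about HN's real mechanisms; this toy version just counts how often pairs of accounts upvote the same items, over hypothetical data, with an invented threshold:

    from collections import defaultdict
    from itertools import combinations

    def suspicious_pairs(votes: dict[str, set[str]], min_overlap: int = 20):
        """votes maps item_id -> set of upvoter account names (hypothetical data).

        Pairs of accounts that co-vote far more often than independent
        browsing would produce are voting-ring candidates."""
        pair_counts: dict[tuple[str, str], int] = defaultdict(int)
        for voters in votes.values():
            for a, b in combinations(sorted(voters), 2):
                pair_counts[(a, b)] += 1
        return {pair: n for pair, n in pair_counts.items() if n >= min_overlap}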
Have you taken a look at Reddit recently? It's absolutely infested with bots farming karma, either by reposting old popular posts or simply posting AI-generated comments. Actively encouraging this will only make things worse.

You want other people to deal with the things you don't like and filter stuff for you, to improve your own experience and shield you from the filthy masses. God forbid you have to endure a comment you don't like, your royal highness. I'd rather see you gone than the people you complain about.

The core function of the HN front page is based on "other people filtering stuff for others". Filtering by any criterion (karma, account color, first letter of the nickname, whatever) doesn't automatically mean that someone is a jerk, as you have stated in the comments nearby. It just means that someone is selecting the information they consume, and it harms no one (except perhaps the selective person, who might miss interesting info due to the selection).

The filtering is supposed to be based on the quality of the content, and it's only useful to the extent that people filter either on quality directly or on closely correlated metrics. If everyone votes purely on the basis of the first letter of the username, to use your example, then the votes provide no useful information and you may as well abolish voting.

Filtering is a valid way of improving signal. If there were a better reliable heuristic for identifying users who post low-effort content, the user would be considering that instead. If someone in a chatroom, for example, is being spammy with their messages at the expense of your noticing posts you find more relevant, then blocking them isn't about considering them some filthy pleb; it's about improving your experience. If the user being filtered never becomes aware of it, there's no reason to be offended, either. Edit: also, I wasn't the one who downvoted you, if that makes any difference.

HN is already heavily moderated. Low-effort posters and spammers get downranked immediately, based on their behavior. OP is simply intolerant and unable to function in a social setting. Minimum-karma and account-age filters are discriminatory, anti-social features that should not exist on any social site. The people asking for such features are intolerant jerks, no different from ageists or ableists. They are parasites, because they want the people who are not intolerant jerks to do their filtering for them, and to keep the site alive by doing so. What would happen if every single user enabled their minimum-karma filter?

This thread is evidence that some are unhappy with the state of a core HN feature because of users posting what they judge to be low-effort content, so it does get through. The comments here are about possible mitigations. Based on this feedback, dang has apparently now restricted new accounts from posting Show HN threads, so globally there is now a form of filtering that hides users from others based on a heuristic. Your initial comment reads as though the poster wanting to improve their odds of seeing higher-effort content is making some judgment on the posters themselves, as though they're conceited ("filthy masses", "your royal highness"), when they're merely considering one approach to reducing noise in their feed. I myself, in this very comment chain, have already posted that I disagree that filtering by karma would help, due to gaming issues, but I don't see the problem with the user's goal.
What would happen if every single user enabled their minimum karma filter?

Hacker News would be a much better place. In fact, filter stories as well as users. I want to filter out any story with fewer than three upvotes and any flagged comments. That would improve quality tremendously.

How would any new user earn karma in that system? How would any story get upvoted? Again, this system can only work if at least some people are willing to upvote newbies and read new posts. It sounds like what you want isn't a community with collaborative filtering, like Hacker News, but a newsletter with editors, like Slashdot, for example.

My system has been working pretty well: using some extension or another that has mute functionality, if I see a person post an extremely low-quality comment, I look at their comment history for two or three pages. If there is no comment of value in that set, I mute the user. The board gets better each day.

Several of the posts I've seen are from autonomous AI agents, which don't currently seem to have that kind of long-term planning.

That's a nice false equivalency you've got there. Theft deterrence is not spam prevention, and the costs for each are wildly different.

I don't understand why we put locks on bicycles; a determined person can just saw them off.

And also invest more effort in karma farming. In other words, if we raise the bar for Show HNs, we'll probably see more generated comments in the threads.

I have long believed that whatever comes along to replace the Reddit/HN-type site will be based almost entirely on trust networks: only surface stories posted by or upvoted by those you trust, and the inverse for those you distrust. Then exponentially drop off trust transitively, and it could be almost workable.

I sometimes feel like a paid newsletter curated by users would be fun. I'd happily pay €5 a month for a weekly/daily digest where the comments are on par with HN.

The return of Advogato. If you weren't around for it, it had a certification system like what you describe, so the stuff on it was pretty good. After a while, spammers figured out that it had very high search-engine placement because of its quality, and that pretty much ruined it. It's gone now.

The risk is building very good echo chambers. One shouldn't have to read AI slop or despicable opinions during one's free time, but some exposure to alternative, respectable, non-idiotic views should be part of the design.

Eventually HN is going to need to charge people $1 to post, just for spam filtering. Maybe donate the money to open source or something.

$1 is an incredibly low price to pay for advertising and an incredibly high price to pay for legitimately interacting with a community. This would have the exact opposite of the intended effect.

Charging money does not seem like a very good idea on a site like this, where you expect users to upload all the content. Also, this would require credit card info, which is a massive barrier, even if you were to charge just 1 cent.

No credit card. You have to send a $1 bill by snail mail, which is proof of "work" (mailing the bill) as well as $$. You enter the bill's serial number when you enroll the account, and the account activates when the bill arrives. You can be pretty anonymous this way. I once proposed a scheme like this where you would donate to charities, who would post lists of serial numbers they had received for this purpose, but it never got anywhere. Maybe we need it more now than we did then. I guess instead of mailing a $1 bill, if necessary it could be a hand-drawn picture of a kitten (artistry not required). Authentication would involve checking the paper for pressure marks made by the pen. I wonder how many would take the trouble to fake that.
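The trust-network comment above ("exponentially drop off trust transitively") is concrete enough to sketch too. A toy propagation over a follow graph, with the decay factor and cutoff invented for illustration:

    def trust_scores(graph: dict[str, set[str]], me: str,
                     decay: float = 0.5, min_trust: float = 0.05) -> dict[str, float]:
        """Direct trust is 1.0; each hop away multiplies it by `decay`,
        keeping the best path found and dropping anything below `min_trust`."""
        scores = {me: 1.0}
        frontier = [me]
        while frontier:
            nxt = []
            for user in frontier:
                for friend in graph.get(user, set()):
                    t = scores[user] * decay
                    if t >= min_trust and t > scores.get(friend, 0.0):
                        scores[friend] = t
                        nxt.append(friend)
            frontier = nxt
        return scores

    # A feed would then rank /new by the submitter's score (untrusted = 0).
    g = {"me": {"alice", "bob"}, "alice": {"carol"}, "carol": {"dave"}}
    print(trust_scores(g, "me"))  # me: 1.0, alice/bob: 0.5, carol: 0.25, dave: 0.125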
Those of us old enough to remember CompuServe know that the cost of entry was exactly why the quality was so high. I was lucky enough that my employer paid for it. I was also active on various comp.os.* Usenet forums. Both were great sources of quality information, but CompuServe stayed "high signal" for longer. Usenet, the birthplace of trolling, eventually degraded to the point of near uselessness. The signal was drowning in noise, mainly because some people are just shitty. Which is worth remembering here: behind every AI agent spamming HN (and everywhere else) is a human who thought this was a good idea. Why do they think that? Maybe that's the line to pursue in dealing with this issue.

It worked for years for the SomethingAwful forums. A nominal charge for the ability to post, with plenty of 'timeout' chances for rehabilitation before an outright ban, keeps out most of the junk. It feels wrong at first to pay for commenting on a forum, but the alternative is almost always a gentle slide towards a trash dump. AI makes that slide almost a vertical drop.

That was Elon's idea for Twitter, but the X membership grew in scope. $1/month sounds better.

Ooh, it's time to pull out the classics! Please feel free to check the boxes as you see fit, as I am currently too lazy to have Claude do it for me. Your post advocates a
( ) technical ( ) legislative ( ) market-based ( ) vigilante
approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!

Reddit software subs are overrun. It's all "look at my new app", and they're all the same. Same screenshot style, same shallow apps. Other subs are slowly being inundated with hidden-history spammers ... Bad times.

I almost emailed dang this morning to offer to help out, though I'm not particularly technical. A few solutions I thought of:

1. Honeypot: hide some links LLMs will follow; anything posted through them is unlikely to be from a human (a minimal sketch follows below).
2. Make a captcha that only LLMs can answer. I recently made two social networks, one that humans couldn't join because the submission question was too difficult to figure out quickly.
3. Use an LLM to detect LLMs. On the other social network I made for fun (which a small number of people use), an LLM that looks for moderation issues does a good job of flagging them.
4. Invites, but vary the number you have to give out by account age + karma.

The first three seem like they'd stop some percentage for some time, but would eventually get old.

Devil's advocate take: I think the quality of the Show HN projects is in fact getting higher, at least among the ones that land on the front page. The issue is that projects that used to take weeks, months, or even years of work can now be done in a weekend or so. It's been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren't that much effort with AI assistance. So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.
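To make idea 1 above concrete: a minimal honeypot sketch, assuming a plain Node server. The path, markup, and IP-based flagging are all illustrative choices, not anything HN does; a real deployment would need rate limiting and care around clients sharing IPs.

    // Honeypot sketch: serve a link no human can see or click, then treat
    // any client that fetches it as a likely bot. Illustrative only.
    import { createServer } from "node:http";

    const flagged = new Set<string>(); // addresses that took the bait

    createServer((req, res) => {
      const ip = req.socket.remoteAddress ?? "unknown";

      if (req.url === "/archive-snapshot") { // hypothetical trap path
        flagged.add(ip); // only reachable by reading the raw HTML
        res.writeHead(204).end();
        return;
      }

      if (flagged.has(ip)) {
        res.writeHead(403).end("automated access suspected");
        return;
      }

      // The trap link is hidden from humans (display:none, aria-hidden)
      // but present in the HTML that a scraper or agent reads.
      res.writeHead(200, { "content-type": "text/html" }).end(`
        <p>Welcome.</p>
        <a href="/archive-snapshot" style="display:none" aria-hidden="true"
           tabindex="-1">full archive</a>
      `);
    }).listen(8080);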
> Devil's advocate take: I think the quality of the Show HN projects is in fact getting higher, at least the ones that land on the front page.

I've never seen so many low-effort, obvious astroturfing posts linger on the first few pages as in the past few months. They never mention what they're doing and have no proof of their work. There was a post the other day titled "Tell HN: I'm 60 years old. Claude Code has re-ignited a passion"[1] that had tons of traction with no substance. It was posted by an account created that day, and the comments were filled with bots responding about how productive they are, but with no mention of what they're actually doing. It's annoying seeing these obvious spam/astroturfing posts linger, taking attention away from interesting content that's worth reading.

[1] https://news.ycombinator.com/item?id=47282777

I disagree, in that the last few I can think of have involved things like services that do not really explain what they do properly and then ask for full permissions to your GitHub account, or that claim to be far more than they are (i.e. "I made this thing", but it's just a shim for someone else's stuff).

> It's been democratizing, but it also means that when we look at these posts we (rightly) see that these new projects aren't that much effort with AI assistance.

This also appears to cause a serious shift in the kind of projects that are submitted (i.e. towards things that are much more accelerated by AI assistance).

Well, it's not just that... picture a community group talking among themselves, and then some rando shows up, yells "I built this thing that you all might like", hangs out for an hour, and is never heard from again. I think that's great in moderation, as it stimulates ideas and discussions, shows us what folks are working on, etc., but this can't become Product Hunt. The reasons for posting here should be vastly different from the reasons for posting on Product Hunt.

I'd pose a different perspective: Show HN in non-hype cycles tends to have a higher self-imposed bar before posting. With the democratizing, there are many posts where the time from first commit to Show HN is on the order of hours, 25 minutes being the shortest I have personally seen. I would contend that community standards have not changed meaningfully, but because the underlying mix has changed, the front page changes too. That being said, there is an above-average sub-trend of low-quality submissions that are obviously trying to plant a money tree. This is largely driven by the "look ma, no hands" AI tools like OpenClaw, overlapping with the crypto crowd looking to make easy money with near-zero effort. That said, I have definitely seen some real bangers with large AI contributions, so I am generally in favor of minimally changing how HN works today. One small change would be adding to the Guidelines and FAQ, giving the agents something to read before posting (so that they know automated submissions are not allowed[1]).

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
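As an aside, that "time from first commit to Show HN" figure is easy to approximate mechanically. A sketch using the public Algolia HN search API and the GitHub REST API, taking the repo's created_at as a cheap proxy for the first commit; the repo name is hypothetical and error handling is omitted.

    // Estimate time from repo creation to its Show HN post, using two
    // public APIs. Repo `created_at` stands in for the first commit,
    // which is close enough for "hours vs. years" comparisons.
    async function repoAgeAtShowHN(owner: string, repo: string): Promise<void> {
      const gh = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
      const { created_at } = await gh.json() as { created_at: string };

      const hn = await fetch(
        "https://hn.algolia.com/api/v1/search?" +
          new URLSearchParams({ query: `${owner}/${repo}`, tags: "show_hn" }),
      );
      const { hits } = await hn.json() as {
        hits: { title: string; created_at_i: number }[];
      };
      if (hits.length === 0) return console.log("no Show HN found");

      const hours =
        (hits[0].created_at_i * 1000 - Date.parse(created_at)) / 3_600_000;
      console.log(`${hits[0].title}: ~${hours.toFixed(1)}h after repo creation`);
    }

    repoAgeAtShowHN("someuser", "someproject"); // hypothetical repo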
Except the quality hasn't been getting higher. Most of those projects wouldn't be considered HN-worthy if a human being had made them; they only get the praise they do because they were generated by an LLM, and as such they aren't projects so much as demonstrations of the latest model's capabilities. Also, the purpose of Show HN, and of HN in general, is to spark intellectual curiosity and create interesting conversation, and nothing about LLM-generated code does that, because the person who prompted the AI to make it doesn't understand it and can't discuss it in any depth.

I was thinking about this the other day. If someone made TempleOS today, people wouldn't be as impressed, because they'd just assume they used AI. They'd assume this even if they hadn't used AI, and even if AI didn't have the ability to pull it off.

That dev made many videos about its creation and motivations, though, and between those and his personality I think people would be understanding.

How about an opt-in toggle to display the year each account was created? randusername_2022. I'm right on the boundary of the slopocene, not sure if I'm in or out.

There have been numerous stories on HN where someone directly involved with the story has created an account specifically to engage in discussion about whatever the story was about. Losing that seems too high of a price to pay. Yes, there are AI-generated comments; in the past there have been script-generated comments. You can report, downvote, or just ignore and move on. I am aware of posts like this existing, but I feel they are being effectively managed. Try not to be too offended by the notion of these posts existing. Many of them are not malicious; they are just caused by users stepping outside what is considered appropriate. But in a landscape where the footing is quite dynamic and everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.

> There have been numerous stories on HN where someone directly involved with the story has created an account specifically to engage in discussion about whatever the story was about.

Yes, and sometimes some of the HN automatic filters kill those comments. Remember to "vouch" for the comments if they are interesting/relevant; a few vouches unkill them. And in extreme cases, send an email to hn@ycombinator.com so dang/tomhow can take a look and use some magic to fix the problem.

> Losing that seems too high of a price to pay.

Assuming the mods just auto-ban new accounts and require them to be vouched for and to earn minimum karma before being visible, those comments can be vouched up or approved by the moderators. The poster won't know that they've been banned, of course, because that's how shadowbanning works, so the approval process should be seamless for them. But how often does that happen versus the AI comments and alt-account trolling?

> but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.

The consensus is and has always been clear. Generated comments of any kind have never been allowed. People just don't care, and that's a problem. And those comments are malicious in effect if not intent. We're here to have conversations with human beings; the intellectual and emotional connection is important. What is the point of having conversations with a machine, much less not knowing one is having a conversation with a machine? If nothing else, it's dehumanizing and a waste of time.
> those comments are malicious in effect if not intent.

I don't believe that is possible. I think malice requires intent.

Accounts have to start posting at some point. Moderators don't have the capacity (and, in fairness, it is impossible) to check whether they are bots or humans. There are no good solutions: there are hundreds of thousands of intelligences out there, trained for millions of hours on how to scam humans, capable of spitting out text tirelessly and shamelessly, and there will only be more of them, tens, hundreds, thousands of times more.

https://news.ycombinator.com/newest - scroll through there and there are a lot of [dead] submissions by green accounts. They aren't outright banned from submitting, but it often triggers auto-moderation. It's like posting a link in one of your first few comments as a green user; that often results in automatic shadowbanning.

I personally cycle accounts on this site for pseudo-privacy reasons. HN does not allow you to delete old comments, and thus the only way to maintain some semblance of control over my profile and privacy is to periodically switch to new accounts. I've been doing this for years now. The only real downside for me is that as a new account you don't have the ability to downvote, which is super annoying but something I've learned to live with. I'm not saying your idea is bad, necessarily, but offering another perspective.

It's not like older accounts are necessarily any better. If you look at the leaderboard (https://news.ycombinator.com/leaders), you'll find a few old accounts that pretty much do nothing but farm links, posting sometimes dozens of times a day, with a very low percentage of comments. Their high "score" isn't an indicator of quality; they just spam enough that a few submissions get good upvotes, but most of them are low quality.

The solution is for users to be able to mute/hide accounts. It won't matter if an account has 10k points; once you mute it, you won't see what it posts.

This has long been my biggest issue, much bigger than new accounts spamming slop. There are accounts with 10,000+ karma that do little more than feed links from the NY Times and similar publications, regardless of their relevance or value. Each one gets 4-5 karma, a few crack double digits. Post 10 or 20 a day over a year or two and they're at five figures. Pure farming.

I'd suggest instead a lower threshold for [dead]-ing posts and submissions by new accounts when flagged by HN users.

That's indeed the problem with restricting new users. Existing community members always want to do that, but it's a recipe for not surviving.

I'm hoping to do a Show HN soon on something I've been working on, but my account is currently only 6 days old. Tips? Btw, restricting new accounts (based on karma/age/whatever) could be combined with the option to ask mods for permission somehow, although that'd have to be done in a way that doesn't become too much work.

I really wish there was a setting whereby I could simply hide all comments from accounts less than a year old. The correlation with LLM slop is simply off the charts. It almost feels like new accounts should be treated like new posts: it is sort of a service that a select few are willing to undertake to upvote interesting stories early on. I wish even more that I could block specific users (there are some highly prolific, high-karma users here who are extremely irritating), but that's harder and is probably best handled client side.
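Both of those wishes (hide comments from young accounts, mute specific users) are in fact feasible client side. A minimal userscript sketch, assuming HN's current markup (tr.comtr rows, .hnuser links) and the official Firebase API for account-creation dates; the mute list and one-year threshold are illustrative choices, not anything HN provides.

    // Dim comments from muted users and from accounts younger than a year.
    const MUTED = new Set(["examplespammer"]);        // hypothetical mute list
    const MIN_AGE_MS = 365 * 24 * 3600 * 1000;        // one year
    const cache = new Map<string, Promise<number>>(); // user -> created (ms)

    function createdAt(user: string): Promise<number> {
      if (!cache.has(user)) {
        cache.set(user,
          fetch(`https://hacker-news.firebaseio.com/v0/user/${user}.json`)
            .then(r => r.json())
            .then(u => u.created * 1000)); // API reports unix seconds
      }
      return cache.get(user)!;
    }

    document.querySelectorAll<HTMLElement>("tr.comtr").forEach(async row => {
      const user = row.querySelector(".hnuser")?.textContent;
      if (!user) return;
      if (MUTED.has(user) || Date.now() - await createdAt(user) < MIN_AGE_MS) {
        row.style.opacity = "0.25"; // dim rather than remove, to stay honest
      }
    });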
It used to be so pleasant to read Show HN and find such interesting projects, but nowadays it's rare that anyone posting their GitHub has ever read their own source code, or that the project even comes close to functioning in the way the OP claims. Such a sad development.

I think this is another sign of the flood of slop to come. I really suspect the SNR (for whatever definition of signal most people use) will continue to drop, and mitigating it is going to be kind of like bailing out the ocean. A strange consequence might be that a real Show HN project would now be easier to demo and find at something like a meetup, if meetups weren't all kind of dying. Maybe we'll see a revival?

This is largely the same pattern that happened during the crypto hype cycle: spam posts and complaints. It will likely subside as reality sinks in. There are still quality submissions by new accounts, and HN is good at pulling those needles from the haystack.

I created a Firefox plugin which takes HN commenters'/submitters' account-creation dates and scales comment order/points by account age (older accounts, back toward 2009, get more weight). Optionally, the plugin just puts "spoiler" text over accounts created after a certain date (say, 2023 or so). Unfortunately, I was not able to "reorganize" comments/posts in a manner that I felt was particularly "better", and I didn't keep the plugin, for whatever that's worth. I think it would be more prudent to overlay a web of trust, where accounts whose submitted links/comments you upvoted are then given significantly higher priority in other threads/feeds (a rough sketch of this appears below). Unfortunately, downvotes are not made apparent on HN, but factoring in downvotes would also help. Exposing your web of trust may also assist others in determining trusted content. Perhaps this web-of-trust approach is dystopian on the order of MeowMeowBeenz, but I have not heard any other practical solutions to the disintegration of trust which is upon us. Edit: elsewhere in this thread HackerSmacker was mentioned, which is what I'm describing. That's exciting; I'll be trying it out later.

Humans are better than AI at flagging AI, and where they fail is where the content doesn't trigger a "disgust" signal. So wouldn't it be useful instead to have a "flag as AI" feature?

It's getting really bad. New accounts hours old are posting walls of AI-generated garbage comments across dozens of topics. Please restrict new posters, minimally, and perhaps add a little friction to new account sign-ups.

Folks here can decide for themselves whether to check green accounts' "Show HN" these days. We are all aware of AI slop and creep in all shapes and forms. Moderation is already taxing as it is.

Fully agree; I have the same impression. Especially over the last couple of days I've seen an increase in submissions from accounts that were not even an hour old, all just promoting some fishy AI-generated BS.

I checked the new Show HN submissions a couple of days ago, and it was shocking how many were flagged dead, unlike how it was before the AI invasion.

I'm honestly surprised HN isn't used to share more malware/GitHub links from new accounts too. The target audience for malware authors/distributors typically isn't a community full of technically literate software engineers, security practitioners, reverse engineers, malware analysts, etc. Same reason that burglars don't typically target security-camera stores and robbers don't typically target police departments: it's basically a fast track to early detection, which disrupts the main objective of the adversary.
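Picking up the web-of-trust idea from the plugin comment above: a minimal client-side sketch that remembers the authors of comments you upvote and highlights their later comments. The selectors (.votearrow, tr.comtr, .hnuser) reflect HN's current markup; the localStorage key and highlight color are illustrative, and nothing here is an existing HN feature.

    // Remember authors you upvote; visually promote their future comments.
    const KEY = "hn-trusted-authors";
    const trusted = new Set<string>(
      JSON.parse(localStorage.getItem(KEY) ?? "[]"));

    // Record trust when an upvote arrow inside a comment row is clicked.
    document.addEventListener("click", e => {
      const arrow = (e.target as HTMLElement)
        .closest('.votearrow[title="upvote"]');
      const user = arrow?.closest("tr.comtr")
        ?.querySelector(".hnuser")?.textContent;
      if (user) {
        trusted.add(user);
        localStorage.setItem(KEY, JSON.stringify([...trusted]));
      }
    });

    // Highlight comments from authors you have previously upvoted.
    document.querySelectorAll<HTMLElement>("tr.comtr").forEach(row => {
      const user = row.querySelector(".hnuser")?.textContent;
      if (user && trusted.has(user)) row.style.background = "#f6ffe8";
    });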
I've been mulling this over for a couple of days too. I have a project I want to share with the HN community that I put a substantial amount of effort into, but it was definitely AI-assisted (as is literally everything today). I've read all of the source and I drove the architecture, but it would be a stretch to say I didn't ask for assistance on things that felt fuzzy or foreign to me. I have also generally stopped typing code. I still don't think the LLM made the project, though; it feels like my decision-making. If the bar for Show HN becomes "no AI whatsoever", then you're just going to see a bunch of people covering their AI tracks. I'm reluctant to post it because I'm afraid of getting blasted by the community for using AI. At the same time, it is work that I've poured hundreds of hours into, that I'm proud of, and that I think would be of interest to HN. I read the Obliteratus post that made it to the front page the other day, and I agree that it is pure slop. While it's frustrating that it took up front-page space, it's evident that the whole community caught on to the sloppiness of it all immediately and called it out. I just don't think HN wants to set the precedent that no AI code should be shared. I also saw a week or two ago that someone open-sourced a project of theirs that hadn't been open source in the first place; the reason they gave was that they had vibe-coded it and were embarrassed to be discovered. If you want to get a concept out quickly with AI, you're now hesitant to open-source it because of the precedent set by the community. That's a scary thought to me. I would rather know the tools I'm using are AI-generated/assisted and make the value judgement on whether I trust the code and the project owners.

I don't think people are blasted for using AI (mostly); I think people are blasted for low-effort work, just like pre-LLMs. LLMs just made it way easier to complete low-effort projects, so there is more of it.

If everyone turned off new-account visibility, we'd just see the same noise 30 days later... not sure that helps.

During that time, one would assume mod action would filter out the undesired, thereby "seasoning" accounts.

Lots of social media platforms need better ways forward. Let's focus on things we can measure and enforce. Let's be honest with ourselves about what we know and what we don't. Think back to Prohibition: just because we want less public drunkenness doesn't mean it is wisest to ban alcohol. One has to ask: what is the chance the ban is successful? What happens when it cuts the wrong way? To what degree do we care about (1) "human" versus "AI"; (2) comment quality; (3) sensible methods for revealing social preferences? I care a lot more about the latter two than the first. It doesn't have to be a zero-sum tradeoff, but I think it is a good starting question. Let's have that discussion and not try to solve the human-vs-AI classification problem.

I understand and appreciate your perspective. I do, however, disagree with your priorities. I mostly read here, but when I participate I want to interact with humans, not chatbots. I would much rather read a human comment with typos and poor grammar than another piece of anodyne LLM output that shows only that the responsible party doesn't value the human interaction that I do.

Amen. I think the purpose of the bots is to create highly upvoted accounts that can later flag and downvote the things they've been programmed to suppress.

> I don't want to see HN becoming Twitter

I find it's worse here now than X.
Literally every discussion turns into meta and gets severely politicized. On certain topics you get flagged out by a mob for stating facts. At least on X, reply bots are not allowed anymore. Blue checks are useless, though.

> I find it's worse here now than X.

I disagree, but in any case the easy solution is to use X instead of HN.

> At least on X reply bots are not allowed anymore.

In theory, maybe.

HN has mostly turned into a second Reddit since 2023, with tons of topics that have absolutely nothing to do with startups, tech, or programming but are taken directly from r/news... I'll take bots spamming fake projects over petty, divisive partisan politics.

Yep - and if you have an opinion on one particular side that isn't favored here, it gets flagged.

I'm honestly surprised how well it's going. From the perspective of usually just swinging into a post from the front page, when I do see green, it's usually overtly political trolling, and dead from the start. So I had assumed new account = everyone sees your post in gray, at least for a week or two.

I don't envy the "Show HN:" case. It can be intractable. Story time: last week, there was a "Show HN:" post for a GitHub link that made it all the way to #2. It was a Flutter app, written up as if it did all the stuff you'd want from an open-source LLM client. I said to myself, "Geez, I knew I took too long to deliver the thing I've been working on for 2 years; the MVP version is insanely popular." Only after digging into the repo for 10 minutes, with domain expertise, did I realize it was a complete Potemkin village, built by Claude. And even then, I was afraid to post something pointing this out, because it required domain expertise and could have read as negative rather than principled.

All that to say, some subsets of The AI Poster Problem now require intimate domain expertise and 10 minutes to evaluate. :/

Additionally, the Claude 4.6s and GPT-5.4s are better than me at posting on HN now. :/ And I've been here 16 years. The past couple of days, any comment I write involving some sort of judgement or argument is by Opus 4.6 or GPT-5.4, via: (1) dump the HN post into the prompt, (2) say "I feel $X about this; write me an HN post that communicates this, but not negatively." I'm a little ashamed to admit that if you look through my post history, you'll definitely see a repeated pattern over 16 years of someone who is very negative and has a hard time communicating it constructively. They're smart enough now to extrapolate observations the way I want to, while avoiding my own tarpits.

My only problem with the last part is that your tarpits are you, and personally, I want to know you, not some version of you filtered or softened by AI. That to me is what makes HN great: how... jarring the reading experience can be. It's really fun and interesting to see how people communicate their ideas. I think it's admirable that you're making an effort to become more kind and communicate more positively, but fingers crossed you don't lose "your voice"! :)

How about this: ask your LLM to review your post. "Does it follow HN rules?", "How would others read it?", "If I were the other person, how would I feel about this reply?", "Is it convincing to you?", that sort of question. That'll help, and it'll still be your voice. And beware of what's already in context: sometimes ideas that seem obvious given their antecedents are not so obvious when taken in isolation.
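For what it's worth, that review loop is a few lines of glue. A sketch assuming an OpenAI-compatible chat-completions endpoint; the model name and questions are placeholders, and the system prompt asks for a critique of the draft, not a rewrite.

    // Have the model review, not write, your comment draft.
    const QUESTIONS = [
      "Does this follow the HN guidelines?",
      "How would others read it?",
      "If I were the person being replied to, how would I feel?",
      "Is it convincing to you?",
    ];

    async function reviewDraft(draft: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "content-type": "application/json",
          authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini", // illustrative model name
          messages: [
            { role: "system",
              content: "Critique this HN comment draft. Do not rewrite it." },
            { role: "user",
              content: `${draft}\n\n${QUESTIONS.join("\n")}` },
          ],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content; // the critique, in your voice
    }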
> Please don't post snarky, shallow dismissals.

That's been against the guidelines for a long time.

Genuine innovation is what we most want to encourage. That's what Show HN has always been about. The problem now is that coding assistants have dramatically lowered the bar for getting a product or tool working, without the need for much innovation. We need new ways of identifying projects that are genuinely innovative, so that their creators can be fairly rewarded rather than drowned out.

I would actually expect OpenClaw bots to be showing up here from time to time now, since there's no explicit documented policy against them. (Edit: and thus such bots can't easily discover that they shouldn't post, AFAICT.)