It was weird to read this. I know antirez is on HN, so it's strange to say this, but here goes...
I always looked up to antirez. Redis was really taking off after I graduated and I was impressed by the whole system and the person behind it. I was impressed to see them walk away to do something different after being so successful. I was impressed to read their blog about tackling difficult problems and how they solved them.
I'm not a 10x programmer. I don't chase MVPs or shipping features. I like when my manager isn't paying attention and I can dig into a problem and just try things out. Our database queries have issues? Maybe I can write my own AST by parsing just part of the code. Things like that.
I love BUILDING, not SHIPPING. I learn and grow when I code. Maybe my job will require me to vibe code everything some day just to keep up with the juniors, but in my free time I will use AI only enough to help speed up my typing. Every vibe coded app I've made has been unmaintainable spaghetti and it takes the joy out of it. What's the point of that?
To bring it all together, I guess some part of me was disappointed to see a person that I considered a really good programmer, seem to indicate that they didn't care about doing the actual programming?
> Writing code is no longer needed for the most part
> As a programmer, I want to write more open source than ever, now.
This is the mentality of the big companies pushing AI. Write more code faster. Make more things faster. Get paid the same, understand less, get woken up in the middle of the night when your brittle AI code breaks.
Maybe that's why antirez is so prolific and I'm not.
Sometimes I wish I was a computer scientist, instead of a programmer...
I care a lot about programming, but I want to do programming in a way that makes me special compared to machines. When the LLM hits a limit, and I write a function in a way it can't compete, that is good. If I write a very small program that is like a small piece of poetry, this is good human expression. But if I need to develop a feature, and I have a clear design idea, and I can do it in 2 hours instead of 2 weeks, how to justify with myself that just for what I love I use a lot more time? That would be too much of ego, I believe. So even if for me too this is painful, as a transition, I need to adapt. Fortunately I also enjoyed a lot the desing / ideas process, so I can focus on that. And write code myself when needed.
> To bring it all together, I guess some part of me was disappointed to see a person that I considered a really good programmer, seem to indicate that they didn't care about doing the actual programming?
My take on this is that we as a society are now on the verge of transitioning towards programming as an art form. And the methodologies of art vs non art programming are vastly different.
Take clothes, for example. Manufacturing is vastly optimized for throughput, but its art form is heavily optimized for design and customization. Maybe that is what all this is about now with programming, too?
I too would think of myself as someone who likes to code for the sake of explorative understanding and optimization. I'm pretty bad at the last 10%, like _reeeally_ bad actually.
But I am aware that the methodology of programming is changing. And currently I believe that design and customization might in parts also change, because a lot of LLM- / slop-coded successful projects were optimizing for something like text-in-the-loop where they started with a terminal CLI and made it a real design later, because the LLM agent was able to parse and understand CLI / TTY characters.
Maybe this is what it's actually about. Maybe we need to optimize things for text now so that LLMs can help us more in these topics?
I'm thinking lately a lot about scene graphs and event graphs and how to make them serializable so that I can be more efficient in generating UIs. Sorry for babbling, maybe these are just thoughts I'm gonna regret in the future.
Another data point: my generally tech savvy teenage daughter (17) says that her friends are only aware of AI having been available for last year (3 actually), and basically only use it via Snaphhat "My AI" (which is powered by OpenAI) as a homework helper.
I get the impression that most non-techies have either never tried "AI", or regard it as Google (search) on steroids for answering questions.
Maybe more related to his (sad but true) senility rather than lack of interest, but I was a bit shocked to see the physicist Roger Penrose interviewed recently by Curt Jaimungal, and when asked if he had tried LLMs/ChatGPT assumed the conversation was about the "stupid lady" (his words) ELIZA (fake chatbot from the 60's), evidentially never having even heard of LLMs!
My mom does. She's almost 60. She asks for recipes and facts, asks about random illnesses, asks it why she's feeling sad, asks it how to talk to her friend with terminal cancer.
I didn't tell her to download the app, nor she is a tech-y person, she just did on her own.
I've felt this too as a person with ADHD, specifically difficulty processing information. Caveat: I don't vibe code much, partially because of the mental fatigue symptoms.
I've found that if an LLM writes too much code, even if I specified what it should be doing, I still have to do a lot of validation myself that would have been done while writing the code by hand. This turns the process from "generative" (haha) to "processing", which I struggle a lot more with.
Unfortunately, the reason I have to do so much processing on vibe code or large generated chunks of code is simply because it doesn't work. There is almost always an issue that is either immediately obvious, like the code not working, or becomes obvious later, like poorly structured code that the LLM then jams into future code generation, creating a house of cards that easily falls apart.
Many people will tell me that I'm not using the right model or tools or whatever but it's clear to me that the problem is that AI doesn't have any vision of where your code will need to organically head towards. It's great for one shots and rewrites, but it always always always chokes on larger/complicated projects, ESPECIALLY ones that are not written in common languages (like JavaScript) or common packages/patterns eventually, and then I have to go spelunking to find why things aren't working or why it can't generate code to do something I know is possible. It's almost always because the input for new code is my ask AND the poorly structured code, so the LLM will rarely clean up it's own crap as it goes. If anything, it keeps writing shoddy wrapper around shoddy wrappers.
Anyways, still helpful for writing boilerplate and segments of code, but I like to know what is happening and have control over how my code is structured. I can't trust the LLMs right now.
Agreed. Some strategies that seem to help exist, though. Write extensive tests before writing the code. They serve as guidance. Commit tests separately from library code, so you can tell the AI didn't change the test. Specify the task with copious examples. Explain why yo so things, not just what to do.
Yeah, this is where I start side-eying people who love vibe coding. Writing lots of tests and documentation and fixing someone else's (read: the LLM's) bad code? That's literally the worst parts of the job.
I also get confused when I see it taken for granted that "vibe coding" removes all the drudgery/chores from programming. When my own experience heavily using Claude Code/etc every day routinely involves a lot of unpleasant clean up of accumulated LLM slop and "WTF" decisions.
I still think it saves me time on net and yes, it typically can handle a lot on its own, but whenever it starts to fuck up the same request repeatedly in different ways, all I can really do is sigh/roll my eyes and then it's on me alone to dig in and figure it out/fix it to keep making progress.
And usually that consists of incredibly ungratifying, unpleasant work I'm very much not happy to be doing.
I definitely have been able to do more side projects for ideas that pop into my head thanks to CC and similar, and that part is super cool! But other times I hit a wall where a project suddenly goes from breezy and fun to me spending hours reading through diffs/chat history trying to untangle a pile of garbage code I barely understand 10% of and have to remind myself I was supposed to be doing this for "fun"/learning, and accomplishing neither while not getting paid for it.
The way I do it is write tests, then commit just the tests. Then when you have any agent running and generating code, before committing/reviewing you can check the diff for any changes to files containing tests. The commit panel in Jetbrains for example will enumerate any changed files, and I can easily take a peek there and see if any testing files were changed in the process. It's not necessarily about having a separate codebase.
Also: detailed planning phase, cross-LLM reviews via subagents, tests, functional QA etc. There at more (and complimentary) ways to ensure the code does what it should then to comb through ever line.
Always interesting (in an informative way) to see people "defending" em-dashes from my personal perspective. Before you get mad, let me explain: before ChatGPT, I only ever saw em-dashes when MS Word would sometimes turn a dash into a "longer dash" as I always thought of it. I have NEVER typed an em-dash, and I don't know how to do it on Windows or Android. I actually remember having issues with running a program that had em-dashes where I needed to subtract numbers and got errors, probably from younger me writing code in something other than an IDE. Em-dashes always seem very out of place to me.
Some things I've learned/realized from this thread:
1. You can make an em-dash on Macs using -- or a keyboard shortcut
2. On Windows you can do something like Alt + 0151 which shows why I have never done it on purpose... (my first ever —)
3. Other people might have em-dashes on their keyboard?
I still think it's a relatively good marker for ChatGPT-generated-text iff you are looking at text that probably doesn't apply to the above situations (give me more if you think of them), but I will keep in mind in the future that it's not a guarantee and that people do not have the exact same computer setup as me. Always good to remember that. I still do the double space after the end of a sentence after all.
Unfortunately this table doesn't show us where the em-dash users are coming from and if they are native speakers.
It's not that it doesn't exist in my native language, but I don't remember seeing them very often outside of print books, and I even know a couple typo nerds.
Maybe I'm totally off, and maybe it's the same as double spacing after a '.'. I had not heard of this until I was ~30 and then saw some Americans writing about it.
Maybe I'm weird, but one of the first things I've always done when setting up emacs is to enable Typo mode (or Typopunct) for writing modes, which handles typing en and em dashes and "smart" quotation marks in a fairly natural way.
I actually checked HN's comment data corpus to see if em dash usage rose after AI adoption became more widespread. I was kind of shocked to see that it did not.
Its overuse is definitely a marker of either AI or a poorly written body of text. In my opinion, if you have to rely on excessive parentheticals, then you are usually off restructuring your sentences to flow more clearly.
I actually got punked during a demo because I wrote some terminal commands and stored them in the macOS notepad and didnt notice it had changed -- to —.
When I copy and pasted them in it failed obviously so... yeah. If you have terminal commands that use `--` don't copy+paste them out of notepad.
Shift+Win/Option+-. And holding - gives you en/em dash on iOS and Android. Personally I love using em dashes so this whole AI thing is a real disaster for me.
Just a reminder that our experience does not necessarily invalidate someone else's experience.
Eg, I was typing Alt-0151 and Alt-0150 (en-dash) on the reg in my middle school and high school essays along with in AIM. While some of my classmates were probably typing double hyphens, my group of friends were using the keyboard shortcuts, so I am now learning from this "detect an LLM" faze that there's a vocal group of people who do not share this experience or perspective of human communication. And that having a mother who worked in technical publishing who insisted I use the correct punctuation rather than two hyphens was not part of everyone's childhood.
Yes, it's essentially the Pareto principle [0]. The LLM community has conflated the 80% as difficult complicated work, when it was essentially boilerplate. Allegedly LLMs have saved us from that drudgery, but I personally have found that (without the complicated setups you mention) the 80% done project that gets one shot is in reality more like 50% done because it is built on an unstable foundation, and that final 20% involves a lot of complicated reworking of the code. There's still plenty of value but I think it is less than proponents would want you to believe.
Anecdotally, I have found that even if you type out paragraph after paragraph describing everything you need the agent to take care of, it eventually feels like you could have written a lot of the code yourself with the help of a good IDE by the time you can finally send your prompt off.
Yeah, my mental model at this point is there’s two components to building a system: writing the code and understanding the system. When you’re the one writing the code, you get the understanding at the same time. When you’re not, you still need to put in that work to deeply grok the system. You can do it ahead of time while writing the prompts, you can do it while reviewing the code, you can do it while writing the test suite, or you can do it when the system is on fire during an outage, but the work to understand the system can’t be outsourced to the LLM.
This can't really be the full story, or else people would have already come up with the "first line developer" like first line support. There is a dumbass or executive who creates that first 70 or 80%. Then hands off the entire thing to a professional developer to keep working on it.
The AI people sure dont want that, thats too telling about its limitations and value
Reading the text of the article, and not just reacting to the title, I do think this article has a kernal of truth to it that resonates with me. It's not really talking about intelligence, but MEASURES, and how individuals contort themselves into what they believe is valuable.
But at the end of the day, we do not have an inherent value. I wonder if people that get hung up on these metrics and what value they seemingly hold either that a person is a whole person, not just some measurement about them. The world's tallest man also has a favorite food, favorite color, and hobbies. He has friends and family. The metric you assigned to him is not the totality of the man.
I say this because recently I've been struggling with work and I feel like I have to say to myself sometimes, I am more than just a source of income and health insurance to my family. To someone who isn't in my situation, it might seem silly, but it has been scary and stressful and in some ways I did say to myself, you have value because you provide. But we have money saved, and are in a stable situation, and I could always find a new job, but my ego assigned value to the job regardless despite my best efforts at pretending that I don't play games with corporations. The stress that keeping a 9 to 5 causes in my mind is entirely self-inflicted by me.
I guess what I'm saying is that I should value other things about myself more highly, or maybe even not value anything about myself if that makes sense. What value is there in in measuring my success, as long as I am honest about my efforts and happiness?
I will never conquer the entire world by 25, or have a billion dollars, so maybe I need to learn to measure less and focus on true personal accountability and happiness instead. Hopefully that's a simple task...
I think an interesting different way to talk about aphantasia is not, "Can you see an apple when you close your eyes" but more along the linked of, "Can you mentally edit the visual reality you see?"
A common exercise while being in the back seat of a car while I was young was to imagine someone in a skateboard riding along the power lines on the side of the road, keeping pace with our car.
It's not literally overriding my vision, it's almost like a thin layer, less than transparent, over reality. But specifically, it's entirely in my mind. I would never confuse that imagery with reality...
Having said that, I think that is related to the way our brains process visual information. I've had an experience when I'm driving that, when I recognize where I am, coming from a new location in not familiar with, I feel like suddenly my vision expands in my peripheral vision. I think this is because my brain offloads processing to a faster mental model of the road because I'm familiar with it. I wonder if that extra "vision" is actually as ephemeral as my imagined skateboarder.
> A common exercise while being in the back seat of a car while I was young was to imagine someone in a skateboard riding along the power lines on the side of the road, keeping pace with our car.
Oh, I've done this! I think many kids have. I remember a moment in my childhood when it was ninja turtles riding on those hoverboards, while I was bored watching outside the window of the back seat. Riding along the power lines, and occasionally katana-cutting something in the way.
As someone who has aphantasia I did the same thing, but with motes of dust on the window. I’d stare at a single bit of dust or dirt and move my head up and down to make the dirt move with the landscape. It’s funny to read these stories because it solidifies my assumption that I have aphantasia. I did the same thing as a child just without the imagery.
This is super interesting to me. A lot of threads about aphantasia devolve into both sides being mildly incredulous that the other exists, I think partially because it's _hard_ for us to imagine experiences outside of our own.
But here, I feel like we have a clear delineation of the differences between experiences, in a non-abstract way... and that feels more valuable to me, somehow.
omg! That was every trip to my grandparents house my entire childhood. I couldn’t “actually see” the skateboarder, but it was enough to serve as entertainment.
Mine was usually some sort of superhero who did flips over things and picked them up and whatnot.
I can’t imagine if you could “actually see” the skateboarder how much less boring those rides would be.
I've been trying to generate my own maps using Voronoi diagrams as well. I was using Lloyd's algorithm [0] to make strangely shaped regions "fit" better, but I like the insight of generating larger regions to define islands, and then smaller regions on top to define terrain.
One of the things I like about algorithms like this is the peculiarities created by the algorithm, and trying to remove that seems to take some of the interesting novelty away.
I live in the DC area and whenever I hear people say "just crack a window" I think, that brings in all of the pollen I'm allergic to in all seasons except winter, plus humidity and 95 f degree heat if it's the summer... I' be been looking into getting an ERV for a while.
The humidity and temperature are rough. Some months I can't open the window at all. This month has been pretty good though, huh? At least for the temperature and humidity.
I feel you about the pollen. I use a Blueair filter, and that keeps PM 2.5 and PM 10 in check.
I always looked up to antirez. Redis was really taking off after I graduated and I was impressed by the whole system and the person behind it. I was impressed to see them walk away to do something different after being so successful. I was impressed to read their blog about tackling difficult problems and how they solved them.
I'm not a 10x programmer. I don't chase MVPs or shipping features. I like when my manager isn't paying attention and I can dig into a problem and just try things out. Our database queries have issues? Maybe I can write my own AST by parsing just part of the code. Things like that.
I love BUILDING, not SHIPPING. I learn and grow when I code. Maybe my job will require me to vibe code everything some day just to keep up with the juniors, but in my free time I will use AI only enough to help speed up my typing. Every vibe coded app I've made has been unmaintainable spaghetti and it takes the joy out of it. What's the point of that?
To bring it all together, I guess some part of me was disappointed to see a person that I considered a really good programmer, seem to indicate that they didn't care about doing the actual programming?
> Writing code is no longer needed for the most part
> As a programmer, I want to write more open source than ever, now.
This is the mentality of the big companies pushing AI. Write more code faster. Make more things faster. Get paid the same, understand less, get woken up in the middle of the night when your brittle AI code breaks.
Maybe that's why antirez is so prolific and I'm not.
Sometimes I wish I was a computer scientist, instead of a programmer...