Hacker News | ElatedOwl's comments

I think good code is even more important now.

People talk about writing the code itself and being intimate with it and knowing how every nook and cranny works. This is gone. It's more akin to being on call, where you're trudging through code and understanding it as you go.

Good code is easy to understand in this scenario; you get a clear view of intent, and the right details are hidden from you to keep you from being overwhelmed.

We’re going to spend a lot more time reading code than before, better make it a very good experience.


none of this even kind of addresses why the article implies that people stopped writing good code. why are we going to spend "a lot more time reading code than before"? is this an ai generated comment?


The author effectively argues deep thinking is dead, that people are no longer going to take the time to understand the problem and solution space before they solve it.

I think that’s untrue, I think it’s /more/ important than before. I think you’re going to have significantly more leverage with these tools if you’re capable of thinking.

If you’re not, you’re just going to produce garbage extremely fast.

The use of these tools does not preclude you from being the potter at the clay wheel.


Just because you or I may invest effort into deep-thinking, it does not mean that others will.

I'm not worried about this at Modal, but I am worried about this in the greater OSS community. How can I reasonably trust that the tools I'm using are built in a sound manner, when the barrier to producing good-looking bad code is so low?


We’re already there. Seeing OpenClaw and the new thing, LocalGPT, on the front page, it’s clear these projects are pretty heavily vibe coded, and I have no trust that they are tested, secure, or even work as advertised. It’s going to suck when all projects become that, when you can’t trust that a library works as advertised.


> How can I reasonably trust that the tools I'm using are built in a sound manner, when the barrier to producing good-looking bad code is so low

Honest answer: You never could.


Hm? The article is pretty clear about two claims, IMO: (1) good code has been rare for a long time because the job is a pragmatic one and not a philosophical one but that sometimes "good code" pays off down the line, and (2) possibly the "pays off down the line" will be less important in the future with AI coding tools.

And the comment by 'ElatedOwl is pretty directly responding to that second idea.


LLMs also make refactoring for readability, simplicity and performance far easier.

Nothing has fundamentally changed! A good solution is a good solution.

I do worry that the mental health of developers will take a downturn if they’re forced into a brain rotting slop shovelling routine, however.

So yes readability and good concise code is still important.


> This post brings up a lot of (imo true) points that I honestly can't share with the ai-lovers at work because they will just get in a huff. But the OP is right - we automate stuff we don't value doing, and the people automating all their code-gen have made a very clear statement about what they want to be doing - they want _results_ and don't actually care about the code (which includes ideas like testing, maintainability, consistent structure, etc).

I haven't run into this type yet, thankfully. As an AI lover, the architecture of the code is more important to me than before.

* It’s harder to understand code you didn’t write line by line, readability is more important than it was before.

* Code is being produced faster and with lower bars; code collapsing under its own shitty weight becomes more of a problem than it was before.

* Tests/compiler feedback helps AI self correct its code without you having to intervene; this is, again, more important than it was before.
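To make that last bullet concrete, here's the kind of cheap, mechanical feedback loop I mean: a plain Minitest file (pure Ruby, no framework; `parse_duration` is a made-up helper) that an agent can rerun after every edit:

```ruby
require "minitest/autorun"

# Hypothetical helper the AI is asked to write or modify.
def parse_duration(str)
  md = /\A(\d+)(s|m|h)\z/.match(str) or raise ArgumentError, "bad duration: #{str}"
  n = md[1].to_i
  { "s" => n, "m" => n * 60, "h" => n * 3600 }.fetch(md[2])
end

class ParseDurationTest < Minitest::Test
  def test_converts_units_to_seconds
    assert_equal 90 * 60, parse_duration("90m")
    assert_equal 2 * 3600, parse_duration("2h")
  end

  def test_rejects_garbage
    assert_raises(ArgumentError) { parse_duration("soon") }
  end
end
```

When a test like this fails, the failure message plus backtrace is exactly the kind of signal the model can consume and act on without a human in the loop.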

All the problems I liked thinking about before AI are how I spend my time. Do I remember specific ActiveRecord syntax anymore? No. But that was always a Google search away. Do I care about what SQL those ORM calls actually generate and what the planner does with it? Yes, and in fact it’s easier to get at that information now.


I keep seeing “Claude image understanding is poor” being repeated, but I’ve experienced the opposite.

I was running some sentiment analysis experiments; describe the subject and the subject's emotional state, that kind of thing. It picked up on a lot of little details: the brand name of my guitar amplifier in the background, what my t-shirt said and that I must enjoy craft beer and/or running (it was a craft beer 5k kind of thing), and it tracked my movement through multiple frames. This was a video sliced into a frame every 500ms; it noticed me flexing, giving the finger, appearing happy, angry, etc. I was really surprised how much it picked up on, and how well it connected those dots.


I regularly show Claude Code a screenshot of a completely broken UI--lots of cut off text, overlapping elements all over the place, the works--and Claude will reply something like "Perfect! The screenshot shows that XYZ is working."

I can describe what is wrong with the screenshot to make Claude fix the problem, but it's not entirely clear to what extent it's using the screenshot versus my description. Any human with two brain cells wouldn't need the problems pointed out.


This is my experience as well. If CC does something, and I get broken results and reply with just an image it will almost always reply with "X is working!" response. Sometimes just telling it to look more closely is enough, or sometimes I have to be more specific. It seems to be able to read text from screenshots of logs just fine though and always seems to process those as I'd expect.


This will get written off as victim blaming, but there’s some truth here.

I don’t use Claude code for everything. I’ve fallen off the bike enough times to know when I’ll be better off writing the changes myself. Even in these cases, though, I still plan with Claude, rubber duck, have it review, have it brainstorm ideas (“I need to do x, I’m thinking about doing it such and such way, can you brainstorm a few more options?”)


I agree with that being fastest, but not cheapest.

In my experience these one off reports are very brittle. The app ends up making schema changes that are breaking to these one off reports, and you usually don’t find out until it goes to production.

I’ve dealt with the maintenance nightmare before. At current gig we’re exploring solutions, curious what a robust pipeline looks like in 2025.

The ORM piece is interesting — we use ActiveRecord and Ruby, and accidentally breaking schema changes within the app get caught by the unit test suite. I would love a way to bring OLAP reports in similarly, to test at CI time.


Why not test the OLAP reports?

Surely there is a way to run a raw query in Rails/ActiveRecord and use it in a smoke test?


They call this Data Contracts.
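A minimal pure-Ruby sketch of that idea (the table and column names are invented; a real setup would introspect the database, e.g. via ActiveRecord's schema cache, rather than hard-code a hash):

```ruby
# Each report declares the columns it depends on; CI fails when a schema
# change breaks that contract, instead of the report breaking in production.
REVENUE_REPORT_CONTRACT = { "orders" => %w[total_cents created_at] }.freeze

def contract_violations(contract, schema)
  contract.flat_map do |table, cols|
    (cols - schema.fetch(table, [])).map { |col| "#{table}.#{col}" }
  end
end

current_schema = { "orders" => %w[id user_id total_cents created_at] }
contract_violations(REVENUE_REPORT_CONTRACT, current_schema)
# => [] -- contract holds

renamed = { "orders" => %w[id user_id amount_cents created_at] }
contract_violations(REVENUE_REPORT_CONTRACT, renamed)
# => ["orders.total_cents"] -- fail the build before this reaches production
```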


I mean, if you're relying on tests to catch schema changes... then test your SQL reports? This doesn't seem like an amazingly cool solution, but if that's the one you're already using...


> As the recent US election has shown, it's not what you say - but how you say it, that counts.

Are you implying that Biden spoke with more confidence than Trump, but Harris did not?


I spent 10 years doing C#, and the last 3 doing Ruby. I never thought of N+1 as that big of an issue. These queries are typically fast (1ms * 100 is still only 100ms…) and multithreaded web servers are non blocking on IO like database calls.

But these sporadic elevated response times kept showing up on endpoints, where they’d be hundreds of milliseconds slower than normal, always by some multiple of 100ms. Say, normally 5ms, now taking 105ms, or 505ms, or more.

Then I learned about Ruby’s non-parallel but concurrent model, where within a process only one thread can execute at a time. In most workloads you’ll hit IO quickly, and the threads will play nicely. But if you have a CPU-crunching thread, it’ll delay every other thread waiting to execute by up to 100ms before it gets preempted. Now consider doing 10 1ms queries in-process alongside a greedy thread, and you’re waiting at minimum 1010ms.
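The arithmetic behind that back-of-the-envelope number, as a sketch (100ms is MRI's default thread scheduling quantum):

```ruby
QUANTUM_MS  = 100  # MRI's default thread scheduling quantum
QUERY_IO_MS = 1    # each N+1 query's actual IO time
QUERIES     = 10

# Worst case: after each query's IO completes, the thread re-queues for the
# GVL behind a CPU-bound thread that burns its full quantum first.
worst_case_ms = QUERIES * (QUERY_IO_MS + QUANTUM_MS)
# => 1010
```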

Still love Ruby but the process model gave me a reason to hate N+1s.


Since Rails 7.1 we've had https://www.rubydoc.info/github/rails/rails/ActiveRecord%2FR... which actually does run queries in parallel.

There's also Rails' russian doll caching, which can actually result in pages with n+1 queries running quicker than ones with preloaded queries. https://rossta.net/blog/n-1-is-a-rails-feature.html


load_async is still concurrency, not parallelism. The queries themselves can run in parallel, but when, e.g., materializing AR objects, only one thread can run at a time. A greedy thread in the process will still subject you to GVL waits.
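A tiny standalone illustration of that distinction: blocking on IO (like a thread parked on a DB socket, here stood in for by sleep) releases the GVL, so the waits overlap even though Ruby code itself never runs in parallel:

```ruby
def elapsed
  t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - t0
end

# Two 100ms "queries" waiting on IO: the sleeps overlap, so the total is
# roughly 100ms rather than 200ms.
io_wait = elapsed do
  2.times.map { Thread.new { sleep 0.1 } }.each(&:join)
end

puts format("two overlapping 100ms IO waits took %.0fms", io_wait * 1000)
```

Swap the sleeps for CPU-bound work (say, summing a large array) and the total roughly doubles, because only one thread can hold the GVL while executing Ruby.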


If that’s a problem for you right now I’d suggest giving JRuby a look as it has no GVL and true multithreading.

Hopefully as Ractors mature that problem will be solved for MRI too.


Writing is really beneficial for exploring the problem space.

Many times I’ve confidently thought I understood a problem, started writing about it, and come away with new critical questions. These things are typically more visible at the abstract level, or might not become apparent in the first few release milestones of work.

I’m reminded of a mentor in my career, who had designed an active/active setup retroactively for a payment gateway. He pulls up a Lucidchart and says “this diagram represents 6 months of my life”.

They’re not always necessary or helpful. But when they are you can save weeks of coding with a few days of planning.


I had a boss who had a math degree. He'd map out the flow from start to finish on a whiteboard like you see mathematicians on TV/movies. Always had the smoothest projects because he could foresee problems way in advance. If there was a problem or uncertainty identified, we'd just model that part. Then go back to whiteboard and continue.

An analogy is planning a road trip with a map. The way most design docs are built now, they show the path and you start driving. Whereas my boss's whiteboard maps "over-planned" where you'd stop for fuel, attraction hours, docs required to cross the border, budget for everything, emergency kit, Plan A, Plan B.

Super tedious, but way better than using throwaway code. Not over-planning feels lazy to me now

Sure, everyone has a plan until you get punched in the mouth; however, that saying applies to war, politics, negotiations, but not coding.


In the book How Big Things Get Done [0] they analyze big and small project failures and successes and end up with something along these lines:

1. Spend as much time in planning as necessary; in the context of mega projects, planning is essentially free, so maximize the time and value gained in planning.

2. Once you start executing the plan, move as fast as possible to reduce the likelihood of unforeseen events and also to reduce cost increases due to inflation, interest paid on capital, etc.

[0] https://www.goodreads.com/book/show/61327449-how-big-things-...


+1 for "How Big Things Get Done". It changed the way I run projects. I got lucky in the sense that I was able to convince my corporate overloads to allow us to have separate Discovery and Delivery goals, on the premise that discovery is cheap and delivery is expensive (the former significantly reduces risk of the latter) and we show our work. Discovery goals come with prototype deliverables that we're ok not shipping to production but most times lay the foundational work to ship the final product. Every single time we've found something that challenged our initial assumptions and we now catch these issues early instead of in the delivery phase.

We've fully embraced the "Try, Learn, Repeat" philosophy.


> convince my corporate overloads to allow us to have separate Discovery and Delivery goals

Since I’m in the middle of trying to do something similar, I’d love to hear more details. What kind of goals, and what’s the conflict?


Yes I have to second that. MLJ.jl is also written by a mathematician and the API is excellent. Truly well thought-out.

(If you think “why does MLJ.jl have so few stars?” please keep in mind that this library was written for the Julia language and not for Python. I honestly don’t think the library is the cause of low popularity. Just wrong place wrong time.)


First you have to have smart people who will be able to foresee design issues.

That’s a bit uncharitable but following this line of thought - you also need those smart people to be confident and communicative.


And for them to be listened to, which is independent of how well they communicate; and for them to be aligned with the most powerful stakeholder, which is almost never the case; and for no big change to happen in an uncontrolled way, which powerful people nowadays seem intent on causing all the time.


If you create the plan like a mathematical formula, as my boss did, the evidence becomes irrefutable, like a mathematical proof. The article does mention that the plan is a communication tool.


Everywhere I worked, technically correct and irrefutable facts were thrown away and dismissed often enough, based on someone's feeling or emotion, that I don't believe an irrefutable mathematical proof is a communication tool that solves everything.

There had to be something more, like that guy's authority, or him being majority shareholder, or him being so empathetic that he knew how to handle people.


> however, that saying applies to war, politics, negotiations

It’s not even an argument against planning. You’d be a fool to go to war without a plan. The point of the saying is that you’d be a fool not to tear up your plan and start improvising as soon as it stops working.


To borrow another quote:

Plans are nothing, but planning is everything.

The process of building a plan builds the institutional knowledge you need to iterate when inevitably the original plan doesn’t work.


It is kind of an argument against overplanning though, because if the plan you spent considerable time creating becomes irrelevant, you wasted a lot of time.


That assumes the plan itself is the only useful output from the time spent planning. Even if the plan itself isn't used, the time spent planning means you examined the problem thoroughly, and raised questions that needed answering. Taking the time to think about those questions in order to give a coherent answer is, in and of itself, worthwhile for answering the question later, even if that part's never actually written down.


True, I agree 100%, and that's why I chose to say 'irrelevant' to imply that there was nothing useful about it inherently for those cases. Most of the time, at least in coding, there was probably something useful that came out of it, even if you had to scrap the plan. At the very least, some sort of learning more about the problem space. In the case of war, however, if you lost the war because you over-planned (such as planning one thing very very intricately instead of having several rough plans that leave room for some improv), I'd argue that there probably aren't any residual benefits to celebrate


I had to do this for a patent application, and likewise found it very useful for identifying holes in my thought process or simply forcing myself to do the functional design work up-front and completely.

It was also great for brainstorming about every feature and functional aspect you can imagine for your product, and making an effort to accommodate it in your design even if it's not MVP material.


> but not coding.

In my experience it applies to coding when you have any reliance on third party libraries or services and don't have an extensive amount of actual real world experience with that technology already.


If you have unknowns, then your planning process starts with, "let's figure out how to use this new technology." And that process can involve a bunch of prototyping.

Having to make a choice between "make a design document" or "do prototyping" is a false dichotomy. They're complementary approaches.


This right here <- is why every discussion in the SWE space is super tedious. Every critique of anything is really just "you are holding it wrong".

Over-planning is impossible if you plan for it, thanks!


>Sure, everyone has a plan until you get punched in the mouth; however, that saying applies to war, politics, negotiations, but not coding.

"hey, the EU just introduced this new regulation" is the software version of getting punched in the mouth.


> Super tedious, but way better than using throwaway code. Not over-planning feels lazy to me now

How was it better? I think a lot of people plan precisely because it feels virtuous, but that's true regardless of whether it's effective or not.


My boss would take a piece of data/input and run it through the entire process. It's string data here, converts to a number here, a function transforms it here, summarized here, output format there... You wouldn't run into data type issues or have an epiphany that you're missing a data requirement.
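That whiteboard exercise can be mimicked in a few lines: push one sample value through each stage and look at the intermediate values and types (the stages here are invented for illustration):

```ruby
stages = {
  "parse"     => ->(s) { Integer(s, 10) },     # "42"  -> 42 (string to number)
  "transform" => ->(n) { n * 2 },              # 42    -> 84
  "format"    => ->(n) { format("%05d", n) },  # 84    -> "00084"
}

# Thread one sample input through every stage, recording each step.
trace = stages.each_with_object([["input", "42"]]) do |(name, fn), acc|
  acc << [name, fn.call(acc.last[1])]
end

trace.each { |name, value| puts "#{name}: #{value.inspect} (#{value.class})" }
```

Any type mismatch or missing requirement surfaces at the exact stage where the data stops making sense, which is the point of the exercise.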


If the data transformations are the hard part, sure. But often the hard part is whether you're even outputting the right thing at all. Also, if you're planning in that much detail, you might as well be writing code (perhaps with some holes).


> Super tedious, but way better than using throwaway code. Not over-planning feels lazy to me now

Certain projects have too many unknowns to overplan and you need to collect data to construct the assumptions necessary to evaluate the approach.


Man plans and executives laugh


I agree writing is beneficial. But I also find this works with coding. And they go hand in hand for exploring in my experience.

And in the end a good PR has a lot of writing too and has this effect. IMO this sort of well documented draft PR serves as a better design proposal because pure writing causes you to forget important constraints you only remember when you’re in the code.


Agreed. You can write prose or implementations or tests beforehand and I don't think it matters too much which you choose, just as long as you give yourself a phase 2 for incorporating the learning you did in phase 1 and put a reality check of some kind in between them.

The only problem with having the draft being implementation is that maybe you'll get pressured into shipping the draft.


I’ve had to ship the draft a few times in my career. Usually when the actual code would have been weeks or months more of work (draft has poor architecture, while a proper architecture would have been just as much work as the draft). Twice it was due to showing a demo and a decision maker in the audience said “we can sell this tomorrow” or something to the same effect for that org. In one case, we ran a simple a/b test as a proof of concept on whether to pursue the idea further and it added an extra million bucks a year in revenue. Nobody wanted to wait for a proper implementation. All that code is still in production, slow as shit and nobody wants to fix something that isn’t broken.

If you have a draft, keep it to yourself. Use it as a personal reference when writing the design, or share snippets. Other engineers will realize you have a draft, business people won’t.


> In one case, we ran a simple a/b test as a proof of concept on whether to pursue the idea further and it added an extra million bucks a year in revenue.

I'm with the people who decided to ship this. The organization will need to fund more maintenance than they would if they waited, but that has real costs. And "keep your 1mm/revenue idea to yourself" doesn't sound like a healthy engineering culture either.


Heh, yeah, that is a special case though. Most people don't work on things that have that kind of impact. If yours might, I would suggest going against my own advice.


The time spans for doing it properly or shipping are maybe a bit muddled here, but if you can make a million dollars by shipping something now, you should probably ship it now. If the code is that bad (ie can end up costing way more to maintain than fix) then you should afterward immediately fix it. Side benefit is you’ll probably learn a lot about doing it right from the prod system.


Oh we tried to fix it and never succeeded. As soon as other teams found out about it, they started using it too. To be honest, had we waited to do it right, we potentially could have made even more money.


I'm not sure what you mean by "should". If your job is to build things that aren't going to fail in ways that later end the company or hurt people, then finding ways to keep the:

> But we could make an additional million next quarter if we ship it now

...crowd in check is probably what you should be doing.


I like where your head is at but if you keep it to yourself then you can't get feedback on it.

I once did the draft with ncurses as a hedge against it becoming the real thing. It didn't go over especially well but it was fun.


"Writing is nature's way of letting you know how sloppy your thinking is." -- Dick Guindon


I agree that LLMs' capabilities with a language are going to be extremely relevant. Community, API consistency, and whatever other factors increase LLM usefulness will decide the popularity of languages in the coming years.

I’m not sold on the importance of static typing though. I’ve had great results with Ruby and Python with 4o, o1, and to a limited degree Copilot.

One of the biggest benefits of Ruby is how simple testing is. The language is so dynamic that mocking/stubbing and intercepting or whatever is dead simple stupid.

So the “static types prevent you from shipping LLM hallucinations” argument does not hold for me. I’m going to write tests covering the method (which the LLM will probably help with), and I’m going to get an undefined method error.
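For illustration, a self-contained version of that workflow in plain Minitest. The `Weather` class and `FakeClient` are made up; the point is that the dynamic stub is one line of duck typing, and a hallucinated method still fails loudly:

```ruby
require "minitest/autorun"

class Weather
  def initialize(client)
    @client = client
  end

  def summary
    "High of #{@client.forecast[:high]}"
  end
end

class WeatherTest < Minitest::Test
  # Duck typing makes the stub trivial: any object answering #forecast works.
  FakeClient = Struct.new(:forecast)

  def test_summary
    weather = Weather.new(FakeClient.new({ high: 31 }))
    assert_equal "High of 31", weather.summary
  end

  def test_hallucinated_method_fails_loudly
    weather = Weather.new(FakeClient.new({ high: 31 }))
    # The kind of typo an LLM might produce: no type checker involved,
    # but the test still catches it as NoMethodError.
    assert_raises(NoMethodError) { weather.sumary }
  end
end
```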


It's very evident this is the case if you generate similar code in JavaScript versus TypeScript.

Mismatching types can really help you spot mistakes early on instead of at runtime; plus, with the LLM, generating trivial boring types is very straightforward.

The same effect is visible in Rust too, and you'll quickly catch APIs that don't exist or that are being used incorrectly, albeit LLM understanding of Rust is really bad compared to other mainstream languages.


> both our BMI's hover just over 30.

> We're active, we eat well

Respectfully, you are both obese. If you’re ok with that, that’s ok; but to say your diets are appropriate is telling yourself a lie.


Respectfully, both of our diets are totally fine, and I don't need your opinion to let me know that. Any biomarker you might like to use tells us that we're fine; our weight is the only number that is "outside the norm", and while, sure, it's a measure of overall health, it's not the only one or even the best one.


To be clear, being obese is dangerous in the long-term. Your biomarkers being okay now doesn't mean you're good to go. Obesity increases your risk of pretty much everything bad. That doesn't mean you're magically unhealthy, but certainly your risk is greater.

That doesn't mean you need to change anything or that you're weak or whatever people might say. I do tons of unhealthy stuff that are fine for the time being. I drink for one - that's gonna catch up to me.


Also basic biomarkers are not a complete picture of your health. Obesity puts stress on the body in a variety of ways, some of which might not show up until it gets to a threshold point. For example, strain on joints.


What's the best measure of overall health?


BMI is a shit metric and must be taken with a grain of salt. If you take a not-so-tall person with large muscle mass and sub-10% body fat, you can still end up in the severely overweight / obese range.

Because muscle is far denser than fat, and we should also factor in bone mineralization and bone weight in those who do resistance sports.

So take BMI = kg/m2 with precaution, as there are better metrics, such as waist-to-height ratio.
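To put numbers on that, a quick sketch of the two formulas for a hypothetical stocky lifter (the sample figures are invented):

```ruby
def bmi(weight_kg, height_m)
  weight_kg / (height_m * height_m)  # >= 30 is classed as obese
end

def waist_to_height(waist_cm, height_cm)
  waist_cm / height_cm.to_f          # > 0.5 is the commonly cited risk threshold
end

bmi(98.0, 1.80).round(1)          # => 30.2 -> "obese" by BMI alone
waist_to_height(88, 180).round(2) # => 0.49 -> under the risk threshold
```

The two metrics disagree for this person, which is exactly the outlier case being argued about.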


You know if you are a BMI outlier. It's not interesting to discuss outliers as they are rare by definition.

BMI is a fine metric for describing a population's general health when it comes to weight. The actual reason the metric exists.

There are exceedingly few folks with 10% body fat and a BMI of 30. They tend to be clustered around professional athlete or bodybuilder circles. Again, not interesting to discuss these things outside of niche circles. Those that are outliers know already, due to the work they put in to be such.

No one is walking around with a BMI of 30 and happening to accidentally be at a healthy weight due to low body fat percentage/high lean muscle mass and not knowing it.


Generally, short buff folks aren't having breakdowns about being overweight and unhealthy.

