Hacker News | jason_oster's comments

True, but carpenters using hand tools are a niche.

If you are implying that programmers who hand code are going the way of carpenters using hand tools, I think I can agree.


I do... but I also think all programmers need to know how to hand code, and all carpenters need to know how to use hand tools.

I agree also.

I have to point out that having "high personal standards" is its own fatal flaw. The worst quality code I've seen comes from developers with little self awareness or humility. They call themselves artisans and take no responsibility for the minefield of bugs and security vulnerabilities left in their wake. The Internet is held together with bubblegum and baling wire [1] [2] because artisans reject self improvement.

These same artisans complain about how bad AI generated code is. The AI is trained on your bad artisan code. It's like they are looking in the mirror for the first time and being disgusted by what they see.

[1]: https://techcrunch.com/2014/03/29/the-internet-is-held-toget...

[2]: https://krebsonsecurity.com/2021/11/the-internet-is-held-tog...


A sufficiently detailed spec is not code. It's documentation containing a wealth of information that the code cannot express. Code describes how a product works, not what it is supposed to do. That is the job of the specification [1] [2]. Notably, the specification omits implementation details. That is the job of the code.

Confusing the *how* and the *what* is very common when discussing specifications, in my experience. Programmers gravitate toward pseudocode when they have trouble articulating a functional requirement.

> Specifications were never meant to be time-saving devices.

Correct. Anyone selling specifications as a way to save time does not understand the purpose of a specification. Unfortunately, neither does the article's author. The article is based on a false premise.

LLMs experience the same problems as humans when provided with underspecified requirements. That's a specification problem.

[1]: https://en.wikipedia.org/wiki/Software_requirements_specific...

[2]: https://en.wikipedia.org/wiki/Formal_specification
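A toy sketch of the what/how split (my own illustration, not from the article): the spec states a property that any acceptable output must satisfy, while the code commits to one particular algorithm. The names here are hypothetical.

```rust
/// *What* (specification): the output contains the same elements as the
/// input, in non-decreasing order. Any implementation satisfying this
/// property is acceptable; the spec says nothing about the algorithm.
fn satisfies_spec(input: &[i32], output: &[i32]) -> bool {
    let mut sorted = input.to_vec();
    sorted.sort();
    sorted == output
}

/// *How* (implementation): one concrete choice among many -- insertion sort.
fn sort(input: &[i32]) -> Vec<i32> {
    let mut out: Vec<i32> = Vec::new();
    for &x in input {
        let pos = out.iter().position(|&y| y > x).unwrap_or(out.len());
        out.insert(pos, x);
    }
    out
}

fn main() {
    let input = [3, 1, 2];
    let output = sort(&input);
    assert!(satisfies_spec(&input, &output));
    assert_eq!(output, vec![1, 2, 3]);
}
```

Swapping insertion sort for any other sort leaves `satisfies_spec` untouched; that separation is exactly what gets lost when a spec degenerates into pseudocode.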


Absolutely not. GPL is freedom for the authors. The end users have conditions they must meet to use the software. Those conditions are restrictions. That is precisely the opposite of freedom for end users.

To anticipate objections, the conditions keep the software "free for everyone", which is true. But that's still explicitly freedom for the authors. The conditions preemptively eliminate end users who would otherwise find the software valuable. Because it is not freedom for end users.


This is irrelevant over the long run because the environment changes even if nothing else does. A compiler from the 1980s still produces identical output given the original source code, if you can run it. Some form of virtualization might be in order, but the environment keeps changing while the deterministic subset shrinks.

Having faith that determinism will last forever is foolish. You have to upgrade at some point, and you will run into problems. New bugs, incompatibilities, workflow changes, whatever the case may be, will render the determinism property moot.


I don't know; all that supposedly pointless, time-wasting staring at hex dumps and assembly language in my youth turned out to be a pretty darned good lesson. I say playing the compiler is a worthwhile hobby.

But your point stands. There is a period beyond which doing more than learning the fundamentals just becomes toil.


This is an excellent observation and puts into words something I have barely scratched the surface of. Along with specifications, formal verification is another domain that received the "just automate it" treatment in the before times.

And because formal verification with LLMs is an active area of open research, I have some hope that the old idea of automated formal verification is starting to take shape. There is a lot to talk about here, but I'll leave a link to the 1968 NATO Software Engineering Conference [1] for those who are interested in where these thoughts originated. It goes deeply into the subject of "specification languages" and other related concepts. My understanding is that the historical split between computing science and software engineering has its roots in this 1968 conference.

[1]: http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PD...


You're putting a lot of responsibility on a license that has several permissive contemporaries. The original BSD license "Net/1" and GPL 1.0 were both published in 1989, while the MIT license has its roots in "probably 1987" [1] with the release of X11.

No doubt, GPL had some influence. But I would hardly single it out as the force that ensured software stayed open. Software stayed open because "information wants to be free" [2], not because some authors wield copyright law like a weapon to be used against corporations.

[1]: https://opensource.com/article/19/4/history-mit-license

[2]: A popular phrase based on a fundamental idea that predates software.


The existence of permissive licenses like BSD or MIT does not show that copyleft was unimportant. Those licenses allowed code to remain open, but they also allowed it to be absorbed into proprietary products.

The GPL’s significance was that it changed the default outcome. At a time when software was overwhelmingly proprietary, it created a mechanism that required improvements to remain available to users and developers downstream.

GCC was a massive deal, for example; it's a big part of the reason compilers are free today.


I did not say it was unimportant. I said it was not the only important factor.


GPL was a response to Symbolics incorporating public-domain code into their software without giving back to the community (and Lisp Machines, Inc.).


I’m not saying it’s the only force. But if it wasn’t instrumental, what’s your take on the cause of proprietary software dominating until relatively recently?


You certainly argued as if the GPL were the only force, or at least ignored the contribution of alternative licenses.

I also wouldn't agree that proprietary software is in decline. Entire segments, like operating systems, mobile apps, and games, are almost entirely proprietary (and that is not changing any time soon). But the most damning problem is that all computer hardware now has multiple layers of subsystems with proprietary software components, even if the boot loader and beyond are ostensibly FOSS.

My take on the cause of proprietary software is "the bottom line". Companies want to sell products and they believe that it's easier to sell things that are not open source. Meanwhile, there are several counterexamples of commercial products that are also open source (not necessarily copyleft), including computer games. The cause of whatever decline you're seeing in proprietary software dominance is unlikely to be the GPL.


> You certainly made the case that the GPL was the only force

Nope.


The vast majority of running instances of operating systems are Linux or BSD. I don't think proprietary software has dominated for 15-20 years.

The two places it has won out thus far are retail and SaaS. The environment of 1980, when most important software was locked behind proprietary licenses, is quite far behind us.


Since Linux is GPL this seems to support my point.


Linux won against the multiple proprietary Unixes because it forced corporations to contribute back instead of keeping their secret sauce for themselves.


And the same corporations are now pushing the BSD license at every turn, just to avoid having to do that.


This confuses the economics of open source. It's easier to contribute changes upstream than to maintain a fork. A smart business decision is using permissively licensed software that is maintained by other teams (low maintenance cost) while contributing patches upstream when the need arises (low feature cost).

Bringing a fork in-house and falling behind on maintenance is a very bad idea. The closest I've ever come to that in industry was deploying a patch before the PR was merged.


Proprietary Unixes were literally that at the scale of an entire OS.


The downvotes on the above post are telling -- the GPL Bolsheviks are girding their loins. Myself, I am nostalgic for "information wants to be free" and find the Bolsheviks to embody a horseshoe-theory form of fascism who, somehow without cognizance of the irony, attempt to redefine the meaning of freedom.


I agree it's probably monomorphization (speculating without looking at it). Generic function parameters might be the root cause, but the number of dependencies is a combinatorial multiplier.

I've hit compiler bugs that behave this way. Here's one from an LLVM upgrade [1]. The test case I discovered apparently took over 20 minutes to compile, up from 26 seconds on stable! Their value tracking algorithm was accidentally quadratic.

[1]: https://github.com/rust-lang/rust/issues/137909
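To make the monomorphization point concrete, here's a minimal sketch (my own illustration, not from the crate in question): every distinct concrete type passed to a generic function stamps out a separate copy of it at compile time, so types-times-generic-functions-times-dependencies multiplies the code the backend has to chew through. A trait object is the usual escape hatch, compiling once at the cost of dynamic dispatch.

```rust
use std::fmt::Debug;

// Monomorphized: the compiler generates one copy of `describe`
// for every concrete T it is called with (i32, &str, ...).
fn describe<T: Debug>(value: T) -> String {
    format!("{value:?}")
}

// Trait object: a single copy is compiled; the `Debug` impl is
// chosen at runtime through a vtable instead of at compile time.
fn describe_dyn(value: &dyn Debug) -> String {
    format!("{value:?}")
}

fn main() {
    // Two instantiations of `describe` end up in the binary...
    assert_eq!(describe(42_i32), "42");
    assert_eq!(describe("hi"), "\"hi\"");
    // ...but both calls below share one `describe_dyn`.
    assert_eq!(describe_dyn(&42_i32), "42");
    assert_eq!(describe_dyn(&true), "true");
}
```

Neither form is "right"; the point is just that heavy generics across many dependencies hand the optimizer a lot more code, which is where quadratic passes like the value-tracking bug above really hurt.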


> But it’s still too many dependencies and too slow. The binary is pretty huge too.

Ah yes, my never-ending crusade to raise awareness that the cost of free (as in effort) dependencies is bloat.

You can make useful tools that are tiny and compile fast [1], but it takes a lot of effort; exactly what developers don't want to spend.

[1]: Like https://github.com/parasyte/hd -- And I wrote about one of the tricks that it uses: https://blog.kodewerx.org/2023/04/improving-build-times-for-...
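For what it's worth, a common starting point for trimming a Rust binary is the release profile in Cargo.toml. This is a sketch of well-known knobs (in the spirit of the min-sized-rust checklist), not necessarily the trick the linked post describes:

```toml
# Hypothetical release profile tuned for a small binary.
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # cross-crate inlining, then dead-code elimination
codegen-units = 1   # better optimization, less build parallelism
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```

Note the tension with the compile-time complaint: `lto = true` and `codegen-units = 1` make release builds slower, so this buys size at the cost of build speed. Cutting dependencies is the only knob that improves both.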

