Hacker News | afry1's comments

And the leaders of these companies are so genuinely surprised that people are refusing this technology.

Sam Altman: “Looking at what’s possible, it does feel sort of surprisingly slow.”

Satya Nadella: “For this not to be a bubble by definition, it requires that the benefits of this are much more evenly spread.”


Translation: "Look at all these poor executives who don't yet have yachts because we still have to pay workers!"


"The future belongs to whoever understands what they just shipped."

Perfect summary.

It's like we invented a world where you can finally, _finally_ speedrun an enormous legacy codebase and all patted ourselves on the back like that was a good thing.


I like that. These AIs are legacy codebase generators. Nobody knows how it works and everyone is afraid to touch it.


Anything in production is legacy; I'm pretty sure it happens as soon as the code is shipped regardless of who wrote it.


True, but I think there's another dimension implied: how many devs are left that understand the code? Being able to start at zero is a fascinating surprise (compared to five years ago).


At least with a person, you can say that there's one person in your org that understands the code after they write it and submit it for review.

Maybe they stick around for a while, maybe they move on to another job, but they were THERE at some point. They have a name. You can ask them questions about what they did. And hey, they still exist in the real world too, so you can get in touch with them even after they leave if you need to.

AI powered development is like a guy shows up, gets hired for 90 seconds, writes part of a feature, and dies instantly once the code hits the screen.


A comment I cannot stop thinking about is "we need to start thinking about production as throwaway." Which is a wild thought when I reflect on my career. We had so many DBs and servers that we couldn't touch because they were special snowflakes.


Yup. AI can't automate long-term responsibility and ownership of a product. It can produce output quicker but somebody still has to be responsible to the customer using said product. The hard limit is still the willingness of the human producing the code to back what's been output.


I agree with most of the article, right up to the point where the assumption is that AI will make things worse.

We have reached a point of complexity and short-termism where it's standard practice to shove a huge, barely tested mass of Python, JavaScript, shell scripts, and who knows what else inside a Docker container and call it done. Complete with hundreds or thousands of intractable dependencies, many of which we know ~nothing about, and thousands of lines of esoteric configuration for servers we have barely any hope of even getting to run optimally, let alone securely.

Most software has been awful for a while.

Already, with AI:

- We can build everything in a statically typed, analysable, and memory safe language like Rust[0], even bits that really have to interact with the shell or wider OS and would previously have been shell scripts

- We can insist on positively deranged amounts of testing, at every level

- We can easily cut the number of dependencies in our code by >50%, in many cases more like 90%

- We can do the refactor as soon as it becomes obvious that it would be a good idea

- We can implement quality of life admin and monitoring features that would otherwise sit on the backlog for eternity

- We can educate ourselves about what we've built[1] by asking questions about our codebase, build tools to understand the behaviour of our systems, etc.
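The first bullet's point about Rust replacing shell scripts can be sketched concretely. This is a minimal illustration in my own words (the command choice is mine, not the author's): instead of a shell one-liner, the OS interaction goes through a small, type-checked program where every failure must be handled explicitly.

```rust
use std::process::Command;

// Hypothetical sketch: a shell one-liner like `uname -s` rewritten as a
// small Rust program. Assumes a Unix-like system where `uname` exists.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run the external command and capture its output, as a script would.
    let output = Command::new("uname").arg("-s").output()?;
    if !output.status.success() {
        // Unlike a shell script without `set -e`, failure can't be silently ignored.
        return Err(format!("uname exited with {}", output.status).into());
    }
    // Output bytes are validated as UTF-8 rather than assumed to be text.
    let os = String::from_utf8(output.stdout)?;
    println!("running on: {}", os.trim());
    Ok(())
}
```

The point is not the trivial command but the posture: exit codes, encoding, and errors are all surfaced by the type system instead of being implicit.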

So yes, I agree that "The Future Belongs to Whoever Understands What They Shipped", but unlike the author I am somewhat optimistic[2]. There is more opportunity than ever to build and understand extremely high quality software that does not accept technical debt, corner cutting to meet deadlines, or poor quality (in design or implementation), for those engineers who are knowledgeable enough and willing to embrace the new tools.

And AI, and the tooling around it, is only getting better.

[0] or Go or even TypeScript, but there's precious little reason not to pick Rust for most use cases now

[1] of course we need to choose to, and many won't…

[2] of course, there'll also be near-infinite valueless slop and some people will get sucked into that, but this seems little different to regurgitated SEO spam, short form video, and all the non-AI enshittification we already put up with, and perhaps AI will help more of us do a better job of avoiding it


We are speedrunning legacy "codebases" all the time. Or do you conjure up your own pickaxe, mine your own minerals, produce your own electricity, and construct your own computers and networks first before you go off to develop an application? Would you even know how to do those things? That is all enormous legacy codebase that we speedrun all the time. Just add one more to it.


When I use a library someone understood it when they shipped it. It also had a long stabilisation period where bugs were fixed in it. When I use an LLM, potentially nobody understands what was just shipped and it has had no time to stabilise.


I sure don't.

But when I'm using all of those things (pickaxe, mineral mine, power station, internet network hub), I know that there was a thinking human being that took some measure of human care and consideration when creating them. And that there are people on the other side of the economic transaction to talk to or hold accountable when something goes wrong.


You're talking about layers of abstraction; OP was talking about an ever-ballooning mass of code in the same layer of abstraction.


That’s all legacy but none of it is speedrunning.

If we could conjure pickaxes and electric power plants in a single day, that would be speedrunning.


If code is now free, why does the language matter at all?


All code has bugs, the vector space of all possible bugs determines the entropy of the problem space for a large language model to traverse.

Reduce entropy, increase probability of the correct outcome.

LLMs are surfing higher dimensional vector spaces, reduce the vector space, get better results.


Because you'll have to review it, and Go's design limits the number of ways it can go wrong.

Code is free, sure, but it's not guaranteed to be correct, and review time is not free.


If code must be correct, and review time is not free, and review time is costlier than code time, why not just ...

... write the code yourself?


I 100% agree, and do, but it's an answer to the GP's question (why generated code language matters)

I think many many people just skip the "review" step in this process, and assume they're saving time. It's not going to end well.


Cause it's still quicker to review only than to code + review. And review time isn't necessarily costlier than code time.


I thought about this for a while and came to a conclusion that while "code is free", tokens are not. If tokens were free and instant, it would generate machine code directly. Therefore, it needs abstractions like a compiled or interpreted language in order to address the token bottleneck.


Those Lasko fans have pretty raw edges on the blades of the fan itself, which I think contributes a lot to the noise. If you take the cover off, sand down the nubs and bits of flaking plastic, and reassemble, I think that will take care of a lot of noise.


Blade geometry would make a lot of difference. Could be a fun at home science experiment to do with a 3D printer and $200 worth of filament.


You want to discourage turbulence at the tips. Blade tips, like the risers at the edge of some jet wings, would probably help.


It is very possible!

Just this year these girls discovered a proof for the Pythagorean theorem using nothing but trigonometry, a feat considered impossible until they did it: https://youtu.be/VHeWndnHuQs


Unfortunately it seems their proof already had the Pythagorean Theorem embedded within its implicit assumptions - they define the measure of an angle through rotation of a circle. They don't explicitly define circle, but from their diagram they hint at the "understood" definition, namely a set of points equidistant from a central point, while using Euclidean distance as the metric.


That's not true at all.

To understand why, read Euclid.


Geometry has made a bit of progress since Euclid's time. It's become a bit more rigorous.

Euclidean geometry is based on five axioms, and some other terms left undefined.

The fifth postulate - the parallel postulate - was considered so irksome that for hundreds of years, many attempted to prove it using the other four, but failed to do so, and it almost drove some crazy. In the late 19th century it was shown you can generate perfectly valid geometries if you assume it to be false somehow - either no parallels (spherical geometry) or infinitely many parallels (hyperbolic geometry).

Euclid's third postulate - "a circle can be drawn with any center and radius" - doesn't define how to do it. Like I could draw a "circle with a radius of 1" using taxicab distance, and it would look like a diamond shape.
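To make the taxicab remark concrete (my formulas, not the commenter's): the two metrics give visibly different "unit circles".

```latex
% Euclidean metric: the unit circle is the familiar round circle.
d_E(p,q) = \sqrt{(x_p - x_q)^2 + (y_p - y_q)^2},
\qquad \{(x,y) : d_E((x,y),(0,0)) = 1\} = \{(x,y) : x^2 + y^2 = 1\}.

% Taxicab metric: the "circle" of radius 1 is a diamond
% (a square rotated 45 degrees).
d_T(p,q) = |x_p - x_q| + |y_p - y_q|,
\qquad \{(x,y) : d_T((x,y),(0,0)) = 1\} = \{(x,y) : |x| + |y| = 1\}.
```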

Conversely, if you take the "conventional" definition, then the Pythagorean theorem falls out almost immediately.


The (non-generalized) Pythagorean theorem is part of Euclidean geometry, so non-Euclidean geometry is irrelevant to this discussion.

> Euclid's third postulate - "a circle can be drawn with any center and radius - doesn't define how to do it.

You do it using an axiomatic compass, a device that copies length in a circular pattern but does not measure it. Lengths are measured using constructable line segments.

Are you implying that nearly all the hundreds of proofs of Pythagorean theorem, which do not use modern rigorous definitions, are not valid proofs?

> Conversely, if you take the "conventional" definition, then the Pythagorean theorem falls out almost immediately.

So? The Pythagorean theorem is very easy to prove. There are hundreds of proofs created by amateurs. That doesn't make them "not proofs" simply because other proofs exist.


Strictly speaking, the postulates say nothing about compasses, or even straightedges/constructions. Also, introducing lengths involves introducing number, which is not a "pure" geometry concept. The third postulate just says that a "circle" exists, defined by a point and a radius (which is also not a "pure" geometry construct, since it involves a metric - i.e. number).

I would say yes, a lot of the fundamental proofs, while not strictly "incorrect" or false, are rather informal and contain some hidden axioms/circularities.

Tarski put geometry on a more secure footing using first-order logic.

Similar to how Calculus wasn't on a solid logical foundation until Riemann.


> a feat considered impossible until they did it

Hm? https://www.cut-the-knot.org/pythagoras/TrigProof.shtml

> J. Zimba, On the Possibility of Trigonometric Proofs of the Pythagorean Theorem, Forum Geometricorum, Volume 9 (2009)

And Zimba's proof terminates in a finite number of steps.


These guys have a fabulous bit on microservices that I watch at least monthly to maintain my sanity.

https://youtu.be/y8OnoxKotPQ


Microservices is probably their most popular video, but all their videos are genius.

I'm partial to I Have Delivered Value... But At What Cost? https://m.youtube.com/watch?v=DYvhC_RdIwQ


Congrats Michael! I've been following along with your story on and off since 2019-ish, very cool to read about how things are getting along.


Pound for pound, mining and processing minerals for batteries has a much smaller environmental impact compared to extracting and processing fossil fuels for gas.

It's not nothing, but way less.


I'm not sure if I believe that, at least the "pound for pound" bit.

Extracting oil usually involves drilling a hole and getting a material you can mostly (I think about 80%) turn into useful products, though hydrofracking involves handling a lot of water and oil from some places in Saudi Arabia contains a lot of sulfur that has to be removed.

On the other hand, many minerals are found in concentrations of less than 10%, often much less.

https://www.sgs.com/-/media/sgscorp/documents/corporate/broc...

The real advantages of the minerals are: (i) a car consumes about its own weight in fuel every year, so in its lifetime it consumes maybe 10x its weight in fuel, (ii) the use (as opposed to production) of that fuel has environmentally unacceptable effects, (iii) the minerals ultimately will be incorporated in a "circular economy".

Note that the automobile industry is a lot more circular than many (say food packaging) in that your local junkyard sells whatever parts it can (I know a guy who bought a used Ford truck with rusted-out doors and picked up two replacement doors from a junkyard) and will send what is left to get crushed when metal prices are high. If you smack your car up at 110,000 miles, likely you will get some body panels from this source.

Battery recycling is not a big industry now but it will be. Mining will still be more important than recycling as long as the world is in transition to electric cars. I was quite amused to find that the techniques planned for battery recycling are very similar to both established and in development techniques for recycling spent nuclear fuel.


If I had a dime for every time I saw somebody copy and paste "#myExampleWidget" into production code ...


Very cool!

This issue of the review includes a very cool submission from a few weeks ago which did pretty well on the front page: https://alex.miller.garden/grid-world/

