jevndev's comments | Hacker News

I love ligatures, but I wish there were tooling for context-sensitive ones. This is a really good example. When developing, I love <= turning into ≤. When running a CLI that happens to use <= for the start of its progress bar… not so much.

You are in luck! Editors do support customizing which ligatures get used where. For example, ligature.el lets me set only certain ligatures in certain modes. I like ligatures in Haskell, but dislike them in prose. I don't really customize at a finer-grained level than modes, but I could. Other editors should have similar configs.

Reasonably certain the practice of naming combinators after birds comes from “To Mock a Mockingbird” by Raymond Smullyan. I don’t have the book on hand to verify, but figured I’d drop it here because it’s a great bunch of logical puzzles.

As it says in the footnote of TFA.

The common argument for a language feature is standardization of how you express invariants and pre/post conditions, so that tools (mostly static tooling and optimizers) can be designed around them.

But as with modules and concepts, the committee has opted for a staggered implementation. What we have now is effectively syntactic sugar over what could already be done with asserts, well-designed types, and exceptions.


This is true about LLMs themselves, but the developments behind them have been a boon for robotics. I’m mostly familiar with computer vision so I can’t speak to everything, but vision transformers (ViT is the term to search for) have helped a ton with persistence of object detection/tracking. And depth estimation techniques for monocular cameras have accelerated past the top-of-the-line raw CNN-based models from just a few years ago, largely by adding attention layers to their models.

I agree that they’re not there yet, but I don’t want to discredit the benefits of these recent advancements.


Unfortunately this practice is still prevalent. Recently I’ve been applying to jobs in the two industries I have experience in (algorithmic robotics and fintech), and nearly half of the companies I’ve heard back from start with either a timed leetcode problem or an HR interview immediately followed by a timed leetcode problem. It’s exhausting.

Interesting. I am going for a broad search rather than being targeted. Maybe it's as you say, an industry specific problem. At my last fintech job they just quizzed me a bit on Terraform and asked me about experience, though that role ended up being a disaster later on.

My least favorite by far is the “multi-section” webpage design, where the page is split into multiple whole-screen sections and scrolling the mouse wheel alternates between moving between sections and playing the animations of that section. Yes, please make my scroll wheel only sometimes actually scroll the page and other times rotate a graphic for way too long. Thanks.

Neat example of the strengths and weaknesses of vibe coding… But if anyone here is looking for a solid browser-based parametric CAD solution, [onshape](https://www.onshape.com) is the best there is. It’s missing a few tools that more complex alternatives have, but if all you need is something easy to learn so you can make things to 3D print, it’s a good choice.


Onshape is indeed fantastic for hobbyists and professionals alike.

Their licensing model is reminiscent of early Github days in that you can use all available modeling features free of charge, but must pay for a private repo. Otherwise, all user generated content is publicly available.


> Their licensing model is reminiscent of early Github days

The licensing cost has a few more digits than GitHub ever did.

And you are locked in.


You are not wrong, but they’re much more generous than anyone else in the professional CAD space.


Yeah, it's pretty good but still crazy expensive. Most of the good CAD software has remained very expensive.


All the better that FreeCAD continues to steadily plow forward.


This is free and open source :) It also runs directly in your browser / offline. Onshape requires an account and an internet connection because it streams to your browser.


> the definitions are cloudy enough […]

This is one of the biggest traps I’ve seen in code review. Generally, everyone is coming from a good place of “I’m reviewing this code to maintain codebase quality. This technically could cause problems. Thus I’m obligated to mention it.” Since the line of “could cause problems (important enough to mention)” is subjective, you can (and will, in my experience) get good-natured pedants. They’ll block a 100-LOC patch for weeks because “well, if we name this variable x, that COULD cause someone to think of it like y, so we can’t name it x” or “this pattern you used has <insert textbook downsides that generally aren’t relevant for the problem>. I would do it with this other pattern (which has its own downsides, but I won’t state them).”


The “stop at first level of type implementation” advice is where I see codebases fail at this. The example of “I’ll wrap this int in a struct and call it a UUID” is a really good start, and you should pretty much always start there, but inevitably someone will circumvent the safety. They’ll see a function that takes a UUID and they have an int, so they blindly wrap their int in a UUID and move on. There’s nothing stopping that UUID from not being actually universally unique, so suddenly code which relies on that assumption breaks.

This is where the concept of “correct by construction” comes in. If any of your code has a precondition that a UUID is actually unique, then it should be as hard as possible to make one that isn’t. Be it by constructors throwing exceptions, inits returning Err, or whatever the idiom is in your language of choice, the only way someone should be able to get a UUID without that invariant being proven is if they really *really* know what they’re doing.

(Sub UUID and the uniqueness invariant for whatever type/invariants you want; it still holds.)
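As a minimal sketch of the idea (names and the validation check here are illustrative, not a real UUID implementation), the Rust idiom mentioned above looks like this: the wrapper's field is private, and the only public way to obtain a value runs the invariant check first.

```rust
// Hypothetical sketch: a wrapper type whose only public way to be
// constructed validates the invariant first. The check here is a
// simplified shape check, not real UUID validation.
#[derive(Debug, Clone, PartialEq)]
pub struct Uuid(String);

#[derive(Debug)]
pub struct InvalidUuid;

impl Uuid {
    /// The sole public constructor: it proves the invariant before
    /// handing out a value, so holders of a Uuid can rely on it.
    pub fn parse(s: &str) -> Result<Uuid, InvalidUuid> {
        let hex_ok = s.chars().all(|c| c.is_ascii_hexdigit() || c == '-');
        if s.len() == 36 && hex_ok {
            Ok(Uuid(s.to_string()))
        } else {
            Err(InvalidUuid)
        }
    }
}

fn main() {
    assert!(Uuid::parse("123e4567-e89b-12d3-a456-426614174000").is_ok());
    assert!(Uuid::parse("not a uuid").is_err());
}
```

Anyone handed a `Uuid` knows it passed `parse`; the blind-wrapping failure mode described above simply doesn't compile from outside the defining module.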


> This is where the concept of “Correct by construction” comes in.

This is one of the basic features of object-oriented programming that a lot of people tend to overlook these days in their repetitive rants about how horrible OOP is.

One of the key things OO gives you is constructors. You can't get an instance of a class without having gone through a constructor that the class itself defines. That gives you a way to bundle up some data and wrap it in a layer of validation that can't be circumvented. If you have an instance of Foo, you have a firm guarantee that the author of Foo was able to ensure the Foo you have is a meaningful one.

Of course, writing good constructors is hard because data validation is hard. And there are plenty of classes out there with shitty constructors that let you get your hands on broken objects.

But the language itself gives you a direct mechanism to do a good job here if you care to take advantage of it.

Functional languages can do this too, of course, using some combination of abstract types, the module system, and factory functions as convention. But it's a pattern in those languages, whereas it's a language feature in OO languages. (And as any functional programmer will happily tell you, a design pattern is just a sign of a missing language feature.)


I find regular OOP language constructors too restrictive. You can't return something like Result<CorrectObject, ConstructorError> to handle the error gracefully, or return a specific subtype; you need a static factory method to do anything more than guaranteed-successful construction without exceptions.

Does this count as a missing language feature by requiring a "factory pattern" to achieve that?


The natural solution for this is a private constructor with public static factory methods, so that the user can only obtain an instance (or the error result) by calling the factory methods. Constructors need to be constrained to return an instance of the class, otherwise they would just be normal methods.

Convention in OOP languages is (un?)fortunately to just throw an exception, though.


In languages with generic types such as C++, you generally need free factory functions rather than static member functions so that type deduction can work.


> You can't return something like Result<CorrectObject,ConstructorError> to handle the error gracefully

Throwing an error is doing exactly that, though; it's exactly the same thing in theory.

What you are asking for is just more syntactic sugar around error handling; otherwise, all of that already exists in most languages. If you are talking about performance, that can easily be optimized at compile time for those short throw/catch syntactic-sugar blocks.

Java even forces you to handle those errors in code, so don't say that these are silent; there is no reason they need to be.


This is why constructors are dumb IMO, and the Rust way is the right way.

Nothing stops you from returning Result<CorrectObject, ConstructorError> from the CorrectObject::new(..) function, because it's just a regular function; struct field visibility takes care of you not being able to construct an incorrect CorrectObject.
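To spell out the visibility point (a small sketch with made-up names and a made-up invariant): because the field is private to its module, outside code can only obtain the value through the fallible `new`.

```rust
mod temperature {
    /// The inner field is private to this module, so code outside
    /// it cannot write `Celsius(-300.0)` directly.
    pub struct Celsius(f64);

    #[derive(Debug)]
    pub struct BelowAbsoluteZero;

    impl Celsius {
        /// Just a regular associated function returning Result;
        /// no special constructor machinery is involved.
        pub fn new(value: f64) -> Result<Celsius, BelowAbsoluteZero> {
            if value >= -273.15 {
                Ok(Celsius(value))
            } else {
                Err(BelowAbsoluteZero)
            }
        }

        pub fn value(&self) -> f64 {
            self.0
        }
    }
}

fn main() {
    let t = temperature::Celsius::new(21.0).expect("valid temperature");
    assert_eq!(t.value(), 21.0);
    assert!(temperature::Celsius::new(-300.0).is_err());
    // temperature::Celsius(-300.0); // would not compile: field is private
}
```

The commented-out line is the whole trick: the invariant can't be bypassed because the bypass is a compile error.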


I don't see this having much to do with OOP vs FP but maybe the ease in which a language lets you create nominal types and functions that can nicely fail.

What sucks about OOP is that it also holds your hand into antipatterns you don't necessarily want, like adding behavior to what you really just wanted to be a simple data type, because a class is an obvious junk drawer to put things in.

And, as with your example of a problem in FP, you have to be eternally vigilant with your own patterns to avoid antipatterns, like when you accidentally create a system where you have to instantiate and coordinate multiple classes to do what would otherwise be a simple `transform(a: ThingA, b: ThingB, c: ThingC): ThingZ`.

Finally, as "correct by construction" goes, doesn't it all boil down to `createUUID(string): Maybe<UUID>`? Even in an OOP language you probably want `UUID.from(string): Maybe<UUID>`, not `new UUID(string)` that throws.


> Even in an OOP language you probably want `UUID.from(string): Maybe<UUID>`, not `new UUID(string)` that throws.

One way to think about exceptions is that they are a pattern matching feature that privileges one arm of the sum type with regards to control flow and the type system (with both pros and cons to that choice). In that sense, every constructor is `UUID.from(string): MaybeWithThrownNone<UUID>`.


The best way to think about exceptions is to consider the term literally (as in: unusual; not typical) while remembering that programmers have an incredibly overinflated sense of ability.

In other words, exceptions are for cases where the programmer screwed up. While programmers screwing up isn't unusual at all, programmers like to think that they don't make mistakes, and thus in their eyes it is unusual. That is what sets it apart from environmental failures, which are par for the course.

To put it another way, it is for signalling at runtime what would have been a compiler error if you had a more advanced compiler.


Unfortunately many languages treat exceptions as a primary control flow mechanism. That's part of why Rust calls its exceptions "panics" and provides the "panic=abort" compile-time option which aborts the program instead of unwinding the stack with the possibility of catching the unwind. As a library author you can never guarantee that `catch_unwind` will ever get used, so its main purpose of preventing unwinding across an FFI boundary is all it tends to get used for.
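For the curious, here is a minimal demonstration of the behavior described above. It assumes the default `panic = "unwind"` setting; under `panic = "abort"` the process would die instead of reaching the `Err` branch.

```rust
use std::panic;

fn main() {
    // catch_unwind converts an unwinding panic into a Result::Err.
    // Under `panic = "abort"` the panic would abort the process
    // instead, so a library can never rely on being caught.
    let result = panic::catch_unwind(|| {
        panic!("boom");
    });
    assert!(result.is_err());

    // A non-panicking closure passes its value through as Ok.
    let ok = panic::catch_unwind(|| 42);
    assert_eq!(ok.ok(), Some(42));
}
```

This is also why `catch_unwind` mostly shows up at FFI boundaries: unwinding across them is undefined behavior, so the panic has to be stopped there regardless of the panic strategy chosen downstream.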


> Unfortunately many languages

Just Java (and JavaScript by extension, as it was trying to copy Java at the time), really. You do have a point that Java programmers have infected other languages with their bad habits. For example, Ruby was staunchly in the “return errors as values and leave exception handling for exceptions” camp before Rails started attracting Java developers, but these days all bets are off. But the “purists” don't advocate for it.


Python as well. E.g. FileNotFoundError is an exception instead of a returned value.


> Functional languages can do this too, of course, using some combination of abstract types, the module system, and factory functions as convention

In Haskell:

1. Create a module with some datatype

2. Don't export the datatype's constructors

3. Export factory functions that guarantee invariants

How is that more complicated than creating a class and adding a custom constructor? Especially if you have multiple datatypes in the same module (which in e.g. Java would force you to add multiple files, and if there's any shared logic, well, that will have to go into another extra file - thankfully some more modern OOP languages are more pragmatic here).

(Most) OOP languages treat a module (an importable, namespaced subunit of a program) and a type as the same thing, but why is this necessary? Languages like Haskell break this correspondence.

Now, what I'm missing from Haskell-type languages is parameterised modules. In OOP, we can instantiate classes with dependencies (via dependency injection) and then call methods on that instance without passing all the dependencies around, which is very practical. In Haskell, you can simulate that with currying, I guess, but it's just not as nice.


Indeed, OOP and FP both allow and encourage attaching invariants to data structures.

In my book, that's the most important difference from C-, Zig-, or Go-style languages, which treat data structures mostly as descriptions of memory layout.


You have it backwards from where I'm standing.

'null' (and to a large extent mutability) drives a gigantic hole through whatever you're trying to prove with correct-by-construction.

You can sometimes annotate against mutability in OO, but even then you're probably not going to get given any persistent collections to work with.

The OO literature itself recommends against using constructors like that, opting for the static factory pattern instead.


Nullability doesn't have anything to do with object-oriented programming.


Yes, yes, "No true OO language ..." and all that.

But I'm going to keep conflating the two until they release an OO language without nulls.


Funny enough, he has talked about this exact problem on his podcast, “Two’s Complement”; specifically, the episode “The Future of Compiler Explorer.” Commenters below are correct that it’s just about how heavily associated his name is with the tool. I just figured I’d drop this source here because he has a lot of interesting things to say about his involvement with the project.


For anyone else wanting to listen to the episode, this site worked well for me:

https://podtail.com/en/podcast/two-s-complement/the-future-o...

It does have ads, but they were not too intrusive. If an ad appears on first click, scroll down; there’s a play button that plays the episode.

For me the ads it showed were only text and images, not audio interrupting ads.

You can also listen to it on YouTube:

https://www.youtube.com/watch?v=2QXo5c7cUKQ

But since it’s audio only, I preferred listening to it via the aforementioned podcast website.

