
Naïve algorithms on immutable data structures tend to have very bad worst-case performance.
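A minimal sketch of that worst case (names are mine for illustration): building a list by repeated appends copies the whole accumulator each step, so it degrades to O(n²), while an immutable structure designed for the job, like `Data.Sequence` from the containers package, gives amortized O(1) snoc.

```haskell
import qualified Data.Sequence as Seq
import Data.Foldable (toList)

-- O(n^2): each (++ [x]) copies the entire accumulated list.
slowSnoc :: [Int] -> [Int]
slowSnoc = foldl (\acc x -> acc ++ [x]) []

-- Amortized O(1) per append: Seq is a finger tree built for this.
fastSnoc :: [Int] -> [Int]
fastSnoc = toList . foldl (Seq.|>) Seq.empty

main :: IO ()
main = print (slowSnoc [1 .. 1000] == fastSnoc [1 .. 1000])
```

Both produce the same list; only the asymptotics differ.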


And there is always the option of mutability. Your complex data structures are seldom the ones driving performance.

I.e., often you have to get the complex data structures right (lots of logic, not much computation) to set up your computations on the simple ones (e.g. lots of number crunching on your arrays).
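A sketch of that split, assuming the array package that ships with GHC: the number crunching runs on a mutable unboxed array inside `runST`, but callers only ever see a pure function.

```haskell
import Control.Monad.ST
import Data.Array.ST
import Data.Array.Unboxed

-- Pure interface, mutable guts: prefix sums via an unboxed
-- mutable array, sealed off by runSTUArray.
prefixSums :: [Double] -> UArray Int Double
prefixSums xs = runSTUArray $ do
  let n = length xs
  arr <- newArray (0, n) 0
  let go _ [] = pure ()
      go i (y : ys) = do
        prev <- readArray arr i
        writeArray arr (i + 1) (prev + y)
        go (i + 1) ys
  go 0 xs
  pure arr

main :: IO ()
main = print (elems (prefixSums [1, 2, 3, 4]))
```

The mutation never escapes: the type system guarantees `prefixSums` is observationally pure.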


Yes, it's easier to reason about correctness in immutability land and add mutability back in afterwards as a compiler optimization than it is to remove mutability for safety as a compiler pass.


Haskell sort of does this. Laziness can be seen as a very disciplined form of mutability.

(Haskell also allows mutability that is visible to the semantics of the program, that's where the famous monads come in.)
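A small sketch of that semantically visible mutability (helper names are mine): mutation through an `IORef` is marked in the type by `IO`, while the same pattern with an `STRef` can be discharged by `runST`, which proves no effects leak and hands back an ordinary pure value.

```haskell
import Control.Monad.ST
import Data.IORef
import Data.STRef

-- Mutation in IO: visible in the type, and it stays in IO.
sumIO :: [Int] -> IO Int
sumIO xs = do
  ref <- newIORef 0
  mapM_ (\x -> modifyIORef' ref (+ x)) xs
  readIORef ref

-- Mutation in ST: same idea, but runST seals the effects in,
-- so the result is a plain pure Int.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = do
  a <- sumIO [1 .. 10]
  print (a, sumST [1 .. 10])
```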



