


It's easier to formally reason about a program that is specified in a small formal language than about one that is big, sloppy, and leaks state, yes. When you can formally reason about a program you can prove correctness. And that's the ultimate goal.

Minified JavaScript is not a fair comparison either. You can write scheme and lisp programs that are very readable. It annoys me when people conflate a terse and/or illegible programming style with functional programming.

We have abstractions and large languages so that humans can better understand the intent of a program. Isn't that exactly what this post is about, making a more readable lambda calculus interpreter?
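
To make the point concrete, here's a minimal sketch of how small a readable lambda-calculus evaluator can be. This is OCaml, not the post's implementation, and it skips capture-avoiding substitution:

    (* A tiny call-by-value lambda-calculus evaluator. Substitution is
       naive: it assumes bound variable names are already distinct. *)
    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    let rec subst x v = function
      | Var y -> if x = y then v else Var y
      | Lam (y, body) -> if x = y then Lam (y, body) else Lam (y, subst x v body)
      | App (f, a) -> App (subst x v f, subst x v a)

    let rec eval = function
      | App (f, a) ->
          (match eval f with
           | Lam (x, body) -> eval (subst x (eval a) body)
           | f' -> App (f', eval a))
      | t -> t

    let () =
      (* (\x. x) y  reduces to  y *)
      match eval (App (Lam ("x", Var "x"), Var "y")) with
      | Var "y" -> print_endline "ok"
      | _ -> print_endline "unexpected"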


"Isn't that exactly what this post is about, making a more readable lambda calculus interpreter?"

Here is a less terse introduction: http://lambdaway.free.fr/workshop/?view=helloworld, using a regexp based implementation of a "dialect" of lambda-calculus.

Thanks for your interest.


> It annoys me when people conflate a terse and/or illegible programming style with functional programming.

The problem is, functional programming languages are almost always harder to read than other languages. Haskell is the obvious example, but Lisp is pretty bad too. It takes a bit of experience to be able to refactor in the middle of a Lisp block without messing up the balancing of the parentheses. Yes, it can be learned, and it can be learned faster than people expect. But the barrier to entry is much higher than it is for most languages.

By the way, I love both Haskell and Lisp, and I think more people should use them. But I'm not going to pretend it's easy to get started working with those languages.

F# and other languages with ML-style syntax are probably the easiest to read.


Depends on what is "easy" for you. Haskell is syntactically dense and can be obtuse in the hands of many authors, but on the other hand I find it easier to read well-written Haskell because it's easier to break down into chunks. Similarly for Lisp: keeping track of parentheses can be annoying, but the regularity of S-expressions makes it pretty easy to follow syntactically. I do agree it can be hard to get started in Haskell (though not so much Lisp, especially Scheme), but I think that's for other reasons that aren't really dependent on the syntax or readability.

I think we can all agree APL is incomprehensible moon language, though /s.


APL fascinates me, but it's very hard to understand. I haven't found any intellectual footing as it's so dissimilar to what I've learned already.

Would you have any good book recommendations? Or any recommendations, really? I'm just asking on the off chance that you do.


APL really is no harder to understand than any other programming language; in a way it is easier than most, because you can look at the whole program at once without being bothered by all the cruft of loops and such. You're probably mostly thrown off by the symbols.

There was a gorgeous article submitted a couple of days ago:

http://www.jsoftware.com/papers/50/

That should serve as a good intro.


Having a whole program in your field of view isn't the same thing as comprehending it all at once. That would be a horrible fallacy to assert.

Software Engineering can be regarded as a discipline in which we strive for the understandability of programs which cannot possibly fit into a person's field of view all at once.


No, it isn't the same thing. But it does make comprehension easier. You can get a feel for this by making a text editing window really small, say 10 x 20 characters and then trying to comprehend an otherwise perfectly legible program through that window. It's going to be much harder than when you have more overview. This is also why we make diagrams to show a simplified version of a program to aid understanding.

APL went all the way with this and assigned many reasonably complex operations on complex data structures to single-character symbols. Once you know those symbols, the code is about as hard to understand as any other language that you've mastered. It's just that because the symbols are not used in any other language family, you are going to struggle quite a bit in the beginning. It's like trying to read Cyrillic.


Fitting an entire function on the screen is useful compared to just a fraction of a function. (However, if you can only see a fraction of a function, the problem is with the function, not with your screen.) Having a hundred functions in view at once (full definitions) is way, way past the point of diminishing returns. And we have folding editors so you can see function headers: names and arguments.

By the way, a 10x20 window: isn't that like a wasteland of vast proportions to an APL programmer?


I was joking, I have actually used APL a bit and quite enjoy it. Array based programming is quite intuitive once it clicks in your head and I think at least a conceptual understanding of APL is very valuable even when using other languages (e.g. NumPy in Python).
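
To give a flavour of what I mean by array-based (sketched here in OCaml rather than NumPy, with made-up numbers): whole-array operations replace explicit index loops.

    let () =
      let prices = [| 10.0; 12.5; 11.0; 13.0 |] in
      (* element-wise operation over the whole array, no index bookkeeping *)
      let with_tax = Array.map (fun p -> p *. 1.2) prices in
      (* reduction over the whole array *)
      let total = Array.fold_left ( +. ) 0.0 with_tax in
      Printf.printf "total: %.2f\n" total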


> It takes a bit of experience to be able to refactor in the middle of a Lisp block without messing up the balancing of the parentheses

I double-click on one parenthesis and the whole expression gets selected - I can then operate on it. How would I mess up the balancing?

The only way to mess up the balancing is editing without s-expression support. No Lisp programmer would do that.


> The only way to mess up the balancing is editing without s-expression support. No Lisp programmer would do that.

Well, in the past they did.


Long ago.

BBN Lisp (later known as Xerox Interlisp) already had a structure editor in the 60s.

http://www.softwarepreservation.org/projects/LISP/bbnlisp/BB... see page 40ff.

A lot of editing support was then developed in the 70s with the various Emacs editors (TECO Emacs, Zmacs, Multics Emacs, ...) or the Interlisp editors. In the 80s various Lisp IDEs had support for editing Lisp on PCs, Macs, Unix-Workstations, Lisp Machines, ...


Most people who know both find functional languages far easier to read than imperative languages. They are declarative and permit a wider range of abstractions. Operator overloading is a similar example that comes up for debate over and over again. There are people who simply do not want to learn any new abstractions and are happy to read and write the same patterns/code over and over again. The problem for the rest of us is that we may find ourselves in a position where we have to maintain code written by these heathens.


> The problem is, functional programming languages are almost always harder to read than other languages. Haskell is the obvious example

There are many legitimate reasons to dislike Haskell, such as being hard to parse mechanically, but being hard to read is not one of them.

> F# and other languages ML-style syntax are probably the easiest to read.

The syntax of ML's module language is pretty complicated. You cannot look at those “where type” (SML) and “with type” (OCaml) clauses and tell me with a straight face that they were meant to be easy to read. This syntax makes translucent ascription harder to read than it ought to be. It is so bad that many[0] people work around it in various ways, like using the combination of generative datatypes and transparent ascription as a poor man's translucent ascription.
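
For anyone who hasn't run into these clauses, here is a small OCaml sketch (made-up signatures, not from any real library) of the kind of "with type" sharing constraint I mean. Dropping it leaves `elt` fully abstract and makes the functor's result unusable with concrete values:

    module type ORDERED = sig
      type t
      val compare : t -> t -> int
    end

    module type SET = sig
      type elt
      type t
      val empty : t
      val add : elt -> t -> t
      val mem : elt -> t -> bool
    end

    (* The "with type elt = O.t" clause reveals elt while keeping t abstract
       (translucent ascription). Remove it and the assert below no longer
       type-checks, because elt is hidden. *)
    module MakeSet (O : ORDERED) : SET with type elt = O.t = struct
      type elt = O.t
      type t = elt list
      let empty = []
      let add x s = if List.exists (fun y -> O.compare x y = 0) s then s else x :: s
      let mem x s = List.exists (fun y -> O.compare x y = 0) s
    end

    module Int_ordered = struct
      type t = int
      let compare = Stdlib.compare
    end

    module Int_set = MakeSet (Int_ordered)

    let () = assert (Int_set.mem 3 (Int_set.add 3 Int_set.empty))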

As for F#, I would not call it ML-style, precisely due to the inability to express modular abstraction.

[0] Relative to the size of the ML community, of course.



