I am not a computer scientist by training. Can someone experienced in Haskell convince me why I should take the time to learn it? I do tons of technical/statistical computing (Python, Matlab, R) and some web-based projects (mostly database-driven). Having a math background I can see how FP is elegant and sexy, but what are the real practical advantages over OOP or Matlab-style programming?
I've tried a few times to learn Haskell, but I always tend to lose steam. Last time I got as far as monads before petering out. So I am definitely not a great person to try explaining the practical benefits of Haskell programming, because I've never written anything more complicated than a moderately difficult Project Euler solution.
However, despite never actually reaching escape velocity with the language, I don't feel like I've wasted any time at all. It's made me a much better programmer. The restrictions that the language places on you (e.g., the inability to change a value once you declare it) force you to really grapple with functional principles.
As a result, I am so much better at reasoning recursively, and at figuring out how to compose functions, that I can hardly believe it. As a Ruby programmer, I thought I understood recursion. I thought I understood higher-order functions. And I did. But at such a basic level that I had no idea how basic my understanding was. Haskell totally opened my eyes.
So, imho, even if you never use Haskell, even if you never fully learn it, the attempt will almost certainly make you a better programmer. Unless, of course, you've already been messing around with one of the other nearly pure functional programming languages out there.
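To make that concrete, here's the kind of tiny exercise that rewired my thinking (the names here are my own, just for illustration): plain recursion, and then a composed pipeline instead of a loop with mutable state.

```haskell
-- Hand-rolled recursion: no loop variable, no mutation.
sumTo :: Int -> Int
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)

-- Composition: build a pipeline of small functions instead of
-- accumulating into a mutable variable.
sumOfSquaresOfEvens :: [Int] -> Int
sumOfSquaresOfEvens = sum . map (^ 2) . filter even

main :: IO ()
main = do
  print (sumTo 10)                     -- 55
  print (sumOfSquaresOfEvens [1..10])  -- 4+16+36+64+100 = 220
```

Once this style clicks, you start seeing pipelines like the second one everywhere, even back in Ruby or Python.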
It took 3-4 years of learning Erlang and Scheme first before I could fully grok Haskell concepts. Monads aren't as bad as you think they are, read this if you haven't: http://ertes.de/articles/monads.html
Haskell's strengths: it's succinct, safe, feature-rich, and compiles to fast programs.
* Succinct: Haskell's strong focus on mathematical abstraction (I may be abusing that term here) makes reasoning about programs easier at a general level - if you get the general case right, then any problem that fits it is solved too. Succinctness also makes writing the code much faster (it's kind of like escape velocity with Emacs - it takes a while to get a reasonable map down in your head, but once you do you're 5-10x more productive than you were before).
* Safe: this one's obvious - Haskell's type system is brilliant in every way, and paired with flymake for Haskell it makes writing robust code very straightforward.
* Feature-rich: lots of libraries, native support for multi-processor/multi-core concurrency - I'm still waiting on something like OTP for Haskell though; Erlang takes the cake there simply because of OTP.
* Fast: granted, you have to be careful not to let non-strictness bite you in the ass, but that's generally very easy to profile and nail down when it does.
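A minimal illustration of the safety point (my own toy example, not from any particular library): where most languages hand you a null, Haskell hands you a `Maybe`, and the compiler won't let you forget the "not found" case.

```haskell
import Data.Maybe (fromMaybe)

-- lookup returns Maybe Int, not Int-or-null; you are forced to
-- decide what happens when the key is missing.
port :: [(String, Int)] -> Int
port cfg = fromMaybe 8080 (lookup "port" cfg)

main :: IO ()
main = do
  print (port [("port", 9000)])  -- 9000
  print (port [])                -- key missing, falls back to 8080
```

That "you must handle the missing case" discipline is a big part of why refactoring Haskell feels safe.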
I love Haskell, I use Erlang at my startup and I'm working on some big personal projects in Haskell.
Okay here is an awesome description of monads: a function buddy!
Want to print out parts of a function? Want to log something? Don't add an argument to your function! Use a monad!
Basically, if you have something orthogonal you want to do with a function's results, use a monad. You don't have to pass everything as an argument explicitly any more!
P.s. don't listen to this until a haskell demigod corrects me
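A demigod can correct me too, but here's roughly the trick being described, sketched with the standard Writer monad (from the mtl package): the log is threaded through the computation for you, instead of being an extra argument on every function.

```haskell
import Control.Monad.Writer

-- Each step computes a value and logs a message; note there is
-- no explicit "log" parameter anywhere.
step :: Int -> Writer [String] Int
step x = do
  tell ["doubling " ++ show x]
  return (x * 2)

-- Chain two steps; runWriter gives back the result and the log.
pipeline :: Int -> (Int, [String])
pipeline x = runWriter (step x >>= step)

main :: IO ()
main = print (pipeline 3)  -- (12,["doubling 3","doubling 6"])
```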
I've been doing the same kind of work as you for a few years now while being somewhat proficient in Haskell.
The answer is, to me, that there's absolutely no reason to use Haskell for that kind of work.
It's conceivable that someone could build a useful statistical/scientific system driven by Haskell, but fundamentally, systems like Matlab/R/Python are far better suited to the exploratory-analysis kind of programming that I used to do. YMMV, but I wouldn't be surprised if it didn't.
That said, if I were implementing a statistical system I had designed and analyzed in Matlab/R/Python in a complex domain then I'd move to Haskell.
I think this issue would begin to evaporate if GHCi treated the IO monad differently. As it is GHCi is extremely valuable for exploring the structure of your own programs, but utterly terrible at exploring data. I find myself either constantly writing complex IO-unwrapping chains or deleting my state data when I build a new function and refresh the environment.
I'm building a complex system today that needs really well understood behavior, so I'm using Haskell. Whenever I want to see the data passing through it, though, I load up R.
Immutability means it's a lot easier to change your code safely. And the functional style encourages better separation of concerns, again making it easier to change something in one place. Strong typing is really good in a large codebase for making it harder for errors to propagate - if you make an error in a given function that changes the return type, you can immediately localise the problem to that function.
Writing the initial code is likely to be if anything slower in a functional language; the real advantage comes in maintenance.
(disclaimer: I don't actually know haskell, though I've used several functional languages and written functional-style code in several more)
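In Haskell syntax (my own toy example), the immutability point looks like this: "updating" a value produces a new one, so nothing you already hold a reference to can change behind your back.

```haskell
-- A record with Haskell's record-update syntax.
data Account = Account { owner :: String, balance :: Int }
  deriving (Show, Eq)

-- deposit returns a *new* account; the original is untouched.
deposit :: Int -> Account -> Account
deposit n acct = acct { balance = balance acct + n }

main :: IO ()
main = do
  let a = Account "alice" 100
      b = deposit 50 a
  print (balance a)  -- still 100
  print (balance b)  -- 150
```

Because `a` can never change, any code that was handed `a` keeps working no matter what you do elsewhere - which is exactly what makes maintenance safer.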
"Writing the initial code is likely to be if anything slower in a functional language; the real advantage comes in maintenance."
Definitely disagree with this. As with anything, with practice, you get very fast at writing Haskell code.
The expressive/strong type system and type inference mean you'll catch errors early while writing code at a higher/denser level than even today's popular dynamic languages, resulting in a net increase in programming speed (for me).
As a python guy (still the densest language on the programming language shootout AFAIK), I find I can run a unit test before most languages would compile. More to the point, I don't make type errors when writing the first version of a piece of code; it's when changing it that the type system becomes invaluable. Of course, this is different for different people.
Completely agree about the changing code part. I program in Ruby and I often wonder what I have broken whenever I change some code. (I know I am supposed to write a lot of tests, but it becomes a big chore to write tests for all the test conditions.)
By the way, I would not call Python one of the densest languages by any means. It is a nice, straightforward language to learn, and expressive as well.
I was going by http://shootout.alioth.debian.org/u64q/which-language-is-bes... . (the most naive/obvious approach - all benchmarks have weight 1, code size has weight 1, other factors have weight 0). I know it measures gzipped code size; I think that's a reasonable measure for "density".
Looks like I'm out of date, ruby 1.9 has overtaken python. Guess it's time to learn ruby.
"This paper [pdf The Effect of Language Choice on Revision Control Systems] compares one scripting language, Python, with C in the domain of revision control systems, as large working implementations exist for both languages. It finds no clear evidence that scripting languages produce smaller systems…"
> Remember - these measurements are just of the fastest programs for each of these programming language implementations
If you add in speed, even at a size:speed weight of 5:1, Python's advantage disappears.
So, the fastest python code is smaller and slower than the fastest haskell code, ignoring any slower code that may be denser.
Also, "gzipped code size" is a horrible measure of density: what everyone hates about Java is how redundant programs are (access modifiers, type declarations), which gzip would compress nicely but which doesn't help programmers (except where Eclipse auto-completes, I guess).
Now we know you find it unpleasant, please name your preferred measure and explain why you think it would be better for comparing programs written in very different languages, with widely different source code styles and conventions.
Incidentally, do you think much Java or C# gets written with Notepad? :-)
I generally agree that I rarely make type errors when first writing code in Java or C#. I must say though, it really isn't the same with Haskell because of how much more expressive the type system actually is. Of course with practice one improves there too.
That question has been answered 100s of times in various other HN threads, Reddit's haskell subtopic, stackoverflow, blogs, etc. There are many really good reasons - I would encourage you to make some simple use of Google to find out.
Most arguments I've come across seem to be of the type "FP will make you a smarter programmer and better person, just try it and you'll see. Lots of programmers are too dumb to get it though, so don't feel bad if you're one of them." I was hoping that someone here could provide some more concrete explanations of when Haskell & company should be used over imperative/OO designs. For example, the most watched Haskell repo on GitHub seems to be pandoc, an engine for converting text from one markup to another. What is it about functional programming that makes it better suited for this particular task than, say, Perl? Or is pandoc written in Haskell for purely historical reasons? Maybe there is a better example.
Pure functional programming typically shifts thinking overhead from your brain as the programmer to the compiler (and the very design of the language). After using Haskell intensively for the past ~4 years, if I go back and write software in Python/Ruby/etc., I realize:
- How annoying it is to have to keep state (of objects, data structures, etc.) in mind for pipelines of computations
- How bad mutability in data structures is and how wonderful it is that in Haskell you NEVER have to worry about any of the data structures changing under your feet
- How nice it is that you can refactor massive codebases fearlessly, thanks to purity and the expressive type system
- How cool QuickCheck and unit testing are in Haskell vs. the mainstream OO languages
- That even Ruby feels lower level and more verbose after Haskell due to a lack of higher-order/first-class functions, among other things
- That if I'm using a dynamically typed language, it sucks that I'll likely get 1-10% of the performance I would get using Haskell, with no real special tricks
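On the QuickCheck point, here's a minimal sketch (assuming the quickcheck package): you state a property once, and the library generates the test cases for you, instead of you hand-writing each example.

```haskell
import Test.QuickCheck

-- Property: reversing a list twice gives back the original.
-- QuickCheck will try this on ~100 randomly generated lists.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
```

When a property fails, QuickCheck also shrinks the failing input to a minimal counterexample, which is most of the debugging work done for you.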
Pandoc is a compiler from N input grammars to M output grammars. Compilers are easy to write in typed, functional languages because they have algebraic data types and pattern matching.
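A toy version of why that's easy (my own example, nothing taken from pandoc itself): the document is an algebraic data type, and each output format is one pattern-matching function over it. The compiler warns you if a renderer misses a constructor.

```haskell
-- A tiny markup AST, pandoc-style in miniature.
data Inline = Plain String
            | Bold [Inline]

-- One "backend" per output grammar, each a total pattern match.
toHtml :: [Inline] -> String
toHtml = concatMap go
  where
    go (Plain s)  = s
    go (Bold xs)  = "<b>" ++ toHtml xs ++ "</b>"

toMarkdown :: [Inline] -> String
toMarkdown = concatMap go
  where
    go (Plain s)  = s
    go (Bold xs)  = "**" ++ toMarkdown xs ++ "**"

main :: IO ()
main = do
  let doc = [Plain "hello ", Bold [Plain "world"]]
  putStrLn (toHtml doc)      -- hello <b>world</b>
  putStrLn (toMarkdown doc)  -- hello **world**
```

Add a new constructor (say, `Emph`) and every backend that forgets to handle it fails to compile with exhaustiveness warnings on - in Perl you'd find out at runtime, one format at a time.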