
Many confusing operators certainly exist (especially in Control.Arrow), but the ones you have mentioned are the worst possible examples you could give. All of those have a clear meaning, and are extremely useful.

    f . g y . h z a . f'
is certainly much clearer than

    f `compose` g y `compose` h z a `compose` f'
The same goes for (>>=).
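For instance (assuming a hypothetical named alias bind), the chain

    getLine >>= readFile >>= putStrLn
reads more directly than

    getLine `bind` readFile `bind` putStrLn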


It's clearer to me, but I still have no idea what's going on. This, on the other hand, is a syntax made by people who thought it through:

  with Ada.Text_IO; use Ada.Text_IO;
  procedure Hello is
  begin
    Put_Line ("Hello, world!");
  end Hello;
Not that it's of comparable complexity, but even though I know what the following code does, I still can't understand it. It's basically obfuscated:

  module Main where
  import Control.Monad
  import Control.Concurrent
  import Control.Concurrent.STM
 
  main = do shared <- atomically $ newTVar 0
            before <- atomRead shared
            putStrLn $ "Before: " ++ show before
            forkIO $ 25 `timesDo` (dispVar shared >> milliSleep 20)
            forkIO $ 10 `timesDo` (appV ((+) 2) shared >> milliSleep 50)
            forkIO $ 20 `timesDo` (appV pred shared >> milliSleep 25)
            milliSleep 800
            after <- atomRead shared
            putStrLn $ "After: " ++ show after
   where timesDo = replicateM_
         milliSleep = threadDelay . (*) 1000
   
  atomRead = atomically . readTVar
  dispVar x = atomRead x >>= print
  appV fn x = atomically $ readTVar x >>= writeTVar x . fn


What, a Haskell program that launches 3 threads and coordinates them with inter-thread communication is harder to read than an Ada Hello World? Who would have thought!

The equivalent in Haskell to your Ada program is:

    main = putStrLn "Hello, World!"

I'm not even going to take the time to write the equivalent in Ada of the Haskell program you posted.


I'm no good at Ada, but I like the way its syntax tries to guide you through reading the program. Now for something of comparable complexity in Clojure:

  (def x (ref 1))

  (defn increment [i]
    (if (> i 0)
      (do
        (dosync
          (alter x inc)
        )
        (Thread/sleep 1)
        (increment (- i 1))
      )
    )
  )

  (defn decrement [i]
    (if (> i 0)
      (do
        (dosync
          (alter x dec)
        )
        (Thread/sleep 1)
        (decrement (- i 1))
      )
    )
  )

  (defn printref [i]
    (if (> i 0)
      (do
        (dosync
          (println (format "in printref %d" @x))
        )
        (Thread/sleep 1)
        (printref (- i 1))
      )
    )
  )

  (future
    (increment 10)
  )

  (future
    (printref 15)
  ) 

  (future
    (decrement 10)
  )
Isn't this much nicer? It's not immediately obvious that x is an atomic variable, but aside from that it's a lot better than the Haskell example. It took me on the order of 2-3 hours from never having touched a Lisp to writing this.


Here is the Haskell equivalent of your Clojure code:

    import Data.IORef
    import Control.Concurrent

    increment _ 0 = return ()
    increment x i = do
        alter x succ
        threadDelay 1000
        increment x (i - 1)

    decrement _ 0 = return ()
    decrement x i = do
        alter x pred
        threadDelay 1000
        decrement x (i - 1)

    printref _ 0 = return ()
    printref x i = do
        val <- readIORef x
        putStrLn ("in printref " ++ (show val))
        threadDelay 1000
        printref x (i - 1)

    main = do
        x <- newIORef 1

        forkIO (increment x 10)

        forkIO (printref x 15)

        forkIO (decrement x 10)

        threadDelay 100000

    -- This is just a helper to more closely match the clojure
    alter x fn = atomicModifyIORef' x (\y -> (fn y, ()))
I'd argue the Haskell is even nicer.


No, this is not much nicer.


It's absolutely shocking that you're able to easily understand a literal hello world example in a language in the dominant paradigm, but not able to easily understand a significantly more complex example in a language from a different paradigm that you haven't taken the time to learn. Absolutely shocking.


See my reply to marcosdumay. It's just altering an atomic variable. It doesn't need to look so cryptic.


There are certainly times when operators seem to be overused (I'm thinking of Lens in particular). But I think this criticism is overstated. One of the things that makes it more desirable to use operators rather than named functions is that, due to type classes, the meaning of the operators will change with the context they're being used in. Another is that sometimes there really isn't a great name to be found; an example is the `<*>` operator in the Applicative class. Once one becomes familiar with the operators, it's much easier to read something like `doThing1 >> doThing2 >> doThing3` than `sequence doThing1 (sequence doThing2 doThing3)`.
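To make the context-dependence concrete, here is a sketch of (>>) at three different types; the commented results follow the standard instances:

    Just 1 >> Just 2          -- Maybe: Just 2 (Nothing if either side is Nothing)
    [1, 2] >> [10, 20]        -- lists: [10,20,10,20], once per element on the left
    putStr "a" >> putStr "b"  -- IO: run both actions in order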

I'm not sure why you chose the two examples you did. The Haskell version of the Ada program you wrote is as simple as can be:

    hello = putStrLn "Hello, world!"
Meanwhile, I'm sure an Ada program that did what the code you pasted does would be of comparable complexity to the Haskell version. And being familiar with how monadic functions work lets me guess pretty well what the code does, despite having very little knowledge of the libraries involved.

    main = do -- atomically create a new transactional variable init'd to 0
              shared <- atomically $ newTVar 0
              -- atomically read the variable and print it
              before <- atomRead shared
              putStrLn $ "Before: " ++ show before
              -- Fork a thread where we show the variable and sleep 25 times
              forkIO $ 25 `timesDo` (dispVar shared >> milliSleep 20)
              -- Fork a thread where we add 2 to the variable and sleep 10 times
              forkIO $ 10 `timesDo` (appV ((+) 2) shared >> milliSleep 50)
              -- Fork a thread where we subtract 1 from the variable and sleep 20 times
              forkIO $ 20 `timesDo` (appV pred shared >> milliSleep 25)
              -- sleep 800 ms in the main thread
              milliSleep 800
              -- read the variable and print it
              after <- atomRead shared
              putStrLn $ "After: " ++ show after
     where -- define some convenience functions
           timesDo = replicateM_
           milliSleep = threadDelay . (*) 1000
   
    atomRead = atomically . readTVar -- perform an atomic read
    dispVar x = atomRead x >>= print -- read then print what was read
    appV fn x = atomically $ readTVar x >>= writeTVar x . fn -- read, apply a function and then write


"One of the things that makes it more desirable to use operators rather than named functions is that due to type classes, the meaning of the operators will change with what context they're being used in."

I don't understand what distinction you're making here. Named functions can also be members of typeclasses (and frequently are: return, mempty, ...)
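As a sketch of why the distinction doesn't hold (simplified: eliding the Applicative superclass and default methods), the named function and the operator are declared in the same class, so both change meaning with the instance:

    class Monad m where
        return :: a -> m a                  -- a named class method
        (>>=)  :: m a -> (a -> m b) -> m b  -- an operator method in the same class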


Personally, I believe Haskell syntax is a work of art. Learning how it fits together with currying is extremely satisfying. Also, the meaning of all the operators you mention, with the exception of (>>=), is immediately clear from their types.

    (.) :: (b -> c) -> (a -> b) -> (a -> c)
It is clear that it takes two functions, and chains them together to create a new function

    f . g = \x -> f (g x)
So

    double (addOne 3)
is equivalent to

    (double . addOne) 3
Similarly, (!!) has type

    (!!) :: [a] -> Int -> a
It is immediately obvious from the type that it accesses the element at a particular index in a list, so

    ['a', 'b', 'c'] !! 1 == 'b'
Also, the syntax complements currying extremely well

    f g h x
is equivalent to

   (((f g) h) x)

This allows for some very neat things.

   addOne :: Int -> Int 
   -- addOne 3 == 4

   map :: (a -> b) -> ([a] -> [b]) -- which is equivalent to '(a -> b) -> [a] -> [b]'
map is an extremely neat function, and is used in many languages. It applies a function to every element of a list, producing a new list.

Now, there are two ways to use map

    map addOne [1, 2, 3, 4] == [2, 3, 4, 5]
However, the above is equivalent to

   (map addOne) [1, 2, 3, 4]
From this we see there is another way to use map

   addOneList :: [Int] -> [Int]
   addOneList = map addOne

   -- addOneList [1, 2, 3, 4] = [2, 3, 4, 5]

Note how map was partially applied. In Haskell, map can be seen as doing two things. One is taking a function and a list, and applying the function to every element in it to produce a new list. However, you can also see map as a function transformer, taking an ordinary function, and converting it into a function that works on lists!

   map :: (a -> b) -> ([a] -> [b]) -- which is equivalent to '(a -> b) -> [a] -> [b]'


Haskell's record/struct syntax is probably the worst of any language.

C style:

    a.b.c = 1;
Haskell:

    let b' = b a
        c' = 1
        b'' = b' { c = c' }
    in a { b = b'' }
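For reference, here is a self-contained version of the same update; the record and function names are assumed for illustration:

    data Inner = Inner { c :: Int }
    data Outer = Outer { b :: Inner }

    -- the C-style a.b.c = 1 as a whole-value update
    setC :: Outer -> Outer
    setC a =
      let b'  = b a
          b'' = b' { c = 1 }
      in a { b = b'' }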


It is bad, but couldn't you also write:

    a { b { c = 1 } }


It's a & b . c .~ 1 with lens. Or a { b = (b a) { c = 1 } } without.


lens attempts to solve that, though it does so at the cost of unreadable types.


What exactly do you feel is obfuscated?

There's $, backticks, >>=, >>, ++, ., and <-.

All of these are very frequent things you use in Haskell, and deserve short notation. Besides >>, they're all learned in day 0-1 of Haskell.
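A quick legend, with informal glosses rather than the real definitions:

    f $ x       -- apply f to x, at low precedence (saves parentheses)
    x `div` y   -- backticks: use the named function div infix
    m >>= k     -- bind: run m, feed its result to k
    m >> n      -- then: run m, discard its result, run n
    xs ++ ys    -- append two lists
    f . g       -- compose: \x -> f (g x)
    x <- m      -- in do-notation: run m and name its result x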


Are you actually going to respond to the fact that you compared a Hello World in Ada to something significantly more complex in Haskell?

How could you ever think that is a fair comparison?



