It's an incredibly stupid policy. As others have noted, other package managers also have the ability to set upper bounds on package versions, and pip can also freeze your dependencies to specific versions.
However, those features aren't necessary to use pip productively, because Python package maintainers are very good at maintaining backwards compatibility (or using deprecation cycles when changes are needed): people who depend on their packages get upset with them otherwise. The rules are different for beta releases, because it's your own fault if you depend on one and its API changes.
Precisely because specifying upper bounds is encouraged, Haskell maintainers seem to act as if they have a free pass when it comes to backwards-incompatible changes. In the long run that causes much more packaging trouble than specifying upper bounds saves. Perhaps there's nothing wrong with Haskell or Cabal, but the Haskell community's way of handling backwards compatibility is wrong, in my opinion.
Upper bounds are information about what you know your package works with. It makes no sense to throw that information away. What does make sense is improving our tooling so that this information can be ignored in certain circumstances.
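Concretely, under the PVP that information lives in the build-depends stanza of a .cabal file; the package names and version ranges here are just illustrative:

    library
      build-depends:
        base >= 4.6 && < 4.8,
        text >= 0.11 && < 0.12

Each upper bound records the first major version the maintainer has not (or cannot have) tested against.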
I've been on the other side of the fence with a large app written a long time ago that did not specify upper bounds. Now that app is essentially unbuildable given the time I'm willing to put in. If I and all my dependencies had followed the PVP and specified correct upper bounds everywhere, I would still be able to build my app today. No, I wouldn't be able to build it with the latest version of other libraries, but that's not what I want to do.
If you don't specify upper bounds, the probability of your package building goes to ZERO in the long term. Not epsilon...zero. That's unacceptable for me. If it works today, I want it to work for years to come.
"Upper bounds are information about what you know your package works with."
Not really; the upper bounds generally are speculative. They specify that the package will not work with particular versions when typically those versions have not yet been released. Often when those versions are released they work fine, which prompts a useless flurry of dependency bumping.
"If I and all my dependencies had followed the PVP and specified correct upper bounds everywhere, I would still be able to build my app today."
I now distribute my packages with the output of the ghc-pkg command so you can see the exact versions the library builds with. That solves your problem without using speculative upper bounds.
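For reference, `ghc-pkg list` is the command that dumps the exact versions of every installed package; the package database path and versions below are only illustrative:

    $ ghc-pkg list
    /usr/lib/ghc-7.6.3/package.conf.d
        Cabal-1.16.0
        base-4.6.0.1
        bytestring-0.10.0.2
        text-0.11.3.1

Shipping that listing alongside a release records a known-good build configuration without constraining anyone else's solver.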
I wouldn't expect it to work for "years to come" in any event, though: Haskell changes too much. You probably won't be able to get today's package to work with the compiler and base package that will exist years hence. It's hard to even get an old GHC to build: today's GHC won't build it because of...dependency problems.
> Not really; the upper bounds generally are speculative.
Even so, an upper bound is still useful information: it says that the package is known to work with a certain range of dependency versions. This is why I think cabal should support something like < for speculative upper bounds and <! for known-failure upper bounds. Then we could have a flag for optionally ignoring speculative upper bounds when desired.
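Under that proposal (hypothetical syntax, not something cabal actually supports), a dependency stanza could distinguish the two kinds of bound:

    build-depends:
      text  >= 0.11 && <  0.12,  -- speculative: 0.12 untested, might work
      aeson >= 0.6  && <! 0.7    -- known failure: 0.7 breaks this package

A solver flag could then relax the speculative `<` bounds while always honoring the `<!` ones.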
> I wouldn't expect it to work for "years to come" in any event, though
I can still download a binary of GHC 6.12.3. Hackage also keeps the entire package history so there's no reason things shouldn't build for years to come. I'm all for the fast moving nature of the Haskell ecosystem, but we also need things to be able to build over the long term and there's no reason we shouldn't be able to do that.
> Which won't even work on current versions of Debian GNU/Linux because the soname for libgmp changed.
Again, my argument assumes that you're not upgrading to current versions of things. Enterprise production-grade deployments often fix their OS version because of this. There's a reason RHEL is so far behind the most recent versions. If you are upgrading, then you'll be fixing these problems incrementally as you go along and it won't get out of hand.
> No it doesn't. I had specified a dependency on a particular version of text
You're making my point for me. It looks like you were probably depending on 0.11.1.4, which doesn't exist now. Your problem would not have happened if you had used an upper bound of "< 0.12" like the PVP says instead of locking down to one specific set of versions. If you had done that, cabal would have picked up subsequent 0.11 point releases which should have worked fine. Also, I can still download text-0.1 which is more than 5 years old.
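The difference, written out as .cabal fragments using the version numbers from this example:

    -- Pinning one exact version: breaks if that release ever disappears.
    build-depends: text == 0.11.1.4

    -- PVP-style bound: any 0.11.x point release satisfies it.
    build-depends: text >= 0.11.1 && < 0.12

With the ranged bound, the solver can fall back to any surviving 0.11 release.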
The old adage "Haskellers don't know how it feels to shoot themselves in the foot, because they are already walking around with a bleeding flesh wound in their feet" fits aptly. The problem upper bounds try to solve just does not exist in other languages, because package maintainers are expected not to break backwards compatibility. I can, almost without exception, run the same apps I developed for Django 1.0 almost ten years ago with current software versions (the CSRF protection introduced in 1.2 caused backwards-incompatible changes, but that was an exception).
In fact, if any of the packages demanded an upper bound it would suck because I don't want to use a legacy Django web server containing tons of exploits which no one is ever going to fix because it's not maintained anymore.
> The problem upper bounds tries to solve just does not exist in other languages because packagers are expected not to break backwards compatibility.
I'll agree that Haskell moves faster and has more breaking changes, but that statement is just wrong. Look at http://semver.org/ (not related to Haskell at all) and you'll see that the very first point on the page is about incompatible API changes. So clearly this issue exists outside of Haskell, and people have developed methods for managing it with version bound schemes. You are arguing against rapidly changing software, not against upper bounds. In your example, where the Django server is not making backwards-incompatible changes, upper bounds wouldn't hurt you at all: the bounds are on the major version number, and exploit fixes that don't break backwards compatibility only bump the minor version number.
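To make that concrete, here is what a major-version bound looks like against a hypothetical framework package (the name and versions are made up for illustration):

    build-depends: django-server >= 1.6.0 && < 1.7
    -- 1.6.1 (a backwards-compatible security fix) satisfies the bound;
    -- only a breaking 1.7 or 2.0 release is excluded.

So as long as maintainers reserve major bumps for breaking changes, upper bounds never lock you out of security fixes.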
Comparing Haskell to any other mainstream language in this discussion is invalid because the other languages have been around a lot longer and have reached a more stable state. Python appeared in 1991. The first Haskell standard appeared in 1998. So that means Python has at least 7 years of stability on Haskell. I would argue that Haskell gained adoption much more slowly because it is much less similar to any mainstream language that came before it, so the actual number should be larger. Paul Graham's essay "The Python Paradox" came out in 2004. I would suggest that Haskell is just now getting close to the point that Python was at when PG wrote that essay. That means that Python has at least 10 years on Haskell. So if you're comparing breaking changes in Haskell today with Python, you need to compare against the Python of 10 years ago. If the pain of breaking changes isn't worth it to you, then don't use Haskell right now. But you shouldn't make that decision without educating yourself about the benefits the language has to offer. For me, it is a small price to pay compared to what I get from Haskell.
Only time will tell, but I predict that companies based solely on Haskell will emerge in a few years dominating their competition because they can write better quality software, iterate faster, with fewer people, more reuse, fewer bugs, and easier maintenance than companies not using Haskell.
There is a policy that says you're supposed to do this:
http://www.haskell.org/haskellwiki/Package_versioning_policy
The idea is that by specifying upper bounds, the package is more likely to build after the dependencies change.
This idea is subject to considerable controversy.
http://www.haskell.org/pipermail/haskell-cafe/2012-August/10...
There's also been more recent mailing list traffic on it, but my Google skills are failing me here and the arguments haven't changed all that much.
My opinion is that we are relying too heavily on upper bounds to solve a whole bevy of interrelated, but separate, problems. Other solutions are evolving to deal with some of them. cabal sandboxes let you build a package in isolation from your other packages, reducing dependency conflicts. cabal freeze lets you pin exact dependencies, including transitive ones. Another project, Stackage, aims to maintain a whole slew of packages in a mutually buildable state:
https://github.com/fpco/stackage
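For reference, a sketch of the sandbox-and-freeze workflow described above (the exact commands depend on your cabal-install version):

    $ cabal sandbox init               # per-project package database
    $ cabal install --only-dependencies
    $ cabal freeze                     # record exact versions in cabal.config

The frozen version list can then be committed alongside the project, so a future build resolves to the same dependency versions.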