Away from Exceptions: Errors as Values (hashnode.dev)
98 points by Jonathanks on Aug 30, 2021 | 136 comments


It's good to see so much focus on errors. They are essential when trying to build resilient systems. But our approaches are still very immature.

First, to make it clear, this article appropriately points out that exceptions are still necessary and relevant. I disagree with some of the use-cases given, but it's important to recognize that exceptions should still exist in programming languages. Joe Duffy's article about Midori's error model [0] is in my opinion the best reference to actually understand the difference between exceptions and errors and why it's so important to get right. It's a very good article, and it has been posted here before; if you are interested in error handling it's a must read.

Now, about errors as values. Treating errors as values is practical, and in modern programming languages, relatively ergonomic. That said, we already have some other comments in the thread pointing out how sticking to just "errors as values" is often not enough (btw, the article uses Rust, not Go, but anyway...). And it's also important to clarify this: errors are such an integral part of our programs, and have so much to do with flow control, that I don't believe thinking about errors just as values is enough. Sure, they might be "just values" under the hood, but in all programming languages we see either optional results or syntax sugar to be able to handle errors more gracefully. And in most cases, we still feel it's not enough (or it's enough to be practical, but not to be pleasant in many cases). So we should keep the door open, and not pretend we have already solved errors.

Finally, the topic of errors is extremely deep and complex, and when you start introducing other factors like how to report the errors publicly to a non-technical user, maybe in different languages, or whether to log it or send it who knows where, whether to trace or not, how, how to deal with duplicates or similar errors... then you start realizing that we are far from a satisfying and complete model for error handling. We haven't reached this part of the discussion yet.

For the moment, passing most errors as values is the relatively painless way that still allows us to customize errors to our needs. But there's still a long way to go.

[0] http://joeduffyblog.com/2016/02/07/the-error-model/


> errors are such an integral part of our programs, and have so much to do with flow control, that I don't believe thinking about errors just as values is enough

Exactly. The control flow. Both errors and asynchronous programming share the quality that they don't go well with our call/return based programming model(s). You have to return something, but you either don't have anything (error) or don't have something yet (async).

A great solution to this is to use dataflow. This decouples the logic, which is encoded in the dataflow, from the control flow, which just serves to drive the dataflow and is thus negotiable.

For async, it is synchrony-agnostic, which is nice, because it solves, or rather sidesteps, the "function colouring" problem. For errors, it allows you to keep error handling out of the happy path without needing exceptions.


Interesting. Can you give some pointers/examples?


We can't forget that "error handling" is a civilization-level hard problem. Do you set up support structures to catch them when shit hits the fan, or do you preventatively and excruciatingly suppress them?

It's as much a theoretical problem of what errors are as a practical problem in how to represent these intricate models ergonomically, and what will be sacrificed. In some sense it's an almost moral question!


And then throw PII concerns into the fray. How do you report meaningful information to your ops team without exposing PII in your logs? There are ways, of course, but it's not trivial.


> when you start introducing other factors like how to report the errors publicly to a non-technical user, maybe in different languages, or whether to log it or send it who knows where, whether to trace or not, how, how to deal with duplicates or similar errors...

I’ve tried searching for articles that talk about how people deal with this in the context of web apps but have found it difficult to find content. It’s a tricky topic to google. Most of what I find are (usually content marketing) articles about how to log errors and/or how to send them to some service.

I’ll give you an example that is admittedly a little paranoid. Take a switch statement where you branch on an enum-like value, meaning there’s a set of known values you expect. What do you do for the default case? In theory you don’t need a default case because you “know” the switch won’t hit it, but it’s weird to me to write code that has no logic to handle a possible scenario, even if it’s highly unlikely. What if there’s a bug in the code, or the enum-like value changes to contain a new value, or some weird edge case? The point being that in JavaScript there’s no way to be 100% sure that the switch condition will not contain an unexpected value.
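
To make it concrete, here's roughly the shape in TypeScript: the `never` assignment gets the compiler to check exhaustiveness, but runtime data can still disagree with the type, which is exactly the case I'm unsure about (reportUnexpectedValue is only a stand-in for whatever reporting you'd actually use):

    type Status = "active" | "paused" | "closed";

    // stand-in for whatever logging/reporting backend you actually use
    function reportUnexpectedValue(value: unknown): void {
      console.warn("unexpected enum-like value:", value);
    }

    function label(status: Status): string {
      switch (status) {
        case "active": return "Active";
        case "paused": return "Paused";
        case "closed": return "Closed";
        default: {
          // compile-time: this assignment fails if a new Status member is
          // added above and not handled here
          const unexpected: never = status;
          // runtime: data from the network can still disagree with the type,
          // so report it and fall back instead of crashing
          reportUnexpectedValue(unexpected);
          return "Unknown";
        }
      }
    }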

Given how unlikely this scenario is though, how do you deal with it? (I realize some of it depends on where in the application’s code this is happening). You don’t want to throw an exception and break the app. Or you can, but you’d want to catch it at some point. Do you log the event in the backend and create an alert so that you know a user ran into a weird edge case or bug? Do you write it out to the console in case you get a customer support call, so that you can identify the issue? Is it a bad idea to write out errors like that to the console?

I’m sure these are questions that most mature apps have had to answer, but I haven’t been able to find what people consider best practices for these types of situations. If anyone knows of good resources I’d love to read them.


Throw an exception.

In PHP we would throw a LogicException in these cases: it should never happen, so there is something wrong with your code (its logic).

https://www.php.net/manual/en/class.logicexception.php

Then in your outermost function, like the main function, you catch it and report it with an error-reporting tool like Sentry (so you are aware of it and can fix it).

And for the end user you would show a modal or similar to describe the error in a user-friendly way.


I recommend again reading the article I linked to. The answer to the first part is: this should be an exception (or abandonment, as the Midori team called it). This is an error in the logic of the code, it's a programming error (even if it's due to later changes or whatever). It's an error that needs to be fixed in the code, not "recovered from".

Now, you can also catch exceptions, indeed. You could have your app catch all exceptions at the root level or wherever you think it's appropriate if your code is modular enough. Once you have caught the exception, you could silence it, as a lot of software does, and pray for the best... or be a bit more serious. If it was me, I would notify the user: "There has been an unexpected error". I would also append the technical info in a "technical details" section or something. And I would also provide a link to let the user report the issue easily. I'm kinda against hidden automatic reports for privacy-related reasons, as errors might sometimes contain sensitive data too, but it would really depend on the application.
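
In a browser app, a minimal sketch of that root-level catch could look like this; the two listeners are the standard events for uncaught errors and unhandled promise rejections, and showErrorDialog is a hypothetical UI helper standing in for the modal with a "technical details" section and a report link:

    // stand-in for a real dialog: imagine a modal with the message, a
    // collapsible "technical details" section, and a "report this" link;
    // here it just logs so the sketch stays self-contained
    function showErrorDialog(message: string, details: string): void {
      console.error(message, details);
    }

    window.addEventListener("error", (event) => {
      showErrorDialog("There has been an unexpected error.", String(event.error ?? event.message));
    });

    window.addEventListener("unhandledrejection", (event) => {
      showErrorDialog("There has been an unexpected error.", String(event.reason));
    });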

There are many ways to make this more robust. Check for report dups on your side, or have some dynamic code to check the status of a specific error to provide the user with even more information, or even silence the error completely, or whatever. But all this takes much more work, and it's really dependent on the application you are writing and how enterprisey you are willing to go. Crashes are not nice, data loss is not nice, but neither is corrupted data or subtle bugs due to errors silenced for the sake of your users' peace of mind. You have to decide what the right balance is based on the type of program you are writing. Indie videogame? Crash as soon as possible, ask nicely for reports, get bugs fixed fast. Editor where a lot of data might be lost if you are lousy with exceptions? Definitely go out of your way to auto-save separately, if possible, before crashing, and let the user know how to try to recover their data and how to get assistance. Non-critical webapp? Just let the user know something unexpected has happened, allow them to report it, and assure them you will look into it soon. It always depends.

EDIT: I missed the most critical part, so I'll add it now... When communicating errors to users, the most important part is properly handling their frustration, not the logging method or the technical details included or anything else. If you have "few users", make sure they have a way to get in touch, and make sure they get a fix or a decent explanation of what's going on; let them know when it will be solved or what they can do in the meantime. Errors happen, but people are most often very understanding as long as you are there and don't leave them alone with their frustration. If you have too many users for that... good luck to you.


I appreciate the write up. I’ll also checkout that article. Thanks!


Author here. Thank you for the link to Midori's error model. It's on my reading list for this week.

> So we should keep the door open, and not pretend we have already solved errors.

Very much this. Even with errors as values, the approach languages like Go take makes composition difficult. I presented the Kleisli approach instead. That recovers compositionality. OCaml does something I find interesting. It makes exceptions performant by not capturing stack traces by default. But it's still too easy to forget handling the exception.

> btw, the article uses Rust, not Go, but anyway...

I actually used TypeScript. The Rust bit was meant to introduce the idea from Rust to TypeScript. It's not new in TypeScript, but it's not popular either. I have updated the article to clarify that.


> Programming with exceptions is difficult and inelegant. Learn how to handle errors better by representing them as values.

Funny how exceptions were invented because handling errors as values was considered too tedious. And now more and more languages are going backwards.


I think it's less strange than you think. In most languages that use errors as values, the tediousness is being directly attacked instead of trying to dodge around it. Haskell, in many of its uses, cleans up the tediousness so thoroughly that the code written using errors as values can be almost indistinguishable from code written using exceptions, and yet, nevertheless, the errors are values and no exception machinery is being deployed.

It has been a general trend in pragmatic programming languages in the past couple of decades. Another huge example, in my opinion, is in typing. Static typing in the 20th century was terrible. Tedious, broken, and missing a lot of its value. So a lot of languages were written that basically amounted to a "screw that, we're not using types", and they became very successful. But in the 21st century, a lot of work has been done directly attacking the tediousness and problematic aspects of using static types, while also getting more value out of them with safer languages that more pervasively enforce them and make them more reliable, thus more useful, etc. So we're seeing a resurgence in the popularity of very statically-typed languages... but it's not "moving backwards" because it's not the same thing as it used to be.

Much like I don't expect dynamic languages to entirely go away, I wouldn't expect exceptions as we know them to go away either. But I expect "errors as values" to continue attracting more interest over time.

In fact, as test34's sibling post sort of observes, there's some synergy between these two trends here. Making strong typing easier has made it easier to have strongly-typed, rich values that can be used as error values and used in various powerful ways. Now that there are languages where it's much easier to declare and fully exploit new types than it used to be, it's much easier to just go ahead and create a new error type as needed for some bit of code without it having to be a big production.


> Haskell, in many of its uses, cleans up the tediousness so thoroughly that the code written using errors as values can be almost indistinguishable from code written using exceptions, and yet, nevertheless, the errors are values and no exception machinery is being deployed.

I don't have experience with Haskell, but I have mixed feelings about monadic error handling in Scala for precisely this reason. It goes to great lengths to recreate the programming ergonomics of exceptions, with exactly the same drawbacks. Monadic error handling, aka "railway-oriented programming,"[0] splits your logic into two tracks: a "good" track, where all your happy path logic lives, and a "bad" track, which is automatically propagated alongside your happy path logic. In my experience, it induces the same programmer mistakes as exceptions do: errors get accidentally swallowed (especially where effects are constructed and transformed,) different errors that require different handling are accidentally treated the same, and programmers fall into the habit of seeing the error track as an inferior, second-class branch compared to the happy path.
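
For concreteness, a minimal TypeScript sketch of the two-track shape (hand-rolled Either, not a library, but the mechanics are the same as in Scala):

    type Either<E, A> = { tag: "left"; error: E } | { tag: "right"; value: A };

    const right = <A>(value: A): Either<never, A> => ({ tag: "right", value });
    const left = <E>(error: E): Either<E, never> => ({ tag: "left", error });

    // flatMap only runs f on the "good" track; a "left" is passed along
    // untouched, which is the automatic propagation described above
    function flatMap<E, A, B>(ea: Either<E, A>, f: (a: A) => Either<E, B>): Either<E, B> {
      return ea.tag === "right" ? f(ea.value) : ea;
    }

    const parse = (s: string): Either<string, number> => {
      const n = Number.parseInt(s, 10);
      return Number.isNaN(n) ? left("not a number") : right(n);
    };
    const positive = (n: number): Either<string, number> =>
      n > 0 ? right(n) : left("not positive");

    // the happy path reads straight through, much like exception code;
    // the error track is invisible here, which is both the appeal and the trap
    const result = flatMap(parse("42"), positive);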

It confuses me when programmers (not talking about you, because I don't know how you write code, but people I've worked with personally) bash exceptions and then use monadic error handling to achieve exactly the same trade-offs.

This hasn't turned me off of monadic error handling, but it has made me think of it as FP's version of exceptions, rather than an upgrade. Personally, I think exceptions are a good enough trade-off in most cases, but when you need to be more careful, it is better practice to give all paths the same prominence in code. FP provides a better way to do this: pattern matching. More verbose, yes; harder to spot the happy path when reading code, yes; encourages more careful and thorough thinking about errors, for me absolutely yes. YMMV.

[0] https://fsharpforfunandprofit.com/rop/


> This hasn't turned me off of monadic error handling, but it has made me think of it as FP's version of exceptions, rather than an upgrade.

This reminds me a lot of Java's checked exceptions just with different window dressing. You move the failure mode type information from the exceptions list ("throws" clause) into the return type.

Typed error return values are definitely an improvement in ergonomics over C-style error code returns, and pattern matching is definitely a big improvement in ergonomics too, but I think error handling has a problem of fundamentally irreducible complexity. For example, if you make network calls, you have to be prepared for network calls to fail, and you have to design your system to recover from it somehow, whether that happens in a try/catch or a match on Either[Throwable, Result].
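
For example, the same network failure handled both ways in TypeScript; the recovery decision is identical, only the plumbing differs ("fallback" stands in for whatever your system actually has to do):

    // exception style: the recovery branch lives in a catch block
    async function loadWithExceptions(url: string): Promise<string> {
      try {
        const res = await fetch(url);
        return await res.text();
      } catch {
        return "fallback"; // network failed: the decision has to be made here
      }
    }

    // errors-as-values style: the same decision lives in a match/if instead
    type Attempt<T> = { ok: true; value: T } | { ok: false; error: unknown };

    async function tryFetchText(url: string): Promise<Attempt<string>> {
      try {
        const res = await fetch(url);
        return { ok: true, value: await res.text() };
      } catch (error) {
        return { ok: false, error };
      }
    }

    async function loadWithValues(url: string): Promise<string> {
      const attempt = await tryFetchText(url);
      return attempt.ok ? attempt.value : "fallback";
    }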


I think trying to reduce the complexity of error handling is the original sin. Thinking of error handling as a separate case requiring separate mechanisms is the original sin. The structure of your code should not reflect any difference between "error" and "success" cases. They are equal, and neither should be subordinated to the other.


I've started writing a blog post that covers this topic, and once I started thinking clearly about it, I've found errors to be a really hard problem.

To even get out of static vs. dynamic typing, I've expressed the problem as "Given this piece of code, how can I know what types of errors will come out of it?", where I'm using a human, loosey-goosey sense of the word "type" here, rather than necessarily a strict type. (If your language wants to answer that in terms of strict types, great, but I'm trying to answer it very generally across programming languages.) It turns out that from what I can see, the underlying problem is that "errors don't compose"; given a function f that returns errors X, Y, and Z and accepts another function f2 to call that may return other errors, it is really difficult to characterize f(f2) in practice. For a concrete, static f2 we can mostly at least imagine taking the union of the two (although even that can be an oversimplification; what if f calls f2 in such a way that one of the types of errors that f2 can produce is guaranteed not to happen, e.g., what if f2 could throw a null pointer exception but it can be easily statically proved that f never passes it one?), but once you let enough polymorphism into the mix to let f2 be an arbitrary closure of some type, all bets are off in terms of what f(f2) can produce in most languages.
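
The "take the union" case is at least expressible when f2's type is statically known; here is a TypeScript sketch of that shape (the error names are made up), though as soon as f2 is an arbitrary closure threaded through layers of indirection it stops telling you much:

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    // f's own failure mode (an invented name)
    type OutOfRange = { kind: "out_of_range"; value: number };

    // f is generic over whatever errors its callback can produce, so its
    // error type is the union of its own errors and f2's
    function f<E>(f2: () => Result<number, E>): Result<number, OutOfRange | E> {
      const inner = f2();
      if (!inner.ok) return inner; // f2's errors pass straight through
      if (inner.value > 100) {
        return { ok: false, error: { kind: "out_of_range", value: inner.value } };
      }
      return { ok: true, value: inner.value * 2 };
    }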

This hurts statically-typed languages in that they can't create very strong types for these sorts of situations, but more generally, translating the theory up to practical experience regardless of the language being used, A: it's hard to program in an environment where you have enough polymorphism of some sort (OO, accepting closures, whatever) to have errors mix like this in your code and know what sort of errors may occur where and B: that's nearly everything because programming in an environment that lacks that polymorphism is not something we generally do voluntarily. (Embedded code not allowed to even allocate on the stack is this static, but we can't build everything that way.)

Amusingly, I think what saves us in the end is that to a first approximation, there is no such thing as error handling. All there is is logging something for a human and giving up. Obviously, to a second approximation there is such a thing as error handling. I've got plenty of it. But honestly, it's pretty rare by percentage. Most errors result in a log message and some level of failed task. Fortunately, with some work, we can usually get our systems fed enough good data that we can do things without errors, such that the ones that do occur end up almost always being essentially correctly handled by screaming and dying. If programming actually required us to handle errors, like, in some intelligent manner all the time, we'd have a lot fewer programs in the world!


Well, on the other hand, there's a difference between handling errors with values like:

-1, 0, 1 and other obscure things

and using proper types like

Result<T>
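
A quick TypeScript sketch of the difference (the Result type is hand-rolled):

    // sentinel style: -1 means "not found", but nothing stops a caller
    // from using it as if it were a real index
    function indexOfUser(names: string[], name: string): number {
      return names.indexOf(name); // -1 on failure
    }

    // proper type: failure is a distinct case the caller has to look at
    type Result<T, E = string> = { ok: true; value: T } | { ok: false; error: E };

    function findUserIndex(names: string[], name: string): Result<number> {
      const i = names.indexOf(name);
      return i >= 0 ? { ok: true, value: i } : { ok: false, error: `no user named ${name}` };
    }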


Yes, errors-as-values only works well with a type system which supports discriminated unions. And programming languages with such support have only recently become popular.


OCaml is a somewhat old language with algebraic types (so including discriminated unions), yet exceptions were introduced precisely to avoid dealing with propagating errors as values. I agree with your point, but would argue that even with sum types, propagating errors is tedious and there's a case for using exceptions.

I wonder if another reason why errors as values are making a comeback is the asynchronous programming style which is becoming quite pervasive and doesn't play well with exceptions.


More idiomatic error handling has appeared too. It's become a lot easier to bubble up errors with messages (e.g.: golang) without handling every single case.


>Only throw exceptions when something really bad has happened and the program must stop. For example:

> the program cannot connect to its database;

> the program cannot write output to disk because the disk is full;

> the program was not started with valid configuration.

I'd prefer Result over exceptions even in these cases. The only case where I think exceptions should be used is when the type system of the language is not powerful enough to prove the validity of some operation. For example in Rust:

  let mut v = vec![1, 2, 3];
  // we just built a non-empty Vec, so pop() cannot return None here
  let n = v.pop().unwrap();
The `pop` method returns `Option`, but I as a programmer know for sure that the collection isn't empty, I just can't prove it to the compiler. So I use `unwrap` to get the value and panic in the case I'm actually wrong and made a stupid mistake.

Another example is division by zero. Using a `Result` as a return value of the division operator would be extremely inefficient and unergonomic. Panic/exception is the best way to handle this situation.

I believe dependent types can solve both of the problems above, so we could get rid of exceptions completely. Unfortunately, not a single mainstream language has them.


Another thing to call out is that you also need to be precise about what the error is. Division by zero is indeed a grave error, but only when your code is not logically thought out: there should never have been any code path that divides by zero in the first place. So the error you should report is whatever caused the denominator to be zero, not the division by zero itself, which is almost entirely useless and misleading.


That second program must be the single worst example of exceptions ever written; a straw man if ever there was one. The key to understanding why this is the case lies in the realisation that ParseInt is a combined parser/validator, and a validation failure isn't actually an error; it's a normal, expected situation. In C++, you'd solve this by returning std::optional<int>, and end up with code like this:

  std::optional<int> result = ParseInt (input, 8);
  if (!result) result = ParseInt (input);
  if (!result) result = ParseInt (input, 16);
  if (!result) throw ...;
  return *result;
Note how there's no exceptions (in ParseInt, I mean). Note how there's no error codes either. There's just no error handling needed to begin with, except right at the end, if the number is not in any of the three supported formats.


> There's just no error handling needed to begin with, except right at the end, if the number is not in any of the three supported formats.

I would argue that if (!result) is a form of error handling, as result being falsy indicates that the parsing failed.


To me exceptions are convenient but in the same way global variables are. Both mechanisms relieve you from (explicitly) passing stuff between callers. The stuff still gets passed only in a way that is less visible and harder to reason about.


I'm firmly in the camp that believes that exceptions are a false economy.

The post links to an "Exception Smells" post that doesn't mention one of my pet peeves: exceptions as control flow. For example, Java's parseInt [1] throws a NumberFormatException if the string can't be parsed. IMHO this is terrible design. As a side note, checked exceptions are terrible design.

I wrote C++ with Google's C++ dialect, where exceptions were forbidden. Some chafed under this restriction. It was largely a product of the time (i.e. more than 20 years ago now, when this was established). Still, there's debate about whether it's even possible to write exception-safe C++ code. At the very least, it's difficult.

So Google C++ uses a value and error union type, open sourced as absl::StatusOr [2]. The nice thing was you couldn't ignore this. The compiler enforced it. If you really wanted to ignore it, it had to be explicit ie:

    foo().IgnoreError();
But here's where the author lost me: this chaining coding style he has at the end. To make it "readable" a bunch of functions had to be created. You can't step through that code with a debugger. The error messages may be incomprehensible.

I much prefer Rust's or Go's version of this, which is instead imperative.

[1]: https://docs.oracle.com/javase/7/docs/api/java/lang/Integer....

[2]: https://abseil.io/docs/cpp/guides/status


> To make it "readable" a bunch of functions had to be created.

and in doing so created code that was far more self-documenting and evidently correct. the counter-example was nowhere near as easy to reason about (imo), even for the simple example.

> You can't step through that code with a debugger.

i don't think it's fair to comment on the content of the idea based on the quality of existing debuggers. in any case, i don't think it's true that you can't step through this code in a debugger (generally): VSCode & Roslyn have no issue with this sort of structure in C#.

> The error messages may be incomprehensible.

the ones in the example may be. i've worked in a large code-base using this approach before, and there was rich error information. transformations of failure states (i.e. the Error values) are easier to do with context, as opposed to your catch-block which has no knowledge of the context in which an exception was thrown.

> I much prefer Rust's or Go's version of this, which is instead imperative.

go's (err, val) "error handling" paradigm is, imo, its worst feature. i can't speak for rust. whilst your assertion that a failure condition is impossible to reach may be true for the code you write today, it almost certainly won't be in the future...

how many error dialogues have you seen saying some variant of "unreachable state reached"? :)


> For example, Java's parseInt [1] throws a NumberFormatException if the string can't be parsed. IMHO this is terrible design.

It's unergonomic design, but it's the _correct_ design: the method is declared to return an int, and it can't fulfill its promise; throwing an exception is the right thing to do.


It's the correct design only if we assume that the design space didn't allow for a different return type. Kotlin for example offers toIntOrNull (https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.text/to-...) as an alternative.


A null doesn't contain any information about what went wrong, and unless you religiously check your objects for null values at every turn, you've just turned a clear stack trace into a search for Waldo at the international Waldo impersonators meetup.


Note that in Kotlin the return type is `Int?`, not `Int`. You can't forget to check for null because the compiler enforces it.

To your first point: Another example in the design space would be Rust which works very similarly to Kotlin but returns more information in the failure case.


> Note that in Kotlin the return type is `Int?`, not `Int`.

How does that fare with unnecessary boxing of primitives?


If the function isn't total (as in: for every string there is an int) then the boxing is necessary, no?


Not if you have user-defined value types (soon coming to Java).


But the mechanism is just wrong; Exceptions are heavyweight and should only trigger with unexpected issues, bugs that a developer wants to see a stacktrace for.

I mean in this case you could consider it developer error; a developer tried to parse an integer without first validating the input and checking if it COULD be parsed. But it's normalized to just "let it crash", instead of writing additional pre-check code.

With errors as values - like Go has normalized - instead of a big, expensive, potentially panicky exception, you just get a lightweight error option back. With the more functional approach of Either, you are basically forced to deal with the "but what if I can't parse it" code branch.


> a developer tried to parse an integer without first validating the input and checking if it COULD be parsed

Is there a meaningful difference between validating that input can be parsed and parsing it?

Any validator that doesn't actually parse the data - according to the same rules the parser uses - runs a risk of incorrectly passing/failing certain cases, no?


So here's a pattern neophytes often end up implementing:

   if (isValidInput()) {
     parseInput();
   }
It seems like a good idea but it's not, for several reasons:

1. It requires an explicit second step. Worse, the behaviour of the second step may be undefined if the first step didn't take place. Either way it's just error-prone;

2. While this may be correct for a static string with static rules, what if this isn't stateless? You've now introduced a race condition;

3. If each step requires context (eg options) you need to correctly pass them to both. This is another potential source of error; and

4. You've created what's likely an artificial differentiation between valid and invalid input. What happens when that changes and you need to distinguish between invalid (not a number) and invalid (number out of range)?

(2) is particularly common in security contexts. You'll see this pattern as:

    if (userCanEditPhoto()) {
      editPhoto();
    }
Usually you can't do this, as in the primitives won't be there. This is for good reason. Engineers should instead change their mindset to implementing these things as an atomic action that gives reason for failure. Exceptions are one version of this. They're just bad for other reasons.
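
A sketch of that mindset in TypeScript: one atomic call, one result, with the reason for failure carried in the result instead of a separate isValidInput() step (the names and the range are just illustrative):

    type ParseOutcome =
      | { ok: true; value: number }
      | { ok: false; reason: "not_a_number" | "out_of_range" };

    // parse and validate together, and say why it failed
    function parsePort(input: string): ParseOutcome {
      const n = Number.parseInt(input, 10);
      if (Number.isNaN(n)) return { ok: false, reason: "not_a_number" };
      if (n < 1 || n > 65535) return { ok: false, reason: "out_of_range" };
      return { ok: true, value: n };
    }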


Java needs a TryParseInt (sort of like C# has) so you can use either one as appropriate.

There are actually two main use cases for integer parsing: one where the value is expected to be an integer (you're parsing a file format) and the other where it may well not be an integer (getting input from the user).


To me this is not "exceptional" at all, as it is easy to call that function with a non-number, and it should return "normally" that the input was not an integer. I pretty much prefer Rust's Result or C++'s expected<>.


It has nothing to do with "exceptionality". It has to do with the method not being able to fulfill its contract. Rust's returning `Result<>` is a _different contract_.


Checked exceptions are horrible to work with, but they require the programmer to do something about them at the level immediately prior, whereas unchecked exceptions could just bubble up to an unexpected point in the stack, which is arguably worse.


To whomever is going to implement this: Please save the stack trace into the error object at its creation time, at least in debug builds.


On Windows, there is an API for this - https://docs.microsoft.com/en-us/windows/win32/api/errhandli... - you can save the callstack there before a C++ exception happens.


No, I don't want to wrap every single statement of my program in its own if-block, thank you very much.


Rust solves this issue by having a ? operator to bubble up Errors. Before that there was the try! macro with the same semantics. That cuts the boilerplate to a minimum while having a well defined and explicit control flow.

I agree that if you had to write the ifs by hand it would be a pita. Looking at you, Go.


In the end that is equivalent to bubbling up exceptions when they are of the unchecked type.


I don't think that's true because if I understand it correctly, the return type of functions which can possibly throw unchecked exceptions would not indicate that they can throw or what they can throw. On the other hand, with the "errors as values" approach (including "bubbling up" operators like `?`), you can tell exactly from the function's return type if an error can be returned and if so what the set of possible errors is.

Did you maybe mean "the checked type"? In that case I still think it's not equivalent because at least in Rust you can automatically transform the error while it bubbles up, while I don't know of a language with checked exceptions that lets you transform the exception while unwinding (short of manually catching, transforming, and re-throwing).


> the return type of functions which can possibly throw unchecked exceptions would not indicate that they can throw or what they can throw

As far as I know, that's how Java's "throws" method signature works, which has been widely regarded as a mistake.


Throws is for checked exceptions. Unchecked are not listed in the throws list.


It's actually even worse: you can include both checked and unchecked exceptions in the `throws` clause. The compiler will only enforce handling of the checked exceptions included. Unchecked exceptions listed in the `throws` clause serve as an optional hint to others. Note that you can erroneously include any exceptions you like in the `throws` clause, even ones that are never thrown from the method. These quirks are often covered by static analysis.


The key difference for me is that you have to explicitly handle Results somehow, you can't just pretend that the function is infallible and hope that something up-stack is going to deal with all the failure modes. Also while you can have generic Error types it's generally frowned upon for libraries which are encouraged to provide meaningful error types that can be used to decide how a problem should be dealt with. Coupled with Rust's pattern matching it makes for concise and expressive error handling in my experience.


One could solve the problem further and more conveniently by doing the bubble up implicitly at every call and just re-invent exceptions.


If you have to do this, the structure of your program might be wrong.

Don't check the return value of a call to the same API multiple times. Make it such that all calls to the API go through the same code that you write, so you have to check the return value only "once". This may sound extreme, but it's pretty close to what you can realistically achieve.

You can achieve that trivially by exiting if something goes wrong when calling the API. And where exiting is not possible because it's a longer running application, concerns have to be separated: If there are multiple pieces of code that need the same data, feed those pieces the data they need from a central location that interfaces with the API. And have the error handling logic (which is usually higher level control logic) in the central location.
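
Roughly, in TypeScript (fetchConfig and the two consumers are placeholders for your real API and callers):

    // the one place that talks to the API and the one place that checks
    // for failure; the central policy (log, retry, give up) lives here
    async function fetchConfig(url: string): Promise<unknown> {
      const res = await fetch(url);
      if (!res.ok) {
        throw new Error(`config fetch failed: ${res.status}`);
      }
      return res.json();
    }

    // the pieces that need the data are fed it; they never call the API themselves
    function initFeatureA(config: unknown): void { /* ... */ }
    function initFeatureB(config: unknown): void { /* ... */ }

    async function start(url: string): Promise<void> {
      const config = await fetchConfig(url);
      initFeatureA(config);
      initFeatureB(config);
    }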


Have you considered using patterns that help dealing with that?

like Railway Oriented Programming

https://www.youtube.com/watch?v=45yk2nuRjj8

I believe that almost everything should return Result<T>, because almost everything can fail, and the compiler should scream when you do not handle those failure paths.

That makes me believe that C#'s "FirstOrDefault" for value types sucks, because you're never sure whether the "Default" comes due to lack of value or because the found value is actually the same as default

e.g.:

    var list = new List<int> { 1, 2, 3 };
    var found = list.FirstOrDefault(x => x < 1);
    // found == 0 (the default)

meanwhile 0 may be a valid value! So we aren't sure whether it is an error or an actual value, and we have to perform e.g. casts to `int?` or similar to detect that.

Using Result<T> gives very precise information.


> meanwhile 0 may be valid value! so we aren't sure whether it is error or an actual value

This function is useful when you don't really care whether it's an error or the value. Imagine you're querying the view count of an item of the user. If the user is anonymous, the select might give an empty result, but you only care about showing a number to the user. So 0 is absolutely fine in that case.

If it's important to you whether the item actually exists, you're using the method in the wrong place.


Yes, I can always use First and catch Exception, which is meh.

First returning Result<T>, or something like FirstOrNull, would be better: FirstOrNull would work for ref types - classes and such - the same way (afaik) as FirstOrDefault does, but it'd make error handling for value types like ints more precise.


This was bread and butter error handling in COM, everything old is new again!


Yep, all hail the mighty

    #define CHECK(com_expr) hr = (com_expr); if (FAILED(hr)) { return hr; }
Ah, the memories... glad I don't have to touch it ever again. "And a million other things that, basically, only Don Box ever understood, and even Don Box can’t bear to look at them any more".


COM is where all new OS APIs land nowadays, since Vista.

Even if they are eventually wrapped in .NET libraries.



Not sure why people don't like exceptions. Throw different error classes according to the source of the problem and just handle them differently in the upper layers.

throw new ErrorUser('Bad input')

-> Show friendly error messages.

throw new ErrorFatal('Db unavailable')

-> Email error to dev and quit.

I never liked the verbosity of returning errors from each method.

How hard is it to trace the stack when you're supposed to be using error logging tools like Sentry?


It's a trade-off I'm willing to make to simplify reasoning about non-happy paths. Try/catch is great in theory, but it's super easy to shoot yourself in the foot with in bigger projects, either because it's non-exhaustive or because someone took the easy way out and wrote a catch that isn't fine-grained enough.


"Which program is easier to read?"

For me it was the second. Am I the only one?


The first one reads like somebody just found out about functional programming and functors and tried to "improve" a straightforward bunch of if statements. I can read the second program without having read the plain English description, but I definitely would prefer to have the comment for the first one. I think even in languages like Haskell I would prefer (just sometimes) to read just a straight if-elseif-elseif-else.

The whole thing about the way it's written with throwing/catching is a red herring anyway, you should just replace those with a different choice of if's. If you're feeling super adventurous, you can instead replace them with goto's, which is kinda funny; it would actually simplify the code, how often do you see that?


I think part of the problem is that Typescript doesn't have support for error-as-value baked into the language and pervasive in the ecosystem, so adopting that style isn't as ergonomic as it would be in a language that does. The equivalent in Rust would be far more clear and concise due to the ? operator and Result-types being ubiquitous.


The second by far, especially if you just remove the else statements and let program flow continue naturally.

There are very good reasons to prefer Result over exceptions, but this example is not one.


Agree, and in my opinion it could be easier to read if you deal with the exceptions first. For example:

    let v = Number.parseInt("a3", 10);
    try {
        if (Number.isNaN(v)) {
            throw new Error("NaN");
        } else if (v > 3) {
            throw new Error("gt 3");
        }
        v += 1;
    } catch (error) {
        v = 3;
    }
    v += 1;

Or writing a "guard function" that throws ...

    function throwIfNaNorGt3(v) {
        if (Number.isNaN(v)) {
            throw new Error("NaN");
        } else if (v > 3) {
            throw new Error("gt 3");
        }
    }

    let v = Number.parseInt("a3", 10);
    try {
        throwIfNaNorGt3(v);
        v += 1;
    } catch (error) {
        v = 3;
    }
    v += 1;


Tbh, to me the first one is easier to read. There's less jumping.

But both are terrible. It should just be a bunch of if (...) { ... } else if (...) { ... } else { ...} etc. with no mutation of variables (what are all those v += 1 for?).


Those v += 1 implement the exact specification given above:

    1. Parse an integer N from a string.
    2. If N is NaN, fail with an error. Otherwise, increment N by 1.
    3. If N is > 3, fail with an error. Otherwise, increment N by 1.
    4. If steps 1-3 failed, set N to 3.
    5. Increment N by 1.
Those are quite strange requirements, and the resulting second code looks strange too, but... it faithfully and obviously correctly represents the given specification.
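
For comparison, a sketch of the same five steps in TypeScript with the failure carried as a value (null) instead of thrown; whether this reads better than either version above is exactly the article's question:

    // steps 1-3: parse, then the two checked increments, with null as "failed"
    function steps1to3(input: string): number | null {
      let n = Number.parseInt(input, 10); // 1. parse
      if (Number.isNaN(n)) return null;   // 2. fail on NaN...
      n += 1;                             //    ...otherwise increment
      if (n > 3) return null;             // 3. fail on > 3...
      return n + 1;                       //    ...otherwise increment
    }

    function run(input: string): number {
      const n = steps1to3(input);
      const recovered = n === null ? 3 : n; // 4. on failure, set to 3
      return recovered + 1;                 // 5. final increment
    }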


The example is strange and contrived. I couldn't torture the logic enough to demonstrate how readability breaks down with exceptions. Reading through the comments, I realise that even if the code is unreadable with exceptions, it can be rewritten to be readable. I updated the article to highlight not only readability but also compositionality. I also tried to draw parallels with Promises (which are also monadic). I tried to target the article at intermediate programmers who may not already know most of the concerns raised in the comments, and I surely can't cover error handling in enough depth without writing a book.

Thank you for the comments. I've learned more from reading the comments.


It's the kinda assignment where you should look at a higher level - what does it do? What is N used for? What are the possible inputs?

I mean if you start with a set of tests instead of pseudocode written down in text you could probably write something smarter.


Second one is way easier to read, even though it's verbose on purpose.


Errors as values are fine and useful. However, the author also says this:

"Programming with exceptions is difficult and inelegant."

I am of the completely opposite opinion: to me exceptions are very easy to use and elegant for what they are intended to do. That does not mean one has to rely only on exceptions or only on plain errors as values. Use both, for the best benefits, depending on the situation. Why programmers get obsessed with doing things in a "there can be only one right and true tool, language, concept, style, etc." way is completely beyond my understanding.


In my opinion, the difference between errors as return values and checked exceptions is merely one of syntactic sugar. Both are conceptually a sum type together with the regular return value, and the syntactic sugar for handling and/or propagating the error or exception is really a spectrum, not a dichotomy. I believe it would be useful to focus on the possible design choices within that spectrum, regardless of the underlying implementation mechanism.

Of course, the implementation mechanism matters at the machine code level or in the runtime. However, that is mostly a question of performance trade-offs and interoperability, but otherwise just an implementation detail, and doesn’t have to be a question of expressiveness and code-level semantics. You can implement exceptions as subroutine return values, and you can implement error return values with exception-like mechanisms behind the scenes. That should be a different concern from how it looks like at the source-code level.


The main caveat is that oftentimes, checked exception handling doesn't compose well - see what kind of trouble Java gives you, for example.

Recent articles I've read on effect modeling languages seem to give a more uniform construct for bringing checked exceptions in line with other control constructs.


> see what kind of trouble Java gives you, for example.

I program a lot in Java, and the only trouble there is the lack of support for sum types and/or variadic type parameters in generics (i.e. to express functional interfaces that can throw an arbitrary number of checked exceptions, as a type parameter). That’s the only pain point for me and is something that could be fixed.

In fact, the interplay with control structures is exactly what I was referring to by syntactic sugar. Indeed it also concerns the type system.

But let’s talk about that, not about exceptions good/bad or error values good/bad. That’s too simplistic.


I mean, syntactic sugar is one thing, but being entirely incapable of creating variadic exceptions really hamstrings you in places where you don't wish to write a great deal of redundant code for each combination of exceptions you may see.


Completely agreed about variadic exceptions, but that's not a drawback of exceptions, it's a limitation of Java's supported syntax for generics. In catch clauses, Java already has exception sum types (`catch (FooException | BarException ex)`), and Java also has variadic method parameters, so why not have variadic type parameters, maybe something like `Foo<V extends Bar, T... extends Throwable>` that can be instantiated as `Foo<SomeBar, BazException | QuxException>`.


Yep! I'd love to see something like this in Java, but it's unlikely, probably for the usual runtime type erasure reasons. Exceptions can be implemented in this way, and would strike a nice balance in doing so.


> Some errors are unexpected and should stop the program; you want to use exceptions for those.

Precisely the opposite: exceptions are a fail-fast mechanism that gives you an alternative to terminating the program.

Now, as slx26 mentioned, it's only half of the story. Most APIs (including .NET) document exceptions badly, they're not discoverable, and if you try to use them to _recover_ from a condition, you're in for a world of pain. The workflow is usually: attach the debugger, try to make the exceptional condition happen, inspect relevant info in the debugger and write the catch block.

I wish that programming languages supported some contract-like mechanism of declaring: "This method can throw only X, Y and Z". If the method throws anything else, a system-defined "UnexpectedException" would be thrown, encapsulating the invalid one. C++ used this model once upon a time, but they went away from it due to runtime costs and it being little-used. (Also, it terminated the program instead of rewrapping the exception.)

Exceptions are first-class values, but few programmers treat them as such, probably because the programming language allows them to do so. So we should start by fixing PLs.


There's no need for even that complexity. If base exception type had a single property, something like "CanRetry", all exception handling would be simple.

Because when it comes to exceptions there are really only 2 things you can do: abort the current operation or retry it.

The code generating the exception will know which of these is appropriate and the try/catch handler is what would restart or cancel the operation.

The exact details of the exception are for debugging not for program flow.
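
A sketch of the idea in TypeScript (AppError and the single-retry policy are made up, not from any library):

    // the code that throws decides whether a retry could plausibly help
    class AppError extends Error {
      constructor(message: string, readonly canRetry: boolean) {
        super(message);
      }
    }

    function handle(op: () => void): void {
      try {
        op();
      } catch (err) {
        if (err instanceof AppError && err.canRetry) {
          op(); // restart the operation (just once, in this sketch)
        } else {
          throw err; // abort: let it reach the top-level handler
        }
      }
    }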


> If base exception type had a single property, something like "CanRetry",

"Can retry" _WHAT_? This can work if you meticulously rewrap low-level exceptions into higher-level ones that reflect the high-level operation that failed.

Concrete example: FileNotFoundException. I'd say that in "normal" circumstances it's not retriable: you're looking for a file, it's not there, so an exception is thrown. In "unusual" circumstances you're polling (i.e., waiting for a file to appear somehow) or you're an OS shell searching the path for the location of the program.


Where you handle exceptions has nothing to do with where they are thrown. You put your try/catch around whatever operation can be retried at the top level. I don't know why you'd need to rewrap exceptions -- I've never done that.

In that handler, you just need to know if the exception is fatal or a temporary issue for which retrying might succeed. If there was a CanRetry flag on the exception, that makes that determination easy without having to know every potential exception type.

Your FileNotFoundException is a good counter-example as maybe the called code is unable to know the intent. So, yes, in that case the handler has to make this determination based on the type (as we do now) or some code in the middle, that knows the intent, needs to catch and set that flag.

But most of the time one can determine at the point the exception is thrown whether it's a potentially temporary situation (like a network error) or an unexpected, fatal program logic error.

Perhaps "CanRetry" is too prescriptive of a name.


> Where you handle exceptions has nothing to do with where they are thrown.

Funny you say this, when it's demonstrably NOT the case: a throwing method NOT wrapped in a try/catch will NOT have its exceptions handled at the call site. And vice-versa. If you want to retry a particular failing operation, you write try/catch around IT, not several levels up the stack.

You _could_ write it several levels up the stack IF you have precisely typed exceptions, therefore wrapping.

> I don't know why you'd need to rewrap exceptions -- I've never done that. [...] maybe the called code is unable to know the intent

To convey meaningful semantic information about the (business) operation that failed. Updating a record in the database can fail due to business rules (DB constraints), network connection that disappeared, concurrency conflict, transaction deadlock, etc. The user or higher-level code is not interested in the root cause, but in the actual consequence ("Could not update record". And yes, "IsRecoverable" flag, the value of which depends on the inner exception and its properties.).

And yes, the called code rarely knows the intent. There are a bunch of libraries out there being used in diverse contexts. So you catch and wrap the exception. Wrapping wouldn't be needed if library authors were careful about designing their exceptions, but I've rarely seen this to be the case. Even C# guidelines recommend you to use the generic, system-provided exceptions if an "appropriate one" exists. (IMO, a most terrible advice. And I discovered it was terrible by first following it then going back and designing "proper" exceptions for the system.)

> Your FileNotFoundException [...] needs to catch and set that flag.

But the exception type does not have that flag. So you have to wrap it in another exception. (Though all exceptions in C# have a Data field that is object -> object dictionary accessible to anyone. So you could use that.)

> You put your try/catch around whatever operation can be retried at the top level.

What is "top-level" for you? The shell's REPL loop? Exception blocks are non-restartable, so how would REPL continue the path-searching loop that threw FileNotFoundException?

> temporary situation (like a network error)

Ah yes, I love these. Someone pulled the power cable on some router the computer is indirectly connected to. To the program it looks the same as ordinary timeout error. How temporary is it?


> If you want to retry a particular failing operation, you write try/catch around IT, not several levels up the stack.

Generally speaking, when I retry an operation it's pretty far up the stack that I restart it. I'm not retrying sending a single byte, I'm retrying the entire file transfer operation (as an example). If it's a batch job processing data in a loop, then the processing of each item is typically where I would catch and retry or ignore. If the exception actually said "I think you should retry" then it could retry, otherwise it would abort.

> To convey meaningful semantic information about the (business) operation that failed.

Wrapped exceptions tend to provide less information than the root exception. I agree that library authors aren't as careful as they could be about exceptions and you might need to wrap an exception just to make it sane. Generally most libraries throw LibraryException and the real exception, with meaningful information you can action, is in the inner exception. I blame Java's checked exceptions for making that a thing.

> But the exception type does not have that flag.

Right. That's why I proposed it. "If base exception type had a single property... something like "CanRetry"... all exception handling would be simple."

> To the program it looks the same as ordinary timeout error. How temporary is it?

Never forget to put a limit on retries. You could get really clever and put an exponential delay on it.
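
Something like this, in TypeScript; RetriableError stands in for the canRetry flag discussed above, and the limits are arbitrary:

    class RetriableError extends Error {
      readonly canRetry = true;
    }

    const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

    async function withRetries<T>(op: () => Promise<T>, maxAttempts = 5): Promise<T> {
      for (let attempt = 1; ; attempt++) {
        try {
          return await op();
        } catch (err) {
          const retriable = err instanceof RetriableError && err.canRetry;
          if (!retriable || attempt >= maxAttempts) throw err; // always bounded
          await sleep(100 * 2 ** (attempt - 1));               // exponential delay
        }
      }
    }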


May I interest you in looking at Java? It has some interesting lessons wrt your proposition.


> I wish that programming languages supported some contract-like mechanism of declaring: "This method can throw only X, Y and Z". If the method throws anything else, a system-defined "UnexpectedException" would be thrown, encapsulating the invalid one

Boy, do I have some news for you... Like about 25 years old news. Did you ever try java?


Yes. What I'm thinking of are _not_ checked exceptions, at least not at compile-time.


Can you elaborate then?


I did in the original post, and that's why I used C++ as example instead of Java. A method declares `throws X, Y, Z` and _runtime_ checks that no other exception escapes. No source changes needed if you add W to the list. And if some other exception escapes, it's wrapped in `UnexpectedException` that is reserved for and throwable only by the runtime.
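
You can approximate the semantics in userland with a wrapper; a TypeScript sketch (clearly not the runtime-enforced mechanism described above, and every name here is made up):

    class UnexpectedException extends Error {
      constructor(readonly inner: unknown) {
        super("exception escaped the declared contract");
      }
    }

    // "declares throws X, Y, Z" approximated as a wrapper: list the allowed
    // exception classes, and anything else is re-wrapped before it escapes
    function declareThrows<A extends unknown[], R>(
      allowed: Array<new (...args: any[]) => Error>,
      fn: (...args: A) => R,
    ): (...args: A) => R {
      return (...args: A): R => {
        try {
          return fn(...args);
        } catch (err) {
          if (allowed.some((ctor) => err instanceof ctor)) throw err;
          throw new UnexpectedException(err);
        }
      };
    }

    // usage: ParseError is declared; anything else surfaces as UnexpectedException
    class ParseError extends Error {}
    const parseStrict = declareThrows([ParseError], (s: string): number => {
      const n = Number.parseInt(s, 10);
      if (Number.isNaN(n)) throw new ParseError(`not a number: ${s}`);
      return n;
    });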


I guess we could extend Project Lombok to do this. You'll still have your "throws" statement but then the tool would wrap calls to catch those that are not listed in "throws" statement and wrap them as you say.


That'd be cool.

It just occurred to me that I could probably do the same with attributes and DynamicProxy for C#. And also implement additional checks like "IF X is thrown, its properties must satisfy some constraints." (I program both in Java and C# these days.)

Such "UnexpectedException" as I suggested would serve two purposes: 1) well, knowing that something unexpected happened and allowing you to handle it with "last chance handler", 2) helping the developers maintain the contract. If you change a method so that it can throw some new exceptions (compared to the previous version), you've broken its contract/compatibility. This would then show up during testing.

> You'll still have your "throws" statement but then the tool would wrap calls to catch those that are not listed in "throws" statement

Yes. Fortunately, Java allows you to mention subclasses of RuntimeException in "throws" declaration.


But... both purposes 1) and 2) are already served perfectly fine with the current exception models already: you catch Exception at the very top-level in the "last chance handler", and if during testing an unexpected exception is thrown, it bubbles up past the existing handlers right into this "last chance handler". What does re-wrapping help with, exactly?


> it bubbles up past the existing handlers

Unless the method 1) throws an exception which it should not have according to its declarative contract (annotations in Java, attributes in C#), and 2) it gets (erroneously) handled by an intermediate handler.

> What does re-wrapping help with, exactly?

It makes it clear that the method broke its declarative contract, which is what exceptions are _for_. And given the restriction that only the runtime can throw such exceptions, you're sure that no other method can randomly throw such exceptions because... they like it so.


Well, "(erroneously) handled by an intermediate handler" is a tricky situation: would it be really handled incorrectly?

Another question is ergonomics. It's trivial (but tedious) to write code like this:

    class Foo {
       // ...

       public void Frob(...) throws FooException {
           try {
               // ...
           }
           catch (Exception e) {
               throw new FooException(e);
           }
       }

       public void Blarg(...) throws FooException {
           try {
               // ...
           }
           catch (Exception e) {
               throw new FooException(e);
           }
       }
    }
which is actually a "best practice" already ("annotate inner exceptions with some high-level context and wrap them in high-level exceptions"), and adhering to it makes your proposition completely extraneous, because nothing can ever throw an UnexpectedException.

And in before "don't catch and wrap Exception!", consider that Foo may be parameterized by some dependency that may be implemented as a network service, or a disk file, or a DB: three different implementations will throw completely different exceptions: FileNotFound vs NetworkConnectionClosed vs OdbcInvalidManufacturer. Either the dependency interface allows implementations to throw any of those (so that the user of Foo, who knows which implementation it specified, can handle those), or it makes them wrap them all into DepException, but then, again, this means that Foo's methods will catch-wrap-rethrow DepExceptions instead of just Exceptions.


> is a tricky situation: would it be really handled incorrectly?

Depends. You can't know in a situation like

    try {
        DoA(); DoB();
    }
    catch (SomethingEx e) { ... }
If previously only DoB could throw SomethingEx and now DoA() can also throw it, the handler is almost certainly handling it incorrectly if this is some "generic" exception. The C#/.NET base library is notorious for using InvalidOperationException for all kinds of unrelated crap.

> It's trivial (but tedious) to write code like this

Eh. Methods without "throws" would be "unchecked" at runtime as well. So my (imagined) best practice would be that non-private methods declare their exceptions and leave it to the caller whether and how to wrap them.

> And in before "don't catch and wrap Exception!"

Oh, I do that all the time for precisely the reasons you mentioned. Exceptions from the lowest level are most precise and least useful as there's no information about the context. (Unless you go down the unmaintainable rabbit hole of parsing the call stack.)


So, if I understand your proposal correctly, you want to split the current 2-pronged approach "catch everything, handle what you can, re-throw what you can't wrapped in FooException" into a 3-pronged one: "catch what you believe can be handled by you or your users, handle what you can, re-throw what you can't wrapped in FooException, and the system will re-throw everything else wrapped in UnexpectedException". The difference is that if now some dependency of Foo would change drastically, the Foo's users won't be able to accidentally swallow or even see the new exceptions, those will go all the way up as UnexpectedExceptions and will draw the due attention.


> So, if I understand your proposal correctly,

You understand it correctly.

> if now some dependency of Foo would change drastically

The most frequent "drastic change" being writing new code and fixing bugs. It's extremely easy to widen the set of exceptions thrown by a method, and there's zero tooling to help you find out what exceptions can be thrown from the code.

Say what you want about Java, but its division of throwables into errors and exceptions makes sense. Errors like stack overflow, VM faults, etc., should not be exceptions. Under this scheme, they would always propagate out of the method unwrapped. Again, .NET's predefined exception types are botched beyond repair.


"> Some errors are unexpected and should stop the program; you want to use exceptions for those.

Precisely the opposite: exceptions are a fail-fast mechanism that gives you an alternative to terminating the program."

I don't see how this logic flows.

Exceptions do give you the alternative to recover, sure, but the author is saying 'the kinds of errors that produce states where you should stop the program ... use exceptions'.

You're not really disagreeing it seems.

Where you might disagree is that the author is indicating the 'recovery cases' are more suited to being straight error returns, while you're hinting at exception recovery.


> You're not really disagreeing it seems.

The author wrote "stop the program". I took it to mean literally: stop the program, i.e., exit immediately, i.e., crash. That's not acceptable in long-running "service" programs.


I am writing some C++ code for a web application, and there I am handling errors via exceptions. There are two broad types of exceptions, one that is internal and one that needs to be reported to the user. The following is how I am handling the errors; could you all please suggest a better approach if mine is sub-optimal design-wise?

// Base class

    HandleRequest(req, res)
    {
        try {
            try {
                pre_processing(req, res) // implemented by derived class
                process_request(req, res) // implemented by derived class
                post_processing(req, res) // implemented by derived class
            } catch(send_to_user_exception) {
                send_error_to_user(send_to_user_exception.what()) // implemented by derived class
            }
        } catch(internal_exception) {
            log_error(internal_exception.what())
            send_internal_error_to_user(internal_exception.what()) // implemented by derived class
        } catch(unknown_exception) {
            log_error(unknown_exception.what())
            send_internal_error_to_user(unknown_exception.what()) // implemented by derived class
        }
    }
// Each request type is handled by its corresponding derived class and implements the following methods of the base class.

post_processing(req, res) // will throw exceptions of type send_to_user_exception and internal_exception

process_request(req, res) // will throw exceptions of type send_to_user_exception and internal_exception

pre_processing(req, res) // will throw exceptions of type send_to_user_exception and internal_exception

send_error_to_user(error)

send_internal_error_to_user(error)


There's nothing wrong design-wise with your approach, IMO. I've seen several people (including very well-known C++ personalities) argue that exceptions should be used for X and error codes for Y, but this is just convention.

C++-wise, you probably want to catch std::exception and "..." too.

Finally, you said that there's two types of exceptions and only one of them is supposed to be reported to the user, but in your code you seem to report everything to the user. You should edit your message to clarify what you meant.


> C++-wise, you probably want to catch std::exception and "..." too.

Yeah, the "unknown_exception" in the above pseudocode represents that :)

> Finally, you said that there's two types of exceptions and only one of them is supposed to be reported to the user, but in your code you seem to report everything to the user. You should edit your message to clarify what you meant.

Yeah, only one will be reported because only one exception handler will be called. So the internal error will be reported to the user as "internal error" and some internal code that the user can report back to me if they want to. The other error is user error. So broadly there are only two categories of errors.


“Programming with exceptions is difficult and inelegant”

Nonsense.


Personally, i really like having multiple return values, since being able to give a function multiple inputs but only being able to return a single thing always felt weird - if you require any metadata in a language like Java, then you'd have to come up with wrapper objects and so on.

That said, i really dislike the following from the article:

  if (error) {
    // you can handle the error as you see fit
    // you can add more information, end the request, etc.
  }
To me, that's an example of "opt in" error handling, which in my eyes should never be the case. The compiler should force you to handle every exception in some way, or to check for it. My ideal programming language would have no unchecked runtime exceptions of any sort - if accessing a file or something over a network can go wrong in 101 ways, then i'd expect to be informed about these 101 things when i make the dangerous call.

Handling those wouldn't necessarily have to be difficult: in the simplest case, just wrap it in an implementation-specific exception like InputBufferReadException (regardless of whether you're working with a file or network logic) and let it bubble upwards to the point where you actually handle it properly in some capacity, be it with retry logic, showing a message to the user, or letting external calling code handle it.

Why? Because whenever you're given the opportunity to ignore an exception or you're not told about it, someone somewhere will forget or get lazy and as a consequence assumptions will lead to unstable code. If NullPointerExceptions in Java were always forced to be dealt with, we'd either have nullable types be a part of the language that's separate from the non-nullable ones (like C# or TypeScript i think), or we'd see far more usages of Optional<T> instead of stack traces in our logs in prod, because we wouldn't be allowed to compile code like that into an executable otherwise.
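For comparison, here's a minimal sketch of what that looks like in a language with no null references at all (Rust in this case; `find_user` is made up for illustration) - the compiler simply won't let you forget the missing case:

    fn find_user(id: u32) -> Option<String> {
        // None plays the role of null, but it lives inside the type system
        if id == 1 { Some("alice".to_string()) } else { None }
    }

    fn main() {
        // let name: String = find_user(2); // would not compile: Option<String> is not String
        match find_user(2) {
            Some(name) => println!("hello, {}", name),
            None => println!("no such user"), // the compiler requires this arm to exist
        }
    }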

Of course, that's my subjective take because of my experience and things like the "Null References: The Billion Dollar Mistake": https://www.infoq.com/presentations/Null-References-The-Bill...

I think languages like Zig already work a bit like that: https://ziglang.org/learn/overview/#a-fresh-take-on-error-ha...


> The compiler should force you to handle every exception in some way, or to check for it.

This is the single most unproductive mis-feature a language could have for me. Programming is already a tedious exercise of wrangling your thoughts into an alien form the computer can understand. You want, on top of everything else, the computer to refuse to run your program at all, unless you explicitly handle every possible edge case?

I get that some people are engineers with rigid requirements. I'm an artist - I sculpt the program to produce output I'm not entirely clear on. I'm trying to make the computer do interesting, unexpected things.

Say I'm making a game. I wanna load a character sprite from an image file and draw it on the screen. Do I really need to handle all the possible ways that file could fail to load right now, before even seeing a preview of what it should look like? Hell no!

It's like having an assistant who refuses to do anything unless you specify everything! Hey assistant, get me a coffee. "I refuse to get you a coffee because you didn't specify what I should do in case the coffee machine is broken." Aargh!


I don't quite follow. You always have to somehow handle the case where the file does not load successfully. In exception languages that handling might be implicit (raise an exception and crash your program), and in "errors as values" languages you at least have to acknowledge that it could go wrong with something like `image.unwrap()` (which turns it into a program-aborting panic).


One of my personal favorite examples of exception handling was a small GUI app with a single top-level exception handler at the event loop that displayed an error message and continued.

That application was extremely robust. You try and save a file and 100 different things could go wrong (network drive unavailable, file is read-only, etc) but it nicely recovered and you could see what the problem was, correct it, and re-save. One single exception handler for the whole app.


> You always have to somehow handle the case the file does not load successfully. In exception languages that handling might be implicit

I.e. you don't have to handle it.

Until you're polishing the program for a stable release, that is.


Right, in both approaches you can choose to handle the error by ignoring it and crashing. In "errors as values" languages you have to make that choice explicit by marking the line with `unwrap`. Saying that this requirement is "the single most unproductive mis-feature a language could have" is extreme hyperbole, no? Adding `unwrap`s during development to imitate implicit exceptions for fast prototyping takes no time or thought at all.

On the contrary, when you later want to polish your program for release these explicit markings make it very easy to find the points in your code where errors can occur and which you don't properly handle yet.
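For instance, a minimal Rust sketch of that workflow (the file name and functions are made up): during prototyping the `unwrap()` calls stand in for "just crash", and they double as greppable markers for the polishing pass later.

    use std::fs;

    // Prototype: any I/O error just crashes here, which is fine while iterating.
    fn load_level_prototype() -> String {
        fs::read_to_string("level1.txt").unwrap()
    }

    // Polished: same call, but the error is now propagated for the caller to handle.
    fn load_level(path: &str) -> Result<String, std::io::Error> {
        fs::read_to_string(path)
    }

    fn main() {
        println!("{:?}", load_level("level1.txt").map(|s| s.len()));
        println!("{} bytes", load_level_prototype().len()); // panics if the file is missing
    }
Grepping for `unwrap(` and `expect(` then gives a checklist of the call sites that still crash on errors.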


Okay, if there's a simple way to mark some code as "compile this even if it's wrong", it's only a minor annoyance.

But the commenter I responded to seemed to me to be wishing for a language that explicitly disallows that. Maybe I misunderstood?


> You want, on top of everything else, the computer to refuse to run your program at all, unless you explicitly handle every possible edge case?

Precisely!

Even better - let the IDE suggest to you all of the possible exceptions and when you're feeling lazy or are hacking away at a prototype, either let it add a "throws SomeException" to the method signature and make it someone else's problem up the call chain, or just add a catch all after you've handled the ones that you did want to handle!

After all, none of us can recall the hundreds of ways network calls can get screwed up, yet we're pretty sure what to do in at least a subset of those, and we'd forget about them without these reminders. Not only that, but when you're writing financial code or running your own SaaS, you'll at the very least want your error handling code to be as bulletproof as the guarantees offered by your language's rigid type system.

Then, when you've finished hacking together your logic, your instance of SonarQube or another tool could just tell you: "Hey, there are 43 places in your code where you have used logic to catch multiple exceptions" and then you could review those to decide whether further work is necessary, or whether you can add a linter ignore comment to the code explaining why you don't want to handle the edge cases, or just do so in the static code analysis tool, so all of your team members know what's up.

Alternatively, if you're just writing something for yourself, just leave it as it is, knowing that if you'll ever need to publish your code for thousands of others to use, then you probably should go back to those now very visible places and review it.

So essentially:

  /** 
    * Attempts to load a Sprite from a file. You can then use the instance to display it on screen.
    * @param file This is the file that we want to load the image from. Use relative path to "res" directory.
    * Our engine loads PNG files and technically can also load GIF files because someone hacked that functionality together in an evening. 
    * That's kind of slow though, so we should use PNGs whenever possible. See ENGINE-33452 for more details.
    * @return A Sprite instance that you can pass to the rendering logic to put it on the screen, or alternatively process the loaded image in memory.
    */
  public Sprite loadSprite(@NotNull File file) throws SpriteGenericException, FileSystemGenericException {
    try {
      return FileSystemSpriteLoader.loadPNG(file);
    } catch (ImageWrongFormatException e) {
      wrongImageFormatLogger.warn("We found a " + e.getActualFormat() + " format file: " + file.getPath(), e); // the art team should have a look at this
      if (e.getActualFormat().equals(ImageFormats.GIF)) {
        return FileSystemSpriteLoader.loadGIF(file); // TODO unoptimized call because we needed GIFs for ENGINE-33452, remove later
      } else {
        throw SpriteGenericException("We failed to load sprite from file: " + file.getPath() + " because of wrong format: " + e.getActualFormat(), e);
      }
    } catch (SpriteCorruptedException e) {
      brokenImageLogger.warn("We found a corrupted sprite in file: " + file.getPath(), e); // maybe the pipeline is broken again?
      throw SpriteGenericException("We failed to load sprite from file: " + file.getPath() + " because of image corruption", e);
    } catch (Exception e) { // TODO ENGINE-44551 handle the file system access cases later once the API is stable and we know how it'll work on Android
      throw FileSystemGenericException("We failed to load sprite from file: " + file.getPath(), e);
    }
  }
I prefer software blowing up in predictable ways as opposed to doing so unexpectedly. Even Java is vaguely close to being what i'm looking for, however unchecked exceptions simply aren't acceptable from where i stand.


If I had to write that kind of boilerplate every time I had an artistic inspiration, I'd never ship anything!

We are on far-apart sides of a wide industry. I couldn't work productively in your dream language but hey, I'm happy we can have our different tools for our different needs. More power to us! :)

> let the IDE suggest to you all of the possible exceptions

So, programming without an IDE becomes untenable. I use a text editor. It feels like you're shifting language features into the IDE. What's the difference between the compiler doing it automatically vs the IDE doing it automatically?


I definitely agree that we're on the complete opposite ends of a wide spectrum of concerns and goals!

> So, programming without an IDE becomes untenable. I use a text editor. It feels like you're shifting language features into the IDE. What's the difference between the compiler doing it automatically vs the IDE doing it automatically?

I very much agree with this observation, but from the opposite side - for many development stacks and frameworks, working without an IDE feels like being a fish out of water, since there are numerous plugins, snippets and integrations that provide intelligent suggestions, auto-completions and warnings about things that are legal within the language but are viewed as an anti-pattern.

I'd say that the difference between the two is pretty simple, just a matter of abstraction layers. Something along the lines of:

  - the business people have certain abstract goals, which they can hopefully synthesize into change requests
  - the developer has to implement these features, by thinking about everything from the high level design, to the actual code
  - the IDE takes some of the load off from the developer's shoulders, by letting them think about the problem and offering them suggestions, hints and assistance of other sorts to help in translating the requirements into code; of course, it's also useful in refactoring and maintenance as well, letting them navigate the codebase freely
  - the language server, linter, code analysis tools, plugins, AI autocomplete and anything else that the developer should want hopefully integrate within the IDE and allow using them seamlessly, to make the whole experience more streamlined
  - the compiler mostly exists as a tool to get to executable artifacts, while at the same time serving as the last line of defense against nonsensical code or illegal constructs
In essence, the IDE gives you choices and help, whereas the compiler works at a lower level and makes sure that any code (regardless of whether written by the developer with an IDE, one with a text editor or an AI plugin) is valid. In practice, however, the parts that the IDE handles are always more pleasant because of the plethora of ways to interact with it, whereas the output of a compiler oftentimes must be enhanced with additional functionality to make it more useful (for example, clicking on output to navigate to the offending code).

In my eyes, the interesting bits are where static code analysis tools and linters fit into all of this, because i think that those should be similarly closely integrated within the ecosystem of a language, instead of being sought out separately, much like how RuboCop integrates with both Rails and JetBrains RubyMine. Our views may differ here once again, but i think that some sort of a convergence of tooling and its concerns is inevitable and as someone who uses many of the JetBrains tools (really pleasant commercial IDEs), i welcome it with open arms.


Ohh, you could have dependency management built into the IDE (probably already do, I don't know). An integrated profiler could tell you how fast a function is as soon as you write it. I'm getting funny ideas.

What if the IDE worked with a distributed function database, rather than flat text files? Where you could browse (shop?) all the code written by others, by licence, performance, etc.?

Wonder if there are any programming streams/channels I could uh, spy IDE-based development from.


> Personally, i really like having multiple return values, since being able to give a function multiple inputs but only being able to return a single thing always felt weird - if you require any metadata in a language like Java, then you'd have to come up with wrapper objects and so on.

MRV is nice and useful, and “error as value” languages usually have ways to return multiple values (usually in the form of a tuple), but it’s not proper and correct for error signalling, because the error and non-error cases are almost always exclusive.

In that case, using MRV means you have to synthesise values for the other case (which makes no sense and loses type safety), and that you can still access the “wrong” value of the pair.
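A minimal Rust sketch of that contrast (both functions are made up): the tuple shape has to synthesise a number for the error case, and nothing stops the caller from using it, while the sum-type shape makes the two cases exclusive by construction.

    // Product-type / MRV-ish shape: both slots must hold *something*,
    // so the error case has to synthesise a meaningless 0.
    fn parse_mrv(s: &str) -> (i32, Option<String>) {
        match s.parse() {
            Ok(n) => (n, None),
            Err(e) => (0, Some(e.to_string())),
        }
    }

    // Sum-type shape: success and error are mutually exclusive.
    fn parse_sum(s: &str) -> Result<i32, String> {
        s.parse::<i32>().map_err(|e| e.to_string())
    }

    fn main() {
        let (n, err) = parse_mrv("oops");
        println!("{} ({:?})", n, err); // nothing prevents using the bogus `n` here

        match parse_sum("oops") {
            Ok(n) => println!("{}", n),
            Err(e) => println!("error: {}", e), // no `n` exists to misuse on this path
        }
    }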

> To me, that's an example of "opt in" error handling, which in my eyes should never be the case. The compiler should force you to handle every exception in some way, or to check for it.

That is what Rust does (including a clear warning if you drop a `Result` without interacting with it at all), although for convenience reasons (because it doesn’t have anonymous enums and / or polymorphic variants) the errors you get tend to be a superset of the effectively possible error set.

Though that’s also a factor of the underlying APIs: when you call into libc, it can return pretty much any errno, the documentation may not be exhaustive, and the error set can change from system to system. Plus the error set varies depending on the request’s details (a dependency which again may or may not be well documented, and which evolves).

So when you call `open(2)`, you might assume a set of possible errors which is not “everything listed in errno(3) and then some”, but a wrapper probably cannot, outside of one that’s highly controlled and restricted (and even then it’s probably making assumptions it should not).
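For example, Rust’s standard library wrapper just hands back an `std::io::Error`, and its `ErrorKind` enum is non-exhaustive, so you can name the cases you anticipated but still have to keep a catch-all arm for everything else the OS might report (minimal sketch):

    use std::fs::File;
    use std::io::ErrorKind;

    fn describe_open(path: &str) {
        match File::open(path) {
            Ok(_file) => println!("opened {}", path),
            // the cases you anticipated...
            Err(e) if e.kind() == ErrorKind::NotFound => println!("{}: no such file", path),
            Err(e) if e.kind() == ErrorKind::PermissionDenied => println!("{}: permission denied", path),
            // ...and everything else the underlying open(2)/CreateFile can produce
            Err(e) => println!("{}: {}", path, e),
        }
    }

    fn main() {
        describe_open("/etc/hosts");
        describe_open("/definitely/not/here");
    }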


Does a panic count as "handling" the error?

I actually agree with Rust's choice here. You, the programmer, know whether some particular error is something you can cope with or not, and it's appropriate to panic in the latter case. Where you draw the line is up to you: in a ten-line demo, chances are "the file doesn't exist" is a panic; in your operating system kernel, maybe even "the RAM module with that data in it physically went away" is just a condition to cope with and carry on.

My litmus test here is Authenticated Encryption. The obvious and easy design of the decrypt() method for your encryption should make it impossible for a merely careless or incompetent programmer to process an unauthenticated decryption of the ciphertext. This makes most sense if you have an AE cipher mode, but it was already the correct design for both MAC-then-Encrypt and Encrypt-then-MAC years ago, and yet it's common to see APIs that didn't behave this way, especially in languages with poor error handling.

In languages with a Sum type Result like Rust, obviously the plaintext is only inside the Ok Result, and so if the Result is an Err you don't have a plaintext to mistakenly process.

In languages with a Product type or Tuple returns like Go, it's still easy to do this correctly, but now it's also easy to mistakenly fill out the plaintext in the error case, and your user may never check the error. Dangerous implementations can thus happen by mistake.

In languages with C-style simple returns, it's hard to do this properly: you're likely using an out-buffer pointer as a parameter, and your user might not check the error return. You need to explicitly clear or poison the buffer on error and even then you're not guaranteed to avoid trouble.

In languages with Exceptions, the good news is that the processing of the bogus plaintext probably doesn't happen, but the bad news is that you're likely now in a poorly tested codepath that isn't otherwise taken, maybe far from the proximate cause of the trouble. Or worse, your user wraps your annoying Exception-triggering decrypt method and repeats one of the above mistakes since they don't have better options.
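To make the sum-type case concrete, here's a sketch of that API shape in Rust (the names are hypothetical, not any particular crate's interface): the plaintext value simply doesn't exist on the error path.

    // Deliberately opaque: saying *why* decryption failed helps attackers more than callers.
    #[derive(Debug)]
    pub struct DecryptError;

    // Hypothetical AEAD-style decrypt: ciphertext in, authenticated plaintext (or nothing) out.
    pub fn decrypt(key: &[u8; 32], nonce: &[u8; 12], ciphertext: &[u8]) -> Result<Vec<u8>, DecryptError> {
        // A real implementation would verify the tag and only then decrypt;
        // this sketch always rejects, since only the shape of the API matters here.
        let _ = (key, nonce, ciphertext);
        Err(DecryptError)
    }

    fn main() {
        let (key, nonce) = ([0u8; 32], [0u8; 12]);
        match decrypt(&key, &nonce, b"not a real ciphertext") {
            Ok(plaintext) => println!("{} authenticated bytes", plaintext.len()),
            Err(_) => eprintln!("rejected"), // no plaintext variable exists on this path
        }
    }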


> Does a panic count as "handling" the error?

> You, the programmer, know whether some particular error is something you can cope with or not and it's appropriate to panic in the latter case.

Please don't. I've seen enough libraries whose authors had exactly this mindset; I do not enjoy it when some fifth-party dependency thrice-removed, upon encountering an unexpected circumstance, decides that it can't bear to live in this cruel world any more and calls "abort()", killing the entire process: which happens to be a server process running multiple requests in parallel, for which a failure to serve any single request, for any reason whatsoever, does not warrant aborting all the other requests.


> Does a panic count as "handling" the error?

Undeniably? Fundamentally, the language proposes and the developer disposes [0], and short of Rust being a total language, panics were always going to be a thing.

So while one can argue that the ability to panic should not be so prominent, it's certainly an error handling strategy which was going to be used anyway, is perfectly valid (in some situations), and is convenient when you're designing or messing around.

Hell, even ignoring an error is a perfectly valid handling strategy, and indeed pretty easy to implement, just… explicit (though sadly not the most visible: it's much harder to grep for a `Result` being ignored than for one being unwrapped or expect-ed).
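For instance (minimal sketch, `fallible` is made up): the ignore is one quiet `let _ =`, while the unwrap leaves a marker you can grep for.

    fn fallible() -> Result<u32, String> {
        Ok(42)
    }

    fn main() {
        let _ = fallible();          // explicitly ignored: no warning, and hard to grep for
        // fallible();               // dropped implicitly: the #[must_use] lint warns about this
        let n = fallible().unwrap(); // easy to grep, and panics loudly if it's ever an Err
        println!("{}", n);
    }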

The important bit is that Rust warns you about the error condition(s), and lets you decide how to handle it.

[0] though there are panicking Rust APIs where it doesn't just propose


This took a while for me to get used to coming from Java/Python to Go, but I'm very much a convert now - or at least it makes perfect sense for the sorts of Go services we write. It always forces me to think: can this thing fail in normal operation, or is this exceptional? If the former, it's an error value that eventually should be returned to the client in some form. If the latter, it's a panic and I'll see it in Sentry and know I probably have something to fix.


Interesting that the author used joi as the example, when the io-ts validation library fully embraces the functional world with a success/fail return value from validation. Realistically, the advantage throw has is that if you have a single error handler in a function, it avoids the "Go" problem of tons of lines of error handling. The other side is that error handling becomes "optional".


Personally, I gave up on io-ts due to FP complications, and use suretype instead.

In my case, where I was parsing an HTTP request body, it just simplifies the code if I can call a single function and get back a validated object of the expected type, or have it throw an exception if there is a problem.

A global request handler takes care of catching the validation exception and returning a user-friendly error describing which field(s) failed validation.


suretype looks interesting, will have to do a bit of review there. One of the things we're now doing is using io-ts for both encoding and decoding types. Internally we're more JSON than gRPC so we're using it to provide a standardized way to move data between components.


It looks like Go developers never heard about "don't repeat yourself", because after almost every call in Go you write boilerplate error-checking code... So you end up writing at least twice as many lines of code.


It was a very conscious decision by the Go devs: to enforce local error handling instead of the usual wrap-and-rethrow of exceptions.


The article is not about Go, although that was my guess before reading it as well.


You can always... not handle errors or exceptions. You also create employment for on-call engineers and debugging tools; I'd call it a win-win-win :) /s


You need all that error-checking code if you want to recover from errors -- regardless of whether the error comes from an exception or a value.

If you don't want to recover, and just want the error to end the program, you can do that without exceptions too: just call an error() function on error, which prints the error and calls exit(-1). No need for per-line error checking in that case either.


Realistic options are not limited to "handle at every point of the call stack where an error is encountered", like Go, or "end program execution as soon as an error is encountered". Most programs handle most errors by bubbling them up to some top-level event loop and presenting some variant of abort/retry/fail to users. Exceptions are tailor-made to cover this use-case.
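A rough sketch of that shape in Rust (the job and prompt functions are stand-ins for real work and a real UI; here with error values, but the shape is the same with exceptions): failures bubble up out of the work, and a single loop at the top offers abort/retry/skip.

    #[derive(Debug)]
    struct JobError(String);

    enum Choice { Abort, Retry, Skip }

    // Stand-in for real work; every third job fails.
    fn run_job(id: u32) -> Result<(), JobError> {
        if id % 3 == 0 { Err(JobError(format!("job {} failed", id))) } else { Ok(()) }
    }

    // Stand-in for an abort/retry/skip prompt; here it always picks "skip".
    fn ask_user(err: &JobError) -> Choice {
        eprintln!("error: {} -- abort/retry/skip?", err.0);
        Choice::Skip
    }

    fn main() {
        for id in 0..6 {
            loop {
                match run_job(id) {
                    Ok(()) => break,                  // job done, move on
                    Err(e) => match ask_user(&e) {
                        Choice::Retry => continue,    // run the same job again
                        Choice::Skip => break,        // give up on this job only
                        Choice::Abort => return,      // stop the whole loop
                    },
                }
            }
        }
    }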


> errors by bubbling them up to some top-level event loop and presenting some variant of abort/retry/fail to users

The pros and cons of this approach have been covered extensively in the last decade.

the points discussed were --

// <well what's the point of arguing on the net anyway??>


You don't need it everywhere. With both exceptions and error monads, you can have a happy-flow path that mostly leaves out the boilerplate, and handle errors at a reasonable point.

Go forces you to add it even if you want to defer handling to a later point.
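For example, a minimal Rust sketch (the file name and error type are made up): the happy path reads straight through with `?`, and the error gets dealt with once, at whatever point the caller finds reasonable.

    use std::fs;
    use std::num::ParseIntError;

    #[derive(Debug)]
    enum ConfigError {
        Io(std::io::Error),
        Parse(ParseIntError),
    }

    impl From<std::io::Error> for ConfigError {
        fn from(e: std::io::Error) -> Self { ConfigError::Io(e) }
    }
    impl From<ParseIntError> for ConfigError {
        fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
    }

    // Happy path: two fallible steps, no per-call branching, `?` defers both errors.
    fn read_port(path: &str) -> Result<u16, ConfigError> {
        let text = fs::read_to_string(path)?;
        Ok(text.trim().parse::<u16>()?)
    }

    fn main() {
        // The "reasonable point" to handle the error is chosen here, not at each call site.
        match read_port("port.txt") {
            Ok(port) => println!("listening on {}", port),
            Err(e) => eprintln!("bad config: {:?}", e),
        }
    }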


On the same topic: the When An Error Is Not An Exception series: https://dev.to/vncz/series/6223


There's a place for both. Error values for conditions that your client will want to handle, and exceptions or panics for all the fatal failures.


I don't really understand the place of panics. It's not really up to a function to determine whether its error is unrecoverable (especially in the case of a library), it's up to the caller. And an unhandled Exception is, practically speaking, a panic. So it seems to me that Exception covers both use cases.


The way I see it, the issues with Exceptions are with 1) types and 2) the try/catch syntax, and the issue with Error-As-A-Value is that it's cumbersome.

Exceptions solve a very real problem. Sometimes I get to the point where there's nothing more I can do and it's time to start unwinding the stack. I eventually either signal with try/catch that I'm ready to start handling the issue somewhere up the stack or never do and I crash.

Error-As-A-Value addresses the "types" problem (specifically Option types do this; Go ignores this problem AFAIK and errors are poorly supported by the type system) and "forces" users to be explicit, except they can always just ignore the value when they need to anyway but now with added boilerplate. Just as importantly, they propagate this boilerplate to any caller, even if the caller doesn't care. Having to say within each and every caller, no, I really don't care about this error and there's nothing I can do about it right now is tedious, cumbersome, and often truly introduces no value.

I think we can do better than either by allowing the use of both. What if I had the compiler and other tooling keep track of the Exceptions that can be thrown?

    const RandomError = new Error("you have bad luck!")
    const DivideByZero = new Error("cannot divide by zero!")

    // this can only throw RandomError
    const maybeAdd = (a: number, b: number): number => {
        if (randrange(0, 1) > 0.5) throw RandomError
        return a + b
    } ?? RandomError

    // myFun can throw RandomError or DivideByZero, and our tooling
    // will help us keep track of that.
    const myFun = (a: number, b: number): number => {
        if (b === 0) {
            throw DivideByZero
        }
        return maybeAdd(a, b) / b
    } ?? DivideByZero
Well, in TS/JS, now I still need to use try/catch at some point to handle the exceptions this will eventually throw. But maybe an error-as-a-value makes more sense. What if I included sugar that optionally replaced try/catch with error-as-a-value, if that's what the use case called for?

    type Result = {
        ok: number
        error: DivideByZero | RandomError
    }

    // myFun(1, 2)? will return the result type indicated above
    let { ok, error } = myFun(1, 2)?
    while (!ok) {
        { ok, error } = myFun(1, 2)?
    }
    return ok
This is an unfortunately contrived example but I think it demonstrates my point. I don't really see any reason we can't have both in modern languages.

1. The problem with "types" in Exceptions being that you usually don't have any insight into whether or what errors can be thrown in a language that uses Exceptions as the main error handling control flow

2. The problem with try/catch syntax is subjective, but sometimes you don't want to introduce new scopes and at least 4 new lines. And code with extensive error handling becomes unnecessarily littered with try/catch when you would have preferred an abbreviated assignment expression as with error-as-a-value.


I agree that both exceptions and error values (aka result types) have their place. I would say that error values are good for when a caller should explicitly handle that case, and that exceptions are good for errors that a caller should not be expected to handle explicitly. A lot of times this breaks down as meaningful application errors vs operational or programming errors. I am struggling to find the right words for this, so I'll give an example:

Let's say we have a function used to register a new user account on a site like HN.

An error value would be appropriate to return when the username is already taken, so that we can express to the caller that this is a possibility that must be handled. Most likely the caller would want to tell the user. A maintainer doesn't really care when this occurs, since it's part of the application's healthy behaviour.

An exception would be appropriate if the database is unavailable. The caller would not be expected to tell this to the user, nor is there any logical way for the caller to react to this situation specifically. In this example of a web app, the best course of action is likely returning a generic "unexpected error" message and/or an HTTP 500. The caller can typically let the exception propagate to the web layer's top-level exception handler, where it will be logged. As a maintainer of the system, a stacktrace is valuable for pinpointing the problem with the code path that led to it.

(Checked exceptions, where available, blur these lines a bit)

---

In the Java world... (stop reading if you don't care about Java) ...it has been increasingly common to see types like Result<T,E> used for error values. Recently, there have also been additions to the language that make errors-as-values more practical. Sealed classes (a preview feature in Java 16, and a full feature in the soon-to-be-released Java 17) are basically an implementation of sum types (with a characteristically verbose Java-ey syntax) that could be used to implement results. Returning to our example:

    sealed interface RegistrationResult {
          record Registered(Account newAccount) implements RegistrationResult { }
          record UsernameTaken() implements RegistrationResult { }
          ... 
    }
https://openjdk.java.net/jeps/409

Beyond Java 17, you will be able to pattern-match over these with exhaustiveness enforced by the compiler. It will look something like:

    switch(registrationResult) {
            case Registered(Account newAccount) -> ...;
            case UsernameTaken() -> ... ;
            ...
    }
https://openjdk.java.net/jeps/405



