I'm firmly in the camp that believes that exceptions are a false economy.
The post links to an "Exception Smells" post that doesn't mention one of my pet peeves: exceptions as control flow. For example, Java's parseInt [1] throws a NumberFormatException if the string can't be parsed. IMHO this is terrible design. As a side note, checked exceptions are terrible design.
I wrote C++ with Google's C++ dialect, where exceptions were forbidden. Some chafed under this restriction. It was largely a product of the time (i.e. more than 20 years ago now when this was established). There's still debate about whether it's even possible to write exception-safe C++ code. At the very least it's difficult.
So Google C++ uses a value-and-error union type, open sourced as absl::StatusOr [2]. The nice thing was you couldn't ignore this; the compiler enforced it. If you really wanted to ignore it, it had to be explicit, e.g.:
  foo().IgnoreError();
But here's where the author lost me: this chaining coding style he has at the end. To make it "readable" a bunch of functions had to be created. You can't step through that code with a debugger. The error messages may be incomprehensible.
I much prefer Rust's or Go's version of this, which is instead imperative.
> To make it "readable" a bunch of functions had to be created.
and in doing so created code that was far more self-documenting and evidently correct. the counter-example was nowhere near as easy to reason about (imo), even for the simple example.
> You can't step through that code with a debugger.
i don't think it's fair to judge the idea based on the quality of existing debuggers. in any case, i don't think it's true that you can't step through this code with a debugger (generally): VSCode & Roslyn have no issue with this sort of structure in C#.
> The error messages may be incomprehensible.
the ones in the example may be. i've worked in a large code-base using this approach before, and there was rich error information. transformations of failure states (i.e. the Error values) are easier to do with context, as opposed to your catch-block which has no knowledge of the context in which an exception was thrown.
> I much prefer Rust's or Go's version of this, which is instead imperative.
go's (err, val) "error handling" paradigm is, imo, its worst feature. i can't speak for rust. whilst your assertion that a failure condition is impossible to reach may be true for the code you write today, it almost certainly won't be in the future.
how many error dialogues have you seen saying some variant of "unreachable state reached"? :)
> For example, Java's parseInt [1] throws a NumberFormatException if the string can't be parsed. IMHO this is terrible design.
It's unergonomic design, but it's the _correct_ design: the method is declared to return an int, and if it can't fulfill that promise, throwing an exception is the right thing to do.
A null doesn't contain any information about what went wrong, and unless you religiously check your objects for null values at every turn, you just turned a clear stack trace into a search for Waldo at the international Waldo impersonators meetup.
Note that in Kotlin the return type is `Int?`, not `Int`. You can't forget to check for null because the compiler enforces it.
To your first point: Another example in the design space would be Rust which works very similarly to Kotlin but returns more information in the failure case.
But the mechanism is just wrong; Exceptions are heavyweight and should only trigger with unexpected issues, bugs that a developer wants to see a stacktrace for.
I mean in this case you could consider it developer error; a developer tried to parse an integer without first validating the input and checking if it COULD be parsed. But it's normalized to just "let it crash", instead of writing additional pre-check code.
With errors as values - like Go has normalized - instead of a big, expensive, potentially panicky exception, you just get a lightweight error option back. With the more functional approach of Either, you are basically forced to deal with the "but what if I can't parse it" code branch.
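A minimal sketch of that in Go, using the standard strconv package (parsePort and the port-number rules are illustrative choices, not from the thread):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort converts a string to a TCP port number. The
// failure case is an ordinary return value: no stack
// unwinding, and the caller has to consciously discard err.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, fmt.Errorf("port %d out of range", n)
	}
	return n, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("port:", p) // prints "port: 8080"
	}
	// The error carries its own context; no catch block needed.
	if _, err := parsePort("http"); err != nil {
		fmt.Println(err)
	}
}
```

Note how the "but what if I can't parse it" branch sits right next to the call, rather than in a catch block somewhere up the stack.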
> a developer tried to parse an integer without first validating the input and checking if it COULD be parsed
Is there a meaningful difference between validating that input can be parsed and parsing it?
Any validator that doesn't actually parse the data - according to the same rules the parser uses - runs a risk of incorrectly passing/failing certain cases, no?
So here's a pattern neophytes often end up implementing:
  if (isValidInput()) {
    parseInput();
  }
It seems like a good idea but it's not, for several reasons:
1. It requires an explicit second step. Worse, the behaviour of the second step may be undefined if the first step didn't take place. Either way it's just error-prone;
2. While this may be correct for a static string with static rules, what if this isn't stateless? You've now introduced a race condition;
3. If each step requires context (eg options) you need to correctly pass them to both. This is another potential source of error; and
4. You've created what's likely an artificial differentiation between valid and invalid input. What happens when that changes and you need to distinguish between invalid (not a number) and invalid (number out of range)?
(2) is particularly common in security contexts. You'll see this pattern as:
  if (userCanEditPhoto()) {
    editPhoto();
  }
Usually you can't even do this: the primitives won't be there, and for good reason. Engineers should instead change their mindset to implementing these things as an atomic action that gives a reason for failure. Exceptions are one version of this; they're just bad for other reasons.
Java needs a TryParseInt (sorta like C# has) so you can use either one as appropriate.
There are actually two main use cases for integer parsing: one where the value is expected to be an integer (you're parsing a file format) and the other where it's just likely not to be an integer (getting input from the user).
To me this is not "exceptional" at all, as it is easy to call that function with a non-number, and it should return "normally" that the input was not an integer. I much prefer Rust's Result or C++'s std::expected.
It has nothing to do with "exceptionality". It has to do with the method not being able to fulfill its contract. Rust's returning `Result<>` is a _different contract_.
Checked exceptions are horrible to work with, but they require the programmer to do something about them at the call site, whereas unchecked exceptions can just bubble up to an unexpected point in the stack, which is arguably worse.
[1]: https://docs.oracle.com/javase/7/docs/api/java/lang/Integer....
[2]: https://abseil.io/docs/cpp/guides/status