Stroustrup’s observation is more general than anything to do with C++ error handling, and he is not exempt from it.

The point is that when a feature is new, we instinctively want it to stand out, for fear of missing its uses. A better analogy than noexcept might be Rust’s transition from try! to ?, which did indeed make the error-propagation syntax lighter-weight once we had more experience with it.
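For concreteness, a sketch of that transition (the function name and path argument are illustrative):

```rust
use std::fs::File;
use std::io;

// Before Rust 1.13 one wrote: let f = try!(File::open(path));
// The ? operator replaced it with a postfix, lighter-weight form:
fn open_config(path: &str) -> Result<File, io::Error> {
    let file = File::open(path)?; // early-returns the Err on failure
    Ok(file)
}
```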

? is about as short and unobtrusive as it can be, though. I wouldn’t ever want to see error propagation become implicit, not least because it would then be less obvious which functions do and don’t return Result. For that matter, it would make refactoring more hazardous: if a function started returning Result, its error would silently get propagated, rather than the return-type change prompting the programmer to look at call sites and decide whether and how to propagate the error.

Rust has explicit error propagation, not implicit unwinding-style exception handling. A change like this would take away too much of that explicit control.

(I do think, ultimately, that we’re going to need a postfix form of await to avoid the problem of alternating between the left and right sides of an expression when reading. If we had that, I think that’d make explicit await less obtrusive.)

For that matter, it would make refactoring more hazardous, because if a function started returning Result, its error would get propagated rather than the return type change leading the programmer to look at call sites and decide whether and how to propagate the error.

While I prefer having the ? marker, I don’t think this would be that bad, because it would break more often than that statement implies. If an infallible function were calling it, it’d break, and you’d have to decide how to handle it. If it were called in a fallible function but the error type was incompatible, it’d break, and you’d have to decide how to handle it. So the only case that just compiles is something like “a try function that’s already returning io::Result, calling a function that starts also returning an io::Result”, where I feel like the vast majority of the time “yup, propagate it” is what’s wanted anyway. And changing a return type is a major version bump, so things far more drastic than that could already be changing in ways that aren’t type changes.
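A minimal sketch of that one silently-compiling combination, with made-up names:

```rust
use std::fs::File;
use std::io::{self, Read};

// A function that (now) returns io::Result...
fn read_file(path: &str) -> io::Result<String> {
    let mut s = String::new();
    File::open(path)?.read_to_string(&mut s)?;
    Ok(s)
}

// ...called with ? from a caller that already returns io::Result:
// this is the combination that keeps compiling unchanged, and here
// "yup, propagate it" is almost always the right behavior anyway.
fn first_line(path: &str) -> io::Result<String> {
    let contents = read_file(path)?;
    Ok(contents.lines().next().unwrap_or("").to_string())
}
```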

Thanks @ExpHP, this is the type of thing my original question was hoping to find in an existing discussion since I haven’t explored the corner cases yet myself.

If I understand the point you’re making correctly, you’re saying that if a function foo() returns Result and the Ok and Err types both have the same function bar(), then try { foo().bar() } would need to be disambiguated?

But doesn’t what I wrote handle this case? Since the expression is inside try, it would automatically be taken by the compiler as: try { foo()?.bar() }

On the other hand, having some context in which errors are propagated implicitly would open the door to an interesting kind of polymorphism- call it “try-polymorphism.” The very same behavior that @josh is concerned about—a function call starting to propagate an error without the caller making a decision on how to handle it—would be quite useful in functions that take and call closures.

For example, look at the unwrap_or_else function. Its closure argument must return the same type as the Result's T parameter, which means that if you want to use ? inside it, you have to write something like this:

Note the extra .map(Ok) and the double ?. This has to be worked through, ad-hoc, for every combinator you might use. If we made try fns implicitly propagate the errors of other try fns they call, and made those combinators try-polymorphic based on their closure argument, you might instead write this:
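Under that design, a try-polymorphic unwrap_or_else might let one write something like this (hypothetical syntax, not valid Rust today; `fallback_from_env` is any fallible fallback):

```
// Hypothetical: inside a `try fn`, errors from try-polymorphic calls
// propagate implicitly; no .map(Ok) and no trailing ? are needed.
try fn parse_or_fallback(s: &str) -> i32 {
    s.parse::<i32>().unwrap_or_else(|e| fallback_from_env(e))
}
```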

This is also doable without the implicit propagation, but it would make writing try-polymorphic functions kind of odd- what would be the type of a function call expression that may or may not be try? We’d have to extend try-polymorphism from functions to values. And then ? would have to operate on those ?Results as well as on Results.

This is far more important for Futures, where you can’t just .map(future::ok) because that results in a different Future type than the async closure’s. You would actually have to use a completely separate combinator function that takes async closures! And a “polymorphic value” of type ?Future<Output=T> is even weirder than the above ?Result<T>- the async-polymorphic combinator’s choice of where to ?await! would change the order of execution, not just where it early-exits.

I doubt this polymorphism is worth that complexity at the value level, but the complexity at the function level is far less. C++ already has it in the form of noexcept expressions, which they use to e.g. select a faster algorithm for std::vector resizing when the element is noexcept-movable. And I know we’ve all run into the desire to write ? in a closure before.

Explicit ? is really helpful to newcomers to an existing codebase. The ? makes it clear which functions return a Result and which are infallible. All with just one character! No tooling, no looking at the signature required.

This is also doable without the implicit propagation, but it would make writing try-polymorphic functions kind of odd- what would be the type of a function call expression that may or may not be try? We’d have to extend try-polymorphism from functions to values. And then ? would have to operate on those ?Results as well as on Results.

If you’re polymorphic over the error type, then you can have Result<T, !> as a result type. Figuring out the inference/coercions needed to get this to work smoothly might be a trick, though.
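On stable Rust, where `!` isn’t usable in this position yet, `std::convert::Infallible` plays the same role; a sketch (names invented):

```rust
use std::convert::Infallible;

// A Result<T, Infallible> can never be Err, so an "infallible"
// function still fits the same Result-shaped signature.
fn always_five() -> Result<i32, Infallible> {
    Ok(5)
}

fn unwrap_infallible<T>(r: Result<T, Infallible>) -> T {
    match r {
        Ok(v) => v,
        Err(never) => match never {}, // statically unreachable
    }
}
```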

On the other hand, having some context in which errors are propagated implicitly would open the door to an interesting kind of polymorphism- call it “try-polymorphism.”

OO languages with traditional (but somewhat saner) exception handling have been doing something similar since forever; excuse me for the Swift example again, but that’s the language I write most of my code in these days. So, here’s the deal: Swift has almost the same thing in the form of a rethrows keyword, which can be applied to higher-order functions to denote that the higher-order function may throw if and only if its function-typed parameter may throw. (Meanwhile, closures passed as an argument to such functions have their throwing/non-throwing nature type-inferred.)

I can tell you from experience that it is very annoying, regardless of whether you are the author or consumer of such higher order-functions:

From the author’s perspective, you basically have to manually annotate every single higher-order function you write.

But from the consumer’s perspective, if the author of a higher-order function forgot to annotate just one of them, then BOOM you now can’t call it with a throwing closure at all.

I think with anything like this, we are drifting more and more in the direction of “traditional” exception handling, which has the severe downside that if exception throwing is explicit in types, then it adds a completely superfluous dimension to function types. And if it’s not, it doesn’t do any good, it basically makes the language dynamically-typed in this regard, making it harder for both humans and the compiler to reason about all code.

The only change this try-polymorphism would cause, apart from enabling one to type one less Ok and one less ? (which is hardly an issue currently anyway), is that now everyone will have to remember to make their functions “try-generic” for absolutely no other benefit. I still don’t think that occasionally having to type .map(Ok) is a problem, and having to type ? everywhere a Result-typed, early-returning call is made is downright an advantage. As an aside, I write a lot of Result-heavy code, and these sorts of issues come up very rarely; when they do, they are trivial to solve with existing language and library features.

So, back to the topic, one of the core strengths in Rust’s Result- and convention-based error handling is that it doesn’t require a completely redundant dimension to the type system. There are no throwing or non-throwing functions; there are just functions that return a Result, which is a regular type, with several useful combinators. Don’t forget that in the example you cited, all parts of the code have a meaning and a purpose: to ensure type safety. This would no longer be the case with the “try-polymorphic” approach: increasingly more things would now be implicit and magical, and I don’t think that’s a very good direction for Rust to go in.

A long time ago we had an effect system and we made pure the default (since we didn’t want people accidentally leaving it out due to sloth) and we made the impure specifier a very small and reasonable keyword: “io”. It was still a heavy complexity bill (required a full extra dimension of subtyping, parametricity, etc.) and still had people breaking the rule with unsafe, which meant that the putative benefits like “can do compile time evaluation” or “can spread on a GPU” weren’t there anyways. And people couldn’t do simple things like “put a printf in here for logging” (much like in Haskell).

Eventually people just took to making everything io, at which point it was a noise word and we decided to remove it (along with ‘pred’, which just meant pure, bool, and tracked by the typestate layer).

I understand traditional exception handling has problems, but that doesn’t mean literally everything that touches it is purely a downside. try-polymorphism doesn’t have to add a new dimension to function types or make errors dynamic- it can mean nothing more than “this might return T or it might return Result<T>.”

And, again, this is not about “enabling one to type one less Ok and one less ?.” I never even claimed that was a problem that needed solving. This is about making higher order functions usable in more scenarios, enabling one to avoid duplicating their entire implementation and/or duplicating the plumbing into and out of them.

So, while I do have reservations about this kind of thing, literally nothing you’ve said has anything to do with them! try-polymorphism’s downsides have far more to do with language complexity than anything to do with traditional exception handling.

We already have to “annotate every single higher-order function we write,” in the form of the return type of the closure argument(s).

And that’s exactly why try-polymorphism would be redundant. We already have types to tell whether a function is fallible or not, or whether a higher-order function can work with any function argument, only fallible ones, or only non-fallible ones that return a specific non-Result type.

I don’t see how plain old generics wouldn’t work for that? This compiles regardless of whether or not U is Result:

fn higher_order<T, U, F: FnOnce(T) -> U>(arg: T, f: F) -> U {
    f(arg)
}

If you need to use a function based on whether or not it returns a Result… well, that’s because fallible operations are conceptually different from non-fallible ones. But even in that situation, you can for the most part just switch between map/map_err and and_then/or_else on Result…
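A minimal illustration of that switch (names are made up): map takes an infallible step, and_then a fallible one.

```rust
// Infallible step: closure returns a plain value.
fn double(n: i32) -> i32 {
    n * 2
}

// Fallible step: closure returns a Result.
fn checked_halve(n: i32) -> Result<i32, String> {
    if n % 2 == 0 { Ok(n / 2) } else { Err("odd".to_string()) }
}

fn demo(input: Result<i32, String>) -> Result<i32, String> {
    input
        .map(double)             // closure returns T
        .and_then(checked_halve) // closure returns Result<T, E>
}
```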

rpjohnst:

I understand traditional exception handling has problems, but that doesn’t mean literally everything that touches it is purely a downside.

And I never asserted that, however several of its particular properties are, including implicit control flow.

I believe this would just be BAD™. As I previously argued in other threads, I felt it was a mistake for Rust to adopt try/catch terminology for its error handling, as it would encourage endless bike-shedding on making it work more like traditional try…catch…finally in other languages. This thread seems to be an example of that. Hard Down-Vote.

I really wish Rust had adopted different terminology. I think this is going to be an endless debate and continuously be “suggested” by everyone new to the language.

Please DO NOT introduce anything implicit into the language that can have effects on control flow, data types, or potentially-expensive or otherwise interesting operations.

One of my personal favourite design aspects of Rust is that it forces everything to be explicit, while still having concise and clear syntax (not being too overly verbose). I like the fact that .clone() is explicit, unlike C++ where implicit deep copies can silently happen in many places unnoticed. I like that all type conversions are explicit and that the language does not silently convert between integer types like C does. I like that there are explicit Result types that I am forced to explicitly do something about (even if it is as simple as a single ? character, or just acknowledging them with let _ = unused_result()). It makes error handling very clear and nice. You can’t have random things throw exceptions out of nowhere.

Please DO NOT introduce implicit things. No implicit await. No implicit ? (FFS, it is just a single character- is it really such a problem to type? It makes it clear and obvious where error handling occurs, which is a big benefit).

That said, I am OK with implicit dereferencing, which Rust has had from the beginning (and now has for match since 1.26). It is a really simple operation and it doesn’t hide important information. It is also really obvious where it happens.

You even call some of these out, but you don’t really justify why they’re okay while try/async would not be. There’s been quite extensive discussion here, more blanket “DO NOT” declarations do not really help advance that discussion.

How do Drop and especially Deref affect control flow? Drop is a special case in this regard anyway: the whole point of RAII is that 1. we intuitively expect objects to be destroyed upon scope exit, and 2. we don’t want to manage memory manually, because that leads to errors. So Drop's automatic behavior actually prevents large classes of errors and is very easy to understand, because it’s expected. Furthermore, it really doesn’t do anything at the higher semantic level. It merely cleans up exactly when necessary (which is ensured by scopes, ownership, and dropck), and thus it can’t really lead to errors. This isn’t true of several (most) features proposed in this thread.
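That expected, deterministic behavior is easy to see in a small sketch (names invented): drops run at scope exit, in reverse declaration order.

```rust
use std::cell::RefCell;

struct Logger<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl Drop for Logger<'_> {
    fn drop(&mut self) {
        // Record when this value is destroyed.
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _a = Logger { name: "a", log: &log };
        let _b = Logger { name: "b", log: &log };
    } // scope exit: _b is dropped first, then _a
    log.into_inner()
}
```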

How does inference affect types? It doesn’t, really… it just deduces or computes types when possible, but it doesn’t change them.

Copy isn’t really potentially expensive as only trivially-memcpy-able types without heap allocation can ever be Copy, and usually they are very small. Deref isn’t allowed to be expensive, either. Either the documentation or the Rust book (I can’t remember which one, maybe both) says explicitly that AsRef, Deref, and similar conversions should only be implemented if the conversion is trivial or almost trivial, and that it must never involve expensive operations such as dynamic allocation or lots of copying.