where the error type is unbounded. The compiler currently rejects this. When we have the never type I think this should pass: as long as there is an impl Error for !, we can monomorphise this to fn run() -> Result<(), !>. Will it work like this?
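
A sketch of the kind of code being discussed (a reconstruction, not the original snippet: it assumes run is generic over an error type that its body never produces):

use std::error::Error;

fn run<E: Error>() -> Result<(), E> {
    Ok(())
}

fn main() {
    // E is unconstrained at this call site, so today the compiler rejects
    // it with a type-annotations-needed error. The suggestion is that,
    // given an impl Error for !, it could infer E = ! and monomorphise the
    // call to run::<!>(), i.e. fn run() -> Result<(), !>.
    run().unwrap();
}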

I think never_type used to do something like this, defaulting otherwise-unconstrained inference variables to !, but that change got dropped because it was a breaking change in some cases where the compiler used to infer () instead. I can’t find the issue about that right now, though…

There are some cases where the compiler would reject it and other cases where it would (wrongly) infer (). The plan was to change the latter cases to make it infer ! instead, possibly only in the 2018 edition since it’s a breaking change, though that change got put on the back-burner at some point.

I think it would be nice if the compiler could infer a type in more of these situations. In OP’s example it should infer !, since we have !: Error; where there are other trait bounds, a good mechanism might be to have “default” types for traits, e.g. a new kind of declaration which looks like this:

default<T> IntoIterator<Item = T> + Extend<T> + Default = Vec<T>;

This says that whenever there’s an inferred type which needs to satisfy the bound IntoIterator<Item = T> + Extend<T> + Default then the compiler should infer the type Vec<T>.
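
For example (a sketch; the default declaration itself is hypothetical syntax, and the fallback described in the comments does not exist today), a call site that leaves such a collection type unconstrained could fall back to Vec<T>:

fn collect_evens<C>(limit: u32) -> C
where
    C: IntoIterator<Item = u32> + Extend<u32> + Default,
{
    let mut out = C::default();
    out.extend((0..limit).filter(|n| n % 2 == 0));
    out
}

fn main() {
    // Today this line is a type-annotations-needed error. Under the
    // proposed declaration the compiler would infer C = Vec<u32>, since
    // Vec<u32> satisfies all three bounds.
    let evens = collect_evens::<Vec<u32>>(10); // spelled explicitly for now
    for n in evens {
        println!("{n}");
    }
}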

Edit: Elaborating on that idea a bit:

There would be a base declaration of default ?Sized = !; built into the language. Crates can then add their own default SomeTrait = SomeType; declarations, where coherence rules apply to SomeTrait (e.g. SomeTrait must be defined in the same crate, or at least part of it must be in the case of SomeTrait + SomeOtherTrait). Default type declarations can override other default type declarations by being more specific, similar to how impl specialization works, and the compiler will always pick the most specific declaration which satisfies the required bounds. For example, we could have the declarations:

default TraitA = A;
default TraitA + TraitB = B;

Then if the compiler needs to pick a type which satisfies TraitA + TraitB + TraitC, it will first check if B: TraitA + TraitB + TraitC, then if A: TraitA + TraitB + TraitC, then if !: TraitA + TraitB + TraitC, before failing if none of them fit. The compiler should always mark these inferred types as being inferred in case they ever appear in error messages, and should be able to point the user to the rule that was used to infer the type.
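
A sketch of how that lookup might play out (the default declarations use the proposed syntax and are shown as comments; the rest is ordinary Rust):

trait TraitA {}
trait TraitB {}
trait TraitC {}

struct A;
struct B;

impl TraitA for A {}
impl TraitA for B {}
impl TraitB for B {}
impl TraitC for B {}

// default TraitA = A;
// default TraitA + TraitB = B;

fn needs<T: TraitA + TraitB + TraitC>() {}

fn main() {
    // With T unconstrained, the compiler would work down the list from
    // most specific to least: B satisfies TraitA + TraitB + TraitC, so it
    // would infer T = B (today this has to be written out by hand).
    needs::<B>();
}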

Edit 2: In case anyone needs a motivating example for this: ! can’t implement Future without picking some specific Output type. This means it can’t be used as the inferred type where we have a trait bound of Future<Output = SomeOtherConcreteType>. We could, however, have this instead:
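
(A sketch in the proposed syntax; Pending<T> here stands for a never-completing future such as std::future::Pending<T>, which implements Future<Output = T>.)

default<T> Future<Output = T> = Pending<T>;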

! cannot satisfy all traits; otherwise you could do <! as Default>::default(). The same basically goes for any other trait fn that doesn’t take self, and it is in fact more important for those that also don’t return Self, as they may even have a valid non-panicking implementation.
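
A nightly sketch of the distinction being drawn here (Example is a made-up trait; #![feature(never_type)] is needed to write an impl for !):

#![feature(never_type)]

trait Example {
    // No self and returns Self: for ! this body can only diverge, since a
    // value of ! can never actually be produced.
    fn make() -> Self;

    // No self and no Self in the return type: ! can implement this with a
    // perfectly ordinary, non-panicking body.
    fn name() -> &'static str;
}

impl Example for ! {
    fn make() -> Self {
        panic!("`!` has no values, so this can only diverge")
    }

    fn name() -> &'static str {
        "never"
    }
}

fn main() {
    // The Self-free method is callable and harmless:
    println!("{}", <! as Example>::name());
}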

On this line, <Input as Function>::Output will initially be a type variable and will then unify with the type of x to give tmp: T. However, on the next line, tmp gets used, which will cause that same type variable to try to unify with U. So it should fail with U != T.

I think the U: Future in the function’s type parameters implies that a specific U: Future impl has been passed to the function. In other words, no, there will be one type variable and a and b will both have that type.

Not niko, but I don’t see any way this could be sound. The whole reason associated types exist in the first place is to serve as type-level functions that guarantee uniqueness. Otherwise they’re exactly the same as type parameters. Allowing multiple impls of the same trait where the associated types are different would violate some very fundamental assumptions about the type system.
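
A small example of the uniqueness guarantee that downstream code relies on (Container and Wrapper are illustrative names):

trait Container {
    type Item;
    fn first(&self) -> Option<&Self::Item>;
}

struct Wrapper(Vec<u8>);

impl Container for Wrapper {
    type Item = u8;
    fn first(&self) -> Option<&u8> {
        self.0.first()
    }
}

// The projection C::Item is only well-defined because at most one
// `impl Container for C` can exist; two impls with different Items would
// make this return type ambiguous.
fn peek<C: Container>(c: &C) -> Option<&C::Item> {
    c.first()
}

fn main() {
    let w = Wrapper(vec![1, 2, 3]);
    assert_eq!(peek(&w), Some(&1));
}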

I don’t see how this would be any more unsound than allowing multiple impls per (type, trait) pair and subjecting the choice of impl to inference. The general case may be problematic, because type inference may be insufficient to disambiguate between two different impls, but for the natural, vacuous impls for ! the compiler can exploit the fact that they are all functionally identical, so the choice doesn’t matter.

But even though I don’t think this would be necessarily unsound, I’m not sure this feature would fit Rust very well; it’s quite incongruous with the rest of the type system, where a (type, trait) pair is supposed to determine the impl unambiguously, if one exists. If we do add this kind of inference-driven impl selection for traits, I’d rather have it available for all types, not just the empty type.

I am really not keen on using reasoning like “the impls are vacuous.” You’re talking as if having a ! appear as a type argument is a logical contradiction… but ! is merely an empty set of values, not an empty set of types.

No, having ! as a type argument is not a contradiction. But having a !-typed value as a function argument is a contradiction.

! has an empty set of possible values. Therefore, it is possible to write an implementation for any function taking a !-typed argument, and any two such implementations will vacuously have the same computational content. Implementing a trait for a type is mostly a matter of implementing the trait’s methods; thus, any trait for which all methods take a Self-typed* argument can be implemented for !, and any two such implementations will behave identically (that is to say, not behave at all, because their methods cannot be invoked in the first place). This property doesn’t hold for all traits (Default is the usual counterexample), but for many of them it does.
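
To illustrate, a nightly sketch with a made-up trait whose only method takes self by value (#![feature(never_type)] is needed to write the impl):

#![feature(never_type)]

trait Describe {
    fn describe(self) -> String;
}

impl Describe for ! {
    fn describe(self) -> String {
        // self has type !, so this body is unreachable; the empty match
        // makes that explicit. Any two impls written for ! here behave
        // identically, because neither can ever be invoked.
        match self {}
    }
}

fn main() {} // nothing to call: no value of ! can be constructed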