I was going to try and look for a "coding partner" to work on PRs; maybe that could be you? :)

Sounds good to me :)

I'm just getting up to speed with, well, everything, but it sounds like lazy normalization is part of the real near-term solution for some const generics bugs. I want to keep on top of work around lazy normalization, but don't want to step on anyone's toes.

If there's a way I can at least follow along with the lazy normalization development (and help out if desired), it would really help me contribute time effectively!

Hi @ranweiler and @Aaron Hill -- I recently spent some time investigating "lazy norm" as well as const generics. I'll try to leave some notes here.

OK, so, I've done some digging, though I also got distracted by some other things.

One thing I've done is create a branch that "fixes" the generics of constants so that they properly inherit generics from their parents. It's currently ICEing with an assertion failure, and I'm trying to figure out why -- I've been making some edits to dump more info, since the current debug output doesn't give enough.

The idea here is that this change is the one that is theoretically blocked by lazy norm, and I'd like to understand why. I suspect we could use the existing "eager norm" strategy here as well, and that it should work "at least as well" as it does for associated types today, but I'm not entirely sure.

(That's why I'd like to reproduce the problem)

Separately, I think what I'd like to do is try integrating lazy norm into compilation on a branch and just see what problems we hit. I spent a bit of time thinking about what that would mean, but didn't get too far -- I'm debating how to get started, basically. One problem is that the existing code manually invokes normalization in all kinds of places ("eager norm"). I suppose it would suffice to insert some logic into unification to normalize-if-needed, and then to remove some of those eager normalization sites -- we don't necessarily have to remove all of them.
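To make the "normalize-if-needed during unification" idea concrete, here's a toy sketch (none of these names are rustc's actual API -- it's a stand-in model): the unifier only resolves a projection-like type at the moment it blocks a comparison, rather than eagerly normalizing everything up front.

```rust
// Toy model of lazy normalization inside unification: a projection is
// only resolved when the unifier actually needs to compare it.

#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Int,
    Bool,
    // Stand-in for an unnormalized associated type / unevaluated const.
    Projection(&'static str),
}

// Pretend normalization table: what each projection resolves to.
fn normalize(ty: &Ty) -> Ty {
    match ty {
        Ty::Projection("IntoIter::Item") => Ty::Int,
        other => other.clone(),
    }
}

// Lazy unification: only call `normalize` when a projection blocks progress.
fn unify(a: &Ty, b: &Ty) -> bool {
    match (a, b) {
        (Ty::Projection(_), _) | (_, Ty::Projection(_)) => {
            let (na, nb) = (normalize(a), normalize(b));
            // Avoid infinite recursion if normalization made no progress.
            if (&na, &nb) == (a, b) {
                return a == b;
            }
            unify(&na, &nb)
        }
        _ => a == b,
    }
}

fn main() {
    assert!(unify(&Ty::Projection("IntoIter::Item"), &Ty::Int));
    assert!(!unify(&Ty::Bool, &Ty::Int));
    println!("ok");
}
```

The point of the sketch is just the shape: eager-norm call sites can be removed one at a time, because any projection they would have resolved still gets resolved when unification reaches it.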

Still, there are also some questions about the strategy that chalk is using right now -- so I spent some time thinking over alternatives there. I should leave some notes in chalk#234, but I have to run at the moment.

@nikomatsakis is there anything I can help with at the moment? Should I take a look at that branch, and see if I can figure out the root cause of the assertion failure?

Hello folks! I've been told that you, @nikomatsakis, are the one to ask about anything relating to lazy normalization, so I'm asking here: I've got a good amount of free time and would love to help get lazy normalization working with const generics. Alas, OpenGL knowledge doesn't exactly help with contributing to rustc, or compilers in general, so I'm completely out of my depth and am basically a complete beginner wrt rustc. That said, are there any problems suitable for first-time contributors that I could work on that would help you get this closer to being up and running?

This is rather hard to read, but the key bit is the ReLateBound(DebruijnIndex(1), BrAnon(0)) -- this basically says there is a region with De Bruijn index 1, but we only have one level of binder in scope here, so we would not expect a region with index > 0. Now the question is where this type came from.
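For anyone following along, here's the invariant that assertion is enforcing, in toy form (assumed names, not rustc's actual types): a late-bound region's De Bruijn index must be strictly less than the number of binders currently in scope.

```rust
// Toy sketch of the binder-depth invariant behind the ICE.

#[derive(Debug)]
enum Region {
    LateBound { debruijn_index: u32 },
    Static,
}

// Is this region well-formed at the given binder depth?
fn region_is_valid(region: &Region, binders_in_scope: u32) -> bool {
    match region {
        Region::LateBound { debruijn_index } => *debruijn_index < binders_in_scope,
        Region::Static => true,
    }
}

fn main() {
    // Index 0 under one binder: fine.
    assert!(region_is_valid(&Region::LateBound { debruijn_index: 0 }, 1));
    // Index 1 under one binder: exactly the broken case from the debug dump.
    assert!(!region_is_valid(&Region::LateBound { debruijn_index: 1 }, 1));
    println!("ok");
}
```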

I think it has something to do with the core::Searcher trait (which actually is a common source of assertion failures of this kind). I think it's arising when compiling this snippet:

My guess is that somewhere relatively early on we are miscounting the depth or otherwise mishandling the 'a that appears in there. I'm not quite sure. I guess trying to minimize the chunk of libcore that encounters the problem might be a start; I've done that in the past.

I think it might be caused by the fact that I'm using incremental compilation (./x.py build -i) - that stack trace involves hashing a type when a query is run

This seems like a legitimate error - we're trying to run the const_eval query on a type that still has inference variables, and we don't know how to hash that type for the query key.

How should lazy normalization interact with incremental compilation? I would assume that we would want to skip caching any queries involving inference variables - but we'd probably want some way of marking which queries expect inference variables, to prevent any regressions
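Here's a toy sketch of the "skip caching when inference variables are present" idea: the query engine checks a needs-infer-style flag on the key before touching the incremental cache. All names here are illustrative, not rustc's.

```rust
// Toy query cache that refuses to store keys containing inference
// variables, since those are meaningless across compilation sessions.

use std::collections::HashMap;

#[derive(Clone, Hash, PartialEq, Eq, Debug)]
enum Ty {
    Int,
    Infer(u32), // contains an inference variable: InferCtxt-local
}

fn has_infer_vars(ty: &Ty) -> bool {
    matches!(ty, Ty::Infer(_))
}

struct QueryCache {
    entries: HashMap<Ty, String>,
}

impl QueryCache {
    // Only cache keys that are fully resolved.
    fn insert(&mut self, key: Ty, value: String) {
        if !has_infer_vars(&key) {
            self.entries.insert(key, value);
        }
    }
}

fn main() {
    let mut cache = QueryCache { entries: HashMap::new() };
    cache.insert(Ty::Int, "sized".into());
    cache.insert(Ty::Infer(0), "???".into()); // silently skipped
    assert_eq!(cache.entries.len(), 1);
    println!("ok");
}
```

Marking which queries *expect* inference variables could then be a per-query flag consulted at this same choke point.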

I found two main issues, which I (maybe) fixed on that branch:
1. Trait selection needs to have constants evaluated, since their values can affect trait selection (e.g. impl MyTrait for MyStruct<{1 + 1}>).
2. The const_eval query was receiving types with inference variables in them. This is a more general problem, affecting any query that attempts to do any sort of trait selection on types it receives (e.g. is_sized_raw). When we canonicalize the predicate we're trying to select, we attempt to resolve any region inference variables found in the predicate. However, when we cross a query boundary, we create a new InferCtxt. This means that we end up trying to resolve inference variables from one InferCtxt in a completely different InferCtxt.
The workaround I came up with was to replace all ReVars with ReErased when running const eval. However, this might be completely wrong - my goal was just to see how far the compiler could get.
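Point 1 can be seen in a minimal stable-Rust example: to decide whether MyStruct<2> implements MyTrait, trait selection must first evaluate the constant { 1 + 1 } in the impl header down to 2 (the trait/struct names here are just the placeholders from the list above).

```rust
// Why trait selection needs const eval: the impl is written with an
// unevaluated const expression, but selection must match it against
// the literal `2` at the use site.

trait MyTrait {
    fn describe() -> &'static str;
}

struct MyStruct<const N: usize>;

impl MyTrait for MyStruct<{ 1 + 1 }> {
    fn describe() -> &'static str {
        "impl for MyStruct<2>"
    }
}

fn main() {
    // Selection only succeeds because `{ 1 + 1 }` evaluates to `2`.
    println!("{}", MyStruct::<2>::describe());
}
```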

After applying those workarounds, the compiler was able to build libcore, libstd, and several other crates. However, I wasn't able to fully bootstrap the stage1 compiler, due to a legitimate cycle error. Running const_eval eventually ended up needing to run param_env - and since param_env does normalization, it needed to const-eval the same type.
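The cycle has a very simple shape, which this toy reproduction shows (the enum and the in-flight stack are stand-ins for the real query engine, which detects cycles the same general way): const_eval needs param_env, and param_env, because it normalizes, needs const_eval again.

```rust
// Toy query engine with a stack of in-flight queries, reproducing the
// const_eval -> param_env -> const_eval cycle described above.

#[derive(Clone, Copy, PartialEq, Debug)]
enum Query {
    ConstEval,
    ParamEnv,
}

fn run(query: Query, stack: &mut Vec<Query>) -> Result<(), String> {
    if stack.contains(&query) {
        return Err(format!("cycle detected when computing {:?}", query));
    }
    stack.push(query);
    let result = match query {
        // const_eval needs the param env of the constant...
        Query::ConstEval => run(Query::ParamEnv, stack),
        // ...and param_env normalizes, which const-evals the same constant.
        Query::ParamEnv => run(Query::ConstEval, stack),
    };
    stack.pop();
    result
}

fn main() {
    let err = run(Query::ConstEval, &mut Vec::new()).unwrap_err();
    assert!(err.contains("cycle detected"));
    println!("{}", err);
}
```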

Based on that, I don't think it's going to be possible to get further with 'const only' lazy normalization -- we'll need lazy normalization of associated types as well.

well, it's not exactly what canonicalization does, but that's on purpose

It is similar - however, I think canonicalization replaces all region variables.
For the purposes of the query system, I think we only need to care about ReVar. For example, MyType: MyTrait<'static> is fine

I think early bound regions are fine too - e.g. MyType: MyTrait<'a>.
The only problem is ReVar, because it references state that's encoded in the InferCtxt.
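To spell out the classification (using assumed names that mirror rustc's RegionKind variants): only ReVar is an index into a particular InferCtxt's region table, so only it would dangle if the value crossed into a fresh InferCtxt.

```rust
// Toy classification of which region kinds carry InferCtxt-local state.

#[derive(Debug)]
enum Region {
    ReStatic,                   // MyTrait<'static>: fine as-is
    ReEarlyBound(&'static str), // MyTrait<'a>: fine as-is
    ReVar(u32),                 // index into one InferCtxt's region table
}

// Would this region dangle if the value crossed a query boundary into a
// freshly-created InferCtxt?
fn references_infcx_state(r: &Region) -> bool {
    matches!(r, Region::ReVar(_))
}

fn main() {
    assert!(!references_infcx_state(&Region::ReStatic));
    assert!(!references_infcx_state(&Region::ReEarlyBound("'a")));
    assert!(references_infcx_state(&Region::ReVar(0)));
    println!("ok");
}
```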

I'm working under the assumption that in general, queries might want to do things with the ParamEnv other than passing it to a SelectionCtxt.
That is, we want to provide them with as much information as possible.

Would it make sense to introduce a weaker kind of canonicalization, which only deals with ReVar? We could change all queries taking ParamEnv to take a WeakCanonicalized<ParamEnv>, and remove the Key impl for ParamEnv

WeakCanonicalized<T> would have one method: instantiate(infcx: &InferCtxt), which would return a T with freshly created inference variables from the InferCtxt
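Here's a sketch of that proposal under toy types (every name here is hypothetical, including the InferCtxt stand-in): weak canonicalization numbers the ReVars it replaces in order of first appearance, and instantiate mints that many fresh variables in the destination context.

```rust
// Toy WeakCanonicalized: replaces only ReVar, leaving 'static etc. alone.

#[derive(Clone, Debug, PartialEq)]
enum Region {
    ReStatic,
    ReVar(u32),   // inference variable: InferCtxt-local
    ReBound(u32), // placeholder introduced by weak canonicalization
}

struct WeakCanonicalized {
    value: Vec<Region>,
    num_vars: u32,
}

// Number each distinct ReVar in order of first appearance.
fn weak_canonicalize(value: &[Region]) -> WeakCanonicalized {
    let mut seen: Vec<u32> = Vec::new();
    let value = value
        .iter()
        .map(|r| match r {
            Region::ReVar(v) => {
                let idx = seen.iter().position(|s| s == v).unwrap_or_else(|| {
                    seen.push(*v);
                    seen.len() - 1
                });
                Region::ReBound(idx as u32)
            }
            other => other.clone(),
        })
        .collect();
    WeakCanonicalized { value, num_vars: seen.len() as u32 }
}

// Stand-in for an InferCtxt that can mint fresh region variables.
struct InferCtxt {
    next_var: u32,
}

impl WeakCanonicalized {
    // Re-create the replaced variables as fresh ones in `infcx`.
    fn instantiate(&self, infcx: &mut InferCtxt) -> Vec<Region> {
        let base = infcx.next_var;
        infcx.next_var += self.num_vars;
        self.value
            .iter()
            .map(|r| match r {
                Region::ReBound(i) => Region::ReVar(base + i),
                other => other.clone(),
            })
            .collect()
    }
}

fn main() {
    // Two occurrences of the same ReVar collapse to one bound var...
    let canon = weak_canonicalize(&[Region::ReVar(7), Region::ReStatic, Region::ReVar(7)]);
    assert_eq!(canon.num_vars, 1);
    // ...and instantiation re-links them to a single fresh variable.
    let mut infcx = InferCtxt { next_var: 0 };
    let inst = canon.instantiate(&mut infcx);
    assert_eq!(inst, vec![Region::ReVar(0), Region::ReStatic, Region::ReVar(0)]);
    println!("ok");
}
```

The key property the sketch preserves is that equalities between ReVars survive the round trip, while everything else (ReStatic here) passes through untouched.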

> It is similar - however, I think canonicalization replaces all region variables.
> For the purposes of the query system, I think we only need to care about ReVar. For example, MyType: MyTrait<'static> is fine

It's better to replace all of them -- it creates a more canonical result, and they don't impact the result in any way.

> I'm working under the assumption that in general, queries might want to do things with the ParamEnv other than passing it to a SelectionCtxt.
> That is, we want to provide them with as much information as possible.

Well, we'll see, but the current canonicalization system is designed in part to restrict what queries can do -- basically, I want to execute the query once and then be able to re-use the results for all possible lifetimes later