I'm not sure if this is expected, but even on relatively small projects (e.g., a single lib.rs with ~200 lines), when I first open a file in vim, the server pretty much instantaneously prints "workspace loaded, 12 rust packages". Then, when I use "go to definition" on anything, the server will jump to 100% CPU for maybe 2-3 seconds before responding.

On larger projects (e.g., rustc) that 2-3 seconds is probably more like 15 seconds, though I haven't measured. Certainly non-instantaneous. This means that my workflow in pretty much any coding session is to open vim, then go to some random line in a file, and "goto definition" and wait. After the first time I've done this, future queries are pretty much instantaneous -- but waiting for the first time is annoying.

I don't know whether I should file a bug. I guess the behavior I would have expected is to provide some way to say "please eagerly compute things in the background" -- most of the time, I'll only run a query after dozens of seconds have passed, so I don't need instantaneous feedback as soon as my editor opens... but I would prefer that I don't have to wait once I do decide to run a query.

One consideration is that in my case, at least, I don't particularly care about battery, CPU, or memory usage, because I'm running rust-analyzer on a remote server (where I also run rustc and cargo), whereas my editor is local (on my laptop). That server has enough CPU cores and so forth that I'm not worried about rust-analyzer doing work up front.

Happy to provide more details -- this is probably long enough already :)

Can we notify the user through progress notifications on the first slow query while everything is 'warming up' so they at least know that something is going on?
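For reference, this is roughly what the LSP work-done progress mechanism looks like on the wire (a sketch based on the LSP spec; the `token` value and `title` here are made up, not what rust-analyzer actually sends):

```json
{
  "method": "$/progress",
  "params": {
    "token": "exampleToken/indexing",
    "value": { "kind": "begin", "title": "indexing", "percentage": 0 }
  }
}
```

The server would follow up with `"kind": "report"` messages as work proceeds and a final `"kind": "end"`, so the editor can show a spinner or percentage during the warm-up.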

We can, but I do feel that there's some low hanging fruit to make it actually faster that we should pick first. Like, an extremely basic thing to do would be to collect profile traces for this case and see if we do something stupid.

Here's a run for one of my projects, haven't investigated this yet

Looked a little bit and found an interesting high-level bit we should optimize. A lot of time is spent collecting the impls. We need to collect all impls eagerly for trait solving, that’s how it works. However, our code for impls is lazy: we process each impl one by one, and, I think, we also look into impl bodies for that.

I think we need to change this to store impl headers (trait and type name, but probably not generics and where clauses) inside modules in crate def map, and only compute impl bodies lazily. Cc @Florian Diebold , this looks like a fun find.
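To illustrate the idea, here is a minimal, hypothetical sketch of the proposed split (the names `ImplHeader` and `ModuleData` are made up for illustration and are not rust-analyzer's actual types): headers are recorded eagerly when the def map is built, bodies are computed only on first access.

```rust
use std::collections::HashMap;

/// Eagerly collected impl "header": just the trait and self type,
/// without generics, where clauses, or the body.
#[derive(Clone, Debug, PartialEq)]
struct ImplHeader {
    trait_name: Option<String>, // None for inherent impls
    self_type: String,
}

#[derive(Default)]
struct ModuleData {
    /// Collected eagerly while building the crate def map;
    /// this is all trait solving needs to enumerate candidate impls.
    impl_headers: Vec<ImplHeader>,
    /// Computed lazily, keyed by the index into `impl_headers`.
    /// Here a body is just a list of method names for the sketch.
    impl_bodies: HashMap<usize, Vec<String>>,
}

impl ModuleData {
    /// Eager pass: record only the header, return its index.
    fn record_impl(&mut self, trait_name: Option<&str>, self_type: &str) -> usize {
        self.impl_headers.push(ImplHeader {
            trait_name: trait_name.map(str::to_owned),
            self_type: self_type.to_owned(),
        });
        self.impl_headers.len() - 1
    }

    /// Lazy pass: compute (here, fake) the body on first request only.
    fn impl_body(&mut self, idx: usize) -> &Vec<String> {
        self.impl_bodies
            .entry(idx)
            .or_insert_with(|| vec!["method_stub".to_owned()])
    }
}

fn main() {
    let mut module = ModuleData::default();
    // Trait solving only touches headers, so startup stays cheap:
    let idx = module.record_impl(Some("Display"), "MyType");
    assert!(module.impl_bodies.is_empty()); // no body computed yet
    // A query that actually needs the body triggers the lazy work:
    let body = module.impl_body(idx);
    println!("{} methods", body.len());
}
```

The point of the design is that the eager pass scales with the number of impls, not with the size of their bodies, which is what should make the first "go to definition" cheaper.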

@matklad in theory I can try to run some experiments or so if it would be helpful

Likewise, I'm running coc with vim 8.

happy to do some debugging

Status update here: in the recent update, we forcefully initiate analysis of opened files after the workspace is loaded. It's not fully correct logic, but it should help. In particular, one problem is that, if you modify a file during initial analysis, it won't be rescheduled.