@Benjamin Brittain it's not documented, there's only a test. You need one project.json file which describes the whole web of crates you are compiling

What are your use-cases for project.json?

The problem with it is that I really want to support non-Cargo based projects (mostly to make sure that the abstraction is right), but, as almost everyone uses Cargo, this functionality is not really well tested at all

but yeah, agree that JSON is the less weird option

I just turned around and asked someone who works on Go

"It does not"

If this is machine-generated and nothing more, then alright...
It's just that .json doesn't natively support comments, may be quite verbose, and it's just my gory experience of .json programming (i.e. imitating function call syntax and in fact writing programs with logic in .json files...)

so no, you have to resolve it

interesting. so hypothetically it can handle crates with the same name?

roots are the directories containing rust crates; RA will watch all rust code in each of these. All the crates must be in one of these, I think. I don't know if this affects anything else; RA has a notion of library roots, but I don't know how that maps to this
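For reference, here is a minimal sketch of what such a rust-project.json might look like. The field names (`roots`, `crates`, `root_module`, `deps`, `cfg`) are my reading of the format at this point and may not match it exactly; the paths are made up:

```json
{
  "roots": ["/home/user/my-crate", "/home/user/vendored"],
  "crates": [
    {
      "root_module": "/home/user/my-crate/src/lib.rs",
      "edition": "2018",
      "deps": [{ "crate": 1, "name": "dep" }],
      "cfg": ["feature=\"default\""]
    },
    {
      "root_module": "/home/user/vendored/dep/src/lib.rs",
      "edition": "2018",
      "deps": [],
      "cfg": []
    }
  ]
}
```

Note that dependencies refer to other crates by index into the `crates` array rather than by name, which is presumably why two crates with the same name are not a problem.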

is this just because of the 2k+ crates?

I don't know anything about how this is architected, sorry for the potentially silly questions

IDE sees all kinds of weird incomplete code, so it tends to crash more often than a compiler. Moreover, the cost of an IDE crash is much higher, because it's a long-lived process, not a batch one. For this reason, we embrace the existence of bugs and try to mitigate their consequences

this might just be that I don't have CFGs wired properly then and it thinks a lot of code is perpetually incomplete

how many times is this loop usually called?

I haven't measured, but I'd expect less than a dozen calls for typical use cases. I don't think this relates to the number of crates. Rather, it's more likely that, among all that code, there's a particular code pattern that exposes buggy behavior in our name resolution

that could be helpful

@Benjamin Brittain one thing I would try first is probably just bumping that limit to 10k. Like, my understanding of the code is that this loop should be really really shallow, but it might be the case that my understanding is wrong, and you indeed hit the limit due to sheer size. I think this is unlikely, but it is much cheaper to check than doing actual minimization.

I've tried switching to the pre-release version of nvim 0.5 since it has built-in LSP. It seems to be working well actually

Yeah, totally understandable. This, I think, is actually a problem with LSP adoption in editors other than VS Code.

Microsoft is doing something very far-sighted here. They provide a high-level API in VS Code for plugins for things like completion and goto definition. They maintain the lsp-vscode library, which is, well, a library that binds the VS Code API to LSP processes. For each language, the community maintains a language-specific plugin which uses the library. So the maintenance is distributed: for each language you need to install a separate plugin, which can be pretty high-quality and specialized.

Other editors, in contrast, try to do a single universal thing: an LSP plugin which simultaneously supports all languages and maps LSP concepts directly to the editor's low-level UI elements.

It's about vscode*

however, to recap, my concern is with the words "trusted client"

Input is still validated, right?

I think it means "non-buggy client". I'm not sure if there are any security implications. If you can run vim and run rust-analyzer, even if you trick the latter into doing something bad, it's nothing you couldn't have done in vim already.

I'm not super worried about the security of the LSP engine in my life

So, we validate that requests conform to the protocol and “crash
gracefully” (via assert and stack unwinding) if it is not the case. We can
replace those asserts with error bubbling and reporting, but it doesn’t
really make sense to me. “Typing requests by hand” is a use case where it
actually might make sense to not terminate outright, but this is pretty
niche.

Security-wise, we also assume that everything is trusted. We don’t directly
execute code today, but we, eg, run cargo check, which can run build.rs. At
some point we’ll start executing proc macros, which are also arbitrary
code. Memory-safety wise, rust-analyzer has very little unsafe in general,
except for syntax trees. Syntax trees are 100% cursed crazy unsafe code
internally though, and, while safe interface should be fine, we haven’t
done security oriented testing to make sure that everything is really
as memory safe as we think.

TL;DR it’s probably not a good idea to expose rust-analyzer via an
Internet-visible port.

I disabled cargo check immediately :) but mostly because we don't have cargo

my point is that text editors are increasingly becoming "internet-visible" things

Yeah, that’s true, but we need to support arbitrary code execution for
proc macros, and that restricts the threat model to “everything should be
fine”. If proc macros move to some kind of wasm sandbox, rust-analyzer
will be able to provide good security, as it’ll guarantee no I/O except for
stdin/stdout/stderr and memory safety, but for the time being we are not
explicitly testing for that.

I'll post my config here when I'm all done

every time I open a file now my fans go crazy :laughing:

This seems interesting. I have been wanting to try automatically converting toml -> json but adding a json-ld schema in the conversion process. I wonder how open you are to having a json-ld schema for rust-project.json?

I'd start with just documenting rust-project.json. I don't think it's stable or complex enough to really benefit from a formal schema

Haven't heard about json-ld, but making a json-schema would be beneficial for the user experience. VSCode provides a json schema validation contribution point. By supplying that, the editor will be able to give users IntelliSense in the rust-project.json file similarly to how it does in settings.json
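As a rough illustration of what that could look like: a hypothetical JSON Schema for rust-project.json, which an extension could register via the `jsonValidation` contribution point in its package.json. The field names here are guesses based on this thread, not a definitive description of the format:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "rust-project.json (hypothetical sketch)",
  "type": "object",
  "properties": {
    "roots": {
      "type": "array",
      "items": { "type": "string" }
    },
    "crates": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "root_module": { "type": "string" },
          "edition": { "enum": ["2015", "2018"] },
          "deps": { "type": "array" },
          "cfg": {
            "type": "array",
            "items": { "type": "string" }
          }
        },
        "required": ["root_module"]
      }
    }
  }
}
```

An extension would then point VSCode at it with something like `"contributes": { "jsonValidation": [{ "fileMatch": "rust-project.json", "url": "./schemas/rust-project.schema.json" }] }`.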

Glad I asked first then. Not really familiar with json-schema; what I like about json-ld is the ability to avoid fields like cargo's package.metadata, because it's validating the contents rather than the file as a whole, so you can tell this field belongs to this schema and some other field to another. I don't really see that in the few json-schema examples I've seen. Anyhow, I don't want to press the issue really.