Because indexing is desugared to Index::index or IndexMut::index_mut very early in the compilation process, and Rust does not check whether arbitrary functions are guaranteed to panic, because it can’t know whether it can even compute the function at compile time. Heck, it doesn’t even warn that this print statement is unreachable.

While Rust could conceivably check this, it would either need to be special-cased (many on the Rust team would be against this) or the compiler would need to become significantly more powerful, able to evaluate arbitrary expressions at compile time.

I am just speculating based on the fact that, in the past, team members were against special-casing other parts of the language. I don’t know whether there are Rust team members who want this. It may be fine as a Clippy lint.

Well, from another perspective, it would slow down compilation to improve runtime speed, which makes it sound better. If you do bounds checking at compile time, then you don’t have to do it at runtime, which would be awesome.

So you’re saying the optimizer can sometimes eliminate bounds checks by proving them redundant, but can’t prove that a check will fail and trigger a compile-time error. That actually makes sense. It would be lovely if there were a way to get this information back from the optimizer so we could see compile-time errors more often.

Note that indexing (along with my fork, indexing-str) provides truly sound compile-time-checked indices and ranges (in exchange for some bogus lifetime errors when you get it wrong).

Fixed-size arrays definitely aren’t as useful as they could be in Rust. It comes down to const evaluation, though: as more things become const-evaluable and Rust does more MIR-level inlining, you’ll see more “this array access will always be out of bounds” errors caught.

What you want is dependent typing: in short, a technique for type-checking over values such as numbers and ranges. The reason Rust doesn’t support such a feature is basically that nobody has implemented it yet.

Besides, there’s a feature called “const generics” which enables polymorphism over constants, like array length. The core team has done really hard work on it, and it will (hopefully) be merged into nightly soon after the initial implementation.

Sure, but it’s hard. The compiler could add some extra reasoning to handle this case, but then what about Vec? It’s probably a more commonly used type.

let v = vec![1, 2, 3];
v[10];

but Vec is technically quite different from arrays, so even this simplest case isn’t caught. And what about:

let mut v = vec![];
v.push(1);
v.clear();
v[0];

Now the compiler has to know about all of Vec’s methods! And what about:

let mut v = vec![1, 2, 3];
let cond = true;
if cond { v.clear(); }
v[0];

That’s another case that could be statically known, but it requires the compiler to symbolically execute code, including conditional code (where nested ifs make the number of paths blow up!), and to track the state of simulated values. There are static analysis tools that do these things, but it’s complicated and expensive. And in the end, all it finds is code that is always broken, so you’ll find that bug as soon as you run the code anyway.