Separate out the concepts of "search infrastructure" (how documents and posting lists are stored in terms of bits on disk & RAM) and "ranking functions" (how queries are matched to documents).

The former is basically a solved problem. Lucene/ElasticSearch and Google use essentially the same techniques, and you can read about them in Managing Gigabytes [1], which was first published over two decades ago. Google may be a generation or so ahead - they were working on a new system to take full advantage of SSDs (which turn out to be very good for search, since it's a very read-heavy workload) when I left, and I don't really know the details of it. But ElasticSearch is a perfectly adequate retrieval system: it does basically the same stuff Google's systems did circa 2013, and even does some things better.
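To make the "bits on disk" half concrete, here's a toy version of a classic posting-list compression trick of the sort that book covers: store each term's sorted doc IDs as gaps and compress the gaps. This sketch uses simple variable-byte codes for readability; Managing Gigabytes itself leans on bit-level codes like Golomb/Elias, but the idea is the same. All names here are illustrative, not any real engine's API:

```python
def vbyte_encode(nums):
    """Encode positive ints, 7 bits per byte; a set high bit marks the last byte."""
    out = bytearray()
    for n in nums:
        chunk = []
        while True:
            chunk.append(n & 0x7F)
            n >>= 7
            if n == 0:
                break
        chunk[0] |= 0x80              # flag the lowest-order (final) byte
        out.extend(reversed(chunk))   # emit most-significant byte first
    return bytes(out)

def vbyte_decode(data):
    nums, n = [], 0
    for b in data:
        if b & 0x80:                  # terminating byte: finish this number
            nums.append((n << 7) | (b & 0x7F))
            n = 0
        else:
            n = (n << 7) | b
    return nums

def compress_postings(doc_ids):
    """Sorted doc IDs -> gaps -> variable-byte blob. Gaps are small, so they pack tightly."""
    gaps = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    return vbyte_encode(gaps)

def decompress_postings(data):
    doc_ids, total = [], 0
    for g in vbyte_decode(data):
        total += g                    # prefix-sum the gaps back into doc IDs
        doc_ids.append(total)
    return doc_ids

postings = [3, 7, 11, 120, 121, 100000]
blob = compress_postings(postings)
restored = decompress_postings(blob)
```

Dense posting lists have tiny gaps, so most gaps fit in one byte instead of a 4- or 8-byte doc ID, which is where the disk/RAM savings come from.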

The really interesting work in search is in ranking functions, and this is where nobody comes close to Google. Some of this, as other commenters note, is because Google has more data than anyone else. Some of it is simply that more man-hours have been poured into it. IMHO, it's pretty doubtful that an open-source project could attract that sort of focused knowledge-work (trust me; it's laborious) when Google will pay half a mil per year for skilled information-retrieval Ph.D.s.
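For readers wondering what a "ranking function" even looks like at the open-source baseline: classical lexical scorers like BM25 (Lucene's default similarity since 6.0) are where everyone starts; Google's actual stack is of course vastly more than this. A toy BM25, for illustration only (the tokenized docs and parameter defaults are just the textbook ones):

```python
import math

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """docs: list of token lists. Returns one BM25 score per document."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in set(query_terms)}
    scores = []
    for d in docs:
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            tf = d.count(t)
            # term-frequency saturation, normalized by document length
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["search", "engine", "ranking"],
        ["cooking", "recipes"],
        ["search", "search", "infrastructure"]]
scores = bm25_scores(["search", "ranking"], docs)
```

The gap the parent is describing is everything layered on top of a scorer like this: link analysis, click and session data, query understanding, and years of hand-tuned and learned signals.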

That's a bit of a stretch :) The high-level architecture is quite mature and stable, but there's still a lot of research, in both academia and industry, on the data structures used to represent indexes, on query execution (see all the work on top-k retrieval), and on distributed search systems (for example, query-dependent load balancing and novel sharding methods).

It's true that different index codecs are being designed with different tradeoffs (index size vs. posting-list access time and cost), but the work is very incremental IMHO; there are no huge advances to speak of. Also, can you talk about the top-k challenges you mentioned? Are priority queues not optimal enough?
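FWIW, my understanding is that the heap itself is fine - maintaining top-k over n candidates is only O(n log k) - and the top-k research (WAND, MaxScore, Block-Max WAND) is about using per-term score upper bounds to avoid fully scoring most documents at all. A toy sketch of that pruning idea, with a made-up scoring function and bound (not any real algorithm's interface):

```python
import heapq

def top_k(candidates, k, upper_bound, score):
    """candidates: doc ids; upper_bound(d): cheap optimistic score; score(d): real score."""
    heap = []                                 # min-heap of (score, doc), size <= k
    skipped = 0
    for d in candidates:
        # once the heap is full, its minimum is the entry threshold
        threshold = heap[0][0] if len(heap) == k else float("-inf")
        if upper_bound(d) <= threshold:       # cannot make the top-k: skip full scoring
            skipped += 1
            continue
        s = score(d)
        if len(heap) < k:
            heapq.heappush(heap, (s, d))
        elif s > threshold:
            heapq.heapreplace(heap, (s, d))   # evict the current minimum
    return sorted(heap, reverse=True), skipped

# toy workload: 1000 docs, "exact" score is d % 100, bound is one above it
docs = list(range(1000))
top, skipped = top_k(docs, 3, lambda d: d % 100 + 1, lambda d: d % 100)
```

The real algorithms get their bounds from per-term (or per-block) maximum impact scores stored in the index, and the wins come from skipping whole runs of postings, not from a fancier queue.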