Introduce a more complete version of context readahead, which is a full-fledged readahead algorithm by itself. It replaces some of the existing cases.

- oversize read
	no behavior change; except in thrashed mode, async_size will be 0
- random read
	no behavior change; implies some different internal handling
	The random read will now be recorded in file_ra_state, which means
	that in an intermixed sequential+random pattern, the sequential
	part's state will be flushed by random ones, and hence will be
	serviced by the context readahead instead of the stateful one.
	It also means that the first readahead for a sequential read in
	the middle of a file will be started by the stateful one, instead
	of by the sequential cache miss path.
- sequential cache miss
	better
	When walking out of a cached page segment, the readahead size will
	be fully restored immediately instead of ramping up from the
	initial size.
- hit readahead marker without valid state
	better in rare cases; costs more radix tree lookups, but won't be
	a problem with an optimized radix_tree_prev_hole().
	The added radix tree scan for history pages is used to calculate
	the thrashing safe readahead size and the adaptive async size.

The algorithm first looks ahead to find the start point of the next read-ahead, then looks backward in the page cache to get an estimation of the thrashing-threshold.

It is able to automatically adapt to the thrashing threshold in a smooth workload. The estimation rests on a simple observation: pages are evicted from the inactive_list after roughly thrashing-threshold further page reads, so any history pages still resident must have been read within that window.

So the count of continuous history pages left in the inactive_list is always a lower estimate of the true thrashing-threshold. Given a stable workload, the readahead size will keep ramping up and then stabilize in the range

(thrashing_threshold/2, thrashing_threshold)

This is good because it is in fact bad to always reach the thrashing_threshold. That would not only be more susceptible to fluctuations, but would also impose eviction pressure on the cached pages.