Buffered rendering of grids, commonly known as "infinite scrolling", has improved significantly in 4.1. The major improvements are in "prefetch buffer" management. The prefetch buffer used to be just a MixedCollection of records keyed by ordinal position in the global dataset, which greatly complicated cache lookup and eviction. It has now become a true page cache which maintains a set of page-sized blocks of records, each keyed by page number.

This means that fetching a range of records from the cache is as fast as possible: it is a matter of calculating the page range which encompasses the requested record range and extracting the records. Where the requested range does not coincide with the beginning or end of a page, Array.slice is used to extract just the required records from that page.
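To make that concrete, here is a minimal plain-JavaScript sketch of a page-keyed cache and range extraction. This is not the actual Ext.data.Store internals; the 1-based page numbering and the getRange name are just modelled on the Ext API, and it assumes the needed pages are already cached:

```javascript
// One page-sized block of records per page number (1-based, as in Ext JS).
var pageSize = 25;

function getRange(pageMap, start, end) {
    var result = [];
    var startPage = Math.floor(start / pageSize) + 1;
    var endPage = Math.floor(end / pageSize) + 1;
    for (var page = startPage; page <= endPage; page++) {
        var records = pageMap[page];        // assumed to be cached
        var base = (page - 1) * pageSize;   // global index of records[0]
        // Slice only when the requested range cuts into the page;
        // whole pages are concatenated as-is.
        var from = Math.max(start - base, 0);
        var to = Math.min(end - base + 1, pageSize);
        result = result.concat(from === 0 && to === pageSize
            ? records
            : records.slice(from, to));
    }
    return result;
}
```

Because lookup is by page number rather than by scanning a flat collection, both extraction and eviction become simple map operations.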

Simpler API
You no longer have to know about the methods which perform all this magic. In 4.1, you can use the regular Store API.

autoLoad does what autoLoad has always done: it starts at page 1! The 4.0 way of initializing using the guaranteeRange method still works, but it should be replaced with autoLoad or the new loadPage method. Calling guaranteeRange disables certain internal optimizations in order to maintain compatibility.
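A sketch of such a buffered store using the regular API (the model name and proxy URL are placeholders; the configs shown are standard Ext JS 4.1 store options):

```javascript
var store = Ext.create('Ext.data.Store', {
    model: 'MyApp.model.Row',       // placeholder model name
    buffered: true,                 // turn on the page cache
    pageSize: 100,
    autoLoad: true,                 // starts at page 1, as described above
    proxy: {
        type: 'ajax',
        url: 'data.json',           // placeholder URL
        reader: { type: 'json', root: 'rows', totalProperty: 'total' }
    }
});

// Alternatively, omit autoLoad and start explicitly:
// store.loadPage(1);
```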

How it works
The grid now calculates how large the rendered table should be using the configuration of the PagingScroller which is the object that monitors scroll position. These are as follows when scrolling downwards:

* trailingBufferZone: The number of records to keep rendered above the visible area.
* leadingBufferZone: The number of records to keep rendered below the visible area.
* numFromEdge: How close the edge of the table may come to the visible area before the table is refreshed further down.

The rendered table needs to contain enough rows to fill the height of the view, plus the trailing buffer size, plus the leading buffer size, plus (numFromEdge * 2) to create some scrollable overflow.
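As a sketch, that arithmetic can be written out as follows (the function and parameter names are illustrative, not part of the Ext API; only the formula comes from the text above):

```javascript
// Rows the rendered table must contain: enough to fill the view,
// both buffer zones, and numFromEdge rows of margin at each end.
function renderedRowCount(visibleRows, trailingBufferZone, leadingBufferZone, numFromEdge) {
    return visibleRows + trailingBufferZone + leadingBufferZone + numFromEdge * 2;
}
```

For example, a view showing 20 rows with a trailing zone of 10, a leading zone of 20 and numFromEdge of 5 needs a 60-row table, 40 rows of which are scrollable overflow.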

As the resulting table scrolls, it is monitored, and when the end of the table comes within numFromEdge rows of entering the visible area, the table is re-rendered using a block of data from further down the dataset. It is then positioned so that the visual position of the rows does not change.

In the best case, the rows required for that re-render are already available in the page cache, and the operation is instantaneous and visually undetectable.

To configure these values, configure your grid with a verticalScroller:
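One plausible configuration matching the numbers discussed below (the specific values are illustrative; trailingBufferZone 10 + leadingBufferZone 20 + numFromEdge * 2 gives 40 rows of overflow):

```javascript
Ext.create('Ext.grid.Panel', {
    store: store,               // a buffered store, as above
    columns: columns,           // defined elsewhere
    verticalScroller: {
        numFromEdge: 5,         // re-render when within 5 rows of the edge
        trailingBufferZone: 10, // rows kept rendered above the view
        leadingBufferZone: 20   // rows kept rendered below the view
    }
    // ...
});
```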

This means there will be 40 rows overflowing the visible area of the grid to provide smooth scrolling, and re-rendering will kick in as soon as the edge of the table comes within 5 rows of being visible.

Keeping the pipeline full
Keeping the page cache primed to be ready with data for future scrolling is the job of the Store. The Store also has a trailingBufferZone and a leadingBufferZone.

Whenever rows are requested for a table re-render, the Store returns the requested rows and then ensures that the range encompassed by those two zones around the requested data is in the cache, requesting any missing pages from the server.

Those two zones have quite large default values, but they can be tuned by the developer to keep fewer or more pages in the pipeline.
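A sketch of tuning those zones on the store (the zone sizes are in records, not pages; the values here are illustrative, not the defaults):

```javascript
var store = Ext.create('Ext.data.Store', {
    buffered: true,
    pageSize: 100,
    trailingBufferZone: 100,   // keep roughly one page cached behind the requested data
    leadingBufferZone: 200     // keep roughly two pages cached ahead of it
    // ...
});
```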

Cache Misses
If you "teleport" way down into the dataset, to a part for which there are definitely no cached pages, there will be a load mask and a delay because data must be requested from the server. However, this case has been optimized too.

The page which contains the range required to render the visible area is requested first, and the table is re-rendered as soon as it arrives. The surrounding pages covering the trailingBufferZone and leadingBufferZone are requested after the data that the UI needs immediately.

Pruning the cache
By default, the cache has a calculated maximum size, beyond which it will discard the least recently used pages. This size is the number of pages spanned by the scroller's leadingBufferZone, plus the visible size, plus the trailingBufferZone, plus the Store's configured purgePageCount. Increasing the purgePageCount means that once a page has been accessed, you are much more likely to be able to return to it later without triggering a server request.
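The pruning rule can be sketched as a simple least-recently-used page cache. This is illustrative plain JavaScript, not Ext's actual implementation; maxSize stands for the calculated limit described above:

```javascript
// A toy LRU page cache. maxSize of 0 means "never prune",
// mirroring the purgePageCount: 0 behaviour described below.
function LruPageCache(maxSize) {
    this.maxSize = maxSize;
    this.pages = {};    // pageNumber -> records
    this.order = [];    // least recently used first
}

// Move a page to the most-recently-used end of the order list.
LruPageCache.prototype.touch = function (pageNumber) {
    var i = this.order.indexOf(pageNumber);
    if (i !== -1) this.order.splice(i, 1);
    this.order.push(pageNumber);
};

LruPageCache.prototype.add = function (pageNumber, records) {
    this.pages[pageNumber] = records;
    this.touch(pageNumber);
    // Evict least recently used pages once over the limit.
    while (this.maxSize > 0 && this.order.length > this.maxSize) {
        delete this.pages[this.order.shift()];
    }
};

LruPageCache.prototype.get = function (pageNumber) {
    if (pageNumber in this.pages) this.touch(pageNumber);
    return this.pages[pageNumber];
};
```

Reading a page counts as "use", so pages you scroll back to stay cached while pages you left long ago are the first to go.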

A purgePageCount value of zero means that the cache may grow without being pruned, eventually growing to contain the whole dataset. This can actually be a very useful option when the dataset is not ridiculously large. Remember that humans cannot comprehend too much data, so grids of many thousands of rows are not actually that useful in themselves; they probably mean the user got their filter conditions wrong and will need to re-query.

Pull the whole dataset client side!
One option if the dataset is not astronomical is to cache the entire dataset in the page map.

You can experiment with this option in the "Infinite Grid Tuner" which is in your SDK examples directory under examples/grid/infinite-scroll-grid-tuner.html.

If you set the "Store leadingBufferZone" to 50,000 and the purgePageCount to zero, this will have the desired effect.

The leadingBufferZone determines how far ahead the Store tries to keep the pipeline full. 50,000 means keep it very full!

A purgePageCount of zero means that the page map may grow without limit.
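Putting those two settings together, the "cache everything" experiment looks something like this (the pageSize and other details are illustrative; treat this as an experiment with a modest dataset, not a general recommendation):

```javascript
var store = Ext.create('Ext.data.Store', {
    buffered: true,
    pageSize: 100,
    leadingBufferZone: 50000,   // keep the pipeline very full
    purgePageCount: 0,          // never prune: the cache may grow to hold the whole dataset
    autoLoad: true
    // ...
});
```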

So when you then kick off the "Reload", you can see the first, visually needed page being requested, and then rendered.

Then you can see the Store diligently trying to fulfil that huge leadingBufferZone. Pretty soon, the whole dataset will be cached, and data access anywhere in the scrollable area will be instant.

Compatibility with 4.0
There are new APIs in 4.1 related to these changes. Unlike in previous beta releases, the guaranteeRange method should now work. Even so, as noted above, its use is discouraged because (for compatibility) calling it also specifies the size of the rendered table. Since the minimum size is actually dynamic, fixing it this way can be hazardous. The new "zones" configurations are designed to let you adjust how much rendering you want beyond the minimum.

Effect on DOM?

In the event of getting the entire record set down on the client and keeping purgePageCount at 0, what will that do to the DOM? Will it still only render a small portion of the records, clearing out the ones that are scrolled away from? Or will it just continually add more records to the bottom, potentially getting very large?

This is such great news -- really made my day -- thanks dongryphon and Animal! There seemed to be a lot of uncertainty and issues with respect to grids and virtual/infinite scrolling since 4.0 and throughout the 4.1 betas, but this should certainly ease many of the concerns. Don, just for clarification -- will these changes appear in the RCs, or just the final GA release?

Filtering problem

Hello

There is still a small filtering problem.
I need remote filtering for buffered grid data, so I set new extraParams on the proxy, then clear the pageMap and load page 1.
When the filtered result set is at least as long as the page size (so the scroller is needed), the scroller is adjusted to the new result set size. But when the result set is smaller, even a single record, the scroller keeps the size of the unfiltered dataset.

The reason is in the scroller: when the result set size is smaller than the page size, the scroller is disabled no matter what it was before that test; see Ext.grid.PagingScroller.onViewRefresh.

scroller bouncing UP while scrolling

Has anyone noticed in the infinite grid that the scroller moves back up as you mousewheel down? It seems to happen as soon as the grid prefetches more data. I also CANNOT look at the last records in the grid; the scroller doesn't like them or something.