The standard practice of generating an entire HTML document and then sending that to the web browser can be cumbersome for large data sets:

The generation of the entire document can be time consuming

Transmitting a large document to the browser, even compressed, can feel slow to users who are accustomed to quick responses from web servers.

Rendering a large document can be a performance bottleneck for a web browser. With increasingly complex pages, the largest bottleneck is often the web browser. Rendering a large table, for example, can be relatively slow.

Admittedly, if a data set is too large for a standard web page, it is probably larger than any user would want to see, and the user experience is often improved by simply making the page smaller. For those rare times when a very large web page is required, progressive loading attempts to solve these problems with specific steps:

The web server generates and returns the "outer" page quickly, leaving out the large data set.

Using AJAX, the web browser requests a subset of the data it still needs to display.

The web server, on this initial request, gathers the complete or partial data set and returns the requested subset to the client. If the subset includes the last portion of the data, the response includes a flag to inform the client it has all of the data.

The web browser renders the subset of data. If the included flag indicates there is more data to be displayed, it continues to make requests for more subsets of data until all are retrieved.
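The subset response described in the steps above can be sketched in plain JavaScript. The field names `rows` and `done` are assumptions for illustration; the article's actual PHP and JavaScript are not reproduced here.

```javascript
// Build one subset response the way step 3 describes: the requested
// slice of the data plus a flag marking the final portion.
function makeSubsetResponse(allRows, offset, chunkSize) {
  const rows = allRows.slice(offset, offset + chunkSize);
  return {
    rows,                                          // the requested subset
    done: offset + rows.length >= allRows.length,  // true on the last portion
  };
}
```

The client only needs to inspect `done` to decide whether to request another subset, which keeps the protocol to a single round-trip shape.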

This example includes one web page, containing one table and a drop-down to let users filter the data. Here we use a combination of PHP and JavaScript with the mootools JS library.

There are a few risks and performance considerations with this particular implementation. For example, the JavaScript repeatedly requests data until it's told there is no more, or until there is no response (e.g. from a server error). The risk is mitigated by letting the server decide how much data to return: the larger each returned subset, the fewer requests will need to be made.

This is merely an example for demonstration purposes. Each implementation should be tailored to its specific scenario. Also, there are various ways the JavaScript can be written and organized; this example uses one style which is easy to read and follow.

The main web page contains an HTML table empty of the data to display; initially it shows only a visible "Loading..." message and a hidden "No data" message. A select element lets the user filter the data, reloading the table. The table and messages should of course be styled appropriately, but the CSS is not useful for this demonstration.

The JavaScript initiates the data load as soon as the page is completely rendered. It continues to request more data until the server indicates it's complete. It also stops if nothing is returned, just in case there is a server error. This implementation uses the mootools JS library, but jQuery or another library could just as easily be used.
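The client-side loop can be sketched independently of any library. Here `requestChunk` is a synchronous mock standing in for the asynchronous mootools `Request.JSON` call the real page would make; the data, chunk size, and field names are illustrative, not taken from the article's code.

```javascript
// Mock transport standing in for a mootools Request.JSON call against
// the server; in the real page this would be asynchronous.
function requestChunk(offset) {
  const data = ["ant", "bee", "cat", "dog", "eel"];
  const size = 2; // the server, not the client, decides the chunk size
  return { rows: data.slice(offset, offset + size),
           done: offset + size >= data.length };
}

// Request loop: runs once the page has rendered, stops when the server
// sets the done flag, and also bails out if a response comes back empty
// (e.g. after a server error), so it can never loop forever.
function loadProducts(renderRow) {
  let offset = 0;
  for (;;) {
    const res = requestChunk(offset);
    if (!res || res.rows.length === 0) break; // nothing returned: stop
    res.rows.forEach(renderRow);              // append each row to the table
    offset += res.rows.length;
    if (res.done) break;                      // all data retrieved
  }
  return offset; // total rows loaded
}
```

With a real asynchronous transport, the same logic becomes a callback chain: each response handler either appends rows and issues the next request, or stops.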

The PHP code responds at the /products URL. It calculates all of the data on the first request, caches the results, and then returns the appropriate subset of data from the cache on subsequent requests. If using a relational database, this method helps avoid running multiple queries over the same data set in a very short span of time; to get subsets of data otherwise, the database would have to repeatedly scan the same rows to determine which to return. If the query is complicated or expensive, caching the results to disk can be beneficial. Of course, this method should only be used in the appropriate situations.
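The article's PHP is not reproduced here, but the compute-once, slice-many idea can be sketched in JavaScript. The in-memory `cache` variable is a stand-in for the PHP version's disk or session cache, and `computeAllProducts` is a hypothetical placeholder for the expensive query.

```javascript
let cache = null; // stands in for the PHP version's disk/session cache

// Placeholder for the expensive work (e.g. a complicated database
// query) that should only run once, on the first request.
function computeAllProducts() {
  return Array.from({ length: 7 }, (_, i) => ({ id: i, name: `product ${i}` }));
}

// Handler for each /products request: compute and cache on the first
// call, then serve slices straight from the cache afterwards.
function handleProductsRequest(offset, chunkSize) {
  if (cache === null) {
    cache = computeAllProducts(); // first request: do the heavy work once
  }
  const rows = cache.slice(offset, offset + chunkSize);
  return { rows, done: offset + rows.length >= cache.length };
}
```

The trade-off is staleness: a cached result can drift from the live data, so this fits data sets that change slowly relative to how long a user spends on the page.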

Notice a few precautions on the server: the server decides how many rows to return, and it also keeps track of the offset of the previous request. This limits how much anyone tampering with the client code can manipulate the responses.
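One way to sketch that precaution, again in JavaScript rather than the article's PHP: the `session` object below is a hypothetical stand-in for PHP session state, where the server itself remembers the offset.

```javascript
// Server-side session state: the server, not the client, remembers how
// far through the data set this client has already read.
const session = { offset: 0 };
const dataSet = ["r0", "r1", "r2", "r3", "r4"];
const chunkSize = 2; // fixed by the server

// The request carries no offset at all; a tampered client can only ask
// "next chunk, please", so it cannot skip around, replay ranges, or
// change chunk sizes.
function nextChunk() {
  const rows = dataSet.slice(session.offset, session.offset + chunkSize);
  session.offset += rows.length;
  return { rows, done: session.offset >= dataSet.length };
}
```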