This is a bit from the docs for NH Prof, which I am sharing in order to get some peer review.

The "excessive number of rows returned" warning is generated by the profiler when, as the name implies, a query returns a large number of rows. The simplest scenario is that we simply loaded all the rows in a large table, using code like this:

session.CreateCriteria(typeof(Order))
    .List<Order>();

This is a common mistake when you are binding to a UI component (such as a grid) that performs its own paging. This is a problem on several levels:

We tend to want to see only part of the data

We just loaded a whole lot of unnecessary data

We are sending more data over the network

We have a higher memory footprint than we should have

In extreme cases, we may crash as a result of an out-of-memory exception

None of those are good things, and as with unbounded result sets, this can easily be prevented by applying a limit at the database level to the number of rows that we will load.
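For example, a limit can be applied to the Order query above using SetFirstResult and SetMaxResults. This is a sketch; the page size of 25 and the pageIndex variable are illustrative, and would normally come from the UI component:

```csharp
int pageIndex = 0;
int pageSize = 25;

// Load only the page of orders the grid actually displays.
var orders = session.CreateCriteria(typeof(Order))
    .SetFirstResult(pageIndex * pageSize) // skip the previous pages
    .SetMaxResults(pageSize)              // never load more than one page
    .List<Order>();
```

The paging is now done by the database, so only the rows we intend to show ever cross the network.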

But it is not just simple queries without a limit that can cause this issue; another common source of this warning is a Cartesian product when using joins. Let us take a look at this query:
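The original query is not reproduced in this excerpt, but a query of the shape described below might look like this. This is a sketch, assuming Order has OrderLines and Snapshots collections:

```csharp
// Join-fetching two collections in a single query produces a Cartesian
// product: every order line row is joined with every snapshot row of
// the same order.
var orders = session.CreateCriteria(typeof(Order))
    .SetFetchMode("OrderLines", FetchMode.Join)
    .SetFetchMode("Snapshots", FetchMode.Join)
    .List<Order>();
```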

Assuming that we have ten orders, with ten order lines each and five snapshots each, we are going to load 500 rows from the database. Mostly, they will contain duplicate data that we already have, and NHibernate will reduce the duplication to the appropriate object graph.

The problem is that we still loaded too much data, with the same issues as before. Worse, a Cartesian product doesn't tend to stop at 500 rows; it escalates very quickly to a ridiculous number of rows returned for a trivial amount of data that we actually want.

The solution for this issue is to change the way we query the data. Instead of issuing a single query with several joins, we can split it into several queries, each fetching one collection, and send them all to the database in a single batch using Multi Queries.
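A sketch of that approach using NHibernate's MultiCriteria, with the same assumed collection names as above:

```csharp
// Each criteria join-fetches a single collection; the whole batch is
// sent to the database in one round trip.
var results = session.CreateMultiCriteria()
    .Add(session.CreateCriteria(typeof(Order))
        .SetFetchMode("OrderLines", FetchMode.Join))
    .Add(session.CreateCriteria(typeof(Order))
        .SetFetchMode("Snapshots", FetchMode.Join))
    .List();

// results[0] holds the orders with their order lines, results[1] the
// same orders with their snapshots; both resolve to the same entity
// instances in the session, so the full object graph is populated
// without ever producing a Cartesian product.
```

With ten orders, ten lines, and five snapshots each, this loads 100 + 50 rows instead of 500, and the row count now grows additively with each collection rather than multiplicatively.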