General

Using the NHibernate Profiler is easy. First, we need to make the application that we profile aware of the profiler. Then, just start the profiler.

Preparing an application to be profiled

Add a reference to the HibernatingRhinos.Profiler.Appender.dll assembly, located in the downloadable zip. In the application startup (Application_Start in web applications, Program.Main in Windows / console applications, or the App constructor for WPF
applications), make the following call:
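A minimal sketch of that call (assuming the appender's standard namespace; verify against your profiler version):

```csharp
// Call this once, at application startup, before any NHibernate work is done.
// The class lives in the HibernatingRhinos.Profiler.Appender assembly.
HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();
```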

Getting NHibernate statistics

If you want to view the NHibernate internal statistics, you need to modify your NHibernate configuration and set generate_statistics to true. This is how you do it using the standard NHibernate configuration mechanism:
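With the standard XML configuration, this is a single property in the session-factory section:

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- expose NHibernate's internal statistics to the profiler -->
    <property name="generate_statistics">true</property>
  </session-factory>
</hibernate-configuration>
```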

One of the DLLs that comes with the profiler is HibernatingRhinos.Profiler.Integration. This DLL gives you programmatic access to the profiler object model. For example, you can use it inside your unit tests to assert on the behavior of your code.

This approach should make it simple to integrate into your CI process. The XML output gives you programmatic access to the report, while the HTML version is human readable. You can generate both XML and HTML reports by specifying the ReportFormat option twice.

One thing to be aware of: the report file is written asynchronously, so the shutdown command may return before the file has been fully written. If you need to process the file as part of your build process, you need to wait until the profiler process has completed. Using PowerShell, this is done like this:
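A sketch of the wait in PowerShell (the process name NHProf is an assumption here; adjust it to match the actual profiler executable):

```powershell
# Wait for the profiler process to exit, so the report file is fully written
Get-Process NHProf -ErrorAction SilentlyContinue | Wait-Process
```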

Licensing

Please note that from a licensing perspective, the CI mode is the same as the normal GUI mode. On one hand, this means that you don't need to do anything if you already have the profiler and want to run the CI build on your machine. On the other hand, if you want to run it on a CI machine, you need an additional license for that.

Usage

NHibernate Profiler can aggregate related sessions under the same scope. By default, in web applications, we aggregate sessions that are opened in the same web page under the same scope. But we also provide an API that you can use to aggregate related sessions in your application using a different strategy.

You can use:

HibernatingRhinos.Profiler.Appender.ProfilerIntegration.StartScope();

in order to start a new scope, and dispose the returned object in order to end the scope. You can also give the scope a name using:

ProfilerIntegration.StartScope("Task #" + taskId);

In addition, you can override the ProfilerIntegration.GetCurrentScopeName() method in order to set the current scope using your application specific details:

While the easiest way to prepare the application to be profiled is to just call NHibernateProfiler.Initialize(), this is often not a desirable way of doing things, since adding/removing the profiler requires changing the application code.

Therefore, we also made it possible to configure the application without this explicit initialization approach. The method of doing so is:

Create a log4net.config file with the following content to log to a live instance of the NHibernate Profiler:
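A sketch of such a log4net.config (the appender type and the TCP port shown here are assumptions based on the profiler's usual defaults; verify them against your profiler version):

```xml
<log4net>
  <appender name="NHProfAppender"
            type="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfAppender, HibernatingRhinos.Profiler.Appender">
    <!-- address and port the live profiler instance listens on (assumed default) -->
    <sink value="tcp://127.0.0.1:22897" />
  </appender>
  <logger name="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfAppender.Setup">
    <appender-ref ref="NHProfAppender" />
    <level value="DEBUG" />
  </logger>
</log4net>
```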

You'll want to use this feature when you have a massive number of operations that you don't need to profile. A typical scenario is building a huge database as part of the test process, which can be time consuming to profile.
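Assuming the appender exposes the usual ignore/resume pair, the expensive setup can be wrapped like this (a sketch; the method names and the setup routine are assumptions, so check them against your appender version):

```csharp
// Suspend profiling while building the huge test database
ProfilerIntegration.IgnoreAll();

BuildHugeTestDatabase(); // hypothetical, time-consuming setup we don't need to profile

// Resume profiling for the code we actually care about
ProfilerIntegration.ResumeProfiling();
```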

Sometimes you want to profile your production application. It can be a website running on a remote machine or a service application. Using the production profiling feature of NHibernate Profiler, you can connect to your production application using Options > Remote Servers:

Once you’re connected to the remote application, the profiler will start showing data from the remote application.

In order to allow production profiling, your application should make the following call at the very beginning of its startup:
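A sketch of that call, matching the port and password described below (the method name InitializeForProduction is an assumption; verify it against your appender version):

```csharp
// Start listening on port 9090; the second argument is the access password
HibernatingRhinos.Profiler.Appender.NHibernate
    .NHibernateProfiler.InitializeForProduction(9090, "a-strong-password");
```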

The above method will make your application start listening on port 9090 for incoming connections. The second parameter we pass to this method is a password, which is used to block unauthorized access to your application. Make sure it is strong enough to keep your application secure.

The traffic between the profiler and your application uses SslStream, so it is encrypted and secure.

Troubleshooting

In order to diagnose issues in your production profiling environment you can set the following value:

ProfilerInfrastructure.ThrowOnConnectionErrors = true;

Common errors are missing permissions in the running service. Setting this value to true will cause the error to be thrown in your application, so you can diagnose what it is.

While using the profiler you may be prompted by your firewall to allow the profiler to continue working.

The profiler works by listening on a TCP socket for events from Hibernate/NHibernate. In order to do that, it needs to open a listening socket, an action that sometimes triggers an alert in firewalls.

Denying permission to listen on a socket will disable the profiler's ability to do live profiling, so it is strongly recommended that you allow it.

The profiler also checks for an updated version at each startup. In the case of a profiler error, the profiler will ask you whether it is allowed to submit the error details so the problem can be fixed.

In addition to that, the profiler will periodically report (anonymously) which features are being used in the application, which allows us to focus on commonly used parts of the product.

The Problem

When you are using SQLite in conjunction with guids, NHibernate Profiler will show formatted queries that will not find any data in the database, even if the data actually exists.

The actual reason is related to the way SQLite parses guids. Since it isn't actually aware of guids, SQLite treats them as strings, and that obviously fails when matching against the actual guid value.

The Solution

The SQLite ADO.NET provider has a workaround for that. You can use the following connection string parameter to do so:

"Data Source=database.sqlite;Version=3;New=True;BinaryGuid=False"

You can specify this parameter when using the Show Results feature and when using a SQLite browser to inspect the actual database content. That will ensure that the SQL that NH Prof generates will be able to find the actual results in the database.

Faq

If you bought the licenses yourself, you may use them wherever you want, as long as only you are using them.

If your company bought the license, then you would require a separate license if you want to use it at home. Per seat means the number of people in an organization using it, not the number of machines it is used on.

Note that there is a corollary to this: a single machine used by two developers, both of them using NH Prof, requires two licenses, one for each user of the profiler.

The profiler shows the session but without any statements

If you run a program that exits very quickly, you may see the session in the profiler without any statements. This is because the profiler didn't get a chance to log the statements before your application exited. To solve this, you can force the profiler to flush all the statements by adding the following line at the end of your code:
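A sketch of such a flush at the end of Main (assuming the appender exposes a Shutdown method that flushes pending events; verify the name against your appender version):

```csharp
public static void Main(string[] args)
{
    NHibernateProfiler.Initialize();

    RunTheActualProgram(); // hypothetical application code

    // Flush any pending events to the profiler before the process exits
    NHibernateProfiler.Shutdown();
}
```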

Since version 3.0, NHibernate Profiler has an option called Detect Connection String. When it is turned on, the profiler will add the connection string of the latest connection, if it does not already exist in the list of connections, and will set it as the default connection.

Since the connection string can contain sensitive information that you may not want to expose, we provide an option to avoid passing the connection string when HibernatingRhinos.Profiler.Appender is initialized:

From time to time, while using NHibernate, you may also use additional methods to query the database. The profiler provides hooks that allow you to integrate those external querying options into the profiler as well.

You can do that using the following code:

HibernatingRhinos.Profiler.Appender.CustomerQueryReporting.ReportQuery(
    sessionId: Guid.NewGuid().ToString(), // create a new session
    queryText: "select * from Users where Id = @users", // the sql
    parameters: dbCmd.Parameters, // contains the @users parameter
    databaseTime: 10, // time in milliseconds from executing the query to the first row
    totalTime: 15, // time in milliseconds from executing the query to completing processing
    rowCount: 10 // number of rows returned by this query
);

This will cause the select * from Users where Id = @users query to show up as a new session in the profiler.

You might get an error about access to the license file being denied when trying to use the Profiler for the first time. This is usually a result of Windows blocking the Profiler because it was downloaded from the Internet.

To resolve this issue, go to the folder where you extracted the Profiler, right click on HibernatingRhinos.Profiler.Client.exe and select Properties. You should see something similar to this:

Click on the Unblock button (marked in red in the above picture) to grant the Profiler standard application permissions and resolve the issue.

There is a known compatibility issue between NHibernate Profiler and the UltraMon program on some computers. If you are running UltraMon on your computer and you try to maximize the profiler window on one of your monitors, you may get the following exception:

An error occurred in NHibernate Profiler which is known to be caused by using UltraMon on your computer.
Please follow the instructions in the following URL in order to solve this issue: http://nhprof.com/Learn/Faq/UltraMon

This is because the UltraMon program tries to inject some controls into the NHibernate Profiler windows.

If you get the above message, please instruct your UltraMon program not to inject buttons into the NHibernate Profiler windows. This can be done by following these steps:

Right click on the UltraMon icon in the System Tray and choose Options.

On the Compatibility tab select Configure Settings.

Select the NHibernate Profiler from the Running Applications list and press Next.

Make sure to check the disable window buttons option and press Finish.

Each join requires the database to perform additional work, and the complexity and cost of the query grow rapidly with each additional join. While relational databases are optimized for handling joins, it is often more efficient to perform several separate queries instead of a single query with several joins in it.

For OLTP systems, you should consider simplifying your queries or simplifying the data model. While we do not recommend avoiding joins completely, we strongly discourage queries with large numbers of joins.

Another issue to pay attention to is possible Cartesian products in queries containing joins. It is very easy to create one and not notice it during development.

With an OR/M abstraction such as NHibernate, it is easy to generate queries that are not optimized. In particular, this alert is raised when we detect that the type of the entity field used as a parameter in the query is not an exact match for the table's column type in the database.

A type mismatch between the entity's property type and the table's column type can cause several implicit conversion issues, which can lead to performance problems and conversion overflow issues. Most commonly, you'll see them causing issues because they prevent the database from using indexes properly.

Queries still work, and everything looks fine externally, but the cost of the query is many times what it would be otherwise.

Common examples of type mismatches are:

Int32 vs bigint

Int64 vs int

String vs char / varchar

AnsiString vs nvarchar / nchar

In order to resolve this issue, you need to make sure that your column type and entity definitions are a match in all respects. Pay attention to the common type mismatches shown above to guide you in this.

The profiler detected identical statements that use different sizes for the same parameters. Let us say that we issue two queries to find users by name. (Note: I am using syntax that shows you the size of the parameters, to demonstrate the problem.)
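For illustration, this is roughly what the two statements could look like on SQL Server (hypothetical table and values; note the differing nvarchar sizes inferred from the parameter values):

```sql
-- two logically identical queries, cached as two separate plans
exec sp_executesql N'select * from Users where Username = @p0',
                   N'@p0 nvarchar(3)', @p0 = N'bob'
exec sp_executesql N'select * from Users where Username = @p0',
                   N'@p0 nvarchar(6)', @p0 = N'ayende'
```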

This sort of code results in two query plans stored in the database query cache, because of the different parameter sizes. In fact, if we assume that the Username column has a length of 16, this single query may take up 16 places in the query cache.

Worse, if you have two parameters whose sizes change, such as username (length 16) and password (length 16), you may take up to 256 places in the query cache. Obviously, if you use more parameters, or if their lengths are longer, the number of places that a single query can take in the query cache goes up rapidly.

This can cause performance problems, as the database needs to keep track of more query plans (using more memory) and may need to evict query plans from the cache, which results in having to rebuild the query plan (increasing server load and query time).

NHibernate provides the following configuration option to control this:

<property name='prepare_sql'>true</property>

After specifying this, NHibernate will generate a consistent query plan for all identical queries, regardless of the actual parameter size.

This alert is raised when the profiler detects that a query was generated from the view in an MVC application. Issuing queries from the view is a bad practice for several reasons:

It increases the time that the connection to the database has to stay open. The recommendation is to keep it open only for the duration of the action, not throughout the lifetime of the request.

It makes it that much harder to understand what the data requirements for a particular action are.

When writing views, you shouldn't be bothered with thinking about persistence, or the number of queries that your views are generating.

The views are often the most changeable parts of the application, and having the application issue queries from the views may result in significant changes to the application's data access patterns between revisions.

Most often, queries from the views result from lazy loading, Select N+1 or similar bad practices.

We strongly recommend that you avoid generating queries in the view. Instead, perform all your queries in the action, and provide the views with in-memory data to render.

The database is very good at answering queries that look for an exact match, such as this:

select * from Users where Name = 'ayende'

But it requires a lot more work when you are using a like, such as this query:

select * from Users where Name like 'ayende%'

In many cases, the database can still optimize this query, and assuming that you have an index on this field, use the index.

But things are drastically different when you have a query that checks for contains:

select * from Users where Name like '%ayende%'

Or using ends with:

select * from Users where Name like '%ayende'

The problem is that the database cannot use an index for this sort of query and is forced to issue a full table scan, inspecting each of the values in the database for a match. This tends to be a very inefficient process.

You should carefully consider whether you should use this feature. If you really need to support ends-with and contains queries, consider using either the database's own full text search indexing or an external full text search option, such as Lucene or Solr.

The excessive number of rows returned warning is generated by the profiler when... a query returns a large number of rows. The simplest scenario is that we loaded all the rows in a large table, using something like the following code snippet:

session.CreateCriteria(typeof(Order)).List();

This is a common mistake when you are binding to a UI component (such as a grid) that performs its own paging. This is a problem on several levels:

We tend to want to see only part of the data.

We just loaded a whole lot of unnecessary data.

We are sending more data than necessary over the network.

We have a higher memory footprint than we should.

In extreme cases, we may crash as a result of an out of memory exception.

None of these are good, and as with the discussion on unbounded result sets, this problem can be easily prevented by applying a limit at the database level to the number of rows that we want to load at any given time.

But it is not just simple queries without a limit that can cause this issue. Another common source of this error is a Cartesian product when using joins. Check out this query:
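A sketch of such a query (Order, OrderLines and Snapshots are illustrative entity names):

```csharp
// Two collection join-fetches in one query produce a Cartesian product:
// rows returned = orders x order lines x snapshots
var orders = session.CreateQuery(
        "from Order o " +
        "join fetch o.OrderLines " +
        "join fetch o.Snapshots")
    .List<Order>();
```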

Assuming we have 10 orders, each with 10 order lines and 5 snapshots, we are going to load 500 rows from the database. Most of them will contain duplicate data that we already have, and NHibernate will reduce the duplication to the appropriate object graph.

The problem is that we still loaded too much data, with the same issues as before. Worse, Cartesian products don't tend to stop at 500 rows; they escalate very quickly to ridiculous numbers of rows returned for a trivial amount of data that we actually want.

The solution for this issue is to change the way we query the data. Instead of issuing a single query with several joins, we can split this into several queries, and send them all to the database in a single batch using Multi Query or Multi Criteria.
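A sketch of the split using NHibernate's MultiQuery (entity names are illustrative):

```csharp
// Two separate fetches, sent to the database in a single round trip
var results = session.CreateMultiQuery()
    .Add("from Order o join fetch o.OrderLines")
    .Add("from Order o join fetch o.Snapshots")
    .List();
```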

This warning is raised when the profiler detects that you are writing a lot of data to the database. Similar to the warning about too many calls to the database, the main issue here is the number of remote calls and the time they take.

We can batch together several queries using NHibernate's support for MultiQuery and MultiCriteria, but a relatively unknown feature for NHibernate is the ability to batch a set of write statements into a single database call.

This is controlled using the adonet.batch_size setting in the configuration. If you set it to a number larger than zero, you can immediately start benefiting from a reduced number of database calls. You can even set this value at runtime, using session.SetBatchSize().
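In the XML configuration this looks like the following (25 is an arbitrary example value):

```xml
<property name="adonet.batch_size">25</property>
```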

Using more than one session per request is generally a bad practice. Here is why:

Each session has its own database connection. Using more than one session means using more than one database connection per request, resulting in additional strain on the database and slower overall performance.

Typically we expect the session to keep track of our entities. When we have multiple sessions, each session is not aware of the entities tracked by the other sessions, and might have to query the database again for an entity's current state or issue an unnecessary update.

Having more than a single session also means that we can't take advantage of NHibernate's Unit of Work, and we have to manually manage our entities' change tracking. We might also end up with multiple instances of the same entities in the same request (which using a single session for the whole request would prevent).

Having more than one session means that the ORM has more work to do. In most cases, this is unnecessary and should be avoided.

You can no longer take advantage of features scoped to the session, such as the first level cache.

It's usually recommended to use one session per request. You should investigate the Session Per Request pattern.

Typically this situation results from micromanaging the session, meaning that we open the session just before the query and close it immediately after the query or operation is executed. For example, see the following code:
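A hypothetical example of such micromanagement (sessionFactory and the Post entity are illustrative):

```csharp
// Anti-pattern: every call opens (and closes) its own session and connection
public Post GetPost(int postId)
{
    using (var session = sessionFactory.OpenSession())
    {
        return session.Get<Post>(postId);
    }
}
```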

Writing to the database from more than one session in the same request is bad for several reasons. Here is why:

Each session uses a separate database connection. Using more than one session means using more than one database connection per request. This can hurt overall performance, and puts more pressure on the database.

Using different sessions means that we cannot take advantage of NHibernate's built-in transaction support and have to rely on System.Transactions, which is significantly slower.

We can't rely on the database to ensure a transactionally safe view of the data, since we are using several different transactions to access it. Note that this is the case even when using System.Transactions.

Using System.Transactions forces you to use the DTC in order to keep all sessions in the same transaction. Using the DTC leads to bad performance and is more brittle than not using it.

Without using System.Transactions, there is no ability to use a transaction across all the sessions.

For example, let us consider the following code, which is using multiple sessions to perform a single operation:
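A hypothetical sketch of such code (Account, fromId, toId and amount are illustrative), transferring money using two sessions:

```csharp
// Each session gets its own connection and its own transaction
using (var session1 = sessionFactory.OpenSession())
using (var tx = session1.BeginTransaction())
{
    var from = session1.Get<Account>(fromId);
    from.Balance -= amount;
    tx.Commit();
}
// If the process fails here, the money has simply vanished
using (var session2 = sessionFactory.OpenSession())
using (var tx = session2.BeginTransaction())
{
    var to = session2.Get<Account>(toId);
    to.Balance += amount;
    tx.Commit();
}
```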

In this example, we call the database twice, each time from a different session. Because of that, we cannot take advantage of the database's native transaction support, since each operation happens in a different transaction. Our choices are to either operate essentially without transactions (in which case money can literally disappear into thin air) or participate in a distributed transaction (System.Transactions).

We are also unable to take advantage of NHibernate features that increase system performance, such as batching multiple statements against the database.

The solution for this is simple: use just one session per request. The strong recommendation of the NHibernate development team is that you should always use the ambient / contextual session. NHibernate also provides an API that can help you do so.

This alert is raised whenever the profiler detects that a query has been generated as a result of a data binding operation.

The simplest form of this is binding to a LINQ query against the database:

DataContext = from user in ctx.Users
              where user.IsActive
              select user;

The problem with this is that the Win Forms / WPF data binding code was designed with the assumption that it would work against in-memory data. Therefore, setting a data source usually triggers multiple calls to interface methods to obtain the data. As a consequence, when you data bind against a database query directly, the query is often evaluated several times, triggering multiple round trips to the server.

Another problem with data binding that the profiler will detect and warn about is lazy loading as a result of a data binding operation. For example, binding to Order.Customer.Name will lead to loading the customer entity and binding to its Name property. The problem with this sort of query is that it is likely to repeat for as many items as there are in the query results, leading to Select N+1.

Even assuming that you are binding to just a single entity, lazy loading through data binding is a bad practice. You should use eager loading to load the data up front, rather than let the UI load it in the worst possible manner.

We have detected a query that was made on an unindexed column. The problem here is that a query on an unindexed column forces the database to perform a table scan, which requires the database to go through all the rows in the table.

In some cases, on very small tables, that is fine and the best way to find the results you want. On larger tables, however, it can be a prohibitively expensive operation, which usually ends up biting you only after the data set has grown to production size, and then fails at 2 AM or thereabouts.

Consider this query carefully and evaluate whether an index needs to be created, whether the query needs to be modified, or whether this table is small enough that a table scan is the appropriate behavior.

Select N + 1 is a data access anti-pattern where the database is accessed in a suboptimal way. Take a look at this code sample, then we'll discuss what is going on. Say you want to show the user all comments from all posts so that they can delete all
of the nasty comments. The naive implementation would be something like:
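A sketch of such a naive implementation (Post and Comments are illustrative names):

```csharp
// One query for the posts...
var posts = session.CreateQuery("from Post").List<Post>();
foreach (var post in posts)
{
    // ...and one extra query per post when the lazy collection is touched
    foreach (var comment in post.Comments)
    {
        Console.WriteLine(comment.Text);
    }
}
```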

In this example, we can see that we are loading a list of posts (the first select) and then traversing the object graph. However, we access the collection in a lazy fashion, causing NHibernate to go to the database and bring the results back one row at
a time. This is incredibly inefficient, and the NHibernate Profiler will generate a warning whenever it encounters such a case.

The solution for this example is simple: Force an eager load of the collection up front. Using HQL:
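A sketch of the eager-loading query in HQL (Post and Comments are illustrative names):

```csharp
// A single query that joins in the comments up front
var posts = session
    .CreateQuery("from Post p left join fetch p.Comments")
    .List<Post>();
```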

In each case, we will get a join and only a single query to the database.

Note: this is the classical appearance of the problem. It can also surface in other scenarios, such as calling the database in a loop, or more complex object graph traversals. In those cases, it is generally much harder to see what is causing the issue.

Having said that, NHibernate Profiler will detect those scenarios just as well, and give you the exact line in the source code that causes this SQL to be generated.

Other options for solving this issue are MultiQuery and MultiCriteria, which are also used to solve the issue of Too Many Queries.

Take a look at this code sample, then we'll discuss what is going on. Say you want to show the user all comments from all posts so that they can delete all of the nasty comments. The naive implementation would be something like:
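A hypothetical sketch of such an implementation (Post, Comment and sessionFactory are illustrative names):

```csharp
// Anti-pattern: a new session (and database connection) per iteration
IList<Post> posts;
using (var session = sessionFactory.OpenSession())
{
    posts = session.CreateQuery("from Post").List<Post>();
}

foreach (var post in posts)
{
    using (var session = sessionFactory.OpenSession()) // one session per post!
    {
        var comments = session
            .CreateQuery("from Comment c where c.Post.Id = :id")
            .SetParameter("id", post.Id)
            .List<Comment>();
    }
}
```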

In this example, we can see that we are opening a session and loading a list of posts (the first select) and then for each of the posts that we loaded, we open up a new session, including a new connection to the database, and make an additional query.

Because we are using multiple sessions in this fashion, we can't really take advantage of NHibernate features that are meant to deal with this exact situation. The first step is to avoid using
more than one session per request, and the next step is to follow the guidance on solving select N + 1 problems
for NHibernate.

One of the more common mistakes that are made when mapping associations from the database is not setting the proper inverse attribute to let NHibernate know which side of the association is the owner of the association.

This is important because while in the object world all associations are unidirectional, in the database world all associations are bidirectional. As a result, NHibernate assumes by default that all associations are unidirectional and requires an explicit step (setting inverse='true' on the association) to recognize bidirectional associations.

The actual problem is simple: NHibernate issues a superfluous update statement. NHibernate Profiler is able to detect and warn against such a mistake.

Example of the problem:

The following object model shows a very simple bidirectional association. Blog has Posts, Post has Blog.

This is mapped to the following table model:

Notice that while in the object model this is a bidirectional association maintained in two different places, it is maintained in a single place in the database.

This is a very common case, and it is quite easy to get it wrong. By default, NHibernate has to assume that it must update the column on both sides, so creating a new post and adding it to the Blog's Posts collection will result in two statements being written to the database:

As you can see, we are actually setting the BlogId to the same value twice: once in the insert statement and a second time in the update statement.

Now, there is a very easy fix for this issue: all you have to do is tell NHibernate, on the Blog's Posts mapping, that the responsibility for actually updating the column value is on the other side. This is also something that I tend to check in code reviews quite often. The fix is literally just specifying inverse='true' on the one-to-many collection association.
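A sketch of the fixed mapping for the Blog's Posts collection (column and class names follow the example above):

```xml
<set name="Posts" inverse="true">
  <!-- the Post side owns the BlogId column, so NHibernate skips the extra update -->
  <key column="BlogId" />
  <one-to-many class="Post" />
</set>
```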

NHibernate Profiler has detected a large number of cache calls being made in a single request. Note that this alert is only applicable if you are using a distributed cache; if you are using a local cache, you can safely ignore this alert.

If you are using a distributed cache, however, this alert indicates a possible performance issue that you should be aware of. While making a single call to a cache tends to be significantly faster than calling the database, making a large number of fast calls can end up being slower than making a smaller number of slow calls.

It is easy to get into situations where you are making hundreds of calls to the cache, and with a distributed cache, this turns into hundreds of remote calls, which end up being very costly in terms of performance.

To avoid this issue, consider limiting the number of results returned by cached queries (since NHibernate will have to make a cache call for each returned result). This is a best practice in general, even if you aren't using the cache or are using a local cache.

Another common reason for this alert is trying to rely on the cache as a crutch, making a large number of queries and assuming that the cache will make them fast enough after the first time. The problem with this approach is that hitting the cache does take time, and even when using a local cache, you need to consider the case when the data has expired or has been purged from the cache.

NHibernate Profiler has detected a large number of cache calls being made in a single session. Note that this alert is only applicable if you are using a distributed cache; if you are using a local cache, you can safely ignore this alert.

If you are using a distributed cache, however, this alert indicates a possible performance issue that you should be aware of. NHibernate currently does not batch calls to the cache (this is a planned feature), and while making a request to a cache tends to be significantly faster than calling the database, making a large number of fast calls can end up being slower than making a smaller number of slow calls.

It is easy to get into situations where you are making hundreds of calls to the cache, and with a distributed cache, this turns into hundreds of remote calls, which end up being very costly in terms of performance.

To avoid this issue, consider limiting the number of results returned by cached queries (since NHibernate will have to make a cache call for each returned result). This is a best practice in general, even if you aren't using the cache or are using a local cache.

Another common reason for this alert is trying to rely on the cache as a crutch, making a large number of queries and assuming that the cache will make them fast enough after the first time. The problem with this approach is that hitting the cache does take time, and even when using a local cache, you need to consider the case when the data has expired or has been purged from the cache.

One of the most expensive operations that we can do in our applications is to make a remote call. Going beyond our own process is an extremely expensive operation. Going beyond the local machine is more expensive still.

Calling the database, whether to query or to write, is a remote call, and we want to reduce the number of remote calls as much as possible. This warning is raised when the profiler notices that a single request is making an excessive number of calls to the database. This is usually an indication of a potential optimization in the way the sessions are used in the request.

There are several reasons why this can be:

A large number of queries as a result of a Select N + 1

Calling the database in a loop

Updating (or inserting / deleting) a large number of entities

A large number of (different) queries that we execute to perform our task

Calling the database in a loop is generally a bug, and should be avoided. Usually you can restructure the code in such a way that you are not required to call the database in that way.

The last two issues have to be dealt with on an individual basis. Sometimes you can use a different approach (such as bulk copy for batching inserts), or a specific query or stored procedure that will get all the data from the database in a single round trip.

One of the most expensive operations that we can do in our applications is to make a remote call. Going beyond our own process is an extremely expensive operation. Going beyond the local machine is more expensive still.

Calling the database, whether to query or to write, is a remote call, and we want to reduce the number of remote calls as much as possible. This warning is raised when the profiler notices that a single session is making an excessive number of calls to the database. This is usually an indication of a potential optimization in the way the session is used.

There are several reasons why this can be:

A large number of queries as a result of a Select N + 1

Calling the database in a loop

Updating (or inserting / deleting) a large number of entities

A large number of (different) queries that we execute to perform our task

For the first reason, you can see the suggestions for Select N + 1. Calling the database in a loop is generally a bug, and should be avoided. Usually you can restructure the code in such a way that you are not required to call the database in that way.

Updating a large number of entities is discussed in Use statement batching, and mainly involves setting the batch size to reduce the number of calls that we make for the database.
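As a minimal sketch of what statement batching involves (the batch size value of 25 is an arbitrary example, not a recommendation from this document):

```csharp
// Enable ADO.NET statement batching so NHibernate groups multiple
// INSERT/UPDATE/DELETE statements into fewer database round trips.
var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty("adonet.batch_size", "25");

// The batch size can also be adjusted per session at runtime:
// session.SetBatchSize(25);
```

The right batch size depends on your workload; measure with the profiler rather than guessing.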

The last issue is more interesting. We need data from several sources, and we issue a separate query for each of them, which incurs the remote-call costs listed above.

NHibernate provides a nice way to avoid this by using MultiQuery and MultiCriteria, both of which allow you to aggregate several queries into a single call to the database. If this is your scenario, I strongly recommend that you take a look at MultiQuery and MultiCriteria and see how to use them in your applications.
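A sketch of what this looks like with MultiQuery (the entity names Blog and Post, and the parameter value, are assumptions for illustration):

```csharp
using (var tx = session.BeginTransaction())
{
    // Two HQL queries aggregated into a single database round trip.
    var multi = session.CreateMultiQuery()
        .Add("from Blog b where b.Author = :author")
        .Add("from Post p where p.Author = :author")
        .SetString("author", "ayende");

    // List() executes both queries in one call; each element of the
    // outer list holds the results of the corresponding query.
    var results = multi.List();
    var blogs = (System.Collections.IList)results[0];
    var posts = (System.Collections.IList)results[1];

    tx.Commit();
}
```

MultiCriteria (`session.CreateMultiCriteria()`) works the same way for criteria-based queries.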

With the abstraction of an OR/M such as NHibernate, it is easy to generate queries which are hard to follow and carry performance penalties, since you don't see the exact SQL that is produced at the end. This often encourages you to write code that produces bad SQL queries.

The SQL is bad not only for readability, but also because of the work the database's query optimizer has to do in order to serve such a query.

In this case, we detected a query with a lot of work inside its WHERE clauses. Having too many expressions inside the WHERE clause can lead to poor performance, especially as you grow to a big data set.

Consider the complexity of your statement and whether you can reduce the work done by refactoring your code, reducing the amount of effort required by the database to answer your queries.

With the abstraction of an OR/M such as NHibernate, it is easy to generate queries which are hard to follow and carry performance penalties. Since you don't see the exact SQL that is produced at the end, you can write code that produces bad SQL. The SQL is bad not only for readability, but also because of the work the database's query analyzer has to do in order to serve such a query.

To demonstrate such a query, look at the query below. It has lots of nested SELECT statements, which make the SQL hard to follow and also slower for the database to run. The query goes on for quite a while, and it is doing a lot of things, none of them very fast.

The problem with such queries is that the code that generates them looks innocent and hides its costly impact. Consider modifying the queries that raised this warning; even splitting them up into separate queries executed independently might be a faster approach.

SELECT Project4.Id AS Id,
Project4.Name AS Name,
Project4.Description AS Description,
Project4.CategoryTemplateId AS CategoryTemplateId,
Project4.MetaKeywords AS MetaKeywords,
Project4.MetaDescription AS MetaDescription,
Project4.MetaTitle AS MetaTitle,
Project4.ParentCategoryId AS ParentCategoryId,
Project4.PictureId AS PictureId,
Project4.PageSize AS PageSize,
Project4.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
Project4.PageSizeOptions AS PageSizeOptions,
Project4.PriceRanges AS PriceRanges,
Project4.ShowOnHomePage AS ShowOnHomePage,
Project4.IncludeInTopMenu AS IncludeInTopMenu,
Project4.HasDiscountsApplied AS HasDiscountsApplied,
Project4.SubjectToAcl AS SubjectToAcl,
Project4.LimitedToStores AS LimitedToStores,
Project4.Published AS Published,
Project4.Deleted AS Deleted,
Project4.DisplayOrder AS DisplayOrder,
Project4.CreatedOnUtc AS CreatedOnUtc,
Project4.UpdatedOnUtc AS UpdatedOnUtc
FROM (SELECT Limit1.Id AS Id,
Limit1.Name AS Name,
Limit1.Description AS Description,
Limit1.CategoryTemplateId AS CategoryTemplateId,
Limit1.MetaKeywords AS MetaKeywords,
Limit1.MetaDescription AS MetaDescription,
Limit1.MetaTitle AS MetaTitle,
Limit1.ParentCategoryId AS ParentCategoryId,
Limit1.PictureId AS PictureId,
Limit1.PageSize AS PageSize,
Limit1.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
Limit1.PageSizeOptions AS PageSizeOptions,
Limit1.PriceRanges AS PriceRanges,
Limit1.ShowOnHomePage AS ShowOnHomePage,
Limit1.IncludeInTopMenu AS IncludeInTopMenu,
Limit1.HasDiscountsApplied AS HasDiscountsApplied,
Limit1.SubjectToAcl AS SubjectToAcl,
Limit1.LimitedToStores AS LimitedToStores,
Limit1.Published AS Published,
Limit1.Deleted AS Deleted,
Limit1.DisplayOrder AS DisplayOrder,
Limit1.CreatedOnUtc AS CreatedOnUtc,
Limit1.UpdatedOnUtc AS UpdatedOnUtc
FROM (SELECT Distinct1.Id AS Id
FROM (SELECT Extent1.Id AS Id1,
Extent1.ParentCategoryId AS ParentCategoryId,
Extent1.LimitedToStores AS LimitedToStores
FROM dbo.Category AS Extent1
LEFT OUTER JOIN dbo.AclRecord AS Extent2
ON (Extent1.Id = Extent2.EntityId)
AND (N'Category' = Extent2.EntityName)
WHERE (Extent1.Published = 1)
AND (Extent1.Deleted <> 1)
AND ((Extent1.SubjectToAcl <> 1)
OR ((Extent2.CustomerRoleId IN (3))
AND (Extent2.CustomerRoleId IS NOT NULL)))) AS Filter1
LEFT OUTER JOIN dbo.StoreMapping AS Extent3
ON (Filter1.Id1 = Extent3.EntityId)
AND (N'Category' = Extent3.EntityName)
WHERE (Filter1.ParentCategoryId = 7 /* @p__linq__0 */)
AND ((Filter1.LimitedToStores <> 1)
OR (1 /* @p__linq__1 */ = Extent3.StoreId))) AS Distinct1) AS Project2
OUTER APPLY (SELECT TOP (1) Filter3.Id2 AS Id,
Filter3.Name AS Name,
Filter3.Description AS Description,
Filter3.CategoryTemplateId AS CategoryTemplateId,
Filter3.MetaKeywords AS MetaKeywords,
Filter3.MetaDescription AS MetaDescription,
Filter3.MetaTitle AS MetaTitle,
Filter3.ParentCategoryId AS ParentCategoryId,
Filter3.PictureId AS PictureId,
Filter3.PageSize AS PageSize,
Filter3.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
Filter3.PageSizeOptions AS PageSizeOptions,
Filter3.PriceRanges AS PriceRanges,
Filter3.ShowOnHomePage AS ShowOnHomePage,
Filter3.IncludeInTopMenu AS IncludeInTopMenu,
Filter3.HasDiscountsApplied AS HasDiscountsApplied,
Filter3.SubjectToAcl AS SubjectToAcl,
Filter3.LimitedToStores AS LimitedToStores,
Filter3.Published AS Published,
Filter3.Deleted AS Deleted,
Filter3.DisplayOrder AS DisplayOrder,
Filter3.CreatedOnUtc AS CreatedOnUtc,
Filter3.UpdatedOnUtc AS UpdatedOnUtc
FROM (SELECT Extent4.Id AS Id2,
Extent4.Name AS Name,
Extent4.Description AS Description,
Extent4.CategoryTemplateId AS CategoryTemplateId,
Extent4.MetaKeywords AS MetaKeywords,
Extent4.MetaDescription AS MetaDescription,
Extent4.MetaTitle AS MetaTitle,
Extent4.ParentCategoryId AS ParentCategoryId,
Extent4.PictureId AS PictureId,
Extent4.PageSize AS PageSize,
Extent4.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
Extent4.PageSizeOptions AS PageSizeOptions,
Extent4.PriceRanges AS PriceRanges,
Extent4.ShowOnHomePage AS ShowOnHomePage,
Extent4.IncludeInTopMenu AS IncludeInTopMenu,
Extent4.HasDiscountsApplied AS HasDiscountsApplied,
Extent4.SubjectToAcl AS SubjectToAcl,
Extent4.LimitedToStores AS LimitedToStores,
Extent4.Published AS Published,
Extent4.Deleted AS Deleted,
Extent4.DisplayOrder AS DisplayOrder,
Extent4.CreatedOnUtc AS CreatedOnUtc,
Extent4.UpdatedOnUtc AS UpdatedOnUtc
FROM dbo.Category AS Extent4
LEFT OUTER JOIN dbo.AclRecord AS Extent5
ON (Extent4.Id = Extent5.EntityId)
AND (N'Category' = Extent5.EntityName)
WHERE (Extent4.Published = 1)
AND (Extent4.Deleted <> 1)
AND ((Extent4.SubjectToAcl <> 1)
OR ((Extent5.CustomerRoleId IN (3))
AND (Extent5.CustomerRoleId IS NOT NULL)))) AS Filter3
LEFT OUTER JOIN dbo.StoreMapping AS Extent6
ON (Filter3.Id2 = Extent6.EntityId)
AND (N'Category' = Extent6.EntityName)
WHERE (Filter3.ParentCategoryId = 7 /* @p__linq__0 */)
AND ((Filter3.LimitedToStores <> 1)
OR (1 /* @p__linq__1 */ = Extent6.StoreId))
AND (Project2.Id = Filter3.Id2)) AS Limit1) AS Project4
ORDER BY Project4.DisplayOrder ASC

With the abstraction of an OR/M such as NHibernate, it's easy to generate queries which are not optimized.

In this case, we detected that you query over a large number of tables, which may lead to poor performance and slow queries, especially once you grow to a large data set. The more tables in the query, the more work the database has to do, whether via subselects, joins, or nested queries.

Consider optimizing the query to use fewer tables, even splitting it up into multiple separate queries if that proves to be more performant.

With the abstraction of an OR/M such as NHibernate, it is easy to create code which is hard to follow and carries performance penalties, since you don't see the exact SQL that is produced at the end. This often encourages you to write code that produces bad SQL. The SQL is bad not only for readability, but also because of the work the database's query analyzer has to do in order to serve such a query.

In this case, we detected a query with a lot of WHERE clauses. Having too many WHERE clauses can lead to poor performance, especially as you grow to a big data set.

This piece of code will work and save the new comment. If you want to roll back the transaction, it is recommended that you do so using an explicit Rollback(), since that is much easier to understand when reading the code later.
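A sketch of the explicit-rollback pattern (the Comment entity and the validation check are assumptions for illustration):

```csharp
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    session.Save(new Comment { Text = "First!" });

    if (validationFailed)
        tx.Rollback(); // explicit, and easy to understand when reading later
    else
        tx.Commit();
}
```

Relying on Dispose() to roll back works, but leaves the intent implicit.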

An unbounded result set is where a query is performed and does not explicitly limit the number of returned results using SetMaxResults() with NHibernate, or TOP or LIMIT clauses in the SQL. Usually, this means that the application assumes that a query
will always return only a few records. That works well in development and in testing, but it is a time bomb waiting to explode in production.

The query may suddenly start returning thousands upon thousands of rows, and in some cases, it may return millions of rows. This leads to more load on the database server, the application server, and the network. In many cases, it can grind the entire
system to a halt, usually ending with the application servers crashing with out of memory errors.

Here is one example of a query that will trigger the unbounded result set warning:
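A sketch of such a query, followed by the bounded version (entity and property names are assumptions for illustration):

```csharp
// Unbounded: no SetMaxResults(), so this returns every matching row,
// however many there happen to be in production.
var comments = session.CreateQuery("from Comment c where c.Post.Id = :id")
    .SetInt32("id", postId)
    .List<Comment>();

// Bounded: the result set is explicitly capped.
var pagedComments = session.CreateQuery("from Comment c where c.Post.Id = :id")
    .SetInt32("id", postId)
    .SetMaxResults(30)
    .List<Comment>();
```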

Now we are assured that we only need to handle a predictable, small result set, and if we need to work with all of them, we can page through the records as needed. But there is another common occurrence of the unbounded result set problem from directly
traversing the object graph, as in the following example:
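A sketch of the object-graph case (entity names are assumptions): simply touching the lazy collection loads all of it, with no way to limit the size.

```csharp
var post = session.Get<Post>(postId);
// Accessing the collection loads every comment for the post,
// regardless of how many rows that turns out to be.
var count = post.Comments.Count;
```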

Here, again, we are loading the entire set (which is, in fact, identical to the query we issued before) without regard for how big the result set may be. NHibernate provides a robust way of handling this scenario using filters:
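A sketch using ISession.CreateFilter to page through the collection instead of loading it whole (entity and property names are assumptions):

```csharp
var post = session.Get<Post>(postId);

// CreateFilter queries the collection itself; "this" refers to the
// collection element. SetMaxResults bounds the result set.
var recentComments = session
    .CreateFilter(post.Comments, "order by this.CreatedAt desc")
    .SetFirstResult(0)
    .SetMaxResults(10)
    .List<Comment>();
```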

A common mistake when using a database is to use transactions only when orchestrating several write statements. In reality, every operation that the database is doing is done inside a transaction, including queries and writes (update, insert, delete).

Note: In SQL Server, Implicit Transactions are usually called Autocommit Transactions.

When we don't define our own transactions, the database falls back into implicit transaction mode, where every statement runs in its own transaction, resulting in a large performance cost (database time to build and tear down transactions) and reduced consistency.

Even if we are only reading data, we should use a transaction, because using transactions ensures that we get consistent results from the database. NHibernate assumes that all access to the database is done under a transaction, and strongly discourages
any use of the session without a transaction.

Leaving aside the safety issue of working with transactions, the assumption that transactions are costly and that we need to optimize them is false. As previously mentioned, databases are always running in a transaction. Also, they have been heavily optimized
to work with transactions.

The real question here is: Is the transaction per-statement or per-batch? There is a non-trivial amount of work that needs to be done to create and dispose of a transaction; having to do it per-statement is more costly than doing it per-batch.

It is possible to control the number and type of locks that a transaction takes by changing the transaction isolation level (and, indeed, a common optimization is to reduce the isolation level).

NHibernate treats the call to Commit() as the time to flush all changed items from the unit of work to the database, and without an explicit call to Commit(), it has no way of knowing when it should do that. A call to Flush() is possible, but it is frowned
upon because this is usually a sign of improper transaction usage.

We strongly suggest using code similar to that shown above (or another approach to transactions, such as TransactionScope, or Castle's Automatic Transaction Management) in order to handle transactions correctly.
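A minimal sketch of the recommended pattern, with every unit of work (including a read-only one) wrapped in an explicit transaction:

```csharp
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // Even a pure read runs inside the transaction,
    // giving consistent results and proper flush semantics.
    var post = session.Get<Post>(postId);

    tx.Commit();
}
```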

Transaction and the second level cache

Another implication of not using explicit transactions with NHibernate is related to the use of the second level cache.

NHibernate goes to great length in order to ensure that the 2nd level cache maintains a consistent view of the database. This is accomplished by deferring all 2nd level cache updates to the transaction commit. In this way, we can assert that the data
in the 2nd level cache is the one committed to the database.

Forgoing the use of explicit transactions has the effect of nulling the 2nd level cache. Here is an example that would make this clear:
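A sketch contrasting the two cases (entity names are assumptions):

```csharp
// Without a transaction: the loaded Post is NOT put into the
// 2nd level cache, even if caching is enabled for Post.
using (var session = sessionFactory.OpenSession())
{
    var post = session.Get<Post>(postId);
}

// With a committed transaction: on Commit(), NHibernate updates
// the 2nd level cache with the loaded entities.
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var post = session.Get<Post>(postId);
    tx.Commit();
}
```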

Even if the 2nd level cache is enabled for Post, it is still not going to be cached in the 2nd level cache. The reason is that until we commit a transaction, NHibernate will not update the cache with the values for the loaded entities.

NHibernate Profiler has detected a session that is used in a different thread than the one it was opened on.

NHibernate sessions are not thread safe, and attempting to use them in multiple threads requires careful synchronization. It is generally better to consider a session only useful within the thread that created it.

There are valid scenarios for cross-thread session usage (background loading with careful synchronization, a session spanning multiple requests), but usually this alert indicates a problem in serializing access to the session.

Error

NHibernate Warning Surfacing

The NHibernate Profiler will surface logged warnings from NHibernate, so you can figure out the reason for the warning and fix it.

NHibernate is quite good at detecting problematic situations and warning you about them, so please pay attention to any warnings that are surfaced.

NHibernate Error Detection

Whenever NHibernate runs into an error (transaction deadlock, optimistic concurrency exception, etc.), the NHibernate Profiler will detect it and show you the full error (including the exception message and the stack trace).