Blackbaud CRM Performance

Posted on: October 12, 2017

Optimizing performance in Ad Hoc Query can be challenging. The optimizer does well with simple queries, but struggles with larger datasets pulled from multiple tables. This is because the optimizer does not have time to examine every option and must use a simplified approach. Essentially, it looks for a few good first steps, then a few good second steps, and keeps working until it has a good plan or it runs out of time. With complex queries, it might run out of time before it can find the best approach. Given that we cannot always avoid complex queries, how do we ensure good performance? Queri... Read More
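One common remedy when the optimizer runs out of time on a complex plan is to break the query into smaller stages it can optimize independently. A minimal T-SQL sketch of the idea — the table and column names here are illustrative, not the actual Blackbaud CRM schema:

```sql
-- Stage 1: materialize a small, selective intermediate result.
-- (Hypothetical tables/columns for illustration only.)
SELECT C.ID, C.KEYNAME
INTO #RecentDonors
FROM CONSTITUENT C
INNER JOIN REVENUE R ON R.CONSTITUENTID = C.ID
WHERE R.DATE >= DATEADD(YEAR, -1, GETDATE());

-- Stage 2: join the small intermediate set to the remaining tables.
-- Each stage presents the optimizer with a far simpler search space.
SELECT D.KEYNAME, A.ADDRESSBLOCK
FROM #RecentDonors D
INNER JOIN ADDRESS A ON A.CONSTITUENTID = D.ID;
```

Because each statement references fewer tables, the optimizer can evaluate a much larger share of the possible plans before its time budget runs out.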

Posted on: October 31, 2014

To improve the performance of your Blackbaud CRM databases, you can apply data compression in SQL Server. Data compression applies mathematical techniques to make database objects such as tables and indexes take up less space than they normally require. The benefits of compression vary based on factors such as the database and workload, but tests by the Blackbaud Enterprise Performance Team found significant space savings and performance improvements. One test even reduced the space that the database used by 44 percent. To help customers realize the benefits of compression technology, the Perf... Read More
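As a sketch of what applying compression looks like, the statements below enable PAGE compression on a hypothetical table and index (the object names are illustrative, and compression is an Enterprise Edition feature in older SQL Server versions, so check your edition first):

```sql
-- Estimate the savings before committing to a rebuild:
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'REVENUE',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Apply PAGE compression to the table (illustrative name):
ALTER TABLE dbo.REVENUE
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- And to an individual nonclustered index (illustrative name):
ALTER INDEX IX_REVENUE_CONSTITUENTID ON dbo.REVENUE
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Running the estimate first lets you target only the objects where compression pays for its CPU cost.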

Posted on: June 3, 2014

It is often mentioned that Query is limited to 500 rows and Data Lists to 10,000. These are "governor" limits intended to prevent users from running runaway queries or lists, but they can be changed. Here is how. The settings you’ll need are AdHocQueryMaxRecordsDefault and DataListMaxRecordsDefault. Both belong in the <appSettings> section of the web.config and should look something like this: <add key="AdHocQueryMaxRecordsDefault" value="100000" /> <add key="DataListMaxRecordsDefault" value="100000" /> To get some i... Read More

Posted on: April 11, 2014

You’ve probably heard that indexes and database statistics are key to a healthy database. And that maintaining them is vital to getting the most out of your SQL Server system. Well, when someone tells me something like that, I want them to prove it! So the Enterprise Performance team conducted an investigation into the impact of indexes and database statistics. We used a realistic scenario and test data, and we observed substantial performance gains. Our test found that rebuilding indexes and updating statistics significantly improved response times for the most heavily used pages (less waitin... Read More
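A minimal maintenance sketch of the two operations tested above — in practice you would rebuild only fragmented indexes (see `sys.dm_db_index_physical_stats`) rather than everything, and the table name here is illustrative:

```sql
-- Rebuild all indexes on one table (illustrative name):
ALTER INDEX ALL ON dbo.CONSTITUENT REBUILD;

-- Refresh statistics database-wide:
EXEC sp_updatestats;

-- Or refresh statistics for a single table with a full scan:
UPDATE STATISTICS dbo.CONSTITUENT WITH FULLSCAN;
```

Rebuilding an index also rebuilds its statistics with the equivalent of a full scan, so the per-table `UPDATE STATISTICS` step matters most for tables whose indexes you did not rebuild.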

Posted on: April 11, 2014

Database indexes can significantly improve the performance of queries, search lists, and other features. Blackbaud CRM includes a variety of out-of-the-box indexes to optimize core system performance, and it also allows you to add your own indexes to Blackbaud production tables when necessary. Before you add indexes to your Blackbaud CRM database, however, you should understand the performance tradeoffs that indexes can incur. Likewise, you should follow a handful of conventions to make sure that your custom indexes are easy to find and will not be overwritten. To help database administrators and... Read More
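As an illustration of the naming-convention point, a custom index might look like the sketch below. The `USR_` prefix is an assumption here — one plausible convention for marking customer-added indexes so they are easy to find and are not confused with Blackbaud's own — so confirm the convention your installation actually uses; the table and columns are likewise illustrative:

```sql
-- Hypothetical custom index; USR_ prefix and column names are assumptions.
CREATE NONCLUSTERED INDEX USR_IX_CONSTITUENT_LOOKUPID
    ON dbo.CONSTITUENT (LOOKUPID)
    INCLUDE (KEYNAME);
```

A distinctive prefix also makes it trivial to enumerate every custom index later by filtering `sys.indexes` on the name.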

Posted on: March 5, 2014

This article will help you improve your BBDW refresh/ETL (Extract, Transform, and Load) times. Things to consider are as follows: 1) Update statistics on the CRM database. Simply running the SQL Server stored procedure sp_updatestats on the Blackbaud CRM database and letting it run (potentially for a few minutes) can save hours on a BBDW refresh. exec sp_updatestats Some ETL performance work I've been doing recently at a client with larger data volumes (6 million+ constituents, 25 million+ gifts, 60 million+ gift attributes) initially resulted in Financial Transaction and Financia... Read More

Posted on: February 10, 2014

As with any SQL Server database, managing the growth of your Blackbaud CRM database files can be critical to the performance of your database. For example, auto-growth is a useful tool in scenarios where files need to expand automatically because some abnormal event requires more space. However, while auto-growth is useful as a fallback mechanism, you do not want to rely on it for normal database growth because it can cause delays or even time-outs for end users. Similarly, monitoring the growth of your Blackbaud CRM database is crucial to plan ahead for storage space requirements. After all,... Read More
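A sketch of the alternative to relying on auto-growth: pre-size the files for expected growth and set a fixed-size growth increment as the fallback, rather than the default percentage growth. The database and file names, and the sizes, are illustrative — substitute your own:

```sql
-- Pre-size the data and log files and use fixed growth increments.
-- (Names and sizes are placeholders, not actual Blackbaud CRM defaults.)
ALTER DATABASE BBInfinity
    MODIFY FILE (NAME = 'BBInfinity_Data', SIZE = 100GB, FILEGROWTH = 1GB);

ALTER DATABASE BBInfinity
    MODIFY FILE (NAME = 'BBInfinity_Log', SIZE = 20GB, FILEGROWTH = 1GB);
```

A fixed increment keeps growth events predictable in duration; percentage growth produces ever-larger (and ever-slower) expansions as the files grow.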

Posted on: September 4, 2013

When you create a search list with the Infinity SDK, it is important to write an efficient SQL stored procedure so that the search list returns results as quickly as possible. To do so, you must adhere to a set of performance principles:

- Choose highly selective filters
- Use covering indexes to retrieve results
- Return only the columns that you absolutely need
- Limit the number of tables referenced by a query

Additionally, consider the following advice when writing search lists: Avoid “or” statements – If you use an OR statement to link filters from different tables, the optimizer cannot use index... Read More
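The "avoid OR across tables" advice can be sketched as follows, using hypothetical tables and columns rather than the real CRM schema. An OR that spans two tables usually defeats index seeks, while the same logic expressed as a UNION lets each branch use its own index:

```sql
-- Harder to optimize: the OR spans two tables, often forcing scans.
SELECT C.ID
FROM CONSTITUENT C
LEFT JOIN EMAILADDRESS E ON E.CONSTITUENTID = C.ID
LEFT JOIN PHONE P ON P.CONSTITUENTID = C.ID
WHERE E.EMAILADDRESS = @FILTER OR P.NUMBER = @FILTER;

-- Often faster: two index-friendly queries combined with UNION
-- (UNION also removes duplicate IDs that match both filters).
SELECT C.ID
FROM CONSTITUENT C
INNER JOIN EMAILADDRESS E ON E.CONSTITUENTID = C.ID
WHERE E.EMAILADDRESS = @FILTER
UNION
SELECT C.ID
FROM CONSTITUENT C
INNER JOIN PHONE P ON P.CONSTITUENTID = C.ID
WHERE P.NUMBER = @FILTER;
```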

Posted on: April 5, 2013

I’m very excited about the release of Blackbaud CRM 3.0. My favorite new feature is the ability to build ad-hoc queries against the Blackbaud Data Warehouse using query views. Query views against BBDW can do most of the things that query views against the Infinity database can do, including:

- Save queries for later re-use
- Export directly from the query results grid
- Create an export process
- Create an export definition
- Create a smart query based on an ad-hoc query
- Create a report using Report Builder
- Save a static selection (note that dynamic selections are not supported at this time)
- Use a static... Read More

Posted on: July 5, 2012

This blog post provides guidance for measuring the growth of a BBCRM database; specifically how to identify the fastest growing tables, the total number of tables in the database, and then how to analyze the data so that you can develop a plan to curb the growth. Identifying Fast Growing Tables: The SQL script at the bottom of this article is written to identify database tables that are greater than 100MB in size (you can adjust the minimum size by changing the WHERE clause). The script’s output will show the table name, number of table rows, reserved space, actual used space, index size, and... Read More
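The original script is not reproduced here, but a minimal sketch of the same kind of size report — tables over 100 MB, largest first — can be built from `sys.dm_db_partition_stats` (pages are 8 KB, hence the `* 8 / 1024` conversion to MB):

```sql
-- List tables using more than 100 MB of reserved space, largest first.
SELECT
    t.name AS TableName,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [RowCount],
    SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
HAVING SUM(ps.reserved_page_count) * 8 / 1024 > 100  -- adjust the threshold here
ORDER BY ReservedMB DESC;
```

Only the heap or clustered index (`index_id` 0 or 1) is counted toward row totals; reserved pages are summed across all indexes so the figure reflects the table's full footprint, including index size.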