Mater artium necessitas

Archive for April, 2009

With a view to upsizing my current Access 2007 application, I have been trying to better understand the performance impact of various choices of Primary Keys.

My problem is that the Access application currently uses autoincrement numbers as surrogate Primary Keys (PK).
Since I will need to synchronise the data across multiple remote sites, including occasionally disconnected clients, I can’t keep the current autoincrement PKs and will need to switch to GUIDs.

To see for myself what could be the impact, I made a series of benchmarks.
This first part is fairly simple:

For the table using a GUID, we use the NewSequentialID() instead of NewID() to create new keys. This is supposed to offer much better performance as the generated GUIDs are guaranteed to be sequential rather than random, resulting in better index performance on insertion.
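As a sketch of what such a table definition could look like, the DDL can be sent from Access through a DAO pass-through query. Note that the DSN, table and column names below are assumptions for illustration, not the exact ones used in the tests:

```vba
' Hedged sketch: create a SQL Server test table whose PK defaults to a
' sequential GUID, via a temporary pass-through query.
Public Sub CreateProductGUIDTable()
    Dim qd As DAO.QueryDef
    Set qd = CurrentDb.CreateQueryDef("")   ' temporary, unnamed QueryDef
    qd.Connect = "ODBC;DSN=TestSQLServer"   ' assumed DSN
    qd.SQL = "CREATE TABLE ProductGUID (" & _
             "  ID uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY, " & _
             "  SKU nvarchar(20) NULL, " & _
             "  Description nvarchar(50) NULL)"
    qd.ReturnsRecords = False
    qd.Execute dbFailOnError
End Sub
```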

For the Access version of the tables, we basically use the same definition, except that we used 4 tables:

We perform 1000 transactions, each inserting 1000 records into the ProductGUID or ProductINT table.

Access 2007 Test code

To duplicate the same conditions, the following VBA code will perform 1000 transactions each inserting 1000 records.
Note that the recordset is opened in Append mode only.
The importance of this will be discussed in another article.

' Run this to insert 1,000,000 products in batches of 1000
' in the given table
Public Sub Benchmark(TableName As String, InsertSeqGUID As Boolean)
    Dim i As Integer
    For i = 1 To 1000
        Insert1000Products TableName, InsertSeqGUID
    Next i
End Sub

' Insert 1000 products in a table
Public Sub Insert1000Products(TableName As String, InsertSeqGUID As Boolean)
    Dim i As Long
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim ws As DAO.Workspace
    Dim starttime As Long
    Dim timespan As Long

    Set ws = DBEngine.Workspaces(0)
    DoEvents
    starttime = GetClock() ' Get the current time in ms
    ws.BeginTrans
    Set db = CurrentDb
    Set rs = db.OpenRecordset(TableName, dbOpenDynaset, dbAppendOnly)
    With rs
        For i = 1 To 1000
            .AddNew
            If InsertSeqGUID Then !ID = "{guid {" & CreateStringUUIDSeq() & "}}"
            !SKU = "PROD" & i
            !Description = "Product number " & i
            .Update
        Next i
    End With
    ws.CommitTrans
    rs.Close
    timespan = GetClock() - starttime
    Set rs = Nothing
    Set db = Nothing
    ' Print elapsed time in milliseconds
    Debug.Print timespan
    DoEvents
End Sub

ProductGUIDRandom table: we let Access create the Random GUID for the primary key.

ProductGUIDSequential: we use the Windows API to create a sequential ID that we insert ourselves.
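The actual helper used in the tests is not shown in this article; a hedged reconstruction of what CreateStringUUIDSeq() might look like, using the documented rpcrt4.dll and ole32.dll APIs, is:

```vba
' Sketch of a sequential-GUID helper based on the Windows RPC runtime.
' UuidCreateSequential generates a sequential UUID; StringFromGUID2
' formats it as "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}".
Private Type GUID_T
    Data1 As Long
    Data2 As Integer
    Data3 As Integer
    Data4(0 To 7) As Byte
End Type

Private Declare Function UuidCreateSequential Lib "rpcrt4.dll" (pId As GUID_T) As Long
Private Declare Function StringFromGUID2 Lib "ole32.dll" _
    (pId As GUID_T, ByVal lpsz As Long, ByVal cchMax As Long) As Long

' Returns a sequential GUID as "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Public Function CreateStringUUIDSeq() As String
    Dim g As GUID_T
    Dim buf As String
    UuidCreateSequential g
    buf = String$(40, vbNullChar)          ' room for 38 chars + terminator
    StringFromGUID2 g, StrPtr(buf), 40
    CreateStringUUIDSeq = Mid$(buf, 2, 36) ' strip the surrounding braces
End Function
```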

Test results

Without further ado, here are the raw results, showing the number of inserted records per second achieved in each test as the database grows (only the tests comparing Sequential GUID and autoincrement on SQL Server and Access are shown here; see the next sections for the other results):

What we clearly see here is that performance with both autoincrement and Sequential GUID stays pretty much constant over the whole test.
That’s good news, as it means that using Sequential GUIDs does not degrade performance over time.

As a side note, in this particular test Access offers much better raw performance than SQL Server. In more complex scenarios it’s very likely that Access’ performance would degrade faster than SQL Server’s, but it’s nice to see that Access isn’t a sloth.

Using Sequential GUID vs Autoincrement in Access

The results show that we take a performance hit of about 30% when inserting Sequential GUIDs instead of plain autonumbers.
We’re still getting good results, but that’s something to keep in mind.

In terms of CPU consumption, here is what we get:

Random PKs, whether simple integers or GUIDs, consume substantially more CPU resources.

Using Sequential GUID vs Identity in SQL Server

Out of the box, SQL Server performs quite well, and there is not much difference whether you’re using Sequential GUIDs or autoincrement PKs.

There is however a surprising result: using Sequential GUIDs is actually slightly faster than using autoincrement!

There is obviously an explanation for this but I’m not sure what it is so please enlighten me 🙂

CPU Consumption:

Using Random GUID vs Sequential GUID vs Random Autonumber in Access

So, what is the impact of choosing a Sequential GUID as opposed to letting Access create its own random GUIDs?

It’s clear that random GUIDs have a substantial performance impact: their randomness basically messes up indexing, resulting in the database engine having to do a lot more work to re-index the data on each insertion.
The good thing is that this degradation is pretty logarithmic so while it degrades over time, the overall performance remains pretty decent.
While GUIDs are larger than random integers (16 bytes vs 4 bytes), inserting records whose PK is a random integer actually performs worse than with a random GUID…

Provisional conclusions

Here we’ve established the baseline for our performance tests.
In the next article, we’ll look exclusively at the performance of inserting data from a remote Access 2007 front end using our VBA code.

Having this baseline will allow us to check the performance overhead of using ODBC and letting Jet/ACE manage the dialogue with the SQL Server backend.

Feel free to leave your comments below, especially if you have any resources or information that would be useful.

Updates

I’ve just lost 2 days going completely bananas over a performance issue that I could not explain.

I’ve got this Dell R300 rack server that runs Windows Server 2008 that I dedicate to running IIS and SQL Server 2008, mostly for development purposes.

In my previous blog entry, I was running some benchmarks to compare the performance of Access and SQL Server using INT and GUID keys and getting some strange results.

Here are the results I was getting from inserting large amounts of data in SQL Server:

Machine     | Operating System        | Test without Transaction | Test with Transaction
----------- | ----------------------- | ------------------------ | ---------------------
MacbookPro  | Windows Server 2008 x64 | 324 ms                   | 22 ms
Desktop     | Windows XP              | 172 ms                   | 47 ms
Server      | Windows Server 2008 x64 | 8635 ms!!                | 27 ms

On the server, not using transactions makes the query take more than 8 seconds, at least an order of magnitude slower than it should be!

I initially thought there was something wrong with my server setup, but since I couldn’t find anything, I just spent the day re-installing the OS and SQL Server and applying all patches and updates. The server is now basically brand new: nothing else on the box, no other services, all the power left for SQL Server…

Despair

When I saw the results for the first time after spending my Easter Sunday rebuilding the machine, I felt dread and despair.
The gods were being unfair: it had to be a hardware issue, and it had to be related to either the memory or the hard disk. I couldn’t really understand why, but these were the only things I could see having such an impact on performance.

I started to look in the hardware settings:

And then I noticed this in the Policies tab of the Disk Device Properties:

Moral of the story

If you are getting strange and inconsistent performance results from SQL Server, make sure you check the “Enable advanced performance” option.
Even if you’re not getting obviously strange results, you may simply not be aware of the issue: some operations may just be much slower than they should be.

Before taking your machine apart and re-installing everything on it, check your hardware settings, there may be options made available by the manufacturer or the OS that you’re not aware of…

When you start building an Access application, it’s tempting to just think about today’s problem and not worry at all about the future.
If your application is successful, people will want more out of it and, over time, you’ll be faced with the task of moving the back-end database to a more robust system like SQL Server.

Naming conventions

Access is pretty liberal about naming conventions and will let you freely name your tables, columns, indexes and queries.
When these get moved to another database you’ll most probably be faced with having to rename them.
In some cases, you could actually create subtle bugs because something that used to work fine in Access may be tolerated in the new database but be interpreted differently.

Do not use spaces or special characters in your data object names.
Stick to characters in the range A through Z, 0 to 9 with maybe underscores _ somewhere in between (but not at the start or the end).
Also try to respect casing wherever you reference a name (especially for databases like MySQL, which are case-sensitive when hosted on a Linux platform for instance).
For example: Customer Order Lines (archive) should become CustomerOrderLines_Archive; Query for last Year's Turnover should become QueryLastYearTurnover; the index ID+OrderDate should become ID_OrderDate.

Do not use keywords that are reserved or might mean something else whether they are SQL keywords or functions names:
A column called Date could be renamed PurchaseDate for instance.
Similarly, OrderBy could be renamed SortBy or PurchaseBy instead, depending on the context of Order.
Failing to do so may not generate errors but could result in weird and difficult-to-debug behaviour.

Do not prefix tables with Sys, USys, MSys or a tilde ~.
Access has its own internal system tables starting with these prefixes and it’s best to stay away from these.
When a table is deleted, Access will often keep it around temporarily and it will have a tilde as its prefix.

Do not prefix Queries with a tilde ~.
Access uses the tilde to prefix the hidden queries it keeps internally as record sources for controls and forms.

Database design

Always use Primary keys.
Always have a non-null primary key column in every table.
All my tables have an autonumber column called ID. Using an automatically generated column ID guarantees that each record in a table can be uniquely identified.
It’s a painless way to ensure a minimum level of data integrity.

Do not use complex multivalue columns.
Access 2007 introduced complex columns that can record multiple values.
They are in fact fields that return whole recordset objects instead of simple scalar values. Of course, this being an Access 2007 only feature, it’s not compatible with any other database.
Just don’t use it, however tempting and convenient it might be.
Instead, use a junction table to record Many-To-Many relationships between 2 tables, or use a simple lookup to record lists of choices in a text field if you’re only dealing with a very limited range of values that do not change.
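As an illustration, the junction-table approach can be as simple as the following (the table and column names are made up for the example):

```vba
' Hedged sketch: a classic junction table instead of a multivalue column.
' ProductCategory links Products and Categories; its composite PK prevents
' recording the same pairing twice.
Public Sub CreateJunctionTable()
    CurrentDb.Execute _
        "CREATE TABLE ProductCategory (" & _
        "  ProductID LONG NOT NULL, " & _
        "  CategoryID LONG NOT NULL, " & _
        "  CONSTRAINT PK_ProductCategory PRIMARY KEY (ProductID, CategoryID))", _
        dbFailOnError
End Sub
```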

Do not use the Hyperlink data type.
Another Access exclusive that isn’t available in other databases.

Be careful about field lookups.
When you create Table columns, Access allows you to define lookup values from other tables or lists of values.
If you manually input a list of values to be presented to the user, these won’t get transferred when upsizing to SQL Server.
To avoid having to maintain these lookup lists all over your app, you could create small tables for them and use them as lookup instead; that way you only need to maintain a single list of lookup values.

Be careful about your dates.
The Access date range is much larger than SQL Server’s: Access accepts dates from year 100 to 9999, while SQL Server’s datetime type only starts at 1753.
This has 2 side-effects:
1) if your software has to deal with dates outside that range, you’ll end up with errors.
2) if your users enter dates manually, they could make mistakes when typing the year (like 09 instead of 2009).
Ensure that user-entered dates are valid for your application.
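A quick guard before saving user input could look like this (the helper name is mine, not from any library):

```vba
' SQL Server's datetime type only accepts dates from 1 Jan 1753 onwards,
' while Access accepts dates back to year 100. Validate before saving.
Public Function IsValidSqlServerDate(ByVal d As Date) As Boolean
    IsValidSqlServerDate = (d >= #1/1/1753# And d <= #12/31/9999#)
End Function
```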

VBA

While most of your code will work fine, there are a few traps that will bomb your application or result in weird errors:

Always explicitly specify options when opening recordsets or executing SQL.
With SQL Server, dbSeeChanges is mandatory whenever you open a recordset for update.
I recommend using dbFailOnError as well, as it ensures that changes are rolled back if an error occurs.
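In practice, the flags look like this ("Product" is an assumed linked SQL Server table name used only for illustration):

```vba
' Illustrative sketch: dbSeeChanges is mandatory when opening an updatable
' recordset on a SQL Server linked table; dbFailOnError rolls back on failure.
Public Sub UpdateLinkedTable()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb
    Set rs = db.OpenRecordset("Product", dbOpenDynaset, dbSeeChanges + dbFailOnError)
    ' The same flags apply when executing SQL directly:
    db.Execute "UPDATE Product SET SKU = 'PROD1' WHERE ID = 1", dbSeeChanges + dbFailOnError
    rs.Close
End Sub
```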

Get the new autonumbered ID after updating the record.
In Access, autonumbered fields are set as soon as the record is added even if it hasn’t been saved yet.
That doesn’t work for SQL Server as autonumbered IDs are only visible after the records have been saved.

' Works for Access tables only
' We can get the new autonumber ID as soon as the record is inserted
rs.AddNew
mynewid = rs!ID
...
rs.Update
' Works for ODBC and Access tables alike
' We get the new autonumber ID after the record has been updated
rs.AddNew
...
rs.Update
rs.Move 0, rs.LastModified
mynewid = rs!ID

Never rely on the type of your primary key.
This is more of a recommendation, but if you use an autonumbered ID as your primary key, don’t rely in your code or your queries on the fact that it is a long integer.
This can become important if you ever need to upsize to a replicated database and need to transform your number IDs into GUID.
Just use a Variant instead.

Parting thoughts

These simple rules will not solve all your problems, but they will certainly reduce the number of issues you’ll face when upsizing your Access application.
Using a tool like SSMA to upsize will then be fairly painless.

If you have other recommendations, please don’t hesitate to leave them in the comments; I’ll regularly update this article to include them.