Choosing an optimal hash size

The UltraLite default maximum hash size of 4 bytes suits most deployments. You can increase the size to include more data with the row ID. However, doing so can enlarge the index, fragment it across multiple pages, and consequently increase the size of the database. The impact of an increased maximum hash size depends on the number of rows in the table: if a table has only a few rows, for example, a large index hash key still fits on the index page, so no index fragmentation occurs.

When choosing an optimal hash size, consider the data type, the row data, and the database size (especially if a table contains
many rows).

The only way to determine whether you have chosen an optimal hash size is to run benchmark tests against your UltraLite client application on the target device. Observe how different hash sizes affect application and query performance, as well as the size of the database itself.

The data type

If you want to hash the entire value in a column, note the size required by each data type in the table that follows. UltraLite only uses as much of the maximum hash size as it needs, and it never exceeds the maximum hash size you specify: UltraLite always uses a smaller hash size if the column type does not require the full byte limit.

Data type                               Bytes used to hash the entire value
FLOAT, DOUBLE, and REAL                 Not hashed
BIT and TINYINT                         1
SMALLINT and SHORT                      2
INTEGER, LONG, and DATE                 4
DATETIME, TIME, TIMESTAMP, and BIGINT   8
CHAR and VARCHAR                        Declared size of the column (see below)
BINARY                                  Declared size of the column (see below)
UUID                                    16

For CHAR and VARCHAR columns, the maximum hash size in bytes must match the declared size of the column to hash the entire string. In a UTF-8 encoded database, multiply the declared size by 2, up to the allowed maximum of 32 bytes. For example, if you declare a column VARCHAR(10) in a non-UTF-8 encoded database, the required size is 10 bytes; the same column in a UTF-8 encoded database requires 20 bytes to hash the entire string.

For BINARY columns, the maximum hash size in bytes must likewise match the declared size of the column. For example, if you declare a column BINARY(30), the required size is 30 bytes.
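The size requirements above can be summarized in a small sketch. This is an illustrative helper, not part of UltraLite; the function name and defaults are assumptions made for the example.

```python
# Illustrative sketch (not UltraLite internals): bytes needed to hash an
# entire column value, per the data type table above.
TYPE_HASH_BYTES = {
    "BIT": 1, "TINYINT": 1,
    "SMALLINT": 2, "SHORT": 2,
    "INTEGER": 4, "LONG": 4, "DATE": 4,
    "DATETIME": 8, "TIME": 8, "TIMESTAMP": 8, "BIGINT": 8,
    "UUID": 16,
}

MAX_HASH_BYTES = 32  # the allowed maximum hash size


def hash_bytes_required(data_type, declared_size=None, utf8=False):
    """Return bytes needed to hash the entire value, or None if not hashed."""
    if data_type in ("FLOAT", "DOUBLE", "REAL"):
        return None  # floating-point types are not hashed
    if data_type in ("CHAR", "VARCHAR"):
        # Character columns need the declared size; a UTF-8 encoded
        # database doubles it, capped at the 32-byte maximum.
        size = declared_size * (2 if utf8 else 1)
        return min(size, MAX_HASH_BYTES)
    if data_type == "BINARY":
        return declared_size
    return TYPE_HASH_BYTES[data_type]


print(hash_bytes_required("VARCHAR", 10))             # 10
print(hash_bytes_required("VARCHAR", 10, utf8=True))  # 20
print(hash_bytes_required("BINARY", 30))              # 30
```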

For example, suppose you set a maximum hash size of 6 bytes for a two-column composite index declared as INTEGER and BINARY(20). Based on the data type size requirements, the following occurs:

The entire value of the row in the INTEGER column is hashed and stored in the index because only 4 bytes are required to hash
integer data types.

Only the first 2 bytes of the BINARY column are hashed and stored in the index because the first 4 bytes are used by the INTEGER
column. If these remaining 2 bytes do not hash an appropriate amount of the BINARY column, increase the maximum hash size.
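The budgeting described above can be sketched as a simple first-come, first-served allocation over the index columns. This is a hypothetical model of the behavior, not UltraLite's implementation:

```python
# Hypothetical sketch: how the maximum hash size budget is consumed,
# column by column, across a composite index.
def allocate_hash_bytes(column_needs, max_hash_size):
    """column_needs: list of (column, bytes needed to hash the entire value).
    Returns a list of (column, bytes actually hashed)."""
    remaining = max_hash_size
    allocation = []
    for name, needed in column_needs:
        used = min(needed, remaining)  # earlier columns consume the budget first
        allocation.append((name, used))
        remaining -= used
    return allocation


# INTEGER needs 4 bytes; BINARY(20) needs 20 bytes to hash fully.
print(allocate_hash_bytes([("INTEGER", 4), ("BINARY(20)", 20)], 6))
# → [('INTEGER', 4), ('BINARY(20)', 2)]
```

With a 6-byte maximum, the INTEGER column is hashed in full and only 2 bytes remain for the BINARY column, matching the scenario above.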

The row data

The row values of the data being stored in the database also influence the effectiveness of a hashed index.

For example, if entries in a given column share a common prefix, choosing a size that hashes only that prefix renders the hash ineffective. In this case, choose a size that ensures more than just the common prefix is hashed. If the common prefix is long, consider not hashing the values at all.

In cases where a non-unique index stores many duplicate values, and UltraLite cannot hash the entire value, the hash likely
cannot improve performance.
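The common-prefix problem is easy to see with ordinary string slicing. This is an illustrative analogy (the sample values are invented), not UltraLite's hashing code:

```python
# Illustrative: when the hashed bytes cover only a shared prefix, every
# entry produces the same hash key, so the index must fall back to
# comparing the row data itself.
values = ["ORDER-2024-0001", "ORDER-2024-0002", "ORDER-2024-0003"]

short_keys = {v[:8] for v in values}   # covers only the prefix "ORDER-20"
long_keys = {v[:15] for v in values}   # covers the distinguishing suffix

print(len(short_keys))  # 1 distinct key: the hash cannot discriminate rows
print(len(long_keys))   # 3 distinct keys: the hash is effective
```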

The database size

Each index page has some fixed overhead, but most of the page space is used by the actual index entries. A larger hash size makes each index entry bigger, so fewer entries fit on a page. For large tables, indexes with large hashes therefore use more pages than indexes with small or no hashes. Requiring more pages both increases the database size and degrades performance: the cache holds a fixed number of pages, so UltraLite must swap pages in and out.
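A back-of-the-envelope estimate illustrates the effect. The page size, page overhead, and per-entry overhead below are invented placeholder numbers, not UltraLite's actual page layout:

```python
import math

# Rough sketch (assumed numbers, not UltraLite's real page format):
# estimate the index pages needed for a given hash size.
def estimated_index_pages(rows, hash_size, page_size=4096,
                          page_overhead=64, entry_overhead=8):
    """Each entry holds the hash plus fixed overhead (row ID, bookkeeping)."""
    entry_size = hash_size + entry_overhead
    entries_per_page = (page_size - page_overhead) // entry_size
    return math.ceil(rows / entries_per_page)


for h in (0, 4, 8, 32):
    print(h, estimated_index_pages(100_000, h))
```

Running the loop shows the page count growing with the hash size, which is the trade-off the table below quantifies.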

The following table gives you an approximation of how the hash size can affect the number of pages required to store data
in an index: