Shared Database Proposal

NSS has been using an old version of the Berkeley DataBase as its database engine since Netscape Navigator 2.0 in 1994. This database engine is commonly described in NSS documents as "DBM" and has a
number of limitations. One of the most severe limitations concerns the
number of processes that may share a database file. While any process has
a DBM file open for writing, NO other process may access it in any way.
Multiple processes may share a DBM database ONLY if they ALL access it
READ-ONLY. Processes cannot share a DBM database file if ANY of them wants
to update it.

This limitation has been cumbersome for applications that wish to use NSS.
Applications that want to share databases have resorted to these strategies:

Synchronized updates, with application downtime: The applications share the database read-only. If any update is desired, all the applications are shut down, a database update program performs the update, and then all the applications are restarted in read-only mode. Some server products, for example, have an administration program that stops the servers, updates the database that they share, and then restarts the servers. This results in undesirable downtime, and desired database changes are delayed until the next interval in which such downtime is acceptable.

Multiple copies with duplicated updates: Each application keeps its own copy of its databases, and applications communicate their changes to each other, so that each application may apply received changes to its own database. Firefox and Thunderbird are examples of this. When one of those applications gets a new certificate and private key, the user may "export" that pair to a PKCS #12 file, and then import that file into the other application. Most users never master these steps, and so have databases entirely out of sync.

These workarounds for the DBM engine's limitations are sufficiently onerous
that they prevent many applications from adopting NSS. The desire to make
NSS more ubiquitous now motivates the elimination of these limitations. There is a strong desire to make NSS the native OS network security service for Linux.

In 2001 NSS was modified to enable applications to supply their own database engines. Applications could share a common database if they supplied their
own shareable database implementation, and configured NSS to use it.

Today, there exists a process-level, ACID, open source, and widely available database engine with which multiple processes may simultaneously have read and write access to a shared database. It is named SQLite. The NSS team proposes to use this database engine to give all NSS-based applications shared database access.

The new database design must also address several other needs:

A more flexible schema that can store meta-information about certificates and keys, such as finer-grained information about trust for certificates.

The need to match the underlying certificate and key storage with its reflection into NSS (that is, PKCS #11).

To satisfy the FIPS requirements for integrity checks on keys and trust objects, NSS must be able to store integrity information in the databases.

Where we are today

At initialization time, the application gives NSS a string that it uses as the pathname of a directory to store NSS's security and configuration data. NSS typically stores 3 DBM files in that directory: cert8.db, key3.db, and secmod.db.

If it has very large security objects (such as large CRLs), NSS will store them in files in a subdirectory named cert8.dir. (Yes, really!)

If the cert8.db and/or key3.db files are missing, NSS will read data from older versions of those databases (e.g., cert7.db, cert5.db, if they exist) and may build new cert8.db and/or key3.db files with that data (upgrade).

These files are all accessed exclusively by the softoken shared library, making it the only NSS library that must be linked with libdbm.

The application-supplied database feature

If the initialization string given to NSS starts with 'multiaccess:', NSS does not use it as a directory pathname. Instead, NSS parses the string as follows:

multiaccess:appName[:directory]

Where:

multiaccess is a keyword.

appName uniquely identifies a group of applications which share an application-supplied database, effectively a new database name.

directory is the pathname of a directory containing NSS DBM databases whose contents will be used to update the application-supplied database during NSS initialization.

In the presence of a multiaccess initialization string, during initialization
NSS will try to find a shared library named librdb.so (rdb.dll on Windows) in its path and load it. This shared library is expected to implement a superset of the old DBM interface. The main entry point is rdbopen, which will be passed the appName, database name, and open flags. The rdb shared library will pick a location or method to store the database (it may not necessarily be a file), then handle the raw database records from NSS. The records passed to and from this library use exactly the same schema and record formats as the records in the DBM library.

The proposal

We propose to replace key3.db and cert8.db with new SQL databases called key4.db and cert9.db. These new databases will store PKCS #11 token objects, the same types of objects whose contents are currently stored in cert8.db and key3.db.

Optionally the new databases could be combined into a single database, cert9.db, where private and public objects are stored in separate tables. Softoken would automatically identify any cert9.db that also has an embedded key store and open that key store up, instead of opening a separate key4.db and cert9.db. However, in the first release, there will be no way to create a cert9.db containing both cert and key tables.

The new databases will be called 'shareable' databases. They may or may not be shared by multiple processes, but they are all capable of being shared.

Schema

The schema for the database will be simple.

Each row will represent an object.

The row schema will contain the Object ID and the list of known Attribute Types.

Newer versions of NSS may add new attribute types on the fly as necessary (extending the schema).

The Attribute values will be stored as binary blobs.

Attributes that represent CK_ULONG values will be stored as 32-bit values in network byte order.

For all other attributes, byte order is already specified by PKCS #11.

Private attributes will be encrypted with a PKCS #5 PBE in the same way the pkcs8 private and secret key data is encrypted today.

Softoken will only set those attributes appropriate for the given object. If an attribute is not appropriate, it will be left blank. (Note that sqlite does not distinguish between a NULL attribute and an empty one; this will be handled by storing a special value which means 'NULL' when writing a NULL record.)

Integrity will be maintained by a PBE-based MAC on critical attributes.

Other data that is necessary for the proper operation of softoken, but that is not defined as part of any PKCS#11 objects, (such as data used to verify token user passwords), will be stored in separate tables in the database with their own schemas.

Database extension will be accomplished in 2 ways:

New attribute types can augment the already-implemented attribute types for objects already implemented in softoken. Attributes of these new types can be added to older database objects, which will be detected because they will have 'invalid' values for these attributes. For example, we could add a new attribute type to hold additional extensions for certificate objects.

Define new PKCS #11 object types. For example, we could add new objects to store mappings between certificates and pairs of cipher suites and host names.

Softoken will be able to store the following objects and attributes. In the table below, attributes marked CK_ULONG will be written to the database as 32-bit network byte order unsigned integers. Attributes marked 'encrypted' will be encrypted with the token's PBE key, and attributes marked 'MACed' will be MACed with the token's PBE key.

Legal Attributes and objects

While the key and certificate database format is extensible, the initial implementation must understand a particular subset of attributes. The following attribute types will be understood; any special coding conditions are noted alongside each type.

Stored in the key database:

CKO_PRIVATE_KEY

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_ID

CKA_START_DATE

CKA_END_DATE

CKA_DERIVE

CKA_LOCAL

CKA_KEY_TYPE - CK_ULONG

CKA_DECRYPT

CKA_SIGN

CKA_SIGN_RECOVER

CKA_UNWRAP

CKA_SUBJECT

CKA_SENSITIVE

CKA_EXTRACTABLE

CKA_NSS_DB - (allowed, but not required)

CKA_MODULUS - RSA only - MACed

CKA_PUBLIC_EXPONENT - RSA only - MACed

CKA_PRIVATE_EXPONENT - RSA only - encrypted

CKA_PRIME_1 - RSA only - encrypted

CKA_PRIME_2 - RSA only - encrypted

CKA_EXPONENT_1 - RSA only - encrypted

CKA_EXPONENT_2 - RSA only - encrypted

CKA_COEFFICIENT - RSA only - encrypted

CKA_SUBPRIME - DSA only - MACed

CKA_PRIME - DH, DSA only - MACed

CKA_BASE - DH, DSA only - MACed

CKA_VALUE - DH, DSA, and ECC only - encrypted

CKA_EC_PARAMS - ECC only - MACed

CKO_SECRET_KEY

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_ID

CKA_START_DATE

CKA_END_DATE

CKA_DERIVE

CKA_LOCAL

CKA_KEY_TYPE - CK_ULONG

CKA_SENSITIVE

CKA_EXTRACTABLE

CKA_ENCRYPT

CKA_DECRYPT

CKA_SIGN

CKA_VERIFY

CKA_WRAP

CKA_UNWRAP

CKA_VALUE - encrypted

Stored in the cert database:

CKO_PUBLIC_KEY

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_ID

CKA_START_DATE

CKA_END_DATE

CKA_DERIVE

CKA_LOCAL

CKA_KEY_TYPE - CK_ULONG

CKA_ENCRYPT

CKA_VERIFY

CKA_VERIFY_RECOVER

CKA_WRAP

CKA_SUBJECT

CKA_MODULUS - RSA only - MACed

CKA_PUBLIC_EXPONENT - RSA only - MACed

CKA_SUBPRIME - DSA only - MACed

CKA_PRIME - DH, DSA only - MACed

CKA_BASE - DH, DSA only - MACed

CKA_VALUE - DH, DSA, ECC only - MACed

CKA_EC_PARAMS - ECC only - MACed

CKO_CERTIFICATE

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_CERTIFICATE_TYPE - CK_ULONG

CKA_VALUE

CKA_SUBJECT

CKA_ISSUER

CKA_SERIAL_NUMBER

CKA_NSS_OVERRIDE_EXTENSIONS (sql database only) - MACed

CKO_NSS_TRUST

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_ISSUER - MACed

CKA_SERIAL_NUMBER - MACed

CKA_CERT_SHA1_HASH - MACed

CKA_CERT_MD5_HASH - MACed

CKA_TRUST_SERVER_AUTH - CK_ULONG - MACed

CKA_TRUST_CLIENT_AUTH - CK_ULONG - MACed

CKA_TRUST_EMAIL_PROTECTION - CK_ULONG - MACed

CKA_TRUST_CODE_SIGNING - CK_ULONG - MACed

CKA_TRUST_STEP_UP_APPROVED - MACed

CKO_NSS_CRL

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_SUBJECT

CKA_VALUE

CKA_NETSCAPE_URL

CKA_NETSCAPE_KRL

CKO_NSS_SMIME

CKA_CLASS - CK_ULONG

CKA_TOKEN

CKA_PRIVATE

CKA_LABEL

CKA_MODIFIABLE

CKA_SUBJECT

CKA_NETSCAPE_EMAIL

CKA_NETSCAPE_SMIME_TIMESTAMP

CKA_VALUE

Special coding for CK_ULONG

All CK_ULONG values are encoded as 4 byte values, most significant byte first.

Special coding for encrypted entries

Encrypted entries are stored in the database as PKCS #5 encoded blobs.

SEQUENCE {
AlgorithmID algorithm,
OctetString encryptedData
};

The algorithm parameter must be a valid PKCS #5 or PKCS #12 PBE OID and contain the appropriate salt value. NSS understands the following PBE OIDs:

SEC_OID_PKCS12_PBE_WITH_SHA1_AND_TRIPLE_DES_CBC

SEC_OID_PKCS12_V2_PBE_WITH_SHA1_AND_2KEY_TRIPLE_DES_CBC

SEC_OID_PKCS12_V2_PBE_WITH_SHA1_AND_3KEY_TRIPLE_DES_CBC

SEC_OID_PKCS5_PBE_WITH_MD2_AND_DES_CBC

SEC_OID_PKCS5_PBE_WITH_MD5_AND_DES_CBC

SEC_OID_PKCS5_PBE_WITH_SHA1_AND_DES_CBC

SEC_OID_PKCS12_V2_PBE_WITH_SHA1_AND_128_BIT_RC2_CBC

SEC_OID_PKCS12_V2_PBE_WITH_SHA1_AND_40_BIT_RC2_CBC

SEC_OID_PKCS12_PBE_WITH_SHA1_AND_128_BIT_RC2_CBC

SEC_OID_PKCS12_PBE_WITH_SHA1_AND_40_BIT_RC2_CBC

SEC_OID_PKCS12_PBE_WITH_SHA1_AND_128_BIT_RC4

SEC_OID_PKCS5_PBES2 (actual encryption and PRF algorithms are stored in the parameters)

Valid Encryption Algorithms

SEC_OID_AES_128_CBC

SEC_OID_AES_192_CBC

SEC_OID_AES_256_CBC

Valid PRF Algorithms

SEC_OID_HMAC_SHA_1

SEC_OID_HMAC_SHA256

SEC_OID_HMAC_SHA384

SEC_OID_HMAC_SHA512

SEC_OID_PKCS5_PBMAC1 (actual HMAC and PRF algorithms are stored in the parameters)

Valid HMAC and PRF Algorithms

SEC_OID_HMAC_SHA_1

SEC_OID_HMAC_SHA256

SEC_OID_HMAC_SHA384

SEC_OID_HMAC_SHA512

The base key used in the PBE is the token password hashed with the token's global salt, which is stored in the password entry. In FIPS mode, this resulting key is further transformed by treating it as a value 'x' and using x to raise the generator 'g' to the 'x' power modulo 'p'. The final result is the token key. g and p are stored, with the base user's protection, in the filesystem in the same directory as the certificate and key databases. g and p should be considered 'secret' values.

Purpose: by choosing an appropriate g and p, we can control the rate of password attempts from multiple processes. By keeping g and p secret, we remove our reliance on the security of the PBE alone: the security of the password is at least as strong as the security of g and p.

This will be implemented by checking for the g/p file in the database directory. If no g/p file exists, key processing continues as normal. If the g/p file does exist, it is used in key formation. If a new key is being created, then g/p is created and used in FIPS mode, and neither created nor used otherwise. This way, passwords continue to work even after switching modes.

Special coding for MACed entries

Some specialized entries in the database are MACed, either for FIPS protection or to protect trust integrity. MACed entries are only processed if the user has logged into the token; if the user has not logged in, the MAC is not used.

MACs are not processed for the legacy database.

Entries which require MACs require that the user be logged into the token to write or modify the entry. On readback, they are not checked unless the user is logged in.

Tokens which have no key database (and therefore no master password) do not
have any stored MACs.

Attributes that are MACed are:

Trust object hashes and trust values

public key values.

Certs themselves are considered properly authenticated by virtue of their
signature, or their matching hash with the trust object.

MACs are formed by concatenating the object ID (encoded as a database ULONG) with the attribute type (also encoded as a ULONG) and with the attribute data. This concatenation is fed into a SHA-1 HMAC keyed with the token key described in 'Special coding for encrypted entries'. The resulting MAC is stored in a special table in the key database called the
'metadata' table. This table could store other types of metadata as well in the
future. Currently, the password check is also stored in the metadata table.

Integrity entries are indexed by the string

sig_[cert/key]_{ObjectID}_{Attribute}

Where cert/key indicates which logical database the actual object is stored in,
{ObjectID} is the id of the object, and {Attribute} is the attribute this
integrity entry is for.

Entries are checked when the user tries to get an attribute. Once the attributes
are fetched, GetAttributes loops through them looking for attributes
that require integrity checks. When such an attribute is found, the integrity
check entry is fetched and an HMAC is calculated for the attribute. If the
attribute matches the integrity entry, processing proceeds. If it does not, the
attribute data is cleared and CKR_SIGNATURE_INVALID is returned.

MACs are PBMAC1 data structures defined in PKCS #5 v2.0. Since PKCS #5 v1
does not have integrity checks, and PKCS #12 has no definition for storing
purely MAC data, the shared database integrity checks use PKCS #5 v2 to store the
PBE and MAC data.

Database coherency issues

In our previous database, we had issues with database corruption that resulted in hard-to-diagnose problems. To mitigate that, this section analyzes how various forms of corruption can affect the new database design, and possible ways of repairing that corruption in the field.

Hidden Meta-data records

The most obvious corruption issues surround data records that are not directly visible to the application, or to NSS itself (beyond softoken). In the previous design, these included records such as the SubjectList records, which kept track of all the certs with a given subject. In this design, we have the following meta-data records:

g/p files

password entries

MAC data

g/p files only exist in FIPS mode. Loss of a g/p file is unrecoverable: all private data and MAC data become inaccessible. This means all private and secret keys will be lost. Lost MAC data would have to be regenerated, which would require a special tool that modifies the sqlite database directly.

Loss of the password entries would be almost as devastating. If there are any keys or MAC data, it is theoretically possible to verify a password using one of those entries, and from that password the password entry could be regenerated. This would require a special tool that modifies the sqlite database directly.

Loss of MAC data would interfere with the database's ability to validate certificates. NOTE: this only affects the database's ability to validate certificates while the user is logged in. Lost MAC data would have to be regenerated by walking the database and creating fresh MACs.

Linked records

Since each record represents a PKCS #11 object, the current database design is significantly less reliant on two related records being self-consistent. There are two areas, however, where linkage exists: CKA_IDs and CKO_NSS_SMIME objects.

CKA_IDs link certificates with their related private keys; this is a PKCS #11-specified linkage. CKA_IDs are generated by NSS when keys are created: NSS sets the value to a hash related to the public key of the key pair. If this value is corrupted in the private key, NSS will no longer be able to find that private key. A tool at the PKCS #11 level could repair such damage. If the CKA_ID of a certificate is corrupted, NSS will stop recognizing the certificate as a user certificate, and it will be unable to find the key associated with it. This can also be repaired by a PKCS #11 level tool. A corrupted CKA_ID can also be repaired by deleting and reimporting the certificate. Both cases can be repaired by reimporting a PKCS #12 file containing the certificate and key.

If the private key is deleted but the corresponding public key is not, NSS may be confused and think that the private key exists for the certificate. Conversely, if the public key is deleted but the private key is not, NSS may be confused and think the private key does not exist for the certificate. Both of these cases behave correctly if the token is logged in. This kind of corruption can be repaired by reimporting the PKCS #12 file, or by a PKCS #11 level tool that deletes or restores the public key.

A CKO_NSS_SMIME object holds the email address and subject of an S/MIME certificate, together with the S/MIME profile. In the legacy database, multiple email addresses could hold the same profile data and certificate subject, but only one S/MIME profile and subject were allowed for each email address. In the new database, multiple independent email records can exist for the same email address. While S/MIME will function without CKO_NSS_SMIME objects, certificates that verify multiple email addresses cannot be found by those addresses (other than the 'primary' address) without CKO_NSS_SMIME objects. If S/MIME records are corrupted, the certificates will not be findable for the other email addresses. Unlike in the legacy database, these records are not destroyed by S/MIME records for other certificates with the same email address, so if you have multiple certificates with the same email address, all of those certificates can be found. This corruption can be repaired with a PKCS #11 level tool.

Accessing the shareable Database

In order to maintain binary compatibility, the following keywords will be understood and used by softoken.

multiaccess:appName[:directory] works as it does today, including using the cert8/key3 record version.

dbm:directory opens an existing non-shareable libdbm version 8 database.

sql:directory1[:directory2] opens a shareable database, cert9.db (and key4.db), in directory1 if cert9.db exists. If the database does not exist, then directory2 is searched for a libdbm cert8.db and key3.db. If directory2 is not supplied, directory1 is searched.

extern:directory opens an sql-like database by loading an external module, à la rdb and multiaccess:. This option will not be implemented in the initial release, but the extern: keyword is reserved for future use.

Plain directory spec: for binary compatibility, a plain directory spec is treated the same as dbm:directory unless overridden with the NSS_DEFAULT_DB_TYPE environment variable. Applications will not need to change for this release of NSS (particularly unfriendly applications that want to tinker with the actual database files). Users can force older applications to share the database by setting the environment variable. The environment variable only affects untagged directories.

When accessing dbm: and multiaccess: directories, an external shared library is loaded which knows how to handle these legacy databases. This allows us to move much of the current mapping code into that shared library.

Secmod.db

In the dbm: and multiaccess: cases, there will be no changes to secmod.db.

In the sql: case, a new directory will be used, containing flat text files in the format specified by the PKCS #11 working group but not yet included in any spec. This directory will be opened, locked, used, then closed (much like the current secmod.db). The directory will live as a subdirectory of the directory that holds cert9.db/key4.db. Because it is a directory of flat files, the sqlite database is not used to access these records. The file name should become pkcs11.txt.

User App Initialization and System App Initialization

One of the goals of making a shareable database version of NSS is to create a 'system crypto library' in which applications will automatically share database and configuration settings. In order for this to work, applications need to be able to open NSS databases from standard locations.

This design assumes that new NSS init functions will be defined for applications that want to do 'standard user initialization', rather than building special knowledge into softoken or the database model. Note: this differs from the 2001 design and an earlier prototype shareable database, where the database code knew the location of the shareable database.

Database Upgrade

NSS has traditionally performed automatic updates when moving to new database formats. If NSS cannot find a database that matches its current database type, it looks for older versions of its databases and automatically updates them to the new database version. In these cases, database upgrade is automatic and mandatory for all applications.

In the shareable database design, upgrade is no longer mandatory. Applications
may choose to continue to use the old DBM database, to update from old DBM databases to the new shareable database, or to update and merge old DBM databases into a new location shareable by multiple apps. There is still a desire for this update to be automatic, at least as far as the application user is concerned. The following sections describe how NSS deals with update in different applications, and what the different applications must do to get the correct update behavior.

To understand the issues of migration to the shareable database version of NSS from the traditional (legacy) versions, we group applications that use the new version of NSS into three 'modes' of operation and two 'types', for a total of five valid combinations (Mode 1 Type B is not valid).

Mode 1

Mode 1: Legacy applications which formerly used DBM databases and upgrade to the new version of NSS without making any changes to the applications' code, or applications that chose to continue to use the DBM database.

These applications will continue to use the legacy database support and the
old DBM database format. The applications cannot take advantage of new features
in the shareable database. In this Mode, the nssdbm3 shared library must be
present. No update from legacy DBM to sharable is needed in this mode.

Mode 2

Mode 2: Applications that use the new shareable database engine, but choose not to share copies of their cert and key stores, or applications which would prefer to merge databases as a separate step. They may or may not have existing legacy DBM databases from older versions of those applications. (Some servers might be like this.) Typically users of these applications are aware of the NSS databases and the locations of these databases.

These applications use the new shareable database engine. The first time the new application version runs, when NSS first creates the shareable databases, NSS will automatically detect instances of old DBM databases and will upgrade those legacy databases to the new shareable ones without user interaction. This is similar to the traditional NSS database update found in previous versions of NSS.

Mode 3

Mode 3: Applications that intend to share their keys and certs with other applications (the common case - browsers, mail clients, secure shells, vpns, etc.) and which the users typically have little or no awareness of what the NSS databases are and where they might be.

To achieve that sharing, these applications must share a single common set of databases. If older versions of these applications created legacy DBM databases, those legacy databases must be merged. To perform such a merge, NSS will need some extra support from the application, and possibly user intervention (in the form of password authentication) as well.

Type A

Type A applications are new versions of applications that existed before NSS supported sharable databases. They have existing legacy NSS databases. The new versions of these applications have been upgraded to the new NSS that supports shareable databases. If they intend to share the contents of those old databases, they need to merge the old database contents into the new ones.

All Mode 1 applications are Type A and need nssdbm3 at all times. Mode 2 and Mode 3 applications of Type A need nssdbm3 to upgrade from the old legacy DBM databases to the new shareable databases, and Mode 3 Type A applications additionally need it to merge data from legacy DBM databases into shareable ones. Apart from such upgrades and merges, Type A applications in Modes 2 and 3 do not need nssdbm3.

Type B

Type B applications are applications that never used any old version of NSS that supported only DBM databases. All NSS databases used by Type B applications are sharable databases. There are no legacy DBM databases for Type B applications. All Type B applications are either Mode 2 or Mode 3.
Type B applications do no database upgrades, and do not need nssdbm3.

How Applications Use Upgrade

NOTE: While database upgrade may involve a merge (Mode 3), database upgrade is not merging. See the section on how to manage merging databases.

Mode 2A

Mode 2A applications can continue to call the traditional NSS_Initialize() functions. They should, however, prepend the string "sql:" to the directory path passed to NSS in the configdir parameter. If the sql databases do not exist, NSS will automatically update any old DBM databases in the config directory to shareable databases. As with the upgrade from cert7 to cert8, if the update does not work, the app will open and use the old DBM database. Upgrade will not happen if:

NSS is opened readOnly.

NSS_Initialization fails.

The database is password protected and the user never logs into the token during the lifetime of the application.

Applications can avoid that third failure case by forcing the user to authenticate to softoken using PK11_Authenticate().

Mode 3A

Mode 3A Applications are the most complicated. NSS provides some services to help applications get through an update and merge with the least interaction with the user of the application. The steps a Mode 3A application should use whenever initializing NSS are listed below.

Step 0: Preparation. Collect the directory and prefix names of both the source and target databases, and prepare two strings for the operation:

A string to uniquely identify the source database, for the purpose of
avoiding a repeat of this merge (making the merge idempotent). This
string could be derived from the name of the application that used the
source database, from any application "instance" names (such as profile
names), from the absolute path name of the source database directory and the
database prefixes, and from the last modification time of the source databases.

The algorithm for deriving this string should always produce the same
result for the same set of source files, so that the code can detect a
second or subsequent attempt to merge the same source file into the
destination file.

Note: The purpose of this string is to prevent multiple updates from the same old database. This merge sequence is meant to be sufficiently light weight that applications can safely call it each time they initialize.

A string that will be the name of the removable PKCS#11 token that
will represent the source database. This string must follow the rules for a
valid token name and must not contain any colon (:) characters.

Step 2: Determine whether a merge is necessary. If a merge is necessary,
NSS will set the slot to a 'removable slot'; you can use PK11_IsRemovable to test for this.
test for this.

If the database slot token is not removable, then no update/merge is necessary; go to step 7.

(optional) If PK11_NeedLogin() is not true, then NSS has already completed the merge for you (no passwords were needed); skip to step 7.

Otherwise it is necessary to authenticate to the source token, at step 3 below.

Step 3. Authenticate to the source token. The substeps are:

(optional) Call PK11_GetTokenName to get the name of the token. With
that name, you can be sure that you are authenticating to the source token. Skipping this step is not harmful; it is only necessary if the application or user absolutely needs to know which token the following PK11_Authenticate() will be called on (for instance, if pwArg contains the actual password for the token). For some NSS applications, the underlying password prompt system will properly disambiguate the appropriate password to the user (or its password cache).

If the token name does not match the source token name, skip to step 5.

Otherwise proceed to the next substep.

Call PK11_Authenticate() to authenticate to the source token. This
step is likely to call the application-supplied PKCS11 password callback
function to retrieve the password.

If this step fails: stop. A Failure at this point is described below as "Exception A".

Otherwise, continue with step 4.

Step 4. Determine if it is necessary to authenticate to the target database.
This is done by calling PK11_IsLoggedIn for the database slot.

If the function indicates that the database token is NOT logged in, then it is necessary to authenticate to the target database, with the step 5 below.

(optional) Otherwise skip down to step 7.

Step 5. Call PK11_IsPresent(). You may think of this step as telling
you whether the removable source token has been removed and the target token
has been inserted into the database slot. In reality, this call makes those
things happen. After this call succeeds, the token name should be that
of the target token (see the next step).

If this fails, stop. (If this call indicates that the token is NOT present, something is fundamentally wrong in the NSS softoken engine. Applications should treat this the same as an NSS initialization failure.)

Otherwise continue to step 6.

Step 6: Call PK11_Authenticate to authenticate to the target token.
This step is likely to call the application-supplied PKCS11 password
callback function to retrieve the password.

If this step fails: stop. A failure at this point is described below as "Exception B".

Otherwise, continue with step 7.

Step 7. SUCCESS! You have successfully performed the merge. Future calls using the same unique identifier will signal that the merge is not necessary, skipping to here from step 2 above. At this point NSS is fully enabled and the application can start making NSS calls as normal.
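The step sequence above can be outlined in C. This is a minimal sketch, not production code: it assumes NSS has already been initialized with the merge-style init call (step 1), that `slot` is the database slot, and that the application knows the source token's name; the helper name `complete_merge` and its argument names are illustrative.

```c
#include <string.h>
#include "pk11pub.h"

/* Sketch of steps 2-7. Returns SECSuccess once the merge is done
 * (or was never needed); SECFailure maps to Exception A or B. */
SECStatus
complete_merge(PK11SlotInfo *slot, void *pwArg, const char *sourceTokenName)
{
    /* Step 2: a non-removable slot means no update/merge is pending. */
    if (!PK11_IsRemovable(slot))
        return SECSuccess;                     /* step 7 */
    if (!PK11_NeedLogin(slot))
        return SECSuccess;                     /* NSS merged without passwords */

    /* Step 3: authenticate to the source token, if it is still present. */
    if (strcmp(PK11_GetTokenName(slot), sourceTokenName) == 0) {
        if (PK11_Authenticate(slot, PR_TRUE, pwArg) != SECSuccess)
            return SECFailure;                 /* Exception A */
        /* Step 4: is the target database already logged in? */
        if (PK11_IsLoggedIn(slot, pwArg))
            return SECSuccess;                 /* step 7 */
    }

    /* Step 5: this call swaps the target token into the database slot. */
    if (!PK11_IsPresent(slot))
        return SECFailure;                     /* treat as an init failure */

    /* Step 6: authenticate to the target token. */
    if (PK11_Authenticate(slot, PR_TRUE, pwArg) != SECSuccess)
        return SECFailure;                     /* Exception B */

    return SECSuccess;                         /* step 7 */
}
```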

Failures and recovery

Exception A. Failure to authenticate to the source database

The application needs to decide what happens if the legacy password
is not supplied. The application can choose to:

continue to use the legacy database and try to update later. (Probably a future restart of the application).

reset the legacy database password, discarding any private or secret keys in the old database.

shutdown NSS and initialize it only with the new shareable database.

The exact strategy for recovering is application dependent and depends on factors such as:

the sensitivity of the application to losing key data.

possible input from the user.

the likelihood that the password will ever be recovered.

Exception B. Failure to authenticate to the target database

The application needs to decide what happens if the new shareable database
password is not supplied. The application can choose to:

continue to use the legacy database and try to update later.

force NSS to update those objects it can from the legacy database, throwing away private keys, saved passwords, and trust information from the legacy database.

1. The actual merge may take place during step 1, step 3, or
step 6; that is, during the call to NSS_InitWithMerge or during either
of the calls to PK11_Authenticate(). This will depend on the ability of
the code to open the necessary databases, the presence or absence of passwords
on the databases, and, if both have passwords, on whether those passwords are
the same or different, and on when the
authentication attempts, if any, succeed. The system tries to complete the
merge as soon as it is able, to increase the likelihood that the merge update actually completes. The API does not make it
possible to predict accurately which step will actually perform the
merge; the application can only follow the steps. Since multiple calls to PK11_Authenticate() do no harm, the application can simply follow each step in order and fail only on bad returns from PK11_Authenticate(). (Once the merge is complete, PK11_Authenticate() returns without prompting, so applications that just need the merge done, without caring which step performs it, may simply run through all the steps.)

2. If the attempt to open the
source database fails for any reason, the operation behaves as if the
source database were empty. It records the unique source database identifier
string in the target database and acts as if the merge is complete. This is similar to what happens in all previous versions of NSS during database update. See "Database Merge" below for how to recover from this.

In Mode 2, the new database is uninitialized, so NSS needs only the
password for the legacy database: with it, NSS can read the secret keys
in that legacy database, and the new shareable database's password is set
to match the old one. NSS can find the legacy database because
it lives in the same directory as the shareable database. NSS opens both
databases at initialization time and uses the legacy database until the user
authenticates (providing the legacy database password). NSS then uses that
password to update the new shareable database with the records from the old.
The new database takes on the password from the legacy database, and the
legacy database is closed. Future NSS initializations only open the new
shareable database. If the user never supplies a password, NSS will continue to
treat the new shareable database as uninitialized and will attempt to update from
the old database on future opens until the update succeeds.

In Mode 3, the new database may or may not be initialized. For the first Mode 3
application, the new database will be uninitialized, and NSS can proceed with
the same procedure as Mode 2. When the second and subsequent applications
start, the new shareable database will already be initialized with its own
password. We potentially need both passwords: the first to read the keys
out of the legacy database, and the second to write those keys, as well as
the required authentication values for any trust data. In order to preserve
the existing data in the second application, NSS must be able to merge the
data in the second application with the data in the existing database.
We only want the merge to happen once, not every time the application starts.
In Mode 3 we need to be able to identify when a database has already been
updated, so the application needs to give us some unique identifier for its
database. The application must also be able to tell us where the old database
lives, since it is in an application-private directory, as opposed to the
shared directory, used by multiple applications, where the shared database lives.
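A Mode 3A initialization might look like the following sketch. The directories, prefixes, identifier, and flag value are all illustrative; the call shown is the merge-style initialization, NSS_InitWithMerge, referred to in the footnotes above, with an argument list assumed here rather than specified by this document.

```c
/* Sketch: initialize against the shared database, naming the legacy
 * database's location and a unique identifier so the merge is
 * performed (and recorded) only once. All values are illustrative. */
SECStatus rv = NSS_InitWithMerge(
    "/shared/profile",        /* directory of the shareable database  */
    "", "",                   /* cert/key prefixes for the new db     */
    "secmod.db",              /* secmod name                          */
    "/app/private/profile",   /* directory of the legacy database     */
    "", "",                   /* cert/key prefixes for the legacy db  */
    "my-app-profile-salt",    /* unique identifier recorded on merge  */
    "MyApp Legacy Database",  /* token name shown while merging       */
    NSS_INIT_NOROOTINIT);
```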

Merge Conflicts (Mode 3A only)

When merging databases, it is possible (even likely) that the shared
database and the legacy database contain the same objects. In the case of certs and keys,
the merge is a simple matter of identifying duplicates and not updating them.

Trust records are made up of several entries, such as one for SSL server auth, one for SSL client auth, one for S/MIME, etc. Each entry can hold one of several values, including CKT_NSS_MUST_VERIFY, CKT_NSS_TRUSTED_DELEGATOR, CKT_NSS_TRUSTED, CKT_NSS_VALID, etc.

The merge updates the entries of each trust record separately:

if the trust record entries are identical, no update is done.

if either trust record entry is explicitly unknown (CKT_NSS_TRUST_UNKNOWN) or invalid (the entry does not exist), then the entry that is valid and known is used.

if one of the trust record entries has hard trust attributes (trust flags with NSS_TRUSTED or NSS_UNTRUSTED in the name) and the other has soft attributes (NSS_VALID or NSS_MUST_VERIFY), the entry with hard attributes is used. Hard trust attributes are attributes that will terminate a certificate validation.

if none of these cases apply, the value in the target database is preserved.

Mozilla Applications

Mozilla applications are Mode 3A applications. In fact, for all intents and
purposes, Mozilla applications make up the complete set of interesting Mode 3A
applications. This section tries to map the issues above into actual code for
Mozilla applications.

As Mode 3A applications, Mozilla apps need to pass NSS a unique identifier for
the old cert and key database, as well as the old profile directory where
the databases are stored. The profile 'salt' value would make a good unique
identifier for Mozilla products.

On startup, Mozilla apps should note when they are not in the done state at
initial NSS startup (see the flow chart for the Mode 3A update above). If they
are not in the 'done' state after startup, they should attempt to
enter the done state before PSM initialization completes.

A Mozilla app will be in the done state in the following cases
(any of the below apply):

The Mozilla app is starting as a fresh instance.

The Mozilla app has already been updated.

Neither the shared database nor the legacy database for the Mozilla app has a master password set.

These are the most common cases.

If the state is not done, then we know that this app has not already been
updated, and that either the shared database or the legacy database for the
Mozilla app has a master password set.

UI question: at this point, should we notify the user that we are updating
the database to a shareable database? Completing the update will require the
user interaction described below.

If the legacy database for the Mozilla app has a master password set, we prompt for
it. The prompt must make clear that we are asking for the master password of
the running Mozilla app (Thunderbird, Firefox, SeaMonkey, etc.).

Exception case A

If we fail to get this password, we need to handle the Exception A case. If the user has a master password set but does not know what it is, then the following data is lost for sure:

The user's private keys.

The user's secret keys.

Any data encrypted to the private keys.

Any data encrypted with the secret keys.

I believe we can identify whether the private keys are associated with a certificate. If so, we can tell the user which certificates would no longer work. Data encrypted with private keys in Mozilla products is currently limited to email messages. Secret keys encrypt saved passwords, and the Mozilla app knows which saved passwords are encrypted with that key.

If we hit Exception case A we can do one of the following:

attempt to update just the certs, trust, CRL, and S/MIME records, skipping all the keys. We would lose all the data described above.

decide not to update. In this case we would lose all the data in the paragraph above, as well as all the certs, trust, CRL, and S/MIME records.

run with the legacy database and allow the user to update later.

run with the new shared database and allow the user to update later.

I would suggest we offer the user only choices 1 or 4. Note: if the user selects 1, the update could fail again in Exception case B. From a UI perspective, we may want to handle Exception case B the same way we handle case A, so the user is only asked once about forcing an update that loses data.

Once we have the legacy database password, or once we determine that we don't
need it (either because there isn't one, or because we are willing to lose
the data it protected), we need to acquire the shareable database's
password so we can encrypt and MAC the data properly. If the shareable database doesn't
have a password, we can proceed with the update without further prompting the
user. If the shareable database has the same password as the legacy database, we can
detect that and again proceed with the update without further prompting.

If both of these checks fail, we prompt for the shareable database's password. This
prompt is trickier, because we need to ask the user for the password that
he perceives to be the master password of a different Mozilla app. Note: at
this point we are in a fairly uncommon corner case. Most users will not have
different master passwords for, say, Thunderbird and Firefox.
However, if we do arrive at this case, it is highly likely the user is not an
experienced or well-informed user, so we need to treat this case carefully.

If we get the password, we complete the update as planned.

Exception case B

If we fail to get this password, we need to handle the Exception B case. If the user has a master password set on his shareable database but does not know what it is, we now have the following choices:

eschew any private key, secret key, and trust updates from the legacy database.

reset the password on the shareable database (losing all private and secret keys, possibly losing some trust).

run with the legacy database and allow the user to update later.

run with the new shareable database and allow the user to update later.

It seems fairly unlikely that the user truly does not know the shareable database password, since he had to create or set it recently. However, as deployment time increases, this becomes more likely.

Again, I think giving the user a choice between options 1 and 4 is the best alternative. If the user has already tripped over Exception case A, we can presume the user intends to make a similar choice here. Case 2 can be handled later in the same way the user handles a forgotten master password today (only now resetting the master password affects all Mozilla apps).

Profile issues

Mozilla apps can create more than one profile. Developers use this capability
to test bugs that new users are likely to run into without losing their own
production environment.

Shared databases, in general, mean that some of the current semantics of user
profiles will break: creating a new profile will not create a new
key/cert/master-password profile. For developers (the primary users of profiles)
it seems important to preserve some of the existing semantics. I can see a
few options.

Allow profiles to be marked as having private key/cert databases. This changes the Mozilla app from a Mode 3A app to a Mode 2A app. This returns developers to the previous semantics if they want, while also allowing them to test the interaction of different profiles with the same database. It would require UI changes to the profile manager, and it requires action on the part of the developer to get back to the old semantics.

Treat only the default profile as Mode 3A and all other profiles as Mode 2A. This allows profile separation to operate as it does today with no changes. It does mean, however, that only default profiles will share keys with other applications.

Provide the checkbox of option 1, but make option 2's behavior the default.

I think option 3 probably provides the best solution all around.

Database Merge

While not necessarily a feature of shareable databases themselves, database merge is an important tool for successful shareable database deployments.

Database merge differs from database update (and in particular from database update with merge) in the following ways:

Database merge is not part of the automatic update support which is handled at initialization time. As such it does not need to record things like "this particular database has already been 'updated'".

Database merge is typically instigated under the control of the user or administrator, so much of the automated support is not necessary.

Because merge does not require the complicated state machine needed to manage password acquisition, it can be (and is) implemented outside softoken itself.

Characteristic 3 allows database merge to work on arbitrary database types: you can merge a shareable database into a shareable database, or an old database into a shareable database (in fact, up to a point, arbitrary tokens: you can merge a hardware token into a shareable database as long as the keys are extractable).

To merge two databases, the application simply opens both databases (using SECMOD_OpenUserDB()) and then calls the new PK11_MergeTokens() function. PK11_MergeTokens() takes the following arguments:

log An optional pointer which returns a linked list of errors that may have occurred during the merge.

pwdata password arg

The targetSlot and sourceSlot parameters can be slots obtained by lookup, or additional databases opened with SECMOD_OpenUserDB(). For the merge to succeed, targetSlot must support all of the object types in the following list for which token objects exist in sourceSlot:

CKO_CERTIFICATE,

CKO_PUBLIC_KEY,

CKO_PRIVATE_KEY,

CKO_SECRET_KEY,

CKO_NSS_TRUST,

CKO_NSS_CRL,

CKO_NSS_SMIME.

The source slot must also have extractable keys or the merge will fail (sensitive keys are OK, as long as the source slot supports PBEs if it contains private keys). All softoken slots (including those opened with SECMOD_OpenUserDB()) have these characteristics.

Multiple calls to merge will only attempt to merge those objects that were created since the last merge, or that failed to merge in the previous call.

Returns:

The function returns one of these values:

If successful, SECSuccess.

If one or more entries failed to merge, SECFailure. The error code is set to the error of the last failing entry, and all the failed entries are returned in the log parameter.
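Putting the pieces together, a merge might look like the following sketch. The merge-log helpers (PK11_CreateMergeLog, PK11_DestroyMergeLog), the log node layout, and the exact parameter order of PK11_MergeTokens() are assumptions based on the description above, not confirmed by it; pwdata stands for the application's password argument.

```c
/* Sketch: open both databases, merge source into target, walk the log. */
PK11SlotInfo *target = SECMOD_OpenUserDB(
    "configDir='/shared/db' tokenDescription='target'");
PK11SlotInfo *source = SECMOD_OpenUserDB(
    "configDir='/old/db' tokenDescription='source'");
PK11MergeLog *log = PK11_CreateMergeLog();

if (PK11_MergeTokens(target, source, log, pwdata, pwdata) != SECSuccess) {
    /* One or more entries failed; the log lists each failing object. */
    PK11MergeLogNode *node;
    for (node = log->head; node != NULL; node = node->next) {
        fprintf(stderr, "merge failure, error %d\n", node->error);
    }
}
PK11_DestroyMergeLog(log);
```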

Layering

In order to keep a clean separation between the data and the database operations, we will continue to maintain a layer between the actual data handling and interpretation and the database itself. The database code will not need to understand:

What objects are actually stored in it.

The types of the attributes.

The meaning of the stored data.

Softoken (not the database adapter layer) will manage canonicalizing any CK_ULONGs, encrypting or decrypting private data blobs, checking integrity, and deciding what attributes an object should have, setting the appropriate defaults if necessary.

Since softoken deals with PKCS #11 templates internally, its interface to the database will be in terms of those templates.

The database layer must be multi-thread safe. If the underlying database is not thread safe, the sdb_ layer must implement the appropriate locking.

s_open

The database API starts with an initialization call, s_open(), which returns an SDB data structure (defined below).
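Collected into one place, the SDB structure returned by s_open() is essentially a table of function pointers over an opaque implementation pointer. The sketch below gathers the entry points documented in this section; members whose signatures are not quoted here (sdb_FindObjectsInit, sdb_FindObjects, sdb_GetAttributeValue, sdb_SetAttributeValue, sdb_CreateObject) are assumptions modeled on their PKCS #11 equivalents.

```c
/* Sketch of the SDB "vtable" returned by s_open(). */
typedef struct SDBStr SDB;
typedef struct SDBFindStr SDBFind;              /* opaque find context */
typedef struct SDBPasswordEntryStr SDBPasswordEntry;

struct SDBStr {
    void *private;                              /* implementation data */
    CK_RV (*sdb_FindObjectsInit)(SDB *sdb, const CK_ATTRIBUTE *template,
                                 CK_ULONG count, SDBFind **find);
    CK_RV (*sdb_FindObjects)(SDB *sdb, SDBFind *find, CK_OBJECT_HANDLE *ids,
                             CK_ULONG arraySize, CK_ULONG *count);
    CK_RV (*sdb_FindObjectsFinal)(SDB *sdb, SDBFind *find);
    CK_RV (*sdb_GetAttributeValue)(SDB *sdb, CK_OBJECT_HANDLE object,
                                   CK_ATTRIBUTE *template, CK_ULONG count);
    CK_RV (*sdb_SetAttributeValue)(SDB *sdb, CK_OBJECT_HANDLE object,
                                   const CK_ATTRIBUTE *template, CK_ULONG count);
    CK_RV (*sdb_CreateObject)(SDB *sdb, CK_OBJECT_HANDLE *object,
                              const CK_ATTRIBUTE *template, CK_ULONG count);
    CK_RV (*sdb_DestroyObject)(SDB *sdb, CK_OBJECT_HANDLE object);
    CK_RV (*sdb_GetPWEntry)(SDB *sdb, SDBPasswordEntry *entry);
    CK_RV (*sdb_PutPWEntry)(SDB *sdb, SDBPasswordEntry *entry);
    CK_RV (*sdb_Begin)(SDB *sdb);
    CK_RV (*sdb_Commit)(SDB *sdb);
    CK_RV (*sdb_Abort)(SDB *sdb);
    CK_RV (*sdb_Reset)(SDB *sdb);
    CK_RV (*sdb_Close)(SDB *sdb);
};
```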

sdb_FindObjectsInit

This function is the equivalent of PKCS #11 C_FindObjectsInit(). It returns an SDBFind context which is opaque to the caller. The caller must call sdb_FindObjectsFinal() with this context if sdb_FindObjectsInit() succeeds.

sdb_FindObjects

This function is the equivalent of PKCS #11 C_FindObjects(). It takes a SDBFind
context returned by sdb_FindObjectsInit. This function has the same semantics as C_FindObjects with respect to handling how many objects are returned in a single call.

sdb_FindObjectsFinal

CK_RV (*sdb_FindObjectsFinal)(SDB *sdb, SDBFind *find);

This function is the equivalent of PKCS #11 C_FindObjectsFinal(). It frees any resources associated with the SDBFind context.

sdb_GetAttributeValue

This function is the equivalent of PKCS #11 C_GetAttributeValue(). It has the
same memory allocation and error code semantics as the PKCS #11 call.
The attributes passed to sdb_GetAttributeValue are already transformed from
their native representations.

sdb_CreateObject

This function is the equivalent of PKCS #11 C_CreateObject(). The value of 'object' is chosen by the implementer of sdb_CreateObject. This value must be unique for this sdb instance and should be no more than 30 bits long.

sdb_DestroyObject

CK_RV (*sdb_DestroyObject)(SDB *sdb, CK_OBJECT_HANDLE object);

This function is the equivalent of PKCS #11 C_DestroyObject(). It removes the object from the database.

sdb_GetPWEntry

CK_RV (*sdb_GetPWEntry)(SDB *sdb, SDBPasswordEntry *entry);

Get the password entry. This only applies to the private database.

sdb_PutPWEntry

CK_RV (*sdb_PutPWEntry)(SDB *sdb, SDBPasswordEntry *entry);

Write the password entry. This only applies to the private database.
Writing a password entry will overwrite the old entry.

sdb_Begin

CK_RV (*sdb_Begin)(SDB *sdb);

Begin a transaction. Any write to the database (sdb_CreateObject, sdb_DestroyObject, sdb_SetAttributeValue) must be accomplished while holding
a transaction. Transactions are completed by calling sdb_Commit to commit the change, or sdb_Abort to discard the change. More than one write operation may be made while holding a transaction. Aborting the transaction will discard all writes made while in the transaction.

sdb_Commit

CK_RV (*sdb_Commit)(SDB *sdb);

Commit a transaction. Any write to the database (sdb_CreateObject, sdb_DestroyObject, sdb_SetAttributeValue) must be accomplished while holding
a transaction. Transactions are completed by calling sdb_Commit to commit the change, or sdb_Abort to discard the change. More than one write operation may be made while holding a transaction.

sdb_Abort

CK_RV (*sdb_Abort)(SDB *sdb);

Abort a transaction. Any write to the database (sdb_CreateObject, sdb_DestroyObject, sdb_SetAttributeValue) must be accomplished while holding
a transaction. Transactions are completed by calling sdb_Commit to commit the change, or sdb_Abort to discard the change. More than one write operation may be made while holding a transaction. Aborting the transaction will discard all writes made while in the transaction.

sdb_Close

CK_RV (*sdb_Close)(SDB *sdb);

Close the SDB and free up any resources associated with it.

sdb_Reset

CK_RV (*sdb_Reset)(SDB *sdb);

Reset zeros out the key database and resets the password.

legacy DB support

The old DBM code can be supported with the above SDB structure with the following exceptions:

The old database code is not extensible (it cannot dynamically handle new types).

A private interface may be needed to unwrap the private keys, or provide a handle to the password so the keys can be presented in the attribute format.

This code will live in its own shared library, called lgdbm (with the appropriate platform naming: lgdbm.dll on Windows, liblgdbm.so on Unix, etc.). Most of the low-level cert, CRL, and key handling, and the translation to PKCS #11 objects and attributes, that was part of softoken will move to this legacy shared library. When access to old databases is needed, the lgdbm shared library will be loaded and the following symbols will be resolved dynamically:

legacy_Open - This has the same signature as s_open and returns SDB handles for the legacy database.

legacy_Shutdown - This is called when NSS is through with all database support (that is, when softoken shuts down).

legacy_SetCryptFunctions - This is used to set callbacks that the legacy database can call to decrypt and encrypt password-protected records (PKCS #8 formatted keys, etc.). This allows the legacy database to translate its database records to the new format without getting direct access to the keys.
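Loading might use NSPR's runtime linker, roughly as in the sketch below. The typedef and variable names are illustrative; legacy_Open's real argument list matches s_open's and is elided here.

```c
#include "prlink.h"

/* Illustrative typedef; the real legacy_Open takes s_open's arguments. */
typedef CK_RV (*LegacyOpenFn)();

static PRLibrary *lgdbmLib;
static LegacyOpenFn legacyOpen;

static SECStatus
load_legacy_support(void)
{
    /* PR_GetLibraryName yields lgdbm.dll / liblgdbm.so as appropriate. */
    char *name = PR_GetLibraryName(NULL, "lgdbm");
    lgdbmLib = PR_LoadLibrary(name);
    PR_FreeLibraryName(name);
    if (lgdbmLib == NULL)
        return SECFailure;
    legacyOpen = (LegacyOpenFn)PR_FindFunctionSymbol(lgdbmLib, "legacy_Open");
    return (legacyOpen != NULL) ? SECSuccess : SECFailure;
}
```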

NSS will automatically load the legacy database support under the following conditions:

The application requests that the old databases be loaded (either implicitly or explicitly).

The application requests that the new databases be loaded, but the new databases do not exist and the old databases do.