An Optimistic Record Locking Technique Using AVR for .NET Programming Language

The Essence
(hopefully)

This TechCrystal provides an example of a technique called Optimistic Record Locking or Optimistic Concurrency Control.

From an application software developer's perspective, the essential technical feat in optimistic record locking is ensuring that your user's database changes will not overlay another user's changes, without explicitly locking the database file or record in the interim.

Thus, one key step is to record the initial database values presented to the user. Shown below is part of one of the lines of code in this example; it captures database values into an interim result string:

memfilCustomerUpdFlds.DataSet.GetXml().ToString()

memfilCustomerUpdFlds is a "memory file", essentially an OO data structure based on an externally described database record format.
In this example, it contains all of the customer table file columns related to the subject matter that can be changed by a casual user on the web or by an internal company user.

A Little More Background

Has Elvis left the building?

The externally described record format comes from a physical file object stored in a Db2 for i database (see DDS source sample at the bottom of this TechCrystal). The field/column attributes/properties from the Db2 for i database are fixed into the .NET executable at compile time and are not referenced at run time.

As mentioned, memfilCustomerUpdFlds is a "memory file". A memory file is a wrapper object around the (Microsoft .NET Framework) System.Data.DataSet (object type) that was created by ASNA for use with the AVR.NET language. The memory file object can be manipulated by using OO syntax ( such as memfilCustomerUpdFlds.DataSet.GetXml().ToString() ) or by using RPG file I/O operation codes (such as OPEN memfilCustomerUpdFlds or WRITE memfilCustomerUpdFlds).

The optimistic record locking technique becomes even more relevant within a web application, because the web server cannot recognize that a user (or Elvis, for that matter) has vacated the process until the session times out. For example, a user can direct the browser to a different website even after they have pulled up information from your site and indicated they are going to update it. So it's probably a bad idea to allow any lock to persist for very long at all when it is based on a web page.

Many of us first heard the term "Optimistic Record Locking" at ASNAPALOOZA 2005, an event hosted by ASNA, the San Antonio, Texas software company that develops the AVR for .NET programming language. Several of us had worked with the technique in a commercially available master file maintenance template starting back in 1996.

Why are we here?

This technique does not require use of commitment control, rollback functions, hard-locks or soft-locks. It does not rely on resolving conflicting update requests after the fact or any form of staged updates.

Other locking techniques (suitable for enterprise application software) either explicitly isolate or allocate the database row or entire file against all potential updates (a hard-lock), or logically designate that the row is in use (a soft-lock) by updating a field to reflect the user or process that is in progress.

Our example takes the optimistic record locking technique a little further than most, because it allows the database files involved to be changed by unrelated automated processes during the timeframe of the user-controlled update (not the same data elements, of course). Thus it cannot use an update timestamp in the record to control the process when this level of granularity is required.

Just mentioning a few of the cooler development capabilities and techniques used

Our example also does a nice job of keeping the data comparison process concise, consistent, efficient and simple. A key enabler is the ability to convert any number of data fields into a character string in one step, and to store the result at the web page level (the code actually converts all of the rows in the file, but the instance in our example contains only one row). This web page level storage is known as "viewstate" within the .NET framework. Viewstate also contains info such as GUI control context information and pending data changes (automatically for the most part).
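To illustrate the idea outside of AVR, here is a minimal Python sketch of the same snapshot step: every field of a record is serialized, with its field-name metadata, into one string that can then be stashed in a viewstate-style bag. The record layout, field names, and the `viewstate` dictionary are hypothetical stand-ins, not the ASNA API.

```python
import xml.etree.ElementTree as ET

def record_to_xml(table_name, row):
    """Serialize one record's field values (with field-name metadata)
    into a single XML string, similar in spirit to DataSet.GetXml()."""
    root = ET.Element("NewDataSet")
    rec = ET.SubElement(root, table_name)
    for field, value in row.items():
        ET.SubElement(rec, field).text = str(value)
    return ET.tostring(root, encoding="unicode")

# hypothetical customer fields -- illustrative names only
before = record_to_xml("CUSTOMER", {"JCSTMR": 1001, "JCTYPE": "R", "JCNAME": "Acme"})

viewstate = {}  # stand-in for the ASP.NET ViewState bag
viewstate["vsv_strB4CustomerRcdXML"] = before
```

Because the field names travel along with the values, the stored string is self-describing, which (as noted below for GetXml) is handy when debugging.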

One other thing we do at times is condense the XML information before storing it; please see the section regarding XML string compression towards the bottom of this TechCrystal.
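As a rough sketch of that compression step (using Python's zlib in place of whatever compressor the original code uses, which is an assumption on our part), the XML snapshot can be deflated and base64-encoded so it still stores cleanly as viewstate text:

```python
import base64
import zlib

def compress_for_viewstate(xml_text):
    # deflate the XML, then base64-encode so it stores cleanly as text
    return base64.b64encode(zlib.compress(xml_text.encode("utf-8"))).decode("ascii")

def decompress_from_viewstate(blob):
    # reverse the steps: base64-decode, then inflate back to the XML string
    return zlib.decompress(base64.b64decode(blob)).decode("utf-8")

# a repetitive XML snapshot compresses dramatically
xml = "<CUSTOMER><JCNAME>Acme</JCNAME><JCCITY>Austin</JCCITY></CUSTOMER>" * 20
packed = compress_for_viewstate(xml)
restored = decompress_from_viewstate(packed)
```

The base64 step matters because viewstate is text; raw compressed bytes would not round-trip safely.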

The above code snippet converts all of the in-memory field values of a database record format into an XML representation, converts the result to a string (object type) and then creates and populates (or updates) a viewstate variable all in one step. Woo-hoo!

The database record format is based on an external data structure stored in a Db2 for i database.

The method to create the XML is neat because not only does it convert all of the file's data into a single alphanumeric character string (even packed decimal data, a compressed numeric-only format, becomes human readable) but it also includes field name metadata, which is useful when debugging. An example XML string is shown towards the bottom of this TechCrystal.

So essentially we must capture and store the database values that are used to initially populate the web page UI. This example requires database values from two files. However, let's just do one DB file here first.

Let's concatenate all of those initial data values from the customer master file into one giant string and call the string variable/object "vsv_strB4CustomerRcdXML". Here is the code:

ViewState["vsv_strB4CustomerRcdXML"] = memfilCustomerUpdFlds.DataSet.GetXml().ToString()

Example Code

A Glimpse at Record Level Access (RLA) Technique

RLA I/O extensions in the AVR.NET language are provided through the ASNA DataGate product

// connect AppDB
// not required because already connected

open updcustomer            // open customer master table with record level access (RLA) capability

open memfilCustomerUpdFlds  // open memory file object

// access first record in database file matching by customer number (jcstmr) and type (jctype) key values
// automatically populates in-memory field values (in record format)
// access(*nolock) means do not lock record (even though database file is defined as update capable)

chain updcustomer key(jcstmr, jctype) err(*extended) access(*nolock)

if (updcustomer.IsFound())

// populate all field values in first row of memfilCustomerUpdFlds dataset (in memory)

write memfilCustomerUpdFlds

// create/set a viewstate variable equal to the xml output of all of the records (just 1 here)

// This routine locks the records pending update after checking to ensure that
// the data the user is editing has not changed via some other source

*this.LockRecordsPendingUpdate ( *ByRef intContactUniqueID, +
                                 *ByRef boolErrors, +
                                 *ByRef boolProceedWithUpdate )

// do not continue if unsuccessful, do not issue redirect so that the
// page will be redisplayed with postback status

if (boolProceedWithUpdate)

Controlling the flow of the update process

After the user makes changes, indicates acceptance and all edits are passed, the latest current database values must be captured and compared to the original (ViewState["vsv_strB4CustomerRcdXML"]). This ensures nothing has changed between the time the user pulled up the information and the time the update is about to occur. Since memfilCustomerUpdFlds contains all data elements related to user-initiated maintenance (internal or external), this technique detects changes made by anyone.
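Stripped of the AVR specifics, the compare-then-update check has this shape; in the Python sketch below, `serialize` stands in for `DataSet.GetXml()` and all names are illustrative, not the actual AVR routine:

```python
def serialize(row):
    # stand-in for DataSet.GetXml(): one string capturing every field value
    return "|".join(f"{k}={row[k]}" for k in sorted(row))

def lock_record_pending_update(read_current_locked, before_snapshot):
    """Re-read the row (this time with a lock held), rebuild its snapshot,
    and allow the update only if nothing changed since the page was built."""
    current = serialize(read_current_locked())
    return current == before_snapshot

before = serialize({"JCNAME": "Acme", "JCCITY": "Austin"})

# case 1: untouched row -> update may proceed
ok = lock_record_pending_update(lambda: {"JCCITY": "Austin", "JCNAME": "Acme"}, before)

# case 2: another user changed the city in the interim -> abandon the update
conflict = lock_record_pending_update(lambda: {"JCCITY": "Dallas", "JCNAME": "Acme"}, before)
```

The lock acquired during the re-read is what closes the window: once the current values are confirmed equal to the snapshot, nothing else can slip in before the update completes.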

Note that to completely eliminate any chance of data overlay, no changes can be made to the current database values between the second time the data is accessed (all records) and the point at which the update is realized. This technique satisfies that criterion.

Alternatively, one could use commitment control or a similar technique to roll back the changes in the instance that one file is updated before it is realized that other data would be overlaid and that no updates should be made.

This example shows how the logic expands when there are two files to update. One could consolidate all of the subject matter fields into a single memory file object for a couple of benefits.

Example Code #2

Controlling the flow of the update process (cont'd)

Read Equal to Key (READE) (not shown) is perhaps the most productive I/O extension; it facilitates getting the next row's data into memory only if it has key values matching the request.

However, Visual RPG for .NET (AVR.NET) does not require use of I/O extensions. The full range of standard transaction, stored procedure and SQL building blocks are available as well.

"Someone has updated your customer level information since you first +

accessed this web page. It is likely someone from our +

corporate office has made the update. We apologize for +

the inconvenience. If you still wish to change your +

information, please click on the Refresh button and then re-enter +

your changes.", +

*ByRef boolCursorPosReqd, *ByRef boolErrors, +

tbCtcName *as WebControl, *nothing, *nothing,
*nothing)

endif

endif

close memfilContactUpdFlds

close memfilCustomerUpdFlds

//***********

// if not proceeding with update, begin process to reverse update process

//***********

if (NOT boolProceedWithUpdate)

// if record no longer exists (shouldn't) or the record has changed, unlock it

if (ContactsByUnqContactID.IsFound)

unlock ContactsByUnqContactID

endif

close ContactsByUnqContactID

if (updcustomer.IsFound)

unlock updcustomer

endif

close updcustomer

btnRefresh.Visible = *true

LeaveSr

endif

//*******

// proceeding with update, boolProceedWithUpdate remains = *true

//*******

endif

endsr

-- Important --

Note that the fields to update should be explicitly listed

It is important to note that in the update process, we recommend limiting the updates to the fields that can be changed by the user (as opposed to a blanket update of all fields in the record), for several reasons. One reason is that it results in more precise code that documents the developer's intentions.
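A minimal sketch of that recommendation, with hypothetical column names: only the explicitly listed user-editable fields are copied from the form into the row, so audit and system columns can never be clobbered by a blanket update.

```python
USER_EDITABLE = ("JCNAME", "JCADDR", "JCCITY")  # hypothetical column names

def apply_user_update(db_row, form_values):
    """Copy only the explicitly listed user-editable fields into the row,
    leaving every other column (audit fields, system data) untouched."""
    for field in USER_EDITABLE:
        if field in form_values:
            db_row[field] = form_values[field]
    return db_row

row = {"JCSTMR": 1001, "JCNAME": "Acme", "JCCITY": "Austin", "JCBAL": 250.0}
# the form tries to change the city AND the balance; the balance is ignored
updated = apply_user_update(row, {"JCCITY": "Dallas", "JCBAL": 0.0})
```

The explicit list also serves as documentation: anyone reading it can see exactly which columns a user-initiated update is allowed to touch.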

In the traditional RPG integrated language environment, this technique improves the quality of automatically generated technical cross-reference documentation. Thus future impact analysis tasks can be done in less time and more accurately. But details on that will have to wait for a future TechCrystal topic.

If the amount of data stored in the memory file is very large, you may wish to take an additional step and run a hashing algorithm. It transforms the entire XML string into a 40-character code (using a method included within the .NET Framework) so that only 40 characters are transferred back and forth between the web server and the browser/user desktop.
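A 40-character result suggests a SHA-1 hex digest (SHA-1 is available in the .NET Framework under System.Security.Cryptography); assuming that is the algorithm intended, here is the idea in Python:

```python
import hashlib

def snapshot_digest(xml_text):
    # SHA-1 produces a 160-bit digest that renders as 40 hex characters,
    # no matter how large the XML snapshot is
    return hashlib.sha1(xml_text.encode("utf-8")).hexdigest()

digest_before = snapshot_digest("<CUSTOMER><JCNAME>Acme</JCNAME></CUSTOMER>")
digest_after = snapshot_digest("<CUSTOMER><JCNAME>Acme Corp</JCNAME></CUSTOMER>")
```

At update time you hash the re-read snapshot and compare the two digests instead of the two full XML strings; any change to any field produces a different digest.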

The above code snippet converts all of the fields in the database record format (of memfilCustomerUpdFlds) into an XML representation, converts that XML to a 40-character hash code, and then, for clarity's sake, converts the result to a string (object type), finally storing it in the viewstate. Boom-shaka-laka!

The method that converts the result to a hash code boils the data down to a very compressed value. Both the GetXml() and GetHashCode() methods are included in the .NET Framework and thus are available to any .NET language such as VB.NET, C#, and AVR.NET. However, since GetHashCode() does not guarantee a unique result, it can't be used for this technique without introducing at least a tiny possibility of erroneous results.

Please note the references to DB file XAREF1, which is a database file that contains essentially the base field types of a data dictionary.

Since the data dictionary in our Financial Portrait accounting suite is so awesome, and the fundamental usage of data dictionary subtypes and supertypes seems to have been relegated to the back burner in most development circles, we should probably delve into the topic in a future TechCrystal.