Last October I gave a presentation on database unit testing with utPLSQL, Oracle Unit Testing with utPLSQL. I mentioned design patterns as a way of reducing the effort of building unit tests and outlined some strategies for coding them effectively. In the current set of articles, I develop the ideas further and present a design pattern for web service 'save' procedures by means of a conceptual discussion, together with a working example of base code and unit test code for a procedure to save new employees in Oracle's well-known HR demonstration schema. The working example can be used as a template for real use cases, and, I believe, can simplify the development process. It can also be used as a basis for comparing other unit testing frameworks, by implementing the same testing in those frameworks.

Design patterns involve abstraction and conceptual separation of the general features of a situation from the particular. Therefore we will start with a fairly abstract discussion of unit testing for the database, before proceeding to discuss the use case in question, describe the unit test cases, and show the results. The code is presented in another article, Design Patterns for Database API Testing 1: Web Service Saving 2 - Code. That article also lists some common antipatterns in database unit testing, and how they can be avoided. The code itself centralises as much as possible, in order to keep the specific test code as small as possible, and is structured very differently from most unit testing code that I have seen.

Update, 26 June 2016: The code has been re-factored to improve formatting and remove the dependence on utPLSQL; see TRAPIT - TRansactional API Testing in Oracle.

Update, 09 July 2016: I have added in scenario-level printing of all inputs.

General Discussion of Database Unit Testing

The underlying functionality for unit testing could be described logically as:

    Given a list of test inputs, X, and a list of expected outputs, E, for function F:
        For each x in X, with corresponding e in E:
            Apply y = F(x)
            Assert y = e
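The logical description above can be sketched as a short, language-agnostic driver loop. This is a minimal illustration only, with hypothetical names (`run_unit_tests`, `square`), not part of the actual framework:

```python
def run_unit_tests(F, X, E):
    """Apply F to each input x in X and check the result against the
    corresponding expected output e in E; return the list of failures."""
    failures = []
    for i, (x, e) in enumerate(zip(X, E)):
        y = F(x)
        if y != e:
            failures.append((i, x, e, y))  # (index, input, expected, actual)
    return failures  # an empty list means all assertions passed


# Example with a trivial pure function
def square(n):
    return n * n

assert run_unit_tests(square, [1, 2, 3], [1, 4, 9]) == []
assert run_unit_tests(square, [2], [5]) == [(0, 2, 5, 4)]
```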

As the Functional Programming community knows well, functions having well-defined parameter inputs, returning values as outputs, and with no 'side-effects', are the easiest to test reliably. The difficulty with database unit testing is that most use cases do not fall into that category; instead, database procedures can read from and write to the database as well as using input and output parameters. This means that theoretically the inputs and outputs could include the whole database (at least); furthermore the database is a shared resource, so other parties can alter the data we are dealing with. One important consequence of these facts is that much of the thinking on best practices for unit testing, coming as it does from the non-database world, is not applicable here. So what to do?

Pragmatic testing

To make progress, we note that our purpose with unit testing is not to formally prove that our programs work, but rather to achieve the following aims:

Improve code quality within a Test Driven Development (TDD) approach

Provide regression tests to allow safe code-refactoring

Quickly detect changes external to the code that cause it to fail, such as reference data changes

That being so, we can note the following guidelines:

Testing code is written, as part of TDD, by the developer of the base code, who can identify the relevant database inputs and outputs

Some, but not necessarily all, test data may be created in a setup step; static reference data that are required for the code to work usually should not be created in setup

Testing code should be instrumented and logged liberally

The base code should be timed and a time limit included in the testing; this will help to quickly identify issues such as necessary indexes being dropped

Consideration should be given to running the unit test suites in performance and other instances

Design Pattern Use Case for Web Service Save Procedure

The purpose of the procedure is to save a set of new records to a database table

Surrogate primary key is generated

Input is an array of objects with the records to be saved

Output is an array of objects containing the new primary key plus a description

For records failing validation, a key of zero is returned, plus the error message, and the valid records are still saved
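The contract described by the bullets above can be sketched in Python pseudocode. The names (`save_employees`, `validate`, `next_key`) and the validation rule are illustrative stand-ins, not the actual PL/SQL API:

```python
def validate(rec):
    """Stand-in validation: require a non-empty name field."""
    return None if rec.get("name") else "Name must not be null"


def save_employees(records, table, next_key):
    """Save valid records, generating a surrogate key for each; for an
    invalid record return key 0 plus the error message, while the
    remaining valid records are still saved."""
    results = []
    for rec in records:
        err = validate(rec)
        if err:
            results.append((0, err))
        else:
            key = next_key()
            table[key] = rec
            results.append((key, "Employee %d inserted" % key))
    return results


# One valid and one invalid record in a single call
table = {}
counter = iter(range(1, 100))
out = save_employees([{"name": "Lex"}, {"name": ""}],
                     table, lambda: next(counter))
# out == [(1, 'Employee 1 inserted'), (0, 'Name must not be null')]
```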

ERD of Input and Output Data Structures in Relation to Scenarios

In the diagram, a scenario corresponds to a web service call with a set of input records

The result of the call can be described as a set of output groups, each having a set of records

In our case the output array and the base table form two output groups, with a global group for average call timing

The logical diagram in terms of sets of records can be translated into an array structure diagram

If we follow a similarly generic approach at the coding level, it becomes very easy to extend a simple example by adding groups, fields and records as necessary.
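As a rough illustration of that generic structure, a scenario's expected output can be held as a map from output group to a list of delimited records; extending it is then just adding keys or list entries. The group names and field values below are hypothetical, not the actual test data:

```python
# Expected output for one scenario: output group -> list of delimited records
expected_scenario = {
    "Employees table": ["1|LN 1|EM 1|IT_PROG|1000"],
    "Output array":    ["1|LN 1: Employee created"],
    "Timing (global)": ["<average call time>"],
}


def add_group(scenario, group_name, records):
    """Adding an output group is just adding another keyed record list."""
    scenario[group_name] = list(records)


def add_record(scenario, group_name, record):
    """Adding a record is just appending to the group's list."""
    scenario[group_name].append(record)


add_group(expected_scenario, "Error log", [])
add_record(expected_scenario, "Output array", "0|Name must not be null")
```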

General Unit Test Design Process

The design process involves two high level steps

Identify a good set of scenarios with corresponding input records

Identify the expected outputs for each output group identified, for each scenario (there will also be a global group, for timing)

Design Pattern Scenarios

The procedure inserts records in Oracle's HR employees table, and we identify four test scenarios:

Passing a single valid record

Passing a single invalid record

Trying to pass a record with an invalid type

Passing multiple valid records, and one invalid record
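The four scenarios above can be represented as named input data sets. The record fields and the bad-type marker below are illustrative stand-ins, not the real HR employees columns:

```python
valid_rec   = {"name": "LN 1", "email": "EM 1", "job": "IT_PROG", "salary": 1000}
invalid_rec = {"name": "LN 2", "email": "EM 2", "job": "NO_JOB",  "salary": 1500}

scenarios = {
    "1 valid record":             [valid_rec],
    "1 invalid record":           [invalid_rec],
    "1 record with invalid type": ["not-a-record"],  # fails type conversion
    "multiple valid, 1 invalid":  [valid_rec, dict(valid_rec), invalid_rec],
}
```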

Design Pattern Output Groups

Records inserted into table employees

Records returned in output parameter array

Timing of average call

Unit Test Results Output

The output below is for a failing run, where the time limit is breached, and I have also deliberately entered incorrect expected values for two records, to show the difference in formatting between success and failure output group records. On completion of development, I like to include the output tables in my technical design document. The output below is generated by my own library packages, not directly by utPLSQL, whose output format I dislike.

In the event of the suite failing, as here, the utility code ensures that an Oracle error is generated. This can then be trapped by a calling Unix script from a scheduled Jenkins job to send out emails.

Conclusions

A design pattern has been presented for database web service save procedures, with scenarios and output results

The implementation (presented in the part 2 article) was against a publicly available Oracle demonstration schema, and used Brendan's database unit testing framework

The main ideas could be applied with any database technology and any testing framework

It is suggested that any proposed alternative unit testing framework be evaluated by implementing this design pattern, or a similar one