pgTAP 0.92.0

pgTAP is a unit testing framework for PostgreSQL written in PL/pgSQL and
SQL. It includes a comprehensive collection of
TAP-emitting assertion functions, as well as the
ability to integrate with other TAP-emitting test frameworks. It can also be
used in the xUnit testing style.

Be sure that you have pg_config installed and in your path. If you used a
package management system such as RPM to install PostgreSQL, be sure that the
-devel package is also installed. If necessary, tell the build process where
to find it:

env PG_CONFIG=/path/to/pg_config make && make install && make installcheck

And finally, if all that fails (and if you're on PostgreSQL 8.1, it likely
will), copy the entire distribution directory to the contrib/ subdirectory
of the PostgreSQL source tree and try it there without pg_config:

env NO_PGXS=1 make && make install && make installcheck

If you encounter an error such as:

ERROR: must be owner of database regression

You need to run the test suite using a super user, such as the default
"postgres" super user:

make installcheck PGUSER=postgres

Once pgTAP is installed, you can add it to a database. If you're running
PostgreSQL 9.1.0 or greater, it's as simple as connecting to a database as a
super user and running:

CREATE EXTENSION pgtap;

If you've upgraded your cluster to PostgreSQL 9.1 and already had pgTAP
installed, you can upgrade it to a properly packaged extension with:

CREATE EXTENSION pgtap FROM unpackaged;

For versions of PostgreSQL less than 9.1.0, you'll need to run the
installation script:

psql -d mydb -f /path/to/pgsql/share/contrib/pgtap.sql

If you want to install pgTAP and all of its supporting objects into a
specific schema, use the PGOPTIONS environment variable to specify the
schema, like so:

PGOPTIONS=--search_path=tap psql -d mydb -f pgTAP.sql

Testing pgTAP with pgTAP

In addition to the PostgreSQL-standard installcheck target, the test
target uses the pg_prove Perl program to do its testing, which will be
installed with the
TAP::Parser::SourceHandler::pgTAP
CPAN distribution. You'll need to make sure that you use a database with
PL/pgSQL loaded, or else the tests won't work. pg_prove supports a number of
environment variables that you might need to use, including all the usual
PostgreSQL client environment variables:

$PGDATABASE

$PGHOST

$PGPORT

$PGUSER

You can use it to run the test suite as a database super user like so:

make test PGUSER=postgres

Adding pgTAP to a Database

Once pgTAP has been built and tested, you can install it into a
PL/pgSQL-enabled database:

psql -d dbname -f pgtap.sql

If you want pgTAP to be available to all new databases, install it into the
"template1" database:

psql -d template1 -f pgtap.sql

If you want to remove pgTAP from a database, run the uninstall_pgtap.sql
script:

psql -d dbname -f uninstall_pgtap.sql

Both scripts will also be installed in the contrib directory under the
directory output by pg_config --sharedir. So you can always do this:

psql -d template1 -f `pg_config --sharedir`/contrib/pgtap.sql

But do be aware that, if you've specified a schema using $TAPSCHEMA, that
schema will always be created and the pgTAP functions placed in it.

pgTAP Test Scripts

You can distribute pgtap.sql with any PostgreSQL distribution of your own,
such as a custom data type. If your users want to run your test suite
using PostgreSQL's standard installcheck make target, just be sure to set
variables to keep the tests quiet, start a transaction, load the functions in
your test script, and then rollback the transaction at the end of the script.
Here's an example:
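A sketch of such a script (the psql settings keep the output quiet and
TAP-clean; point \i at wherever your distribution's copy of pgtap.sql lives):

\unset ECHO
\set QUIET 1
\pset format unaligned
\pset tuples_only true
\pset pager off

\set ON_ERROR_ROLLBACK 1
\set ON_ERROR_STOP true

BEGIN;
\i pgtap.sql

SELECT plan(1);
SELECT pass('My test passed, w00t!');

SELECT * FROM finish();
ROLLBACK;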

Of course, if you already have the pgTAP functions in your testing database,
you should skip \i pgtap.sql at the beginning of the script.

The only other limitation is that the pg_typeof() function, which is
written in C, will not be available in 8.3 and lower. You'll want to
comment out its declaration in the bundled copy of pgtap.sql and then avoid
using cmp_ok(), since that function relies on pg_typeof(). Note that
pg_typeof() is included in PostgreSQL 8.4, so you won't need to avoid it on
that version or higher.

Now you're ready to run your test script!

% psql -d try -Xf test.sql
1..1
ok 1 - My test passed, w00t!

You'll need to have all of those variables in the script to ensure that the
output is proper TAP and that all changes are rolled back -- including the
loading of the test functions -- in the event of an uncaught exception.

Using pg_prove

Or save yourself some effort -- and run a batch of test scripts or all of
your xUnit test functions at once -- by using pg_prove, available in the
TAP::Parser::SourceHandler::pgTAP
CPAN distribution. If you're not relying on installcheck, your test scripts
can be a lot less verbose; you don't need to set all the extra variables,
because pg_prove takes care of that for you:
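For example, a script run under pg_prove can be as simple as this sketch:

BEGIN;
SELECT plan(1);
SELECT pass('My test passed, w00t!');
SELECT * FROM finish();
ROLLBACK;

Then run it with something like:

% pg_prove -d try test.sql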

If you're using xUnit tests and just want to have pg_prove run them all
through the runtests() function, just tell it to do so:

% pg_prove -d myapp --runtests

Yep, that's all there is to it. Call pg_prove --verbose to see the
individual test descriptions, pg_prove --help to see other supported
options, and pg_prove --man to see its entire documentation.

Using pgTAP

The purpose of pgTAP is to provide a wide range of testing utilities that
output TAP. TAP, or the "Test Anything Protocol", is an emerging standard for
representing the output from unit tests. It owes its success to its format as
a simple text-based interface that allows for practical machine parsing and
high legibility for humans. TAP started life as part of the test harness for
Perl but now has implementations in C/C++, Python, PHP, and JavaScript, and
now in PostgreSQL.

There are two ways to use pgTAP: 1) In simple test scripts that use a plan to
describe the tests in the script; or 2) In xUnit-style test functions that you
install into your database and run all at once in the PostgreSQL client of
your choice.

I love it when a plan comes together

Before anything else, you need a testing plan. This basically declares how
many tests your script is going to run to protect against premature failure.

The preferred way to do this is to declare a plan by calling the plan()
function:

SELECT plan( 42 );

There are rare cases when you will not know beforehand how many tests your
script is going to run. In this case, you can declare that you have no plan.
(Try to avoid using this as it weakens your test.)

SELECT * FROM no_plan();

Often, though, you'll be able to calculate the number of tests, like so:

SELECT plan( COUNT(*) )
FROM foo;

At the end of your script, you should always tell pgTAP that the tests have
completed, so that it can output any diagnostics about failures or a
discrepancy between the planned number of tests and the number actually run:

SELECT * FROM finish();

What a sweet unit!

If you're used to xUnit testing frameworks, you can collect all of your tests
into database functions and run them all at once with runtests(). This is
similar to how PGUnit and
Epic work. The runtests() function does all the
work of finding and running your test functions in individual transactions. It
even supports setup and teardown functions. To use it, write your unit test
functions so that they return a set of text results, and then use the pgTAP
assertion functions to return TAP values, like so:
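For example, here's a sketch of a test function with two assertions (the
function name is illustrative; by default, runtests() runs functions whose
names begin with "test"):

CREATE OR REPLACE FUNCTION test_exponentiation()
RETURNS SETOF TEXT AS $$
BEGIN
    RETURN NEXT ok( 9 ^ 3 = 729 );
    RETURN NEXT ok( 9 ^ 3 = 729, 'simple exponential' );
END;
$$ LANGUAGE plpgsql;

SELECT * FROM runtests();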

The latter gives you some idea of what failed. It also makes it easier to find
the test in your script: simply search for "simple exponential".

All test functions take a name argument. It's optional, but highly suggested
that you use it.

Sometimes it's useful to extract test function names from pgTAP output,
especially when using the xUnit style with a continuous integration server
such as Hudson or TeamCity. By default, pgTAP emits these names as comments,
but you can change that behavior by overriding the diag_test_name() function.
With an override in place, runtests() will show

# test: my_example_test_function_name

instead of

# my_example_test_function_name()

which makes it easy to parse out test names and to distinguish them from
ordinary comments.
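A minimal override might look like this sketch; it assumes you want to keep
pgTAP's diag() comment formatting and merely prefix each name with "test:":

CREATE OR REPLACE FUNCTION diag_test_name( TEXT )
RETURNS TEXT AS $$
    SELECT diag( 'test: ' || $1 );
$$ LANGUAGE sql;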

I'm ok, you're not ok

The basic purpose of pgTAP--and of any TAP-emitting test framework, for that
matter--is to print out either "ok #" or "not ok #", depending on whether a
given test succeeded or failed. Everything else is just gravy.

All of the following functions return "ok" or "not ok" depending on whether
the test succeeded or failed.

ok()

SELECT ok( :boolean, :description );
SELECT ok( :boolean );

Parameters

:boolean

A boolean value indicating success or failure.

:description

A short description of the test.

This function simply evaluates any boolean expression and uses it to determine
if the test succeeded or failed. A true expression passes, a false one fails.
Very simple.

The :description is a very short description of the test that will be printed
out. It makes it very easy to find a test in your script when it fails and
gives others an idea of your intentions. The description is optional, but we
very strongly encourage its use.

Should an ok() fail, it will produce some diagnostics:

not ok 18 - sufficient mucus
# Failed test 18: "sufficient mucus"

Furthermore, should the boolean test result argument be passed as a NULL
rather than true or false, ok() will assume a test failure and attach an
additional diagnostic noting the NULL.

Similar to ok(), is() and isnt() compare their two arguments with IS
NOT DISTINCT FROM (=) and IS DISTINCT FROM (<>), respectively, and use
the result of that comparison to determine whether the test succeeded or
failed.
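For instance, these two tests (with illustrative values) are equivalent:

SELECT is( 42, 42, 'Life, the universe, and everything' );
SELECT ok( 42 IS NOT DISTINCT FROM 42, 'Life, the universe, and everything' );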

Note: Thanks to the use of the IS [ NOT ] DISTINCT FROM construct, NULLs
are not treated as unknowns by is() or isnt(). That is, if :have and
:want are both NULL, the test will pass, and if only one of them is
NULL, the test will fail.

So why use these test functions? They produce better diagnostics on failure.
ok() cannot know what you are testing for (beyond the description), but
is() and isnt() know what the test was and why it failed.
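For example, a failing test such as this hypothetical one:

SELECT is( 'foo'::text, 'bar', 'The strings should match' );

produces diagnostics that show both values (the test number is illustrative):

not ok 43 - The strings should match
# Failed test 43: "The strings should match"
#         have: foo
#         want: bar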

pass()

fail()

Sometimes you just want to say that the tests have passed. Usually the case is
you've got some complicated condition that is difficult to wedge into an
ok(). In this case, you can simply use pass() (to declare the test ok) or
fail() (for not ok). They are synonyms for ok(1) and ok(0).

Use these functions very, very, very sparingly.

isa_ok()

Checks to see if the given value is of a particular type. The description and
diagnostics of this test normally just refer to "the value". If you'd like
them to be more specific, you can supply a :name. For example, you might say
"the return value" when you're examining the result of a function call.

Pursuing Your Query

Sometimes, you've just gotta test a query. I mean the results of a full blown
query, not just the scalar assertion functions we've seen so far. pgTAP
provides a number of functions to help you test your queries, each of which
takes one or two SQL statements as arguments. For example:

SELECT throws_ok('SELECT divide_by(0)');

Yes, as strings. Of course, you'll often need to do something complex in your
SQL, and quoting SQL in strings in what is, after all, an SQL application, is
an unnecessary PITA. Each of the query-executing functions in this section
thus supports an alternative to make your tests more SQLish: using prepared
statements.

[Prepared statements](http://www.postgresql.org/docs/current/static/sql-prepare.html
"PostgreSQL Documentation: PREPARE") allow you to just write SQL and simply
pass the prepared statement names to test functions. For example, the above
example can be rewritten as:

PREPARE mythrow AS SELECT divide_by(0);
SELECT throws_ok('mythrow');

pgTAP assumes that an SQL argument without space characters or starting with a
double quote character is a prepared statement and simply EXECUTEs it. If
you need to pass arguments to a prepared statement, perhaps because you plan
to use it in multiple tests to return different values, just EXECUTE it
yourself. Here's an example with a prepared statement with a space in its
name, and one where arguments need to be passed:
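Here's a sketch; "my test", active_users(), and divide_by() (from the earlier
example) are all hypothetical:

PREPARE "my test" AS SELECT * FROM active_users();
SELECT lives_ok( '"my test"', 'We should be able to select active users' );

PREPARE divider AS SELECT divide_by($1);
SELECT throws_ok( 'EXECUTE divider(0)', 'division by zero' );

Because 'EXECUTE divider(0)' contains spaces, it is executed as ordinary SQL,
which in turn runs the prepared statement with the argument you supplied.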

For the functions described below that compare a query to an array, the first
query must return only one column, of the same type as the values in the
array. If you need to test more columns, you'll need to use two queries.

Keeping these techniques in mind, read on for all of the query-testing
goodness.

To Error is Human

Sometimes you just want to know that a particular query will trigger an error.
Or maybe you want to make sure a query does not trigger an error. For such
cases, we provide a couple of test functions to make sure your queries are as
error-prone as you think they should be.

When you want to make sure that an exception is thrown by PostgreSQL, use
throws_ok() to test for it.

The first argument should be the name of a prepared statement or else a string
representing the query to be executed (see the summary
for query argument details). throws_ok() will use the PL/pgSQL EXECUTE
statement to execute the query and catch any exception.

The second argument should be an exception error code, which is a
five-character string (if it happens to consist only of numbers and you pass
it as an integer, it will still work). If this value is not NULL,
throws_ok() will check the thrown exception to ensure that it is the
expected exception. For a complete list of error codes, see [Appendix
A.](http://www.postgresql.org/docs/current/static/errcodes-appendix.html
"Appendix A. PostgreSQL Error Codes") in the PostgreSQL
documentation.

The third argument is an error message. This will be most useful for functions
you've written that raise exceptions, so that you can test the exception
message that you've thrown. Otherwise, for core errors, you'll need to be
careful of localized error messages. One trick to get around localized error
messages is to pass NULL as the third argument. This allows you to still pass
a description as the fourth argument.

The fourth argument is, of course, a brief test description.
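Building on a hypothetical try table with a primary key on id (the same table
used in the lives_ok() example below), a useful test might be:

SELECT throws_ok(
    'INSERT INTO try (id) VALUES (1)',
    '23505',
    NULL,
    'We should get a unique violation for a duplicate PK'
);

Here '23505' is the unique_violation error code, and NULL skips the error
message check so that the test is immune to localization.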

For the two- and three-argument forms of throws_ok(), if the second argument
is exactly five bytes long, it is assumed to be an error code and the optional
third argument is the error message. Otherwise, the second argument is assumed
to be an error message and the third argument is a description. If for some
reason you need to test an error message that is five bytes long, use the
four-argument form.

lives_ok()

SELECT lives_ok( :sql, :description );
SELECT lives_ok( :sql );

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:description

A short description of the test.

The inverse of throws_ok(), lives_ok() ensures that an SQL statement does
not throw an exception. Pass in the name of a prepared statement or string
of SQL code (see the summary for query argument
details). The optional second argument is the test description. An example:

SELECT lives_ok(
'INSERT INTO try (id) VALUES (1)',
'We should not get a unique violation for a new PK'
);

performs_ok()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:milliseconds

Number of milliseconds.

:description

A short description of the test.

This function makes sure that an SQL statement performs well. It does so by
timing its execution and failing if execution takes longer than the specified
number of milliseconds. An example:

PREPARE fast_query AS SELECT id FROM try WHERE name = 'Larry';
SELECT performs_ok(
'fast_query',
250,
'A select by name should be fast'
);

The first argument should be the name of a prepared statement or a string
representing the query to be executed (see the summary
for query argument details). performs_ok() will use the PL/pgSQL EXECUTE
statement to execute the query.

The second argument is the maximum number of milliseconds it should take for
the SQL statement to execute. This argument is numeric, so you can even use
fractions of milliseconds if it floats your boat.

The third argument is the usual description. If not provided, performs_ok()
will generate the description "Should run in less than $milliseconds ms".
You'll likely want to provide your own description if you have more than a
couple of these in a test script or function.

Should a performs_ok() test fail it produces appropriate diagnostic
messages. For example:
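Such a failure, using the example above, might be reported along these lines
(the timing numbers are illustrative):

not ok 19 - A select by name should be fast
# Failed test 19: "A select by name should be fast"
#       runtime: 352.459 ms
#       exceeds: 250 ms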

Note: There is a little extra time included in the execution time for the
overhead of PL/pgSQL's EXECUTE, which must compile and execute the SQL
string. You will want to account for this and pad your estimates accordingly.
It's best to think of this as a brute force comparison of runtimes, in order
to ensure that a query is not really slow (think seconds).

Can You Relate?

So you've got your basic scalar comparison functions, what about relations?
Maybe you have some pretty hairy SELECT statements in views or functions to
test? We've got your relation-testing functions right here.

results_eq()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:array

An array of values representing a single-column row values.

:cursor

A PostgreSQL refcursor value representing a named cursor.

:description

A short description of the test.

There are three ways to test result sets in pgTAP. Perhaps the most intuitive
is to do a direct row-by-row comparison of results to ensure that they are
exactly what you expect, in the order you expect. Coincidentally, this is
exactly how results_eq() behaves. Here's how you use it: simply pass in two
SQL statements or prepared statement names (or some combination; see the
summary for query argument details) and an optional
description. Yep, that's it. It will do the rest.

For example, say that you have a function, active_users(), that returns a
set of rows from the users table. To make sure that it returns the rows you
expect, you might do something like this:
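For example (assuming the users table has a boolean active column; adjust to
your schema):

SELECT results_eq(
    'SELECT * FROM active_users()',
    'SELECT * FROM users WHERE active ORDER BY nick',
    'active_users() should return the active users'
);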

In general, the use of prepared statements is highly recommended to keep your
test code SQLish (you can even use VALUES in prepared statements in
PostgreSQL 8.2 and up!). But note that, because results_eq() does a
row-by-row comparison, the results of the two query arguments must be in
exactly the same order, with exactly the same data types, in order to pass. In
practical terms, it means that you must make sure that your results are always
unambiguously ordered.

For example, say that you want to compare queries against a persons table.
The simplest way to sort is by name, as in:
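For example, suppose the persons table (hypothetically) contains two people
named Tom with different ages. Sorting by name alone leaves their relative
order unspecified, so a test like this:

SELECT results_eq(
    'SELECT name, age FROM persons ORDER BY name',
    $$ VALUES ('Albert', 64), ('Tom', 35), ('Tom', 44) $$
);

may fail with diagnostics along these lines (the numbers are illustrative):

# Failed test 42
#     Results differ beginning at row 2:
#         have: (Tom,44)
#         want: (Tom,35)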

Notice how the two "Tom" rows are reversed. The upshot is that you must ensure
that your queries are always fully ordered. In a case like the above, it means
sorting on both the name column and the age column. If the sort order of
your results isn't important, consider using set_eq() or bag_eq() instead.

Internally, results_eq() turns your SQL statements into cursors so that it
can iterate over them one row at a time. Conveniently, this behavior is
directly available to you, too. Rather than pass in some arbitrary SQL
statement or the name of a prepared statement, simply create a cursor and pass
it in, like so:
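For example (the cursor and function names here are illustrative):

DECLARE cwant CURSOR FOR SELECT * FROM active_users();
DECLARE chave CURSOR FOR SELECT * FROM users WHERE active ORDER BY name;

SELECT results_eq(
    'cwant'::refcursor,
    'chave'::refcursor,
    'Gotta have those active users!'
);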

The key is to ensure that the cursor names are passed as refcursors. This
allows results_eq() to disambiguate them from prepared statements. And of
course, you can mix and match cursors, prepared statements, and SQL as much as
you like. Here's an example using a prepared statement and a (reset) cursor
for the expected results:
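A sketch, again with illustrative names; MOVE BACKWARD ALL rewinds a cursor
that has already been read, so the test sees all of its rows:

DECLARE cexpect CURSOR FOR SELECT * FROM users WHERE active ORDER BY name;
MOVE BACKWARD ALL IN cexpect;

PREPARE users_test AS SELECT * FROM active_users();

SELECT results_eq(
    'users_test',
    'cexpect'::refcursor,
    'Gotta have those active users!'
);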

On PostgreSQL 8.3 and down, the rows are cast to text for comparison, rather
than compared as record objects. The downside to this necessity is that the
test cannot detect incompatibilities in column numbers or types, or
differences in columns that convert to the same text representation. For
example, a NULL column will be equivalent to an empty string. As a result,
pgTAP will not show the have and want values if they are the same, just
the error message, like so:

# Failed test 149
# Number of columns or their types differ between the queries

results_ne()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:array

An array of values representing a single-column row values.

:cursor

A PostgreSQL refcursor value representing a named cursor.

:description

A short description of the test.

The inverse of results_eq(), this function tests that query results are not
equivalent. Note that, as with results_eq(), order matters, so you can actually
have the same sets of results in the two query arguments and the test will
pass if they're merely in a different order. More than likely what you really
want is results_eq() or set_ne(). But this function is included for
completeness and is kind of cute, so enjoy. If a results_ne() test fails,
however, there will be no diagnostics, because, well, the results will be the
same!

Note that the caveats for results_eq() on PostgreSQL 8.3 and down apply to
results_ne() as well.

set_eq()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:array

An array of values representing a single-column row values.

:description

A short description of the test.

Sometimes you don't care what order query results are in, or if there are
duplicates. In those cases, use set_eq() to do a simple set comparison of
your result sets. As long as both queries return the same records, regardless
of duplicates or ordering, a set_eq() test will pass.

The SQL arguments can be the names of prepared statements or strings
containing an SQL query (see the summary for query
argument details), or even one of each. If the results returned by the first
argument consist of a single column, the second argument may be an array:
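For example, given a hypothetical function that returns a single column of
user IDs:

SELECT set_eq(
    'SELECT * FROM active_user_ids()',
    ARRAY[ 2, 3, 4, 5 ]
);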

set_ne()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:array

An array of values representing a single-column row values.

:description

A short description of the test.

The inverse of set_eq(), this function tests that the results of two queries
are not the same. The two queries can as usual be the names of prepared
statements or strings containing an SQL query (see the
summary for query argument details), or even one of
each. The two queries, however, must return results that are directly
comparable -- that is, with the same number and types of columns in the same
orders. If it happens that the query you're testing returns a single column,
the second argument may be an array.

set_has()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:description

A short description of the test.

When you need to test that a query returns at least some subset of records,
set_has() is the hammer you're looking for. It tests that the results of
a query contain at least the results returned by another query, if not more.
That is, the test passes if the second query's results are a subset of the
first query's results. The second query can even return an empty set, in which
case the test will pass no matter what the first query returns. Not very
useful perhaps, but set-theoretically correct.

As with set_eq(), the SQL arguments can be the names of prepared statements
or strings containing an SQL query (see the summary
for query argument details), or one of each. If it happens that the query
you're testing returns a single column, the second argument may be an array.

In any case, a failing test will yield useful diagnostics, like these:

# Failed test 122
# Missing records:
# (44,Anna)
# (86,Angelina)

As with set_eq(), set_has() will also provide useful diagnostics when the
queries return incompatible columns. Internally, it uses an EXCEPT query to
determine whether there are any unexpectedly missing results.

bag_eq()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:array

An array of values representing a single-column row values.

:description

A short description of the test.

The bag_eq() function is just like set_eq(), except that it considers the
results as bags rather than as sets. A bag is a set that allows duplicates. In
practice, it means that you can use bag_eq() to test result sets where order
doesn't matter, but duplication does. In other words, if two rows are the
same in the first result set, the same row must appear twice in the second
result set.

Otherwise, this function behaves exactly like set_eq(), including the
utility of its diagnostics.

bag_ne()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:array

An array of values representing a single-column row values.

:description

A short description of the test.

The inverse of bag_eq(), this function tests that the results of two queries
are not the same, including duplicates. The two queries can as usual be the
names of prepared statements or strings containing an SQL query (see the
summary for query argument details), or even one of
each. The two queries, however, must return results that are directly
comparable -- that is, with the same number and types of columns in the same
orders. If it happens that the query you're testing returns a single column,
the second argument may be an array.

bag_has()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:description

A short description of the test.

The bag_has() function is just like set_has(), except that it considers
the results as bags rather than as sets. A bag is a set with duplicates. In
practice, this means that you can use bag_has() to test result sets where
order doesn't matter, but duplication does. Internally, it uses an EXCEPT
ALL query to determine whether there are any unexpectedly missing results.

bag_hasnt()

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:description

A short description of the test.

This test function is the inverse of bag_has(): the test passes when the
results of the first query have none of the results of the second query.
Diagnostics are similarly useful:

# Failed test 198
# Extra records:
# (44,Anna)
# (86,Angelina)

Internally, the function uses an INTERSECT ALL query to determine if there
is any unexpected overlap between the query results. This means that a
duplicate row in the first query will appear twice in the diagnostics if it is
also duplicated in the second query.

is_empty()

SELECT is_empty( :sql, :description );
SELECT is_empty( :sql );

Parameters

:sql

An SQL statement or the name of a prepared statement, passed as a string.

:description

A short description of the test.

The is_empty() function takes a single query string or prepared statement
name as its first argument, and tests that said query returns no records.
Internally, it simply executes the query; if there are any results, the test
fails and the results are displayed in the failure diagnostics.

row_eq()

Compares the contents of a single row to a record. Due to the limitations of
non-C functions in PostgreSQL, a bare RECORD value cannot be passed to the
function. You must instead pass in a valid composite type value, and cast the
record argument (the second argument) to the same type. Both explicitly
created composite types and table types are supported. Thus, you can do this:
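Here's a sketch; the sometype composite type and the values are created just
for the comparison:

CREATE TYPE sometype AS (
    id    INT,
    name  TEXT
);

SELECT row_eq(
    $$ SELECT 1, 'Larry' $$,
    ROW(1, 'Larry')::sometype,
    'The row should contain Larry'
);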

Note the use of the sometype composite type for the second argument. The
upshot is that you can create composite types in your tests explicitly for
comparing the return values of your queries, if such queries don't return an
existing valid type.

Hopefully someday in the future we'll be able to support arbitrary record
arguments. In the meantime, this is the 90% solution.

Diagnostics on failure are similar to those from is():

# Failed test 322
# have: (1,Jacob)
# want: (1,Larry)

The Schema Things

Need to make sure that your database is designed just the way you think it
should be? Use these test functions and rest easy.

A note on comparisons: pgTAP uses a simple equivalence test (=) to compare
all SQL identifiers, such as the names of tables, schemas, functions, indexes,
and columns (but not data types). So in general, you should always use
lowercase strings when passing identifier arguments to the functions below.
Use mixed case strings only when the objects were declared in your schema
using double-quotes. For example, if you created a table like so:

CREATE TABLE Foo (id integer);

Then you must test for it using only lowercase characters (if you want the
test to pass):

SELECT has_table('foo');

If, however, you declared the table using a double-quoted string, like so:

CREATE TABLE "Foo" (id integer);

Then you'd need to test for it using exactly the same string, including case,
like so:

SELECT has_table('Foo');

In general, this should not be an issue, as mixed-case objects are created
only rarely. So if you just stick to lowercase-only arguments to these
functions, you should be in good shape.

I Object!

In a busy development environment, you might have a number of users who make
changes to the database schema. Sometimes you have to really work to keep
these folks in line. For example, do they add objects to the database without
adding tests? Do they drop objects that they shouldn't? These assertions are
designed to help you ensure that the objects in the database are exactly the
objects that should be in the database, no more, no less.

Each of these functions tests that all of the objects of a given type in the
database are only the objects that should be there. In other words, given a
list of objects, say tables in
a call to tables_are(), this assertion will fail if there are tables that
are not in the list, or if there are tables in the list that are missing from
the database. It can also be useful for testing replication and the success or
failure of schema change deployments.

If you're more interested in the specifics of particular objects, skip to
the next section.

tables_are()

This function tests that all of the tables in the named schema, or that are
visible in the search path, are only the tables that should be there. If the
:schema argument is omitted, tables will be sought in the search path,
excluding pg_catalog and information_schema. If the description is omitted,
a generally useful default description will be generated. Example:
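For example (the schema and table names are, of course, illustrative):

SELECT tables_are( 'myschema', ARRAY[ 'users', 'widgets' ] );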

views_are()

This function tests that all of the views in the named schema, or that are
visible in the search path, are only the views that should be there. If the
:schema argument is omitted, views will be sought in the search path,
excluding pg_catalog and information_schema. If the description is omitted,
a generally useful default description will be generated. Example:
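For example (illustrative names):

SELECT views_are( 'myschema', ARRAY[ 'active_users', 'pending_orders' ] );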

sequences_are()

This function tests that all of the sequences in the named schema, or that are
visible in the search path, are only the sequences that should be there. If
the :schema argument is omitted, sequences will be sought in the search
path, excluding pg_catalog and information_schema. If the description is
omitted, a generally useful default description will be generated. Example:
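For example (illustrative names):

SELECT sequences_are( 'myschema', ARRAY[ 'users_id_seq', 'widgets_id_seq' ] );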

columns_are()

This function tests that all of the columns on the named table are only the
columns that should be on that table. If the :schema argument is omitted,
the table must be visible in the search path, excluding pg_catalog and
information_schema. If the description is omitted, a generally useful
default description will be generated. Example:
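For example (illustrative names):

SELECT columns_are( 'myschema', 'users', ARRAY[ 'id', 'nick', 'password' ] );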

indexes_are()

This function tests that all of the indexes on the named table are only the
indexes that should be on that table. If the :schema argument is omitted,
the table must be visible in the search path, excluding pg_catalog and
information_schema. If the description is omitted, a generally useful
default description will be generated. Example:
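For example (illustrative names):

SELECT indexes_are( 'myschema', 'users', ARRAY[ 'users_pkey', 'users_nick_idx' ] );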

triggers_are()

This function tests that all of the triggers on the named table are only the
triggers that should be on that table. If the :schema argument is omitted,
the table must be visible in the search path, excluding pg_catalog and
information_schema. If the description is omitted, a generally useful
default description will be generated. Example:
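For example (illustrative names):

SELECT triggers_are( 'myschema', 'users', ARRAY[ 'set_users_timestamp' ] );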

functions_are()

This function tests that all of the functions in the named schema, or that are
visible in the search path, are only the functions that should be there. If
the :schema argument is omitted, functions will be sought in the search
path, excluding pg_catalog and information_schema. If the description is
omitted, a generally useful default description will be generated. Example:
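Here, the schema and function names are placeholders:

SELECT functions_are('myschema', ARRAY[ 'do_something', 'do_something_else' ]);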

This function tests that all of the operator classes in the named schema, or
that are visible in the search path, are only the opclasses that should be
there. If the :schema argument is omitted, opclasses will be sought in the
search path, excluding pg_catalog and information_schema. If the
description is omitted, a generally useful default description will be
generated. Example:
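Here, the schema and operator class names are placeholders:

SELECT opclasses_are('myschema', ARRAY[ 'widget_ops' ]);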

This function tests that all of the rules on the named relation are only the
rules that should be on that relation (a table or a view). If the :schema
argument is omitted, the rules must be visible in the search path, excluding
pg_catalog and information_schema. If the description is omitted, a
generally useful default description will be generated. Example:
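Here, the schema, relation, and rule names are placeholders:

SELECT rules_are('myschema', 'users', ARRAY[ 'on_insert', 'on_delete' ]);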

Tests that all of the types in the named schema are the only types in that
schema, including base types, composite types, domains, enums, and
pseudo-types. If the :schema argument is omitted, the types must be visible
in the search path, excluding pg_catalog and information_schema. If the
description is omitted, a generally useful default description will be
generated. Example:

SELECT types_are('myschema', ARRAY[ 'timezone', 'state' ]);

In the event of a failure, you'll see diagnostics listing the extra and/or
missing types, like so:

Tests that all of the domains in the named schema are the only domains in that
schema. If the :schema argument is omitted, the domains must be visible in
the search path, excluding pg_catalog and information_schema. If the
description is omitted, a generally useful default description will be
generated. Example:

SELECT domains_are('myschema', ARRAY[ 'timezone', 'state' ]);

In the event of a failure, you'll see diagnostics listing the extra and/or
missing domains, like so:

Tests that all of the enums in the named schema are the only enums in that
schema. Enums are supported in PostgreSQL 8.3 and up. If the :schema
argument is omitted, the enums must be visible in the search path, excluding
pg_catalog and information_schema. If the description is omitted, a
generally useful default description will be generated. Example:

SELECT enums_are('myschema', ARRAY[ 'timezone', 'state' ]);

In the event of a failure, you'll see diagnostics listing the extra and/or
missing enums, like so:

casts_are()

SELECT casts_are( :casts, :description );
SELECT casts_are( :casts );

Parameters

:casts

An array of cast names.

:description

A short description of the test.

This function tests that all of the casts in the database are only the casts
that should be in that database. Casts are specified as strings in a syntax
similar to how they're declared via CREATE CAST. The pattern is
:source_type AS :target_type. If either type was created with double quotes
to force mixed case or special characters, then you must use double quotes in
the cast strings. Example:
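A sketch listing just two casts; a real database will have many more:

SELECT casts_are(ARRAY[
    'integer AS bigint',
    'bigint AS integer'
]);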

Tests that all of the operators in the named schema are the only operators in
that schema. If the :schema argument is omitted, the operators must be
visible in the search path, excluding pg_catalog and information_schema.
If the description is omitted, a generally useful default description will be
generated.
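For example, with a placeholder operator list (the operator strings follow
the format described below):

SELECT operators_are('public', ARRAY[
    '=(citext,citext) RETURNS boolean',
    '-(NONE,bigint) RETURNS bigint',
    '!(bigint,NONE) RETURNS numeric'
]);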

The :operators argument is specified as an array of strings in which
each operator is defined similarly to the display of the :regoperator type.
The format is :op(:leftop,:rightop) RETURNS :return_type.

For left operators the left argument type should be NONE. For right
operators, the right argument type should be NONE. The example above shows
one of each of the operator types: =(citext,citext) is an infix operator,
-(NONE,bigint) is a left operator, and !(bigint,NONE) is a right operator.

To Have or Have Not

Perhaps you're not so concerned with ensuring the precise correlation of
database objects. Perhaps you just need to make sure
that certain objects exist (or that certain objects don't exist). You've
come to the right place.

This function tests whether or not a tablespace exists in the database. The
first argument is a tablespace name. The second is either a file system
path for the database or a test description. If you specify a location path,
you must pass a description as the third argument; otherwise, if you omit the
test description, it will be set to "Tablespace :tablespace should exist".
Example:
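Here, the tablespace name and location are placeholders:

SELECT has_tablespace(
    'sometablespace',
    '/data/tablespaces',
    'Tablespace sometablespace should exist'
);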

has_schema()

This function tests whether or not a schema exists in the database. The first
argument is a schema name and the second is the test description. If you omit
the test description, it will be set to "Schema :schema should exist".

hasnt_schema()

SELECT hasnt_schema(
'someschema',
'There should be no schema someschema'
);

Parameters

:schema

Name of a schema.

:description

A short description of the test.

This function is the inverse of has_schema(). The test passes if the
specified schema does not exist.

This function tests whether or not a relation exists in the database.
Relations are tables, views, sequences, composite types, foreign tables, and
toast tables. The first argument is a schema name, the second is a relation
name, and the third is the test description. If you omit the schema, the
relation must be visible in the search path. Example:

SELECT has_relation('myschema', 'somerelation');

If you omit the test description, it will be set to "Relation :relation
should exist".

This function tests whether or not a table exists in the database. The first
argument is a schema name, the second is a table name, and the third is the
test description. If you omit the schema, the table must be visible in the
search path. Example:

SELECT has_table('myschema', 'sometable');

If you omit the test description, it will be set to "Table :table should
exist".

Note that this function will not recognize foreign tables; use
has_foreign_table() to test for the presence of foreign tables.

This function tests whether or not a view exists in the database. The first
argument is a schema name, the second is a view name, and the third is the
test description. If you omit the schema, the view must be visible in the
search path. Example:

SELECT has_view('myschema', 'someview');

If you omit the test description, it will be set to "View :view should
exist".

This function tests whether or not a sequence exists in the database. The first
argument is a schema name, the second is a sequence name, and the third is the
test description. If you omit the schema, the sequence must be visible in the
search path. Example:

SELECT has_sequence('myschema', 'somesequence');

If you omit the test description, it will be set to "Sequence :sequence should
exist".

This function tests whether or not a foreign table exists in the database. The
first argument is a schema name, the second is a foreign table name, and the
third is the test description. If you omit the schema, the foreign table must
be visible in the search path. Example:

SELECT has_foreign_table('myschema', 'some_foreign_table');

If you omit the test description, it will be set to "Foreign table :table
should exist".

This function tests whether or not a type exists in the database. It detects
all kinds of types, including base types, composite types, domains, enums, and
pseudo-types. The first argument is a schema name, the second is a type name,
and the third is the test description. If you omit the schema, the type must
be visible in the search path. If you omit the test description, it will be
set to "Type :type should exist". If you're passing a schema and type rather
than type and description, be sure to cast the arguments to name values so
that your type name doesn't get treated as a description. Example:

SELECT has_type( 'myschema', 'sometype' );

If you've created a composite type and want to test that the composed types
are a part of it, use the column testing functions to verify them, like so:
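For instance, with placeholder names (the description is included to
disambiguate the arguments):

SELECT has_column(
    'myschema', 'somecomposite', 'somefield',
    'Composite type somecomposite should have a somefield column'
);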

This function tests whether or not a composite type exists in the database.
The first argument is a schema name, the second is the name of a composite
type, and the third is the test description. If you omit the schema, the
composite type must be visible in the search path. If you omit the test
description, it will be set to "Composite type :composite type should
exist". Example:

SELECT has_composite( 'myschema', 'somecomposite' );

If you're passing a schema and composite type rather than composite type and
description, be sure to cast the arguments to name values so that your
composite type name doesn't get treated as a description.

This function tests whether or not a domain exists in the database. The first
argument is a schema name, the second is the name of a domain, and the third
is the test description. If you omit the schema, the domain must be visible in
the search path. If you omit the test description, it will be set to "Domain
:domain should exist". Example:

SELECT has_domain( 'myschema', 'somedomain' );

If you're passing a schema and domain rather than domain and description, be
sure to cast the arguments to name values so that your domain name doesn't
get treated as a description.

This function tests whether or not an enum exists in the database. Enums are
supported in PostgreSQL 8.3 or higher. The first argument is a schema name,
the second is an enum name, and the third is the test description. If you
omit the schema, the enum must be visible in the search path. If you omit the
test description, it will be set to "Enum :enum should exist". Example:

SELECT has_enum( 'myschema', 'someenum' );

If you're passing a schema and enum rather than enum and description, be sure
to cast the arguments to name values so that your enum name doesn't get
treated as a description.

Checks for the existence of an index associated with the named table. The
:schema argument is optional, as is the column name or names or expression,
and the description. The columns argument may be a string naming one column or
expression, or an array of column names and/or expressions. For expressions,
you must use lowercase for all SQL keywords and functions to properly compare
to PostgreSQL's internal form of the expression. Non-functional expressions
should also be wrapped in parentheses. A few examples:
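These use placeholder schema, table, index, and column names; note the
lowercase keywords in the expression form:

SELECT has_index('myschema', 'users', 'idx_users_name', ARRAY['name']);
SELECT has_index('myschema', 'users', 'idx_users_fullname', ARRAY['first_name', 'last_name']);
SELECT has_index(
    'myschema', 'users', 'idx_lower_email', 'lower(email)',
    'Index on lower(email) should exist'
);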

Tests to see if the specified table has the named trigger. The :description
is optional, and if the schema is omitted, the table with which the trigger is
associated must be visible in the search path.

Checks to be sure that the given function exists in the named schema and with
the specified argument data types. If :schema is omitted, has_function()
will search for the function in the schemas defined in the search path. If
:args is omitted, has_function() will see if the function exists without
regard to its arguments. Some examples:
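The first call below checks a built-in function; the others use placeholder
names:

SELECT has_function('pg_catalog', 'lower', ARRAY['text']);
SELECT has_function('do_something');
SELECT has_function('do_something', ARRAY['integer', 'text']);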

The :args argument should be formatted as it would be displayed in the view
of a function using the \df command in psql. For example, even if you have
a numeric column with a precision of 8, you should specify ARRAY['numeric'].
If you created a varchar(64) column, you should pass the :args argument as
ARRAY['character varying'].

If you wish to use the two-argument form of has_function(), specifying only
the schema and the function name, you must cast the :function argument to
:name in order to disambiguate it from the
has_function(:function, :description) form. If you neglect to do so, your
results will be unexpected.

Also, if you use the string form to specify the :args array, be sure to cast
it to name to disambiguate it from a text string:

SELECT has_function( 'lower', '{text}'::name[] );

Deprecation notice: The old name for this test function, can_ok(), is
still available, but emits a warning when called. It will be removed in a
future version of pgTAP.

Tests for the existence of a cast. A cast consists of a source data type, a
target data type, and perhaps a (possibly schema-qualified) function. An example:

SELECT has_cast( 'integer', 'bigint', 'pg_catalog', 'int8' );

If you omit the description for the 3- or 4-argument version, you'll need to
cast the function name to the NAME data type so that PostgreSQL doesn't
resolve the function name as a description. For example:

SELECT has_cast( 'integer', 'bigint', 'int8'::NAME );

pgTAP will generate a useful description if you don't provide one.

Note that pgTAP does not compare typemods. So if you wanted to test for a cast
between, say, a uuid type and bit(128), this will not work:

Tests for the presence of a binary operator. If the operator exists with the
given schema, name, left and right arguments, and return value, the test will
pass. If the operator does not exist, the test will fail. Example:

If you omit the schema name, then the operator must be visible in the search
path. If you omit the test description, pgTAP will generate a reasonable one
for you. The return value is also optional. If you need to test for a left or
right unary operator, use has_leftop() or has_rightop() instead.

Tests for the presence of a left-unary operator. If the operator exists with
the given schema, name, right argument, and return value, the test will pass.
If the operator does not exist, the test will fail. Example:

SELECT has_leftop( 'pg_catalog', '!!', 'bigint', 'numeric' );

If you omit the schema name, then the operator must be visible in the search
path. If you omit the test description, pgTAP will generate a reasonable one
for you. The return type is also optional.

Tests for the presence of a right-unary operator. If the operator exists with
the given left argument, schema, name, and return value, the test will pass.
If the operator does not exist, the test will fail. Example:

SELECT has_rightop( 'bigint', 'pg_catalog', '!', 'numeric' );

If you omit the schema name, then the operator must be visible in the search
path. If you omit the test description, pgTAP will generate a reasonable one
for you. The return type is also optional.

Tests for the presence of an operator class. If you omit the schema name, then
the operator class must be visible in the search path. If you omit the test
description, pgTAP will generate a reasonable one for you.

hasnt_language()

The inverse of has_language(), this function tests for the absence of a
procedural language.

Table For One

Okay, you're sure that your database has exactly the [right schema](#I+Object!
"I Object!") and that all of the objects you need are
there. So let's take a closer
look at tables. There are a lot of ways to look at tables, to make sure that
they have all the columns, indexes, constraints, and keys they need. So we
have the assertions to validate 'em.

Tests whether or not a column exists in a given table, view, or composite
type. The first argument is the schema name, the second the table name, the
third the column name, and the fourth is the test description. If the schema
is omitted, the table must be visible in the search path. If the test
description is omitted, it will be set to "Column :table.:column should
exist".

Tests whether the specified column has a NOT NULL constraint. The first
argument is the schema name, the second the table name, the third the column
name, and the fourth is the test description. If the schema is omitted, the
table must be visible in the search path. If the test description is omitted,
it will be set to "Column :table.:column should be NOT NULL". Note that this
test will fail with a useful diagnostic message if the table or column in
question does not exist. But use has_column() to make sure the column exists
first, eh?

This function is the inverse of col_not_null(): the test passes if the
column does not have a NOT NULL constraint. The first argument is the schema
name, the second the table name, the third the column name, and the fourth is
the test description. If the schema is omitted, the table must be visible in
the search path. If the test description is omitted, it will be set to "Column
:table.:column should allow NULL". Note that this test will fail with a
useful diagnostic message if the table or column in question does not exist.
But use has_column() to make sure the column exists first, eh?

This function is the inverse of col_has_default(). The test passes if the
specified column does not have a default. It will still fail if the column
does not exist, and emit useful diagnostics to let you know.

This function tests that the specified column is of a particular type. If it
fails, it will emit diagnostics naming the actual type. The first argument is
the schema name, the second the table name, the third the column name, the
fourth the type's schema, the fifth the type, and the sixth is the test
description. If the table schema is omitted, the table and the type
must be visible in the search path. If the test description is omitted, it
will be set to "Column :schema.:table.:column should be type
:schema.:type". Note that this test will fail if the table or column in
question does not exist.

The type argument should be formatted as it would be displayed in the view of
a table using the \d command in psql. For example, if you have a numeric
column with a precision of 8, you should specify "numeric(8,0)". If you
created a varchar(64) column, you should pass the type as "character
varying(64)". Example:
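Here, with the schema omitted and placeholder table and column names:

SELECT col_type_is('users', 'name', 'character varying(64)');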

Tests the default value of a column. If it fails, it will emit diagnostics
showing the actual default value. The first argument is the schema name, the
second the table name, the third the column name, the fourth the default
value, and the fifth is the test description. If the schema is omitted, the
table must be visible in the search path. If the test description is omitted,
it will be set to "Column :table.:column should default to :default". Note
that this test will fail if the table or column in question does not exist.

The default argument must have an unambiguous type in order for the call to
succeed. If you see an error such as 'ERROR: could not determine polymorphic
type because input has type "unknown"', it's because you forgot to cast the
expected value, probably a NULL (which, by the way, you can only properly
test for in PostgreSQL 8.3 and later), to its proper type. IOW, this will
fail:

SELECT col_default_is( 'tab', age, NULL );

But this will not:

SELECT col_default_is( 'tab', age, NULL::integer );

You can also test for functional defaults. Just specify the function call as a
string:

SELECT col_default_is( 'user', 'created_at', 'now()' );

If the test fails, it will output useful diagnostics. For example, this test:

And if the test fails because the table or column in question does not exist,
the diagnostics will tell you that, too. But you use has_column() and
col_has_default() to test those conditions before you call
col_default_is(), right? Right??? Yeah, good, I thought so.

Tests whether or not a table has a primary key. The first argument is the
schema name, the second the table name, and the third is the test description.
If the schema is omitted, the table must be visible in the search path. If the
test description is omitted, it will be set to "Table :table should have a
primary key". Note that this test will fail if the table in question does not
exist.

Tests whether or not a table has a foreign key constraint. The first argument
is the schema name, the second the table name, and the third is the test
description. If the schema is omitted, the table must be visible in the search
path. If the test description is omitted, it will be set to "Table :table
should have a foreign key constraint". Note that this test will fail if the
table in question does not exist.

Tests whether the specified column or columns in a table is/are the primary
key for that table. If it fails, it will emit diagnostics showing the actual
primary key columns, if any. The first argument is the schema name, the second
the table name, the third the column name or an array of column names, and the
fourth is the test description. Examples:
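Here, the schema, table, and column names are placeholders:

SELECT col_is_pk(
    'myschema', 'users', 'id',
    'Column users.id should be the primary key'
);
SELECT col_is_pk('line_items', ARRAY['order_id', 'line_number']);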

If the schema is omitted, the table must be visible in the search path. If the
test description is omitted, it will be set to "Column :table(:column)
should be a primary key". Note that this test will fail if the table or column
in question does not exist.

If the test fails, it will output useful diagnostics. For example this test:

Just like col_is_pk(), except that it tests that the column or array of
columns are a foreign key. The diagnostics on failure are a bit different,
too. Since the table might have more than one foreign key, the diagnostics
simply list all of the foreign key constraint columns, like so:

This function combines col_is_fk() and col_is_pk() into a single test that
also happens to determine that there is in fact a foreign key relationship
between the foreign and primary key tables. To properly test your
relationships, this should be your main test function of choice.

The first three arguments are the schema, table, and column or array of
columns that constitute the foreign key constraint. The schema name is
optional, and the columns can be specified as a string for a single column or
an array of strings for multiple columns. The next three arguments are the
schema, table, and column or columns that constitute the corresponding primary
key. Again, the schema is optional and the columns may be a string or array of
strings (though of course it should have the same number of elements as the
foreign key column argument). The seventh argument is an optional description.
If it's not included, it will be set to :fk_schema.:fk_table(:fk_column)
should reference :pk_schema.:pk_table(:pk_column). Some examples:
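Here, the schema, table, and column names are placeholders:

SELECT fk_ok('myschema', 'orders', 'user_id', 'myschema', 'users', 'id');
SELECT fk_ok('orders', 'user_id', 'users', 'id');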

Tests whether or not a table has a unique constraint. The first argument is
the schema name, the second the table name, and the third is the test
description. If the schema is omitted, the table must be visible in the search
path. If the test description is omitted, it will be set to "Table :table
should have a unique constraint". Note that this test will fail if the table
in question does not exist.

Tests whether or not a table has a check constraint. The first argument is the
schema name, the second the table name, and the third is the test description.
If the schema is omitted, the table must be visible in the search path. If the
test description is omitted, it will be set to "Table :table should have a
check constraint". Note that this test will fail if the table in question does
not exist.

In the event of failure, the diagnostics will list the columns on the table
that do have check constraints, if any:

Tests whether a table is clustered on the given index. A table is clustered on
an index when the SQL command CLUSTER :table USING :index has been executed.
Clustering reorganizes the table tuples so that they are stored on disk in the
order defined by the index.

Feeling Funky

Perhaps more important than testing the database schema is testing your custom
functions. Especially if you write functions that provide the interface for
clients to interact with the database, making sure that they work will save
you time in the long run. So check out these assertions to maintain your
sanity.

Checks to be sure that :schema has :functions defined. This is subtly
different from functions_are(). functions_are() fails if the functions
defined in :schema are not exactly the functions defined in :functions.
can(), on the other hand, just makes sure that :functions exist.

If :schema is omitted, then can() will look for functions defined in
schemas defined in the search path. No matter how many functions are listed in
:functions, a single call to can() counts as one test. If you want
otherwise, call can() once for each function -- or better yet, use
has_function(). Example:

SELECT can( 'pg_catalog', ARRAY['upper', 'lower'] );

If any of the functions are not defined, the test will fail and the
diagnostics will output a list of the functions that are missing, like so:

Tests that a particular function is implemented in a particular procedural
language. The function name is required. If the :schema argument is omitted,
then the function must be visible in the search path. If the :args[]
argument is passed, then the function with that argument signature will be the
one tested; otherwise, a function with any signature will be checked (pass an
empty array to specify a function with an empty signature). If the
:description is omitted, a reasonable substitute will be created. Examples:
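Here, the schema and function names are placeholders:

SELECT function_lang_is('myschema', 'do_something', ARRAY['integer'], 'plpgsql');
SELECT function_lang_is('do_something', 'sql');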

Tests that a particular function returns a particular data type. The :args[]
and :type arguments should be formatted as they would be displayed in the
view of a function using the \df command in psql. For example, use
"character varying" rather than "varchar", and "boolean" rather than "bool".
For set returning functions, the :type argument should start with "setof "
(yes, lowercase). Examples:
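Here, the schema, function, and return type names are placeholders:

SELECT function_returns('myschema', 'do_something', ARRAY['integer'], 'boolean');
SELECT function_returns('get_users', 'setof users');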

If the :schema argument is omitted, then the function must be visible in the
search path. If the :args[] argument is passed, then the function with that
argument signature will be the one tested; otherwise, a function with any
signature will be checked (pass an empty array to specify a function with an
empty signature). If the :description is omitted, a reasonable substitute
will be created.

In the event of a failure, useful diagnostics will tell you what went
wrong, for example:

Tests that a function is a security definer (i.e., a "setuid" function). If
the :schema argument is omitted, then the function must be visible in the
search path. If the :args argument is passed, then the function with that
argument signature will be the one tested; otherwise, a function with any
signature will be checked (pass an empty array to specify a function with an
empty signature). If the :description is omitted, a reasonable substitute
will be created. Examples:
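Here, the schema and function names are placeholders:

SELECT is_definer('myschema', 'do_something', ARRAY['integer']);
SELECT is_definer('do_something');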

Tests that a function is strict, meaning that the function returns null if
any argument is null. If the :schema argument is omitted, then the function
must be visible in the search path. If the :args argument is passed, then
the function with that argument signature will be the one tested; otherwise, a
function with any signature will be checked (pass an empty array to specify a
function with an empty signature). If the :description is omitted, a
reasonable substitute will be created. Examples:
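Here, the schema and function names are placeholders:

SELECT is_strict('myschema', 'do_something', ARRAY['integer']);
SELECT is_strict('do_something');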

Tests that a function is an aggregate function. If the :schema argument is
omitted, then the function must be visible in the search path. If the
:args[] argument is passed, then the function with that argument signature
will be the one tested; otherwise, a function with any signature will be
checked (pass an empty array to specify a function with an empty signature).
If the :description is omitted, a reasonable substitute will be created.
Examples:
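Here, the schema and aggregate names are placeholders:

SELECT is_aggregate('myschema', 'my_avg', ARRAY['numeric']);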

Tests the volatility of a function. Supported volatilities are "volatile",
"stable", and "immutable". Consult the CREATE FUNCTION
documentation
for details. The function name is required. If the :schema argument is
omitted, then the function must be visible in the search path. If the
:args[] argument is passed, then the function with that argument signature
will be the one tested; otherwise, a function with any signature will be
checked (pass an empty array to specify a function with an empty signature).
If the :description is omitted, a reasonable substitute will be created.
Examples:
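Here, the schema and function names are placeholders:

SELECT volatility_is('myschema', 'do_something', ARRAY['integer'], 'immutable');
SELECT volatility_is('do_something', 'volatile');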

language_is_trusted()

Tests that the specified procedural language is trusted. See the [CREATE
LANGUAGE](http://www.postgresql.org/docs/current/static/sql-createlanguage.html
"CREATE LANGUAGE") documentation for details on trusted and untrusted
procedural languages. If the :description argument is not passed, a suitably
useful default will be created.

In the event that the language in question does not exist in the database,
language_is_trusted() will emit a diagnostic message to alert you to this
fact, like so:

This function tests that an enum consists of an expected list of labels. Enums
are supported in PostgreSQL 8.3 or higher. The first argument is a schema
name, the second an enum name, the third an array of enum labels, and the
fourth a description. Example:
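Here, the schema, enum, and label names are placeholders:

SELECT enum_has_labels('myschema', 'bug_status', ARRAY['new', 'open', 'closed']);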

Tests the data type underlying a domain. The first two arguments are the
schema and name of the domain. The next two are the schema and name of the
type that the domain should extend. The fifth argument is a description. If
there is no description, a reasonable default description will be created. The
schema arguments are also optional (though if there is no schema for the
domain then there cannot be one for the type). For the 3- and 4-argument
forms with schemas, cast the schemas to NAME to avoid ambiguities. Example:
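Here, the schema and domain names are placeholders; note the NAME cast on the
schema, as advised above:

SELECT domain_type_is('myschema'::name, 'us_postal_code', 'text');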

The inverse of domain_type_is(), this function tests that a domain does
not extend a particular data type. For example, a US postal code domain
should probably extend the text type, not integer, since leading 0s are
valid and required. Example:
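Here, the schema and domain names are placeholders; the schema is cast to
NAME as advised for domain_type_is():

SELECT domain_type_isnt('myschema'::name, 'us_postal_code', 'integer');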

The context for the cast, one of "implicit", "assignment", or "explicit".

Test that a cast from a source to a target data type has a particular context.
Example:

SELECT cast_context_is( 'integer', 'bigint', 'implicit' );

The data types should be passed as they are displayed by
pg_catalog.format_type(). For example, you would need to pass "character
varying", and not "VARCHAR".

The supported contexts are "implicit", "assignment", and "explicit". You
can also just pass in "i", "a", or "e". Consult the PostgreSQL CREATE
CAST
documentation for the differences between these contexts (hint: they
correspond to the default context, AS IMPLICIT, and AS ASSIGNMENT). If you
don't supply a test description, pgTAP will create a reasonable one for you.

is_superuser()

Tests that a database user is a super user. If the description is omitted, it
will default to "User :user should be a super user". Example:

SELECT is_superuser('theory');

If the user does not exist in the database, the diagnostics will say so.

isnt_superuser()

SELECT isnt_superuser(
'dr_evil',
'User "dr_evil" should not be a super user'
);

Parameters

:user

Name of a PostgreSQL user.

:description

A short description of the test.

The inverse of is_superuser(), this function tests that a database user is
not a super user. Note that if the named user does not exist in the
database, the test is still considered a failure, and the diagnostics will say
so.

Checks whether a group contains a user or all of an array of users. If the
description is omitted, it will default to "Should have members of group
:group." On failure, is_member_of() will output diagnostics listing the
missing users, like so:

Checks whether a rule on the specified relation is an INSTEAD rule. See the
CREATE RULE
Documentation
for details. If the :schema argument is omitted, the relation must be
visible in the search path. If the :description argument is omitted, an
appropriate description will be created. An example:

SELECT rule_is_instead('public', 'users', 'on_insert');

In the event that the test fails because the rule in question does not
actually exist, you will see an appropriate diagnostic such as:

# Failed test 625: "Rule on_insert on relation public.users should be an INSTEAD rule"
# Rule on_insert does not exist

Name of a rule event, one of "SELECT", "INSERT", "UPDATE", or "DELETE".

:description

A short description of the test.

Tests the event for a rule, which may be one of "SELECT", "INSERT", "UPDATE",
or "DELETE". For the :event argument, you can specify the name of the event
in any case, or even with a single letter ("s", "i", "u", or "d"). If the
:schema argument is omitted, then the table must be visible in the search
path. If the :description is omitted, a reasonable default will be created.
Example:
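Here, the schema, relation, and rule names are placeholders (the description
is included to disambiguate the arguments):

SELECT rule_ev_is(
    'public', 'users', 'on_insert', 'INSERT',
    'Rule on_insert should fire on INSERT'
);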

Tests the ownership of a relation. Relations are tables, views, sequences,
composite types, foreign tables, and toast tables. If the :description
argument is omitted, an appropriate description will be created. Examples:

Diagnostics

If you pick the right test function, you'll usually get a good idea of what
went wrong when it failed. But sometimes it doesn't work out that way. So here
we have ways for you to write your own diagnostic messages which are safer
than just \echo or SELECT foo.

diag()

SELECT diag( :lines );

Parameters

:lines

A list of one or more SQL values of the same type.

Returns a diagnostic message which is guaranteed not to interfere with test
output. Handy for this sort of thing:

-- Output a diagnostic message if the collation is not en_US.UTF-8.
SELECT diag(
    E'These tests expect LC_COLLATE to be en_US.UTF-8,\n',
    'but yours is set to ', setting, E'.\n',
    'As a result, some tests may fail. YMMV.'
)
  FROM pg_settings
 WHERE name = 'lc_collate'
   AND setting <> 'en_US.UTF-8';

Which would produce:

# These tests expect LC_COLLATE to be en_US.UTF-8,
# but yours is set to en_US.ISO8859-1.
# As a result, some tests may fail. YMMV.

You can pass data of any type to diag() on PostgreSQL 8.3 and higher and it
will all be converted to text for the diagnostics. On PostgreSQL 8.4 and
higher, you can pass any number of arguments (as long as they are all the same
data type) and they will be concatenated together.

Conditional Tests

Sometimes running a test under certain conditions will cause the test script
or function to die. A certain function or feature isn't implemented (such as
pg_sleep() prior to PostgreSQL 8.2), some resource isn't available (like a
procedural language), or a contrib module isn't available. In these cases it's
necessary to skip tests, or declare that they are supposed to fail but will
work in the future (a todo test).
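Here is a sketch of the skip pattern; the version check and the two error tests are illustrative assumptions:

```sql
-- Run two error tests where supported; otherwise skip them both.
SELECT CASE WHEN pg_version_num() < 80100
    THEN skip( 'throws_ok() not fully supported before 8.1', 2 )
    ELSE collect_tap(
        throws_ok( 'SELECT 1/0', '22012', 'division by zero' ),
        throws_ok( 'SELECT ''x''::int', '22P02' )
    )
END;
```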

Note how a conditional CASE statement is used to determine whether or not to
run a couple of tests. If they are to be run, they are run through
collect_tap(), so that we can run a few tests in the same query. If we don't
want to run them, we call skip() and tell it how many tests we're skipping.

If you don't specify how many tests to skip, skip() will assume that you're
skipping only one. This is useful for the simple case, of course:
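For instance, a one-test skip might look like this (the reason string is illustrative):

```sql
-- Skip a single test; no count needed when only one is skipped.
SELECT skip( 'Pending fix for the widgets trigger' );
```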

With todo(), :how_many specifies how many tests are expected to fail. If
:how_many is omitted, it defaults to 1. pgTAP will run the tests normally,
but print out special flags indicating they are "todo" tests. The test harness
will interpret these failures as ok. Should any todo test pass, the harness
will report it as an unexpected success. You then know that the thing you had
todo is done and can remove the call to todo().

The nice part about todo tests, as opposed to simply commenting out a block of
tests, is that they're like a programmatic todo list. You know how much work
is left to be done, you're aware of what bugs there are, and you'll know
immediately when they're fixed.

todo_start( why )

todo_start( )

This function allows you to declare all subsequent tests as TODO tests, up
until the todo_end() function is called.

The todo() syntax is generally pretty good about figuring out whether or not
we're in a TODO test. However, often we find it difficult to specify the
number of tests that are TODO tests. Thus, you can instead use
todo_start() and todo_end() to more easily define the scope of your TODO
tests.

This is generally not recommended, but large testing systems often have weird
internal needs.

The todo_start() and todo_end() functions should also work with the
todo() function, although this is not guaranteed and its use is also
discouraged:

SELECT todo_start('working on this');
-- lots of code
SELECT todo('working on that', 2);
-- Two tests for which the above line applies
-- Followed by more tests scoped till the following line.
SELECT todo_end();

We recommend that you pick one style or another of TODO to be on the safe
side.

todo_end()

Stops running tests as TODO tests. This function is fatal if called without a
preceding todo_start() function call.

in_todo()

Returns true if the test is currently inside a TODO block.

Utility Functions

Along with the usual array of testing, planning, and diagnostic functions,
pgTAP provides a few extra functions to make the work of testing more pleasant.

pgtap_version()

SELECT pgtap_version();

Returns the version of pgTAP installed in the server. The value is NUMERIC,
and thus suitable for comparing to a decimal value:
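For example, to sanity-check the installed version (the 0.92 threshold here is illustrative):

```sql
-- Fail early if an older pgTAP is installed.
SELECT ok(
    pgtap_version() >= 0.92,
    'pgTAP version should be at least 0.92'
);
```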

pg_version()

SELECT pg_version();

Returns the server version number against which pgTAP was compiled. This is
the stringified version number displayed in the first part of the core
version() function and stored in the "server_version" setting:
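As a quick sketch, it should always agree with that setting:

```sql
-- pg_version() should match the server_version setting.
SELECT is( pg_version(), current_setting('server_version') );
```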

pg_version_num()

SELECT pg_version_num();

Returns an integer representation of the server version number against which
pgTAP was compiled. This function is useful for determining whether or not
certain tests should be run or skipped (using skip()) depending on the
version of PostgreSQL. For example:
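A sketch of that pattern; the setting being tested is an illustrative assumption:

```sql
-- Skip a test of the bytea_output setting on servers older than 9.0.
SELECT CASE WHEN pg_version_num() < 90000
    THEN skip( 'No bytea_output setting before 9.0' )
    ELSE is(
        current_setting('bytea_output'), 'hex',
        'bytea_output should be hex'
    )
END;
```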

The revision level is in the tens position, the minor version in the thousands
position, and the major version in the ten thousands position and above
(assuming PostgreSQL 10 is ever released, it will be in the hundred thousands
position). For example, version 8.4.1 is represented as 80401. This value is
the same as the server_version_num setting available in PostgreSQL 8.2 and
higher, but this function supports it back to PostgreSQL 8.1.

os_name()

Returns a string representing the name of the operating system on which pgTAP
was compiled. This can be useful for determining whether or not to skip tests
on certain operating systems.

This is usually the same as the output of uname, but converted to lower case.
There is some logic in the pgTAP build process to detect other operating
systems, though assistance in improving such detection would be greatly
appreciated.

NOTE: The values returned by this function may change in the future,
depending on how good the pgTAP build process gets at detecting an OS.

collect_tap()

SELECT collect_tap(:lines);

Parameters

:lines

A list of one or more lines of TAP.

Collects the results of one or more pgTAP tests and returns them all. Useful
when used in combination with skip():
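For example, a sketch of that combination (the version check is illustrative):

```sql
-- Run two sleep tests as one query, or skip them both where pg_sleep()
-- is unavailable. The SQL under test is passed as strings, so nothing in
-- the ELSE branch runs when the tests are skipped.
SELECT CASE WHEN pg_version_num() < 80200
    THEN skip( 'pg_sleep() not available before 8.2', 2 )
    ELSE collect_tap(
        lives_ok( 'SELECT pg_sleep(0)', 'pg_sleep(0) should work' ),
        lives_ok( 'SELECT pg_sleep(0.1)', 'pg_sleep(0.1) should work' )
    )
END;
```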

display_type()

Used internally by pgTAP to compare type names, but may be more generally
useful.

display_oper()

SELECT display_oper( :opername, :operoid );

Parameters

:opername

Operator name.

:operoid

Operator OID.

Similar to casting an operator OID to regoperator, only the schema is not
included in the display. For example:

SELECT display_oper( oprname, oid ) FROM pg_operator;

Used internally by pgTAP to compare operators, but may be more generally
useful.

pg_typeof()

SELECT pg_typeof(:any);

Parameters

:any

Any SQL value.

Returns a regtype identifying the type of value passed to the function. This
function is used internally by cmp_ok() to properly construct types when
executing the comparison, but might be generally useful.

Note: pgTAP does not build pg_typeof() on PostgreSQL 8.4 or higher,
because it's in core starting with 8.4. You only need to worry about this if
you depend on the function being in a particular schema. It will always be in
pg_catalog in 8.4 and higher.

findfuncs()

SELECT findfuncs( :schema, :pattern );
SELECT findfuncs( :pattern );

Parameters

:schema

Schema to search for functions.

:pattern

Regular expression pattern against which to match function names.

This function searches the named schema or, if no schema is passed, the search
path, for all functions that match the regular expression pattern. The
functions it finds are returned as an array of text values, with each value
consisting of the schema name, a dot, and the function name. For example:
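A sketch, assuming a hypothetical tests schema holding functions named with a "test" prefix:

```sql
-- Returns an array such as {tests.test_users,tests.test_widgets}.
SELECT findfuncs( 'tests', '^test' );
```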

Tap that Batch

Sometimes it can be useful to batch a lot of TAP tests into a function. The
simplest way to do so is to define a function that RETURNS SETOF TEXT and
then simply call RETURN NEXT for each TAP test. Here's a simple example:
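A minimal sketch of such a function; the table and column names are illustrative:

```sql
CREATE OR REPLACE FUNCTION test_users() RETURNS SETOF TEXT AS $$
BEGIN
    -- Each RETURN NEXT emits one line of TAP.
    RETURN NEXT has_table( 'users' );
    RETURN NEXT has_column( 'users', 'id' );
    RETURN NEXT col_is_pk( 'users', 'id' );
END;
$$ LANGUAGE plpgsql;
```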

If you like, you can create a whole slew of these batched TAP functions, and
then use the do_tap() function to run them all at once. If passed no
arguments, it will attempt to find all visible functions that start with
"test". If passed a schema name, it will look for and run test functions only
in that schema (be sure to cast the schema to name if it is the only
argument). If passed a regular expression pattern, it will look for function
names that match that pattern in the search path. If passed both, it will of
course only search for test functions that match the pattern in the named
schema.

This can be very useful if you prefer to keep all of your TAP tests in
functions defined in the database. Simply call plan(), use do_tap() to
execute all of your tests, and then call finish(). A dead simple example:
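Something like this, where the schema name and plan count are illustrative:

```sql
SELECT plan(32);
SELECT * FROM do_tap('testschema'::name);
SELECT * FROM finish();
```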

As a bonus, if client_min_messages is set to "warning", "error", "fatal", or
"panic", the name of each function will be emitted as a diagnostic message
before it is called. For example, if do_tap() found and executed two TAP
testing functions and client_min_messages is set to "warning", the output
will look something like this:

If you'd like pgTAP to plan, run all of your tests functions, and finish all
in one fell swoop, use runtests(). This most closely emulates the xUnit
testing environment, similar to the functionality of
PGUnit and
Epic. Example:

SELECT * FROM runtests( 'testschema', '^test' );

As with do_tap(), you can pass in a schema argument and/or a pattern that
the names of the tests functions can match. If you pass in only the schema
argument, be sure to cast it to name to identify it as a schema name rather
than a pattern:

SELECT * FROM runtests('testschema'::name);

Unlike do_tap(), runtests() fully supports startup, shutdown, setup, and
teardown functions, as well as transactional rollbacks between tests. It also
outputs the test plan and finishes the tests, so you don't have to call plan()
or finish() yourself.

The fixture functions run by runtests() are as follows:

^startup - Functions whose names start with "startup" are run in
alphabetical order before any test functions are run.

^setup - Functions whose names start with "setup" are run in alphabetical
order before each test function is run.

^teardown - Functions whose names start with "teardown" are run in
alphabetical order after each test function is run. They will not be run,
however, after a test that has died.

^shutdown - Functions whose names start with "shutdown" are run in
alphabetical order after all test functions have been run.

Note that all tests executed by runtests() are run within a single
transaction, and each test is run in a subtransaction that also includes the
execution of all the setup and teardown functions. All transactions are rolled
back after each test function, and at the end of testing, leaving your
database in largely the same condition as it was in when you started it (the
one exception I'm aware of being sequences, which are not rolled back to the
value used at the beginning of a rolled-back transaction).

Compose Yourself

So, you've been using pgTAP for a while, and now you want to write your own
test functions. Go ahead; I don't mind. In fact, I encourage it. How? Why,
by providing a function you can use to test your tests, of course!

But first, a brief primer on writing your own test functions. There isn't much
to it, really. Just write your function to do whatever comparison you want. As
long as you have a boolean value indicating whether or not the test passed,
you're golden. Then just use ok() to ensure that everything is tracked
appropriately by the test script.

For example, say that you wanted to create a function to ensure that two text
values always compare case-insensitively. Sure you could do this with is()
and the LOWER() function, but if you're doing this all the time, you might
want to simplify things. Here's how to go about it:
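Here is one way to sketch such a function (the lc_eq name and the have/want diagnostic layout are this example's conventions, not pgTAP's):

```sql
CREATE OR REPLACE FUNCTION lc_eq( have TEXT, want TEXT, descr TEXT )
RETURNS TEXT AS $$
DECLARE
    result BOOLEAN := LOWER(have) = LOWER(want);
    output TEXT    := ok( result, descr );
BEGIN
    -- Append diagnostics only when the comparison fails.
    RETURN output || CASE WHEN result THEN '' ELSE E'\n' || diag(
           '    Have: ' || have ||
        E'\n    Want: ' || want
    ) END;
END;
$$ LANGUAGE plpgsql;
```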

Yep, that's it. The key is to always use pgTAP's ok() function to guarantee
that the output is properly formatted, uses the next number in the sequence,
and the results are properly recorded in the database for summarization at
the end of the test script. You can also provide diagnostics as appropriate;
just append them to the output of ok() as we've done here.

Of course, you don't have to directly use ok(); you can also use another
pgTAP function that ultimately calls ok(). IOW, while the above example
is instructive, this version is easier on the eyes:
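For example, a version that delegates the comparison, numbering, and diagnostics to is():

```sql
CREATE OR REPLACE FUNCTION lc_eq( TEXT, TEXT, TEXT ) RETURNS TEXT AS $$
    -- is() handles the comparison, the diagnostics, and the test numbering.
    SELECT is( LOWER($1), LOWER($2), $3 );
$$ LANGUAGE sql;
```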

:have

The output from your test. Usually it's just returned by a call to the test
function itself. Required.

:is_ok

Boolean indicating whether or not the test is expected to pass. Required.

:name

A brief name for your test, to make it easier to find failures in your test
script. Optional.

:want_description

Expected test description to be output by the test. Optional. Use an empty
string to test that no description is output.

:want_diag

Expected diagnostic message output during the execution of a test. Must
always follow whatever is output by the call to ok(). Optional. Use an
empty string to test that no diagnostics are output.

:match_diag

Use matches() to compare the diagnostics rather than is(). Useful for
those situations where you're not sure what will be in the output, but you
can match it with a regular expression.

This function runs anywhere between one and three tests against a test
function. At its simplest, you just pass in the output of your test function
(and it must be one and only one test function's output, or you'll screw
up the count, so don't do that!) and a boolean value indicating whether or not
you expect the test to have passed. That looks something like this:

SELECT * FROM check_test(
    lc_eq('This', 'THIS', 'eq'),
    true
);

All other arguments are optional, but I recommend that you always include a
short test name to make it easier to track down failures in your test script.
check_test() uses this name to construct descriptions of all of the tests it
runs. For example, without a short name, the above example will yield output
like so:

not ok 14 - Test should pass

Yeah, but which test? If you have a lot of these, that won't be much help. So
give each call to check_test() a very succinct name, and you'll know which
test:
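A sketch, reusing the hypothetical lc_eq() function:

```sql
SELECT * FROM check_test(
    lc_eq( 'This', 'THIS', 'eq' ),
    true,
    'lc_eq pass'
);
```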

Which will make it much easier to find the failing test in your test script.

The optional fourth argument is the description you expect to be output. This
is especially important if your test function generates a description when
none is passed to it. You want to make sure that your function generates the
test description you think it should! This will cause a second test to be run
on your test function. So for something like this:

The fifth argument, :want_diag, which is also optional, compares the
diagnostics generated during the test to an expected string. Such diagnostics
must follow whatever is output by the call to ok() in your test. Your
test function should not call diag() until after it calls ok() or things
will get truly funky.

Assuming you've followed that rule in your lc_eq() test function, see what
happens when a lc_eq() fails. Write your test to test the diagnostics like
so:
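A sketch, assuming the hypothetical lc_eq() emits the indented have/want diagnostics shown earlier when it fails:

```sql
SELECT * FROM check_test(
    lc_eq( 'This', 'THat', 'lc' ),
    false,
    'lc_eq fail',
    'lc',
    E'    Have: This\n    Want: THat'
);
```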

If you pass in the optional sixth argument, :match_diag, the :want_diag
argument will be compared to the actual diagnostic output using matches()
instead of is(). This allows you to use a regular expression in the
:want_diag argument to match the output, for those situations where some
part of the output might vary, such as time-based diagnostics.

I realize that all of this can be a bit confusing, given the various haves and
wants, but it gets the job done. Of course, if your diagnostics use something
other than indented "have" and "want", such failures will be easier to read.
But either way, do test your diagnostics!

Compatibility

Here are some notes on how pgTAP is built for particular versions of
PostgreSQL. This helps you to understand any side-effects. If you'd rather not
have these changes in your schema, build pgTAP with a schema just for it,
instead:

make TAPSCHEMA=tap

To see the specifics for each version of PostgreSQL, consult the files in the
compat/ directory in the pgTAP distribution.

9.1 and Up

No changes. Everything should just work.

9.0 and Down

The foreign_table_owner_is() function will not work, because, of course,
there were no foreign tables until 9.1.

8.4 and Down

The sequence_privs_are() function does not work, because privileges could
not be granted on sequences before 9.0.

The triggers_are() function does not ignore internally-created triggers.

8.3 and Down

A patch is applied to modify results_eq() and row_eq() to cast records
to text before comparing them. This means that things will mainly be
correct, but it also means that two queries with incompatible types that
convert to the same text string may be considered incorrectly equivalent.

A C function, pg_typeof(), is built and installed in a DSO. This is for
compatibility with the same function that ships in 8.4 core, and is required
for cmp_ok() and isa_ok() to work.

The variadic forms of diag() and collect_tap() are not available.
You can pass an array of TAP to collect_tap(), however.

These permission-testing functions don't work, because one cannot grant
permissions on the relevant objects until 8.4:

has_any_column_privilege()

has_column_privilege()

has_foreign_data_wrapper_privilege()

has_server_privilege()

8.2 and Down

A patch is applied that removes the enum_has_labels() and diag(anyelement)
functions. The col_has_default() function cannot be used to test for columns
specified with DEFAULT NULL (even though that's the implied default default),
and the has_enums() function won't work. Also, a number of assignment casts
are added to increase compatibility. The casts are:

boolean to text

text[] to text

name[] to text

regtype to text

Two operators, = and <>, are added to compare name[] values.

Metadata

Public Repository

The source code for pgTAP is available on
GitHub. Please feel free to fork and
contribute!

Mail List

Join the pgTAP community by subscribing to the pgtap-users mail
list. All questions,
comments, suggestions, and bug reports are welcomed there.

Author

Credits

Michael Schwern and chromatic for Test::More.

Adrian Howard for Test::Exception.

Copyright and License

Copyright (c) 2008-2013 David E. Wheeler. Some rights reserved.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement is
hereby granted, provided that the above copyright notice and this paragraph
and the following two paragraphs appear in all copies.

IN NO EVENT SHALL DAVID E. WHEELER BE LIABLE TO ANY PARTY FOR DIRECT,
INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST
PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN
IF DAVID E. WHEELER HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

DAVID E. WHEELER SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS,
AND DAVID E. WHEELER HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT,
UPDATES, ENHANCEMENTS, OR MODIFICATIONS.