Notes on keeping scholarly, technical, and public information useful

Category Archives: Validation

The other day I got an inquiry from a user having trouble getting their extensions to MathML 2 to work in their new XSD schema. I learned some things while working on their problem.

First, let’s be clear. MathML says that it is intended to be extensible. Section 7.3.2 of MathML2 reads in full:

The set of elements and attributes specified in the MathML specification are necessary for rendering common mathematical expressions. It is recognized that not all mathematical notation is covered by this set of elements, that new notations are continually invented, and that sub-communities within mathematics often have specialized notations; and furthermore that the explicit extension of a standard is a necessarily slow and conservative process. This implies that the MathML standard could never explicitly cover all the presentational forms used by every sub-community of authors and readers of mathematics, much less encode all mathematical content.

In order to facilitate the use of MathML by the widest possible audience, and to enable its smooth evolution to encompass more notational forms and more mathematical content (perhaps eventually covered by explicit extensions to the standard), the set of tags and attributes is open-ended, in the sense described in this section.

MathML is described by an XML DTD, which necessarily limits the elements and attributes to those occurring in the DTD. Renderers desiring to accept non-standard elements or attributes, and authors desiring to include these in documents, should accept or produce documents that conform to an appropriately extended XML DTD that has the standard MathML DTD as a subset.

MathML renderers are allowed, but not required, to accept non-standard elements and attributes, and to render them in any way. If a renderer does not accept some or all non-standard tags, it is encouraged either to handle them as errors as described above for elements with the wrong number of arguments, or to render their arguments as if they were arguments to an mrow, in either case rendering all standard parts of the input in the normal way.

I don’t find this passage in MathML3, but the sample embedding of MathML into XHTML does extend the document grammar to include XHTML elements, so I believe that the design principle remains true.

It’s easy enough to extend the document grammar as expressed by the DTD: just provide new declarations of the appropriate parameter entities so that they include your new elements, along the following lines. Let us say that we have concluded that we want our extension elements to be legal everywhere that mml:mspace is legal, and that we don’t need them anywhere else. We can write:
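Something like the following sketch. The parameter-entity names here are illustrative, not the names actually used in the MathML 2 DTD; the essential point is that in a DTD the first declaration of an entity is binding, so our redeclarations must precede the inclusion of the standard DTD:

```xml
<!-- Illustrative only: the real MathML 2 DTD uses its own
     parameter-entity names for the classes in which mspace occurs. -->
<!ENTITY % my-extension-elements "| my:ext1 | my:ext2" >

<!-- Redeclare the content-model class to which mspace belongs,
     appending the extension elements: -->
<!ENTITY % Presentation-expr.class
   "mspace | mrow | mfrac | msqrt | mstyle %my-extension-elements;" >

<!-- Now include the standard MathML 2 DTD; our redeclarations,
     coming first, take precedence over its own. -->
<!ENTITY % mathml2.dtd PUBLIC "-//W3C//DTD MathML 2.0//EN"
   "http://www.w3.org/Math/DTD/mathml2/mathml2.dtd" >
%mathml2.dtd;

<!-- Declarations for the extension elements themselves -->
<!ELEMENT my:ext1 EMPTY >
<!ATTLIST my:ext1 id ID #IMPLIED >
<!ELEMENT my:ext2 EMPTY >
```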

For XSD, it could in principle be even simpler. The simplest way to make an XSD schema easily extensible is to include wildcards at appropriate points in content models, to allow users’ extension elements to be included in valid documents. All the user has to do is supply a schema document with the declarations of their extension elements:
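For instance, if the MathML content models had contained lax wildcards for elements in foreign namespaces (say, <xs:any namespace="##other" processContents="lax"/>), the user's schema document might look something like this (the namespace URI is hypothetical):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:my="http://example.com/my-extensions"
           targetNamespace="http://example.com/my-extensions"
           elementFormDefault="qualified">

  <xs:complexType name="extension-type-1">
    <xs:attribute name="id" type="xs:ID"/>
  </xs:complexType>
  <xs:complexType name="extension-type-2"/>

  <xs:element name="ext1" type="my:extension-type-1"/>
  <xs:element name="ext2" type="my:extension-type-2"/>
</xs:schema>
```

With processContents="lax", a validator that can find these declarations validates the extension elements against them; with no declarations in scope, it lets the elements pass.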

In the MathML 2 XSD, it turns out to be slightly more complicated, because despite explicitly expecting extensions to the document grammar, the designers didn’t put in the most obvious possible extensibility hook: the content models of MathML elements contain no wildcards, except in the case of the annotation element. So we have some more work to do.

Plan B is to use element substitution groups. Since we want our elements to be legal wherever mml:mspace is legal, we can just make our elements substitutable for mml:mspace. In the simplest case, we would then just write our schema document thus:
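As a sketch (namespace URI and schemaLocation hypothetical):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:mml="http://www.w3.org/1998/Math/MathML"
           targetNamespace="http://example.com/my-extensions">

  <xs:import namespace="http://www.w3.org/1998/Math/MathML"
             schemaLocation="mathml2.xsd"/>

  <!-- No type attribute: each element therefore gets the type
       of its substitution-group head, mml:mspace. -->
  <xs:element name="ext1" substitutionGroup="mml:mspace"/>
  <xs:element name="ext2" substitutionGroup="mml:mspace"/>
</xs:schema>
```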

The wrinkle here is that when we write it this way, our extension elements get the same type as their substitution-group head, mml:mspace. If we just reinsert the declarations of my:extension-type-1 and my:extension-type-2 and the type attributes on the element declarations, the XSD validator will remind us firmly but politely (in most cases) that the types of my:ext1 and my:ext2 must be derived from that of mml:mspace. In the case of the XSD schema for MathML 2, that means they must be derived from type mml:mspace.type. For document-oriented schemas, this type-derivation requirement is a nuisance; it came from the database-oriented part of the working group that specified XSD.

Fortunately, it’s only a nuisance, not a serious obstacle. All we need to do is to define our extension types in terms of changes to type mml:mspace.type. This will require a couple of intermediate types which we’ll call bridge types. The first step in the derivation is to clear away everything we don’t want in our extension types, by restricting away any unwanted content (we’re in luck: mml:mspace.type has no content at all) and any unwanted attribute (again we’re in luck: all attributes are optional). Since one of our extension types uses an id attribute and the other does not, we’ll define two bridge types.
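In outline, assuming mml:mspace.type carries attributes along the lines of width, height, and depth plus the common MathML attributes (the exact attribute list here is abbreviated and illustrative):

```xml
<!-- Bridge type 1: mml:mspace.type with everything we don't want
     restricted away, but keeping id. -->
<xs:complexType name="bridge-type-1">
  <xs:complexContent>
    <xs:restriction base="mml:mspace.type">
      <xs:attribute name="width"     use="prohibited"/>
      <xs:attribute name="height"    use="prohibited"/>
      <xs:attribute name="depth"     use="prohibited"/>
      <xs:attribute ref="xlink:href" use="prohibited"/>
    </xs:restriction>
  </xs:complexContent>
</xs:complexType>

<!-- Bridge type 2: as above, but without id as well. -->
<xs:complexType name="bridge-type-2">
  <xs:complexContent>
    <xs:restriction base="mml:mspace.type">
      <xs:attribute name="id"        use="prohibited"/>
      <xs:attribute name="width"     use="prohibited"/>
      <xs:attribute name="height"    use="prohibited"/>
      <xs:attribute name="depth"     use="prohibited"/>
      <xs:attribute ref="xlink:href" use="prohibited"/>
    </xs:restriction>
  </xs:complexContent>
</xs:complexType>

<!-- The extension types can then be derived by extension from the
     bridge types, adding whatever we actually want: -->
<xs:complexType name="extension-type-1">
  <xs:complexContent>
    <xs:extension base="my:bridge-type-1">
      <!-- extension-specific attributes and content go here -->
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
```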

The reference to xlink:href requires that we import the XLink namespace (even though all we’re doing is saying we don’t want that attribute here), so we need to add another xs:import element as well as another namespace declaration.
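Concretely, the top of the schema document ends up looking something like this (the schemaLocation values are whatever local copies are at hand):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:mml="http://www.w3.org/1998/Math/MathML"
           xmlns:xlink="http://www.w3.org/1999/xlink"
           xmlns:my="http://example.com/my-extensions"
           targetNamespace="http://example.com/my-extensions">

  <xs:import namespace="http://www.w3.org/1998/Math/MathML"
             schemaLocation="mathml2.xsd"/>
  <xs:import namespace="http://www.w3.org/1999/xlink"
             schemaLocation="xlink.xsd"/>

  <!-- bridge types, extension types, and element declarations -->
</xs:schema>
```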

There are two easy ways a vocabulary designer can make this process simpler:

1. Include wildcards at points where you want your grammar to be extensible.

This is a bit of a blunt instrument, but it sometimes gets the job done.

2. Include abstract elements with minimally constraining types at points where you want your grammar to be extensible in context-appropriate ways.

This is the way to give the extender more control (perhaps they want some extension elements to be legal in some contexts and others to be legal in other contexts): extension hooks in the form of abstract elements with a minimally constraining type (e.g. xs:anyType) mean that extenders don’t need to play games with type derivations, the way it was necessary to do for the MathML 2 extension.

As a general rule: any important element class in your document grammar (e.g. phrase-level-element or list or paragraph-level-element) is a good candidate for an abstract element intended to allow users to add new elements simply by making their new elements substitutable for the appropriate abstract element. (We have a new phrase-level element? Fine: declare <xs:element name="new-phrase" substitutionGroup="target:phrase-level-element"/> and we’re done.)
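A designer might provide such a hook along these lines (element and type names hypothetical; target: stands for the vocabulary's own namespace prefix):

```xml
<!-- In the base vocabulary's schema: an abstract head element with a
     minimally constraining type.  Being abstract, it can never appear
     in instances itself; only members of its substitution group can. -->
<xs:element name="phrase-level-element" abstract="true" type="xs:anyType"/>

<!-- Content models reference the abstract head, not a fixed list: -->
<xs:complexType name="paragraph-type" mixed="true">
  <xs:sequence>
    <xs:element ref="target:phrase-level-element"
                minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>

<!-- In the extender's schema: one line, no type gymnastics.  (With no
     type attribute, new-phrase inherits the head's type, xs:anyType.) -->
<xs:element name="new-phrase"
            substitutionGroup="target:phrase-level-element"/>
```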

Of course, the determinism rules (aka Unique Particle Attribution constraint) in XSD still make extending a complex document grammar harder than it needs to be. But by providing appropriate extension hooks, the designer of a document grammar can make things a lot simpler for the user with special needs.

I recently had occasion to write an XSD 1.1 schema for a client whose data includes ISBN and ISSN values.

In a DTD, all one can plausibly say about an element which is supposed to contain an ISBN is that it contains character data, something like this:

<!ELEMENT isbn (#PCDATA) >

That accepts legal ISBN values, like “0 13 651431 6” and “978-1-4419-1901-4”, but it also accepts strings with invalid check-digits, like “0 13 561431 6” (transposition of adjacent digits is said to be the most common single error in typing ISBNs), and strings with the wrong number of digits, like “978-1-4419-19014-4”. For that matter, it also accepts strings like “@@@ call Sally and ask what the ISBN is going to be @@@”. (There may be stages in a document’s life when you want to accept that last value. But there may also be stages when you don’t want to allow anything but a legal ISBN. This post is about what to do when writing a schema for that latter set of stages in a document’s life.)

In XSD 1.0, regular-expression patterns can be used to say, more specifically, that the value of a ten-digit ISBN should be of a specific length (thirteen, actually, not ten, because we want to require hyphens or blanks as separators) and should contain only decimal digits, separators, and X (because X is a legal check-digit).

Actually, we can do better than that. In a ten-digit ISBN, there should be ten digits: one to five digits in the so-called group identifier (which divides the world in language / country areas), one to seven digits in the publisher code (in the US, all publisher codes use at least two digits, but I have not been able to find anything that plausibly asserts this is necessarily true for all publisher codes world-wide), one to seven in the item number, and a final digit (or X) as a check digit.
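A simple type along the following lines expresses those rules (a sketch: multiple xs:pattern facets within a single restriction step are alternatives, so the two patterns allow either all-hyphen or all-blank separators):

```xml
<xs:simpleType name="ISBN-10">
  <xs:restriction base="xs:string">
    <!-- ten digits (the last possibly X) plus three separators -->
    <xs:length value="13"/>
    <!-- hyphens as separators ... -->
    <xs:pattern value="[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,7}-[0-9X]"/>
    <!-- ... or blanks as separators -->
    <xs:pattern value="[0-9]{1,5} [0-9]{1,7} [0-9]{1,7} [0-9X]"/>
  </xs:restriction>
</xs:simpleType>
```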

Since the number of separators is fixed, and the total length of the string is fixed, the type definition above will only accept literals with exactly ten non-separator digits. The patterns above assume that either hyphens or blanks will be used as separators, not a mix of hyphens and blanks; they also want any X appearing as a check-digit to be uppercase.

A similar type can be defined for thirteen-digit ISBNs, which add a three-digit industry-code prefix and another separator at the beginning:
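Again as a sketch (the ISBN-13 check-digit is always a decimal digit, never X):

```xml
<xs:simpleType name="ISBN-13">
  <xs:restriction base="xs:string">
    <!-- thirteen digits plus four separators -->
    <xs:length value="17"/>
    <xs:pattern value="[0-9]{3}-[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,7}-[0-9]"/>
    <xs:pattern value="[0-9]{3} [0-9]{1,5} [0-9]{1,7} [0-9]{1,7} [0-9]"/>
  </xs:restriction>
</xs:simpleType>
```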

In XSD 1.0, that’s as much as we can conveniently do. (Well, almost. If we are willing to endure the associated tedium, we can check for the correct positioning of hyphens in at least the ISBNs of some areas which assign publisher codes in such a way as to ensure that ISBNs remain unique even if the separators are dropped. See the ISBN datatype defined by Roger Costello and Roger Sperberg for an illustration of the principle.)

In theory, we ought to be able to do better: the check-digit algorithm can be checked by a finite-state automaton, and the languages of ten-digit and thirteen-digit ISBNs are thus demonstrably regular languages. So in principle, there are regular expressions that can perform the check-digit calculation. When I have tried to translate from the FSA to a regular expression, however, the result has been uncomfortably long.

But in XSD 1.1, the addition of assertions makes it possible to replicate the check-digit algorithm. We can write a type definition similar to the ones given above, with an additional xsd:assertion element whose test attribute has as its value an XPath expression which will validate the check-digit.

The ISBN-10 check-digit is constructed in such a way that the sum of digit 1 × 10 + digit 2 × 9 + … + digit 8 × 3 + digit 9 × 2 + digit 10 (if digit 10 is a digit, or 10 if digit 10 is an X), modulo 11, is equal to 0. The ISBN-13 check-digit uses a similar but simpler calculation: the numeric values of digits in even-numbered positions are multiplied by three, those of the digits in odd-numbered positions by one, and the sum of these weighted values must be a multiple of ten. This calculation is well within the range of XPath 2.0; let us build up the expression in stages.

Given a candidate ISBN in variable $value, we can obtain a string of digits (or X) without the separators by deleting all hyphens and blanks, which we can do in XPath by writing:

translate($value,' -','')

We can turn that, in turn, into a sequence of numbers (the UCS code-point numbers for the characters) using the XPath 2.0 function string-to-codepoints:

string-to-codepoints(translate($value,' -',''))

For example, given the ISBN “0 13 651431 6”, as the value of $value, the expression just given evaluates to the sequence of integers (48 49 51 54 53 49 52 51 49 54). For purposes of the checksum calculation, however, we’d rather have a 0 in the ISBN appear as a 0, not a 48, in our sequence of numbers. And we need to turn X (which maps to 88) into 10. So we write the following XPath 2.0 expression:
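One way to write it (a reconstruction; 88 is the code point of X, 48 that of the digit zero):

```
for $d in string-to-codepoints(translate($value,' -',''))
return if ($d = 88) then 10 else $d - 48
```

For the sample ISBN above, this yields the sequence (0, 1, 3, 6, 5, 1, 4, 3, 1, 6).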

From the integer sequences thus created, we can extract the first digit by writing the filter expression [1], the second digit with [2], etc. It would be convenient to be able to assign the integer sequence to a variable, but that’s not possible in XPath 2.0 (at least, not using normal means). In writing the schema document, however, we can put the expression that generates the sequence into a named entity, thus:
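For instance, by declaring an internal general entity in the schema document's own internal DTD subset and referring to it in both assertions (a reconstruction; the entity name digits is illustrative):

```xml
<?xml version="1.0"?>
<!DOCTYPE xs:schema [
<!ENTITY digits
 "(for $d in string-to-codepoints(translate($value,' -',''))
   return if ($d = 88) then 10 else $d - 48)">
]>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

 <xs:simpleType name="ISBN-10">
  <xs:restriction base="xs:string">
   <xs:length value="13"/>
   <xs:pattern value="[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,7}-[0-9X]"/>
   <xs:pattern value="[0-9]{1,5} [0-9]{1,7} [0-9]{1,7} [0-9X]"/>
   <!-- weights 10, 9, ..., 2, 1; weighted sum must be 0 mod 11 -->
   <xs:assertion
    test="sum(for $i in 1 to 10 return (11 - $i) * &digits;[$i])
          mod 11 = 0"/>
  </xs:restriction>
 </xs:simpleType>

 <xs:simpleType name="ISBN-13">
  <xs:restriction base="xs:string">
   <xs:length value="17"/>
   <xs:pattern value="[0-9]{3}-[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,7}-[0-9]"/>
   <xs:pattern value="[0-9]{3} [0-9]{1,5} [0-9]{1,7} [0-9]{1,7} [0-9]"/>
   <!-- weight 3 on even positions, 1 on odd; sum must be 0 mod 10 -->
   <xs:assertion
    test="sum(for $i in 1 to 13
              return (if ($i mod 2 = 0) then 3 else 1) * &digits;[$i])
          mod 10 = 0"/>
  </xs:restriction>
 </xs:simpleType>

</xs:schema>
```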

Some people, of course, frown on the use of entities in XML and claim that they are not helpful. I think examples like this one clearly show that entities can be very useful when used intelligently; it is much easier to see that the assertions given above are correct than to see the same thing in the equivalent assertions after entity expansion (post-edited here to provide better legibility):
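After expansion, the ISBN-10 assertion, for example, reads (whitespace added for legibility):

```xml
<xs:assertion
 test="sum(for $i in 1 to 10
           return (11 - $i) *
                  (for $d in string-to-codepoints(translate($value,' -',''))
                   return if ($d = 88) then 10 else $d - 48)[$i])
       mod 11 = 0"/>
```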

The use of entity references makes it far easier to be confident that the two, or ten, for-expressions all really do the same thing, and they provide a level of abstraction which, in a simple way, encapsulates the book-keeping details and allows the overall structure of the two test expressions to be more clearly exhibited.

(End of digression.)

The end result is an XSD 1.1 datatype that detects most typos in the recording of ISBNs. It does not, alas, ensure that the legal ISBN one types in is actually the correct ISBN, only that it is a correct ISBN. But using machines to check what machines can check will leave more time for humans to check those things that only humans can check.