Contents

Overall goal

API tests should validate that the specification is interpreted in the same way by (at least) two implementers.

General information

The test cases can only be tied to the JSON responses, which are the evaluated features of the test suite. The reference JSON responses are translations of the reference RDF documents collected in the Ontology for Media Resource test suite.

For running the automatic test suite, it is assumed that a JavaScript interface is available for testing (which might forward calls to a web service in the back-end).
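As a rough sketch, the test driver could call such a JavaScript interface and compare each response against the reference JSON. The function names (`getMediaProperty`, `testProperty`) and the response shape are assumptions for illustration, not taken from the specification; the mock stands in for a browser extension or a web-service-backed implementation.

```javascript
// Hypothetical test driver: ask the implementation for a property and
// compare the JSON response against the normative reference fragment.
function testProperty(impl, uri, property, reference, report) {
  impl.getMediaProperty(uri, property, function (response) {
    // String comparison of the serialized JSON is a simplification;
    // a real comparison would be order-insensitive.
    var passed = JSON.stringify(response) === JSON.stringify(reference);
    report(property, passed);
  });
}

// Mock implementation standing in for an extension or web service:
var mockImpl = {
  getMediaProperty: function (uri, property, cb) {
    cb({ propertyName: property, value: "Example Title" });
  }
};

testProperty(mockImpl, "http://example.org/video", "title",
  { propertyName: "title", value: "Example Title" },
  function (property, passed) {
    console.log(property + ": " + (passed ? "passed" : "failed"));
  });
```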

Binding the test cases to actual interfaces may cause problems due to availability or visibility. In this respect, this is the only intersection between the synchronous and asynchronous specifications.

The test cases have been designed to be independent of the actual implementation mode, so the automatic test suite can be used by both the synchronous and the asynchronous mode.

Implementations Overview

Current Implementations (compliant to new spec version)

Firefox extension (covering YouTube and Vimeo, although Vimeo metadata is not in scope)

Implementations under update/development

Web service implementation by Werner, Tobias and Florian (implementing a subset of mappings between the mw ontology and DC/MPEG-7) (ready until ?)

Web service implementation by Salzburg Research (formats in scope not yet discussed) (ready: mid-December; contact: Thomas Kurz)

Unknown status

ETRI: EXIF, ID3, YouTube (covering all the properties)

IBBT-MMLab: Maybe they can help us out with NinSuna? We should ping Eric on this. (Florian will ping him)

Expected Output Documents

The test suite shall cover tests of whether the implementations behave as defined in the API specification.

(Automatic) Test suite

The test suite has to distinguish between (a) tests that are independent of metadata formats and (b) tests that need information on formats to be conducted.

a) General behaviour (format independent)

* An implementation must be able to echo the implemented mode (sync, async, or both).
* Calling non-existent properties: here, the correct error response has to be set.
* Passing null/bad values: here, the correct error response has to be set.
* Multiple entries of properties.
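A format-independent check like the second item could be sketched as follows. The interface shape (success/error callbacks, an error object carrying a `code` field) is an assumption for illustration; the specification's actual error structure should be substituted.

```javascript
// Hypothetical check: requesting a non-existent property must trigger
// the error callback with an error code set.
function checkErrorResponse(impl, done) {
  impl.getMediaProperty("http://example.org/video", "noSuchProperty",
    function onSuccess() { done(false); },  // no error raised: test fails
    function onError(error) { done(typeof error.code === "number"); });
}

// Mock implementation that rejects unknown properties:
var mock = {
  getMediaProperty: function (uri, prop, ok, err) {
    if (prop === "title") { ok({ value: "Example" }); }
    else { err({ code: 42 }); }  // illustrative error code, not normative
  }
};

checkErrorResponse(mock, function (passed) {
  console.log("error-response test " + (passed ? "passed" : "failed"));
});
```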

b) Metadata mapping test cases (format dependent)

The API specification defines a specific JSON response structure, to which an implementation has to be compliant. To enable an automatic test, a normative JSON response for an example media resource has to be created for each metadata format, covering all properties defined in the corresponding mapping table. The example media resources have already been defined in the ontology test suite.
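The comparison against such a normative response could look like the minimal sketch below, where the reference document maps each property name to its expected JSON fragment. Property and field names are illustrative assumptions.

```javascript
// Compare an implementation's per-property responses against the
// normative reference fragments for one metadata format.
function compareAgainstReference(actual, reference) {
  var results = {};
  for (var property in reference) {
    // Each property is one atomic test: pass iff the fragments match.
    results[property] =
      JSON.stringify(actual[property]) === JSON.stringify(reference[property]);
  }
  return results;
}

var reference = { title: { value: "Example" }, creator: { value: "Alice" } };
var actual    = { title: { value: "Example" }, creator: { value: "Bob" } };
console.log(compareAgainstReference(actual, reference));
```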

Requirements for automatic testing:

* It must be possible to define a certain set of properties of a metadata format to test against.
* The automatic test must indicate whether a test for a specific property failed, is valid with respect to the core requirements, or achieves full compliance.
* An implementation has to be able to retrieve the original metadata document as a whole (at least for the properties in use).
* Optional: Filtering by language and metadata format.

Note 1: A normative JSON response for a particular metadata format is a set of atomic tests. For example, if a metadata format has 10 mappings defined to our ontology, the compliance test for this metadata format may consist of at most 10 atomic tests (plus the general test cases).

Note 2: Some properties are not static fields and change over time; an example is the YouTube rating. In such cases, we can only check whether a value is present or not.
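A per-property check could branch on such dynamic properties, as in the sketch below. The set of dynamic property names and all identifiers are assumptions for illustration.

```javascript
// Properties assumed to change over time (e.g. a YouTube rating):
var DYNAMIC_PROPERTIES = { rating: true };

function checkProperty(name, actual, expected) {
  if (DYNAMIC_PROPERTIES[name]) {
    // Dynamic value: only check that some value is present.
    return actual !== undefined && actual !== null;
  }
  // Static value: compare against the normative reference fragment.
  return JSON.stringify(actual) === JSON.stringify(expected);
}

console.log(checkProperty("rating", 4.5, null));
console.log(checkProperty("title", { value: "a" }, { value: "a" }));
```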

Generation of Implementation Report

The implementation report follows the form of a table in which every atomic operation performed by the automatic test suite is listed. Besides the atomic test cases of the general behaviour, for every format, every property has to be listed in two instantiations (core attributes vs. specific attributes). An example for the Dublin Core creator property would be:

#   | Metadata format | Property | coverage            | Implementation 1 | ... | Implementation n | JSON Fragment
i   | DC              | creator  | core attributes     | ...              | ... | ...              | some link
i+1 | DC              | creator  | specific attributes | ...              | ... | ...              | some link

In the current implementation report, there are also lines for properties for which no values were passed. These cases are already covered by the general behaviour tests, which makes such lines redundant.
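Generating the report rows described above could be sketched as follows: for every format and property, one row is emitted for core attributes and one for specific attributes. All field names are illustrative assumptions.

```javascript
// Build the implementation-report rows (format x property x coverage).
function reportRows(formats) {
  var rows = [], i = 1;
  formats.forEach(function (format) {
    format.properties.forEach(function (property) {
      ["core attributes", "specific attributes"].forEach(function (coverage) {
        rows.push({ nr: i++, format: format.name,
                    property: property, coverage: coverage });
      });
    });
  });
  return rows;
}

console.log(reportRows([{ name: "DC", properties: ["creator"] }]));
```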

Options for running the test

1) An automatic test suite calls the implementations (browser extension or web service backed) and compares the results against the reference data. Here the user is able to configure certain values.

2) The reference results are downloaded and the checking is performed locally. We might also provide some kind of Excel sheet to integrate the results for feedback.