This document defines the Ontology for Media Resources 1.0. The term
"Ontology" is used in its broadest possible definition: a core vocabulary. The
intent of this vocabulary is to bridge the different descriptions of media
resources, and provide a core set of descriptive properties. This document
defines a core set of metadata properties for media resources, along with their
mappings to elements from a set of existing metadata formats. In addition, the
document presents a Semantic Web-compatible implementation of the abstract
ontology using RDF/OWL. The document is mostly targeted at media resources
available on the Web, as opposed to media resources that are only accessible in
local archives or museums.

This section describes the status of this document at the time of its
publication. Other documents may supersede this document. A list of current W3C
publications and the latest revision of this technical report can be found in
the W3C technical reports index at
http://www.w3.org/TR/.

This is the second Last Call Working Draft of the Ontology for Media
Resources 1.0 specification.

This W3C Working Draft version of the Ontology for Media Resources 1.0
specification incorporates requests for changes from comments sent during the
first Last Call Review, as agreed with the commenters (see Disposition
of Last Call comments for Ontology for Media Resources 1.0) and changes
following implementation experiences from the Working Group. The Working Group
wishes to have these changes reviewed before proceeding to Candidate
Recommendation. For convenience, the differences between this second Last Call
Working Draft and the First Last Call
Working Draft are highlighted in the Last
Call Diff file.

The W3C Membership and other interested parties are invited to review the
document and send comments through 31 March 2011. Comments must be sent to the
public-media-annotation@w3.org mailing list (public archive). Use
"[2nd LC Comment ONT]" in the subject line of your email.

Publication as a Working Draft does not imply endorsement by the W3C
Membership. This is a draft document and may be updated, replaced or obsoleted
by other documents at any time. It is inappropriate to cite this document as
other than work in progress.


1 Introduction

This document defines the Ontology for Media Resources 1.0. In this
document, the term "ontology" is used in its broadest possible definition: a
core vocabulary. The Ontology for Media Resources 1.0 is both a core vocabulary
(a set of properties describing media resources) and its mapping to a set of
metadata formats currently describing media resources published on the Web.
Mappings to formats for media resources not available on the Web have not been
taken into account in this version of the Ontology. The purpose of the mappings
is to provide an interoperable set of metadata, thereby enabling different
applications to share and reuse these metadata. The set of properties of the
Ontology for Media Resources 1.0 was selected with respect to the most commonly
adopted set of elements from metadata formats currently in use to describe
media resources.

Ideally, the mappings defined in this document would preserve the semantics
of a metadata item across metadata formats. In reality, however, this cannot be
easily achieved: there is often a difference in the extension of what is
covered by the elements (or terms) from different formats. This means that a
mapping between the Ontology's property and the elements from two different
formats that have such a difference will not allow a semantic-preserving
mapping. For example, the property dc:creator from Dublin Core and
the property exif:Artist defined in the Exchangeable Image File Format (EXIF)
are both mapped to the property creator in the Ontology. The document
therefore also specifies types of
mappings: exact, broader or narrower. Nevertheless, mapping back and forth
between properties from different schemata, using only the Ontology defined in
this specification as a reference, will induce a certain loss in semantics.
Mechanisms for correcting for this loss are beyond the scope of this document.
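The lossiness described above can be sketched concretely. In the sketch below (illustrative only; the mapping table and helper functions are not part of this specification), both dc:creator and exif:Artist map to the creator property, so the reverse mapping can only recover a set of candidate source properties:

```python
# Illustrative sketch: both dc:creator and exif:Artist map to ma:creator,
# so a round trip through ma:creator cannot tell them apart.
MAPPINGS = {
    "dc:creator": "ma:creator",
    "exif:Artist": "ma:creator",
}

def to_ontology(prop):
    """Map a format-specific property to its Media Ontology property."""
    return MAPPINGS[prop]

def from_ontology(ma_prop):
    """Inverse mapping: several source properties may share one target."""
    return sorted(p for p, target in MAPPINGS.items() if target == ma_prop)

# Mapping forward is unambiguous...
assert to_ontology("exif:Artist") == "ma:creator"
# ...but mapping back recovers a set, not the original property: the
# distinction between dc:creator and exif:Artist is lost.
print(from_ontology("ma:creator"))  # ['dc:creator', 'exif:Artist']
```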

The Ontology defines mappings between
its set of properties and the elements from metadata formats commonly used to
describe media resources. The namespace for the Ontology is http://www.w3.org/ns/ma-ont#,
which is identified with the "ma" prefix in this document. Although some of the
properties can appear to be redundant with the Dublin
Core, there are several differences that distinguish them:

Dublin Core is only one of the vocabularies for which a
mapping is defined.

The Dublin Core set does not cover all needs of the Media Ontology;
this specification is thus at least an extension of Dublin Core.

More importantly, the Dublin Core properties have been created with a
set of restrictions. While these restrictions are in general somewhat
loose, this specification requires other restrictions on the
properties of the Ontology, related to its use in an API (see API for Media Resources).

The Media Ontology (i.e. the core set of properties and mappings defined in
this specification) provides the basic information needed by targeted
applications (see Use Cases
and Requirements for Ontology and API for Media Resource 1.0) for
supporting interoperability among the various kinds of metadata formats related
to media resources that
are available on the Web. The Ontology is accompanied by an API (see API for Media Resources 1.0)
that provides uniform access to all of its elements. Furthermore, a Semantic
Web-compatible implementation of the Ontology is available; it is presented
in Section 7 of this document. This implementation uses the Semantic Web
ontology languages RDF and OWL, and its derivation from the core vocabulary is
presented in detail there.

The properties defined in this document are used to describe media resources
that are available on the Web. Media resources can denote both the abstract
concept of a media resource (e.g., the movie "Notting Hill") as well as a
specific instance (e.g., a certain file with an MPEG-4 encoding of the English
version of "Notting Hill" with French subtitles). For the sake of simplicity,
we do not make distinctions between these different levels of abstraction that
exist in some formats (e.g., [FRBR]).

1.1 Formats in scope

This section is normative; however, examples contained in this section
are informative.

The following table lists the formats that were selected as in-scope of a
potential mapping from the Media Ontology, along with the identifiers which are
used as prefixes to identify them in this specification.

We distinguish multimedia metadata formats that focus on the description of
multimedia resources from multimedia container formats. In the case of the
latter, only a few technical properties are relevant for the Ontology for Media
Resources, because of their widespread usage. Very specific properties are out
of the scope of this specification.

1.2 Formats out of scope

2 Conformance Requirements

This section is normative.

This document contains normative, non-normative, and informative sections.
The parts of this document that define the Ontology, as well as the syntactic
and semantic level mappings between elements from existing formats and the core
properties defined in this document, are normative, and are marked as such. For
normative sections only, the keywords "MUST", "MUST NOT", "REQUIRED", "SHALL",
"SHALL NOT", "SHOULD", "RECOMMENDED", "MAY", and "OPTIONAL" are to be
interpreted as described in RFC2119 [RFC
2119]. To facilitate the differentiation between the normative use
of these terms as defined in RFC2119 and a non-normative use of these terms,
the normative use of these terms MUST occur in all capital letters. All other
sections, including examples, are not normative.

A "strictly conforming" application is one that satisfies all
"MUST" and "SHALL" provisions in this document. In contrast, a
"conditionally conforming" application is one that satisfies all
"MUST" provisions in this document, but not all "SHALL" provisions. It should
be noted that an application that does not satisfy all "MUST"
provisions in this document is not conforming.

Note: In this specification, the terms "Media Ontology" and
"Ontology for Media Resources 1.0" are used interchangeably.

3 Terminology

A formal definition of an ontology is as follows. "An ontology is a
formal, explicit specification of a shared, often machine-readable,
vocabulary. Its meaning, in the form of entities and relationships
between them, intends to describe some knowledge in a given domain.
Formal refers to the fact that the ontology should be representable in a
formal grammar. Explicit means that the entities and relationships used,
and the constraints on their use, are precisely and unambiguously defined
in a declarative language suitable for knowledge representation. Shared
means that all users of an ontology will represent a concept using the
same or equivalent set of entities and relationships. Domain refers to
the content of the universe of discourse being represented by the
ontology" [KEUO]. In this specification,
the broadest possible definition of an ontology is used: a shared
vocabulary. The vocabulary in question is the list of core properties
(relationships) defined here (prefixed ma in this document); its
machine-readable format is specified in the following sections. The
vocabulary used is RDF [RDF]. However, implementations are not limited to
using RDF: they MAY use different formats and still be
considered conformant with this specification, as long as they
comply with the definition of the properties listed in section 5.

A media resource is any physical or logical resource that can be
identified using a Uniform Resource Identifier (URI), as defined by
[RFC 3986], which has or is related
to one or more media content types. Note that [RFC 3986] points out that a resource may be
retrievable or not. Hence, this term encompasses the abstract notion of a
movie (e.g., Notting Hill) as well as the binary encoding of this movie
(e.g., the MPEG-4 encoding of Notting Hill on a DVD), or any intermediate
levels of abstraction (e.g., the director's cut or the plain version of
the Notting Hill movie). Although some ontologies (FRBR, BBC) define different
concepts for different levels of abstraction, other ontologies do not.
Therefore, in order to foster interoperability, the ontology defined in
this specification does not provide such a classification of media
resources.

A property is an element from an existing metadata format for
describing media
resources, or an element from the core vocabulary as defined in this
specification. For example, the Dublin Core dc:creator element and the Media Ontology
creator element are both properties. A property links a Media Resource with a
literal value or another resource. In the above example, the
dc:creator property links a given resource with the value of its
creator property. In this example, Dublin Core does this by defining the
dc:creator property as follows: "Examples of a creator include a
person, an organization, or a service".

Properties can have structured or unstructured values. The set of
properties defined in the Media Ontology core vocabulary is listed in
section 5 Property
definitions.

For the purposes of this document, a mapping is defined as
a function that transforms information represented in one schema using
one format to information in a different schema that uses a different
format. In this document, a set of mappings is defined between a subset
of the "in scope" vocabularies and the properties of the core vocabulary of the Media
Ontology defined in this document. These mappings are presented
in section 5.2 Property mapping
table.

Property value types are the data types of the values for a property. For example, the property
dc:creator can have either string or URI as data types. Property
value types are defined in section 4 Property value type
definitions. They are dependent on XML Schema data types
[XML Schema 2].

4 Property value type definitions

This section is normative.

Note:

Currently, the data types of property values that are used in this document are
defined in terms of XML Schema 1.1, part 2.

Applications that wish to be conformant with this specification MUST use the
data types specified in this section for property values that are defined in
this specification.

4.1 URI

"A Uniform Resource Identifier", or URI, is defined in [RFC 3986]. In this specification, the term URI is
used, since it is well known. However, the use of this term is extended in this
specification to also include "Internationalized Resource Identifiers" (IRIs),
as defined in [RFC 3987]. An IRI is a URI
that MAY contain unescaped non-ASCII characters. The data
type is anyURI. Hence,
in this specification, the term "URI" MUST be interpreted to also include IRI.

4.2 String

A String value MUST be represented using the XML Schema string data type.

4.3 Integer

An Integer value MUST be represented using the XML Schema integer data type.

4.4 Double

A Double value MUST be represented using the XML Schema double data type.

4.5 Date

A Date value MUST be represented using the XML Schema dateTime data type.
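As an informal illustration, the value types above can be approximated with Python's standard library (a sketch only; conformance is defined by the XML Schema data types, xsd:dateTime is merely approximated here by ISO 8601 parsing, and the URI type is omitted because anyURI places almost no lexical restrictions):

```python
from datetime import datetime

def check_value(value, value_type):
    """Rough check of a value against the property value types of section 4.
    Sketch only: real conformance is defined by XML Schema 1.1 part 2."""
    if value_type == "String":
        return isinstance(value, str)
    if value_type == "Integer":
        return isinstance(value, int) and not isinstance(value, bool)
    if value_type == "Double":
        return isinstance(value, float)
    if value_type == "Date":
        try:
            # Approximates xsd:dateTime via ISO 8601 parsing.
            datetime.fromisoformat(value)
            return True
        except (TypeError, ValueError):
            return False
    raise ValueError(f"unknown value type: {value_type}")

assert check_value("Notting Hill", "String")
assert check_value(7230.0, "Double")               # e.g. a duration in seconds
assert check_value("1999-05-21T00:00:00", "Date")
assert not check_value("42", "Integer")            # a string, not an integer
```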

5 Property definitions

This section is normative; however, examples contained within this
section are informative.

5.1 Core property definitions

5.1.1 Description of the approach followed for the property definitions

This list of core properties has been defined by creating an initial set of
mappings from the list of vocabularies in
scope. The core list is a selection of the properties that were supported
by the majority of the vocabularies in scope [findtop10].

Several properties in this specification are defined as complex types,
consisting of a tuple of attributes. This is used to support qualifiers and
optional attributes. Hence, a special syntax has been defined to accommodate
this requirement, and is explained below.

All property names are intentionally in singular form and MUST contain
only a single value. However, multiple instances of a property MAY be used. In
addition, each property MAY have an associated language attribute, which can be
used to enable several instances of that property to be defined in different
languages.

The following syntax is used for the type descriptions:

( ) (parentheses) are used to indicate an attribute/value pair

| (vertical bar) is used to indicate a choice between different
values

{ } (curly brackets) are used to define a complex type, i.e., a tuple
of attribute/value pairs

? (question mark) is used to indicate an optional element

For example, contributor { (attName="contributor", attValue="URI" | "String"),
(attName="role", attValue="URI" | "String")? } is interpreted as a complex type
that has two elements. The first identifies the contributor of a media resource
by using a URI or a string. The second specifies an optional role, which is
defined by a string. Elements are comma separated, and the collection of
elements that makes up the complex type is enclosed in curly brackets.
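The contributor example above can, for instance, be modelled as a small data structure (illustrative only; the class and field names are not part of this specification):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contributor:
    """Sketch of the contributor complex type:
    { (attName="contributor", attValue="URI" | "String"),
      (attName="role",        attValue="URI" | "String")? }"""
    contributor: str            # URI or plain string identifying the agent
    role: Optional[str] = None  # optional role qualifier, e.g. "director"

# The role attribute is optional, so both of these are valid instances:
c1 = Contributor("http://www.imdb.com/name/nm0000318/", role="director")
c2 = Contributor("Roger Michell")
assert c2.role is None
```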

5.1.2 Descriptive properties (Core Set)

Name

Type definition

Description

Identification

identifier

(attName="identifier", attValue="URI")

A URI identifying a media resource, which can be either an abstract
concept (e.g., Hamlet) or a specific object (e.g., an MPEG-4 encoding
of the English version of "Hamlet").

title

A tuple that specifies the title or name given to the resource. The
type can be used to optionally define the category of the title.

language

(attName="language", attValue="URI" | "String")

The language used in the resource. We recommend using a controlled
vocabulary such as [BCP 47]. A BCP 47 language identifier can also
identify sign languages, e.g. using ISO 639-3 subtags like bfi
(British Sign Language).

locator

(attName="locator", attValue="URI")

The logical address at which the resource can be accessed (e.g. a
URL, or a DVB URI).

contributor

A tuple identifying the agent, using either a URI (recommended best
practice) or plain text. The role can be used to optionally define the
nature of the contribution (e.g., actor, cameraman, director, singer,
author, artist, or other role types). An example of such a tuple is:
{imdb:nm0000318, director}.

creator

A tuple identifying the author of the resource, using either a URI
(recommended best practice) or plain text. The role can be used to
optionally define the category of author (e.g., playwright or author).
The role is defined as plain text. An example of such a tuple is:
{dbpedia:Shakespeare, playwright}.

location

A tuple identifying an optional name and/or an optional set of
geographic coordinates, in a given system (which is also optionally
specified), that describe where the resource has been created,
developed, recorded, or otherwise authored. The optional name can be
defined using either a URI (recommended best practice) or plain text.
The optional geographic coordinates MAY include longitude, latitude,
and/or altitude information, in a given geo-coordinate system (such as
the World Geodetic System)
that MAY also be specified.

Content description

description

(attName="description", attValue="String")

Free-form text describing the content of the resource.

keyword

(attName="keyword", attValue="URI" | "String")

A concept, descriptive phrase or keyword that specifies the topic of
the resource, using either a URI (recommended best practice) or plain
text. In addition, the concept, descriptive phrase, or keyword
contained in this element SHOULD be taken from an ontology or a
controlled vocabulary.

genre

(attName="genre", attValue="URI" | "String")

The category of the content of the resource, using either a URI
(recommended best practice) or plain text. In addition, the genre
contained in this element SHOULD be taken from an ontology or
controlled vocabulary, such as the EBU
vocabulary.

rating

A tuple defining the rating value, an optional rating person or
organization defined as either a URI (recommended best practice) or as
plain text, and an optional voting range. The voting range can
optionally be used to define the minimum and maximum values that the
rating can have.

relation

A tuple that identifies a resource that the current resource is
related to (using either a URI, recommended best practice, or plain
text), and optionally specifies the nature of the relationship. An
example is a listing of content that has a (possibly named)
relationship to another content, such as the trailer of a movie, or the
summary of a media resource.

collection

(attName="collection", attValue="URI" | "String")

The name of the collection (using either a URI or plain text) from
which the resource originates or to which it belongs. We recommend
using a URI as a best practice.

copyright

A tuple containing the copyright statement associated with the
resource and, optionally, the identifier of the copyright holder. Other
issues related to Digital Rights Management are out of scope for this
specification.

policy

A tuple containing a policy statement, either human-readable as a
string or machine-resolvable as a URI, and the type of the policy, to
provide more information as to the nature of the policy. See examples.

Distribution

publisher

(attName="publisher", attValue="URI" | "String")

The publisher of a resource, defined as either a URI or plain text.
We recommend, as a best practice, defining the publisher as a URI.

frameSize

A tuple defining the frame size of the resource (e.g., width and
height of 720 and 480 units, respectively). The units can be optionally
specified; if the units are not specified, then the units MUST be
interpreted as pixels.

compression

(attName="compression", attValue="URI" | "String")

The compression type used. For container files (e.g., QuickTime,
AVI), the compression is not defined by the format, as a container file
can have several tracks that each use different encodings. In such a
case, several compression instances should be used. Thus, querying the
compression property of the track media fragments will return different
values for each track fragment. Either or both of two values may be
supplied: a URI, and a free-form string which can be used for user
display or when the naming convention is lost or unknown. The URI
consists of an absolute-URI (RFC 3986 [RFC
3986], section 4.3) and a fragment (RFC 3986 [RFC 3986], section 3.5), that is, e.g. in
the form absolute-URI#name. The absolute-URI identifies the naming
convention used for the second parameter, which is a string name from
that convention. A URL is preferred for the URI, and if it is used, it
(a) might contain a date in the form mmyyyy, indicating that the owner
of the domain in the URL agreed to its use as a label around that date
and (b) should be de-referencable, yielding an informative resource
about the naming convention. Note that this use of URIs with fragments
also closely matches RDF (see RDF
concepts). Note that for some container files, the format parameter
can also carry an extended MIME type to document this; see [RFC 4281] for one such instance. See examples.

duration

(attName="duration", attValue="Double")

The actual duration of the resource. The units are defined to be
seconds.

format

(attName="format", attValue="URI" | "String")

The MIME type of the resource (e.g.,
wrapper or bucket media types), ideally including as much information
as possible about the resource such as media type parameters, for
example, using the "codecs" parameter [RFC
4281].

numTracks

A tuple defining the number of tracks of a resource, optionally
followed by the type of track (e.g., video, audio, or subtitle).

A number of these properties use qualifiers to define subtypes and roles:
identifier, title, contributor, creator, date, relation, collection, policy,
fragment and numTracks. In addition, the location, rating, copyright, and
frameSize properties use optional elements to define the unit of measure of
their values, the ranges that the values of these elements can have, or other
supplementary information. All subtype and role qualifiers for these properties
are optional. The set of possible values for subtypes is not normative.
However, whenever possible, values defined in an existing controlled vocabulary
or classification scheme SHOULD be used.

5.1.3 Examples for the Core Set of properties

5.1.3.1 Examples for the compression property

Example

Property

Attribute name

Value

Comment

Example 1

compression

compression

urn:example-org:codingnames2010#ITU-H264

ITU-H264 is defined by example.org (who also
defined a URN to identify their naming convention), and G711 by example.net
(who use a URL to identify theirs).

compression

compression

Advanced Video Coding

Example 2

compression

compression

http://example.net/012011/standards/codecs.htm#G711

The second example gives only an identifier.

Example 3

compression

compression

Raw audio

The third example has no identifier, only an indicator.

Example 4

compression

compression

urn:x-ul:060E2B34.0401.0101.04020202.03020500

layer 2 or 3 compression, SMPTE

compression

compression

MPEG Layer II/III

Example 5

compression

compression

AVC MP@L42

AVC compression, Cablelabs

Example 6

compression

compression

c125

AVC compression, IPTC
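Splitting a compression identifier of the form absolute-URI#name into the naming convention and the name within it, as described in section 5.1.2, can be sketched as follows (the helper function is illustrative; the value comes from Example 2 above):

```python
from urllib.parse import urldefrag

def split_compression(value):
    """Split a compression identifier of the form absolute-URI#name into
    (naming-convention URI, name). Sketch only; the name component is
    empty when no fragment is present."""
    return urldefrag(value)

uri, name = split_compression(
    "http://example.net/012011/standards/codecs.htm#G711")
assert uri == "http://example.net/012011/standards/codecs.htm"
assert name == "G711"
```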

5.1.3.2 Examples for the policy property

The "type definition" of the policy property would include:

policy.statement : A human-readable description of the Policy (string)
or an Identifier of the Policy (URI)

policy.type : The category of the Policy (URI)

Recommended values for policy.type are the Meta information terms from the
XHTML Vocabulary (http://www.w3.org/1999/xhtml/vocab/#).

The copyright would naturally be mapped into policy.statement.

Examples:

Property

Attribute name

Value

policy

statement

Copyright PLING Inc 2010. All Rights Reserved

type

http://www.w3.org/1999/xhtml/vocab/#copyright

policy

statement

http://p3pbook.com/examples/10-4.xml

type

http://www.w3.org/1999/xhtml/vocab/#p3pv1

policy

statement

http://odrl.net/license/license.xml

type

http://www.w3.org/1999/xhtml/vocab/#license

policy

statement

http://creativecommons.org/licenses/by/3.0/

type

http://www.w3.org/1999/xhtml/vocab/#license
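The example rows above can be collected as (policy.statement, policy.type) pairs, e.g. in the following sketch (illustrative only; the list and variable names are not part of this specification):

```python
# The policy examples above as (policy.statement, policy.type) pairs.
# A statement may be human-readable (string) or machine-resolvable (URI);
# the type URI says what kind of policy it is.
XHTML_VOCAB = "http://www.w3.org/1999/xhtml/vocab/#"

POLICIES = [
    ("Copyright PLING Inc 2010. All Rights Reserved",
     XHTML_VOCAB + "copyright"),
    ("http://p3pbook.com/examples/10-4.xml", XHTML_VOCAB + "p3pv1"),
    ("http://odrl.net/license/license.xml", XHTML_VOCAB + "license"),
    ("http://creativecommons.org/licenses/by/3.0/",
     XHTML_VOCAB + "license"),
]

# Two of the four example policies are licenses:
licenses = [s for s, t in POLICIES if t == XHTML_VOCAB + "license"]
assert len(licenses) == 2
```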

5.2 Property mapping table

5.2.1 Rationale regarding the mapping table

The mappings between the Media Ontology and a subset of the "in-scope"
vocabularies of this specification specify both the semantic and some elements
of the syntactic correspondences between the Media Ontology properties and the
elements of a given vocabulary. The vocabularies selected were those that were
deemed to be the most popular and useful regarding the proposed Use Cases (see
Use Cases and Requirements for
Ontology and API for Media Resource 1.0).

5.2.1.1 Semantic Level Mappings

The presented mappings are uni-directional mappings, because the semantics
of the elements being mapped from the same Media Ontology property may be very
different across formats. For example, copyright is mapped to both
xmpDM:copyright and dc:rights (as part of the XMP standard
[XMP]); the same property is mapped to
exif:Copyright (see [EXIF]).
Unfortunately, no semantic relationship can be inferred between the elements
defined in the XMP and EXIF standards. Each mapping that has been taken into
account has one of the following four
characteristics:

Exact match: the semantics of the two properties are equivalent in all
possible contexts. For example, the semantics of the property
title exactly matches the semantics of the property
vra:title.

More specific: the property of the vocabulary taken into account has
associated semantics that contain a superset of the semantics expressed by
the property defined in this specification. For example, in DIG35,
ipr_names@description and ipr_person@description are both more specific
than the property publisher to which they are mapped.

More generic: the inverse of the above, meaning that the property of
the vocabulary taken into account has associated semantics that are broader
than the property defined in this specification. For example, the DIG35
location element is more general than the location property.

Related: the two properties are related in a way that is relevant for
some use cases, but this relation has no defined and/or commonly applied
semantics. For example, in Media RSS,
media:credit is related to creator.

This list of relations between vocabularies (or informal mappings) and the
"Core Media Properties list" is published as a table. Feedback from people or
companies actually using the different vocabularies is very welcome; if such
feedback is received, it will be incorporated into an update of this
specification.

5.2.1.2 Syntactic Level Mappings

Syntactic level mappings define the correspondence between two similar
properties that have different syntactic expressions, but (roughly) similar
associated semantics. For example, one important use case is date formatting,
where the format of the date and/or time used is different in two vocabularies,
but the overall semantics (identifying a date and/or time) is the same.
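As an illustration of such a syntactic mapping, the sketch below converts a date in the EXIF lexical form ("YYYY:MM:DD HH:MM:SS") into the xsd:dateTime lexical form; the function is illustrative and not part of this specification:

```python
from datetime import datetime

def exif_date_to_xsd(value):
    """Syntactic mapping sketch: EXIF dates use 'YYYY:MM:DD HH:MM:SS',
    while xsd:dateTime uses 'YYYY-MM-DDTHH:MM:SS'. The semantics (a date
    and time) are the same; only the lexical form differs."""
    return datetime.strptime(value, "%Y:%m:%d %H:%M:%S").isoformat()

assert exif_date_to_xsd("2010:12:24 14:30:00") == "2010-12-24T14:30:00"
```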

5.2.1.3 Mapping expression

The mapping expression corresponds to the concrete implementation or
representation of the mappings defined in the previous paragraphs, both at
the semantic level and at the syntactic one.

SKOS
(acronym for the Simple Knowledge Organization System) is a Recommendation of
the W3C Semantic Web activity which defines a vocabulary for representing
Knowledge Organization Systems, such as vocabularies, and relationships amongst
them. In SKOS the
mapping properties that we take into account in the mapping table are expressed
as: skos:exactMatch, skos:narrowMatch,
skos:broadMatch and skos:relatedMatch.
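One plausible alignment between the mapping types of section 5.2.1.1 and these SKOS properties, following the order in which both lists are given, can be tabulated as follows (illustrative; note that the direction of skos:narrowMatch versus skos:broadMatch depends on which property is the subject of the SKOS statement):

```python
# Illustrative alignment of the mapping types of section 5.2.1.1 with the
# SKOS mapping properties. The narrow/broad direction is an assumption:
# it depends on which property is the subject of the SKOS statement.
MAPPING_TYPE_TO_SKOS = {
    "exact match":   "skos:exactMatch",
    "more specific": "skos:narrowMatch",
    "more generic":  "skos:broadMatch",
    "related":       "skos:relatedMatch",
}

# e.g. the semantics of vra:title exactly matches ma:title:
assert MAPPING_TYPE_TO_SKOS["exact match"] == "skos:exactMatch"
```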

A future version of this specification may include additional information
about the properties. For example, some restrictions might be added to a set of
mappings (e.g., if they are symmetric) to enable more efficient mappings. If
such changes are implemented, every effort will be made to produce a new and
revised specification that is backwards-compatible with the current version of
this specification.

5.2.2 Multimedia metadata formats mapping tables

The following mappings are established from the Media Ontology's properties
to various multimedia metadata formats. This list of formats is not closed, nor
does it pretend to be exhaustive. A future version of this specification may
include additional mappings if a need or use case is established for these new
mappings.

For each format there is a mapping table with the following columns.

MAWG: the name of the property being mapped to, such as
identifier, title, etc.

Spec: the abbreviation of the specification which defines that
property.

How to do the mapping: details about the mapping. Not given
for all formats.

Datatype: the datatype of the format specific property.

Required vs Optional: information about optionality. Not
given for all formats.

XPath: an XPath 1.0 expression pointing to the property in
the format. Not given for all formats.

5.2.2.1 CableLabs 1.1

For the CableLabs format the mapping table has the following extra
columns.

Type (MediaType): Defines the type of asset that this field
refers to in the CableLabs 1.1 ADI and Content Specs. The type defines
whether the asset is a movie, a still or another structure in the whole Video
package.

Spec: In the CableLabs 1.1 ADI and Content Specs there
are two specifications (ADI and CONTENT) which apply to the management of
the content and to the content metadata itself. AMS refers to the former;
MOD or SVOD refers to the latter, defining the type of service
used.

String, one advisory per element (max 1024 characters
for all advisories). Examples:
&lt;app_data app="MOD" name="Advisories" value="S"/&gt; and
&lt;app_data app="MOD" name="Advisories" value="V"/&gt;. There are at most six
occurrences of "Advisories", with a combined maximum of at most 12
characters.


Opt

N/A

Relational

relation

more general

Movie, Still-Image, Preview, TrickAsset, Encrypted Asset

AMS

Asset_Class

A system-level type for the asset. This is intended to be helpful for
the application mapping and routing, and expected to be more general
than the Type value for the content. Expected Value is "package".

NOTE: @scheme is the URI that identifies the categorization scheme.
It is an optional attribute. If this attribute is not included, the
default scheme is 'http://search.yahoo.com/mrss/category_schema'.

Sum of all the attributes in AudioChannels + 1 (for
video). If the profile of the MPEG-7 document is known, the number of
video and audio channels could be determined from the number of
parallel tracks being described.

A date and time, stored using the extended format defined in ISO
8601:2004 (Data elements and interchange formats).

key: com.apple.quicktime.location.date value: a string containing the
location date and time

Defined in ISO 8601:2004 (Data elements and interchange formats).

N/A

related

A machine readable facing direction. Directions are specified as a
string consisting of one or two angles, separated by a slash if two
occur. The first is a compass direction, expressed in degrees and
decimal degrees, optionally preceded by the characters "+" or "-", and
optionally followed by the character "M". The direction is determined
as accurately as possible; the nominal due north (zero degrees) is
defined as facing along a line of longitude of the location system,
unless the angle is followed by the "M" character indicating a magnetic
heading. The second is an elevation direction, expressed in degrees and
decimal degrees between +90.0 and -90.0, with 0 being horizontal
(level), +90.0 being straight up, and -90.0 being straight down (and
for these two cases, the compass direction is irrelevant).

A UTF-8 string. This should not be tagged with a country or language
code.

N/A

related

A machine readable direction of motion. Directions are specified as a
string consisting of one or two angles, separated by a slash if two
occur. The first is a compass direction, expressed in degrees and
decimal degrees, optionally preceded by the characters "+" or "-", and
optionally followed by the character "M". The direction is determined
as accurately as possible; the nominal due north (zero degrees) is
defined as facing along a line of longitude of the location system,
unless the angle is followed by the "M" character indicating a magnetic
heading. The second is an elevation direction, expressed in degrees and
decimal degrees between +90.0 and -90.0, with 0 being horizontal
(level), +90.0 being straight up, and -90.0 being straight down (and
for these two cases, the compass direction is irrelevant).
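
Both of the direction strings above share the same grammar. The
following sketch parses it; the exact tolerance for signs and
whitespace is an assumption drawn from the prose, not a normative
grammar.

```python
import re

# Compass angle: degrees and decimal degrees, optionally preceded by
# "+" or "-", optionally followed by "M" (magnetic heading).
_COMPASS = re.compile(r'^([+-]?\d+(?:\.\d+)?)(M?)$')

def parse_direction(value):
    """Return (compass_degrees, is_magnetic, elevation_degrees);
    elevation_degrees is None when only one angle is given."""
    parts = value.split('/')
    if len(parts) not in (1, 2):
        raise ValueError("expected one or two angles separated by '/'")
    m = _COMPASS.match(parts[0])
    if m is None:
        raise ValueError('malformed compass angle: %r' % parts[0])
    compass = float(m.group(1))
    magnetic = m.group(2) == 'M'
    elevation = None
    if len(parts) == 2:
        elevation = float(parts[1])
        if not -90.0 <= elevation <= 90.0:
            raise ValueError('elevation must lie between -90.0 and +90.0')
    return compass, magnetic, elevation
```

For example, "90.0M/-45.0" denotes a magnetic heading of 90 degrees
pointing 45 degrees below the horizontal.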

A UTF-8 string. Can have multiple values with different language and
country code designations.

N/A

targetAudience

N/A

N/A

Fragments

fragments

N/A

N/A

namedFragments

N/A

N/A

Technical Properties

frameSize

exact

The width and height fields from the track header box of that track.
moov.trak.tkhd.(track width | track height)

If requested for a movie, and there is only one video track, or if
requested for a specific video track, the width and height of that
track. If the requested movie has more than one visual track, it is
calculated as the spatial union of all non-empty track dimensions.

Follow the box hierarchy inside the movie box, into each video or
sound track’s mdia.stbl.stsd, and then extract the 4-character codes
from the video sample description or descriptions.

four character code(s)

N/A

duration

exact

The duration field from the movie header (overall movie) or track
header (for a track), divided by the timescale from the movie header.
moov.mvhd.duration or moov.trak.tkhd.duration; divided by
moov.mvhd.timescale

Find the movie header box (mvhd) and get the timescale field, and
then retrieve the duration field from the movie or track header (mvhd,
tkhd) as appropriate, and divide.

float (after division), rational (as stored)

N/A
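
The procedure above can be sketched as follows. This is a minimal
illustration that assumes 32-bit box sizes and a version-0 movie
header; real files may also use 64-bit sizes and version-1 headers,
and a track duration (tkhd) is handled analogously with the same movie
timescale.

```python
import struct

def movie_duration_seconds(data):
    # Scan the top-level boxes for moov, find its mvhd child, and
    # divide duration by timescale.
    pos = 0
    while pos + 8 <= len(data):
        size, kind = struct.unpack('>I4s', data[pos:pos + 8])
        if size < 8:
            break  # malformed or 64-bit size, not handled in this sketch
        if kind == b'moov':
            inner, end = pos + 8, pos + size
            while inner + 8 <= end:
                isize, ikind = struct.unpack('>I4s', data[inner:inner + 8])
                if ikind == b'mvhd':
                    # version-0 mvhd payload: version/flags (4 bytes),
                    # creation time (4), modification time (4),
                    # timescale (4), duration (4), ...
                    timescale, duration = struct.unpack(
                        '>II', data[inner + 20:inner + 28])
                    return duration / timescale
                inner += isize
        pos += size
    raise ValueError('no moov.mvhd box found')
```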

format

exact

video/quicktime (valid for all resources)

MIME type

N/A

samplingRate

exact

The sample rate field in the version 0 or 1 sound sample
description(s) for the movie sound tracks. This is a 16.16 fixed-point
number; the fractional 16 bits may be non-zero.
moov.trak.mdia.minf.stbl.stsd.(sound sample description
v0/v1.sampleRate)

Follow the box hierarchy inside the movie box, into each sound
track’s mdia.stbl.stsd, and locate the sound description. Confirm the
sound description version is 0 or 1 before proceeding. Retrieve the
32-bit fixed-point number.

32-bit fixed-point number (16.16)

N/A
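
Converting the stored 16.16 fixed-point value to a sample rate is a
single division; a sketch, assuming the field has been read as an
unsigned 32-bit integer:

```python
def sample_rate_hz(fixed_16_16):
    # 16 integer bits, 16 fractional bits: divide by 2**16.
    return fixed_16_16 / 65536.0

def sample_rate_int(fixed_16_16):
    # Integer variant (a right shift by 16 bits), usable when the
    # fractional bits are restricted to zero, as in the 3GP/MP4
    # mappings later in this section.
    return fixed_16_16 >> 16
```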

exact

The audio sample rate field in the version 2 sound sample
description(s) for the movie sound tracks. This is a 64-bit floating
point double. moov.trak.mdia.minf.stbl.stsd.(sound sample description
v2.audioSampleRate)

Follow the box hierarchy inside the movie box, into each sound
track’s mdia.stbl.stsd, and locate the sound description. Confirm the
sound description version is 2 before proceeding. Retrieve the 64-bit
double.

Double

N/A

frameRate

more general

The sample count from the sample size (stsz) box in the sample table,
divided by the duration (see above).
moov.trak.mdia.minf.stbl.stsz.sampleCount, divided by duration. NOTE:
As frame durations may vary within a track, this is the average frame
rate. The frame rate is not guaranteed to be constant.

Either (a.i) sum the top-level box sizes or (a.ii) find the file size
from external means (e.g. file system) or (b) for each track, compute
the total sample size (from the sample size table). Then divide by
duration (computed above).

(a.i) sum over all top-level atoms (atom size), or (b) sum over all
samples (moov.trak.mdia.minf.stbl.stsz.sampleSize; the sample count is
also in the stsz box)

either ISO/IEC 646:1991 (ISO 7-bit coded character set) or a binary
mapping of a 64-bit time code

N/A

format

more specific

06.0E.2B.34.01.01.01.03

04.09.02.01.00.00.00.00

MIME media type

value

ISO/IEC 646:1991 - ISO 7-Bit Coded Character Set

N/A

samplingRate

exact

06.0E.2B.34.01.01.01.05

04.02.03.01.01.01.00.00

Audio Sample Rate

value

Rational

N/A

frameRate

exact

06.0E.2B.34.01.01.01.01

04.01.03.01.03.00.00.00

Frame Rate

value

UInt16

N/A

averageBitRate

more specific

06.0E.2B.34.01.01.01.02

OR

06.0E.2B.34.01.01.01.03

04.02.03.01.02.00.00.00

OR

04.01.05.01.11.00.00.00

Audio/Video Average Bit Rate

calculated from video+audio bitrate

Floating Point

N/A

numTracks

more specific

06.0E.2B.34.01.01.01.05

04.02.01.01.04.00.00.00

Channel Count (Audio)

Audio channel count + 1 (for video). If the container is accessible,
the value could be determined directly from the tracks in the container.

UInt32

N/A

5.2.2.16 TTML

MAWG

Relation

TTML

How to do the mapping

Datatype

XPath

Descriptive Properties (Core Set)

Identification

identifier

N/A

title

more specific

title

#PCDATA

metadata/ttm:title

language

exact

xml:lang

#CDATA

tt/@xml:lang

locator

N/A

Creation

contributor

more general

agent

with type attribute values person|group|organization

#PCDATA in each of the name elements

metadata/ttm:agent/ttm:name

creator

more general

agent

with type attribute values person|group|organization

#PCDATA in each of the name elements

metadata/ttm:agent/ttm:name

date

N/A

location

N/A

Content description

description

exact

desc

#PCDATA

metadata/ttm:desc

keyword

N/A

genre

N/A

rating

Relational

relation

N/A

collection

N/A

Rights

copyright

exact

copyright

#PCDATA

metadata/ttm:copyright

policy

N/A

Distribution

publisher

more general

agent

with type attribute values person|group|organization

#PCDATA in each of the name elements

metadata/ttm:agent/ttm:name

targetAudience

N/A

Fragments

fragments

more general

@begin, @end

begin/end attribute of one of the following elements:
body, div, p, region, span

*/@begin, */@end

namedFragments

more general

@begin, @end

begin/end attribute of one of the following elements:
body, div, p, region, span; using media-marker-value flavour of the
attribute value

*/@begin, */@end

Technical Properties

frameSize

N/A

compression

N/A

duration

more general

@dur

dur attribute of one of the following elements: body,
div, p, region, span

*/@dur

format

N/A

samplingRate

N/A

frameRate

more general

frameRate

on one of the following elements: body, div, p, region,
span

*/ttp:frameRate

averageBitRate

N/A

numTracks

N/A

5.2.2.17 TV-Anytime

MAWG

Relation

TV-Anytime

How to do the mapping

Datatype

XPath

Each XPath expression is to be interpreted in the following
context:

TVAMain/ProgramDescription/ProgramInformationTable/ProgramInformation

The default namespace is urn:tva:metadata:2010.

Descriptive Properties (Core
Set)

Identification

identifier

exact

programId

OtherIdentifier

anyURI

string

@programId

or

OtherIdentifier

title

exact

Title

ShortTitle

or

TitleImage

or

TitleVideo

or

TitleAudio

Media titles allow identifying the resource by means
other than text

string

string

anyURI

anyURI

anyURI

Title

or

ShortTitle

or

MediaTitle/TitleImage/MediaUri

or

MediaTitle/TitleVideo/MediaUri

or

MediaTitle/TitleAudio/MediaUri

language

exact

Language, CaptionLanguage, SignLanguage

TVA gives information on three distinct types of language, with
additional attributes; aggregating this information would allow a more
complete description of the language.

string

string

string

BasicDescription/Language/language/@type or
BasicDescription/Language/language/@supplemental

or

BasicDescription/CaptionLanguage/language/@primary or
BasicDescription/CaptionLanguage/language/@translation or
BasicDescription/CaptionLanguage/language/@supplemental or
BasicDescription/CaptionLanguage/language/@closed

or

BasicDescription/SignLanguage/language/@primary or
BasicDescription/SignLanguage/language/@translation or
BasicDescription/SignLanguage/language/@type or
BasicDescription/SignLanguage/language/@closed

dc:creator property in the Dublin Core namespace. In
XMP, the tiff:Artist property from the Exif namespace for TIFF
properties is stored as the first item in dc:creator.

sequence of names

N/A

date

exact

xmp:CreateDate

xmp:CreateDate property in the XMP Basic namespace

ISO date format

N/A

exact

photoshop:DateCreated

photoshop:DateCreated property in the Photoshop
namespace

ISO date format

N/A

exact

exif:DateTimeOriginal

exif:DateTimeOriginal property in the Exif namespace for
Exif-specific properties. This should not be stored in files, only
added to extracted XMP for application runtime convenience.

ISO date format

N/A

related

dc:date

dc:date property in the Dublin Core namespace

sequence of ISO date format values

N/A

related

xmp:ModifyDate

xmp:ModifyDate property in the XMP Basic namespace

ISO date format

N/A

location

exact

exif:GPSLatitude and exif:GPSLongitude

exif:GPSLatitude and exif:GPSLongitude properties in the
Exif namespace for Exif-specific properties. These should not be stored
in files, only added to extracted XMP for application runtime
convenience.

GPS coordinate

N/A

related

photoshop:Country

photoshop:Country property in the Photoshop
namespace

string

N/A

related

photoshop:City

photoshop:City property in the Photoshop namespace

string

N/A

related

photoshop:State

photoshop:State property in the Photoshop namespace

string

N/A

Content description

description

exact

dc:description

dc:description property in the Dublin Core namespace. In
XMP, also tiff:ImageDescription property values from the Exif namespace
for TIFF properties are mapped to dc:description.

Technical Properties

frameSize

exact

xmpDM:videoFrameSize

xmpDM:videoFrameSize property in the XMP Dynamic Media
namespace. xmpDM:videoFrameSize is not authoritative. Use the file
format specific technical metadata.

int, int (width x height)

N/A

compression

related

tiff:Compression

tiff:Compression property in the Exif namespace for TIFF
properties. tiff:Compression is not authoritative and irrelevant to
dynamic media formats. xmpDM:audioCompressor is not authoritative. Use
the file format specific technical metadata.

closed choice of integers

N/A

related

xmpDM:audioCompressor

xmpDM:audioCompressor property in the XMP Dynamic Media
namespace

string

N/A

duration

exact

xmpDM:duration

xmpDM:duration property in the XMP Dynamic Media
namespace. xmpDM:duration is not authoritative. Use the file format
specific technical metadata.

time value in seconds

N/A

format

exact

dc:format

dc:format property in the Dublin Core namespace

MIME type

N/A

samplingRate

more specific

xmpDM:audioSampleRate

xmpDM:audioSampleRate property in the XMP Dynamic Media
namespace. xmpDM:audioSampleRate is not authoritative. Use the file
format specific technical metadata.

integer

N/A

frameRate

exact

xmpDM:frameRate

xmpDM:frameRate property in the XMP Dynamic Media
namespace. xmpDM:frameRate is not authoritative. Use the file format
specific technical metadata.

classificationSystem: @country (This attribute value identifies the
country or countries where a video is considered to contain restricted
content. The attribute value will either be the word all, which
indicates that the video contains content that is considered restricted
everywhere, or a comma-delimited list of ISO 3166 two-letter country
codes identifying particular countries where the video content is
restricted)

5.2.3
Multimedia container formats mapping tables

The following mappings are established from the Media Ontology's properties
to various multimedia container formats. This list of container formats is not
closed, nor does it pretend to be exhaustive. A future version of this
specification may include additional mappings if a need or use case is
established for these new mappings.

Follow the box hierarchy inside the movie box, into each
track/mdia/stbl/stsd, and then extract the 4-character codes from the
sample entry or entries.

four character code(s)

N/A

duration

exact

The duration field from the movie header (overall movie) or track
header (for a track), divided by the timescale from the movie header.
moov.mvhd.duration or moov.trak.tkhd.duration; divide by
moov.mvhd.timescale

Find the movie header box (mvhd) and get the timescale field, and
then retrieve the duration field from the movie or track header (mvhd,
tkhd) as appropriate, and divide.

float (after division), rational (as stored)

N/A

format

exact

video/3gpp (valid for all resources), audio/3gpp (if it is known the
movie has no visual presentation)

static; but it may help to scan for the codecs used and supply them via
the codecs parameter for bucket MIME types (RFC 4281) for 3GPP, MP4 and
Movie files.

MIME type

N/A

samplingRate

usually exact

In 3GP files, the samplerate field in the sample entry or entries for
the movie tracks. This is a 16.16 fixed-point number with the
fractional 16 bits restricted to be zero.
moov.trak.mdia.minf.stbl.stsd.(sampleentry.sampleRate)

Find the samplerate 32-bit field in the sample entry, and right-shift
16 bits.

Integer

N/A

frameRate

more general

The sample count from the sample size (stsz) box in the sample table,
divided by the duration (see above).
moov.trak.mdia.minf.stbl.stsz.sampleCount, divided by duration.

Either (a.i) sum the top-level box sizes or (a.ii) find the file size
from external means (e.g. file system) or (b) for each track, compute
the total sample size (from the sample size table). Then divide by
duration (computed above).

(a.i) sum over all top-level atoms (atom size), or (b) sum over all
samples (moov.trak.mdia.minf.stbl.stsz.sampleSize; the sample count is
also in the stsz box)

Note: in 3GPP and MP4 files, a single track may be addressed by track ID
using the ISO/IEC 21000-17:2006 "ffp()" syntax (for example
http://www.example.com/sample.3gp#ffp(track_ID=101)).

5.2.3.2 Flash

5.2.3.2.1 flv

FLV files can contain a SCRIPTDATA tag named onMetadata, documented in
section E.5 of the FLV and F4V specification [Flash]. Beginning in Flash version 10, FLV files
can also contain XMP metadata. Refer to the above XMP
metadata format mapping table for more details. Technical metadata should
be taken from the onMetadata tag according to the table below.

MAWG

Relation

Flash (FLV)

How to do the mapping

Datatype

XPath

Descriptive Properties (Core Set)

Identification

identifier

N/A

N/A

Content description

description

N/A

N/A

Technical Properties

frameSize

exact

The width and height fields from the onMetadata tag. The units are
always pixels.

DOUBLE, 64-bit IEEE float

N/A

compression

Not directly represented. Implicit in the audiocodecid and
videocodecid fields from the onMetadata tag.

N/A

duration

exact

The duration field from the onMetadata tag. The unit is always
seconds.

DOUBLE, 64-bit IEEE float

N/A

format

exact

video/x-flv

static

MIME type

N/A

samplingRate

exact

The audiosamplerate field from the onMetadata tag. The unit is always
samples per second.

DOUBLE, 64-bit IEEE float

N/A

frameRate

exact

The framerate field from the onMetadata tag. The unit is always
frames per second.

DOUBLE, 64-bit IEEE float

N/A

averageBitRate

exact

The sum of the audiodatarate and videodatarate fields from the
onMetadata tag. The units are always kilobits per second.

DOUBLE, 64-bit IEEE float

N/A

numTracks

exact

FLV files contain at most one audio track and at most one video track.
One-bit flags in the FLV header indicate whether audio and video are
present.

N/A
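
The one-bit flags mentioned above live in the fifth byte of the FLV
header (TypeFlagsAudio = 0x04, TypeFlagsVideo = 0x01 per the FLV
specification); a minimal sketch:

```python
def flv_num_tracks(header):
    # Expects at least the first 5 bytes of the file. The header layout
    # is: "FLV" signature (3 bytes), version (1 byte), type flags (1 byte).
    if header[:3] != b'FLV':
        raise ValueError('not an FLV file')
    flags = header[4]
    return int(bool(flags & 0x04)) + int(bool(flags & 0x01))
```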

5.2.3.2.2 f4v

F4V is a flavor of MPEG-4, used for Adobe's "Flash video" when H.264 is the
codec. Other than the format item, the technical properties are identical to
MPEG-4. The full table is reproduced here for convenience. F4V files will also
generally contain XMP metadata. Technical metadata from the native MPEG-4
locations should be preferred.

MAWG

Relation

F4V

How to do the mapping

Datatype

XPath

Descriptive Properties (Core Set)

Identification

identifier

N/A

N/A

Content description

description

N/A

MP4 has no formal system.

N/A

Technical Properties

frameSize

exact

The width and height fields from the track header box of that track.
moov.trak/tkhd.(width | height)

If requested for a movie, and there is only one video track, or if
requested for a specific video track, the width and height of that
track.

Follow the box hierarchy inside the movie box, into each
track/mdia/stbl/stsd, and then extract the 4-character codes from the
sample entry or entries.

four character code(s)

N/A

duration

exact

The duration field from the movie header (overall movie) or track
header (for a track), divided by the timescale from the movie header.
moov.mvhd.duration or moov.trak.tkhd.duration; divide by
moov.mvhd.timescale

Find the movie header box (mvhd) and get the timescale field, and
then retrieve the duration field from the movie or track header (mvhd,
tkhd) as appropriate, and divide.

float (after division), rational (as stored)

N/A

format

exact

video/mp4 (valid for all resources), audio/mp4 (if it is known the
movie has no visual presentation)

static

MIME type

N/A

samplingRate

usually exact

In MP4 files, the samplerate field in the sample entry or entries for
the movie tracks. This is a 16.16 fixed-point number with the
fractional 16 bits restricted to be zero.
moov.trak.mdia.minf.stbl.stsd.(sampleentry.sampleRate)

Find the samplerate 32-bit field in the sample entry, and right-shift
16 bits.

Integer

N/A

frameRate

more general

The sample count from the sample size (stsz) box in the sample table,
divided by the duration (see above).
moov.trak.mdia.minf.stbl.stsz.sampleCount, divided by duration.

Either (a.i) sum the top-level box sizes or (a.ii) find the file size
from external means (e.g. file system) or (b) for each track, compute
the total sample size (from the sample size table). Then divide by
duration (computed above).

(a.i) sum over all top-level atoms (atom size), or (b) sum over all
samples (moov.trak.mdia.minf.stbl.stsz.sampleSize; the sample count is
also in the stsz box)

Follow the box hierarchy inside the movie box, into each
track/mdia/stbl/stsd, and then extract the 4-character codes from the
sample entry or entries.

four character code(s)

N/A

duration

exact

The duration field from the movie header (overall movie) or track
header (for a track), divided by the timescale from the movie header.
moov.mvhd.duration or moov.trak.tkhd.duration; divide by
moov.mvhd.timescale

Find the movie header box (mvhd) and get the timescale field, and
then retrieve the duration field from the movie or track header (mvhd,
tkhd) as appropriate, and divide.

float (after division), rational (as stored)

N/A

format

exact

video/3gpp (valid for all resources), audio/3gpp (if it is known the
movie has no visual presentation)

static; but it may help to scan for the codecs used and supply them via
the codecs parameter for bucket MIME types (RFC 4281) for 3GPP, MP4 and
Movie files.

MIME type

N/A

samplingRate

usually exact

In MP4 files, the samplerate field in the sample entry or entries for
the movie tracks. This is a 16.16 fixed-point number with the
fractional 16 bits restricted to be zero.
moov.trak.mdia.minf.stbl.stsd.(sampleentry.sampleRate)

Find the samplerate 32-bit field in the sample entry, and right-shift
16 bits.

Integer

N/A

frameRate

more general

The sample count from the sample size (stsz) box in the sample table,
divided by the duration (see above).
moov.trak.mdia.minf.stbl.stsz.sampleCount, divided by duration.

Either (a.i) sum the top-level box sizes or (a.ii) find the file size
from external means (e.g. file system) or (b) for each track, compute
the total sample size (from the sample size table). Then divide by
duration (computed above).

(a.i) sum over all top-level atoms (atom size), or (b) sum over all
samples (moov.trak.mdia.minf.stbl.stsz.sampleSize; the sample count is
also in the stsz box)

RECORDING_LOCATION / COMPOSITION_LOCATION (The countries
corresponding to the string, same 2 octets as in Internet domains, or
possibly ISO-3166. This code is followed by a comma, then more detailed
information such as state/province, another comma, and then city.),
COMPOSER_NATIONALITY (The countries corresponding to the string, same 2
octets as in Internet domains, or possibly ISO-3166.)

LAW_RATING (Depending on the country it's the format of the rating of
a movie (P, R, X in the USA, an age in other countries or a URI
defining a logo)), ICRA (content rating for parental control,
previously RSACi), RATING (how much a person likes the song/movie. The
number is between 0 and 5 with decimal values possible (e.g. 2.7))

CONTENT_TYPE (the type of the item. e.g. Documentary, Feature Film,
Cartoon, Music Video, Music, Sound FX, ...), PERIOD (the period that
the piece is from or about)

String

N/A

Fragments

fragments

exact

Cues

Seek table provided through the following fields: Cues (top-level
element to speed seeking access), CuePoint (seek point), CueTime
(Absolute timecode according to the segment time base),
CueTrackPositions (positions for different tracks corresponding to the
timecode)

use the TimecodeScale field to identify the resolution of the Duration
field (timecode scale in nanoseconds; e.g. 1,000,000 means all
timecodes in the segment are expressed in milliseconds), which provides
the segment duration (typically a Matroska file is composed of one
segment)

Float

N/A
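
The computation described above is a single scaling step; a sketch
using the field names from the Matroska specification, with the
default TimecodeScale assumed:

```python
def segment_duration_seconds(duration, timecode_scale_ns=1_000_000):
    # Duration is expressed in units of TimecodeScale nanoseconds, so
    # with the default scale of 1,000,000 a Duration of 90000.0 means
    # 90 seconds.
    return duration * timecode_scale_ns / 1_000_000_000
```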

format

exact

CodecID field

fixed to "VP8" for video and "Vorbis" for audio

String constant (V_VP8, A_VORBIS)

N/A

samplingRate

exact

SamplingFrequency

Value of SamplingFrequency field (in Hz)

float

N/A

frameRate

exact

FrameRate

Value of FrameRate field (informational only, since frames are
timestamped)

float

N/A

averageBitRate

exact

calculate as bitrate = length_of_file / duration on system

float

N/A
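
A sketch of this calculation, assuming the file is available on the
local file system and the segment duration has already been obtained:

```python
import os

def average_bit_rate(path, duration_seconds):
    # Total file size in bits divided by the duration, per the
    # mapping above; an average over the whole file, not a per-stream
    # figure.
    return os.path.getsize(path) * 8 / duration_seconds
```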

numTracks

exact

max TrackNumber

maximum value on all TrackNumber field values in the Tracks field

unsigned integer

N/A

6 Usage Examples

This section is informative

6.1 Example 1: How to use the POWDER
protocol in combination with the Media Ontology's properties for publishing
descriptions of media resources

6.2 Subtitles and the Ontology for
Media Resources

Concerning external subtitles, using relation is the recommended approach.
The identifier attribute contains the URL of the subtitle file, and the
relation type qualifies it as a subtitle relation. The value should be a URI,
but could also be a string. It is recommended to use a controlled vocabulary
for the type of the relation.

Embedding of subtitles is not a use case that has been considered, however
it is possible. The mechanism used to specify timed metadata is to specify
fragments identified by Media Fragment URIs [MediaFragment] and then describe annotations
of these fragments.

Subtitles can be embedded in a media file, in which case they can be
described as a track media fragment using fragment and Media Fragment URIs
[MediaFragment].

Subtitles could be embedded by using title with a type qualifier for
subtitle. A list of time media fragments is defined and each fragment is
annotated using title.

Although the last option is a way of embedding subtitles, it is not
recommended. Instead, a dedicated format such as TTML or WebSRT should be used
for the subtitles and referenced.

6.3 Semantic annotation

Time-based annotations are possible, and the following two cases are
covered by the specification:

use description for a textual description of the media resource (or a
fragment).

use relation to link to a RDF file or named graph containing the
annotation for the media resource (or fragment).

At the time of writing this specification, there is no solution for embedding
a set of triples into one of the properties of the Ontology for Media
Resources 1.0. The summary of a discussion with the Semantic Web Coordination
Group is that named graphs could be a solution to this issue, but there is no
standard syntax for expressing them to which this specification could refer.
Such a syntax might find its way into RDF 2.0. Thus the embedding of triples
into media annotation elements is excluded until a standard syntax for named
graphs is available.

6.4 Captions and signing

The Core property definitions section
defines a general property fragment with a role attribute to
specify the relation between the resource and its fragment, such as captioning
or signing. In the RDF representation, this is achieved by
defining subproperties of the <tt>ma:hasFragment</tt> property.

Captions and signing of a media resource can be provided in different forms,
the most typical being:

an additional track of the media file,

embedded in the video track,

as a separate file.

To account for this diversity, the RDF ontology does not link <tt>ma:hasTrack</tt> with <tt>ma:hasCaptioning</tt> or <tt>ma:hasSigning</tt>. The last two can
link a media resource to any fragment, e.g. a spatial fragment of the
video track where the signing is located, or even an external file considered
as a fragment of the resource. If the fragment is also a track, nothing
prevents linking it with both properties <tt>ma:hasCaptioning</tt> and <tt>ma:hasTrack</tt>.

For example, the following RDF describes a video with embedded signing,
subtitles as an external file, and a track containing audio description
(captioning for accessibility):
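
Such a description can be illustrated with plain
subject/predicate/object tuples; the example URIs below are
hypothetical, and ma: abbreviates http://www.w3.org/ns/ma-ont#:

```python
video = "http://example.org/video"
triples = [
    # signing embedded as a spatial fragment of the video track
    (video, "ma:hasSigning", video + "#xywh=0,0,160,120"),
    # subtitles kept in a separate file, treated as a fragment of the resource
    (video, "ma:hasFragment", "http://example.org/subtitles.ttml"),
    # the audio-description track is both a captioning and a track
    (video, "ma:hasCaptioning", video + "#track=audiodesc"),
    (video, "ma:hasTrack", video + "#track=audiodesc"),
]
```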

6.5 Language for media resources

The core set of properties proposed in section 5 defines only a single
property for specifying the language of a media resource. However, a media
resource may have several languages. For example, a video file can have the
following languages applying to it:

The four language codes could be directly applied to the video file, using
the language core property (<tt>ma:hasLanguage</tt> in the RDF representation), but this would lose part of the
information.

If one wants to keep the complete information, the recommended option is to
assign each language to the appropriate fragment of the video, using
[MediaFragment] to identify them, and the core
property fragment (<tt>ma:hasFragment</tt> and its
subproperties in the RDF representation) to link them to
the video file itself. In the example above, we would have:

the audio track associated with British English,

a temporal fragment of the audio track associated with French,

the subtitle track associated with Spanish,

the spatial fragment of the video track associated with sign
language.
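
A sketch of this assignment as subject/predicate/object tuples, with
hypothetical Media Fragment URIs and illustrative language tags:

```python
video = "http://example.org/video"
fragments = {
    video + "#track=audio": "en-GB",                 # British English audio
    video + "#track=audio&t=120,180": "fr",          # French temporal fragment
    video + "#track=subtitles": "es",                # Spanish subtitle track
    video + "#track=video&xywh=0,0,160,120": "sgn",  # sign-language region
}
# Each fragment gets its own ma:hasLanguage statement.
triples = [(frag, "ma:hasLanguage", lang) for frag, lang in fragments.items()]
```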

7 Namespace and RDF-representation of
the Ontology for Media Resources 1.0

This section is normative

This section presents an implementation of the Ontology for Media Resources
as a Semantic Web ontology. First, a namespace for the ontology is defined
(Section 7.1). Second, an implementation guideline details how
the core vocabulary defined in this specification relates to the RDF vocabulary
(Section 7.2). Finally, Section 7.3 presents an RDF vocabulary that implements
the abstract ontology using RDF and OWL. The ontology is a valid OWL 2 DL
ontology and can be used directly to describe media resources on the Web in a
Semantic Web and Linked Data compatible way. It has been built using
standard ontology engineering methodologies by a small expert group inside the
MAWG working group.

7.1 Namespace of core property
definitions

The namespace of the Ontology for Media Resources 1.0 is defined by this
URI: http://www.w3.org/ns/ma-ont#.
Applications that are compliant with this specification MUST use this namespace
URI.

Note:

As specifications that use this namespace URI progress through the
standardization process, they MUST use the same namespace URI. This namespace
URI is expected to remain the same throughout the evolution of this ontology,
even if new properties are added to it, so long as it remains
backwards compatible. If, however, a new version were produced that was not
backwards compatible, the WG reserves the right to change the namespace URI.

7.2
Correspondence between the informal ontology and the RDF representation

Unless stated otherwise, atomic values are represented by literals while
complex values are represented by resources. It follows that, in the general
case, properties with complex values are represented by object properties,
while properties with simple values are represented by datatype properties.
Attributes in complex values are represented by properties of the resource
representing the complex value; depending on their semantics, they are
represented by datatype or object properties.

The RDF ontology also introduces a number of classes corresponding to the
domains and ranges of those properties.