1.1.2 Process to introduce new User Stories & Use Cases

Open an Issue in the tracker against the UC&R product. The WG will review these and decide whether these are valid.

1.2 Scope and Motivation

Linked Data was defined by Tim Berners-Lee with the following guidelines [1]:

Use URIs as names for things

Use HTTP URIs so that people can look up those names

When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL)

Include links to other URIs, so that they can discover more things

These four rules have proven very effective in guiding and inspiring people to publish Linked Data on the web. The amount of data, especially public data, available on the web has grown rapidly, and an impressive number of extremely creative and useful “mashups” have been created using this data as a result.

There has been much less focus on the potential of Linked Data as a model for managing data on the web - the majority of the Application Programming Interfaces (APIs) available on the Internet for creating and updating data follow a Remote Procedure Call (RPC) model rather than a Linked Data model.

If Linked Data were just another model for doing something that RPC models can already do, it would be of only marginal interest. Interest in Linked Data arises from the fact that applications with an interface defined using Linked Data can be much more easily and seamlessly integrated with each other than applications that offer an RPC interface. In many problem domains, the most important problems and the greatest value are found not in the implementation of new applications, but in the successful integration of multiple applications into larger systems.

Some of the features that make Linked Data exceptionally well suited for integration include:

A single interface – defined by a common set of HTTP methods – that is universally understood and is constant across all applications. This is in contrast with the RPC architecture where each application has a unique interface that has to be learned and coded to.

A universal addressing scheme – provided by HTTP URLs – for both identifying and accessing all “entities”. This is in contrast with the RPC architecture where there is no uniform way to either identify or access data.

A simple yet extensible data model – provided by RDF – for describing data about a resource in a way which doesn’t require prior knowledge of the vocabulary being used.

Experience implementing applications and integrating them using Linked Data has shown very promising results, but has also demonstrated that the original four rules defined by Tim Berners-Lee for Linked Data are not sufficient to guide and constrain a writable Linked Data API. As was the case with the original four rules, the need generally is not for the invention of fundamental new technologies, but rather for a series of additional rules and patterns that guide and constrain the use of existing technologies in the construction of a Basic Profile for Linked Data to achieve interoperability.

The following list illustrates a few of the issues that require additional rules and patterns:

What URLs do I post to in order to create new resources?

How do I get lists of existing resources, and how do I get basic information about them without having to access each one?

How should I detect and deal with race conditions on write?

What media-types/representations should I use?

What standard vocabularies should I use?

What primitive data types should I use?

A good goal for the Basic Profile for Linked Data would be to define the specification required to allow the definition of a writable Linked Data API equivalent to the simple application APIs that are often written on the web today using the Atom Publishing Protocol (APP). APP shares some characteristics with Linked Data, such as the use of HTTP and URLs. One difference is that Linked Data relies on a flexible data model with RDF, which allows for multiple representations.

1.3 Organization of this Document

This document is organized as follows:

User Stories capture statements about system requirements written from a user or application perspective. They are typically lightweight and informal and can run from one line to a paragraph or two (sometimes described as an 'epic') [2]. Analysis of each user story will reveal a number of (functional) use-cases and other non-functional requirements. See Device API Access Control Use Cases and Requirements for a good example of user stories and their analysis.

Use Cases are used to capture and model functional requirements. Use cases describe the system’s behavior under various conditions [3], cataloguing who does what with the system, for what purpose, but without concern for system design or implementation [4]. Each use case is identified by a reference number to aid cross-reference from other documentation; use-case indexing in this document is based on rdb2rdf use-cases. A variety of styles may be used to capture use-cases, from a simple narrative to a structured description with actors, pre/post conditions, and step-by-step behaviours as in POWDER: Use Cases and Requirements, and non-functional requirements raised by the use-case. Use cases act like the hub of a wheel, with spokes supporting requirements analysis, scenario-based evaluation, testing, and integration with non-functional, or quality requirements.

Scenarios are more focused still, representing a single instance of a use case in action. Scenarios may range from lightweight narratives as seen in Use cases and requirements for Media Fragments, to being formally modeled as interaction diagrams. Each use-case should include at least a primary scenario, and possibly other alternative scenarios.

1.4 User Stories

1.4.1 Maintaining Social Contact Information

Many of us have multiple email accounts that include information about the people and organizations we interact with – names, email addresses, telephone numbers, instant messenger identities and so on. When someone’s email address or telephone number changes (or they acquire a new one), our lives would be much simpler if we could update that information in one spot and all copies of it would automatically be updated. In other words, those copies would all be linked to some definition of “the contact.” There might also be good reasons (like off-line email addressing) to maintain a local copy of the contact, but ideally any copies would still be linked to some central “master.”

Agreeing on a format for “the contact” is not enough, however. Even if all our email providers agreed on the format of a contact, we would still need to use each provider’s custom interface to update or replace the provider’s copy, or we would have to agree on a way for each email provider to link to the “master”. If we look outside our own personal interests, it would be even more useful if the person or organization exposed their own contact information so we could link to it.

What would work in either case is a common understanding of the contact resource, a small set of formats for representing it, and guidance on how to access it: how to acquire a link to a contact; how to use those links to read, update, and delete a contact; how to easily create a new contact and add it to my contacts; and how a deleted contact is removed from my list of contacts. It would also be good to be able to add some application-specific data about my contacts that the original design didn’t consider. Ideally we’d like to eliminate multiple copies of contacts, but additional valuable information about my contacts may be stored on separate servers, so there needs to be a simple way to link this information back to the contacts. Regardless of whether a contact collection is my own, shared by an organization, or all contacts known to an email provider (or to a single email account at an email provider), it would be nice if they all worked pretty much the same way.

1.4.2 Keeping Track of Personal and Business Relationships

In our daily lives, we deal with many different organizations in many different relationships, and they each have data about us. However, it is unlikely that any one organization has all the information about us. Each of them typically gives us access to the information (at least some of it), many through websites where we are uniquely identified by some string – an account number, user ID, and so on. We have to use their applications to interact with the data about us, however, and we have to use their identifier(s) for us. If we want to build any semblance of a holistic picture of ourselves (more accurately, collect all the data about us that they externalize), we as humans must use their custom applications to find the data, copy it, and organize it to suit our needs.

Would it not be simpler if at least the Web-addressable portion of that data could be linked to consistently, so that instead of maintaining various identifiers in different formats and instead of having to manually supply those identifiers to each one’s corresponding custom application, we could essentially build a set of bookmarks to it all? When we want to examine or change their contents, would it not be simpler if there were a single consistent application interface that they all supported? Of course it would.

Our set of links would probably be a simple collection. The information held by any single organization might be a mix of simple data and collections of other data, for example, a bank account balance and a collection of historical transactions. Our bank might easily have a collection of accounts for each of its collection of customers.

1.4.3 System and Software Development Tool Integration

System and software development tools typically come from a diverse set of vendors and are built on various architectures and technologies. These tools are purpose-built to meet the needs of a specific domain scenario (modeling, design, requirements and so on). Often tool vendors view integrations with other tools as a necessary evil rather than as a way to provide additional value to their end-users. Even more of an afterthought is how these tools’ data -- such as people, projects, customer-reported problems and needs -- integrates and relates to corporate and external applications that manage data such as customers, business priorities and market trends. The problem can be isolated by standardizing on a small set of tools or a set of tools from a single vendor, but this rarely occurs, and when it does it usually does so only within small organizations. As these organizations grow both in size and complexity, they need to work with outsourced development and with diverse internal organizations that have their own sets of tools and processes. There is a need for better support of more complete business processes (system and software development processes) that span the roles, tasks, and data addressed by multiple tools. This demand has existed for many years, and the tools vendor industry has tried several different architectural approaches to address the problem. Here are a few:

Implement an API for each application, and then, in each application, implement “glue code” that exploits the APIs of other applications to link them together.

Design a single database to store the data of multiple applications, and implement each of the applications against this database. In the software development tools business, these databases are often called “repositories.”

Implement a central “hub” or “bus” that orchestrates the broader business process by exploiting the APIs described previously.

It is fair to say that although each of those approaches has its adherents and can point to some successes, none of them is wholly satisfactory. The use of Linked Data as an application integration technology has a strong appeal [OSLC].

1.4.4 Library Linked Data

The W3C Library Linked Data working group has a number of use cases cited in their Use Case Report [LLD-UC]. These referenced use cases focus on the need to extract and correlate library data from disparate sources. Variants of these use cases that can provide consistent formats, as well as ways to improve or update the data, would enable simplified methods both for efficiently sharing this data and for producing incremental updates without the need for repeated full extractions and imports of data.

The 'Digital Objects Cluster' contains a number of relevant use-cases:

Grouping: This should "Allow the end-users to define groups of resources on the web that for some reason belong together. The relationship that exists between the resources is often left unspecified. Some of the resources in a group may not be under control of the institution that defines the groups."

Collections discovery: "Enable innovative collection discovery such as identification of nearest location of a physical collection where a specific information resource is found or mobile device applications ... based on collection-level descriptions."

Community information services: Identify and classify collections of special interest to the community.

1.4.5 Municipality Operational Monitoring

Across cities, towns, counties, and other municipalities there is a growing number of services managed and run by municipal governments that produce and consume a vast amount of information. This information is used to help monitor services, predict problems, and handle logistics. In order to effectively and efficiently collect, produce, and analyze all this data, a fundamental set of loosely coupled standard data sources is needed. A simple, low-cost way to expose data from the diverse set of monitored services is needed, one that can easily integrate with the municipality's other systems that inspect and analyze the data. All these services have links and dependencies on other data and services, so having a simple and scalable linking model is key.

1.4.6 Healthcare

For physicians to analyze, diagnose, and propose treatment for patients requires a vast amount of complex, changing and growing knowledge. This knowledge needs to come from a number of sources, including physicians’ own subject knowledge, consultation with their network of other healthcare professionals, public health sources, food and drug regulators, and other repositories of medical research and recommendations.

To diagnose a patient’s condition requires current data on the patient’s medications and medical history. In addition, recent pharmaceutical advisories about these medications are linked into the patient’s data. If the patient experiences adverse effects from medications, physicians need to publish information about this to an appropriate regulatory source. Other medical professionals require access to both validated and emerging effects of the medication. Similarly, if there are geographical patterns around outbreaks that allow both the awareness of new symptoms and treatments, this information needs to quickly reach a very distributed and diverse set of medical information systems. Also, reporting back to these regulatory agencies regarding new occurrences of an outbreak, including additional details of symptoms and causes, is critical in producing the most effective treatment for future incidents.

1.4.7 Metadata enrichment in broadcasting

There are many different use cases in which broadcasters show interest in metadata enrichment:

enrich definitions of terms in classification schemes or enumeration lists

This comes in support of more effective information management and data/content mining (if you can't find your content, it's as if you don't have it, and you must either recreate it or acquire it again, which is not financially effective).

However, there is a need for solutions facilitating linkage to other data sources and taking care of issues such as discovery, automation, disambiguation, etc. Other important issues that broadcasters would face are the editorial quality of the linked data, its persistence, and usage rights.

1.4.8 Aggregation and Mashups of Infrastructure Data

For infrastructure management (such as storage systems, virtual machine environments, and similar IaaS and PaaS concepts), it is important to provide an environment in which information from different sources can be aggregated, filtered, and visualized effectively. Specifically, the following use cases need to be taken into account:

While some data sources are based on Linked Data, others are not, and aggregation and mashups must work across these different sources.

Consumers of the data sources and aggregated/filtered data streams are not necessarily implementing Linked Data themselves; they may be off-the-shelf components such as dashboard frameworks for composing visualizations.

Simple versions of this scenario are pull-based, where the data is requested from data sources. In more advanced settings, without a major change in architecture it should be possible to move to a push-based interaction model, where data sources push notifications to subscribers, and data sources provide different services that consumers can subscribe to (such as "informational messages" or "critical alerts only").

In this scenario, the important factors are to have abstractions that allow easy aggregation and filtering, are independent from the internal data model of the sources that are being combined, and can be used for pull-based interactions as well as for push-based interactions.

1.4.9 Data Sharing

In a downscaled context, where the use of a central data repository is replaced by several smaller servers, it is necessary to be able to ship information among the servers. A device in the network may publish information on a server with another device as the target receiver. This message will then have to be forwarded from server to server until that target is reached. A set of common standards for updating the content of containers and the description of the resources will be necessary to implement such a feature (not taking the routing aspect into consideration here).

1.4.10 Sharing Binary Resources and Metadata

When publishing datasets about stars one may want to publish links to the pictures in which those stars appear, and this may well require publishing the pictures themselves. Vice versa: when publishing a picture of space we need to know which telescope took the picture, which part of the sky it was pointing at, what filters were used, which identified stars are visible, who can read it, who can write to it, ...

If Linked Data contains information about resources that are most naturally expressed in non-RDF formats (be they binary, such as pictures or videos, or human-readable documents in XML formats), those non-RDF formats should be just as easy to publish to the Linked Data server as the RDF relations that link those resources up. A Linked Data server should therefore allow publishing of non-RDF resources too, and make it easy to publish and edit metadata about those resources.

The resource comes in two parts: the image and information about the image, which may be embedded in the image file but is better kept external to it, as that is more general. The information about the image is vital. The resource is a compound item of image data and other data; from the platform's point of view, the fact that the other data is application metadata about the image does not distinguish it.

1.4.11 Data catalogs

The Asset Description Metadata Schema (ADMS) provides the data model to describe the contents of semantic asset repositories, but this leaves many open challenges when building a federation of these repositories to serve the need for asset reuse. These include accessing and querying individual repositories and efficiently retrieving updated content without having to retrieve the whole content. Hence, we chose to build the integration solution on the Data Warehousing integration approach. This allows us to cope with the heterogeneity of source technologies and to benefit from the optimized performance it offers, given that individual repositories do not usually change frequently. With Data Warehousing, the federation needs to:

understand the data (i.e. understand their semantic descriptions) and other systems.

seamlessly exchange the semantic assets metadata from different repositories

keep itself up-to-date.

Repository owners can maintain dereferenceable URIs for their repository description and contained assets in a Linked Data compatible manner. ADMS provides the necessary data model to enable meaningful exchange of data. However, this leaves the challenge of efficient access to the data not fully addressed.

1.4.12 Constrained Devices and Networks

Information coming from resource constrained devices in the Web of Things (WoT) has been identified as a major driver in many domains, from smart cities to environmental monitoring to real-time tracking. The amount of information produced by these devices is growing exponentially and needs to be accessed and integrated in a systematic, standardized and cost-efficient way. By using the same standards as on the Web, integration with applications will be simplified and higher-level interactions among resource constrained devices, abstracting away heterogeneities, will become possible. Upcoming IoT/WoT standards such as 6LoWPAN - IPv6 for resource constrained devices - and the Constrained Application Protocol (CoAP), which provides a downscaled version of HTTP on top of UDP for use on constrained devices, are already at a mature stage. The next step is to support RESTful interfaces also on resource constrained devices, adhering to the Linked Data principles. Due to the limited resources available both on the device and in the network (such as bandwidth, energy, memory), a solution based on SPARQL Update is at the current point in time considered not to be useful and/or feasible. An approach based on the HTTP-CoAP Mapping would enable constrained devices to directly participate in a Linked Data-based environment.

1.4.13 Services Supporting the Process of Science

Many fields of science now include branches with in silico data-intensive methods, e.g. bioinformatics, astronomy. To support these new methods we look to move beyond the established platforms provided by scientific workflow systems to capture, assist, and preserve the complete lifecycle from record of the experiment, through local trusted sharing, analysis, dissemination (including publishing of experimental data "beyond the PDF"), and re-use.

Aggregations, specifically Research Objects (ROs), that are exchanged between services and clients, bringing together workflows, data sets, annotations, and provenance. We use an RDF model for this. While some aggregated contents are encoded using RDF and an increasing number are linked data sources, others are not; while some are stored locally "within" the RO, others are remote (in both cases this is often due to the size of the resources or to access policies).

Services that are distributed and linked. Some may be centralized, e.g. for publication; others may be local, e.g. per lab. We need lightweight services that can be simply and easily integrated into, and scale across, the wide variety of software and data used in science: we have adopted a RESTful approach where possible.

Foundation services that collect and expose ROs for storage, modification, exploration, and reuse.

Services that provide added value to ROs, such as seamless import/export from scientific workflow systems, automated stability evaluation, or recommendation (and therefore interact with the foundation services to retrieve/store/modify ROs).

1.4.14 Project Membership Information : Information Evolution

Information about people and projects changes as roles change, as organisations change
and as contact details change. Finding the current state of a project is important
in enabling people to contact the right person in the right role. It can also be
useful to look back and see who was performing what role in the past.

A use of a Linked Data Platform could be to give responsibility for managing
such information with the project team itself, not requiring updates to be
requested of a centralised website administrator.

This could be achieved with:

Resource descriptions for each person and project

A container resource to describe roles/membership in the project.

To retain the history of the project, old versions of a resource,
including container resources, should be retained, so there is a need to address both specific versions
and also have a notion of "current".

Access to information has two aspects:

Access to the "current" state, regardless of the version of the resource description

Access to historical state, via access to a specific version of the resource description

1.4.15 Cloud Infrastructure Management

Cloud operators offer API support to provide customers with remote access for infrastructure management. Infrastructure consists of Systems, Computers, Networks, Disks, etc., and the overall structure can be seen as mostly hierarchical (a Cloud contains Systems, Systems contain Machines, etc.). This is complemented with cross-links (e.g. Machines connected to a Network). The IaaS scenario also imposes requirements for lifecycle management, non-instant changes and history capture. Infrastructure management can be seen as the manipulation of the underlying graph.

1.5 Use Cases

The following use-cases are each derived from one or more of the user-stories above. These use-cases are explored in detail through the development of scenarios, each motivated by some key aspect exemplified by a single user-story. The examples they contain are included purely for illustrative purposes, and should not be interpreted normatively.

1.5.1 UC1: Manage containers

A number of user-stories introduce the idea of a container as a mechanism for creating and managing resources within the context of an application. Resources grouped together within the same container would typically belong to the same application. A container is identified by a URI and so is a resource in its own right.
The properties of a container may also represent the affordances of that container, enabling clients to determine what other operations they can do on that container. These operations may include descriptions of application specific services that can be invoked by exchanging RDF documents.
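To make this concrete, a container description might look like the following Turtle sketch. The container class and the use of rdfs:member as the membership predicate are illustrative assumptions, not normative vocabulary, and all resource URIs are invented:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical container of contact resources; the container class
# URI below is a placeholder for whatever vocabulary the Basic
# Profile eventually defines.
<http://example.com/contacts/>
   a <http://example.com/vocab#Container> ;
   dcterms:title "My Contacts" ;
   rdfs:member <http://example.com/contacts/alice> ,
               <http://example.com/contacts/bob> .
```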

1.5.1.1 Primary scenario: create container

Create a new container resource within the LDP server.
In Services supporting the process of science, Research Objects are semantically rich aggregations of resources that bring together data, methods and people in scientific investigations. A basic workflow research object will be created to aggregate scientific workflows and the artefacts that result from this workflow. The research object begins life as an empty container into which workflows, datasets, results and other data will be added throughout the lifecycle of the project.
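Immediately after creation, the representation of such a research object container might be as minimal as the following Turtle sketch (the container class and all URIs are hypothetical):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .

# A freshly created, empty research object container; membership
# triples accrue as workflows, datasets and results are added.
<http://example.com/researchobjects/ro1/>
   a <http://example.com/vocab#Container> ;
   dcterms:title "Workflow Research Object" .
```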

1.5.1.2 Alternative scenario: create a nested container

The motivation for nested containers comes from the System and Software Development Tool Integration user-story. The OSLC Change Management vocabulary allows bug reports to have attachments referenced by the membership predicate oslc_cm:attachment. The 'top-level-container' contains issues, and each issue resource has its own container of attachment resources.
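The nesting could be represented along the lines of the following Turtle sketch. The oslc_cm:attachment predicate is taken from the user-story; the namespace URI and all resource URIs are assumptions for illustration:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix oslc_cm: <http://open-services.net/ns/cm#> .

# Top-level container of issues.
<http://example.com/issues/>
   rdfs:member <http://example.com/issues/1> .

# Each issue acts as a nested container of its attachments,
# with oslc_cm:attachment as the membership predicate.
<http://example.com/issues/1>
   oslc_cm:attachment <http://example.com/issues/1/attachments/a1> ,
                      <http://example.com/issues/1/attachments/a2> .
```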

1.5.2 UC2: Manage resources

This use-case addresses the managed lifecycle of a resource and is concerned with resource ownership. The responsibility for managing resources belongs with their container.
For example, a container may accept a request from a client to make a new resource.
This use-case focuses on creation and deletion of resources in the context of a container, and the potential for transfer of ownership by moving resources between containers.
The ownership of a resource should always be clear; no resource managed in this way should ever be owned by more than one container.

Once a new resource has been created it should be identified by a URI. Clients may defer responsibility for establishing dereferenceable URIs to the container of their data.
The container is a natural choice for the endpoint for this interface as it will already have some application-specific knowledge about the contained resources.
While the LDP has ultimate control over resource naming, some applications may require more control over naming, perhaps to provide a more human-readable URI. An LDP server could support something like the Atom Publishing Protocol slug header to convey a user defined naming 'hint'.

1.5.2.1 Primary scenario: create resource

Resources begin life by being created within a container. From the user-story Maintaining Social Contact Information, it should be possible to "easily create a new contact and add it to my contacts." This suggests that resource creation is closely linked to the application context. The new resource is created in a container representing "my contacts." The lifecycle of the resource is linked to the lifecycle of its container. So, for example, if "my contacts" is deleted then a user would also reasonably expect that all contacts within it would also be deleted.

Contact details are captured as an RDF description whose properties include "names, email addresses, telephone numbers, instant messenger identities and so on." The description may include non-standard RDF; "data about my contacts that the original design didn’t consider."
The following RDF could be used to describe contact information using the FOAF vocabulary. A contact is represented here by a foaf:PersonalProfileDocument defining a resource that can be created and updated as a single-unit, even though it may describe ancillary resources, such as a foaf:Person, below.
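A sketch of such a contact in Turtle might look as follows (the names, addresses and URIs are invented for illustration):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# The profile document is the unit of creation and update...
<http://example.com/contacts/alice>
   a foaf:PersonalProfileDocument ;
   foaf:primaryTopic <http://example.com/contacts/alice#me> .

# ...even though it also describes an ancillary resource, the person.
<http://example.com/contacts/alice#me>
   a foaf:Person ;
   foaf:name "Alice Example" ;
   foaf:mbox <mailto:alice@example.com> ;
   foaf:phone <tel:+1-555-0100> .
```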

1.5.2.2 Alternative scenario: delete resource

Delete a resource and all its properties. If the resource resides within a container it will be removed from that container; however, other links to the deleted resource may be left as dangling references.
In the case where the resource is a container, the server may also delete any or all contained resources.
In normal practice, a deleted resource cannot be reinstated. There are however, edge-cases where limited undelete may be desirable.
Best practice states that "Cool URIs don't change", which implies that deleted URIs shouldn't be recycled.

1.5.2.3 Alternative scenario: moving contained resources

Many resources may have value beyond the life of their membership in a container. This implies methods to revise container membership.
Cloning container members for use in other containers results in duplication of information and maintenance problems; web practice is to encourage the creation of one resource, which may be referenced in as many places as necessary. A change of ownership may - or may not - imply a change of URI, depending upon the specific LDP naming policy. While assigning a new URI to a resource is discouraged [5], it is possible to indicate that a resource has moved with an appropriate HTTP response.

1.5.3 UC3: Retrieve resource description

Access the current description of a resource, containing properties of that resource and links to related resources. The representation may include descriptions of related resources that cannot be accessed directly.

Depending upon the application, an LDP may enrich the retrieved RDF with additional triples. Examples include adding incoming links, sameAs closure and type closure.

The HTTP response should also include versioning information (i.e. last update or entity tag) so that subsequent updates can ensure they are being applied to the correct version.

1.5.3.1 Primary scenario

The user-story Project Membership Information discusses the representation of information about people and projects. It calls for "Resource descriptions for each person and project" allowing project teams to review information held about these resources. The example below illustrates the kinds of information that might be held about organizational structures based on the Epimorphics organizational ontology.

Note that the example below defines two resources (shown as separate sections below) that will be hosted on an LDP based at http://example.com/. The representations of these resources may include descriptions of related resources, such as http://www.w3.org/, that fall under a different authority and therefore can't be served from the LDP at this location.

In many cases, the things that are of interest are not always the things that are resolvable. The example below demonstrates how a FOAF profile may be used to distinguish between the person and the profile; the former being the topic of the latter. This raises the question of what a client should do with such non-document resources. In this case, the HTTP protocol requires that the fragment part be stripped off before the URI is requested from the server. The result is a resolvable URI for the profile.
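The fragment stripping can be seen with Python's standard library (the profile URI below is invented for illustration):

```python
from urllib.parse import urldefrag

# The client strips the fragment before making the HTTP request; the server
# only ever sees the document URI, never the "#me" part.
doc_uri, fragment = urldefrag("http://example.com/people/alice#me")
# doc_uri  identifies the (resolvable) profile document
# fragment identifies the person, and stays on the client side
```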

1.5.4 UC4: Update existing resource

Change the RDF description of an LDP resource, potentially removing or overwriting existing data. This allows applications to enrich the representation of a resource by adding additional links to other resources.

1.5.4.1 Primary scenario: enrichment

This relates to the user-story Metadata enrichment in broadcasting and is based on the BBC Sports Ontology. The resource-centric view of Linked Data provides a natural granularity for substituting, or overwriting, a resource and its data. The simplest kind of update would simply replace what is currently known about a resource with a new representation. There are two distinct resources in the example below: a sporting event and an associated award. The granularity of the LDP would allow a user to replace the information about the award without disturbing the information about the event.

A catalog may contain multiple datasets, so when linking to new datasets it would be simpler and preferable to selectively add just the new dataset links.
A Talis changeset [6][7] could be used to add a new dc:title to the dataset. The following update would be directed to the catalog to add an additional dataset.
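A hedged sketch of the triples such a selective update would add (the catalog and dataset URIs are invented, and the wire format of the update itself, e.g. a changeset, is not shown; dcat:dataset is assumed as the membership property):

```python
# Build only the new triples to add, rather than replacing the whole
# catalog description. URIs here are illustrative, not from the scenario.

def added_dataset_link(catalog_uri, dataset_uri):
    """Turtle for the single membership triple linking a catalog to a dataset."""
    return (
        "@prefix dcat: <http://www.w3.org/ns/dcat#> .\n"
        "<{0}> dcat:dataset <{1}> .\n".format(catalog_uri, dataset_uri)
    )

patch = added_dataset_link("http://example.com/catalog",
                           "http://example.com/dataset/2")
```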

1.5.5 UC5: Determine if a resource has changed

It should be possible to retrieve versioning information about a resource (e.g. last modified or entity tag) without having to download a representation of the resource.
This information can then be compared with previous information held about that resource to determine if it has changed.
This versioning information can also be used in subsequent conditional requests to ensure they are only applied if the version is unchanged.
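The comparison itself is simple; the sketch below assumes the entity tags were obtained from lightweight HEAD responses rather than full downloads:

```python
def has_changed(previous_etag, current_etag):
    """True when the stored and freshly retrieved entity tags differ."""
    if previous_etag is None or current_etag is None:
        return True        # without a validator, assume a change
    return previous_etag != current_etag
```

Alternatively, the stored tag can be sent in an If-None-Match header so the server itself answers 304 Not Modified when nothing changed.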

1.5.5.1 Primary scenario

Based on the user-story Constrained Devices and Networks, an LDP could be configured to act as a proxy for a CoAP-based Web of Things. As an observer of CoAP resources, the LDP registers its interest so that it will be notified whenever the sensor reading changes. Clients of the LDP can interrogate the LDP to determine if the state has changed.

In this example, the information about a sensor and corresponding sensor readings can be represented as RDF resources. The first resource below represents a sensor described using the Semantic Sensor Network ontology.

The value of the sensor changes in real-time as measurements are taken. The LDP client can interrogate the resource below to determine if it has changed, without necessarily having to download the RDF representation. As different sensor properties are represented disjointly (as separate RDF representations), they may change independently.

1.5.6 UC6: Aggregate resources

There is a requirement to be able to manage collections of resources. The concept of a collection overlaps with, but is distinct from, that of a container. These collections are (weak) aggregations, unrelated to the lifecycle management of resources and distinct from the ownership relation between a resource and its container. There is a need to be able to create collections by adding and deleting individual membership properties. Resources may belong to multiple collections, or to none.

1.5.6.1 Primary scenario: add a resource to a collection

There is an existing collection at <http://example.com/concept-scheme/subject-heading> that defines a collection of subject headings. This collection is defined as a skos:ConceptScheme, and the client wishes to insert a new concept into the scheme, which will be related to the collection via a skos:inScheme link. The new subject-heading, "outer space exploration", is not necessarily owned by a container. The following RDF would be added to the (item-level) description of the collection.
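A hedged sketch of what the added RDF might look like (the concept label and scheme URI come from the scenario; the concept URI itself is invented):

```python
# Turtle for a new concept linked into the existing scheme via skos:inScheme.
new_concept_turtle = """\
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://example.com/concept/outer-space-exploration>
    a skos:Concept ;
    skos:prefLabel "outer space exploration"@en ;
    skos:inScheme <http://example.com/concept-scheme/subject-heading> .
"""
```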

1.5.6.2 Alternative scenario: add a resource to multiple collections

Logically, a resource should not be owned by more than one container; however, it may be a member of multiple collections, which define a weaker form of aggregation. As this is simply a manipulation of the RDF description of a collection, it should be possible to add the same resource to multiple collections.

As a machine-readable collection of medical terms, the SNOMED ontology is of key importance in healthcare. SNOMED CT allows concepts to have more than one parent, so the concept hierarchy does not form a simple tree.
In the example below, the same concept may fall under two different parent concepts.
The example uses skos:narrowerTransitive to elide intervening concepts.
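A hedged sketch of the shape described (the concept identifiers below are invented and are not real SNOMED CT codes):

```python
# One child concept placed under two parents; skos:narrowerTransitive elides
# any intervening concepts in the hierarchy.
polyhierarchy_turtle = """\
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://example.com/concept/lung-disorder>
    skos:narrowerTransitive <http://example.com/concept/pneumonia> .
<http://example.com/concept/infectious-disease>
    skos:narrowerTransitive <http://example.com/concept/pneumonia> .
"""
```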

1.5.7 UC7: Filter resource description

This use-case extends the normal behaviour of retrieving an RDF description of a resource, by dynamically excluding specific (membership) properties.
For containers, it is often desirable to be able to read a collection-level or item-level description that excludes the container membership.

1.5.7.1 Primary scenario: retrieve collection-level description

This scenario, based on Library Linked Data, uses the Dublin Core Metadata Initiative Collection-Level description.
A collection can refer to any aggregation of physical or digital items.
This scenario covers the case whereby a client can request a collection-level description as typified by the example below, without necessarily having to download a full listing of the items within the collection.
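How the filter is requested is left open here. One hypothetical convention (the parameter name below is invented and not part of any specification) is a query parameter that asks the server to suppress membership triples:

```python
def collection_level_uri(collection_uri):
    """Append a hypothetical parameter asking the server to omit member triples."""
    sep = "&" if "?" in collection_uri else "?"
    return collection_uri + sep + "non-member-properties"
```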

1.5.7.2 Alternative scenario: retrieve item-level description

This use-case scenario, also based on Library Linked Data, focuses on obtaining an item-level description of the resources aggregated by a collection.
The simplest scenario is where the members of a collection are returned within a single representation, so that a client can explore the data by following these links. Different applications may use different membership predicates to capture this aggregation. The example below uses rdfs:member, but many different membership predicates are in common use, including RDF Lists.
Item-level descriptions can be captured using the Functional Requirements for Bibliographic Records (FRBR) ontology.

1.5.8 UC8: Adding non-RDF Resources

1.5.8.1 Primary scenario: single attachment

1.5.8.2 Alternative scenario: multiple attachments

A user is trying to create a work order along with an attached image showing a faulty machine part. To the user and to the work order system, these two artifacts are managed as a set. A single request may create the work order, the attachment, and the relationship between them, atomically.
When the user retrieves the work order later, they expect a single request by default to retrieve the work order plus all attachments.
When the user updates the work order, e.g. to mark it completed, they only want to update the work order proper, not its attachments.
Users may add/remove/replace attachments to the work order during its lifetime.
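One way to make the creation atomic is a single multipart request carrying both artifacts. The sketch below (media types, boundary, and payloads are assumptions for illustration) only assembles such a body; the server-side behaviour of creating the work order, the attachment, and the link between them is out of scope here:

```python
def multipart_related_body(work_order_turtle, image_bytes,
                           boundary="work-order-boundary"):
    """Assemble a multipart/related body: RDF root part plus one image part."""
    body = (
        "--%s\r\nContent-Type: text/turtle\r\n\r\n%s\r\n"
        "--%s\r\nContent-Type: image/png\r\n\r\n"
        % (boundary, work_order_turtle, boundary)
    ).encode("utf-8")
    body += image_bytes + ("\r\n--%s--\r\n" % boundary).encode("utf-8")
    content_type = 'multipart/related; type="text/turtle"; boundary="%s"' % boundary
    return content_type, body
```

Because both parts travel in one request, the server can refuse the whole set if either part is invalid, matching the atomicity the scenario calls for.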