Members of the OASIS WS-BPEL Extension for People (BPEL4People) Technical
Committee have submitted two Committee Draft specifications for
public review through March 24, 2010. This OASIS TC was chartered in
2008 to define "(1) extensions to the OASIS WS-BPEL 2.0 Standard to
enable human interactions, and (2) a model of human interactions that
are service-enabled." This technical work is being carried out through
continued refinement of the BPEL4People and WS-HumanTask specifications.
The TC's focus is on: defining the specification of a WS-BPEL extension
enabling the definition of human interactions ('human tasks') as part
of a WS-BPEL process, defining the specification of a model enabling
the definition of human tasks that are exposed as Web services, and
defining a programming interface enabling human task client applications
to work with human tasks.

The WS-BPEL Extension for People (BPEL4People) Specification Version
1.1 introduces a BPEL extension to address human interactions in BPEL
as a first-class citizen. It defines a new type of basic activity which
uses human tasks as an implementation, and allows specifying tasks local
to a process or using tasks defined outside of the process definition.
This extension is based on the WS-HumanTask specification. WS-BPEL 2.0
(Web Services Business Process Execution Language, version 2.0) itself
introduces a model for business processes based on Web services. A BPEL
process orchestrates interactions among different Web services. The
language encompasses features needed to describe complex control flows,
including error handling and compensation behavior. In practice, however,
many business process scenarios require human interactions. A process
definition should incorporate people as another type of participant,
because humans may also take part in business processes and can influence
the process execution.

The goal of this BPEL4People extension specification is to enable
portability and interoperability, where 'portability' is the ability
to take design-time artifacts created in one vendor's environment and
use them in another vendor's environment and 'interoperability' is the
capability for multiple components (process infrastructure, task
infrastructures and task list clients) to interact using well-defined
messages and protocols. This enables components from different vendors
to be combined while allowing seamless execution.

The Web Services - Human Task (WS-HumanTask) Specification Version 1.1
addresses human tasks: "Tasks enable the integration of human beings
in service-oriented applications. This document provides a notation,
state diagram and API for human tasks, as well as a coordination
protocol that allows interaction with human tasks in a more service-
oriented fashion and at the same time controls tasks' autonomy... Human
tasks are services implemented by people. They allow the integration
of humans in service-oriented applications. A human task has two
interfaces. One interface exposes the service offered by the task,
like a translation service or an approval service. The second interface
allows people to deal with tasks, for example to query for human tasks
waiting for them, and to work on these tasks. A human task has people
assigned to it. These assignments define who should be allowed to play
a certain role on that task... Human tasks can be defined to react to
timeouts, triggering an appropriate escalation action. This also holds
true for notifications. A notification is a special type of human task
that allows the sending of information about noteworthy business events
to people..."
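
The excerpt above describes a task with two interfaces: a service interface
invoked by a calling process, and a people-facing interface for querying and
working on tasks, with people assignments controlling who may act. The real
specification defines this in XML/WSDL; as a language-neutral illustration
only, the following Python sketch models those ideas with hypothetical names
(`HumanTask`, `task_list`, the state strings) that are not from the
specification:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the WS-HumanTask concepts described above; all
# class, method, and state names here are hypothetical, not the spec's.

@dataclass
class HumanTask:
    name: str                                            # e.g. an approval task
    potential_owners: set = field(default_factory=set)   # people assignments
    state: str = "CREATED"                               # simplified life cycle
    outcome: object = None

    # First interface: the service the task exposes to a calling process.
    def invoke(self, payload):
        self.input = payload
        self.state = "READY"

    # Second interface: how people claim and work on the task.
    def claim(self, person):
        if person in self.potential_owners and self.state == "READY":
            self.owner, self.state = person, "IN_PROGRESS"

    def complete(self, person, result):
        if getattr(self, "owner", None) == person and self.state == "IN_PROGRESS":
            self.outcome, self.state = result, "COMPLETED"

def task_list(tasks, person):
    """People-facing query: tasks waiting for a given person."""
    return [t for t in tasks if person in t.potential_owners and t.state == "READY"]

approval = HumanTask("approve-claim", potential_owners={"alice", "bob"})
approval.invoke({"claim_id": 42})
print([t.name for t in task_list([approval], "alice")])  # ['approve-claim']
approval.claim("alice")
approval.complete("alice", "approved")
print(approval.state)  # COMPLETED
```

The two method groups mirror the two interfaces the excerpt names: `invoke`
plays the role of the exposed service, while `claim`/`complete`/`task_list`
stand in for the people-facing API.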

Members of the IETF Extensible Messaging and Presence Protocol (XMPP)
Working Group have published an Internet Draft specifying Requirements
for End-to-End Encryption in the Extensible Messaging and Presence
Protocol (XMPP). The Extensible Messaging and Presence Protocol is
an open technology for real-time communication, which powers a wide
range of applications including instant messaging, presence, multi-party
chat, voice and video calls, collaboration, lightweight middleware,
content syndication, and generalized routing of XML data.

XMPP technologies are typically deployed using a client-server
architecture. As a result, XMPP endpoints (often but not always
controlled by human users) need to communicate through one or more
servers. For example, the user 'juliet@capulet.lit' connects to the
'capulet.lit' server and the user 'romeo@montague.lit' connects to the
'montague.lit' server, but in order for Juliet to send a message to
Romeo the message will be routed over her client-to-server connection
with capulet.lit, over a server-to-server connection between
'capulet.lit' and 'montague.lit', and over Romeo's client-to-server
connection with montague.lit. Although the XMPP-CORE specification
requires support for Transport Layer Security to make it possible to
encrypt all of these connections, when XMPP is deployed any of these
connections might be unencrypted. Furthermore, even if the
server-to-server connection is encrypted and both of the
client-to-server connections are encrypted, the message would still
be in the clear while processed by both the 'capulet.lit' and
'montague.lit' servers.
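
The routing path described above can be sketched as a small helper that,
given two bare JIDs, lists the connections a stanza crosses. This is an
illustration of the paragraph's example only (it ignores same-server
delivery, full JIDs, and multi-hop federation), and the `route` function is
hypothetical, not part of any XMPP API:

```python
# Sketch of the XMPP routing path described above: a message between users
# on different servers crosses one client-to-server (c2s) hop, one
# server-to-server (s2s) hop, and a final c2s hop. Each hop may be
# TLS-protected, yet both servers still handle the stanza in plaintext
# unless end-to-end encryption is applied.

def route(sender_jid, recipient_jid):
    """Return the connections a stanza traverses (simplified model)."""
    s_domain = sender_jid.split("@", 1)[1]
    r_domain = recipient_jid.split("@", 1)[1]
    hops = [f"c2s: {sender_jid} -> {s_domain}"]
    if s_domain != r_domain:
        hops.append(f"s2s: {s_domain} -> {r_domain}")
    hops.append(f"c2s: {r_domain} -> {recipient_jid}")
    return hops

for hop in route("juliet@capulet.lit", "romeo@montague.lit"):
    print(hop)
# c2s: juliet@capulet.lit -> capulet.lit
# s2s: capulet.lit -> montague.lit
# c2s: montague.lit -> romeo@montague.lit
```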

Thus, end-to-end ('e2e') encryption of traffic sent over XMPP is a
desirable goal. Since 1999, the Jabber/XMPP developer community has
experimented with several such technologies, including OpenPGP, S/MIME,
and encrypted sessions. More recently, the community has explored the
possibility of using Transport Layer Security (TLS) as the base
technology for e2e encryption. In order to provide a foundation for
deciding on a sustainable approach to e2e encryption, this document
specifies a set of requirements that the ideal technology would meet.

"This specification primarily addresses communications security
('commsec') between two parties, especially confidentiality, data
integrity, and peer entity authentication. Communications security can
be subject to a variety of attacks, which RFC 3552 divides into passive
and active categories. In a passive attack, information is leaked
(e.g., a passive attacker could read all of the messages that Juliet
sends to Romeo). In an active attack, the attacker can add, modify,
or delete messages between the parties, thus disrupting communications...
Ideally, any technology for end-to-end encryption in XMPP could be
extended to cover any of: One-to-one communication sessions between
two 'online' entities, One-to-one messages that are not transferred
in real time, One-to-many information broadcast, Many-to-many
communication sessions among more than two entities. However, both
one-to-many broadcast and many-to-many sessions are deemed out-of-scope
for this document, and this document puts more weight on one-to-one
communication sessions..."
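
The data-integrity goal mentioned above (detecting an active attacker who
modifies messages) can be illustrated generically with a message
authentication code. This is not a mechanism from the XMPP draft, just a
sketch of the property using Python's standard `hmac` module; the shared key
is assumed to have been established out of band:

```python
import hmac
import hashlib

# Generic illustration of data integrity: with a shared key, an active
# attacker who alters a message cannot produce a valid authentication tag,
# so tampering is detected. (A passive attacker is countered by
# confidentiality, i.e., encryption, which this sketch omits.)

key = b"shared secret between Juliet and Romeo"  # assumed pre-established

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(tag(message), mac)

msg = b"Meet me at the balcony"
mac = tag(msg)
print(verify(msg, mac))                     # True: message unmodified
print(verify(b"Meet me at the tomb", mac))  # False: tampering detected
```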

Members of the W3C User Agent Accessibility Guidelines Working Group
have published a First Public Working Draft for Implementing UAAG 2.0:
A Guide to Understanding and Implementing User Agent Accessibility
Guidelines 2.0 and an updated version of the User Agent
Accessibility Guidelines (UAAG) 2.0 specification. Comments on the
two documents should be sent to the W3C public list by 16-April-2010.

The "User Agent Accessibility Guidelines (UAAG) 2.0" specification is
part of a series of accessibility guidelines published by the W3C Web
Accessibility Initiative (WAI). It provides guidelines for designing
user agents that lower barriers to Web accessibility for people with
disabilities. User agents include browsers and other types of software
that retrieve and render Web content. A user agent that conforms to
these guidelines will promote accessibility through its own user
interface and through other internal facilities, including its ability
to communicate with other technologies (especially assistive technologies).
Furthermore, all users, not just users with disabilities, should find
conforming user agents to be more usable.

In addition to helping developers of browsers and media players, the
document will also benefit developers of assistive technologies because
it explains what types of information and control an assistive technology
may expect from a conforming user agent. Technologies not addressed
directly by this document (e.g., technologies for braille rendering)
will be essential to ensuring Web access for some users with disabilities.

The Working Draft for "Implementing UAAG 2.0" provides supporting
information for the User Agent Accessibility Guidelines (UAAG) 2.0. The
document explains the intent of the UAAG 2.0 success criteria, gives
examples of implementing the guidelines, and provides best-practice
recommendations and additional resources. It includes a new section
supporting the definition of a user agent.

This article describes how government agencies are seeking to navigate
issues of interoperability, data migrations, security, and standards in
the context of Cloud Computing. The government defines cloud computing
as an on-demand model for network access, allowing users to tap into a
shared pool of configurable computing resources, such as applications,
networks, servers, storage and services, that can be rapidly provisioned
and released with minimal management effort or service-provider interaction.

Momentum for cloud computing has been building during the past year,
after the new [U.S.] administration trumpeted the approach as a way to
derive greater efficiency and cost savings from information technology
investments. But the journey to cloud computing infrastructures will
take a few more years to unfold, federal CIOs and industry experts say.
Issues of data portability among different cloud services, migration of
existing data, security and the definition of standards for all of those
areas are the missing rungs on the ladder to the clouds.

The Federal Cloud Computing Security Working Group, an interagency
initiative, is working to develop the Government-Wide Authorization
Program (GAP), which will establish a standard set of security controls
and a common certification and accreditation program that will validate
cloud computing providers... Cloud vendors need to implement multiple
agency policies, which can translate into duplicative risk management
processes and lead to inconsistent application of federal security
requirements.

At the user level, there are challenges associated with access control
and identity management, according to Doug Bourgeois, director of the
Interior Department's National Business Center. Organizations must
extend their existing identity, access management, audit and monitoring
strategies into the cloud. However, the problem is that existing
enterprise systems might not easily integrate with the cloud... An agency
cannot transfer data from a public cloud provider, such as Amazon or
Google, and put it in an infrastructure-as-a-service platform that a
private cloud provider develops for the agency and then exchange that
data with another type of cloud provider; that type of data transfer is
difficult because there are no overarching standards for operating in a
hybrid environment...

Members of the IETF Internet Wideband Audio Codec (CODEC) Working Group
have released an initial level -00 Internet Draft specification for
Codec Requirements. Additional discussion (development process,
evaluation, requirements conformance, intellectual property issues) is
provided in the draft for Guidelines for the Codec Development Within
the IETF. The IETF CODEC Working Group was formed recently "to
ensure the existence of a single high-quality audio codec that is
optimized for use over the Internet and that can be widely implemented
and easily distributed among application developers, service operators,
and end users."

"According to reports from developers of Internet audio applications
and operators of Internet audio services, there are no standardized,
high-quality audio codecs that meet all of the following three conditions:
(1) Are optimized for use in interactive Internet applications. (2) Are
published by a recognized standards development organization (SDO) and
therefore subject to clear change control. (3) Can be widely implemented
and easily distributed among application developers, service operators,
and end users. According to application developers and service operators,
an audio codec that meets all three of these would: enable protocol
designers to more easily specify a mandatory-to-implement codec in
their protocols and thus improve interoperability; enable developers
to more easily build innovative, interactive applications for
the Internet; enable service operators to more easily deploy affordable,
high-quality audio services on the Internet; and enable end users of
Internet applications and services to enjoy an improved user experience."

The "Codec Requirements" specification provides requirements for an audio
codec designed specifically for use over the Internet. The requirements
attempt to address the needs of the most common Internet interactive
audio transmission applications and to ensure good quality when
operating in conditions that are typical for the Internet. These
requirements address quality, sampling rate, delay, bit-rate, and
packet loss robustness. Other desirable codec properties are considered
as well...

In-scope applications include: (1) Point-to-point calls, i.e., voice
over IP (VoIP) calls between two "standard" (fixed or mobile) phones,
implemented in hardware or software. (2) Conferencing, where
conferencing applications that support multi-party calls have additional
requirements on top of the requirements for point-to-point calls;
conferencing systems often have higher-fidelity audio equipment and
greater network bandwidth available, especially when video transmission
is involved. (3) Telepresence, where most
telepresence applications can be considered to be essentially very
high-quality video-conferencing environments, so all of the conferencing
requirements also apply to telepresence. (4) Teleoperation, where
teleoperation applications are similar to telepresence, with the
exception that they involve remote physical interactions. (5) In-game
voice chat, where the requirements are similar to those of conferencing,
with the main difference being that narrowband compatibility is not
necessary. (6) Live distributed music performances / Internet music
lessons, and other applications, where live music requires extremely
low end-to-end delay and is one of the most demanding applications for
interactive audio transmission.

"Browser makers, grappling with outmoded technology and a vision to
rebuild the Web as a foundation for applications, have begun converging
on a seemingly basic but very important element of cloud computing. That
capability is called local storage, and the new mechanism is called
Indexed DB. Indexed DB, proposed by Oracle and initially called
WebSimpleDB, is largely just a prototype at this stage, not something
Web programmers can use yet. But already it's won endorsements from
Microsoft, Mozilla, and Google, and together, Internet Explorer, Firefox,
and Chrome account for more than 90 percent of the usage on the Net today.

Standardization could come: advocates have worked Indexed DB into the
considerations of the W3C, the World Wide Web Consortium that
standardizes HTML and other Web technologies. In the W3C discussions,
Indexed DB got a warm reception from Opera, the fifth-ranked browser.

It may sound perverse, but the ability to store data locally on a computer
turns out to be a very important part of the Web application era that's
really just getting under way. The whole idea behind cloud computing is
to put applications on the network, liberating them from being tied to
a particular computer, but it turns out that the computer still matters,
because the network is neither fast nor ubiquitous. Local storage lets
Web programmers save data onto computers where it's convenient for
processors to access. That can mean, for example, that some aspects of
Gmail and Google Docs can work while you're disconnected from the
network. It also lets data be cached on the computer for quick access
later. The overall state of the Web application is maintained on the
server, but stashing data locally can make cloud computing faster and
more reliable..."

An editor's draft of the W3C specification Indexed Database API is
available online: "User agents need to store large numbers of objects
locally in order to satisfy off-line data requirements of Web applications.
'Web Storage' [10-September-2009 WD] is useful for storing pairs of
keys and their corresponding values. However, it does not provide in-order
retrieval of keys, efficient searching over values, or storage of
duplicate values for a key. This specification provides a concrete API
to perform advanced key-value data management that is at the heart of
most sophisticated query processors. It does so by using transactional
databases to store keys and their corresponding values (one or more
per key), and providing a means of traversing keys in a deterministic
order. This is often implemented through the use of persistent B-tree
data structures that are considered efficient for insertion and deletion
as well as in-order traversal of very large numbers of data records."
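
The storage model the excerpt describes (sorted keys, one or more values per
key, deterministic in-order traversal) can be sketched in miniature. This is
a toy illustration of the idea, not the IndexedDB API itself; real engines
typically use persistent B-trees rather than an in-memory sorted list, and
all names below are hypothetical:

```python
import bisect

# Toy sketch of the ordered key-value model behind the Indexed Database
# API excerpt above: keys kept sorted, duplicates allowed per key, and a
# cursor for in-order traversal -- the retrieval pattern plain Web Storage
# (a flat key/value map) does not provide.

class OrderedStore:
    def __init__(self):
        self._keys = []    # sorted list of distinct keys
        self._values = {}  # key -> list of values (duplicates allowed)

    def put(self, key, value):
        if key not in self._values:
            bisect.insort(self._keys, key)   # keep keys in sorted order
            self._values[key] = []
        self._values[key].append(value)

    def cursor(self, lower=None):
        """Yield (key, value) pairs in deterministic key order,
        optionally starting at a lower bound."""
        start = 0 if lower is None else bisect.bisect_left(self._keys, lower)
        for key in self._keys[start:]:
            for value in self._values[key]:
                yield key, value

db = OrderedStore()
db.put("b", 2); db.put("a", 1); db.put("b", 3)
print(list(db.cursor()))           # [('a', 1), ('b', 2), ('b', 3)]
print(list(db.cursor(lower="b")))  # [('b', 2), ('b', 3)]
```

A B-tree-backed implementation would make `put` and range scans efficient on
disk for very large stores, but the traversal contract is the same.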

This post is part of an ongoing series. It expands on Item 9 of 'Reforming
Standardisation in JTC 1', which proposed Ten Recommendations for Reform.
Item 9 was: "Clarify intellectual property policies: International
Standards must have clearly stated IP policies, and avoid unacceptable
patent encumbrances."

Historically, patents have been a fraught topic, co-existing uneasily
with standards. Perhaps (within JTC 1) one of the most notorious recent
examples surrounded the JPEG Standard and, in part prompted by such
problems, there are certainly many people of good will wanting better
management of IP in standards. Judging by some recent developments in
document format standardisation, it seems probable that this will be the
area where progress can next be made...

The Myth of Unencumbered Technology: Given the situation we are evidently
in, it is clear that no technology is safe. The brazen claims of
corporations, the lack of diligence by the US Patent Office, and the
capriciousness of courts means that any technology, at any time, may
suddenly become patent encumbered. Technical people - being logical and
reasonable - often make the mistake of thinking the system is bound by
logic and reason; they assume that because they can see 'obvious' prior
art, then it will apply; however as the case of the i4i patent vividly
illustrates, this is simply not so.

While the 'broken stack' of patents is beyond repair by any single
standards body, at the very least the correct application of the rules
can make the situation for users of document format standards more
transparent and certain. In the interests of making progress in this
direction, it seems a number of points need addressing now. (1) Users
should be aware that the various covenants and promises being pointed-to
by the US vendors need not be relevant to them as regards standards use.
Done properly, International Standardization can give a clearer and
stronger guarantee of license availability, without the caveats,
interpretable points, and exit strategies these vendors' documents
invariably have. (2) In particular, it should be of concern to NBs that
there is no entry in JTC 1's patent database for OOXML (there is for
DIS 29500, its precursor text, a ZRAND promise from Microsoft); there
is no entry whatsoever for ODF... (3) In the case of the i4i patent,
one implementer has already commented that implementing CustomXML in
its entirety may run the risk of infringement, and this is probably,
after all, why Microsoft patched Word in the field to remove some
aspects of its CustomXML support... (4) When declaring their patents
to JTC 1, patent holders are given an option whether to make a general
declaration about the patents that apply to a standard, or to make a
particular declaration about each and every itemized patent which
applies. I believe NBs should be insisting that patent holders enumerate
precisely the patents they hold which they claim apply. There is
obviously much to do, and I am hoping that at the forthcoming SC 34
meetings in Stockholm this work can begin...