The W3C Web Security Context Working Group has published a Proposed
Recommendation specification for Web Security Context: User Interface
Guidelines. Public comment on the document is invited through
20-July-2010. The WG notes that substantive technical comments were
received during the Last Call review period that ended 31-March-2010.
The goal of this W3C Working Group is to enable users to come to a better
understanding of the context that they are operating in when making
trust decisions on the Web; e.g., giving up passwords or other sensitive
information to possibly malicious sites.

The specification deals with the trust decisions that users must make
online, and with ways to support them in making safe and informed
decisions where possible. It specifies user interactions with the
goal of making security usable, based on known best practice in this
area. The document is intended to provide user interface guidelines...
Since this document is part of the W3C specification process, it is
written to clearly lay out the requirements and options for conforming
to it as a standard. User interface guidelines that are not intended
for use as standards do not have such a structure. Readers more
familiar with that latter form of user interface guideline are
encouraged to read this specification as a way to avoid known mistakes
in usable security.

In order to achieve its goal, the specification includes recommendations
on the presentation of identity information by user agents. It also
includes recommendations on conveying error situations in security
protocols. The error handling recommendations both minimize the trust
decisions left to users and represent known best practice in inducing
users toward safe behavior where they have to make these decisions.
To complement the interaction- and decision-related parts of this
specification, Section 7 'Robustness Best Practices' addresses the
question of how the communication of context information needed to make
decisions can be made more robust against attacks.

This specification comes with two companion documents. 'Web Security
Experience, Indicators and Trust: Scope and Use Cases' documents the
initial assumptions about the scope of this specification. It also
includes an initial set of use cases the Working Group discussed.
'Web User Interaction: Threat Trees' documents the Working Group's
initial threat analysis. This document is based on current best
practices in deployed user agents, and covers the use cases and
threats in those documents to that extent..."

On behalf of the Apache ODE Development Team, Tammo van Lessen announced
the Version 1.3.4 release of ODE (Orchestration Director Engine), which
provides a web-service capable workflow engine. Highlights of this
release include (1) Instance replayer: Message exchanges between
partners and processes can be recorded, retrieved and replayed. This
allows for migrating running process instances to newer versions of a
process model or to another ODE instance. (2) Process OSGi bundles:
Process models can be packaged and deployed as OSGi bundles (ServiceMix).
(3) Spring-based properties: Spring properties can be accessed via
XPath extensions in BPEL (ServiceMix). ODE also offers side-by-side
support for both the WS-BPEL 2.0 OASIS standard and the legacy BPEL4WS
1.1 vendor specification. It supports two communication layers: one
based on Axis2 (Web Services HTTP transport) and another based on
the JBI standard (using ServiceMix)..."

Apache ODE (Orchestration Director Engine) "executes business processes
written following the WS-BPEL standard. It talks to web services,
sending and receiving messages, handling data manipulation and error
recovery as described by your process definition. It supports both
long- and short-lived process executions to orchestrate all the
services that are part of your application.

WS-BPEL is an XML-based language defining several constructs to write
business processes. It defines a set of basic control structures like
conditions or loops as well as elements to invoke web services and
receive messages from services. It relies on WSDL to express web
services interfaces. Message structures can be manipulated, assigning
parts or the whole of them to variables that can in turn be used to
send other messages.
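
To give these constructs a concrete shape, the following Python
sketch builds and inspects a minimal, illustrative WS-BPEL 2.0
process skeleton. All names in it are invented, and a process
actually deployable to an engine such as ODE would also need
partnerLink and variable declarations plus the WSDL that defines the
service interface:

    # Illustrative sketch: a minimal WS-BPEL 2.0 skeleton showing the
    # receive/assign/reply constructs described above. Names are invented.
    import xml.etree.ElementTree as ET

    BPEL_NS = "http://docs.oasis-open.org/wsbpel/2.0/process/executable"

    MINIMAL_PROCESS = """
    <process name="EchoProcess" targetNamespace="urn:example:echo"
             xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <sequence>
        <receive partnerLink="client" operation="echo"
                 variable="request" createInstance="yes"/>
        <assign>
          <copy><from variable="request"/><to variable="response"/></copy>
        </assign>
        <reply partnerLink="client" operation="echo" variable="response"/>
      </sequence>
    </process>
    """

    root = ET.fromstring(MINIMAL_PROCESS)
    # List the top-level activities, e.g. to sanity-check the document:
    for activity in root.find("{%s}sequence" % BPEL_NS):
        print(activity.tag.split("}")[1], dict(activity.attrib))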

The principal objective in the development of ODE was to create a
reliable, compact, and embeddable component capable of managing the
execution of long-running business processes defined using the BPEL
process description language. The focus has been on developing small
modules with minimal dependencies that could be assembled (and easily
reassembled) to construct a full-featured BPMS. The key components of
the ODE architecture include the ODE BPEL Compiler, ODE BPEL Engine
Runtime, ODE Data Access Objects (DAOs), ODE Integration Layers (ILs),
and user tooling... The BPEL compiler is responsible for the conversion
of the source BPEL artifacts (i.e. BPEL process documents, WSDLs, and
schemas) into a compiled representation suitable for execution. The
output of the compiler is either a "good" compiled representation,
or a list of error messages indicating problems with the source
artifacts... The runtime handles the dirty work of process execution
by providing implementations of the various BPEL constructs. The
runtime also implements the logic necessary to determine when a new
instance should be created, and to which instance an incoming message
should be delivered. Finally, the runtime implements the Process
Management API that is used by user tooling to interact with the
engine..."

Members of the IETF Geographic Location/Privacy (GEOPRIV) Working Group
have published a revised draft of the Standards Track specification
Dynamic Host Configuration Protocol Options for Coordinate-Based
Location Configuration Information. If approved, this IETF
specification will obsolete IETF RFC 3825, published in July 2004.

The IETF GEOPRIV working group was chartered "to continue to develop
and refine representations of location in Internet protocols, and to
analyze the authorization, integrity, and privacy requirements that
must be met when these representations of location are created, stored,
and used. Many applications are emerging that require geographic and
civic location information about resources and entities; the
representation and transmission of that information have significant
privacy and security implications... The IETF has also begun working
on creating applications that use these capabilities, for emergency
services, general real-time communication, and other usages."

This document specifies Dynamic Host Configuration Protocol Options
(both DHCPv4 and DHCPv6) for the coordinate-based geographic location
of a client. The Location Configuration Information (LCI) includes
Latitude, Longitude, and Altitude, with resolution or uncertainty
indicators for each, where separate parameters indicate the reference
datum for each of these values... Appendix A ('GML Mapping') defines
an XML-based GML representation of a decoded DHCP option, which depends
on what fields are specified. The DHCP format for location logically
describes a geodetic prism, rectangle, or point, depending on whether
Altitude and uncertainty values are provided...
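
For a concrete sense of the option body, here is a hedged Python
sketch that packs a geodetic location into the 16-byte layout of the
original RFC 3825: 6-bit resolution fields; 34-bit two's-complement
fixed-point latitude and longitude with 25 fractional bits; a 4-bit
altitude type; a 6-bit altitude resolution; a 30-bit altitude (8
fractional bits when the type is metres); and an 8-bit datum. The
revised draft reworks the resolution fields as uncertainty
indicators, so treat this as a sketch of the basic structure only;
the coordinates are purely illustrative.

    # Hedged sketch: pack a geodetic LCI into the 16-byte RFC 3825 layout.
    # The revised draft refines the resolution/uncertainty semantics.

    def fixed_point(value, fraction_bits, total_bits):
        """Encode a signed number as two's-complement fixed point."""
        raw = int(round(value * (1 << fraction_bits)))
        return raw & ((1 << total_bits) - 1)

    def encode_lci(lat, lon, alt, lat_res=34, lon_res=34,
                   alt_type=1, alt_res=30, datum=1):
        """Return the 16-byte body (alt_type 1 = metres, datum 1 = WGS 84)."""
        bits = 0
        for value, width in ((lat_res, 6), (fixed_point(lat, 25, 34), 34),
                             (lon_res, 6), (fixed_point(lon, 25, 34), 34),
                             (alt_type, 4), (alt_res, 6),
                             (fixed_point(alt, 8, 30), 30), (datum, 8)):
            bits = (bits << width) | value
        return bits.to_bytes(16, "big")

    # Purely illustrative coordinates (central London, ~35 m altitude):
    print(encode_lci(51.5074, -0.1278, 35.0).hex())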

From the Introduction: "The physical location of a network device has
a range of applications. In particular, emergency telephony applications
rely on knowing the location of a caller in order to determine the
correct emergency center. The location of a device can be represented
either in terms of geospatial (or geodetic) coordinates, or as a civic
address. Different applications may be more suited to one form of
location information; therefore, both the geodetic and civic forms may
be used simultaneously... Typically DHCP clients refresh their
configuration in response to changes in interface state or pending
lease expirations. As a result, when a mobile host changes location
without subsequently completing another DHCP exchange, location
configuration information initially obtained via DHCP could become
outdated..."

"The W3C Cheatsheet for Web Developers is a compact Web application
that provides quick access to useful information from various W3C specs.
Making that Web app mobile-friendly has always been one of its design
goals: it uses a very compact layout; the JavaScript-based
auto-complete search was tweaked to work reasonably well with mobile
keyboards (including virtual keyboards); and it uses HTML5's
ApplicationCache to be usable off-line in browsers that support it.

One of the W3C Working Groups, the Web Applications Working Group is
developing a stack of specifications to make it easier to develop
applications with widgets. There are quite a few similar efforts in
various communities: Nokia's Web runtime engine, Firefox add-ons,
Chrome extensions, and Safari extensions, to name a few. It will be
interesting to see if all these efforts end up converging toward the
current (or a future) revision of the W3C widgets specifications.

The W3C Cheat Sheet on Android is obviously not an endorsement of
Android, even less so of the world of application markets; a growing
number of people seem to see these markets as being in opposition to
the Web. My personal opinion is that they're probably complementary,
in the same way that a Web portal or a social bookmarking service is
complementary to search engines.

This Cheat Sheet on Android allows quick access to: (1) the
description of the various language tokens (elements, attributes,
properties, functions, etc.) of HTML, CSS, SVG and XPath, through
the text entry box on the Search tab; when you start typing a string,
a drop-down menu appears, allowing you to select a token among those
that match what you have typed; (2) the summary of the Mobile Web
Best Practices, under the mobile tab; (3) the Web Content Accessibility
Guidelines 2.0 at a glance, under the accessibility tab; (4) the
internationalization quicktips under the I18N tab; (5) and some
typography reminders in the typography tab..."

"DMTF has announced the opening of the Common Diagnostic Model 1.0
Conformance Program (CDM 1.0). Interested companies can now begin
testing their products using the CDM 1.0 Conformance Test Suite (CTS)
and submitting their results to the CDM Conformance Program
Administrator for validation.

The CDM Conformance Program (CDM CP) is designed to validate CDM
implementations to a particular version of the CDM Implementation
Requirements Specification and is managed and sponsored by the CDM
Forum. CDM is used to evaluate the health of computer system components
in multivendor environments. It specifies diagnostics instrumentation
that can be utilized by vendors (OEMs and system builders) and
platform management applications to determine the health of computer
system components.

Companies interested in participating in the CDM CP self-test their
implementation using the applicable CDM Conformance Test Suite and
submit their digitally signed results to the CDM Conformance Program
Administrator (an independent third party) for validation. Certified
results may be submitted for inclusion in the DMTF Certification
Registry.

The CDM 1.0 Conformance Test Suite (CTS) software is provided by DMTF
to industry-leading vendors developing diagnostics within the DMTF
DSP 1002 Profile Specification 1.0 using the CIM-XML protocol. A
WS-Management protocol version of the CTS will be available in the
near future... The conformance programs are a key piece of DMTF's mission
to promote interoperable IT management solutions. DMTF is committed
to helping members develop and test standards-based products for their
customers..."

Real-time web applications allow users to receive notifications as
soon as information is published, without needing to check the original
source manually for updates. They have been popularized by
social-notification tools like Twitter and FriendFeed, web-based
collaboration tools like Google Wave, and web-based chat clients
like Meebo.

The Extensible Messaging and Presence Protocol (XMPP) is an XML-based
set of technologies for real-time applications, defined as networked
applications that continually update in response to new or changed
data. It was originally developed as a framework to support instant
messaging and presence applications within enterprise environments...

This tutorial introduces you to the real-time web and takes you
through some of the reasons for building real-time web applications.
You learn techniques that allow you to create responsive, continually
updated web applications that conserve server resources while providing
a slick user experience.
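
As a flavor of the kind of client such a tutorial builds, here is a
minimal sketch using the third-party SleekXMPP library for Python.
The library choice and the credentials are illustrative assumptions
rather than details from the tutorial, and API specifics can vary
between SleekXMPP versions. The client connects, announces presence,
and echoes incoming chat messages as they arrive:

    # Hedged sketch: an XMPP echo client using the third-party SleekXMPP
    # library. Credentials below are placeholders.
    import sleekxmpp

    class EchoBot(sleekxmpp.ClientXMPP):
        def __init__(self, jid, password):
            super(EchoBot, self).__init__(jid, password)
            self.add_event_handler("session_start", self.on_start)
            self.add_event_handler("message", self.on_message)

        def on_start(self, event):
            self.send_presence()   # announce availability
            self.get_roster()      # fetch the contact list

        def on_message(self, msg):
            # Replies are pushed as messages arrive: no polling involved.
            if msg["type"] in ("chat", "normal"):
                msg.reply("Echo: %(body)s" % msg).send()

    if __name__ == "__main__":
        bot = EchoBot("bot@example.com", "secret")  # placeholders
        if bot.connect():
            bot.process(block=True)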

An initial level -00 IETF Internet Draft has been published for the
Standards Track specification HTTP Strict Transport Security. This
specification defines a mechanism enabling Web sites to declare
themselves accessible only via secure connections, and/or for users to
be able to direct their user agent(s) to interact with given sites
only over secure connections. This overall policy is referred to as
Strict Transport Security (STS). The policy is declared by Web sites
via the Strict-Transport-Security HTTP Response Header Field. Use
cases illustrated include: (1) A Web browser user wishes to discover,
or be introduced to, and/or utilize various web sites (some arbitrary,
some known) in a secure fashion. (2) A Web site deployer wishes to
offer their site in an explicitly secure fashion for their own, as
well as their users', benefit.
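
To make the mechanism concrete, here is a minimal Python WSGI sketch
(not part of the draft) that emits the Strict-Transport-Security
response header using the draft's max-age and includeSubDomains
syntax. The port number and max-age value are illustrative, and the
header only has effect when delivered over a TLS-protected
connection:

    # Minimal sketch: a WSGI app declaring an STS policy via the
    # Strict-Transport-Security header. Values are illustrative.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        headers = [
            ("Content-Type", "text/plain"),
            # Pin the site to secure transport for about six months,
            # covering subdomains as well.
            ("Strict-Transport-Security",
             "max-age=15768000; includeSubDomains"),
        ]
        start_response("200 OK", headers)
        return [b"Served with an STS policy header.\n"]

    if __name__ == "__main__":
        # wsgiref speaks plain HTTP; a real deployment would sit behind
        # TLS, since UAs honor the header only on secure connections.
        make_server("", 8443, app).serve_forever()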

From the Introduction: "The HTTP protocol may be used over various
transports, typically the Transmission Control Protocol (TCP). However,
TCP does not provide channel integrity protection, confidentiality,
or secure server identification. Thus the Secure Sockets Layer (SSL)
protocol and its successor Transport Layer Security (TLS) were
developed in order to provide channel-oriented security, and are
typically layered between application protocols and TCP. RFC 2818
specifies how HTTP is layered onto TLS, and defines the Uniform
Resource Identifier (URI) scheme of 'https' (in practice, however,
HTTP user agents (UAs) typically offer their users choices among
SSL2, SSL3, and TLS for secure transport)... UAs employ various
local security policies with respect to the characteristics of their
interactions with web resources depending on (in part) whether they
are communicating with a given web resource using HTTP or
HTTP-over-a-Secure-Transport. For example, cookies may be flagged as
Secure. UAs are to send such Secure cookies to their addressed server
only over a secure transport. This is in contrast to non-Secure
cookies, which are returned to the server regardless of transport,
modulo other rules...
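
As a small illustration of that distinction (not from the draft; the
cookie names and values are invented), this Python snippet uses the
standard library's cookie support to emit one Secure and one
non-Secure cookie:

    # Sketch of the Secure-cookie distinction; names/values invented.
    from http.cookies import SimpleCookie

    jar = SimpleCookie()
    jar["session"] = "0123456789abcdef"
    jar["session"]["secure"] = True  # returned by the UA only over HTTPS
    jar["theme"] = "dark"            # non-Secure: sent over any transport

    # The Set-Cookie header lines a server would emit:
    print(jar.output())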

UAs typically annunciate to their users any issues with secure connection
establishment, such as being unable to validate a server certificate
trust chain, or if a server certificate is expired, or if a server's
domain name appears incorrectly in the server certificate. Often,
UAs provide for users to be able to elect to continue to interact with
a web resource in the face of such issues. This behavior is sometimes
referred to as 'click(ing) through' security, and thus can be described
as 'click-through insecurity'...

Jackson and Barth proposed an approach (ForceHTTPS) that enables web
sites and/or users to declare that such issues are to be treated as
fatal and without direct user recourse. The aim is to prevent users
from unintentionally downgrading their security. This specification
embodies and refines the approach proposed in 'ForceHTTPS: Protecting
High-Security Web Sites from Network Attacks', e.g., an HTTP response
header field is used to convey site policy to the UA rather than a
cookie..."

The W3C User Agent Accessibility Guidelines Working Group has published
an updated Working Draft of the User Agent Accessibility Guidelines
(UAAG) 2.0. This document provides guidelines for designing user
agents that lower barriers to Web accessibility for people with
disabilities.

User agents include browsers and other types of software that retrieve
and render Web content. A user agent that conforms to these guidelines
will promote accessibility through its own user interface and through
other internal facilities, including its ability to communicate with
other technologies, especially assistive technologies. Furthermore,
all users, not just users with disabilities, should find conforming
user agents to be more usable.

In addition to helping developers of browsers and media players, this
document will also benefit developers of assistive technologies
because it explains what types of information and control an assistive
technology may expect from a conforming user agent. Technologies not
addressed directly by this document (e.g., technologies for braille
rendering) will be essential to ensuring Web access for some users
with disabilities.

The Working Group requests comments now in preparation for Last Call.
Members of the Working Group also published a Working Draft of the
Implementing UAAG 2.0 supporting Note. It provides explanations of
the intent of the UAAG 2.0 success criteria, examples of
implementations of the guidelines, best-practice recommendations,
and additional resources for the guidelines..."

"A U.S. Federal Trade Commission representative recently delivered a
stern indictment of current privacy laws, saying they fail to protect
American consumers and instead place too much of a burden on them.
The existing constellation of privacy laws relies heavily on
disclosure of data collection and use practices and on informed
consumer choice, but comparing the privacy policies of two companies
is an almost impossible task...

These sentiments are likely to be reflected in a widely anticipated
report that the agency plans to publish later this year. The report is
expected to offer Congress recommendations on new laws and may
state that the FTC intends to expand its current authority around
policing 'deceptive' practices to address more Internet-related business
practices...

Kathryn Ratte, a senior attorney in the FTC's consumer protection
bureau: 'In an area like cloud computing, which demonstrates some of
the limits of these traditional structures, the current notice and
choice model in some very basic sense isn't working...'

More hints have come in the form of an FTC document that suggests cloud
computing services could be targeted for more regulation. The ability
of these services to collect and centrally store increasing amounts of
consumer data, combined with the ease with which such centrally stored
data may be shared with others, creates a risk..."

"Modern cloud computing platforms vary, but they share two critical
features: they abstract the underlying compute components and they
typically charge users incrementally based on their usage. The
'pay-as-you-go' billing strategy isn't new, and it has many potential
advantages, especially for scientists who don't require 24/7
accessibility. Many academic computational researchers have used
shared compute facilities for decades and are accustomed to being
billed per CPU-hour. What makes cloud architectures a compelling new
product for scientific computing—and what differentiates them from
existing supercomputing facilities—is the way they abstract the
underlying compute components...

We explored a variety of promising methods that let users interact
with custom AMIs (Amazon Machine Images), ranging from predefined
scripts that manage AMIs
locally to tools that let users remotely ssh into the instances and
control them directly. This virtualization lets code developers
optimize and preinstall scientific codes on AMIs, thus facilitating
control over the computational environment.
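
As a hedged sketch of the 'predefined scripts' approach, the
following uses the third-party boto library for Python to launch a
small cluster from a custom AMI; the library choice, AMI ID, key
pair, and instance type are illustrative assumptions rather than
details from the article. Once the instances are running, users can
ssh in and control them directly, as described above:

    # Hedged sketch using the third-party boto library; the AMI ID,
    # key pair, and instance type are placeholders.
    import boto.ec2

    # Credentials are read from the environment or the boto config.
    conn = boto.ec2.connect_to_region("us-east-1")
    reservation = conn.run_instances(
        "ami-12345678",            # a custom AMI with codes preinstalled
        min_count=4, max_count=4,  # a small four-node cluster
        instance_type="m1.large",
        key_name="research-key",   # key pair used for the ssh step
    )
    for instance in reservation.instances:
        print(instance.id, instance.state)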

In terms of performance, we've demonstrated that the EC2 cloud clusters
can provide access to reliable, high-performance computation for
general scientific users without requiring that they purchase and
maintain hardware on their own, provided that their application doesn't
demand high-performance network interconnects. Serial performance of
scientific codes was comparable to bare-metal runs on similar hardware.

However, the network that connects the EC2 compute hardware in the
same Amazon availability zone has similar latency and bandwidth
characteristics to a gigabit Ethernet network in a large office
building. Although network performance could have been even worse
(Amazon offers no guarantees), it's still a far cry from the
capability of the high-performance interconnects found on most
academic high-end computing clusters..."