"The Department of Homeland Security's Federal Emergency Management
Agency (FEMA) today announced the adoption of a new digital message
format for the Integrated Public Alert and Warning System (IPAWS),
the nation's next generation emergency alert and warning network.
The goal of IPAWS is to expand upon the traditional Emergency Alert
System by allowing emergency management officials to reach as many
people as possible over as many communications devices as possible,
including radio, television, mobile phones, and personal computers.
The current Emergency Alert System relies largely on radio and
television to communicate with people.

The new digital message format being adopted by FEMA is the
Organization for the Advancement of Structured Information Standards
(OASIS) Common Alerting Protocol (CAP) v1.2 Standard. This open
standard will enable alert messages to be easily composed by emergency
management officials for communication with citizens using a much
broader set of devices to reach as many people as possible.
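To make the message format concrete, here is a minimal sketch of a CAP v1.2 alert built with the Python standard library. The element names and the `urn:oasis:names:tc:emergency:cap:1.2` namespace follow the OASIS CAP 1.2 standard, but the identifier, sender, and event values below are hypothetical:

```python
# Sketch of a minimal CAP v1.2 alert; field values are hypothetical.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_alert():
    ET.register_namespace("", CAP_NS)  # serialize with a default namespace
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    # Required top-level elements of a CAP alert
    for tag, text in [
        ("identifier", "EXAMPLE-2010-001"),    # hypothetical message ID
        ("sender", "alerts@example.gov"),      # hypothetical sender
        ("sent", "2010-10-01T12:00:00-04:00"),
        ("status", "Actual"),
        ("msgType", "Alert"),
        ("scope", "Public"),
    ]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = text
    # One <info> block describing the event itself
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    for tag, text in [
        ("category", "Met"),
        ("event", "Flood Warning"),
        ("urgency", "Expected"),
        ("severity", "Moderate"),
        ("certainty", "Likely"),
    ]:
        ET.SubElement(info, f"{{{CAP_NS}}}{tag}").text = text
    return ET.tostring(alert, encoding="unicode")

xml_text = build_alert()
```

Because CAP is plain XML over an open schema, any tool that can emit a document like this can feed an IPAWS-compatible pipeline.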

In order to assist officials in evaluating new alert and warning
systems, FEMA is conducting an assessment program to ensure products
adhere to the IPAWS CAP profile. A list of pre-screened products that
meet the profile will be published at the FEMA Responders Knowledge
Base, to aid federal, state, territorial, tribal and local officials
in purchasing emergency alert products that comply with IPAWS CAP.
Vendors can apply for these assessments...

The three documents defining the FEMA IPAWS technical standards and
requirements for CAP and its implementation are: (1) OASIS CAP
Standard v1.2; (2) IPAWS Specification to the CAP Standard (CAP v1.2
IPAWS USA Profile v1.0); (3) CAP to EAS Implementation Guide.
Additional information and documentation on CAP technical standards
can be found on the OASIS web site. The CAP to EAS Implementation
Guide can be found on the web site of the EAS-CAP Industry Group..."

W3C has announced the formation of a Points of Interest Working Group
(POI WG) as part of the Ubiquitous Web Applications Activity. For the
purposes of this Working Group, a 'Point of Interest' is defined
simply as an entity at a physical location about which information is
available. For example, the Taj Mahal in India is a point of interest,
located at 27.174799° N, 78.042111° E in the WGS84
geodetic system. Additional information could be associated with it,
such as: it was completed around 1653, has a particular shape, and
that it is open to visitors during specific hours.
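The Taj Mahal example can be modeled directly as data. This sketch uses an illustrative Python structure, not the format the Working Group will define; the field names are assumptions:

```python
# Illustrative model of a Point of Interest; not the POI WG format.
from dataclasses import dataclass, field

@dataclass
class PointOfInterest:
    name: str
    lat: float                # WGS84 latitude, decimal degrees
    lon: float                # WGS84 longitude, decimal degrees
    properties: dict = field(default_factory=dict)  # open-ended metadata

taj_mahal = PointOfInterest(
    "Taj Mahal", 27.174799, 78.042111,
    {"completed": "c. 1653", "open_to_visitors": True},
)
```

The `properties` dict stands in for the kind of extensible vocabulary (business hours, logos, and so on) that the group's AR Vocabulary Note is meant to standardize.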

Points of Interest data has many uses, including augmented reality
browsers, location-based social networking games, geocaching, mapping,
navigation systems, and many others. This group will primarily focus
on POI use within AR applications but will strive to ensure reusability
across applications. The group will also explore how the AR industry
could best use, influence and contribute to Web standards.

The Working Group is chartered through December 2011, and is initially
chaired by Andrew Braun (Sony Ericsson). The POI WG will deliver the
following documents: (1) Points of Interest Recommendation, which will
define a format for the common components of a Point of Interest; (2)
the 'Points of Interest: Augmented Reality Vocabulary', planned as a
Working Group Note which specifies a vocabulary of properties to enable
Augmented Reality applications by attaching additional information to
POI data, e.g. logo, business hours, or social media related properties;
(3) 'Augmented Reality and Web Standards', documenting how Augmented
Reality applications may best re-use and/or influence current and
future W3C Recommendations... The POI WG will be judged a success if
it produces a Point of Interest format Recommendation that has two or
more complete, independent and interoperable implementations.

In addition to the deliverables listed above, the Working Group
intends to produce a test suite for the POI Recommendation and the
AR Vocabulary Note, to assist in ensuring interoperability. The WG may
also publish use cases and requirements, primers, and best practices
for Points of Interest as Working Group Notes. The Working Group may also
explore the Augmented Reality landscape with regards to Web standards
and publish these findings as a Working Group Note..."

Members of the IETF Internationalized Resource Identifiers (IRI)
Working Group have released an initial level -00 Internet Draft
for Guidelines and Registration Procedures for New URI/IRI Schemes.
This document "updates the guidelines and recommendations for the
definition of Uniform Resource Identifier (URI) schemes, and extends
the registry and guidelines to apply when the schemes are used with
Internationalized Resource Identifiers (IRIs). It also updates the
process and IANA registry for URI/IRI schemes, and if accepted,
obsoletes RFC 4395... The draft has been written in response to the
errata and the issues in the Trac issue tracker." An official 'call
for consensus' has been issued to adopt this document as an official
Working Group draft.

RFCs 2717 and 2718 "drew a distinction between 'locators' (identifiers
used for accessing resources available on the Internet) and 'names'
(identifiers used for naming possibly abstract resources, independent
of any mechanism for accessing them). The intent was to use the
designation 'URL' (Uniform Resource Locator) for those identifiers
that were locators and 'URN' (Uniform Resource Name) for those
identifiers that were names. In practice, the line between 'locator'
and 'name' has been difficult to draw: locators can be used as names,
and names can be used as locators. As a result, recent documents have
used the terms 'URI'/'IRI' for all resource identifiers, avoiding
the term 'URL' and reserving the term 'URN' explicitly for those
URIs/IRIs using the 'urn' scheme name. URN 'namespaces' are specific
to the 'urn' scheme and not covered explicitly by this specification.
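The blurred locator/name distinction is visible in practice: a 'urn'-scheme identifier parses exactly like any other URI, with 'urn' as just another scheme. A quick check with the Python standard library (example identifiers chosen for illustration):

```python
# Both a 'locator' and a 'name' are just URIs with different schemes.
from urllib.parse import urlparse

locator = urlparse("http://www.example.org/page")  # URI used as a locator
name = urlparse("urn:isbn:0451450523")             # URI using the 'urn' scheme

schemes = (locator.scheme, name.scheme)
```

From the parser's point of view there is only one namespace of schemes, which is exactly the consolidation the draft makes official.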

This document eliminates RFC 2717's distinction between different
'trees' for URI schemes; instead there is a single namespace for
registered values. Within that namespace, there are values that are
approved as meeting a set of criteria for URI schemes. Other scheme
names may also be registered provisionally, without necessarily
meeting those criteria. The intent of the registry is to: (1) provide
a central point of discovery for established URI/IRI scheme names,
and easy location of their defining documents; (2) discourage use of
the same scheme name for different purposes; (3) help those proposing
new scheme names to discern established trends and conventions, and
avoid names that might be confused with existing ones; (4) encourage
registration by setting a low barrier for provisional registrations.

RFC 3987 introduced a new protocol element, the Internationalized
Resource Identifier (IRI), by defining a mapping between URIs and
IRIs. 'Internationalized Resource Identifiers (IRIs)' of September
2010 updates this definition, allowing an IRI to be interpreted
directly without translating into a URI. There is no separate,
independent registry or registration process for IRIs: the URI Scheme
Registry is to be used for both URIs and IRIs. Previously, those who
wished to describe resource identifiers that are useful as IRIs were
encouraged to define the corresponding URI syntax, and note that the
IRI usage follows the rules and transformations defined in RFC 3987
(2006). This document changes that advice to encourage explicit
definition of the scheme and allowable syntax elements within the
larger character repertoire of IRIs..."
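The RFC 3987 mapping mentioned above boils down to UTF-8 encoding any non-ASCII characters and percent-escaping the resulting bytes. This is a simplified sketch that ignores some of the specification's details (the host component and bidirectional-text rules, for instance):

```python
# Rough sketch of the RFC 3987 IRI-to-URI mapping (simplified).
from urllib.parse import quote

def iri_to_uri(iri: str) -> str:
    # quote() percent-encodes the UTF-8 bytes of non-ASCII characters;
    # `safe` preserves URI structural and unreserved characters, and '%'
    # so already-escaped sequences are not double-encoded.
    return quote(iri, safe=":/?#[]@!$&'()*+,;=-._~%")

uri = iri_to_uri("http://example.org/r\u00e9sum\u00e9")
```

Under the draft's new advice, a scheme definition would describe the allowable IRI characters directly rather than leaning on this translation.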

The U.S. Federal Identity, Credential and Access Management Subcommittee
(ICAM) has published Version 1.0 of the "Security Assertion Markup
Language (SAML) 2.0 Web Browser Single Sign-on (SSO) Profile." This
profile "has been adopted by ICAM for the purpose of Level of Assurance
(LOA) 1, 2, and 3 identity authentication, as well as holder-of-key
assertions for binding keys or other attributes to an identity at LOA 4.

The Profile is a deployment profile based on the OASIS SAML 2.0
specifications and the Liberty Alliance eGov Profile v.1.5. This
Profile relies on the 'SAML 2.0 Web Browser SSO Profile' to facilitate
end user authentication. This Profile does not alter these standards,
but rather specifies deployment options and requirements to ensure
technical interoperability with Federal government applications. Where
this Profile does not explicitly provide guidance, the standards upon
which this Profile is based take precedence. In addition, this Profile
recognizes the Liberty Alliance eGov Profile conformance requirements,
and to the extent possible reconciles them with other SAML 2.0 Profiles.

The objective of the document is to define the ICAM SAML 2.0 Web Browser
SSO Profile so that persons deploying, managing, or supporting an
application based upon it can fully understand its use in ICAM
transaction flows. In general, the SAML 2.0 protocol facilitates
exchange of SAML messages (requests and/or responses) between endpoints.
For this Profile, messages pertain primarily to the exchange of an
identity assertion that includes authentication and attribute
information. Message support for additional features is also available.
In ICAM, the endpoints are typically the Relying Party (RP) and the
Identity Provider (IdP). The SAML 2.0 Profile defined herein includes the
following features: single sign-on, session reset, and attribute
exchange. In addition, this Profile defines two main SAML 2.0 use cases:
the end user starting at the RP, and the end user starting at the IdP.
Use case diagrams and sequence diagrams are provided to illustrate the
use cases. Privacy, security, and end user activation are also discussed.
Programmed trust (a mechanism to indicate to RPs which IdPs are
approved for use within ICAM) is also discussed, and a high-level
process flow diagram is provided to illustrate the concept..."
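In the Web Browser SSO profile that the ICAM document builds on, an `<AuthnRequest>` carried over the SAML HTTP-Redirect binding is DEFLATE-compressed, base64-encoded, and URL-encoded into a `SAMLRequest` query parameter. The sketch below shows that encoding with a hypothetical, heavily trimmed request; the IdP URL is an assumption:

```python
# Sketch of SAML HTTP-Redirect binding encoding; request XML is a
# trimmed hypothetical example, not a complete AuthnRequest.
import base64
import zlib
from urllib.parse import quote_plus, unquote_plus

authn_request = (
    '<samlp:AuthnRequest '
    'xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example123" Version="2.0" '
    'IssueInstant="2010-10-01T12:00:00Z"/>'
)

# The Redirect binding uses raw DEFLATE: strip the zlib header (2 bytes)
# and checksum (4 bytes) from zlib's output.
deflated = zlib.compress(authn_request.encode(), 9)[2:-4]
saml_request = quote_plus(base64.b64encode(deflated).decode())
redirect_url = "https://idp.example.gov/sso?SAMLRequest=" + saml_request

# The receiving side reverses the steps (decompress with a raw window)
restored = zlib.decompress(
    base64.b64decode(unquote_plus(saml_request)), -15).decode()
```

The deployment options ICAM specifies sit on top of mechanics like these without altering them.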

From Google: "Most of the common image formats on the web today were
established over a decade ago and are based on technology from around
that time. Some engineers at Google decided to figure out if there was
a way to further compress lossy images like JPEG to make them load
faster, while still preserving quality and resolution. As part of this
effort, we are releasing a developer preview of a new image format,
WebP, that promises to significantly reduce the byte size of photos
on the web, allowing web sites to load faster than before.

Images and photos make up about 65% of the bytes transmitted per web
page today. They can significantly slow down a user's web experience,
especially on bandwidth-constrained networks such as a mobile network.
Images on the web consist primarily of lossy formats such as JPEG, and
to a lesser extent lossless formats such as PNG and GIF. Our team
focused on improving compression of the lossy images, which constitute
the larger percentage of images on the web today.

To improve on the compression that JPEG provides, we used an image
compressor based on the VP8 codec that Google open-sourced in May 2010.
We applied the techniques from VP8 video intra frame coding to push
the envelope in still image coding. We also adapted a very lightweight
container based on RIFF. While this container format contributes a
minimal overhead of only 20 bytes per image, it is extensible to allow
authors to save meta-data they would like to store.
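The roughly 20-byte overhead matches a minimal RIFF layout: a 4-byte 'RIFF' tag, a 4-byte file size, a 4-byte 'WEBP' form type, then a 4-byte chunk tag and 4-byte chunk size before the payload. This sketch builds and parses such a container; the payload is dummy bytes, not a real VP8 bitstream:

```python
# Sketch of a minimal RIFF/WebP container; payload is dummy data.
import struct

def wrap_webp(vp8_payload: bytes) -> bytes:
    # Chunk: tag + little-endian size + payload
    chunk = b"VP8 " + struct.pack("<I", len(vp8_payload)) + vp8_payload
    # RIFF header: tag + size of everything after this field + form type
    return b"RIFF" + struct.pack("<I", 4 + len(chunk)) + b"WEBP" + chunk

def parse_webp(data: bytes):
    assert data[:4] == b"RIFF" and data[8:12] == b"WEBP"
    tag = data[12:16]
    (size,) = struct.unpack("<I", data[16:20])
    return tag, data[20:20 + size]

container = wrap_webp(b"\x00" * 10)   # 10 dummy payload bytes
tag, payload = parse_webp(container)
overhead = len(container) - len(payload)   # header bytes before payload
```

Extensibility comes from the same chunk mechanism: additional metadata can be carried as further tagged chunks after the image data.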

While the benefits of a VP8 based image format were clear in theory,
we needed to test them in the real world. In order to gauge the
effectiveness of our efforts, we randomly picked about 1,000,000 images
from the web (mostly JPEGs and some PNGs and GIFs) and re-encoded them
to WebP without perceptibly compromising visual quality. This resulted
in an average 39% reduction in file size. We expect that developers will
achieve in practice even better file size reduction with WebP when
starting from an uncompressed image..."

"The core Semantic Web technology is RDF, a W3C standard that reduces
all data to three-part statements known as triples. If your data fits
into the triple data model and is stored in one of the specialized
databases known as triplestores, the advantages of Semantic Web
technology are obvious. This doesn't mean, though, that the technology
has nothing to offer you if your data is in more traditional formats
such as relational databases and spreadsheets. Open source and commercial
tools are available to convert data in these formats to triples, giving
you an easy way to combine data from multiple sources using different
formats. Temporary conversion to triples is a great way to do ad-hoc
data integration if you want to cross-reference between disparate sources
or enhance data from one source with data from another.

The parts of a triple are officially known as the subject, predicate,
and object. If you're from a more traditional database background, you
can think of them as a resource identifier, an attribute name, and an
attribute value. For example, if a relational database or spreadsheet
says that employee 94321 has a hire-date value of 2007-10-14, it would
be easy to express this as a triple.
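For instance, that row could become the following triple; the `example.com` namespace URIs are hypothetical stand-ins for whatever identifier scheme the organization adopts:

```python
# The employee row from above expressed as a single RDF triple.
triple = (
    "http://example.com/employee/94321",      # subject: resource identifier
    "http://example.com/schema#hire-date",    # predicate: attribute name
    "2007-10-14",                             # object: attribute value (literal)
)
subject, predicate, obj = triple
```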

A triple's subject and predicate must be expressed as URIs to make them
completely unambiguous, and standard schemas and best practices for
doing this with popular domains are gaining maturity. The extra context
provided by a unique identifier means that schemas to specify data
structures in a collection of triples—although useful for adding
metadata that can enable inferencing and constraint checking—are
optional. When you consider that the greatest difficulty in combining
multiple sets of relational or XML data is lining up the corresponding
parts of their schemas, you can see that the lack of a need for RDF
schemas makes combining multiple sets of RDF data much simpler, often
as simple as concatenating files.

Different sets of RDF data are much easier to combine than different sets
of data in other common formats. You can easily convert disparate non-RDF
data sets to RDF and then combine them to create new content... To
implement application logic by extracting subsets of data and sorting and
rearranging this data, relational databases have SQL, and XML has XSLT
and XQuery. RDF has the companion W3C SPARQL standard for querying triples,
which is especially handy after you combine a few sets of triples..."
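The ease of merging, and the basic shape of a SPARQL query, can both be sketched in a few lines. Combining two triple sets is a set union, and a triple-pattern match is a filter with wildcards; all URIs here are hypothetical:

```python
# Merging triple sets and matching a SPARQL-style pattern (sketch).
EMP = "http://example.com/employee/94321"
hr_data = {(EMP, "http://example.com/schema#hire-date", "2007-10-14")}
payroll_data = {(EMP, "http://example.com/schema#salary-grade", "7")}

merged = hr_data | payroll_data   # as simple as concatenating files

def query(triples, s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard,
    like a SPARQL variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

about_employee = query(merged, s=EMP)   # everything known about 94321
```

A real triplestore adds indexing, inferencing, and the full SPARQL language on top, but the data model being merged and queried is exactly this simple.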

"REST, short for 'Representational State Transfer,' is an architecture
paradigm for creating scalable services. A RESTful Web Service is one
that conforms to the REST architecture constraints. Microsoft's Windows
Communication Foundation (WCF) is a service-oriented framework that can
be used to expose a RESTful service. A RESTful Web Service exposes
resources as URIs, then uses the HTTP methods to perform CRUD operations.
In this article, I examine the basic principles of REST, explain what
a RESTful service is, and show how a RESTful Service can be exposed
using Windows Communication Foundation.

Although REST is based on the stateless HTTP protocol, resources are
cacheable—you can also set expiration policies for your cached data.
In a typical REST-based model, the client and the server communicate
using requests and responses: the client sends a request to the server
for a resource, and the server in turn sends the response back to the
client.

A request in a REST-based model contains an Endpoint URL, a Developer
ID, Parameters and the Desired Action. The Endpoint URL contains the
complete address of the script. The Developer ID is a key which
identifies each request origin and is sent with each request. You can
pass Parameters to a REST request just as you do with your method calls
in any programming language. The Desired Action in a REST request is
used to denote the action to be performed for the particular request.
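Those four parts map directly onto a URL. This sketch assembles one with the standard library; the endpoint, developer ID, and parameter names are all hypothetical:

```python
# Assembling the four parts of a REST request into a URL (sketch).
from urllib.parse import urlencode

endpoint = "https://api.example.com/photos"   # Endpoint URL (hypothetical)
params = {
    "dev_id": "abc123",    # Developer ID, sent with every request
    "action": "get",       # Desired Action for this request
    "photo_id": "42",      # Parameters, as in an ordinary method call
}
request_url = endpoint + "?" + urlencode(params)
```

An HTTP GET to `request_url` would then return a representation of the requested resource.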

WCF is a Microsoft framework that you can use to implement connected,
service-oriented, transacted services that are reliable and secure.
WCF provides an excellent unification of Microsoft's distributed
technologies (Web Services, Remoting, COM+, and so on) under a single
umbrella. The three main concepts in WCF are address, binding, and
contract: the address denotes the location of the service, the binding
specifies the communication protocol and the security mechanisms that
apply, and the contract defines the operations the service exposes. To
implement a RESTful Service using WCF, you start by using Visual
Studio 2010 to create a WCF service and then make the service RESTful
using the necessary attributes..."

The web site 'Legislation.gov.uk' uses FRBR (Functional Requirements
for Bibliographic Records) as part of a huge, fascinating, online
publishing project that's based on linked data. For some background,
check Pete Johnston's short blog post, the official blog post about
their API, and then read the full explanation from John Sheridan...

'At the moment, the RDF from legislation.gov.uk is limited to largely
bibliographic information. We have made use of the FRBR and the MetaLex
vocabularies, primarily to relate the different types of resource we
are making available. FRBR has the notion of a work, expressions of
that work, manifestations of those expressions, and items. Similarly,
MetaLex has the concepts of a BibliographicWork and
BibliographicExpression. In the context of legislation.gov.uk, the
identifier URIs relate to the work. Different versions of the
legislation (current, original, different points in time, or
prospective) relate to different expressions. The different formats
(HTML, HTML Snippets, XML, and PDF) relate to the different
manifestations. We have also made extensive use of Dublin Core Terms,
for example to reflect that different versions apply to geographic
extents. This is important as, for example, the same section of a
statute may have been amended in one way as it applies in Scotland
and in another way for England and Wales. We think FRBR, MetaLex, and
Dublin Core Terms have all worked well, individually and in combination,
for relating the different types of resource that we are making
available...'"
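The FRBR layering described above can be sketched as a small data structure: one work per identifier URI, several expressions (versions), each with several manifestations (formats). The act URI below is a hypothetical example, not a real piece of legislation:

```python
# Sketch of the FRBR work/expression/manifestation layering used by
# legislation.gov.uk; the act URI is hypothetical.
work = {
    # Identifier URI: relates to the work itself
    "uri": "http://www.legislation.gov.uk/id/ukpga/2010/1",
    "expressions": [
        # Each version of the legislation is an expression of the work
        {"version": "current",  "manifestations": ["HTML", "XML", "PDF"]},
        {"version": "original", "manifestations": ["HTML", "XML", "PDF"]},
    ],
}
current_formats = work["expressions"][0]["manifestations"]
```

The geographic-extent distinction (Scotland versus England and Wales) would add further expressions at the middle layer of this hierarchy.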

From John Sheridan, Head of e-Services and Strategy at The U.K.
National Archives: "...Open standards have played an important role
throughout the development of legislation.gov.uk. All the data is
held in XML, using a native XML database. The application logic is
similarly constructed using open standards, in XSLTs and XQueries.
Data and application portability were key objectives. We made
considerable use of open source software like Orbeon Forms, Squid, and
Apache... The simplest way to get hold of the underlying data on
legislation.gov.uk is to go to a piece of legislation on the Website,
either a whole item, or a part or section, and just append '/data.xml'
or '/data.rdf' to the URL. We have taken a similar approach with lists,
both in browse and search results. When looking at any list of
legislation on legislation.gov.uk, it is easy to view the data. Simply
append '/data.feed' to return that list in Atom.
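The suffix convention is simple enough to capture in one helper; the section URL below is a hypothetical example following the site's documented pattern:

```python
# Building data URLs per the legislation.gov.uk suffix convention.
def data_url(resource_url: str, fmt: str) -> str:
    """fmt is 'xml' or 'rdf' for an item of legislation, 'feed' for a
    list (browse or search results)."""
    return resource_url + "/data." + fmt

# Hypothetical section URL, shown with each documented suffix
section = "http://www.legislation.gov.uk/ukpga/2010/1/section/1"
as_xml = data_url(section, "xml")
as_rdf = data_url(section, "rdf")
```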

The XML conforms to the Crown Legislation Markup Language (CLML)
and associated schema. More general interchange formats for legislation
such as CEN MetaLex lack the expressive power we need for UK legislation,
but could relatively easily be wrapped around the XML we are making
available. We have sought to surface richer metadata about legislation
using RDF, but we would welcome feedback from users of the XML data
about whether a MetaLex wrapper would be useful. We have used the
MetaLex vocabulary in our RDF along with FRBR... Similarly, it should
be relatively easy to add a wrapper for the OAI-PMH protocol on top of
the API we have built. We are not yet clear who would make use of such
a service, if we built one, or whether we should leave the creation of
an OAI-PMH interface to others..."