Members of the IETF NETCONF Data Modeling Language (NETMOD) Working Group
have published an initial -00 level Internet Draft for "Mapping YANG to
Document Schema Definition Languages and Validating NETCONF Content." The
document provides the specification of a mapping that translates YANG
data models to XML schemas utilizing a subset of the Document Schema
Definition Languages (DSDL) schema languages... "Since NETCONF uses XML
for encoding its protocol data units (PDU), it is natural to express
the constraints on NETCONF content using standard XML schema languages.
For this purpose, the NETMOD WG selected the Document Schema Definition
Languages (DSDL) that is being standardized as ISO/IEC 19757. The DSDL
framework comprises a set of XML schema languages that address grammar
rules, semantic constraints and other data modeling aspects but also, and
more importantly, do it in a coordinated and consistent way... The mapping
procedure is divided into two steps: In the first step, the structure of
the data tree, RPC signatures and notifications is expressed as a single
RELAX NG grammar with simple annotations representing additional data model
information (metadata, documentation, semantic constraints, default values
etc.). The second step then generates a coordinated set of DSDL schemas
that can validate specific XML documents such as client requests, server
responses or notifications, perhaps also taking into account additional
context such as active capabilities... The main objective of this work is
to complement YANG as a data modeling language by validation capabilities
of DSDL schema languages, primarily RELAX NG and Schematron. The ultimate
goal is to be able to capture all substantial information contained in YANG
modules and express it in DSDL schemas. While the mapping from YANG to DSDL
described in this document is in principle invertible, the inverse mapping
from DSDL to YANG is not in its scope. XML-encoded data appear in several
different forms in various phases of the NETCONF workflow - configuration
datastore contents, RPC requests and replies, and notifications. Moreover,
RPC methods are characterized by an inherent diversity resulting from
selective availability of capabilities and features. YANG modules can also
define new RPC methods. The mapping should be able to accommodate this
variability and generate schemas that are specifically tailored to a
particular situation and thus considerably more efficient than generic
all-encompassing schemas. In order to cope with this variability, we
assume that the schemas can be generated on demand from the available
collection of YANG modules and their lifetime will be relatively short.
In other words, we don't envision that any collection of DSDL schemas
will be created and maintained over extended periods of time in parallel
to YANG modules. The generated schemas are primarily intended as input
to the existing XML schema validators and other off-the-shelf tools.
However, the schemas may also be perused by developers and users as a
formal representation of constraints on a particular XML-encoded data
object.
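As a rough illustration of the first mapping step, a single YANG leaf (a hypothetical module, not taken from the draft) might be rendered as a RELAX NG pattern carrying a documentation annotation, along these lines:

```
// Hypothetical YANG leaf (illustrative only):
leaf host-name {
  type string;
  description "Host name of the device.";
}

<!-- A plausible RELAX NG rendering of the same leaf: -->
<element name="host-name"
         xmlns="http://relaxng.org/ns/structure/1.0"
         xmlns:a="http://relaxng.org/ns/compatibility/annotations/1.0">
  <a:documentation>Host name of the device.</a:documentation>
  <data type="string"/>
</element>
```

The grammar captures the structure and type, while annotations such as a:documentation carry the metadata that the second step can route into Schematron or other DSDL schemas.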

Members of the OASIS Content Management Interoperability Services (CMIS)
Technical Committee are working on designs to support unified search in
the CMIS specification. From the initial draft: "This document is a
proposal for a modification to the draft CMIS specification. The new
service described in this proposal will allow search crawlers to navigate
a CMIS repository." From the 'Introduction' section: "CMIS has introduced
a capability that allows repositories to expose what information inside
the repository has changed in an efficient manner for applications of
interest, like search crawlers, to leverage to facilitate incremental
indexing of a repository. In theory, a search crawler could index the
content of a CMIS repository by using the navigation mechanisms already
defined as part of the proposed specification. For example, a crawler
engine could start at the root collection and, using the REST bindings,
progressively navigate through the folders, get the document content
and metadata, and index that content. It could use the CMIS date/time
stamps to more efficiently do this by querying for documents modified
since the last crawl. But there are problems with this approach. First,
there is no mechanism for knowing what has been deleted from the
repository, so the indexed content would contain 'dead' references.
Second, there is no standard way to get the access control information
needed to filter the search results so the search consumer only sees
the content (s)he is supposed to see. Third, each indexer would solve
the crawling of the repository in a different way (for example, one
could use query and one could use navigation) causing different
performance and scalability characteristics that would be hard to control
in such a system. Finally, the cost of indexing an entire repository can
be prohibitive for large content, or content that changes often, requiring
support for incremental crawling and paging results..." See also "CMIS
Unified Search Design Discussion" (11-February-2009): [We] "need to
ensure that this service, which is really an observation pattern, can
be extensible in the future to satisfy other use cases than search. For
example, if this were to be applied to audit as well as search, then it
would need to also answer who changed the item, why it was changed (e.g.,
comments), what was changed (specific properties that were changed).
Another use case to which this could be applied is replication..."
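The first problem listed above - that a pure modified-since crawl can never discover deletions - is the core argument for a change log. The sketch below is a minimal, hypothetical model (the entry shape and function name are my own, not the CMIS wire format): replaying a log that records deletions keeps the index consistent, where a date-stamp query would leave 'dead' references behind.

```python
# Minimal sketch of incremental indexing from a change log.
# The change-entry shape ("created"/"updated"/"deleted") is a
# hypothetical stand-in, not the actual CMIS feed format.

def apply_changes(index, changes):
    """Replay change-log entries against an in-memory index
    (object id -> content). Deletions remove stale entries, which
    a pure modified-since crawl could never discover."""
    for change in changes:
        if change["type"] == "deleted":
            index.pop(change["id"], None)   # drop the 'dead' reference
        else:  # "created" or "updated"
            index[change["id"]] = change["content"]
    return index

# Example: doc1 is updated, doc2 is deleted, doc3 is new.
index = {"doc1": "old text", "doc2": "unchanged"}
changes = [
    {"type": "updated", "id": "doc1", "content": "new text"},
    {"type": "deleted", "id": "doc2", "content": None},
    {"type": "created", "id": "doc3", "content": "brand new"},
]
apply_changes(index, changes)
```

The same replay loop extends naturally to the audit and replication use cases mentioned in the design discussion, if each entry also carries who/why/what-changed fields.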

ISO standards can, at committee request, be marked stable, which means
they are not expected to change. That this is not the default may be
surprising to some people, especially if you have some kind of
tablets-of-stone view of standards. In the particular cases of OOXML and
ODF, I think there is a strong need to find maintenance models that are
workable, don't disenfranchise their champions, give all stakeholders an
equal voice, and prevent death marches. I suspect ODF and OOXML are both
very prone to the death march (i.e., where a project just drags on,
swamped by feature creep and committee paralysis). I suggest there are two
important practical issues, apart from the organizational behaviour of
committees. (1) Schedule releases, and (2) Variants. As to scheduled
releases: I think the ODF and OOXML standards should
move to a strictly timed release cycle. So ODF 2009, ODF 2010, ODF 2011,
OOXML 2009, OOXML 2010, and so on. ISO standards are already year stamped.
At a particular deadline point in the year, all the maintenance items
agreed to in the committee would get collected and rolled into a new
version. As to variants: schema languages can support variants and
evolution with first-class constructs. Schematron's phase mechanism led
the way here, but I see no reason why grammars might not usefully have
some other kind of mechanism. For example, here is a possibility for
RELAX NG: this could be implemented by a macro pre-processor of the schema,
or combined with the language... We allow patterns (on the condition that
they are only 'true()' or 'false()') to be specified on the command line,
overriding pattern values in the schema. This gives us, in effect, a
boolean conditional language. Dave Peterson was advocating that SGML
needed something like this in the mid-1990s: I suppose that the variant
issues that SGML DTDs faced, 8 years or so after the release of SGML,
are similar to the problems we face now with schemas, 8 years or so after
the release of XML and XSD. This gives the ability to capture various
different schemas in a single language, which makes the history of the
changes more explicit.
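A macro pre-processor of the kind proposed above could be quite small. The sketch below is my own illustration (the function name and override format are assumptions): it rewrites named RELAX NG defines to <empty/> or <notAllowed/>, which is the grammar-side analogue of forcing a pattern to 'true()' or 'false()' from the command line.

```python
import xml.etree.ElementTree as ET

RNG = "http://relaxng.org/ns/structure/1.0"
ET.register_namespace("", RNG)  # serialize RELAX NG as the default namespace

def apply_variant(schema_text, overrides):
    """Rewrite each named <define> listed in `overrides` to <empty/>
    (pattern forced 'true': content permitted) or <notAllowed/>
    (pattern forced 'false': content forbidden), leaving the rest
    of the grammar untouched."""
    root = ET.fromstring(schema_text)
    for define in root.iter("{%s}define" % RNG):
        if define.get("name") in overrides:
            for child in list(define):
                define.remove(child)
            tag = "empty" if overrides[define.get("name")] else "notAllowed"
            define.append(ET.Element("{%s}%s" % (RNG, tag)))
    return ET.tostring(root, encoding="unicode")
```

Running the pre-processor once per variant yields a family of concrete schemas from a single annotated grammar, keeping the history of the variants explicit in one place.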

This week, Sun Microsystems enters the open source middleware space with
GlassFish Portfolio, a pre-integrated and easily configurable platform
that offers an ESB, management and monitoring capability, a web portal
for community development, and LAMP stack. Sun's GlassFish Portfolio
release looks to leverage its successful open source GlassFish Application
Server, which now boasts some 900,000 downloads. The GlassFish Portfolio
includes the GlassFish Application Server, along with the following new
components: (1) Sun GlassFish ESB: A lightweight, open source ESB platform
for department-scale and enterprise SOA deployments that connects existing
and new applications to deliver content and services to the Web. The
technology is based on Sun's Open ESB and the Java Composite Application
Platform Suite (Java CAPS). (2) Sun Enterprise Manager: For enterprise
scale management and monitoring of the GlassFish Portfolio including SNMP
(Simple Network Management Protocol) support. Support for the GlassFish
Enterprise Server is also available to meet high-availability, high-scale,
mission-critical requirements. (3) Sun GlassFish Web Stack: A complete and fully
integrated LAMP stack designed for developers wanting a light-weight Web
solution. The GlassFish Web Stack includes Tomcat, Memcached, Squid and
Lighttpd with support for PHP, Ruby and the Java platform. (4) Sun GlassFish
Web Space Server: Based on Liferay Portal, the leading open source portal
technology, helps companies simplify Web site development and build
collaborative work spaces, including portals and social networking sites.
Mark Herring (Sun VP of Software Infrastructure Marketing): "In conventional
LAMP stack implementations, for instance, sometimes a patch can come out
for one of the components and it can destroy a lot of hard work. The
GlassFish Portfolio is battle-tested for enterprise deployment, with
special attention to the precise pre-integration work needed to ensure
performance and reliability." From the announcement: "Companies
developing Web applications with the Sun GlassFish Portfolio can expect
to deploy quickly and see a seven-fold improvement in application
price/performance - at only 10 percent of the cost of proprietary offerings.
Built on leading open source projects including: Apache Tomcat, Ruby,
PHP, Liferay Portal and GlassFish, the Sun GlassFish Portfolio packages
these components into a complete, pre-integrated and fully-tested open
source platform, resulting in increased productivity and faster time to
market. Because the Sun GlassFish Portfolio is based on the industry's
highest performing application server, GlassFish Enterprise Server, it
is suited for extremely high-scale mission-critical environments, as
well as departmental applications... GlassFish Portfolio and MySQL
Enterprise are both available from Sun with consistent pricing and
subscription support models so customers have a single vendor to stand
behind their open source deployments."

XBRL changes everything in terms of business intelligence, and Altova
is making XBRL generation and reporting much simpler... It's not like
XBRL is a simple XML schema: that would unfortunately not be able to
capture the complexity of standard financial reports, much less the
individualized extensions that most companies make to the standards to
reflect their own charts of accounts. If you look in the overview
section of the [example] screen shot you see that there are about ten
different files feeding into one XBRL document. Most of them are
US-GAAP and XBRL standards that companies never touch; only a few are
company-specific extensions. However, the overall complexity is daunting,
given that an "item" can be a whole reporting hypercube, as you can see
near the lower left. Given all that complexity, how could a company ever
comply with the XBRL reporting mandate? According to Alexander Falk
(Altova, CEO) some of his competitors provide tools for adding tags to
PDF files to generate XBRL from existing reports. Altova takes the
opposite approach: it provides tools for generating XBRL from the
accounting database, and also for generating formatted reports in PDF
and several other formats from the XBRL... Is this important beyond
compliance with SEC requirements? W. David Stephenson, who is writing
a book about data transparency, says that access to real-time data
through a format such as XBRL "changes everything" in terms of business
intelligence. Stephenson points to the Netherlands, which has been a
pioneer in the use of XBRL, where companies have the option of filing
one XBRL report instead of separate written reports to 30-40 different
agencies. The Dutch government is projecting enormous savings from this
report consolidation; there's also a tremendous additional advantage that
the required data elements have been consolidated from 200,000 to 8,000...
[From the Altova announcement:] "Comprehensive support for working with
Extensible Business Reporting Language (XBRL) data is now available in
the Altova MissionKit Version 2009 (v2009), its integrated suite of XML,
database, and UML tools. A host of powerful, new features allow users
to view, edit, validate, map, and publish XBRL data. With intelligent
wizards, graphical drag-and-drop design models, and various code generation
capabilities, the MissionKit Version 2009 gives developers, technical
professionals, and power users one easy-to-use suite of tools for
transforming XBRL data into content that can be shared with business
partners, stakeholders, and regulatory commissions... Altova's XML editor
for modeling, editing, transforming, and debugging XML technologies now
delivers new support for XBRL validation and taxonomy editing. A new
engine in XMLSpy supports the validation of documents created based on
XBRL 2.1 and XBRL Dimensions 1.0. This allows users to view and analyze
XBRL taxonomies as well as validate XBRL instance documents against
taxonomies. A powerful, graphical XBRL taxonomy editor has also been
added in XMLSpy 2009. The XBRL taxonomy editor uses the same editing
paradigm as the popular XMLSpy graphical XML schema editor, providing a
visual representation of XBRL taxonomies. Altova's graphical data mapping,
conversion, and integration tool now supports drag-and-drop mapping of
XBRL taxonomies as a source or target in any data mapping project..."
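For readers who haven't seen one, an XBRL instance at its simplest ties each reported fact to a context and a unit. The fragment below is a hypothetical, minimal example (the 'ex' taxonomy namespace and concept name are invented); real filings layer the US-GAAP taxonomies, company extensions, and dimensional hypercubes described above on top of this skeleton:

```xml
<!-- Hypothetical minimal XBRL instance (illustrative only; the
     concept name and values are made up, not from any real filing). -->
<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
            xmlns:ex="http://example.com/taxonomy"
            xmlns:iso4217="http://www.xbrl.org/2003/iso4217">
  <xbrli:context id="FY2008">
    <xbrli:entity>
      <xbrli:identifier scheme="http://example.com/tickers">EXMP</xbrli:identifier>
    </xbrli:entity>
    <xbrli:period>
      <xbrli:startDate>2008-01-01</xbrli:startDate>
      <xbrli:endDate>2008-12-31</xbrli:endDate>
    </xbrli:period>
  </xbrli:context>
  <xbrli:unit id="usd">
    <xbrli:measure>iso4217:USD</xbrli:measure>
  </xbrli:unit>
  <ex:Revenue contextRef="FY2008" unitRef="usd" decimals="0">1000000</ex:Revenue>
</xbrli:xbrl>
```

A validator checks each fact element against the taxonomy that defines it, which is why the tooling must resolve the whole web of schema files shown in the screen shot before a single instance can be validated.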

A few weeks ago, I decided to build a conformant AtomPub server
implementation on MarkLogic Server. Mostly for fun, but partly with an
eye towards using it for some future reimplementation of this weblog.
In any event, it's up and running on my test server... The executive
summary: dead easy to implement in MarkLogic Server. I built a flexible,
conformant AtomPub server in less than a thousand lines of XQuery. The
only tricky part, really, was getting the security right. But when
isn't it tricky to get security right? It's very convenient in a lot of
applications to rely on 'application level' security. You give all your
XQuery code full privileges to the whole system and rely on your coding
skills to manage access. This is very flexible and convenient, but it
doesn't work for AtomPub. AtomPub clients expect to use HTTP
authentication to gain access to the server, so that's what you have
to provide. Unlike a human user on a web browser, for whom you might
implement a floating, 'web 2.0'-style login box or its accessible
equivalent, a machine operating over a wire protocol requires you to
reply with, and respond to, the proper HTTP authentication challenges.
Generally speaking, what this means is that you have to provide two URIs
for each resource on the server: one URI provides read-only, public
access, the other provides authenticated read-write access. If you're
developing on an Apache server (and I assume the same is true for a
lot of other servers), it's often convenient to do this by hacking the
path component and using '.htaccess' files... My implementation passes
Joe Gregorio's APP Test Client and Tim Bray's Atom Protocol Exerciser
so I think it's ready for real world use. Feel free to give it a try
on Microwave (Experimental AtomPub Server)...
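For concreteness, the Apache arrangement described above might look like the fragment below (the paths and realm name are hypothetical, not taken from my implementation): the read-only URI is served publicly as-is, while the read-write path carries an '.htaccess' that demands credentials.

```apache
# Hypothetical .htaccess for the read-write path (e.g. /atom/edit/);
# read-only copies of the same resources live under a public path.
AuthType Basic
AuthName "AtomPub editing"
AuthUserFile /var/www/.htpasswd
Require valid-user
```

Whether Basic or Digest authentication is appropriate depends on the clients you expect and whether TLS is in play; either way, the server answers an unauthenticated write with a 401 challenge, which is exactly what AtomPub clients are built to handle.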

Members of the IETF Transport Area Working Group (TSVWG) released a
new version of the specification "Resource Reservation Protocol (RSVP)
Extensions for Emergency Services." Summary: "An Emergency
Telecommunications Service (ETS) requires the ability to provide an
elevated probability of session establishment to an authorized user in
times of network congestion (typically, during a crisis). When supported
over the Internet Protocol suite, this may be facilitated through a
network layer admission control solution, which supports prioritized
access to resources (e.g., bandwidth). These resources may be explicitly
set aside for emergency services, or they may be shared with other
sessions. This document specifies extensions to the Resource reSerVation
Protocol (RSVP) that can be used to support such an admission priority
capability at the network layer. Note that these extensions represent
one possible solution component in satisfying ETS requirements. Other
solution components, or other solutions, are outside the scope of this
document. The mechanisms defined in this document are applicable to
controlled environments formed by either a single administrative domain
or a set of administrative domains that closely coordinate their network
policy and network design. The mechanisms defined in this document can
be used for a session whose path spans over such a controlled environment
in order to elevate the session establishment probability through the
controlled environment, thereby elevating the end-to-end session
establishment probability."

Web services have opened opportunities to integrate the applications
at an enterprise level irrespective of the technology they have been
implemented in. IBM's CICS transaction server for z/OS v3.1 can support
web services. It can help expose existing applications as web services
or develop new functionality to invoke web services. One of the commonly
used protocols for CICS web services is SOAP for CICS. It enables
applications to communicate through XML, and lets CICS act as a service
provider or a service consumer, independent of platform and language.
SOAP for CICS enables CICS applications to be integrated with the
enterprise via web services as part of lowering the cost of integration
and retaining the value of the legacy application. SOAP for CICS also
comes with encoder and decoder implementations. This article
describes two cases where CICS acts as a service provider and also as a
consumer for complex datatype objects. SOAP 1.2 is the SOAP
implementation, and encoding and decoding are done by the PIPELINE programs.
Two exclusive PIPELINEs need to be defined, one for the provider and
the other for the consumer. IBM provides CICS Web Services Assistants,
namely, DFHLS2WS and DFHWS2LS. The DFHLS2WS utility takes a language
data structure used by the service provider and generates the WSDL and
WSBIND files. The WSBIND file is used at runtime to convert a SOAP body
to a language data structure and vice versa. The DFHWS2LS takes the
WSDL provided by a service and generates the language data structure
and a WSBIND file. Complex data types are handled by these utilities.
The languages supported by these utilities include COBOL, Java, C++,
and PL1. The web services are registered in the CICS region using the
PIPELINE SCAN command... With the increasing demand for integrating
enterprise applications whose complex data type structures are well
suited to manipulating large and complex data, developers have looked at
various options, one of which is web services. In this article, we have
developed a CICS-based program to act as a web service provider
and consumer using complex data types.
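To make the DFHLS2WS direction concrete, the utility's input is an ordinary language data structure, such as the hypothetical COBOL group below (illustrative only, not from the article). DFHLS2WS would derive the WSDL from it, with the OCCURS array becoming a repeating element in the schema, and emit the WSBIND file used at runtime for the SOAP-body conversion.

```cobol
      * Hypothetical request structure fed to DFHLS2WS (illustrative).
       01  CUSTOMER-REQUEST.
           05  CUST-ID            PIC 9(8).
           05  CUST-NAME          PIC X(40).
           05  ORDER-COUNT        PIC 9(4).
           05  ORDER-ENTRY OCCURS 10 TIMES.
               10  ORDER-ID       PIC 9(10).
               10  ORDER-AMOUNT   PIC 9(7)V99.
```

The inverse utility, DFHWS2LS, starts from a provider's WSDL and generates a structure of the same shape for the consumer side.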

The Collaborative Software Initiative (CSI) today posted an open letter
to U.S. President Barack Obama on open source software. The letter urges
him to mandate that the U.S. government consider open source software
for federal IT initiatives. The letter was signed by top executives of
companies with a vested interest in open source, including Alfresco,
Ingres, Jaspersoft, OpenLogic and Unisys Open Source Business. It was
subsequently signed by several dozen others. "We urge you to make it
mandatory to consider the source of an application solution (open or
closed) as part of the government's technology acquisition process,
just as considering accessibility by the handicapped is required today
(as defined by section 508)," the letter said. CSI helps companies and
public organizations build solutions based on open source software and
methodologies. For example, the CSI-supported TriSano effort is an open
source system designed to support infectious disease surveillance and
outbreak management. The letter was the brainchild of David Christiansen,
a CSI senior developer, who decided to write the letter upon reading
that creating electronic medical records was a priority for the president.
In an interview on Tuesday, Christiansen emphasized that the letter is
not intended to suggest that open source software be required. Instead,
CSI's view is that open source should be considered in RFPs and federally
funded programs.