W3C announced that the XML Core Working Group has published Extensible
Markup Language (XML) 1.0 Fifth Edition as a W3C Recommendation. This
fifth edition of the widely deployed XML standard incorporates
corrections for errata reported against previous editions. In particular, one
correction relaxes the restrictions on element and attribute names,
thereby providing in XML 1.0 the major end user benefit currently
achievable only by using XML 1.1. As a consequence, many possible
documents that were not well-formed according to previous editions of
this specification are now well-formed, and previously invalid documents
using the newly-allowed name characters in, for example, ID attributes,
are now valid. XML has been designed for ease of implementation and
for interoperability with both SGML and HTML. The Extensible Markup
Language is a simple, flexible text format derived from SGML (ISO 8879).
The W3C created, developed and continues to maintain the XML
specification. XML documents are made up of storage units called
entities, which contain either parsed or unparsed data. Parsed data
is made up of characters, some of which form character data, and some
of which form markup. Markup encodes a description of the document's
storage layout and logical structure. XML provides a mechanism to
impose constraints on the storage layout and logical structure. The
W3C is also the primary center for developing other cross-industry
specifications that are based on XML. Some of these are done within
the XML Activity, such as XML Query and XML Schema, and some are
being done in other W3C Activities, such as Web Services, SVG and
XHTML. The XML Activity tries to keep a balance between maintaining
stability and backwards compatibility, making improvements that help
to encourage interoperability, and bringing new communities into
the world of XML.
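The storage model described above (entities containing markup and character data) can be illustrated with any conforming parser. As a minimal sketch using Python's standard-library xml.etree, with a made-up catalog document:

```python
import xml.etree.ElementTree as ET

# A tiny parsed entity: the tags and attributes are markup; the text
# between them is character data. The document content is hypothetical.
doc = """<?xml version="1.0" encoding="UTF-8"?>
<catalog>
  <book id="b1"><title>Extensible Markup Language</title></book>
</catalog>"""

# fromstring() raises ParseError if the document is not well-formed.
root = ET.fromstring(doc)
book_id = root.find("book").get("id")      # attribute (markup)
title = root.find("book/title").text       # character data
```

Note that whether a given parser accepts the name characters newly allowed by the Fifth Edition depends on the parser version, since older parsers implement the earlier, stricter name rules.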

NetBeans 6.5 was formally launched last week at Sun Microsystems'
Tech Days event in Beijing, China. Overview: "In addition to full
support of all Java platforms (Java SE, Java EE, Java ME) and JavaFX,
the NetBeans IDE 6.5 is the ideal tool for software development with
PHP, Ajax and JavaScript, Groovy and Grails, Ruby and Ruby on Rails,
and C/C++. The 6.5 release provides enhanced support for web frameworks
(Hibernate, Spring, JSF, JPA), the GlassFish application server, and
databases. Additionally it includes a new IDE-wide QuickSearch shortcut,
a more user-friendly interface, and automatic Compile on Save." The
NetBeans community released an early access version for Python runtimes
with V6.5. Sun, the primary corporate supporter of the NetBeans project,
chose Beijing for the launch because 6.5 came out in fully localized
versions of Chinese, Japanese and Brazilian Portuguese. "We chose
Beijing to launch this version," said David Folk, group marketing
manager for developer tools product marketing, "because we were able
to do this simultaneous release. It's a single release not delayed by
the localized versions." The 6.5 version also comes with enhanced support
for several Web frameworks, including Hibernate, Spring, JSF, JSF CRUD
generator, and the Java Persistence API. There's also a new editor for
JavaScript development, which supports CSS/HTML code completion and the
ability to debug client-side JavaScript code within both Firefox and
Internet Explorer browsers. Look, too, for a new ability to debug
multithreaded Java technologies. This version comes with the latest
generation of the open-source GlassFish application server. But it's
the IDE's support for the leading dynamic scripters that is turning
some heads. Sun offered users a preview of its PHP support in version
6.1 earlier this year; the 6.5 release formalizes the tool's support for
the language. The list of early access Python tools includes an editor,
debugger and choice of Python runtimes... NetBeans is gaining some
ground among non-Java developers. It's now one of the top two Ruby
IDEs on the market, according to Gartner analyst Mark Driver. With
the arrival of Eclipse a few years ago, many industry watchers expected
NetBeans to fade away, as did other Java IDEs. But the toolset continues
to stand as perhaps the Eclipse alternative...

An announcement from the Object Management Group reports on the approval
of several OMG specifications. The OMG Board of Directors voted to
approve "Reference Metamodel for the EXPRESS Information Modeling
Language" as a revised specification. The RFC was supported by NIST,
Fraunhofer Institut für Produktions- und Konstruktionstechnik (IPK),
Fachhochschule Vorarlberg, AIDIMA, Electronic Commerce Promotion Council
of Japan, John Deere, LKSoftWare GmbH, NASA Goddard Space Flight Center,
New University of Lisbon (UNINOVA), and PDTEC. From the RFC Introduction:
"The information modeling language EXPRESS was standardized in 1994 as
Part 11 of the ISO 10303 Standards for the Exchange of Product Data. It
was revised in 1999 and in 2004. It was used for every information model
in the STEP series, and in 3 other standards series in ISO TC184
(Industrial Data), and for information models in standards developed by
other ISO Technical Committees. As of 2005, there were over 300 major
information models for manufacturing and construction information that
are formally specified in EXPRESS and standardized by ISO. These models,
and the EXPRESS language are in wide use in the manufacturing industry,
and the exchange models are supported by dozens of software tools. In
the more recent past, in order to make these models useful to an industry
in which programmers and modelers are not commonly taught EXPRESS,
further ISO projects have been undertaken to produce mappings from
EXPRESS to XML Schema (ISO 10303-28) and UML (ISO 10303-25). But each
of these mappings was specified entirely in text and targeted version 1
of XML Schema and UML, respectively... Eurostep developed tooling to map
a subset of the metamodel to OWL. This was a first step toward the goals
of the third MEXICO project component. Further work in this area is
continuing with Eurostep and other partners. At the same time, a number
of other tool vendors who support the EXPRESS modeling community have
developed independent internal models of EXPRESS and mappings to various
languages, including UML, OWL, and XML Schema. Many of them are listed
as "supporters" of this specification. We all agree that the time has
come to standardize an XMI representation of EXPRESS, so as to permit
these tools to interoperate around a common representation. This
specification is the metamodel of the semantics of the EXPRESS language
that was developed and tested in the MEXICO project. It represents
completion of the first subproject in the MEXICO trilogy. And it has
value in its own right to other EXPRESS tool developers. For this reason,
we are bringing it to OMG for standardization. Participants in the
metamodel development activity include four "technical experts" who
participated in the development of the EXPRESS language itself. It also
includes technical experts who were principal developers of the Part 25
(mapping to UML) and Part 28 (mapping to XML Schema) standards. This
expertise gives us confidence that the metamodel is faithful to the
semantic intent of the EXPRESS standard..."

"A few weeks ago I had the pleasure of presenting an interactive session
on the subject of equipping your Identity Provider with an STS. Many
businesses are natural identity providers. Countries, banks, airlines,
clubs, credit reporting agencies, social networks... those are all
examples of entities involved in a 1:many relationship with subjects
and knowing a great deal of interesting facts about them. Once you
realize that yes, you want to be an IP, you've got to make that happen.
That basically means that you need the capability of minting portable
identities for your users whenever you are asked to and you deem
appropriate: and yes, in the current backbone architecture of the
metasystem that means that you need an STS. A security token service,
or STS if you are in a hurry, is the tool that the IP uses to fulfill
its role: the security tokens are in a sense the reification (sort of)
of identities, hence being able to process requests for issuing tokens
does the trick... the STS plays an absolutely pivotal role for the IP:
no STS, no party; it is a key asset to secure; high availability is of
the essence... Let's say that you are now aware of the importance of
getting the STS right: where should you start? I suggest the rough
steps you may follow are: (1) Derive
requirements from what you have; (2) Pick an off-the-shelf product
that satisfies your requirements; (3) If there is no perfect fit,
consider how to leverage the product's extensibility points; (4) If
extensibility can't solve your problem, consider writing your own STS...
At the end of the day, an STS can be just a web service that is able
to issue security tokens. Or is it? Perhaps your scenario is a
federation in which only passive clients are allowed: in that case,
the STS is actually a web page rather than a WS-* service. And what
does "able to issue security tokens" mean? Using which protocols?
Authenticating against which kind of credentials? And which token
format should be produced, by the way? Factors driving STS implementation
decisions: (a) Attributes Stores: you want to be an IP, you've got to
have some identities in your stash. (b) Authentication Factors: the
authentication factor of choice will influence the protocols that your
STS can use, and impose further requirements in the context of the
protocol of choice, i.e., a specific token type. (c) Authentication
Stores: authentication factors and authentication stores represent
quite different requirements. (d) Requestors: how do we envision our
users accessing the STS? (e) Intended RPs: the relying party
applications that we foresee will require our tokens can influence, again,
the protocol hosting and the supported protocol through which we'll
expose our STS. If one of the apps we want to serve is a web service,
we better be prepared to expose a WS-Trust STS which issues holder-of-key
token types; if another is a web app which supports SAML-P, let's get
ready to support browser redirects and to process SAML-P compliant
requests. Another way in which the list of intended RPs can influence
the behavior of the STS, though not the wire, is the fact that such
list can (and should) be consulted for making decisions about if a
token should or should not be released for a specific RP... (f) Other
Authorities: it is pretty common to expect token issuance requests
secured with tokens obtained by other STSes... You may have your own
reason [for writing your own], and I don't want to discourage you from
writing your own STS: I just want to make sure that you are aware of
the implications of doing so..."
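The core responsibilities sketched above can be illustrated with a toy example. This is deliberately not WS-Trust or SAML: the token format (base64-encoded JSON claims with an HMAC signature), key, names, and URLs below are all hypothetical stand-ins, chosen only to show the issuance flow and the intended-RP check from point (e):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the identity provider's STS.
SECRET_KEY = b"demo-only-secret"

def issue_token(subject, relying_party, intended_rps):
    """Toy STS: mint a signed token for `subject`, but only if the
    requesting `relying_party` is on the intended-RP list."""
    if relying_party not in intended_rps:
        return None  # policy decision: don't release a token for this RP
    claims = {
        "sub": subject,
        "aud": relying_party,
        "exp": int(time.time()) + 300,  # short-lived token
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token):
    """Relying-party side: check the signature before trusting claims."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```

A real STS would, of course, authenticate the requestor, negotiate token type and protocol with the RP, and use asymmetric keys rather than a shared secret; the sketch only shows why "no STS, no party" holds: everything the IP asserts flows through this issuance step.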

The U.S. government has established a common interoperable identity
card for use throughout civilian agencies, and the National Institute
of Standards and Technology is providing guidelines for integrating
the cards into physical access control systems. The Personal Identity
Verification Card was mandated by Homeland Security Presidential
Directive 12 (HSPD-12) as a smart credential that would be
interoperable not only across agency boundaries, but also across
physical and logical access control systems. NIST Special Publication
800-116, titled 'A Recommendation for the Use of PIV Credentials in
Physical Access Control Systems,' provides guidelines for best practices
in integrating the cards into systems used to control access to
facilities... PIV cards are smart cards that contain identifiers for
each card holder in multiple formats, including printing, photographs,
bar code and magnetic stripe, as well as digitally on a chip that also
includes fingerprints, digital certificates and encryption keys. The
technical standards for the cards are spelled out in the Federal
Information Processing Standard publication 201... In the NIST model,
risk-based access requirements would range from unrestricted
access, through controlled and limited access, to an exclusion area,
with each level requiring additional authentication factors. A
controlled area would require a single factor; a limited access area
two factors, which might include a biometric; and an exclusion area
would require at least three factors, including a PKI and card
authentication keys. NIST recommends a phased implementation of PIV
into physical access systems. Migration paths could include use of
multi-technology readers that can work with PIV Cards as well as other
credentials, retrofitting existing systems for use of PIV Cards, and
coexistence of PIV-enabled and existing systems in multi-tenant
facilities...
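The tiered model described above amounts to a lookup from area level to a minimum factor count. A minimal sketch, with the counts taken from the description above (the function and dictionary names are invented for illustration):

```python
# Sketch of the SP 800-116 tiered model: each area level requires
# at least this many verified authentication factors.
REQUIRED_FACTORS = {
    "unrestricted": 0,
    "controlled": 1,   # single factor
    "limited": 2,      # two factors, possibly including a biometric
    "exclusion": 3,    # at least three, including PKI and card auth keys
}

def access_granted(area, factors_verified):
    """Grant access when enough factors were verified for the area."""
    return factors_verified >= REQUIRED_FACTORS[area]
```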

Members of the IETF Network Configuration (NETCONF) Working Group have
published a version -01 Internet Draft for "Conversion of MIB to XSD
for NETCONF." The NETCONF protocol provides mechanisms to install,
manipulate, and delete the configuration of network devices. It also
can perform some monitoring functions. It uses an Extensible Markup
Language (XML) based data encoding for the configuration data as well
as the protocol messages. NETCONF can be conceptually partitioned into
four layers; the last three layers of NETCONF have been already
standardized in RFC4741, RFC4742, RFC4743 and RFC4744. However, there
isn't a standard data modeling language or a standard data model for
the NETCONF content layer. If the content layer is not standardized,
every vendor can define its own data model, which will cause trouble
and confusion in understanding the syntax and semantics of the data
models exchanged. NETCONF would then not be applied as widely as SNMP,
and the NETCONF defined in RFC4741 would lose much of its value. Thus,
NETCONF needs a data model for its process of standardization. This
document defines a standard expression of
SMI MIBs in XSD for NETCONF to ensure uniformity, general
interoperability and reusability of existing MIBs. In addition, we
define an XML schema to provide restriction and validation of the
translated XSD files... NETCONF uses XML-based data encoding for the configuration
data as well as the protocol messages. Given such background, we
should provide a standard translation to make using the MIB's managed
objects with XSD easier... The work to standardize the content layer of
NETCONF is represented by two efforts. (1) Create a new data modeling
language and then a new data model for NETCONF. YANG is a new data
modeling language which defines a new SMI for NETCONF containing
datatypes, node statements, syntax specification, and so on. NCX
is somewhat like YANG: it not only defines a new SMI for NETCONF but
also adds some capabilities to the NETCONF protocol. All these new
languages are under discussion, which means this will be a longer
term effort to create a solid SMI and then remodel some of the key
data to be carried. (2) Conversion from MIB to XSD. This is being done
by the XSDMI group, whose effort is designed to produce an XSD
specification by translating from MIBs. NETCONF configuration is an
improvement over CLI, not over SNMP, which has been widely used for
performance monitoring and fault management. However, some MIB-based
monitoring data have become
part of the operational framework of many networks. And many of the
data names and meanings have been widely accepted by vendors for years.
In the long run, establishing a new data modeling language and a new
data model is much better than a simple conversion of MIB to XSD...
Based on the XSDMI's and the earlier smidump's work, this document defines a
standard expression of SMI MIBs in XSD for NETCONF to ensure uniformity
and general interoperability and reusability of existing MIBs...
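As context for the XML-encoded protocol messages the draft refers to, a NETCONF &lt;get-config&gt; request for the running datastore (an operation defined in RFC4741) can be assembled with ordinary XML tooling. A minimal sketch using Python's standard-library xml.etree; the message-id value is an arbitrary client choice:

```python
import xml.etree.ElementTree as ET

# NETCONF base namespace from RFC 4741.
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

# Build a <get-config> RPC asking for the running configuration datastore.
rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
source = ET.SubElement(get_config, f"{{{NC}}}source")
ET.SubElement(source, f"{{{NC}}}running")

message = ET.tostring(rpc, encoding="unicode")
```

The content layer the draft is concerned with would sit inside such messages, which is why a standard data model (whether YANG, NCX, or XSD translated from MIBs) matters: without one, the payload carried by these otherwise-standard RPCs differs from vendor to vendor.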