For example, should all these custom XML
types being registered be required to use
the RFC 3023 +xml convention? If so, should
all the SHOULDs of section 7.1 be followed?
etc. The question isn't restricted to RFC
3023 issues though. There may be value to
other common features between types.

Acknowledgment cycle

There are several architectural issues in
UN/CEFACT and ebXML which should probably be
solved by the W3C group. The needs are not
specific to ebXML and several other
"Registry" and XML vocabulary
groups may have similar requirements.

Acknowledgment cycle

"It seems to me that the RDFCore and
XMLSchema WGs (at the very least) ought to
develop a common, reasonably acceptable
convention as to the mapping between QNames
and URIs. Perhaps this is an issue that the
TAG ought to consider (because it is a
really basic architectural issue)."

whenToUseGet-7:
(1) GET should be encouraged, not deprecated, in XForms
(2) How to handle safe queries (New POST-like method?
GET plus a body?)

See comments from Paul Prescod to the
Forms WG:
"I know you've recently been asked
about PUT. During that discussion it arose
that HTTP GET is deprecated in the
specification. Does this mean that XForms
would be incompatible with an application
like Google that uses a form to generate a
GET URL?"
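The pattern in Prescod's question, a form whose fields become the query component of a GET URL, can be sketched as follows. The endpoint URL and field names here are illustrative only, not drawn from any specification:

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def form_to_get_url(action, fields):
    """Encode form fields into the query component of a GET URL,
    the safe-query pattern a search form like Google's relies on."""
    return action + "?" + urlencode(fields)

# Hypothetical example; the action URL is illustrative.
url = form_to_get_url("http://www.google.com/search",
                      {"q": "TAG whenToUseGet-7"})
```

Because the result is an ordinary URI, the query can be bookmarked, linked, and cached, which is the property a POST-based form gives up.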

Acknowledgment cycle

This was raised in the light of lack of
consensus result from the workshop, and
specifically prompted by a question,
occurring as XEncryption made its way to
Candidate Recommendation status in W3C,
about the relationship of XEncryption to
other specs, and TAG discussion of XSLT
"templates" as an apparent corner
case in XML processing.

Second issue: namespace-based dispatching.
From TAG draft finding on issues *-{1,2,3},
the following draft text was removed for
discussion as part of this issue:

When processing XML documents, it is
appropriate for Web applications to dispatch
elements to modules for processing based on
the namespace of the element type.

Correct dispatching and processing requires
context - in general it is not reasonable
nor safe to do namespace-based processing
without knowledge of the namespace of
ancestor elements. Because of this, the
namespace of the root element of an XML
document has special status and serves
naturally as a basis for top-level software
dispatching in the case where the dispatch
information is not externally supplied.
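Top-level dispatch on the root element's namespace, as described above, can be sketched as follows. The handler table and its entries are invented for illustration; only the namespace URIs are real:

```python
import xml.etree.ElementTree as ET

# Hypothetical handler table keyed by namespace URI; the handler
# functions are placeholders standing in for processing modules.
HANDLERS = {
    "http://www.w3.org/1999/xhtml": lambda root: "xhtml",
    "http://www.w3.org/2000/svg":   lambda root: "svg",
}

def dispatch(document):
    """Dispatch a document to a module based on the namespace of
    its root element, absent external dispatch information."""
    root = ET.fromstring(document)
    # ElementTree spells a namespaced name as "{namespace-uri}local".
    ns = root.tag[1:].split("}", 1)[0] if root.tag.startswith("{") else ""
    handler = HANDLERS.get(ns)
    if handler is None:
        raise ValueError("no module registered for namespace %r" % ns)
    return handler(root)
```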

It is acknowledged that there are exceptions
to this rule, for example XSLT documents
whose root element's namespace depends on
the desired output from application of the
XSLT.

It should be noted that for certain sorts of
element, including some in the XSLT,
XInclude, and XEncryption namespaces, a
system conforming to the specification will
recognize them at any point in a document
and elaborate them in place, typically
producing more XML which replaces the
element instance in the tree.

NW

Stephen [Farrell] has asked an interesting
question below that I expect will be
important to any activity that uses URIs as
identifiers in the context of a
semantic/security application: when are two
URI variants considered identical?

Draft finding:
URI Comparison
(link not maintained, but see RFC 3986).
This has been integrated into RFC2396bis
(CVS repository); the TAG expects to follow
the progress of RFC2396bis. Commentary and
resolution should happen through the IETF
process.
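Farrell's question, when are two URI variants identical, is answered in RFC 3986 by a ladder of comparison methods, from cheap string equality up to scheme-specific normalization. A few of the inexpensive syntax-based normalizations can be sketched like this (the function name is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": "80", "https": "443"}

def simple_normalize(uri):
    """Apply some of RFC 3986's syntax-based normalizations:
    lowercase scheme and host, drop a default port, and supply
    the empty path as "/". Percent-encoding normalization and
    dot-segment removal are omitted from this sketch."""
    parts = urlsplit(uri)
    scheme = parts.scheme.lower()
    netloc = parts.hostname or ""
    if parts.port is not None and str(parts.port) != DEFAULT_PORTS.get(scheme):
        netloc += ":%d" % parts.port
    path = parts.path or "/"
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))
```

Two URIs that survive normalization as the same string are identical; ones that differ may still identify the same resource, which only scheme knowledge or dereference can establish.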

"The IETF has recently published RFC3205,
"On the use of HTTP as a
Substrate" [1] as Best Current
Practice.

This document makes a number of
recommendations regarding the use of HTTP.
Some are reasonable, such as guidelines
about what kinds of scenarios the HTTP is
most useful in, how to use media types and
methods to extend the HTTP, etc. However, it
also bases a number of recommendations on a
fuzzily-defined concept of 'traditional use'
of the HTTP. These directives may seriously
limit the future potential of the Web,
effectively freezing its capability to
common practice in 2001."

Action history

SW

SW has discussed this with the new
I18N chair. SW invited I18N reps to
participate in a TAG teleconf,
probably in Dec 2003. At the
15 March 2004 teleconf,
SW took an additional action to
request a two-week extension for TAG
comments.

Transition history

Background, proposals, threads, notes

The TAG believes it has addressed a majority
of points about the issue in the 11 Nov 2003
draft, with pointers to relevant sections
3.4 and 1.2.2, as well as the section on
versioning and extensibility. The TAG
declines at this time to handle the
following questions raised by the reviewer:
(1) Extension of XML. Answer: Application
dependent. (2) Handling of deprecated
elements.

Acknowledgment cycle

Type-augmented XML is a good thing
and a recommendation should be
prepared describing it both at the
infoset and syntax level. (I gather
there is already some work along
these lines in XML Schema?). Serious
consideration should be given to
80/20 points rather than simply
re-using the plethora of primitive
types from XML Schema.

Type-augmented XML has nothing to
say about default values created in
any schema.

Any software can create and/or use
type-augmented XML, whether or not
any validation is being performed.

Work on XQuery and other things that
require a Type-Augmented Infoset
must not depend on schema
processing, and should not have
normative linkages to any schema
language specifications.

Background, proposals, threads, notes

For now, the TAG has decided the issue by
withdrawing it. From TB: "I learned
that while there are linkages between xquery
and xml schema, they are non-normative; you
can implement xquery with other schema
languages; so I don't see an architecture
issue at the moment. I submitted a large
comment to the xquery process that there
does remain too much intermingling with xml
schema that could easily go away. If the two
specs aren't made sufficiently independent,
I expect to come back to the TAG."

Acknowledgment cycle

For me this question depends on whether the
document type is a human-readable hypertext
document, in which case generic hypertext
XML tools would benefit from knowing what is
a link, and on whether the significance of
the URI in question is a hypertext link or
something different.


Maybe a compromise is to only allow the link
to specify the content-type when the server
is FTP (or something else with no
content-type control) or the HTTP server
returns text/plain or octet-stream, which
seem to be used for "don't know"
types.
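The compromise described above can be sketched as a small decision rule. The function and parameter names are invented for illustration; only the media type strings are real:

```python
# Server-supplied types commonly used to mean "don't know".
UNINFORMATIVE = {"text/plain", "application/octet-stream"}

def effective_type(scheme, server_type, link_type):
    """Let the link-supplied type win only when the server cannot
    (FTP) or does not (an uninformative type) say anything useful;
    otherwise the server remains authoritative. A sketch of the
    compromise, not a statement of any spec's rule."""
    if scheme == "ftp":                  # no content-type control at all
        return link_type or server_type
    if server_type in UNINFORMATIVE:     # "don't know" types over HTTP
        return link_type or server_type
    return server_type
```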

The architecture of the World Wide Web does
not support the notion of a "home
page" or a "gateway page",
and any effort in law to pretend otherwise
is therefore bad policy. The publication of
a Uniform Resource Identifier is, in the
architecture of the Web, a statement that a
resource is available for retrieval. The
technical protocols which are used for Web
interaction provide a variety of means for
site operators to control access, including
password protection and the requirement that
users take a particular route to a page. It
would be appropriate to bring the law to
bear against those who violate these
protocols. It is not appropriate to use it
in the case where information consumers are
using the Web according to its published
rules of operation.

I would however, support an assertion in the
architecture document that important
information SHOULD be stored and
(optionally) delivered with markup that is
as semantically rich as achievable, and that
separation of semantic and presentational
markup, to the extent possible, is
architecturally sound.

Action history

CL

The XML Core WG would like TAG input on
whether the desirability of adopting IRIs
into the web infrastructure early outweighs
the anticipated disruption of legacy
systems.

The XML Core WG would also like TAG input on
the wisdom of early adoption given the
"Internet Draft" status of the
IRI draft
. So far adoption has relied on "copy
and paste", but there is potential for
these definitions to get out of sync.

The SVG spec states "This form of
addressing specifies the desired view of the
document (e.g., the region of the document
to view, the initial zoom level) completely
within the SVG fragment specification."

From Dan Connolly:

Do you consider the quoted paragraph above
in error?

Or do you disagree with my interpretation of
it, i.e. that
MyDrawing.svg#svgView(viewBox(0,200,1000,1000))
identifies a view of the drawing, and not any
particular XML element (nor other syntactic
structure) in the document.
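The fragment in Connolly's example names a view of the drawing rather than an element. A minimal parser for just the viewBox() form used there might look like this; the regex covers only that one form, not the full svgView grammar from the SVG spec:

```python
import re

def parse_svg_view(fragment):
    """Extract viewBox numbers from an svgView(viewBox(...))
    fragment identifier. Returns None for anything else;
    a sketch covering only this one production."""
    m = re.fullmatch(r"svgView\(viewBox\(([-\d.,\s]+)\)\)", fragment)
    if not m:
        return None
    return [float(n) for n in m.group(1).split(",")]
```

The point of the example stands out in code: the parse yields a region of the rendered drawing, not a pointer to any XML element.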

Per
13 Jan 2003 teleconf
, note that the TAG considers
XInclude issues raised by M.
Murata
to be related to this issue. Which
media type should be used for
interpreting fragment identifiers?
Section 2.4 in the architecture
document says the media type of the
retrieval result, but Sections 4.2
and 4.3 in the XInclude CR says
text/xml or text/plain.

Glenn Adams email
on the existence of a number of
standards in the television domain
that: (1) disallow internal
declaration subsets; (2) require
standalone="no"; (3)
require a document type declaration,
with a specifically enumerated set
of public FPIs to be supported;

Action history

VQ

Given that binary infosets (currently,
binary PSVIs
) are what I work on daily and that I am
currently investigating ways in which they
could fit naturally into the web
(content-coding registration for instance),
I would be very interested in knowing what
-- if anything at this point -- the TAG
thinks of them and of how they could best
fit in.

I would like to raise a new issue to the
TAG. The issue is how to determine ID
attributes in any new work on XML, such as a
new profile or subset as dealt with in issue
xmlProfiles-29
. I understand that this issue will be
normatively referred to in any
communications on issue #29.

Chris Lilley has started an
excellent discussion
on the various options for ID attributes, so
I won't duplicate that work. A number of
responders have said they are quite
supportive of providing a definition of IDs
as part of any new work on XMLProfiles, such
as the Web Services Architecture Working
Group. There is also some pushback, so it
seems worthy to have a continued discussion,
and the TAG should attempt to quickly reach
consensus.

At their 12 May 2004 ftf meeting, the TAG
accepted the proposed finding "How should
the problem of identifying ID semantics in
XML languages be addressed in the absence of
a DTD?". The issue is deferred while the XML
Core WG continues work on this issue.
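One of the options in the discussion, which the XML Core WG eventually pursued as xml:id, is to type an attribute as an ID by its name alone, so that no DTD or schema is needed. A sketch of that approach:

```python
import xml.etree.ElementTree as ET

XML_NS = "http://www.w3.org/XML/1998/namespace"

def collect_ids(document):
    """Collect xml:id values without any DTD or schema: the
    attribute is recognized purely by its namespaced name.
    A sketch of the name-based option, not of any spec text."""
    root = ET.fromstring(document)
    ids = {}
    for el in root.iter():
        value = el.get("{%s}id" % XML_NS)
        if value is not None:
            ids[value] = el
    return ids
```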

The architecture of the web is that the
space of identifiers on an http web site is
owned by the owner of the domain name. The
owner, "publisher", is free to
allocate identifiers and define how they are
served.

Any variation from this breaks the web. The
problem is that there are some conventions
for the identifiers on websites, that

/robots.txt is a file controlling
robot access

/w3c/p3p is where you put a privacy
policy

/favicon.ico is an icon representative of
the web site

and who knows what others. There is of
course no list available of the assumptions
different groups and manufacturers have
used.

Transition history

Background, proposals, threads, notes

Action history

NW

Write to David Orchard saying that
XInclude no longer uses frag ids and
the TAG is unable to construct from
its meeting record what the issue
was. We will discuss this further if
we get help, but otherwise expect to
close without action.

The XML architecture has tended to be built
according to a motto that all kinds of
things are possible, and the application has
to be able to choose the features it needs.
This is fine when there are simply the XML
toolset and a single "application". However,
real life is more complicated, and things
are connected together in all kinds of ways.
I think the XML design needs to be more
constraining: to offer a consistent idea of
what a chunk of XML is across all the
designs, so that the value of that chunk can
be preserved as invariant across a complex
system. Digital Signature and RDF transport
are just intermediate parts of the design
which need to be transparent. This requires
a notion of equality, and a related
canonical serialization.

NW

Write up a named equivalence
function based on today's discussion
(e.g., based on infoset, augmented
with xml:lang/xml:base, not
requiring prefixes, etc.).
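A rough cut at such an equivalence function, comparing infoset items rather than serializations, might start like this. It ignores prefixes (ElementTree already resolves names to {uri}local form) but does not yet handle the xml:lang/xml:base augmentation, which would need a pre-pass propagating both attributes down the tree:

```python
import xml.etree.ElementTree as ET

def infoset_equal(a, b):
    """Compare two element trees by name, attributes, text, and
    children, independent of namespace prefixes. A sketch only:
    inherited xml:lang/xml:base, comments, and inter-element
    tail text are not accounted for."""
    if a.tag != b.tag or a.attrib != b.attrib:
        return False
    if (a.text or "").strip() != (b.text or "").strip():
        return False
    if len(a) != len(b):
        return False
    return all(infoset_equal(x, y) for x, y in zip(a, b))
```

A canonical serialization (in the spirit of Canonical XML) is then just any serializer whose output is identical exactly when this predicate holds.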

It increases the number of
characters that may legally appear
in Names.

Adds several new characters that may
appear in text if they are encoded
as numeric character references (C0
controls except NUL).

Removes several characters so that
they may not appear in text if they
are not encoded as numeric character
references (C1 controls).

Adds NEL (#x85) and the Unicode line
separator (#x2028) as line-end characters.
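The C0/C1 rules in the list above can be illustrated by a serializer-side escape step. This is a sketch of the rule as I read it, with an invented function name: in XML 1.1, C0 controls other than tab, LF, and CR become legal in content, but only as numeric character references, while NUL stays forbidden outright; C1 controls (other than NEL) must likewise be escaped.

```python
def escape_for_xml11(text):
    """Escape restricted C0/C1 controls as numeric character
    references for XML 1.1 content. NUL is rejected because it
    remains illegal even as a character reference."""
    out = []
    for ch in text:
        cp = ord(ch)
        if cp == 0:
            raise ValueError("NUL is not permitted in XML 1.1 at all")
        restricted_c0 = cp < 0x20 and cp not in (0x09, 0x0A, 0x0D)
        restricted_c1 = 0x7F <= cp <= 0x9F and cp != 0x85
        out.append("&#x%X;" % cp if restricted_c0 or restricted_c1 else ch)
    return "".join(out)
```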

XML Schema 1.0 normatively refers to XML
Namespaces 1.0 for the definition of QName
and XML Namespaces 1.0 normatively refers to
XML 1.0 for the definition of Name and XML
1.0 has fewer Name characters than XML 1.1.

That means that by a strict interpretation
of the Recommendations, it is impossible to
write an XML Schema for a document that uses
the "new" Name characters. And by extension,
it is impossible for an XPath expression or
a protocol document to use XML 1.1.

"In a nutshell, it [
WS-Addressing - SOAP Binding
] requires that the URI in the "Address"
component of a WS-Addressing EPR be
serialized into a wsa:To SOAP header,
independent of the underlying protocol. IMO,
a Web-architecture consistent means of doing
this would be to serialize it to the
Request-URI when using SOAP with HTTP, or
the "RCPT TO:" value when using SOAP with
SMTP, etc."
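The mapping argued for in the quote, putting the EPR's Address on the protocol's own destination slot rather than (only) in a wsa:To header, can be sketched for the HTTP case. The function name is invented for illustration:

```python
from urllib.parse import urlsplit

def request_line_for(epr_address):
    """Map a WS-Addressing EPR Address onto HTTP's native
    destination, the Request-URI plus Host header, instead of
    duplicating it in a SOAP header. A sketch of the proposed
    Web-architecture-consistent binding, not of WS-Addressing
    as specified."""
    parts = urlsplit(epr_address)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return "POST %s HTTP/1.1\r\nHost: %s" % (path, parts.netloc)
```

For SMTP the same address would land in the "RCPT TO:" command, so each protocol's existing addressing mechanism carries the destination.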

VQ

The question is about the identity of a
namespace, in particular, the xml:
namespace. One perspective is that the xml:
namespace consists of xml:space, xml:lang,
and xml:base (and no other names) because
there was a point in time in which those
were the only three names from that
namespace that had a defined meaning.
Another perspective is that the xml:
namespace consists of all possible local
names and that only a finite (but flexible)
number of them are defined at any given
point in time.

SW

A generic resource is a conceptual resource
which may stand for something which has
different versions over time, different
translations, and/or different content-type
representations. How should one indicate the
relationship between these?

Is the indefinite persistence of 'tag soup'
HTML consistent with a sound architecture
for the Web? If so, what changes, if any, to
fundamental Web technologies are necessary
to integrate 'tag soup' with SGML-valid HTML
and well-formed XML?